diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 1ce708499fb..1cf498f70cf 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -59,6 +59,6 @@ Cross-cloud service provider witness nodes are available with AWS, Azure, and Go
## For more information
-For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../../getting_started/creating_a_cluster/creating_an_eha_cluster/).
+For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../../getting_started/creating_a_cluster/creating_a_dha_cluster/).
For instructions on creating, retrieving information from, and managing a distributed high-availability cluster using the BigAnimal CLI, see [Using the BigAnimal CLI](/biganimal/latest/reference/cli/managing_clusters/#managing-distributed-high-availability-clusters).
diff --git a/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx b/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx
index 973251f665a..53f8dc36631 100644
--- a/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx
@@ -55,5 +55,5 @@ The restore operation is available for any cluster that has at least one availab
1. Select the **Node Settings** tab.
1. In the **Source** section, select **Fully Restore** or **Point in Time Restore**. A point-in-time restore restores the data group as it was at the specified date and time.
1. In the **Nodes** section, select **Two Data Nodes** or **Three Data Nodes**. For more information on node architecture, see [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/).
-1. Follow Steps 3-5 in [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster/).
+1. Follow Steps 3-5 in [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_a_dha_cluster/).
1. Select **Restore**.
diff --git a/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx b/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx
index 46173df6c63..b54131483a9 100644
--- a/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx
+++ b/product_docs/docs/pem/8/monitoring_performance/pem_remote_monitoring.mdx
@@ -11,7 +11,7 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w
The following scenarios require remote monitoring using PEM:
-- [Postgres cluster running on AWS RDS](/pem/latest/registering_database_server/#registering-postgres-clusters-on-aws)
+- Postgres cluster running on AWS RDS
- [Postgres cluster running on BigAnimal](../../../biganimal/latest/using_cluster/05_monitoring_and_logging/)
PEM remote monitoring supports:
diff --git a/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx b/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx
index dbc3781b186..b03679e5703 100644
--- a/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx
+++ b/product_docs/docs/pem/9/monitoring_performance/pem_remote_monitoring.mdx
@@ -11,7 +11,7 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w
The following scenarios require remote monitoring using PEM:
-- [Postgres cluster running on AWS RDS](../registering_database_server/#registering-postgres-clusters-on-aws)
+- Postgres cluster running on AWS RDS
- [Postgres cluster running on BigAnimal](/biganimal/latest/using_cluster/05_monitoring_and_logging/)
PEM remote monitoring supports:
diff --git a/product_docs/docs/pem/9/registering_agent.mdx b/product_docs/docs/pem/9/registering_agent.mdx
index b479155dbd3..4164c83ece5 100644
--- a/product_docs/docs/pem/9/registering_agent.mdx
+++ b/product_docs/docs/pem/9/registering_agent.mdx
@@ -108,7 +108,7 @@ When invoking the pemworker utility, append command line options to the command
The following are some advanced options for PEM agent registration.
### Setting the agent ID
-Each registered PEM agent must have a unique agent ID. The value `max(id)+1` is assigned to each agent ID unless a value is provided using the `-o` options as shown [below](#examples).
+Each registered PEM agent must have a unique agent ID. The value `max(id)+1` is assigned to each agent ID unless a value is provided using the `-o` options as shown [below](#overriding-default-configurations---examples).
### Overriding default configurations - examples
This example shows how to register the PEM agent overriding the default configurations.
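The `max(id)+1` rule in the hunk above is simple but worth pinning down. This tiny sketch is ours, not PEM's actual code; it only illustrates the assignment behavior described in the docs, including the override case:

```python
def next_agent_id(existing_ids, override=None):
    """Return the ID a newly registered agent would receive.

    Illustrative only: mirrors the documented rule that an agent gets
    max(id)+1 unless an explicit ID is supplied via the -o options.
    """
    if override is not None:
        if override in existing_ids:
            raise ValueError(f"agent ID {override} is already in use")
        return override
    # With no agents registered yet, the first agent gets ID 1.
    return max(existing_ids, default=0) + 1

print(next_agent_id([1, 2, 3]))              # 4
print(next_agent_id([1, 2, 3], override=10)) # 10
```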
diff --git a/product_docs/docs/pgd/3.6/index.mdx b/product_docs/docs/pgd/3.6/index.mdx
index 114c77921a4..305154e426c 100644
--- a/product_docs/docs/pgd/3.6/index.mdx
+++ b/product_docs/docs/pgd/3.6/index.mdx
@@ -22,7 +22,7 @@ Two different Postgres distributions can be used:
- [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) - PostgreSQL compatible and optimized for replication
What Postgres distribution and version is right for you depends on the features you need.
-See the feature matrix in [Choosing a Postgres distribution](/pgd/latest/choosing_server/) for detailed comparison.
+See the feature matrix in [Choosing a Postgres distribution](/pgd/latest/planning/choosing_server/) for detailed comparison.
## BDR
diff --git a/product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx b/product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx
index 3e805a34713..b69bc8ceced 100644
--- a/product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx
+++ b/product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx
@@ -8,7 +8,7 @@ deepToC: true
To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages.
-Perform the following operations on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types), such as witness or subscriber-only nodes.
+Perform the following operations on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other node types, such as witness or subscriber-only nodes.
* Use your EDB account.
* Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
diff --git a/product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx b/product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx
index d752fe6f18c..25aa8b3bf4b 100644
--- a/product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx
+++ b/product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx
@@ -148,7 +148,7 @@ To communicate between multiple nodes, Postgres Distributed nodes run more worke
The default limit (8) is too low even for a small cluster.
The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
-To calculate the needed value, see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+To calculate the needed value, see [Postgres configuration/settings](../../bdr/configuration/#postgresql-settings-for-bdr).
This example, with a 3-node cluster, uses the value of 16.
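The linked settings page has the authoritative calculation. Purely as an illustration of why the shipped default of 8 falls short, here is a hypothetical back-of-the-envelope estimate; the per-peer, per-database worker assumption is ours, not PGD's documented formula:

```python
DEFAULT_MAX_WORKER_PROCESSES = 8  # the shipped Postgres default

def estimate_workers(peer_nodes: int, databases: int = 1,
                     workers_per_peer_db: int = 2, overhead: int = 8) -> int:
    """Hypothetical estimate: a couple of replication workers per peer
    per replicated database, plus headroom for Postgres's own background
    workers. Not PGD's documented formula -- see the configuration docs."""
    return peer_nodes * databases * workers_per_peer_db + overhead

# In a 3-node cluster, each node has 2 peers:
needed = estimate_workers(peer_nodes=2)
print(needed)  # 12 -- already above the default of 8; the example
               # deployment rounds up to 16 for headroom.
```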
diff --git a/product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx b/product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx
index c30e3e68911..bf3763e7d35 100644
--- a/product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx
+++ b/product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx
@@ -7,7 +7,7 @@ deepToC: true
## Using PGD CLI
The PGD CLI client uses a configuration file to work out which hosts to connect to.
-There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.
+There are [options](../../) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.
The connection to the database is authenticated in the same way as other command line utilities (like the psql command) are authenticated.
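For orientation, a PGD CLI configuration file is a small YAML document naming the cluster and its connection endpoints. The sketch below uses placeholder hostnames and the common defaults (`bdrdb` database, `enterprisedb` user); check the CLI documentation for your PGD version for the exact file name and search locations:

```yaml
cluster:
  name: democluster
  endpoints:
    - "host=host-one port=5432 dbname=bdrdb user=enterprisedb"
    - "host=host-two port=5432 dbname=bdrdb user=enterprisedb"
    - "host=host-three port=5432 dbname=bdrdb user=enterprisedb"
```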
diff --git a/product_docs/docs/pgd/5/cli/using_cli.mdx b/product_docs/docs/pgd/5/cli/using_cli.mdx
index 511d98767fc..fcd44c3b9f8 100644
--- a/product_docs/docs/pgd/5/cli/using_cli.mdx
+++ b/product_docs/docs/pgd/5/cli/using_cli.mdx
@@ -20,7 +20,7 @@ We recommend the first option, as the other options don't scale well with multip
## Running the PGD CLI
-Once you have [installed pgd-cli](installing_cli), run the `pgd` command to access the PGD command line interface. The `pgd` command needs details about the host, port, and database to connect to, along with your username and password.
+Once you have [installed pgd-cli](installing), run the `pgd` command to access the PGD command line interface. The `pgd` command needs details about the host, port, and database to connect to, along with your username and password.
## Passing a database connection string
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx
index f00c6b74335..058a659a2d8 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/03-configuring-repositories.mdx
@@ -11,7 +11,7 @@ redirects:
To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages.
-Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types), such as witness or subscriber-only nodes.
+Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../../../node_management/node_types), such as witness or subscriber-only nodes.
* Use your EDB account.
* Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/04-installing-software.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/04-installing-software.mdx
index cedc3767e1a..1384438cf71 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/04-installing-software.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/04-installing-software.mdx
@@ -28,7 +28,7 @@ You must perform these steps on each host before proceeding to the next step.
* Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.
!!! Note The `max_worker_processes` value
The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
- To calculate the needed value, see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+ To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings).
The value of 16 was calculated for the size of cluster being deployed in this example. It must be increased for larger clusters.
!!!
-* Set a password on the EnterprisedDB/Postgres user.
+* Set a password on the EnterpriseDB/Postgres user.
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx
index 5ee49630ce7..ca05ecab43a 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/08-using-pgd-cli.mdx
@@ -11,7 +11,7 @@ redirects:
The PGD CLI command uses a configuration file to work out the hosts to connect to.
-There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server. But, by default, PGD CLI looks for a configuration file in preset locations.
+There are [options](../../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server. But, by default, PGD CLI looks for a configuration file in preset locations.
The connection to the database is authenticated in the same way as other command line utilities, like the psql command, are authenticated.
@@ -48,7 +48,7 @@ We recommend the first option, as the other options don't scale well with multip
For more details about these commands, see the worked example that follows.
-Also consult the [PGD CLI documentation](../../cli/) for details of other configuration options and a full command reference.
+Also consult the [PGD CLI documentation](../../../cli/) for details of other configuration options and a full command reference.
## Worked example
@@ -124,7 +124,7 @@ Once PGD CLI is configured, you can use it to get PGD-level views of the cluster
### Check the health of the cluster
-The [`check-health`](../../cli/command_ref/pgd_check-health) command provides a quick way to view the health of the cluster:
+The [`check-health`](../../../cli/command_ref/pgd_check-health) command provides a quick way to view the health of the cluster:
```
pgd check-health
@@ -140,7 +140,7 @@ Version Ok All nodes are running same BDR versions
### Show the nodes in the cluster
-As previously seen, the [`show-nodes`](../../cli/command_ref/pgd_show-nodes) command lists the nodes in the cluster:
+As previously seen, the [`show-nodes`](../../../cli/command_ref/pgd_show-nodes) command lists the nodes in the cluster:
```
pgd show-nodes
@@ -167,7 +167,7 @@ node-two 5.3.0 16.1.0
### Show the proxies in the cluster
-You can view the configured proxies, with their groups and ports, using [`show-proxies`](../../cli/command_ref/pgd_show-proxies):
+You can view the configured proxies, with their groups and ports, using [`show-proxies`](../../../cli/command_ref/pgd_show-proxies):
```
pgd show-proxies
@@ -181,7 +181,7 @@ pgd-proxy-two dc1 [0.0.0.0] 6432
### Show the groups in the cluster
-Finally, the [`show-groups`](../../cli/command_ref/pgd_show-groups) command for PGD CLI shows which groups are configured, and more:
+Finally, the [`show-groups`](../../../cli/command_ref/pgd_show-groups) command for PGD CLI shows which groups are configured, and more:
```
-pgd show-node_groups
+pgd show-groups
@@ -204,7 +204,7 @@ The location is descriptive metadata, and so far you haven't set it. You can use
### Set a group option
-You can set group options using PGD CLI, too, using the [`set-group-options`](../../cli/command_ref/pgd_set-group-options) command.
+You can set group options using PGD CLI, too, using the [`set-group-options`](../../../cli/command_ref/pgd_set-group-options) command.
This requires a `--group-name` flag to set the group for this change to affect and an `--option` flag with the setting to change.
If you wanted to set the `dc1` group's location to `London`, you would run:
@@ -228,7 +228,7 @@ dc1 4269540889 data pgd London true true node-one
### Switching write leader
-If you need to change write leader in a group, to enable maintenance on a host, PGD CLI offers the [`switchover`](../../cli/command_ref/pgd_switchover) command.
+If you need to change write leader in a group, to enable maintenance on a host, PGD CLI offers the [`switchover`](../../../cli/command_ref/pgd_switchover) command.
It takes a `--group-name` flag with the group the node exists in and a `--node-name` flag with the name of the node to switch to.
You can then run:
@@ -249,5 +249,5 @@ pgd 1850374637 global true false
dc1 4269540889 data pgd London true true node-two
```
-More details on the available commands in PGD CLI are available in the [PGD CLI command reference](../../cli/command_ref/).
+More details on the available commands in PGD CLI are available in the [PGD CLI command reference](../../../cli/command_ref/).
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/index.mdx
index 0e1098c6595..8fff69ab309 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/index.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-manual/deploying/index.mdx
@@ -17,7 +17,7 @@ redirects:
EDB offers automated PGD deployment using Trusted Postgres Architect (TPA) because it's generally more reliable than manual processes.
See [Deploying with TPA](../../deploy-tpa/deploying.mdx) for full details about how to install TPA and use its automated best-practice-driven PGD deployment options.
-Or refer to any of the [Quick start walkthroughs](../../quickstart/), which use TPA to get you up and running quickly.
+Or refer to any of the [Quick start walkthroughs](../../../quickstart/), which use TPA to get you up and running quickly.
To complement automated deployment, and to enable alternative installation and deployment processes, this section looks at the basic operations needed to manually configure a three-node PGD cluster (with a local subgroup), PGD Proxy, and PGD CLI.
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx
index 61270d0baa8..f59cd854eff 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx
@@ -107,7 +107,7 @@ Optionally, use `--edb-repositories repository …` to specify EDB repositories
### Software versions
-By default, TPA uses the latest major version of Postgres. Specify `--postgres-version` to install an earlier supported major version, or specify both version and distribution using one of the flags described under [Configure](#configure).
+By default, TPA uses the latest major version of Postgres. Specify `--postgres-version` to install an earlier supported major version, or specify both version and distribution using one of the flags described under [Configure](#).
By default, TPA installs the latest version of every package, which is usually the desired behavior. However, in some testing scenarios, you might need to select specific package versions. For example:
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx
index 7d9d82a8fc8..4b6f2ff132d 100644
--- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx
+++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx
@@ -26,4 +26,4 @@ This section of the manual covers how to use TPA to deploy and administer EDB Po
-The installing section provides an example cluster which will be used in future examples.
+The installing section provides an example cluster that's used in later examples.
-You can also [perform a rolling major version upgrade](upgrading_major_rolling.mdx) with PGD administered by TPA.
+You can also [perform a rolling major version upgrade](../../upgrades/upgrading_major_rolling) with PGD administered by TPA.
diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx
index 945ae435fe5..2d0687c56ea 100644
--- a/product_docs/docs/pgd/5/overview/index.mdx
+++ b/product_docs/docs/pgd/5/overview/index.mdx
@@ -85,7 +85,7 @@ In the future, one node will be elected as the main replicator to other groups,
PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/), and [EDB Postgres Advanced Server](/epas/latest) and is deployed as a standard Postgres extension named BDR. See [Compatibility](../#compatibility) for details about supported version combinations.
-Some key PGD features depend on certain core capabilities being available in the target Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution. It doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../choosing_server/).
+Some key PGD features depend on certain core capabilities being available in the target Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution. It doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../planning/choosing_server/).
PGD offers close-to-native Postgres compatibility. However, some access patterns don't necessarily work as well in multi-node setup as they do on a single instance. There are also some limitations in what you can safely replicate in a multi-node setting. [Application usage](../appusage) goes into detail about how PGD behaves from an application development perspective.
diff --git a/product_docs/docs/pgd/5/planning/deployments.mdx b/product_docs/docs/pgd/5/planning/deployments.mdx
index ade05ec9343..236f3267526 100644
--- a/product_docs/docs/pgd/5/planning/deployments.mdx
+++ b/product_docs/docs/pgd/5/planning/deployments.mdx
@@ -7,7 +7,7 @@ redirects:
You can deploy and install EDB Postgres Distributed products using the following methods:
--- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances or Linux host hardware. See [Deploying with TPA](/pgd/latest/install-admin/admin-tpa/installing.mdx) for more information.
+- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances, or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information.
-- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and and multi-region Always On clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information.
+- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account, where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always On clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information.
diff --git a/product_docs/docs/pgd/5/quickstart/next_steps.mdx b/product_docs/docs/pgd/5/quickstart/next_steps.mdx
index 75d4a426f23..69a98ba17bc 100644
--- a/product_docs/docs/pgd/5/quickstart/next_steps.mdx
+++ b/product_docs/docs/pgd/5/quickstart/next_steps.mdx
@@ -9,11 +9,11 @@ description: >
### Architecture
-In this quick start, we created a single region cluster of high availability Postgres databases. This is the, Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../architectures/).
+In this quick start, we created a single-region cluster of high-availability Postgres databases. This is the Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/).
### Postgres versions
-In this quick start, we deployed EDB Postgres Advanced Server (EPAS) to the database nodes. PGD is able to deploy a three different kinds of Postgres distributions, EPAS, EDB Postgres Extended Server and open-source PostgreSQL. The selection of database affects PGD, offering [different capabilities](../choosing_server) dependant on server.
+In this quick start, we deployed EDB Postgres Advanced Server (EPAS) to the database nodes. PGD can deploy three different kinds of Postgres distribution: EPAS, EDB Postgres Extended Server, and open-source PostgreSQL. The choice of database affects PGD, offering [different capabilities](../planning/choosing_server/) depending on the server.
* Open-source PostgreSQL does not support CAMO
* EDB Postgres Extended Server supports CAMO, but does not offer Oracle compatibility
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
index 0b390f14746..524911f1141 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
@@ -147,7 +147,7 @@ tpaexec configure democluster \
--hostnames-unsorted
```
-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](/pgd/latest/architectures/). As part of the default architecture,
+You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture,
this configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup.
Specify that you're using AWS (`--platform aws`) and eu-west-1 as the region (`--region eu-west-1`).
@@ -183,7 +183,7 @@ less democluster/config.yml
```shell
tpaexec configure --architecture PGD-Always-ON --help
```
- - More details on PGD-Always-ON configuration options in [Deploying with TPA](../admin-tpa/installing.mdx)
+ - More details on PGD-Always-ON configuration options in [Deploying with TPA](../deploy-config/deploy-tpa/deploying/)
- [PGD-Always-ON](/tpa/latest/architecture-PGD-Always-ON/) in the Trusted Postgres Architect documentation
- [`tpaexec configure`](/tpa/latest/tpaexec-configure/) in the Trusted Postgres Architect documentation
- [AWS platform](/tpa/latest/platform-aws/) in the Trusted Postgres Architect documentation
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
index 24623799579..6aab9a58948 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
@@ -154,8 +154,8 @@ tpaexec configure democluster \
```
You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which
-sets up the configuration for [PGD 5's Always On
-architectures](../architectures/). As part of the default architecture,
+sets up the configuration for [PGD 5's Always On
+architectures](../planning/architectures/). As part of the default architecture,
it configures your cluster with three data nodes, cohosting three [PGD
Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup)
node for backup.
@@ -189,7 +189,7 @@ less democluster/config.yml
```shell
tpaexec configure --architecture PGD-Always-ON --help
```
- - More details on PGD-Always-ON configuration options in [Deploying with TPA](../admin-tpa/installing.mdx)
+ - More details on PGD-Always-ON configuration options in [Deploying with TPA](../deploy-config/deploy-tpa/)
- [PGD-Always-ON](/tpa/latest/architecture-PGD-Always-ON/) in the Trusted Postgres Architect documentation
- [`tpaexec configure`](/tpa/latest/tpaexec-configure/) in the Trusted Postgres Architect documentation
- [Docker platform](/tpa/latest/platform-docker/) in the Trusted Postgres Architect documentation
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
index dde5ebfa7d6..dd8b3780f5e 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
@@ -132,7 +132,7 @@ tpaexec configure democluster \
--hostnames-unsorted
```
-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](https://www.enterprisedb.com/docs/pgd/latest/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](https://www.enterprisedb.com/docs/pgd/latest/routing/proxy/) servers and a [Barman](https://www.enterprisedb.com/docs/pgd/latest/backup/#physical-backup) node for backup.
+You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](../backup/#physical-backup) node for backup.
For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA will determine the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems.
diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
index be6583a7bb5..e45207586d6 100644
--- a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
+++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
@@ -11,7 +11,7 @@ EDB Postgres Distributed version 5.2.0 is a minor version of EDB Postgres Distri
-* Parallel Apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope and improving replication performance.
+* Parallel Apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope, improving replication performance.
* Parallel Apply for native Postgres asynchronous and synchronous replication has been improved for workloads where the same key is being modified concurrently by multiple transactions to maintain commit sequence and avoid deadlocks.
-* PGD Proxy has added HTTP(S) APIs to allow the health of the proxy to be monitored directly for readiness and liveness. See [Proxy health check](../routing/proxy/#proxy-health-check).
+* PGD Proxy has added HTTP(S) APIs to allow the health of the proxy to be monitored directly for readiness and liveness. See [Proxy health check](../routing/monitoring/#proxy-health-check).
!!! Important Recommended upgrade
We recommend that users of PGD 5.1 upgrade to PGD 5.2.
diff --git a/product_docs/docs/pgd/5/repsets.mdx b/product_docs/docs/pgd/5/repsets.mdx
index 3699e1fbca9..6be0f762fd5 100644
--- a/product_docs/docs/pgd/5/repsets.mdx
+++ b/product_docs/docs/pgd/5/repsets.mdx
@@ -263,7 +263,7 @@ This configuration looks like this:
![Multi-Region 3 Nodes Configuration](./images/always-on-2x3-aa-updated.png)
-This is the standard Always-On multiregion configuration as discussed in the [Choosing your architecture](architectures) section.
+This is the standard Always-On multiregion configuration as discussed in the [Choosing your architecture](planning/architectures) section.
### Application Requirements
diff --git a/product_docs/docs/pgd/5/routing/installing_proxy.mdx b/product_docs/docs/pgd/5/routing/installing_proxy.mdx
index bb031e8c6a1..1d9a4da20d8 100644
--- a/product_docs/docs/pgd/5/routing/installing_proxy.mdx
+++ b/product_docs/docs/pgd/5/routing/installing_proxy.mdx
@@ -9,7 +9,7 @@ You can use two methods to install and configure PGD Proxy to manage an EDB Post
### Installing through TPA
-If the PGD cluster is being deployed through TPA, then TPA installs and configures PGD Proxy automatically as per the recommended architecture. If you want to install PGD Proxy on any other node in a PGD cluster, then you need to attach the pgd-proxy role to that instance in the TPA configuration file. Also set the `bdr_child_group` parameter before deploying, as this example shows. See [Trusted Postgres Architect](../admin-tpa/) for more information.
+If the PGD cluster is being deployed through TPA, then TPA installs and configures PGD Proxy automatically as per the recommended architecture. If you want to install PGD Proxy on any other node in a PGD cluster, then you need to attach the pgd-proxy role to that instance in the TPA configuration file. Also set the `bdr_child_group` parameter before deploying, as this example shows. See [Trusted Postgres Architect](../deploy-config/deploy-tpa/) for more information.
```yaml
- Name: proxy-a1
@@ -51,11 +51,11 @@ You can set the log level for the PGD Proxy service using the top-level config p
`cluster.endpoints` and `cluster.proxy.name` are mandatory fields in the config file. PGD Proxy always tries to connect to the first endpoint in the list. If it fails, it tries the next endpoint, and so on.
-PGD Proxy uses endpoints given in the local config file only at proxy startup. After that, PGD Proxy retrieves the list of actual endpoints (route_dsn) from the PGD Proxy catalog. Therefore, the node option `route_dsn` must be set for each PGD Proxy node. See [route_dsn](../routing#configuration) for more information.
+PGD Proxy uses endpoints given in the local config file only at proxy startup. After that, PGD Proxy retrieves the list of actual endpoints (route_dsn) from the PGD Proxy catalog. Therefore, the node option `route_dsn` must be set for each PGD Proxy node. See [route_dsn](configuration) for more information.
##### Configuring health check
-PGD Proxy provides [HTTP(S) health check APIs](proxy#proxy-health-check). If the health checks are required, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, it's disabled.
+PGD Proxy provides [HTTP(S) health check APIs](../monitoring/#proxy-health-check). If you require health checks, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, they're disabled.
```yaml
cluster:
@@ -82,7 +82,7 @@ You can enable the API by adding the config `cluster.proxy.http.enable: true`. W
To enable HTTPS, set the config parameter `cluster.proxy.http.secure: true`. If it's set to `true`, the `cert_file` and `key_file` must also be set.
-The `cluster.proxy.endpoint` is an endpoint used by the proxy to connect to the current write leader as part of its checks. When `cluster.proxy.http.enable` is `true`, `cluster.proxy.endpoint` must also be set. It can be the same as BDR node [routing_dsn](../routing#configuration), where host is `listen_address` and port is `listen_port` [proxy options](../routing#configuration). If required, you can add connection string parameters in this endpoint, like `sslmode`, `sslrootcert`, `user`, and so on.
+The `cluster.proxy.endpoint` is an endpoint used by the proxy to connect to the current write leader as part of its checks. When `cluster.proxy.http.enable` is `true`, `cluster.proxy.endpoint` must also be set. It can be the same as the BDR node [routing_dsn](configuration), where the host is `listen_address` and the port is `listen_port` from the [proxy options](configuration). If required, you can add connection string parameters to this endpoint, such as `sslmode`, `sslrootcert`, and `user`.
#### PGD Proxy user
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
index ba1b6e8961f..9b407070b99 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
@@ -12,7 +12,7 @@ You can deploy EDB Postgres Distributed for Kubernetes using the provided
This section covers using `helm` to deploy a set of default images with the latest available version.
If you want to specify a different operand or proxy image, see
-[Identify your installation images and repositories](/postgres_distributed_for_kubernetes/latest/identify_images/),
+[Identify your installation images and repositories](./identify_images/),
before continuing with the installation.
## Prerequisites
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/private_registries.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/private_registries.mdx
index 80504a63af5..3b4c0959fda 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/private_registries.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/private_registries.mdx
@@ -103,4 +103,4 @@ The table shows the image name prefix for each Postgres distribution.
!!! Note Image naming
For more information on operand image naming and proxy image naming,
- see [Identify your image name](/postgres_distributed_for_kubernetes/latest/identify_images/identify_image_name/).
+ see [Identify your image name](identify_images/identify_image_name/).
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
index b2b010faac1..1cecf1cf86d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -976,7 +976,7 @@ to request an online/hot backup or an offline/cold one: additionally, you can
also tune online backups by explicitly setting the `--immediate-checkpoint` and
`--wait-for-archive` options.
-The ["Backup" section](./backup.md#backup) contains more information about
+The ["Backup" section](./backup.md) contains more information about
the configuration settings.
### Launching psql
diff --git a/product_docs/docs/tpa/23/reference/postgres_extension_configuration.mdx b/product_docs/docs/tpa/23/reference/postgres_extension_configuration.mdx
index d0c7fe52bc2..71121d987d6 100644
--- a/product_docs/docs/tpa/23/reference/postgres_extension_configuration.mdx
+++ b/product_docs/docs/tpa/23/reference/postgres_extension_configuration.mdx
@@ -24,7 +24,7 @@ the package containing the extension.
- [Adding the *vector* extension through configuration](reconciling-local-changes/)
- [Specifying extensions for configured databases](postgres_databases/)
-- [Including shared preload entries for extensions](postgresql.conf/#shared-preload-libraries)
+- [Including shared preload entries for extensions](postgresql.conf/#shared_preload_libraries)
- [Installing Postgres-related packages](postgres_installation_method_pkg/)
## TPA recognized extensions
diff --git a/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx b/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx
index d95fabc9195..33b577a7a56 100644
--- a/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx
+++ b/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx
@@ -22,7 +22,7 @@ are supported.
container of the target operating system and uses that system's package
manager to resolve dependencies and download all necessary packages. The
required Docker setup for download-packages is the same as that for
- [using Docker as a deployment platform](#platform-docker).
+ [using Docker as a deployment platform](../platform-docker/).
## Usage