Merge pull request #5635 from EnterpriseDB/docs/josh/link-fixes
various link fixes
djw-m authored May 17, 2024
2 parents 440c083 + a389f1d commit 2db040e
Showing 30 changed files with 45 additions and 45 deletions.
@@ -59,6 +59,6 @@ Cross-cloud service provider witness nodes are available with AWS, Azure, and Go

## For more information

- For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../../getting_started/creating_a_cluster/creating_an_eha_cluster/).
+ For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../../getting_started/creating_a_cluster/creating_a_dha_cluster/).

For instructions on creating, retrieving information from, and managing a distributed high-availability cluster using the BigAnimal CLI, see [Using the BigAnimal CLI](/biganimal/latest/reference/cli/managing_clusters/#managing-distributed-high-availability-clusters).
@@ -55,5 +55,5 @@ The restore operation is available for any cluster that has at least one availab
1. Select the **Node Settings** tab.
1. In the **Source** section, select **Fully Restore** or **Point in Time Restore**. A point-in-time restore restores the data group as it was at the specified date and time.
1. In the **Nodes** section, select **Two Data Nodes** or **Three Data Nodes**. For more information on node architecture, see [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/).
- 1. Follow Steps 3-5 in [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster/).
+ 1. Follow Steps 3-5 in [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_a_dha_cluster/).
1. Select **Restore**.
@@ -11,7 +11,7 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w

The following scenarios require remote monitoring using PEM:

- - [Postgres cluster running on AWS RDS](/pem/latest/registering_database_server/#registering-postgres-clusters-on-aws)
+ - Postgres cluster running on AWS RDS
- [Postgres cluster running on BigAnimal](../../../biganimal/latest/using_cluster/05_monitoring_and_logging/)

PEM remote monitoring supports:
@@ -11,7 +11,7 @@ To remotely monitor a Postgres cluster with PEM, you must register the cluster w

The following scenarios require remote monitoring using PEM:

- - [Postgres cluster running on AWS RDS](../registering_database_server/#registering-postgres-clusters-on-aws)
+ - Postgres cluster running on AWS RDS
- [Postgres cluster running on BigAnimal](/biganimal/latest/using_cluster/05_monitoring_and_logging/)

PEM remote monitoring supports:
2 changes: 1 addition & 1 deletion product_docs/docs/pem/9/registering_agent.mdx
@@ -108,7 +108,7 @@ When invoking the pemworker utility, append command line options to the command
The following are some advanced options for PEM agent registration.

### Setting the agent ID
- Each registered PEM agent must have a unique agent ID. The value `max(id)+1` is assigned to each agent ID unless a value is provided using the `-o` options as shown [below](#examples).
+ Each registered PEM agent must have a unique agent ID. The value `max(id)+1` is assigned to each agent ID unless a value is provided using the `-o` options as shown [below](#overriding-default-configurations---examples).

### Overriding default configurations - examples
This example shows how to register the PEM agent overriding the default configurations.
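As a sketch of the kind of invocation this covers — the server address, port, user, and agent ID below are hypothetical placeholders, not values from this commit; consult the pemworker reference for your PEM version:

```
# Register a PEM agent, overriding the default max(id)+1 agent ID with -o.
# Connection values are illustrative placeholders.
pemworker --register-agent \
  --pem-server 192.168.0.10 \
  --pem-port 5432 \
  --pem-user postgres \
  -o agent_id=8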
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/3.6/index.mdx
@@ -22,7 +22,7 @@ Two different Postgres distributions can be used:
- [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) - PostgreSQL compatible and optimized for replication

What Postgres distribution and version is right for you depends on the features you need.
- See the feature matrix in [Choosing a Postgres distribution](/pgd/latest/choosing_server/) for detailed comparison.
+ See the feature matrix in [Choosing a Postgres distribution](/pgd/latest/planning/choosing_server/) for detailed comparison.


## BDR
@@ -8,7 +8,7 @@ deepToC: true

To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages.

- Perform the following operations on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types), such as witness or subscriber-only nodes.
+ Perform the following operations on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other node types, such as witness or subscriber-only nodes.

* Use your EDB account.
* Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
@@ -148,7 +148,7 @@ To communicate between multiple nodes, Postgres Distributed nodes run more worke
The default limit (8) is too low even for a small cluster.

The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
- To calculate the needed value, see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+ To calculate the needed value, see [Postgres configuration/settings](../../bdr/configuration/#postgresql-settings-for-bdr).

This example, with a 3-node cluster, uses the value of 16.
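In `postgresql.conf`, that setting is simply:

```
max_worker_processes = 16
```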

@@ -7,7 +7,7 @@ deepToC: true
## Using PGD CLI

The PGD CLI client uses a configuration file to work out which hosts to connect to.
- There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.
+ There are [options](../../) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.

The connection to the database is authenticated in the same way as other command line utilities (like the psql command) are authenticated.

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/cli/using_cli.mdx
@@ -20,7 +20,7 @@ We recommend the first option, as the other options don't scale well with multip

## Running the PGD CLI

- Once you have [installed pgd-cli](installing_cli), run the `pgd` command to access the PGD command line interface. The `pgd` command needs details about the host, port, and database to connect to, along with your username and password.
+ Once you have [installed pgd-cli](installing), run the `pgd` command to access the PGD command line interface. The `pgd` command needs details about the host, port, and database to connect to, along with your username and password.

## Passing a database connection string
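The body of this section sits outside the changed hunk; for context, an illustrative invocation passing a connection string with the `pgd` command's `--dsn` flag (host, database, and user values here are placeholders, not part of this commit):

```
pgd show-nodes --dsn "host=host-one port=5432 dbname=bdrdb user=enterprisedb"
```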

@@ -11,7 +11,7 @@ redirects:

To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages.

- Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types), such as witness or subscriber-only nodes.
+ Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../../../node_management/node_types), such as witness or subscriber-only nodes.

* Use your EDB account.
* Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
@@ -28,7 +28,7 @@ You must perform these steps on each host before proceeding to the next step.
* Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.<br/><br/>
!!! Note The `max_worker_processes` value
The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
- To calculate the needed value, see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+ To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings).
The value of 16 was calculated for the size of cluster being deployed in this example. It must be increased for larger clusters.
!!!
* Set a password on the EnterprisedDB/Postgres user.
@@ -11,7 +11,7 @@ redirects:


The PGD CLI command uses a configuration file to work out the hosts to connect to.
- There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server. But, by default, PGD CLI looks for a configuration file in preset locations.
+ There are [options](../../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server. But, by default, PGD CLI looks for a configuration file in preset locations.

The connection to the database is authenticated in the same way as other command line utilities, like the psql command, are authenticated.

@@ -48,7 +48,7 @@ We recommend the first option, as the other options don't scale well with multip

For more details about these commands, see the worked example that follows.

- Also consult the [PGD CLI documentation](../../cli/) for details of other configuration options and a full command reference.
+ Also consult the [PGD CLI documentation](../../../cli/) for details of other configuration options and a full command reference.

## Worked example

@@ -124,7 +124,7 @@ Once PGD CLI is configured, you can use it to get PGD-level views of the cluster

### Check the health of the cluster

- The [`check-health`](../../cli/command_ref/pgd_check-health) command provides a quick way to view the health of the cluster:
+ The [`check-health`](../../../cli/command_ref/pgd_check-health) command provides a quick way to view the health of the cluster:

```
pgd check-health
```

@@ -140,7 +140,7 @@ Version Ok All nodes are running same BDR versions

### Show the nodes in the cluster

- As previously seen, the [`show-nodes`](../../cli/command_ref/pgd_show-nodes) command lists the nodes in the cluster:
+ As previously seen, the [`show-nodes`](../../../cli/command_ref/pgd_show-nodes) command lists the nodes in the cluster:

```
pgd show-nodes
```

@@ -167,7 +167,7 @@ node-two 5.3.0 16.1.0

### Show the proxies in the cluster

- You can view the configured proxies, with their groups and ports, using [`show-proxies`](../../cli/command_ref/pgd_show-proxies):
+ You can view the configured proxies, with their groups and ports, using [`show-proxies`](../../../cli/command_ref/pgd_show-proxies):

```
pgd show-proxies
```

@@ -181,7 +181,7 @@ pgd-proxy-two dc1 [0.0.0.0] 6432

### Show the groups in the cluster

- Finally, the [`show-groups`](../../cli/command_ref/pgd_show-groups) command for PGD CLI shows which groups are configured, and more:
+ Finally, the [`show-groups`](../../../cli/command_ref/pgd_show-groups) command for PGD CLI shows which groups are configured, and more:

```
pgd show-node_groups
```

@@ -204,7 +204,7 @@ The location is descriptive metadata, and so far you haven't set it. You can use

### Set a group option

- You can set group options using PGD CLI, too, using the [`set-group-options`](../../cli/command_ref/pgd_set-group-options) command.
+ You can set group options using PGD CLI, too, using the [`set-group-options`](../../../cli/command_ref/pgd_set-group-options) command.
This requires a `--group-name` flag to set the group for this change to affect and an `--option` flag with the setting to change.
If you wanted to set the `dc1` group's location to `London`, you would run:

@@ -228,7 +228,7 @@ dc1 4269540889 data pgd London true true node-one
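The command itself falls outside the changed hunk, but would look something like this — flag names come from the surrounding text, and the group name and location from the example output:

```
pgd set-group-options --group-name dc1 --option location=London
```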

### Switching write leader

- If you need to change write leader in a group, to enable maintenance on a host, PGD CLI offers the [`switchover`](../../cli/command_ref/pgd_switchover) command.
+ If you need to change write leader in a group, to enable maintenance on a host, PGD CLI offers the [`switchover`](../../../cli/command_ref/pgd_switchover) command.
It takes a `--group-name` flag with the group the node exists in and a `--node-name` flag with the name of the node to switch to.
You can then run:

@@ -249,5 +249,5 @@

```
pgd 1850374637 global true false
dc1 4269540889 data pgd London true true node-two
```
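The switchover invocation that produced this output falls outside the changed hunk, but would look something like this — the group and node names are taken from the example output:

```
pgd switchover --group-name dc1 --node-name node-two
```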

- More details on the available commands in PGD CLI are available in the [PGD CLI command reference](../../cli/command_ref/).
+ More details on the available commands in PGD CLI are available in the [PGD CLI command reference](../../../cli/command_ref/).

@@ -17,7 +17,7 @@ redirects:

EDB offers automated PGD deployment using Trusted Postgres Architect (TPA) because it's generally more reliable than manual processes.
See [Deploying with TPA](../../deploy-tpa/deploying.mdx) for full details about how to install TPA and use its automated best-practice-driven PGD deployment options.
- Or refer to any of the [Quick start walkthroughs](../../quickstart/), which use TPA to get you up and running quickly.
+ Or refer to any of the [Quick start walkthroughs](../../../quickstart/), which use TPA to get you up and running quickly.

To complement automated deployment, and to enable alternative installation and deployment processes, this section looks at the basic operations needed to manually configure a three-node PGD cluster (with a local subgroup), PGD Proxy, and PGD CLI.

@@ -107,7 +107,7 @@ Optionally, use `--edb-repositories repository …` to specify EDB repositories


### Software versions
- By default, TPA uses the latest major version of Postgres. Specify `--postgres-version` to install an earlier supported major version, or specify both version and distribution using one of the flags described under [Configure](#configure).
+ By default, TPA uses the latest major version of Postgres. Specify `--postgres-version` to install an earlier supported major version, or specify both version and distribution using one of the flags described under [Configure](#).

By default, TPA installs the latest version of every package, which is usually the desired behavior. However, in some testing scenarios, you might need to select specific package versions. For example:

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx
@@ -26,4 +26,4 @@ This section of the manual covers how to use TPA to deploy and administer EDB Po

The installing section provides an example cluster which will be used in future examples.

- You can also [perform a rolling major version upgrade](upgrading_major_rolling.mdx) with PGD administered by TPA.
+ You can also [perform a rolling major version upgrade](../../upgrades/upgrading_major_rolling) with PGD administered by TPA.
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/overview/index.mdx
@@ -85,7 +85,7 @@ In the future, one node will be elected as the main replicator to other groups,

PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/), and [EDB Postgres Advanced Server](/epas/latest) and is deployed as a standard Postgres extension named BDR. See [Compatibility](../#compatibility) for details about supported version combinations.

- Some key PGD features depend on certain core capabilities being available in the target Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution. It doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../choosing_server/).
+ Some key PGD features depend on certain core capabilities being available in the target Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution. It doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../planning/choosing_server/).

PGD offers close-to-native Postgres compatibility. However, some access patterns don't necessarily work as well in multi-node setup as they do on a single instance. There are also some limitations in what you can safely replicate in a multi-node setting. [Application usage](../appusage) goes into detail about how PGD behaves from an application development perspective.

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/planning/deployments.mdx
@@ -7,7 +7,7 @@ redirects:

You can deploy and install EDB Postgres Distributed products using the following methods:

- -- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances or Linux host hardware. See [Deploying with TPA](/pgd/latest/install-admin/admin-tpa/installing.mdx) for more information.
+ -- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information.

- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and and multi-region Always On clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information.

4 changes: 2 additions & 2 deletions product_docs/docs/pgd/5/quickstart/next_steps.mdx
@@ -9,11 +9,11 @@ description: >

### Architecture

- In this quick start, we created a single region cluster of high availability Postgres databases. This is the, Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../architectures/).
+ In this quick start, we created a single region cluster of high availability Postgres databases. This is the, Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/).

### Postgres versions

- In this quick start, we deployed EDB Postgres Advanced Server (EPAS) to the database nodes. PGD is able to deploy a three different kinds of Postgres distributions, EPAS, EDB Postgres Extended Server and open-source PostgreSQL. The selection of database affects PGD, offering [different capabilities](../choosing_server) dependant on server.
+ In this quick start, we deployed EDB Postgres Advanced Server (EPAS) to the database nodes. PGD is able to deploy a three different kinds of Postgres distributions, EPAS, EDB Postgres Extended Server and open-source PostgreSQL. The selection of database affects PGD, offering [different capabilities](../planning/choosing_server/) dependant on server.

* Open-source PostgreSQL does not support CAMO
* EDB Postgres Extended Server supports CAMO, but does not offer Oracle compatibility