Merge pull request #4792 from EnterpriseDB/release/2023-09-08
Release: 2023-09-08
drothery-edb authored Sep 8, 2023
2 parents 5cf874e + 91f6a3a commit f7a08d1
Showing 10 changed files with 111 additions and 71 deletions.
Original file line number Diff line number Diff line change
@@ -30,8 +30,8 @@ This table shows the cost breakdown.

| Database type | Hourly price | Monthly price\* |
| ---------------------------- | -------------- | --------------- |
| EDB Postgres Extended Server | $0.2511 / vCPU | $188.33 / vCPU |
| EDB Postgres Advanced Server | $0.3424 / vCPU | $256.80 / vCPU |
| EDB Postgres Extended Server | $0.2511 / vCPU | $183.30 / vCPU |
| EDB Postgres Advanced Server | $0.3424 / vCPU | $249.95 / vCPU |

\* The monthly cost is approximate and assumes 730 hours in a month.
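The corrected monthly figures follow directly from the hourly rate and the 730-hour month stated above. As a quick sanity check (a sketch, not EDB tooling):

```shell
# Verify monthly price = hourly price x 730 hours, rounded to cents
awk 'BEGIN { printf "Extended: $%.2f / vCPU\n", 0.2511 * 730 }'   # -> Extended: $183.30 / vCPU
awk 'BEGIN { printf "Advanced: $%.2f / vCPU\n", 0.3424 * 730 }'   # -> Advanced: $249.95 / vCPU
```

This confirms the new table values ($183.30 and $249.95) and shows the previous values were rounded up incorrectly.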

32 changes: 22 additions & 10 deletions product_docs/docs/efm/4/05_using_efm.mdx
@@ -265,27 +265,39 @@ After creating the `acctg.properties` and `sales.properties` files, create a ser

### RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x

If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, copy the `edb-efm-4.<x>` unit file to a new file with a name that is unique for each cluster. For example, if you have two clusters named acctg and sales, the unit file names might be:
If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, copy the service file `/usr/lib/systemd/system/edb-efm-4.<x>.service` to `/etc/systemd/system` with a new name that is unique for each cluster.

```text
/usr/lib/systemd/system/efm-acctg.service
For example, if you have two clusters named `acctg` and `sales` managed by Failover Manager 4.7, the unit file names might be `efm-acctg.service` and `efm-sales.service`, and they can be created with:

/usr/lib/systemd/system/efm-sales.service
```shell
cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-acctg.service
cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-sales.service
```

Then, edit the `CLUSTER` variable in each unit file, changing the specified cluster name from `efm` to the new cluster name. For example, for a cluster named `acctg`, the value specifies:
Then use `systemctl edit` to edit the `CLUSTER` variable in each unit file, changing the specified cluster name from `efm` to the new cluster name.
Also update the value of the `PIDFile` parameter to match the new cluster name.

```text
In our example, edit the `acctg` cluster by running `systemctl edit efm-acctg.service` and write:

```ini
[Service]
Environment=CLUSTER=acctg
PIDFile=/run/efm-4.7/acctg.pid
```

Also update the value of the `PIDfile` parameter to specify the new cluster name. For example:
And edit the `sales` cluster by running `systemctl edit efm-sales.service` and write:

```ini
PIDFile=/var/run/efm-4.7/acctg.pid
[Service]
Environment=CLUSTER=sales
PIDFile=/run/efm-4.7/sales.pid
```

After copying the service scripts, enable the services:
!!!Note
You can also edit the files in `/etc/systemd/system` directly, but then you have to run `systemctl daemon-reload`, which is unnecessary when using `systemctl edit` to change the override files.
!!!

After saving the changes, enable the services:

```text
# systemctl enable efm-acctg.service
@@ -296,7 +308,7 @@
Then, use the new service scripts to start the agents. For example, to start the `acctg` agent:

```text
# systemctl start efm-acctg`
# systemctl start efm-acctg
```

For information about customizing a unit file, see [Understanding and administering systemd](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html).
8 changes: 7 additions & 1 deletion product_docs/docs/efm/4/13_troubleshooting.mdx
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
---
title: "Troubleshooting"
redirects:
redirects:
- ../efm_user/13_troubleshooting
legacyRedirectsGenerated:
# This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
@@ -47,3 +47,9 @@ openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
```
!!! Note
There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following:

`java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)`

If so, the workaround is to manually install the missing package using the command `sudo dnf install tzdata-java`.
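A small, hypothetical shell check can confirm whether the affected file is present before starting Failover Manager. The JVM path below is illustrative and varies by system; use the path from the error message:

```shell
# Hypothetical helper: succeeds when tzdb.dat exists under the given JVM directory
check_tzdb() {
    [ -f "$1/lib/tzdb.dat" ]
}

# Illustrative default; substitute the JVM directory reported in the error
jvm_dir="${JAVA_HOME:-/usr/lib/jvm/java-11-openjdk}"
if check_tzdb "$jvm_dir"; then
    echo "tzdb.dat present; the timezone data error should not occur"
else
    echo "tzdb.dat missing; run: sudo dnf install tzdata-java"
fi
```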
7 changes: 7 additions & 0 deletions product_docs/docs/efm/4/installing/prerequisites.mdx
@@ -17,6 +17,13 @@ Before configuring a Failover Manager cluster, you must satisfy the prerequisite

Before using Failover Manager, you must first install Java (version 1.8 or later). Failover Manager is tested with OpenJDK, and we strongly recommend installing that version of Java. [Installation instructions for Java](https://openjdk.java.net/install/) are platform specific.

!!! Note
There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following:

`java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)`

If so, the workaround is to manually install the missing package using the command `sudo dnf install tzdata-java`.

## Provide an SMTP server

You can receive notifications from Failover Manager as specified by a user-defined notification script, by email, or both.
17 changes: 11 additions & 6 deletions product_docs/docs/pgd/5/architectures.mdx
@@ -94,12 +94,14 @@ they aren't part of the standard Always On architectures.
* Can be 3 data nodes (recommended)
* Can be 2 data nodes and 1 witness that doesn't hold data (not depicted)
* A PGD Proxy for each data node with affinity to the applications
* Can be colocated with data node
* Can be colocated with data node (recommended)
* Can be located on a separate node
* Configuration and infrastructure symmetry of data nodes is expected to ensure proper resources are available to handle application workload when rerouted
* Barman for backup and recovery (not depicted)
* Offsite is optional but recommended
* Can be shared by multiple clusters
* Can be shared by multiple PGD clusters
* Postgres Enterprise Manager (PEM) for monitoring (not depicted)
* Can be shared by multiple clusters
* Can be shared by multiple PGD clusters

### Always On multi-location

@@ -112,14 +114,17 @@
* Can be 3 data nodes (recommended)
* Can be 2 data nodes and 1 witness which does not hold data (not depicted)
* A PGD-Proxy for each data node with affinity to the applications
* can be co-located with data node
* Can be colocated with data node (recommended)
* Can be located on a separate node
* Configuration and infrastructure symmetry of data nodes and locations is expected to ensure proper resources are available to handle application workload when rerouted
* Barman for backup and recovery (not depicted).
* Can be shared by multiple clusters
* Can be shared by multiple PGD clusters
* Postgres Enterprise Manager (PEM) for monitoring (not depicted).
* Can be shared by multiple clusters
* Can be shared by multiple PGD clusters
* An optional witness node must be placed in a third region to increase tolerance for location failure.
* Otherwise, when a location fails, actions requiring global consensus are blocked, such as adding new nodes and distributed DDL.


## Choosing your architecture

All architectures provide the following:
45 changes: 29 additions & 16 deletions product_docs/docs/pgd/5/cli/discover_connections.mdx
@@ -1,31 +1,35 @@
---
title: "Discovering Connection Strings"
navTitle: "Discovering Connection Strings"
title: "Discovering connection strings"
navTitle: "Discovering connection strings"
indexdepth: 2
deepToC: true
---

PGD CLI can be installed on any system which is able to connect to the PGD cluster. You will require a user with PGD superuser privileges - the [bdr_superuser role](../security) - or equivalent (e.g. edb_admin on BigAnimal distributed high-availability) to use PGD CLI.
You can install PGD CLI on any system that can connect to the PGD cluster. To use PGD CLI, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is `edb_admin` on an EDB BigAnimal Distributed High Availability cluster.

## PGD CLI and database connection strings

You may not need a database connection string. For example, when Trusted Postgres Architect installs the PGD CLI on a system, it also configures the connection to the PGD cluster. This means that PGD CLI will automatically connect when run.
You might not need a database connection string. For example, when Trusted Postgres Architect installs the PGD CLI on a system, it also configures the connection to the PGD cluster. This means that PGD CLI can connect to the cluster when run.

## Getting your database connection string

Every deployment method has a different way of deriving a connection string for it. This is because of the range of different configurations that PGD supports. Generally, you can obtain the required information from the configuration of your deployment; this section provides a guide of how to assemble that information into connection strings.
Because of the range of different configurations that PGD supports, every deployment method has a different way of deriving a connection string for it. Generally, you can obtain the required information from the configuration of your deployment. You can then assemble that information into connection strings.

### For a TPA-deployed PGD cluster

Because TPA is so flexible, you will have to derive your connection string from your cluster configuration file (config.yml). You will need the name or IP address of a host with the role pgd-proxy listed for it. This host will have a proxy you can connect to. Usually the proxy will be listening on port 6432 (check the setting for `default_pgd_proxy_options` and `listen_port` in the config to confirm). The default database name is `bdrdb` (check the setting `bdr_database` in the config to confirm) and the default PGD superuser will be `enterprisedb` for EPAS and `postgres` for Postgres and Postgres Extended.
Because TPA is so flexible, you have to derive your connection string from your cluster configuration file (`config.yml`).

- You need the name or IP address of a host with the role pgd-proxy listed for it. This host has a proxy you can connect to. Usually the proxy listens on port 6432. (Check the setting for `default_pgd_proxy_options` and `listen_port` in the config to confirm.)
- The default database name is `bdrdb`. (Check the setting `bdr_database` in the config to confirm.)
- The default PGD superuser is `enterprisedb` for EDB Postgres Advanced Server and `postgres` for Postgres and Postgres Extended.

You can then assemble a connection string based on that information:

```
"host=<hostnameOrIPAddress> port=<portnumber> dbname=<databasename> user=<username> sslmode=require"
```

To illustrate this, here's some excerpts of a config.yml file for a cluster:
To illustrate this, here are some excerpts of a `config.yml` file for a cluster:

```yaml
...
@@ -51,33 +55,42 @@ instances:
...
```

The connection string for this cluster would be:
The connection string for this cluster is:

```
"host=192.168.100.2 port=6432 dbname=bdrdb user=enterprisedb sslmode=require"
```
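For instance, plugging the values taken from the example `config.yml` into shell variables (the variable names here are illustrative) assembles the same string:

```shell
# Values taken from the example config.yml above
host=192.168.100.2   # address of an instance with the pgd-proxy role
port=6432            # listen_port under default_pgd_proxy_options
dbname=bdrdb         # bdr_database
user=enterprisedb    # PGD superuser for EDB Postgres Advanced Server

conninfo="host=$host port=$port dbname=$dbname user=$user sslmode=require"
echo "$conninfo"
# -> host=192.168.100.2 port=6432 dbname=bdrdb user=enterprisedb sslmode=require
```

You can pass the assembled string to any libpq-based client, for example `psql "$conninfo"`, to confirm connectivity before using it with PGD CLI.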

!!! Note Host name versus IP address
In our example, we use the IP address because the configuration is from a Docker TPA install with no name resolution available. Generally, you should be able to use the host name as configured.
The example uses the IP address because the configuration is from a Docker TPA install with no name resolution available. Generally, you can use the host name as configured.
!!!

### For a BigAnimal distributed high-availability cluster
### For an EDB BigAnimal Distributed High Availability cluster

1. Log into the [BigAnimal Clusters](https://portal.biganimal.com/clusters) view.
1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view.
1. In the filter, set **Cluster Type** to "Distributed High Availability" to show only clusters that work with PGD CLI.
1. Select your cluster.
1. In the view of your cluster, select the Connect tab.
1. Copy the Read/Write URI from the connection info. This is your connection string.
1. In the view of your cluster, select the **Connect** tab.
1. Copy the read/write URI from the connection info. This is your connection string.

### For a cluster deployed with EDB PGD for Kubernetes

As with TPA, EDB PGD for Kubernetes is very flexible, and there are multiple ways to obtain a connection string. It depends, in large part, on how the [services](/postgres_distributed_for_kubernetes/latest/connectivity/#services) were configured for the deployment:

- If you use the Node Service Template, direct connectivity to each node and proxy service is available.
- If you use the Group Service Template, there's a gateway service to each group.
- If you use the Proxy Service Template, a single proxy provides an entry point to the cluster for all applications.

### For an EDB PGD for Kubernetes deployed cluster
Consult your configuration file to determine this information.

As with TPA, EDB PGD for Kubernetes is very flexible and there is no one way to obtain a connection string. It depends, in large part, on how the [Services](https://www.enterprisedb.com/docs/postgres_distributed_for_kubernetes/latest/connectivity/#services) have been configured for the deployment. If the Node Service Template is used, there should be direct connectivity to each node and proxy service available. If the Group Service Template, there will be a gateway service to each group. Finally, if the Proxy Service Template has been used, there should be a single proxy providing an entry point to the cluster for all applications. Consult your configuration file to determine this information. You should be able to establish a host name or IP address, port, database name (default: `bdrdb`) and username (`enterprisedb` for EPAS and `postgres` for Postgres and Postgres Extended.).
Establish a host name or IP address, port, database name, and username. The default database name is `bdrdb`, and the default username is `enterprisedb` for EDB Postgres Advanced Server and `postgres` for Postgres and Postgres Extended.

You can then assemble a connection string based on that information:

```
"host=<hostnameOrIPAddress> port=<portnumber> dbname=<databasename> user=<username>"
```

You may need to add `sslmode=<sslmode>` if the deployment's configuration requires it.
If the deployment's configuration requires it, add `sslmode=<sslmode>`.
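Assembled in shell (illustrative values; take the real service host and port from your own Kubernetes configuration), with `sslmode` appended only when the deployment requires it:

```shell
# Illustrative values; substitute the host/port of your configured service
host=my-proxy-service
port=5432
dbname=bdrdb          # default database name
user=enterprisedb     # postgres for Postgres and Postgres Extended
sslmode=require       # leave empty if TLS isn't enforced

conninfo="host=$host port=$port dbname=$dbname user=$user"
if [ -n "$sslmode" ]; then
    conninfo="$conninfo sslmode=$sslmode"
fi
echo "$conninfo"
```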


6 changes: 3 additions & 3 deletions product_docs/docs/pgd/5/cli/index.mdx
@@ -13,15 +13,15 @@
description: "The PGD Command Line Interface (CLI) is a tool to manage your EDB Postgres Distributed cluster"
---

The EDB Postgres Distributed Command Line Interface (PGD CLI) is a tool for managing your EDB Postgres Distributed cluster. It allows you to run commands against EDB Postgres Distributed clusters. It may be installed automatically on systems within a TPA-deployed PGD cluster or it can be installed manually on systems that can connect to any PGD cluster, including BigAnimal Distributed High Availability PGD clusters or PGD clusters deployed using the EDB PGD for Kubernetes operator.
The EDB Postgres Distributed Command Line Interface (PGD CLI) is a tool for managing your EDB Postgres Distributed cluster. It allows you to run commands against EDB Postgres Distributed clusters. It's installed automatically on systems in a TPA-deployed PGD cluster, and it can be installed manually on systems that can connect to any PGD cluster, such as EDB BigAnimal Distributed High Availability clusters or PGD clusters deployed using the EDB PGD for Kubernetes operator.

See [Installing PGD CLI](installing_cli) for information about how to install PGD CLI, both automatically with Trusted Postgres Architect and manually.
See [Installing PGD CLI](installing_cli) for information about how to manually install PGD CLI on systems.

See [Using PGD CLI](using_cli) for an introduction to using the PGD CLI and connecting to your PGD cluster.

See [Configuring PGD CLI](configuring_cli) for details on creating persistent configurations for quicker connections.

See the [Command reference](command_ref) for the available commands to inspect, manage, and get information about cluster resources.

There is also a guide to [discovering connection strings](discover_connections). It shows how to obtain the correct connection strings for your PGD-powered deployment.
See [Discovering connection strings](discover_connections) to learn how to obtain the correct connection strings for your PGD-powered deployment.

9 changes: 4 additions & 5 deletions product_docs/docs/pgd/5/cli/installing_cli.mdx
@@ -3,14 +3,15 @@ title: "Installing PGD CLI"
navTitle: "Installing PGD CLI"
---

PGD CLI can be installed on any system which is able to connect to the PGD cluster. You will require a user with PGD superuser privileges - the [bdr_superuser role](../security) - or equivalent (e.g. edb_admin on BigAnimal distributed high-availability) to use PGD CLI.
You can install PGD CLI on any system that can connect to the PGD cluster. To use PGD CLI, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is `edb_admin` on an EDB BigAnimal Distributed High Availability cluster.

## Installing automatically with Trusted Postgres Architect (TPA)

By default, Trusted Postgres Architect installs and configures PGD CLI on each PGD node. If you want to install PGD CLI on any non-PGD instance in the cluster, attach the pgdcli role to that instance in Trusted Postgres Architect's configuration file before deploying. See [Trusted Postgres Architect](/tpa/latest/) for more information.

## Installing manually on Linux

PGD CLI is installable from the EDB Repositories. These repositories require a token to enable downloads from them. You will need to login to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) to obtain your token. Then execute the following command, substituting
PGD CLI is installable from the EDB repositories. These repositories require a token to enable downloads from them. Log in to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) to obtain your token. Then execute the command shown for your operating system, substituting
your token for `<your-token>`.

### Add repository and install PGD CLI on Debian or Ubuntu
@@ -20,13 +21,11 @@ curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed
sudo apt-get install edb-pgd5-cli
```

### Add repository and install PGD CLI on RHEL, Rocky, AlmaLinux or Oracle Linux
### Add repository and install PGD CLI on RHEL, Rocky, AlmaLinux, or Oracle Linux

```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.rpm.sh' | sudo -E bash
sudo yum install edb-pgd5-cli
```

[Next: Using PGD CLI](using_cli)



2 comments on commit f7a08d1

@github-actions

@github-actions

πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://64fb33cc5e5dc302b07bc396--edb-docs.netlify.app
