Merge pull request #4507 from EnterpriseDB/releases/2023-07-25

Releases: 2023-07-25

drothery-edb authored Jul 25, 2023
2 parents 087af5b + e826f2f commit af09612
Showing 129 changed files with 566 additions and 478 deletions.
17 changes: 11 additions & 6 deletions advocacy_docs/pg_extensions/pg_failover_slots/index.mdx
@@ -10,13 +10,18 @@ directoryDefaults:
product: PG Failover Slots
---

PG Failover Slots (pg_failover_slots) is an extension released as open source software under the PostgreSQL LICENSE. If you have logical replication publications on Postgres databases that are also part of a streaming replication architecture,
PG Failover Slots (pg_failover_slots) is an extension released as open source software under the PostgreSQL License. If you have logical replication publications on Postgres databases that are also part of a streaming replication architecture,
PG Failover Slots avoids the need for you to reseed your logical replication tables when a new standby gets promoted to primary.

Since the replication slot used by logical replication is only maintained on the primary node, downstream subscribers don't receive any new changes from the newly promoted primary until the slot is created on the newly promoted primary. Picking up logical replication changes from the newly promoted standby is unsafe because the information that includes which data a subscriber has confirmed receiving and which log data still needs to be retained for the subscriber will have been lost, resulting in an unknown gap in data. PG Failover Slots makes logical replication slots usable across a physical failover using the following features:
Since the replication slot used by logical replication is maintained only on the primary node, downstream subscribers don't receive any new changes from the newly promoted primary until the slot is created on the newly promoted primary. Picking up logical replication changes from the newly promoted standby is unsafe because the following information will be lost:
- The data a subscriber confirmed receiving
- The log data that still needs to be retained for the subscriber

- Copies any missing replication slots from the primary to the standby
- Removes any slots from the standby that aren't found on the primary
- Periodically synchronizes the position of slots on the standby based on the primary
- Ensures that selected standbys receive data before any of the logical slot walsenders can send data to consumers
The result is an unknown gap in the data.

PG Failover Slots makes logical replication slots usable across a physical failover by:

- Copying any missing replication slots from the primary to the standby
- Removing any slots from the standby that aren't found on the primary
- Periodically synchronizing the position of slots on the standby based on the primary
- Ensuring that selected standbys receive data before any of the logical slot walsenders can send data to consumers
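
As a minimal sketch of enabling the extension (assuming the package is already installed; the `postgresql` service name is an assumption and varies by distribution), load its library on the primary and every standby and restart Postgres:

```bash
# Add pg_failover_slots to shared_preload_libraries on the primary and each standby.
# In a real setup, append to any existing entries rather than overwriting them.
psql -d postgres -c "ALTER SYSTEM SET shared_preload_libraries = 'pg_failover_slots';"
sudo systemctl restart postgresql
```
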
2 changes: 1 addition & 1 deletion advocacy_docs/pg_extensions/pg_tuner/index.mdx
@@ -14,4 +14,4 @@ EDB Postgres Tuner is a PostgreSQL extension that automates 15+ years of EDB Pos

Postgres uses some conservative settings to cover different host sizes. Some of the settings provided by Postgres are unsuitable because they don't take advantage of the available resources. Configuration parameters set by `initdb` don't account for the amount of memory, the number of CPU cores, and the kind of storage devices available to set appropriate values for parameters. Some parameters depend on the workload. The workload provides metrics to use to fine-tune some parameters dynamically.

This extension provides safe recommendations that maximize the use of available resources. It aso allows you to control if and when to apply the changes. EDB Postgres Tuner enables you to apply tuning recommendations automatically or view tuning recommendations and selectively apply them. It's now possible to successfully manage demanding Postgres databases without tuning expertise.
This extension provides safe recommendations that maximize the use of available resources. It also allows you to control if and when to apply the changes. EDB Postgres Tuner enables you to apply tuning recommendations automatically or view tuning recommendations and selectively apply them. It's now possible to successfully manage demanding Postgres databases without tuning expertise.
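
For illustration, reviewing and applying recommendations might look like the following sketch (the function and parameter names here are assumptions to verify against the EDB Postgres Tuner documentation for your installed version):

```bash
# View current tuning recommendations without applying them.
psql -d postgres -c "SELECT edb_pg_tuner_recommendations();"

# Let the extension apply safe recommendations automatically.
psql -d postgres -c "ALTER SYSTEM SET edb_pg_tuner.autotune = 'true';"
psql -d postgres -c "SELECT pg_reload_conf();"
```
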
@@ -9,11 +9,11 @@ Use this quick start to create a highly available demo cluster using EDB Postgre

---

## Etcd
## 1. Etcd

See [Installing and configuring etcd](installing_etcd) to install and set up etcd.

## EDB Postgres Advanced or Extended Server
## 2. EDB Postgres Advanced or Extended Server

On both `pg-patroni1` and `pg-patroni2` hosts, install your preferred Postgres flavor. See [EDB Postgres Advanced Server](/epas/latest/installing/linux_x86_64/) or [EDB Postgres Extended Server](/pge/latest/installing/linux_x86_64/) for more information about installing these products using the EDB repository.

@@ -27,6 +27,7 @@ export PGUSER=enterprisedb
export PGGROUP=enterprisedb
export PGDATA="/var/lib/edb-as/15/main"
export PGBIN="/usr/lib/edb-as/15/bin"
export PGBINNAME="edb-postgres"
export PGSOCKET="/var/run/edb-as"
```

@@ -38,6 +39,7 @@ export PGUSER=postgres
export PGGROUP=postgres
export PGDATA="/var/lib/edb-pge/15/data"
export PGBIN="/usr/lib/edb-pge/15/bin"
export PGBINNAME="postgres"
export PGSOCKET="/var/run/edb-pge"
```

@@ -57,7 +59,7 @@ sudo systemctl disable edb-pge-15.service
sudo rm -fr /var/lib/edb-pge/15/data
```

## Watchdog
## 3. Watchdog

Patroni is the component interacting with the watchdog device. Set the permissions of the software watchdog on both `pg-patroni1` and `pg-patroni2` hosts:

@@ -70,7 +72,7 @@ sudo modprobe softdog
sudo chown $PGUSER:$PGGROUP /dev/watchdog
```
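
Note that `modprobe` and `chown` alone don't persist across reboots. If your hosts don't already handle that, one possible sketch (standard systemd module-loading and udev locations; adjust to your environment) is:

```bash
# Load softdog at boot and give the Postgres OS user ownership of /dev/watchdog.
echo softdog | sudo tee /etc/modules-load.d/softdog.conf
echo "KERNEL==\"watchdog\", OWNER=\"$PGUSER\", GROUP=\"$PGGROUP\"" | sudo tee /etc/udev/rules.d/61-watchdog.rules
```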

## Patroni
## 4. Patroni

On both `pg-patroni1` and `pg-patroni2` hosts, install Patroni and its dependencies for etcd. See [Installing Patroni](installing_patroni).

@@ -134,6 +136,8 @@ postgresql:
connect_address: "$MY_IP:$PGPORT"
data_dir: $PGDATA
bin_dir: $PGBIN
bin_name:
postgres: $PGBINNAME
pgpass: /tmp/pgpass0
authentication:
replication:
@@ -167,7 +171,7 @@ EOF

`$MY_IP` and `$MY_NAME` are specific to the local host. Otherwise, the `patroni.yml` configuration is the same on all Patroni nodes.
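
For example, one possible way to set those host-specific values before generating `patroni.yml` (the discovery commands are assumptions; use whatever matches your environment):

```bash
# Short hostname and first routable IP address of the local host.
export MY_NAME=$(hostname --short)
export MY_IP=$(hostname --all-ip-addresses | awk '{print $1}')
```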

Patroni expects to find the `postgres` binary in the `bin_dir` location, while the EDB Postgres Advanced Server binary is called `edb-postgres`. Create a symbolic link to let Patroni find that binary:
Patroni expects to find the `postgres` binary in the `bin_dir` location, while the EDB Postgres Advanced Server binary is called `edb-postgres`. The `postgresql.bin_name` setting doesn't exist in Patroni releases earlier than [`3.0.3`](https://github.com/zalando/patroni/blob/master/docs/releases.rst#version-303) and is silently ignored by those older versions. For those versions, create an appropriately named symbolic link that points to the relevant binary:

```bash
sudo ln -s /usr/lib/edb-as/15/bin/edb-postgres /usr/lib/edb-as/15/bin/postgres
@@ -224,7 +228,7 @@ sudo systemctl restart edb-as15-pgagent.service
sudo systemctl reset-failed
```

## HAProxy
## 5. HAProxy

For the purpose of this example, install HAProxy on both `pg-patroni1` and `pg-patroni2` hosts:

@@ -9,7 +9,7 @@ tags:

Patroni packages are provided through the PGDG `apt` and `yum` repositories.

See [Platform Compatibility](https://www.enterprisedb.com/resources/platform-compatibility#epas) for the supported OS list (only Linux x86-64 (amd64) currently).
See [Platform Compatibility](https://www.enterprisedb.com/resources/platform-compatibility#epas) for the supported OS list (only `Linux x86-64 (amd64)` currently).

### Debian/Ubuntu

16 changes: 10 additions & 6 deletions advocacy_docs/supported-open-source/patroni/rhel8_quick_start.mdx
@@ -9,11 +9,11 @@ Use this quick start to create a highly available demo cluster using EDB Postgre

---

## Etcd
## 1. Etcd

See [Installing and configuring etcd](installing_etcd) to install and set up etcd.

## EDB Postgres Advanced or Extended Server
## 2. EDB Postgres Advanced or Extended Server

On both `pg-patroni1` and `pg-patroni2` hosts, install your preferred Postgres flavor. See [EDB Postgres Advanced Server](/epas/latest/installing/linux_x86_64/) or [EDB Postgres Extended Server](/pge/latest/installing/linux_x86_64/) for more information about installing these products using the EDB repository.

@@ -27,6 +27,7 @@ export PGUSER=enterprisedb
export PGGROUP=enterprisedb
export PGDATA="/var/lib/edb/as15/data"
export PGBIN="/usr/edb/as15/bin"
export PGBINNAME="edb-postgres"
export PGSOCKET="/var/run/edb/as15"
```

@@ -38,6 +39,7 @@ export PGUSER=postgres
export PGGROUP=postgres
export PGDATA="/var/lib/edb-pge/15/data"
export PGBIN="/usr/edb/pge15/bin"
export PGBINNAME="postgres"
export PGSOCKET="/var/run/edb-pge"
```

@@ -59,7 +61,7 @@ sudo firewall-cmd --quiet --zone=public --add-port=$PGPORT/tcp --permanent
sudo firewall-cmd --quiet --reload
```

## Watchdog
## 3. Watchdog

Patroni is the component interacting with the watchdog device. Set the permissions of the software watchdog on both `pg-patroni1` and `pg-patroni2` hosts:

@@ -72,7 +74,7 @@ sudo modprobe softdog
sudo chown $PGUSER:$PGGROUP /dev/watchdog
```

## Patroni
## 4. Patroni

On both `pg-patroni1` and `pg-patroni2` hosts, install Patroni and its dependencies for etcd. See [Installing Patroni](installing_patroni).
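
As a rough sketch (the package names are assumptions; the linked page has the authoritative commands), installing the PGDG packages might look like this:

```bash
sudo dnf install -y patroni patroni-etcd
```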

@@ -136,6 +138,8 @@ postgresql:
connect_address: "$MY_IP:$PGPORT"
data_dir: $PGDATA
bin_dir: $PGBIN
bin_name:
postgres: $PGBINNAME
pgpass: /tmp/pgpass0
authentication:
replication:
@@ -169,7 +173,7 @@ EOF

`$MY_IP` and `$MY_NAME` are specific to the local host. Otherwise, the `patroni.yml` configuration is the same on all Patroni nodes.

Patroni expects to find the `postgres` binary in the `bin_dir` location, while the EDB Postgres Advanced Server binary is called `edb-postgres`. Create a symbolic link to let Patroni find that binary:
Patroni expects to find the `postgres` binary in the `bin_dir` location, while the EDB Postgres Advanced Server binary is called `edb-postgres`. The `postgresql.bin_name` setting doesn't exist in Patroni releases earlier than [`3.0.3`](https://github.com/zalando/patroni/blob/master/docs/releases.rst#version-303) and is silently ignored by those older versions. For those versions, create an appropriately named symbolic link that points to the relevant binary:

```bash
sudo ln -s /usr/edb/as15/bin/edb-postgres /usr/edb/as15/bin/postgres
@@ -224,7 +228,7 @@ sudo firewall-cmd --quiet --zone=public --add-port=8008/tcp --permanent
sudo firewall-cmd --quiet --reload
```

## HAProxy
## 5. HAProxy

The PGDG yum [extras](https://yum.postgresql.org/news/new-repo-extra-packages/) repositories contain haproxy packages.
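
A typical installation from those repositories might look like this sketch (assuming the PGDG repository is already configured on both hosts):

```bash
sudo dnf install -y haproxy
```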

2 changes: 1 addition & 1 deletion advocacy_docs/supported-open-source/patroni/tips.mdx
@@ -17,7 +17,7 @@ __OUTPUT__
f
(1 row)
```
```
```bash
psql "host=srv1,srv2 dbname=postgres user=admin target_session_attrs=read-only" -c "SELECT pg_is_in_recovery();"
__OUTPUT__
pg_is_in_recovery
@@ -3,7 +3,7 @@ title: "Customizing Google Cloud compliance policies"
description: "Customize Google Cloud compliance configuration settings to match BigAnimal's resource configurations"
---

Google Cloud uses Assured Workloads and Organization Policies to ensure compliance rules are respected. Assured Workloads monitoring scans your environment in real time and provides alerts whenever organization policy changes violate the defined compliance posture. The monitoring dashboard shows which policy is being violated and provides instructions on how to resolve the finding.
Google Cloud uses Assured Workloads and the Organization Policy Service to ensure compliance rules are respected. Assured Workloads monitoring scans your environment in real time and provides alerts whenever organization policy changes violate the defined compliance posture. The monitoring dashboard shows which policy is being violated and provides instructions for resolving the finding.

For more information, see:
- [Overview of Assured Workloads](https://cloud.google.com/assured-workloads/docs/overview)
@@ -14,11 +14,11 @@ Before connecting your cloud, make sure that you're assigned the following AWS m
## Connecting your cloud

!!! tip
If you're using Cloud Shell, add the `./` prefix to the `biganimal` command (`./biganimal`).
If you're using CloudShell, add the `./` prefix to the `biganimal` command (`./biganimal`).

To connect your cloud:

1. Open the AWS Cloud Shell in your browser.
1. Open AWS CloudShell in your browser.

1. Create a BigAnimal CLI credential:

@@ -12,7 +12,7 @@ Ensure you have at least the following combined roles:
- roles/resourcemanager.projectIamAdmin
- roles/compute.viewer

Or an equivalent single role such as:
Alternatively, you can have an equivalent single role, such as:
- roles/owner

## Connecting your cloud
@@ -22,7 +22,7 @@ Or an equivalent single role such as:

To connect your cloud:

1. Open the Google Cloud Shell in your browser.
1. Open Google Cloud Shell in your browser.

1. Create a BigAnimal CLI credential:

@@ -37,14 +37,14 @@ To connect your cloud:
```
The command checks for cloud account readiness and displays the results.

1. If the following readiness checks aren't met for your cloud service provider, see [Configure your Google Cloud account](/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/#configure-your-google-cloud-account) to manually configure your cloud:
1. If the following cloud readiness checks pass for your cloud service provider, your cloud account is successfully set up:

- Is the Google Cloud CLI configured to access your Google Cloud account?

- Is the limit on the number of vCPUs and network load balancers (NLBs) in your region enough for your clusters?

If the cloud readiness checks pass, your cloud account is successfully set up.

If the readiness checks aren't met, see [Configure your Google Cloud account](/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/#configure-your-google-cloud-account) to manually configure your cloud.
1. Connect your cloud account to BigAnimal:

```shell
@@ -42,12 +42,12 @@ Because the load balancer IP address used in AWS is dynamic, make sure that your

### Google load balancer in Google Cloud

BigAnimal creates a new load balancer using the Premium Network Service Tier for each cluster and tags it using an unique identifier. The corresponding frontend forwarding rule uses the same unique identifier and includes the cluster ID in the following format:
BigAnimal creates a new load balancer using the Premium Network Service Tier for each cluster and tags it using a unique identifier. The corresponding front-end forwarding rule uses the same unique identifier and includes the cluster ID in the following format:

```
{"kubernetes.io/service-name":"default/<cluster_ID>-<service_type>"}
```

An example is `{"kubernetes.io/service-name":"default/p-8jz4kedbiy-rw-external-lb"}`.
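
As an illustration (the filter expression is an assumption; the service-name tag appears in the forwarding rule's description), you could locate the rule for a given cluster with gcloud:

```bash
# List forwarding rules whose description references the example cluster ID.
gcloud compute forwarding-rules list --filter="description~p-8jz4kedbiy"
```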

Because the load balancer IP address used in Google Cloud is dynamic, make sure that your application uses the correct DNS name to access the network load balancer of a particular cluster. To be able to access it, make sure you are using the FQDN that BigAnimal provides in the Cluster Overview or Connect page.
Because the load balancer IP address used in Google Cloud is dynamic, make sure that your application uses the correct DNS name to access the network load balancer of a particular cluster. To be able to access it, make sure you're using the FQDN that BigAnimal provides in the Cluster Overview or Connect page.
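
For example, a client connection string uses that FQDN rather than a raw IP address (the user, database, and SSL options below are assumptions; `<cluster-fqdn>` is a placeholder for the value shown on the Connect page):

```shell
psql "host=<cluster-fqdn> user=edb_admin dbname=edb_admin sslmode=require"
```
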
@@ -17,7 +17,7 @@ Ensure you have at least the following combined roles:
- roles/resourcemanager.projectIamAdmin
- roles/compute.viewer

Or an equivalent single role such as:
Alternatively, you can have an equivalent single role, such as:
- roles/owner

## Required APIs and services
@@ -120,6 +120,7 @@ The script displays the following output:
#######################
# Overall Suggestions #
#######################
```

Make sure the GCP Project ID <project_id> is the one that you want to use for BigAnimal.
Make sure the GCP account <gcp_account> has rights to create custom roles, service accounts, keys, and assign project grants.
@@ -7,7 +7,7 @@ Follow these BigAnimal requirements and recommended resource limits in Google Cl
## vCPU limits
Any time a new VM is deployed in Google Cloud, the vCPUs for the VMs must not exceed the total vCPU limits for the region.

The number of cores required by the database cluster depends on the instance type and cluster type of the clusters. For example, if you create cluster with the Standard E2 instance type, you can calculate the number of E2 cores required for your cluster based on the following:
The number of cores required by the database cluster depends on the instance type and cluster type of the clusters. For example, if you create a cluster with the standard E2 instance type, you can calculate the number of E2 cores required for your cluster based on the following:

- A virtual machine instance of type e2-standard-{N} uses {N} cores. For example, an instance of type e2-standard-32 uses 32 e2-standard cores.

@@ -22,8 +22,8 @@ BigAnimal requires an additional four n2-standard virtual machine cores per regi
## Recommended limits
BigAnimal recommends the following per region when requesting virtual machine resource limit increases:

Total Regional vCPUs: minimum of 50 per designated region
- Total Regional vCPUs: minimum of 50 per designated region

n2-standard vCPUs: minimum of 12 per designated region
- n2-standard vCPUs: minimum of 12 per designated region

Other machine family vCPUs: depending on the instance type, cluster type, and number of clusters.
- Other machine family vCPUs: depends on the instance type, cluster type, and number of clusters.
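
As a hypothetical sizing check (the node count and instance size here are illustrative assumptions, not recommendations):

```bash
# A three-node cluster on e2-standard-8 machines:
echo "e2-standard vCPUs: $(( 3 * 8 ))"   # 24, within the recommended 50 regional vCPUs
echo "n2-standard vCPUs: 4"              # BigAnimal management workloads per region
```
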
@@ -73,7 +73,7 @@ The following is the possible node configuration for one region:
2 data nodes + 1 local witness node
![2 data + 1 local witness](images/extreme-high-availability-single-region.png)

If you're looking for a true active-active solution that protects against regional failures, select a three region configuration. The following is the possible configurations for three regions:
If you're looking for a true active-active solution that protects against regional failures, select a three-region configuration. The following are the possible configurations for three regions:

3 data nodes + 3 data nodes, 1 witness group in a different region
![3 data nodes + 3 data nodes, 1 witness group in a different region ](images/extreme-high-availability-3-regions.png)

2 comments on commit af09612

@github-actions
πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://64c0342fda67d206a2fa5d4b--edb-docs.netlify.app
