Merge pull request #4537 from EnterpriseDB/release/2023-07-28a
Release: 2023-07-28a
drothery-edb authored Jul 28, 2023
2 parents 62f9296 + 8477cda commit 74076f5
Showing 17 changed files with 324 additions and 116 deletions.
Original file line number Diff line number Diff line change
@@ -61,7 +61,7 @@ EDB provides a shell script, called [biganimal-csp-preflight](https://github.com
| `-h` or `--help`| Displays the command help. |
| `-i` or `--instance-type` | Google Cloud instance type for the BigAnimal cluster. The help command provides a list of possible VM instance types. Choose the instance type that best suits your application and workload. Choose an instance type in the memory optimized M1, M2, or M3 series for large data sets. Choose from the compute-optimized C2 series for compute-bound applications. Choose from the general purpose E2, N2, and N2D series if you don't require memory or compute optimization.|
| `-x` or `--cluster-architecture` | Defines the Cluster architecture and can be `single`, `ha`, or `eha`. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.|
| `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](../creating_a_cluster/01_cluster_networking) for more information. |
| `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](/biganimal/latest/getting_started/creating_a_cluster/01_cluster_networking/) for more information. |
| `-r` or `--activate-region` | Specifies region activation if no clusters currently exist in the region. |
| `--onboard` | Checks if the user and subscription are correctly configured.
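
Combining the flags above, a hypothetical invocation might look like the following. The values are illustrative only, and the script may also take positional arguments not shown in this excerpt:

```shell
# Illustrative sketch: flag values are examples, not recommendations.
./biganimal-csp-preflight \
  --instance-type n2-standard-4 \
  --cluster-architecture ha \
  --networking private \
  --onboard
```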

@@ -5,7 +5,7 @@ navTitle: Supported extensions and tools

BigAnimal supports a number of Postgres extensions and tools, which you can install on or alongside your cluster.

- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you are using your own cloud account.
- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you're using your own cloud account.

- See [Postgres extensions](/biganimal/release/using_cluster/extensions) for the list of extensions supported when using BigAnimal's account and for more information on installing and working with extensions.

@@ -19,6 +19,15 @@ EDB develops and maintains several extensions and tools. These include:
- [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster) — Provides faster access to clustered data by keeping track of the last inserted row for any value in a side table.

- [Refdata](/pg_extensions/advanced_storage_pack/#refdata) — Can provide performance gains of 5-10% and increased scalability.


- [EDB Postgres Tuner](/pg_extensions/pg_tuner/) — Provides safe recommendations that maximize the use of available resources.

- [EDB Query Advisor](/pg_extensions/query_advisor/) — Provides index recommendations by keeping statistics on predicates found in WHERE statements, JOIN clauses, and workload queries.

- [EDB Wait States](/pg_extensions/wait_states/) — Probes each of the running sessions at regular intervals.

- [PG Failover Slots](/pg_extensions/pg_failover_slots/) — Is an extension released as open source software under the PostgreSQL License. If you have logical replication publications on Postgres databases that are also part of a streaming replication architecture, PG Failover Slots avoids the need for you to reseed your logical replication tables when a new standby gets promoted to primary.

- [Foreign Data Wrappers](foreign_data_wrappers) — Allow you to connect your Postgres database server to external data sources.
- [Connection poolers](poolers) — Allow you to manage your connections to your Postgres database.
@@ -7,10 +7,10 @@ redirects:
BigAnimal supports many Postgres extensions. See [Postgres extensions available by deployment](/pg_extensions/) for the complete list.

## Extensions available when using your own cloud account
Many Postgres extensions require superuser privileges to be installed. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you are using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)).
Installing many Postgres extensions requires superuser privileges. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you're using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)).

## Extensions available when using BigAnimal's cloud account
If you are using BigAnimal's cloud account, you can install and use the following extensions.
If you're using BigAnimal's cloud account, you can install and use the following extensions.

PostgreSQL contrib extensions/modules:
- auth_delay
@@ -78,7 +78,7 @@ Use the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createe

### Example: Installing multiple extensions

One way of installing multiple extensions simultaneously is to:
This example shows one way of installing multiple extensions simultaneously.

1. Create a text file containing the `CREATE EXTENSION` command for each of the extensions you want to install. In this example, the file is named `create_extensions.sql`.
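
   For illustration, a minimal `create_extensions.sql` might look like the following. The extension names here are placeholders rather than taken from this documentation, so substitute the extensions you actually want to install:

   ```sql
   -- Example contents of create_extensions.sql (extension names are illustrative)
   CREATE EXTENSION IF NOT EXISTS pg_trgm;
   CREATE EXTENSION IF NOT EXISTS hstore;
   CREATE EXTENSION IF NOT EXISTS pgcrypto;
   ```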

6 changes: 3 additions & 3 deletions product_docs/docs/pgd/4/limitations.mdx
@@ -42,12 +42,12 @@ While it is still possible to host up to ten databases in a single instance, thi

## Other limitations

This is a (non-comprehensive) list of limitations that are expected and are by design. They are not expected to be resolved in the future.
This is a (non-comprehensive) list of limitations that are expected and are by design. They aren't expected to be resolved in the future.

- Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and BDR in general isn't recommended, even if otherwise possible.

- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.

- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration.
- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration.

- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath.
- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath.
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/limitations.mdx
@@ -131,4 +131,4 @@ Consider these limitations when planning your deployment:
different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible,
so don't use them together.

- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath.
- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath.
34 changes: 18 additions & 16 deletions product_docs/docs/pgd/5/parallelapply.mdx
@@ -3,29 +3,35 @@ title: Parallel Apply
navTitle: Parallel Apply
---

### What is Parallel Apply?
## What is Parallel Apply?

Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This generally increases the throughput of a subscription and improves replication performance.
Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance.

The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction does not violate the commit order as executed on the origin node. If there is a violation, an error is generated and the transaction can be rolled back.
The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back.

!!! Warning Possible deadlocks
It may be possible that this out-of-order application of changes could trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. If you experience a large number of such deadlocks, this is an indication that Parallel Apply is not a good fit for your workload and you should consider disabling it.
It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it.
!!!

### Configuring Parallel Apply
There are two variables which control Parallel Apply in PGD 5, [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). The default settings for these are 8 and 2.
## Configuring Parallel Apply
Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2).

```plain
bdr.max_writers_per_subscription = 8
bdr.writers_per_subscription = 2
```

This gives each subscription two writers, but in some circumstances, the system may allocate up to 8 writers for a subscription.
This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription.

[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) can only be changed with a server restart.
You can change [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) only with a server restart.

[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) can be changed, for a specific subscription, without a restart by halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable), setting the new value and then resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). First establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`.
You can change [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by:

1. Halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable).
1. Setting the new value.
1. Resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable).

First, though, establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`.


```sql
SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
```

### When to use Parallel Apply

Parallel Apply is always on by default and for most operations, we recommend that it is left on.
Parallel Apply is always on by default. For most operations, we recommend that you leave it on.

### When not to use Parallel Apply

As of, and up to at least PGD 5.1, Parallel Apply should not be used with Group Commit, CAMO and eager replication. You should disable Parallel Apply in these scenarios. If you are experiencing a large number of deadlocks, you may also want to disable it.
For PGD 5.1 and earlier, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it.

### Disabling Parallel Apply

To disable Parallel Apply set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to 1.




To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`.
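
Following the same disable/set/resume pattern described above, here's a sketch of turning Parallel Apply off for the example subscription. How the new value is applied is an assumption in this sketch: `ALTER SYSTEM` affects the whole instance, which may be broader than you want.

```sql
-- Sketch only: subscription name is the example used earlier on this page.
SELECT bdr.alter_subscription_disable ('bdr_bdrdb_bdrgroup_node2_node1');
ALTER SYSTEM SET bdr.writers_per_subscription = 1;  -- assumption: applied instance-wide
SELECT pg_reload_conf();
SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
```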
@@ -892,7 +892,7 @@ RecoveryTarget allows you to configure the moment when the recovery process will stop
| `targetLSN ` | The target LSN (Log Sequence Number) | string |
| `targetTime ` | The target time as a timestamp in the RFC3339 standard | string |
| `targetImmediate` | End recovery as soon as a consistent state is reached | \*bool |
| `exclusive ` | Set the target to be exclusive (defaults to true) | \*bool |
| `exclusive ` | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true | \*bool |

<a id='ReplicaClusterConfiguration'></a>

11 changes: 6 additions & 5 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -702,10 +702,11 @@ You can choose only a single one among the targets above in each
Additionally, you can specify `targetTLI` to force recovery to a specific
timeline.

By default, the previous parameters are considered to be exclusive, stopping
just before the recovery target. You can request inclusive behavior,
stopping right after the recovery target, setting the `exclusive` parameter to
`false` like in the following example relying on a blob container in Azure:
By default, the previous parameters are considered to be inclusive, stopping
just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
You can request exclusive behavior,
stopping right before the recovery target, by setting the `exclusive` parameter to
`true` like in the following example relying on a blob container in Azure:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -724,7 +725,7 @@ spec:
recoveryTarget:
backupID: 20220616T142236
targetName: "maintenance-activity"
exclusive: false
exclusive: true
externalClusters:
- name: clusterBackup
22 changes: 17 additions & 5 deletions product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -67,7 +67,6 @@ full lifecycle of a highly available Postgres database clusters with a
primary/standby architecture, using native streaming replication.

!!! Note

The operator has been renamed from Cloud Native PostgreSQL. Existing users
of Cloud Native PostgreSQL will not experience any change, as the underlying
components and resources have not changed.
@@ -90,7 +89,7 @@

## Features unique to EDB Postgres for Kubernetes

- Long Term Support for 1.18.x
- [Long Term Support](#long-term-support) for 1.18.x
- Red Hat certified operator for OpenShift
- Support on IBM Power
- EDB Postgres for Kubernetes Plugin
@@ -102,11 +101,26 @@ You can [evaluate EDB Postgres for Kubernetes for free](evaluation.md).
You need a valid license key to use EDB Postgres for Kubernetes in production.

!!! Note

Based on the [Operator Capability Levels model](operator_capability_levels.md),
users can expect a **"Level V - Auto Pilot"** set of capabilities from the
EDB Postgres for Kubernetes Operator.

### Long Term Support

EDB is committed to declaring one version of EDB Postgres for Kubernetes per
year as a Long Term Support (LTS) version. This version is supported and receives
maintenance releases for an additional 12 months beyond the last release of
CloudNativePG by the community for the same version. For example, the last
1.18 version of CloudNativePG was released on June 12, 2023. That version was
declared an LTS version of EDB Postgres for Kubernetes and will be supported
for an additional 12 months, until June 12, 2024. Customers can expect at
least 6 months to move between LTS versions, so the next LTS should be
available by January 12, 2024 to allow at least 6 months to migrate. While we
encourage customers to regularly upgrade to the latest version of the operator
to take advantage of new features, LTS versions allow customers that want
additional stability to stay on the same version for 12-18 months before
upgrading.

## Licensing

EDB Postgres for Kubernetes works with both PostgreSQL and EDB Postgres
@@ -130,7 +144,6 @@ The EDB Postgres for Kubernetes Operator container images support the multi-arch
format for the following platforms: `linux/amd64`, `linux/arm64`, `linux/ppc64le`, `linux/s390x`.

!!! Warning

EDB Postgres for Kubernetes requires that all nodes in a Kubernetes cluster have the
same CPU architecture, thus a hybrid CPU architecture Kubernetes cluster is not
supported. Additionally, EDB supports `linux/ppc64le` and `linux/s390x` architectures
@@ -156,7 +169,6 @@ In case you are not familiar with some basic terminology on Kubernetes and Postgres
please consult the ["Before you start" section](before_you_start.md).

!!! Note

Although the guide primarily addresses Kubernetes, all concepts can
be extended to OpenShift as well.


2 comments on commit 74076f5

@github-actions
πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://64c4143f70fca62f6402771d--edb-docs.netlify.app
