diff --git a/advocacy_docs/supported-open-source/cloud_native_pg/index.mdx b/advocacy_docs/supported-open-source/cloud_native_pg/index.mdx index b4184a4cc1a..0e14dc39d4f 100644 --- a/advocacy_docs/supported-open-source/cloud_native_pg/index.mdx +++ b/advocacy_docs/supported-open-source/cloud_native_pg/index.mdx @@ -13,7 +13,7 @@ redirects: CloudNativePG is the Kubernetes operator that covers the full lifecycle of a highly available Postgres database cluster with a primary/standby architecture, using native streaming replication. !!! Note - Looking for CloudNativePG documentation? Head over to [cloudnative-pg.io/docs/](https://cloudnative-pg.io/documentation/1.15.0/). + Looking for CloudNativePG documentation? Head over to [cloudnative-pg.io/docs/](https://cloudnative-pg.io/docs/). CloudNativePG was originally built by EDB, then released open source under Apache License 2.0 and submitted for CNCF Sandbox in April 2022. The source code repository is in [Github](https://github.com/cloudnative-pg/cloudnative-pg). diff --git a/product_docs/docs/pgd/4/overview/bdr/appusage.mdx b/product_docs/docs/pgd/4/bdr/appusage.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/appusage.mdx rename to product_docs/docs/pgd/4/bdr/appusage.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/camo.mdx b/product_docs/docs/pgd/4/bdr/camo.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/camo.mdx rename to product_docs/docs/pgd/4/bdr/camo.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/catalogs.mdx b/product_docs/docs/pgd/4/bdr/catalogs.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/catalogs.mdx rename to product_docs/docs/pgd/4/bdr/catalogs.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/column-level-conflicts.mdx b/product_docs/docs/pgd/4/bdr/column-level-conflicts.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/column-level-conflicts.mdx rename to product_docs/docs/pgd/4/bdr/column-level-conflicts.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/configuration.mdx b/product_docs/docs/pgd/4/bdr/configuration.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/configuration.mdx rename to product_docs/docs/pgd/4/bdr/configuration.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/conflicts.mdx b/product_docs/docs/pgd/4/bdr/conflicts.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/conflicts.mdx rename to product_docs/docs/pgd/4/bdr/conflicts.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/crdt.mdx b/product_docs/docs/pgd/4/bdr/crdt.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/crdt.mdx rename to product_docs/docs/pgd/4/bdr/crdt.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/ddl.mdx b/product_docs/docs/pgd/4/bdr/ddl.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/ddl.mdx rename to product_docs/docs/pgd/4/bdr/ddl.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/durability.mdx b/product_docs/docs/pgd/4/bdr/durability.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/durability.mdx rename to product_docs/docs/pgd/4/bdr/durability.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/eager.mdx b/product_docs/docs/pgd/4/bdr/eager.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/eager.mdx rename to product_docs/docs/pgd/4/bdr/eager.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/functions.mdx 
b/product_docs/docs/pgd/4/bdr/functions.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/functions.mdx rename to product_docs/docs/pgd/4/bdr/functions.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/group-commit.mdx b/product_docs/docs/pgd/4/bdr/group-commit.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/group-commit.mdx rename to product_docs/docs/pgd/4/bdr/group-commit.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/img/bdr.png b/product_docs/docs/pgd/4/bdr/img/bdr.png similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/img/bdr.png rename to product_docs/docs/pgd/4/bdr/img/bdr.png diff --git a/product_docs/docs/pgd/4/overview/bdr/img/frontpage.svg b/product_docs/docs/pgd/4/bdr/img/frontpage.svg similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/img/frontpage.svg rename to product_docs/docs/pgd/4/bdr/img/frontpage.svg diff --git a/product_docs/docs/pgd/4/overview/bdr/img/nodes.png b/product_docs/docs/pgd/4/bdr/img/nodes.png similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/img/nodes.png rename to product_docs/docs/pgd/4/bdr/img/nodes.png diff --git a/product_docs/docs/pgd/4/overview/bdr/img/nodes.svg b/product_docs/docs/pgd/4/bdr/img/nodes.svg similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/img/nodes.svg rename to product_docs/docs/pgd/4/bdr/img/nodes.svg diff --git a/product_docs/docs/pgd/4/overview/bdr/index.mdx b/product_docs/docs/pgd/4/bdr/index.mdx similarity index 98% rename from product_docs/docs/pgd/4/overview/bdr/index.mdx rename to product_docs/docs/pgd/4/bdr/index.mdx index 039465fd006..43d88128cd7 100644 --- a/product_docs/docs/pgd/4/overview/bdr/index.mdx +++ b/product_docs/docs/pgd/4/bdr/index.mdx @@ -1,8 +1,7 @@ --- +title: BDR (Bi-Directional Replication) navTitle: BDR navigation: - - index - - release_notes - overview - appusage - configuration @@ -26,8 +25,6 @@ navigation: - twophase - catalogs - functions -title: BDR (Bi-Directional Replication) - --- ## Overview diff --git a/product_docs/docs/pgd/4/overview/bdr/lag-control.mdx b/product_docs/docs/pgd/4/bdr/lag-control.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/lag-control.mdx rename to product_docs/docs/pgd/4/bdr/lag-control.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/nodes.mdx b/product_docs/docs/pgd/4/bdr/nodes.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/nodes.mdx rename to product_docs/docs/pgd/4/bdr/nodes.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/overview.mdx b/product_docs/docs/pgd/4/bdr/overview.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/overview.mdx rename to product_docs/docs/pgd/4/bdr/overview.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/repsets.mdx b/product_docs/docs/pgd/4/bdr/repsets.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/repsets.mdx rename to product_docs/docs/pgd/4/bdr/repsets.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/scaling.mdx b/product_docs/docs/pgd/4/bdr/scaling.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/scaling.mdx rename to product_docs/docs/pgd/4/bdr/scaling.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/security.mdx b/product_docs/docs/pgd/4/bdr/security.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/security.mdx rename to product_docs/docs/pgd/4/bdr/security.mdx diff --git 
a/product_docs/docs/pgd/4/overview/bdr/sequences.mdx b/product_docs/docs/pgd/4/bdr/sequences.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/sequences.mdx rename to product_docs/docs/pgd/4/bdr/sequences.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/striggers.mdx b/product_docs/docs/pgd/4/bdr/striggers.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/striggers.mdx rename to product_docs/docs/pgd/4/bdr/striggers.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/transaction-streaming.mdx b/product_docs/docs/pgd/4/bdr/transaction-streaming.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/transaction-streaming.mdx rename to product_docs/docs/pgd/4/bdr/transaction-streaming.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/tssnapshots.mdx b/product_docs/docs/pgd/4/bdr/tssnapshots.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/tssnapshots.mdx rename to product_docs/docs/pgd/4/bdr/tssnapshots.mdx diff --git a/product_docs/docs/pgd/4/overview/bdr/twophase.mdx b/product_docs/docs/pgd/4/bdr/twophase.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/bdr/twophase.mdx rename to product_docs/docs/pgd/4/bdr/twophase.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/02_overview.mdx b/product_docs/docs/pgd/4/harp/02_overview.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/02_overview.mdx rename to product_docs/docs/pgd/4/harp/02_overview.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/03_installation.mdx b/product_docs/docs/pgd/4/harp/03_installation.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/03_installation.mdx rename to product_docs/docs/pgd/4/harp/03_installation.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/04_configuration.mdx b/product_docs/docs/pgd/4/harp/04_configuration.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/04_configuration.mdx rename to product_docs/docs/pgd/4/harp/04_configuration.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/05_bootstrapping.mdx b/product_docs/docs/pgd/4/harp/05_bootstrapping.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/05_bootstrapping.mdx rename to product_docs/docs/pgd/4/harp/05_bootstrapping.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/06_harp_manager.mdx b/product_docs/docs/pgd/4/harp/06_harp_manager.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/06_harp_manager.mdx rename to product_docs/docs/pgd/4/harp/06_harp_manager.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/07_harp_proxy.mdx b/product_docs/docs/pgd/4/harp/07_harp_proxy.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/07_harp_proxy.mdx rename to product_docs/docs/pgd/4/harp/07_harp_proxy.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/08_harpctl.mdx b/product_docs/docs/pgd/4/harp/08_harpctl.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/08_harpctl.mdx rename to product_docs/docs/pgd/4/harp/08_harpctl.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/09_consensus-layer.mdx b/product_docs/docs/pgd/4/harp/09_consensus-layer.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/09_consensus-layer.mdx rename to product_docs/docs/pgd/4/harp/09_consensus-layer.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/10_security.mdx b/product_docs/docs/pgd/4/harp/10_security.mdx similarity 
index 100% rename from product_docs/docs/pgd/4/overview/harp/10_security.mdx rename to product_docs/docs/pgd/4/harp/10_security.mdx diff --git a/product_docs/docs/pgd/4/overview/harp/Makefile b/product_docs/docs/pgd/4/harp/Makefile similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/Makefile rename to product_docs/docs/pgd/4/harp/Makefile diff --git a/product_docs/docs/pgd/4/overview/harp/images/bdr-ao-spec.dia b/product_docs/docs/pgd/4/harp/images/bdr-ao-spec.dia similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/bdr-ao-spec.dia rename to product_docs/docs/pgd/4/harp/images/bdr-ao-spec.dia diff --git a/product_docs/docs/pgd/4/overview/harp/images/bdr-ao-spec.png b/product_docs/docs/pgd/4/harp/images/bdr-ao-spec.png similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/bdr-ao-spec.png rename to product_docs/docs/pgd/4/harp/images/bdr-ao-spec.png diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-ao-bdr.dia b/product_docs/docs/pgd/4/harp/images/ha-ao-bdr.dia similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-ao-bdr.dia rename to product_docs/docs/pgd/4/harp/images/ha-ao-bdr.dia diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-ao-bdr.png b/product_docs/docs/pgd/4/harp/images/ha-ao-bdr.png similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-ao-bdr.png rename to product_docs/docs/pgd/4/harp/images/ha-ao-bdr.png diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-ao.dia b/product_docs/docs/pgd/4/harp/images/ha-ao.dia similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-ao.dia rename to product_docs/docs/pgd/4/harp/images/ha-ao.dia diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-ao.png b/product_docs/docs/pgd/4/harp/images/ha-ao.png similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-ao.png rename to product_docs/docs/pgd/4/harp/images/ha-ao.png diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-unit-bdr.dia b/product_docs/docs/pgd/4/harp/images/ha-unit-bdr.dia similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-unit-bdr.dia rename to product_docs/docs/pgd/4/harp/images/ha-unit-bdr.dia diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-unit-bdr.png b/product_docs/docs/pgd/4/harp/images/ha-unit-bdr.png similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-unit-bdr.png rename to product_docs/docs/pgd/4/harp/images/ha-unit-bdr.png diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-unit.dia b/product_docs/docs/pgd/4/harp/images/ha-unit.dia similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-unit.dia rename to product_docs/docs/pgd/4/harp/images/ha-unit.dia diff --git a/product_docs/docs/pgd/4/overview/harp/images/ha-unit.png b/product_docs/docs/pgd/4/harp/images/ha-unit.png similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/images/ha-unit.png rename to product_docs/docs/pgd/4/harp/images/ha-unit.png diff --git a/product_docs/docs/pgd/4/overview/harp/index.mdx b/product_docs/docs/pgd/4/harp/index.mdx similarity index 100% rename from product_docs/docs/pgd/4/overview/harp/index.mdx rename to product_docs/docs/pgd/4/harp/index.mdx diff --git a/product_docs/docs/pgd/4/index.mdx b/product_docs/docs/pgd/4/index.mdx index 32c170c8cba..b6ae426ca24 100644 --- a/product_docs/docs/pgd/4/index.mdx +++ b/product_docs/docs/pgd/4/index.mdx @@ 
-2,13 +2,16 @@ title: "EDB Postgres Distributed" indexCards: none redirects: -- /pgd/4/compatibility_matrix.mdx + - /pgd/4/compatibility_matrix navigation: - rel_notes - known_issues - "#Concepts" - terminology - overview + - "#Components" + - bdr + - harp - "#Planning" - architectures - choosing_server diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.1_rel_notes.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.1_rel_notes.mdx deleted file mode 100644 index af1726b726a..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.1_rel_notes.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: "BDR 4.0.1" ---- - -This is a maintenance release for BDR 4.0 which includes minor -improvements as well as fixes for issues identified in previous -versions. - -| Type | Category | Description | -| ---- | -------- | ----------- | -| Improvement | Reliability and operability | Reduce frequency of CAMO partner connection attempts.

In case of a failure to connect to a CAMO partner to verify its configuration and check the status of transactions, do not retry immediately (leading to a fully busy pglogical manager process), but throttle down repeated attempts to reconnect and checks to once per minute.

-| Improvement | Performance and scalability | Implement buffered read for LCR segment file (BDR-1422)

Implement LCR segment file buffering so that multiple LCR chunks can be read at a time. This should reduce I/O and improve CPU usage of WAL senders when using the Decoding Worker.

-| Improvement | Performance and scalability | Avoid unnecessary LCR segment reads (BDR-1426)

BDR now attempts to only read new LCR segments when there is at least one available. This reduces I/O load when Decoding Worker is enabled.

-| Improvement | Performance and scalability | Performance of COPY replication including the initial COPY during join has been greatly improved for partitioned tables (BDR-1479)

For large tables this can improve the load times by an order of magnitude or more.

-| Bug fix | Performance and scalability | Fix the parallel apply worker selection (BDR-1761)

This makes parallel apply work again. In 4.0.0 parallel apply was never in effect due to this bug.

-| Bug fix | Reliability and operability | Fix Raft snapshot handling of `bdr.camo_pairs` (BDR-1753)

The previous release would not correctly propagate changes to the CAMO pair configuration when they were received via Raft snapshot.

-| Bug fix | Reliability and operability | Correctly handle Raft snapshots from BDR 3.7 after upgrades (BDR-1754) -| Bug fix | Reliability and operability | Upgrading a CAMO-configured cluster now takes the `bdr.camo_pairs` in the snapshot into account, while in-place upgrade of a cluster is still not possible (due to upgrade limitations unrelated to CAMO). -| Bug fix | Reliability and operability | Switch from CAMO to Local Mode only after timeouts (RT74892)

Do not use the `catchup_interval` estimate when switching from CAMO protected to Local Mode, as that could induce inadvertent switching due to load spikes. Use the estimate only when switching from Local Mode back to CAMO protected (to prevent toggling back and forth due to lag on the CAMO partner).

-| Bug fix | Reliability and operability | Fix replication set cache invalidation when the published replication set list has changed (BDR-1715)

In previous versions, stale information about which replication sets (and as a result which tables) should be published could be used until the subscription had reconnected.

-| Bug fix | Reliability and operability | Prevent duplicate values generated locally by galloc sequence in high concurrency situations when the new chunk is used (RT76528)

The galloc sequence could temporarily produce duplicate values when switching which chunk is used locally (but not across nodes) if multiple sessions were waiting for the new value. This is now fixed.

-| Bug fix | Reliability and operability | Address memory leak on streaming transactions (BDR-1479)

For large transactions this reduces memory usage and I/O considerably when using the streaming transactions feature. This primarily improves performance of COPY replication.

-| Bug fix | Reliability and operability | Don't leave slot behind after PART_CATCHUP phase of node parting when the catchup source has changed while the node was parting (BDR-1716)

When a node is being removed (parted) from a BDR group, a so-called catchup is performed to forward any missing changes from that node to the remaining nodes and keep the data on all nodes consistent. This requires an additional replication slot to be created temporarily. Normally this replication slot is removed at the end of the catchup phase; however, in certain scenarios where the source node for the changes has to be changed, this slot could previously be left behind. From this version, this slot is always correctly removed.

-| Bug fix | Reliability and operability | Ensure that the group slot is moved forward when there is only one node in the BDR group

This prevents disk exhaustion due to WAL accumulation when the group is left running with just a single BDR node for a prolonged period of time. This is not a recommended setup, but the WAL accumulation was not intentional.

-| Bug fix | Reliability and operability | Advance Raft protocol version when there is only one node in the BDR group

Single-node clusters would otherwise always stay on the oldest supported protocol version until another node was added. This could limit the feature set available on that single node.

- -## Upgrades - -This release supports upgrading from the following versions of BDR: - -- 3.7.14 -- 4.0.0 and higher - -Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.2_rel_notes.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.2_rel_notes.mdx deleted file mode 100644 index d39bfb08be5..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.0.2_rel_notes.mdx +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: "BDR 4.0.2" ---- - -This is a maintenance release for BDR 4.0 which includes minor -improvements as well as fixes for issues identified in previous -versions. - -| Type | Category | Description | -| ---- | -------- | ----------- | -| Improvement | Reliability and operability | Add `bdr.max_worker_backoff_delay` (BDR-1767)

This changes the handling of the backoff delay to exponentially increase from `bdr.min_worker_backoff_delay` to `bdr.max_worker_backoff_delay` in the presence of repeated errors. This reduces log spam and in some cases also prevents unnecessary connection attempts.
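As an illustration only, the two GUCs named above could be tuned together as in this sketch; the values shown are assumptions, not recommended settings:

```sql
-- Sketch: let failing BDR workers back off from 1 second up to a 60-second cap.
-- Values are illustrative; check the BDR configuration reference for valid ranges.
ALTER SYSTEM SET bdr.min_worker_backoff_delay = '1s';
ALTER SYSTEM SET bdr.max_worker_backoff_delay = '60s';
SELECT pg_reload_conf();
```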

-| Improvement | User Experience | Add `execute_locally` option to `bdr.replicate_ddl_command()` (RT73533)

This allows optional queueing of DDL commands for replication to other groups without executing them locally.
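A minimal sketch of how the new option might be called; only `execute_locally` is named in the note above, while the DDL string and named-argument style are assumptions:

```sql
-- Queue a DDL command for replication without running it on the local node.
SELECT bdr.replicate_ddl_command(
    'ALTER TABLE public.orders ADD COLUMN note text;',
    execute_locally := false
);
```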

-| Improvement | User Experience | Change ERROR on consensus issue during JOIN to WARNING

The reporting of these transient errors was confusing, as they were also shown in `bdr.worker_errors`. These are now reported as WARNINGs.

-| Bug fix | Reliability and operability | WAL decoder confirms end LSN of the running transactions record (BDR-1264)

Confirm the end LSN of the running transactions record processed by the WAL decoder so that the WAL decoder slot remains up to date and WAL senders get the candidate in a timely manner.

-| Bug fix | Reliability and operability | Don't wait for autopartition tasks to complete on parting nodes (BDR-1867)

When a node has started the parting process, it makes no sense to wait for autopartition tasks on that node to finish, since it's no longer part of the group.

-| Bug fix | User Experience | Improve handling of node name reuse during parallel join (RT74789)

Nodes now have a generation number so that it's easier to identify the name reuse even if the node record is received as part of a snapshot.

-| Bug fix | Reliability and operability | Fix locking and snapshot use during node management in the BDR manager process (RT74789)

When processing multiple actions in the state machine, make sure to reacquire the lock on the processed node and update the snapshot to make sure all updates happening through consensus are taken into account.

-| Bug fix | Reliability and operability | Improve cleanup of catalogs on local node drop

Drop all groups, not only the primary one, and drop all the node state history info as well.

-| Bug fix | User Experience | Improve error checking for join request in bdr_init_physical

Previously, bdr_init_physical would simply wait forever when there was any issue with the consensus request; now the same checking is done as for a logical join.

-| Bug fix | Reliability and operability | Improve handling of various timeouts and sleeps in consensus

This reduces the number of new consensus votes needed when processing many consensus requests or time-consuming consensus requests, for example during the join of a new node.

-| Bug fix | Reliability and operability | Fix handling of `wal_receiver_timeout` (BDR-1848)

The `wal_receiver_timeout` was not triggered correctly due to a regression in BDR 3.7 and 4.0.

-| Bug fix | Reliability and operability | Limit the `bdr.standby_slot_names` check when reporting flush position only to physical slots (RT77985, RT78290)

Otherwise flush progress is not reported in the presence of disconnected nodes when using `bdr.standby_slot_names`.

-| Bug fix | Reliability and operability | Fix replication of data types created during bootstrap (BDR-1784) -| Bug fix | Reliability and operability | Fix replication of arrays of builtin types that don't have binary transfer support (BDR-1042) -| Bug fix | Reliability and operability | Prevent CAMO configuration warnings if CAMO is not being used (BDR-1825) - -## Upgrades - -This release supports upgrading from the following versions of BDR: - -- 4.0.0 and higher - -The upgrade path from BDR 3.7 is not currently stable and needs to be -considered beta. Tests should be performed with at least BDR 3.7.15. - -Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.0_rel_notes.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.0_rel_notes.mdx deleted file mode 100644 index c9ce68f8776..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.0_rel_notes.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: "BDR 4.1.0" ---- - -This is a minor release of BDR 4 which includes new features as well -as fixes for issues identified in previous versions. - -| Type | Category | Description | -| ---- | -------- | ----------- | -| Feature | Reliability and operability | Support in-place major upgrade of Postgres on a BDR node

This BDR release includes a new command-line utility, `bdr_pg_upgrade`, which uses `pg_upgrade` to do a major version upgrade of Postgres on a BDR node.

This reduces the time and network bandwidth necessary to do major version upgrades of Postgres in an EDB Postgres Distributed cluster.

-| Feature | Performance and scalability | Replication Lag Control

Add configuration for a replication lag threshold after which the transaction commits get throttled. This allows limiting RPO without incurring the latency impact on every transaction that comes with synchronous replication.

-| Feature | UX / Initial experience | Distributed sequences by default

The default value of `bdr.default_sequence_kind` has been changed to `'distributed'`, which is a new kind of sequence that uses SnowflakeId for `bigserial` and Galloc sequences for the `serial` column type.
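For illustration, a sketch of what this default means for newly created tables (table and column names are hypothetical):

```sql
-- With bdr.default_sequence_kind = 'distributed', serial/bigserial columns
-- are backed by node-safe sequences without further configuration.
SHOW bdr.default_sequence_kind;

CREATE TABLE public.orders (
    id bigserial PRIMARY KEY,            -- SnowflakeId-style distributed sequence
    created_at timestamptz DEFAULT now()
);
```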

-| Feature | UX | Simplified synchronous replication configuration

New syntax for specifying the synchronous replication options, with a focus on BDR groups and SQL-based management (as opposed to configuration files).

In future versions this will also replace the current Eager Replication and CAMO configuration options.

-| Feature | High availability and disaster recovery | Group Commit

The initial kind of synchronous commit that can be configured via the new configuration syntax.

-| Feature | High availability and disaster recovery | Allow a Raft request to be required for CAMO switching to Local Mode (RT78928)

Add a `require_raft` flag to the CAMO pairing configuration which controls the behavior of switching from CAMO protected to Local Mode, introducing the option to require a majority of nodes to be connected to allow switching to Local Mode.

-| Feature | High availability and disaster recovery | Allow replication to continue on `ALTER TABLE ... DETACH PARTITION CONCURRENTLY` of already detached partition (RT78362)

Similarly to how BDR 4 handles `CREATE INDEX CONCURRENTLY` when the same index already exists, replication is now allowed to continue when `ALTER TABLE ... DETACH PARTITION CONCURRENTLY` is received for a partition that has already been detached.

-| Feature | User Experience | Add additional filtering options to DDL filters.

DDL filters allow for replication of different DDL statements to different replication sets, similar to how table membership in a replication set allows DML on different tables to be replicated via different replication sets.

This release adds new controls that make it easier to use the DDL filters:
- query_match - if defined, the query must match this regex
- exclusive - if true, other matched filters are not taken into consideration (that is, only the exclusive filter is applied); when multiple exclusive filters match, an error is thrown

-| Feature | User Experience | Add `bdr.lock_table_locking` configuration variable.

When enabled, this changes the behavior of the `LOCK TABLE` command to take a global DML lock
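A sketch of the effect described above; the table name is hypothetical:

```sql
-- With bdr.lock_table_locking enabled, LOCK TABLE also acquires a global DML lock
-- across the BDR group rather than only a local lock.
SET bdr.lock_table_locking = on;

BEGIN;
LOCK TABLE public.orders IN ACCESS EXCLUSIVE MODE;
-- ... maintenance work ...
COMMIT;
```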

-| Feature | Performance and scalability | Implement buffered write for LCR segment file

This should reduce I/O and improve CPU usage of the Decoding Worker.

-| Feature | User Experience | Add support for partial unique index lookups for conflict detection (RT78368).

Indexes on expressions are, however, still not supported for conflict detection.
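For illustration, the kind of index this refers to (names are hypothetical): a partial unique index such as the one below can now be used for conflict detection, while a unique index on an expression still cannot.

```sql
-- Partial unique index: usable for conflict detection per the note above.
CREATE UNIQUE INDEX orders_active_reference_uq
    ON public.orders (reference)
    WHERE status = 'active';
```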

-| Feature | User Experience | Add additional statistics to `bdr.stat_subscription`:
- nstream_insert => the count of INSERTs on streamed transactions
- nstream_update => the count of UPDATEs on streamed transactions
- nstream_delete => the count of DELETEs on streamed transactions
- nstream_truncate => the count of TRUNCATEs on streamed transactions
- npre_commit_confirmations => the count of pre-commit confirmations when using CAMO
- npre_commit => the count of pre-commits
- ncommit_prepared => the count of prepared commits with 2PC
- nabort_prepared => the count of aborts of prepared transactions with 2PC -| Feature | User Experience | Add execute_locally option to bdr.replicate_ddl_command (RT73533).

This allows optional queueing of DDL commands for replication to other groups without executing them locally.

-| Feature | User Experience | Add `fast` argument to `bdr.alter_subscription_disable()` (RT79798)

The argument only influences the behavior of `immediate`. When set to `true` (the default), it stops the workers without letting them finish the current work.
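A sketch of a call using the new argument; the subscription name is hypothetical and the other parameter names are assumptions:

```sql
-- Disable a subscription immediately, stopping its workers without
-- waiting for in-progress work to finish.
SELECT bdr.alter_subscription_disable(
    subscription_name := 'bdr_appdb_group_sub',
    immediate := true,
    fast := true
);
```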

-| Feature | User Experience | Keep the `bdr.worker_error` records permanently for all types of workers.

BDR used to remove receiver and writer errors when those workers managed to replicate the LSN that was previously resulting in an error. However, this was inconsistent with how other workers behaved, as other worker errors were permanent, and it also made troubleshooting of past issues harder. The last error record is now kept permanently for every worker type.

-| Feature | User Experience | Simplify `bdr.{add,remove}_camo_pair` functions to return void. -| Feature | Initial Experience | Add connectivity/lag check before taking global lock.

This ensures the application or user does not have to wait for minutes to get a lock timeout when there are obvious connectivity issues.

Can be set to DEBUG, LOG, WARNING (default) or ERROR.

-| Feature | Initial Experience | Only log conflicts to conflict log table by default. They are no longer logged to the server log file by default, but this can be overridden. -| Feature | User Experience | Improve reporting of remote errors during node join. -| Feature | Reliability and operability | Make autopartition worker's max naptime configurable. -| Feature | User Experience | Add ability to request partitions up to the given upper bound with autopartition. -| Feature | Initial Experience | Don't try to replicate DDL run on a subscribe-only node. It has nowhere to replicate so any attempt to do so will fail. This is the same as how logical standbys behave. -| Feature | User Experience | Add `bdr.accept_connections` configuration variable. When `false`, walsender connections to replication slots using the BDR output plugin will fail. This is useful primarily during restore of a single node from backup. -| Bug fix | Reliability and operability | Keep the `lock_timeout` as configured on non-CAMO-partner BDR nodes

A CAMO partner uses a low `lock_timeout` when applying transactions from its origin node. This was inadvertently done for all BDR nodes rather than just the CAMO partner, which may have led to spurious `lock_timeout` errors on pglogical writer processes on normal BDR nodes.

-| Bug fix | User Experience | Show a proper wait event for CAMO / Eager confirmation waits (RT75900)

Show correct "BDR Prepare Phase"/"BDR Commit Phase" in `bdr.stat_activity` instead of the default “unknown wait event”.

-| Bug fix | User Experience | Reduce log for bdr.run_on_nodes (RT80973)

Don't log when setting `bdr.ddl_replication` to off if it's done with the "run_on_nodes" variants of the function. This eliminates the flood of logs for monitoring functions.

-| Bug fix | Reliability and operability | Fix replication of arrays of composite types and arrays of builtin types that don't support binary network encoding -| Bug fix | Reliability and operability | Fix replication of data types created during bootstrap -| Bug fix | Performance and scalability | Confirm end LSN of the running transactions record processed by WAL decoder so that the WAL decoder slot remains up to date and WAL sender get the candidate in timely manner. -| Bug fix | Reliability and operability | Don't wait for autopartition tasks to complete on parting nodes -| Bug fix | Reliability and operability | Limit the `bdr.standby_slot_names` check when reporting flush position only to physical slots (RT77985, RT78290)

Otherwise flush progress is not reported in the presence of disconnected nodes when using `bdr.standby_slot_names`.

-| Bug fix | Reliability and operability | Request feedback reply from walsender if we are close to wal_receiver_timeout -| Bug fix | Reliability and operability | Don't record dependency of auto-partitioned table on BDR extension more than once.

This resulted in "ERROR: unexpected number of extension dependency records" errors from auto-partition and broken replication on conflicts when this happens.

Note that existing broken tables still need to be fixed manually by removing the double dependency from `pg_depend`

-| Bug fix | Reliability and operability | Improve keepalive handling in receiver.

Don't update the position based on keepalive when in the middle of a streaming transaction, as we might lose data on crash if we do that.

There is also new flush and signalling logic that should improve latency in low TPS scenarios. -| Bug fix | Reliability and operability | Only do post `CREATE` commands processing when BDR node exists in the database. -| Bug fix | Reliability and operability | Don't try to log ERROR conflicts to conflict history table. -| Bug fix | Reliability and operability | Fixed segfault where a conflict_slot was being used after it was released during multi-insert (COPY) (RT76439). -| Bug fix | Reliability and operability | Prevent walsender processes spinning when facing lagging standby slots (RT80295, RT78290).

Correct signaling to reset a latch so that a walsender process does not consume 100% of a CPU in case one of the standby slots is lagging behind.

-| Bug fix | Reliability and operability | Fix handling of `wal_sender_timeout` when `bdr.standby_slot_names` are used (RT78290) -| Bug fix | Reliability and operability | Make ALTER TABLE lock the underlying relation only once (RT80204). -| Bug fix | User Experience | Fix reporting of disconnected slots in `bdr.monitor_local_replslots`. They could have been previously reported as missing instead of disconnected. -| Bug fix | Reliability and operability | Fix apply timestamp reporting for down subscriptions in `bdr.get_subscription_progress()` function and in the `bdr.subscription_summary` that uses that function. It would report garbage value before. -| Bug fix | Reliability and operability | Fix snapshot handling in various places in BDR workers. -| Bug fix | User Experience | Be more consistent about reporting timestamps and LSNs as NULLs in monitoring functions when there is no available value for those. -| Bug fix | Reliability and operability | Reduce log information when switching between writer processes. -| Bug fix | Reliability and operability | Don't do superuser check when configuration parameter was specified on PG command-line. We can't do transactions there yet and it's guaranteed to be superuser changed at that stage. -| Bug fix | Reliability and operability | Use 64 bits for calculating lag size in bytes. To eliminate risk of overflow with large lag. - - -### Upgrades - -This release supports upgrading from the following versions of BDR: - -- 4.0.0 and higher -- 3.7.15 -- 3.7.16 - -Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.1_rel_notes.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.1_rel_notes.mdx deleted file mode 100644 index 9632dcff66c..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4.1.1_rel_notes.mdx +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: "BDR 4.1.1" ---- - -This is a maintenance release of BDR 4 which includes new features as well -as fixes for issues identified in previous versions. - - -| Type | Category | Description | -| ---- | -------- | ----------- | -| Feature | User Experience | Add generic function bdr.is_node_connected returns true if the walsender for a given peer is active. | -| Feature | User Experience | Add generic function bdr.is_node_ready returns boolean if the lag is under a specific span. | -| Bug fix | User Experience | Add support for a `--link` argument to bdr_pg_upgrade for using hard-links. | -| Bug fix | User Experience | Prevent removing a `bdr.remove_commit_scope` if still referenced by any `bdr.node_group` as the default commit scope. | -| Bug fix | Reliability and operability | Correct Raft based switching to Local Mode for CAMO pairs of nodes (RT78928) | -| Bug fix | Reliability and operability | Prevent a potential segfault in bdr.drop_node for corner cases (RT81900) | -| Bug fix | User Experience | Prevent use of CAMO or Eager All Node transactions in combination with transaction streaming
Transaction streaming turned out to be problematic in combination with CAMO and Eager All Node transactions. Until this is resolved, BDR now prevents its combined use. This may require CAMO deployments to adjust their configuration to disable transaction streaming, see [Transaction Streaming Configuration](../transaction-streaming#configuration). | - - -### Upgrades - -This release supports upgrading from the following versions of BDR: - -- 4.0.0 and higher -- 3.7.15 -- 3.7.16 - -Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4_rel_notes.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4_rel_notes.mdx deleted file mode 100644 index d6c4111a377..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/bdr4_rel_notes.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: "BDR 4.0.0" ---- - -BDR 4.0 is a new major version of BDR and adopted with this release number is -semantic versioning (for details see semver.org). The two previous major -versions are 3.7 and 3.6. - -| Type | Category | Description | -| ---- | -------- | ----------- | -| Feature | Compatibility | BDR on EDB Postgres Advanced 14 now supports following features which were previously only available on EDB Postgres Extended:
- Commit At Most Once - a consistency feature helping an application to commit each transaction only once, even in the presence of node failures
- Eager Replication - synchronizes between the nodes of the cluster before committing a transaction to provide conflict free replication
- Decoding Worker - separation of decoding into separate worker from wal senders allowing for better scalability with many nodes
- Estimates for Replication Catch-up times
- Timestamp-based Snapshots - providing consistent reads across multiple nodes for retrieving data as they appeared or will appear at a given time
- Automated dynamic configuration of row freezing to improve consistency of UPDATE/DELETE conflicts resolution in certain corner cases
- Assessment checks
- Support for handling missing partitions as conflicts rather than errors
- Advanced DDL Handling for NOT VALID constraints and ALTER TABLE -| Feature | Compatibility | BDR on community version of PostgreSQL 12-14 now supports following features which were previously only available on EDB Postgres Advanced or EDB Postgres Extended:
- Conflict-free Replicated Data Types - additional data types which provide mathematically proven consistency in asynchronous multi-master update scenarios
- Column Level Conflict Resolution - ability to use per column last-update wins resolution so that UPDATEs on different fields can be "merged" without losing either of them
- Transform Triggers - triggers that are executed on the incoming stream of data providing ability to modify it or to do advanced programmatic filtering
- Conflict triggers - triggers which are called when a conflict is detected, providing a way to use custom conflict resolution techniques
- CREATE TABLE AS replication
- Parallel Apply - allow multiple writers to apply the incoming changes -| Feature | Performance | Support streaming of large transactions.

This allows BDR to stream a large transaction (greater than `logical_decoding_work_mem` in size) either to a file on the downstream or to a writer process. This ensures that the transaction is decoded even before it's committed, thus improving parallelism. Further, the transaction can even be applied concurrently if streamed straight to a writer. This improves parallelism even more.

When large transactions are streamed to files, they are decoded and the decoded changes are sent to the downstream even before they are committed. The changes are written to a set of files and applied when the transaction finally commits. If the transaction aborts, the changes are discarded, thus wasting resources on both upstream and downstream.

Sub-transactions are also handled automatically.

This feature is available on PostgreSQL 14, EDB Postgres Extended 13+, and EDB Postgres Advanced 14; see the [Choosing a Postgres distribution](/pgd/latest/choosing_server/) appendix for more details on which features can be used on which versions of Postgres.

-| Feature | Compatibility | The differences that existed in earlier versions of BDR between standard and enterprise edition have been removed. With BDR 4.0 there is one extension for each supported Postgres distribution and version, i.e., PostgreSQL v12-14, EDB Postgres Extended v12-14, and EDB Postgres Advanced 12-14.

Not all features are available on all versions of PostgreSQL; the available features are reported via feature flags using either the `bdr_config` command-line utility or the `bdr.bdr_features()` database function. See [Choosing a Postgres distribution](/pgd/latest/choosing_server/) for more details.
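For example, a quick way to inspect the reported feature flags from SQL (a sketch; the output shape depends on the installed BDR version):

```sql
-- List the optional features this BDR build provides.
SELECT * FROM bdr.bdr_features();
```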

-| Feature | User Experience | There is no pglogical 4.0 extension that corresponds to the BDR 4.0 extension. BDR no longer has a requirement for pglogical.

This also means that only the BDR extension and schema exist, and any configuration parameters were renamed from `pglogical.` to `bdr.`.

-| Feature | Initial experience | Some configuration options have changed defaults for a better post-install experience:
- Parallel apply is now enabled by default (with 2 writers). Allows for better performance, especially with streaming enabled.
- `COPY` and `CREATE INDEX CONCURRENTLY` are now streamed directly to the writer in parallel (on Postgres versions where streaming is supported) to all available nodes by default, eliminating or at least reducing replication lag spikes after these operations.
- The timeout for global locks has been increased to 10 minutes
- The `bdr.min_worker_backoff_delay` now defaults to 1s so that subscriptions retry connection only once per second on error -| Feature | Reliability and operability | Greatly reduced the chance of false positives in conflict detection during node join for tables that use origin-based conflict detection -| Feature | Reliability and operability | Move configuration of CAMO pairs to SQL catalogs

To reduce the chances of misconfiguration and make CAMO pairs within the EDB Postgres Distributed cluster known globally, the CAMO configuration has been moved from the individual node's postgresql.conf to BDR system catalogs managed by Raft. This can, for example, prevent inadvertently dropping a node that's still configured to be a CAMO partner for another active node.

Please see the [Upgrades chapter](/pgd/latest/upgrades/#upgrading-a-camo-enabled-cluster) for details on the upgrade process.

This deprecates GUCs `bdr.camo_partner_of` and `bdr.camo_origin_for` and replaces the functions `bdr.get_configured_camo_origin_for()` and `get_configured_camo_partner_of` with `bdr.get_configured_camo_partner`.

- -## Upgrades - -This release supports upgrading from the following version of BDR: - -- 3.7.13.1 - -Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/pgd/4/overview/bdr/release_notes/index.mdx b/product_docs/docs/pgd/4/overview/bdr/release_notes/index.mdx deleted file mode 100644 index bfac5a52d48..00000000000 --- a/product_docs/docs/pgd/4/overview/bdr/release_notes/index.mdx +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Release Notes -navigation: -- bdr4.1.1_rel_notes -- bdr4.1.0_rel_notes -- bdr4.0.2_rel_notes -- bdr4.0.1_rel_notes -- bdr4_rel_notes ---- - -BDR is a PostgreSQL extension providing multi-master replication and data -distribution with advanced conflict management, data-loss protection, and -throughput up to 5X faster than native logical replication, and enables -distributed PostgreSQL clusters with a very high availability. - -The release notes in this section provide information on what was new in each release. - -| Version | Release Date | -| ----------------------- | ------------ | -| [4.1.1](bdr4.1.1_rel_notes) | 2022 June 21 | -| [4.1.0](bdr4.1.0_rel_notes) | 2022 May 17 | -| [4.0.2](bdr4.0.2_rel_notes) | 2022 Feb 15 | -| [4.0.1](bdr4.0.1_rel_notes) | 2022 Jan 18 | -| [4.0.0](bdr4_rel_notes) | 2021 Dec 01 | - diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.1_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.1_rel_notes.mdx deleted file mode 100644 index 208b8eb8be2..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.1_rel_notes.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: "Version 2.0.1" ---- - -This is a patch release of HARP 2 that includes fixes for issues identified -in previous versions. - -| Type | Description | -| ---- |------------ | -| Enhancement | Support for selecting a leader per location rather than relying on DCS like etcd to have separate setup in different locations. This still requires a majority of nodes to survive loss of a location, so an odd number of both locations and database nodes is recommended.| -| Enhancement | The BDR DCS now uses a push notification from the consensus rather than through polling nodes. This change reduces the time for new leader selection and the load that HARP does on the BDR DCS since it doesn't need to poll in short intervals anymore. | -| Enhancement | TPA now restarts each HARP Proxy one by one and wait until they come back to reduce any downtime incurred by the application during software upgrades. | -| Enhancement | The support for embedding PGBouncer directly into HARP Proxy is now deprecated and will be removed in the next major release of HARP. It's now possible to configure TPA to put PGBouncer on the same node as HARP Proxy and point to that HARP Proxy.| -| Bug Fix | `harpctl promote ` would occasionally promote a different node than the one specified. This has been fixed. [Support Ticket #75406] | -| Bug Fix | Fencing would sometimes fail when using BDR as the Distributed Consensus Service. This has been corrected. | -| Bug Fix | `harpctl apply` no longer turns off routing for leader after the cluster has been established. [Support Ticket #80790] | -| Bug Fix | Harp-manager no longer exits if it cannot start a failed database. Harp-manager will keep retrying with randomly increasing periods. [Support Ticket #78516] | -| Bug Fix | The internal pgbouncer proxy implementation had a memory leak. 
This has been remediated. | diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.2_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.2_rel_notes.mdx deleted file mode 100644 index 407bed9c8ec..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.2_rel_notes.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Version 2.0.2" ---- - -This is a patch release of HARP 2 that includes fixes for issues identified -in previous versions. - -| Type | Description | -| ---- |------------ | -| Enhancement | BDR consensus now generally available.

HARP offers multiple options for Distributed Consensus Service (DCS) source: etcd and BDR. The BDR consensus option can be used in deployments where etcd isn't present. Use of the BDR consensus option is no longer considered beta and is now supported for use in production environments.

| -| Enhancement | Transport layer proxy now generally available.

HARP offers multiple proxy options for routing connections between the client application and database: application layer (L7) and transport layer (L4). The network layer 4 or transport layer proxy simply forwards network packets, and layer 7 terminates network traffic. The transport layer proxy, previously called simple proxy, is no longer considered beta and is now supported for use in production environments.

| diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.3_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.3_rel_notes.mdx deleted file mode 100644 index 75722ff6794..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.0.3_rel_notes.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: "Version 2.0.3" ---- - -This is a patch release of HARP 2 that includes fixes for issues identified -in previous versions. - -| Type | Description | -| ---- |------------ | -| Enhancement | HARP Proxy supports read-only user dedicated TLS Certificate (RT78516) | -| Bug Fix | HARP Proxy continues to try and connect to DCS instead of exiting after 50 seconds. (RT75406) | diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.0_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.0_rel_notes.mdx deleted file mode 100644 index bb7e5b16d47..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.0_rel_notes.mdx +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: "Version 2.1.0" ---- - -This is a minor release of HARP 2 that includes new features as well -as fixes for issues identified in previous versions. - -| Type | Description | -| ---- |------------ | -| Feature | The BDR DCS now uses a push notification from the consensus rather than through polling nodes.

This change reduces the time for new leader selection and the load that HARP does on the BDR DCS since it doesn't need to poll in short intervals anymore.

| -| Feature | TPA now restarts each HARP Proxy one by one and waits until they come back to reduce any downtime incurred by the application during software upgrades. | -| Feature | The support for embedding PGBouncer directly into HARP Proxy is now deprecated and will be removed in the next major release of HARP.

It's now possible to configure TPA to put PGBouncer on the same node as HARP Proxy and point to that HARP Proxy.

| -| Bug Fix | `harpctl promote ` would occasionally promote a different node than the one specified. This has been fixed. (RT75406) | -| Bug Fix | Fencing would sometimes fail when using BDR as the Distributed Consensus Service. This has been corrected. | -| Bug Fix | `harpctl apply` no longer turns off routing for leader after the cluster has been established. (RT80790) | -| Bug Fix | Harp-manager no longer exits if it cannot start a failed database. Harp-manager will keep retrying with randomly increasing periods. (RT78516) | -| Bug Fix | The internal pgbouncer proxy implementation had a memory leak. This has been remediated. | diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.1_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.1_rel_notes.mdx deleted file mode 100644 index d246842271f..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2.1.1_rel_notes.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Version 2.1.1" ---- - -This is a patch release of HARP 2 that includes fixes for issues identified -in previous versions. - -| Type | Description | -| ---- |------------ | -| Enhancement | Log a warning on loss of DCS connection | -| Enhancement | Log a warning when metadata refresh is taking too long - usually due to high latency network | -| Bug Fix | Restart harp_proxy.service on a failure | -| Bug Fix | Fix concurrency issue with connection management in harpctl | -| Bug Fix | Don't try to proxy connections to previous leader on unmanaged cluster | -| Bug Fix | Don't panic in haprctl when location is empty | diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2_rel_notes.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2_rel_notes.mdx deleted file mode 100644 index 8f63b7c921b..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/harp2_rel_notes.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: "Version 2.0.0" ---- - -This is new major release of HARP that constitutes of complete rewrite of the -product. - -| Type | Description | -| ---- |------------ | -| Engine | Complete rewrite of system in golang to optimize all operations | -| Engine | Cluster state can now be bootstrapped or revised via YAML | -| Feature | Configuration now in YAML, configuration file changed from `harp.ini` to `config.yml` | -| Feature | HARP Proxy deprecates need for HAProxy in supported architecture.

The use of HARP Router to translate DCS contents into appropriate online or offline states for HTTP-based URI requests meant a load balancer or HAProxy was necessary to determine the lead master. HARP Proxy now does this automatically without periodic iterative status checks.

| -| Feature | Utilizes DCS key subscription to respond directly to state changes.

With relevant cluster state changes, the cluster responds immediately, resulting in improved failover and switchover times.

| -| Feature | Compatibility with etcd SSL settings.

It is now possible to communicate with etcd through SSL encryption.

| -| Feature | Zero transaction lag on switchover.

Transactions are not routed to the new lead node until all replicated transactions are replayed, thereby reducing the potential for conflicts.

-| Feature | Experimental BDR Consensus layer.

Using BDR Consensus as the Distributed Consensus Service (DCS) reduces the amount of change needed for implementations.

-| Feature | Experimental built-in proxy.

Proxy implementation for increased session control.

| diff --git a/product_docs/docs/pgd/4/overview/harp/01_release_notes/index.mdx b/product_docs/docs/pgd/4/overview/harp/01_release_notes/index.mdx deleted file mode 100644 index 12718f1f8fa..00000000000 --- a/product_docs/docs/pgd/4/overview/harp/01_release_notes/index.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Release Notes -navigation: -- harp2.1.0_rel_notes -- harp2.0.3_rel_notes -- harp2.0.2_rel_notes -- harp2.0.1_rel_notes -- harp2_rel_notes ---- - -High Availability Routing for Postgres (HARP) is a cluster-management tool for -[Bi-directional Replication (BDR)](../../bdr/) clusters. The core design of -the tool is to route all application traffic in a single data center or -region to only one node at a time. This node, designated the lead master, acts -as the principle write target to reduce the potential for data conflicts. - -The release notes in this section provide information on what was new in each release. - -| Version | Release Date | -| ----------------------- | ------------ | -| [2.1.1](harp2.1.1_rel_notes) | 2022 June 21 | -| [2.1.0](harp2.1.0_rel_notes) | 2022 May 17 | -| [2.0.3](harp2.0.3_rel_notes) | 2022 Mar 31 | -| [2.0.2](harp2.0.2_rel_notes) | 2022 Feb 24 | -| [2.0.1](harp2.0.1_rel_notes) | 2021 Jan 31 | -| [2.0.0](harp2_rel_notes) | 2021 Dec 01 | diff --git a/product_docs/docs/pgd/4/overview/index.mdx b/product_docs/docs/pgd/4/overview/index.mdx index 5530e9f5009..35b22dafda7 100644 --- a/product_docs/docs/pgd/4/overview/index.mdx +++ b/product_docs/docs/pgd/4/overview/index.mdx @@ -1,5 +1,5 @@ --- -title: "Key components" +title: "Overview" --- EDB Postgres Distributed provides loosely-coupled multi-master logical replication diff --git a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx index 9534b889bcc..2a9815bdf37 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx @@ -69,6 +69,7 @@ Below you will find a description of the defined resources: - [PoolerSecrets](#PoolerSecrets) - [PoolerSpec](#PoolerSpec) - [PoolerStatus](#PoolerStatus) +- [PostInitApplicationSQLRefs](#PostInitApplicationSQLRefs) - [PostgresConfiguration](#PostgresConfiguration) - [RecoveryTarget](#RecoveryTarget) - [ReplicaClusterConfiguration](#ReplicaClusterConfiguration) @@ -247,22 +248,23 @@ BootstrapConfiguration contains information about how to create the PostgreSQL c BootstrapInitDB is the configuration of the bootstrap process when initdb is used Refer to the Bootstrap page of the documentation for more information. -| Name | Description | Type | -| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- | -| `database ` | Name of the database used by the application. Default: `app`. - *mandatory* | string | -| `owner ` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. - *mandatory* | string | -| `secret ` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | [\*LocalObjectReference](#LocalObjectReference) | -| `redwood ` | If we need to enable/disable Redwood compatibility. 
Requires EPAS and for EPAS defaults to true | \*bool | -| `options ` | The list of options that must be passed to initdb when creating the cluster. Deprecated: This could lead to inconsistent configurations, please use the explicit provided parameters instead. If defined, explicit values will be ignored. | \[]string | -| `dataChecksums ` | Whether the `-k` option should be passed to initdb, enabling checksums on data pages (default: `false`) | \*bool | -| `encoding ` | The value to be passed as option `--encoding` for initdb (default:`UTF8`) | string | -| `localeCollate ` | The value to be passed as option `--lc-collate` for initdb (default:`C`) | string | -| `localeCType ` | The value to be passed as option `--lc-ctype` for initdb (default:`C`) | string | -| `walSegmentSize ` | The value in megabytes (1 to 1024) to be passed to the `--wal-segsize` option for initdb (default: empty, resulting in PostgreSQL default: 16MB) | int | -| `postInitSQL ` | List of SQL queries to be executed as a superuser immediately after the cluster has been created - to be used with extreme care (by default empty) | \[]string | -| `postInitApplicationSQL` | List of SQL queries to be executed as a superuser in the application database right after is created - to be used with extreme care (by default empty) | \[]string | -| `postInitTemplateSQL ` | List of SQL queries to be executed as a superuser in the `template1` after the cluster has been created - to be used with extreme care (by default empty) | \[]string | -| `import ` | Bootstraps the new cluster by importing data from an existing PostgreSQL instance using logical backup (`pg_dump` and `pg_restore`) | [\*Import](#Import) | +| Name | Description | Type | +| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | +| `database ` | Name of the database used by the application. Default: `app`. - *mandatory* | string | +| `owner ` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. - *mandatory* | string | +| `secret ` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | [\*LocalObjectReference](#LocalObjectReference) | +| `redwood ` | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | \*bool | +| `options ` | The list of options that must be passed to initdb when creating the cluster. Deprecated: This could lead to inconsistent configurations, please use the explicit provided parameters instead. If defined, explicit values will be ignored. 
| \[]string | +| `dataChecksums ` | Whether the `-k` option should be passed to initdb, enabling checksums on data pages (default: `false`) | \*bool | +| `encoding ` | The value to be passed as option `--encoding` for initdb (default:`UTF8`) | string | +| `localeCollate ` | The value to be passed as option `--lc-collate` for initdb (default:`C`) | string | +| `localeCType ` | The value to be passed as option `--lc-ctype` for initdb (default:`C`) | string | +| `walSegmentSize ` | The value in megabytes (1 to 1024) to be passed to the `--wal-segsize` option for initdb (default: empty, resulting in PostgreSQL default: 16MB) | int | +| `postInitSQL ` | List of SQL queries to be executed as a superuser immediately after the cluster has been created - to be used with extreme care (by default empty) | \[]string | +| `postInitApplicationSQL ` | List of SQL queries to be executed as a superuser in the application database right after is created - to be used with extreme care (by default empty) | \[]string | +| `postInitTemplateSQL ` | List of SQL queries to be executed as a superuser in the `template1` after the cluster has been created - to be used with extreme care (by default empty) | \[]string | +| `import ` | Bootstraps the new cluster by importing data from an existing PostgreSQL instance using logical backup (`pg_dump` and `pg_restore`) | [\*Import](#Import) | +| `postInitApplicationSQLRefs` | PostInitApplicationSQLRefs points references to ConfigMaps or Secrets which contain SQL files, the general implementation order to these references is from all Secrets to all ConfigMaps, and inside Secrets or ConfigMaps, the implementation order is same as the order of each array (by default empty) | [\*PostInitApplicationSQLRefs](#PostInitApplicationSQLRefs) | @@ -364,6 +366,7 @@ ClusterSpec defines the desired state of Cluster | `certificates ` | The configuration for the CA and related certificates | [\*CertificatesConfiguration](#CertificatesConfiguration) | | `imagePullSecrets ` | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | [\[\]LocalObjectReference](#LocalObjectReference) | | `storage ` | Configuration of the storage of the instances | [StorageConfiguration](#StorageConfiguration) | +| `walStorage ` | Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) | [\*StorageConfiguration](#StorageConfiguration) | | `startDelay ` | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32 | | `stopDelay ` | The time in seconds that is allowed for a PostgreSQL instance to gracefully shutdown (default 30) | int32 | | `switchoverDelay ` | The time in seconds that is allowed for a primary PostgreSQL instance to gracefully shutdown during a switchover. Default value is 40000000, greater than one year in seconds, big enough to simulate an infinite delay | int32 | @@ -402,6 +405,7 @@ ClusterStatus defines the observed state of Cluster | `resizingPVC ` | List of all the PVCs that have ResizingPVC condition. 
| \[]string | | `initializingPVC ` | List of all the PVCs that are being initialized by this cluster | \[]string | | `healthyPVC ` | List of all the PVCs not dangling nor initializing | \[]string | +| `unusablePVC ` | List of all the PVCs that are unusable because another PVC is missing | \[]string | | `licenseStatus ` | Status of the license | licensekey.Status | | `writeService ` | Current write pod | string | | `readService ` | Current list of read pods | string | @@ -583,7 +587,7 @@ LDAPConfig contains the parameters needed for LDAP authentication | `server ` | LDAP hostname or IP address | string | | `port ` | LDAP server port | int | | `scheme ` | LDAP schema to be used, possible options are `ldap` and `ldaps` | LDAPScheme | -| `tls ` | Set to 1 to enable LDAP over TLS | bool | +| `tls ` | Set to `true` to enable LDAP over TLS. Defaults to `false` | bool | | `bindAsAuth ` | Bind as authentication configuration | [\*LDAPBindAsAuth](#LDAPBindAsAuth) | | `bindSearchAuth` | Bind+Search authentication configuration | [\*LDAPBindSearchAuth](#LDAPBindSearchAuth) | @@ -754,6 +758,17 @@ PoolerStatus defines the observed state of Pooler | `secrets ` | The resource version of the config object | [\*PoolerSecrets](#PoolerSecrets) | | `instances` | The number of pods trying to be scheduled | int32 | + + +## PostInitApplicationSQLRefs + +PostInitApplicationSQLRefs holds references to ConfigMaps or Secrets containing SQL files to be executed in the application database. Files referenced in Secrets are executed before files referenced in ConfigMaps; within each list, files are executed in the order in which they appear. + +| Name | Description | Type | | --------------- | ------------------------------------------------------ | ------------------------------------------------- | +| `secretRefs ` | SecretRefs holds a list of references to Secrets | [\[\]SecretKeySelector](#SecretKeySelector) | +| `configMapRefs` | ConfigMapRefs holds a list of references to ConfigMaps | [\[\]ConfigMapKeySelector](#ConfigMapKeySelector) | + ## PostgresConfiguration @@ -934,9 +949,9 @@ StorageConfiguration is the configuration of the storage of the PostgreSQL insta SyncReplicaElectionConstraints contains the constraints for sync replicas election. -For anti-affinity parameters two instances are considered in the same location if all the labels values match +For anti-affinity parameters, two instances are considered in the same location if all the label values match. -In future synchronous replica election restriction by name will be supported +In the future, synchronous replica election restriction by name will be supported. | Name | Description | Type | | ------------------------ | ---------------------------------------------------------------------------------------------------------------------------- | --------- | diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx index b3579c6853e..c7ca32653c3 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx @@ -283,6 +283,39 @@ spec: as queries are run as a superuser and can disrupt the entire cluster. An error in any of those queries interrupts the bootstrap phase, leaving the cluster incomplete. +Moreover, you can specify a list of Secrets and/or ConfigMaps containing SQL scripts that will be executed after the database is created and configured.
These SQL scripts will be executed using the **superuser** role (`postgres`), connected to the database specified in the `initdb` section: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example-initdb +spec: + instances: 3 + + bootstrap: + initdb: + database: app + owner: app + postInitApplicationSQLRefs: + secretRefs: + - name: my-secret + key: secret.sql + configMapRefs: + - name: my-configmap + key: configmap.sql + storage: + size: 1Gi +``` + +!!! Note + The SQL scripts referenced in `secretRefs` will be executed before the ones referenced in `configMapRefs`. For both sections, the SQL scripts will be executed respecting the order in the list. + Inside SQL scripts, each SQL statement is executed in a single exec on the server according to [PostgreSQL semantics](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT). Comments can be included, but internal `psql` commands cannot. + +!!! Warning + Please make sure that the entries referenced in `postInitApplicationSQLRefs` exist in the specified ConfigMaps or Secrets, otherwise the bootstrap will fail. + Errors in any of those SQL files will prevent the bootstrap phase from completing successfully. + ### Compatibility Features EDB Postgres Advanced adds many compatibility features to the @@ -785,7 +818,7 @@ file on the source PostgreSQL instance: host replication streaming_replica all md5 ``` -The following manifest creates a new PostgreSQL 14.4 cluster, +The following manifest creates a new PostgreSQL 14.5 cluster, called `target-db`, using the `pg_basebackup` bootstrap method to clone an external PostgreSQL cluster defined as `source-db` (in the `externalClusters` array). As you can see, the `source-db` @@ -800,7 +833,7 @@ metadata: name: target-db spec: instances: 3 - imageName: quay.io/enterprisedb/postgresql:14.4 + imageName: quay.io/enterprisedb/postgresql:14.5 bootstrap: pg_basebackup: @@ -820,7 +853,7 @@ spec: ``` All the requirements must be met for the clone operation to work, including -the same PostgreSQL version (in our case 14.4). +the same PostgreSQL version (in our case 14.5). #### TLS certificate authentication @@ -835,7 +868,7 @@ in the same Kubernetes cluster. This example can be easily adapted to cover an instance that resides outside the Kubernetes cluster. -The manifest defines a new PostgreSQL 14.4 cluster called `cluster-clone-tls`, +The manifest defines a new PostgreSQL 14.5 cluster called `cluster-clone-tls`, which is bootstrapped using the `pg_basebackup` method from the `cluster-example` external cluster.
The host is identified by the read/write service in the same cluster, while the `streaming_replica` user is authenticated @@ -850,7 +883,7 @@ metadata: name: cluster-clone-tls spec: instances: 3 - imageName: quay.io/enterprisedb/postgresql:14.4 + imageName: quay.io/enterprisedb/postgresql:14.5 bootstrap: pg_basebackup: diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx index 331901e376d..51defc41956 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx @@ -312,6 +312,11 @@ ones directly set by PgBouncer: - [`server_reset_query_always`](https://www.pgbouncer.org/config.html#server_reset_query_always) - [`server_round_robin`](https://www.pgbouncer.org/config.html#server_round_robin) - [`stats_period`](https://www.pgbouncer.org/config.html#stats_period) +- [`tcp_keepalive`](https://www.pgbouncer.org/config.html#tcp_keepalive) +- [`tcp_keepcnt`](https://www.pgbouncer.org/config.html#tcp_keepcnt) +- [`tcp_keepidle`](https://www.pgbouncer.org/config.html#tcp_keepidle) +- [`tcp_keepintvl`](https://www.pgbouncer.org/config.html#tcp_keepintvl) +- [`tcp_user_timeout`](https://www.pgbouncer.org/config.html#tcp_user_timeout) - [`verbose`](https://www.pgbouncer.org/config.html#verbose) Customizations of the PgBouncer configuration are written diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx index 1532a738d89..7bb64d36c46 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx @@ -14,12 +14,13 @@ with the following requirements: - `pg_controldata` - `pg_basebackup` - Barman Cloud executables that must be in the path: - - `barman-cloud-wal-archive` - - `barman-cloud-wal-restore` - `barman-cloud-backup` - - `barman-cloud-restore` + - `barman-cloud-backup-delete` - `barman-cloud-backup-list` - `barman-cloud-check-wal-archive` + - `barman-cloud-restore` + - `barman-cloud-wal-archive` + - `barman-cloud-wal-restore` - PGAudit extension installed (optional - only if PGAudit is required in the deployed clusters) - Sensible locale settings diff --git a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx index fcea6d97ed2..c58420a32a2 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx @@ -222,8 +222,6 @@ There are a few things you need to be aware of when using the `monolith` type: ["The `externalClusters` section"](bootstrap.md#the-externalclusters-section)) - Traffic must be allowed between the Kubernetes cluster and the `externalCluster` during the operation -- You need to specify `sslmode: disable` in the `connectionParameters` section - if you need to connect to a PostgreSQL instance without SSL - Connection to the source database must be granted with the specified user that needs to run `pg_dump` and retrieve roles information (*superuser* is OK) diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx index c2270062f3a..1c59d5874c2 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx @@ 
-18,6 +18,8 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime. ## Storage space usage The operator will instantiate one PVC for every PostgreSQL instance to store the `PGDATA` content. +A second PVC dedicated to the WAL storage will be provisioned in case `.spec.walStorage` is +specified during cluster initialization. Such storage space is set for reuse in two cases: @@ -32,11 +34,19 @@ following command: kubectl delete -n [namespace] pvc/[cluster-name]-[serial] pod/[cluster-name]-[serial] ``` +!!! Note + If you specified a dedicated WAL volume, it will also have to be deleted during this process. + +```sh +kubectl delete -n [namespace] pvc/[cluster-name]-[serial] pvc/[cluster-name]-[serial]-wal pod/[cluster-name]-[serial] +``` + For example: ```sh -$ kubectl delete -n default pvc/cluster-example-1 pod/cluster-example-1 +$ kubectl delete -n default pvc/cluster-example-1 pvc/cluster-example-1-wal pod/cluster-example-1 persistentvolumeclaim "cluster-example-1" deleted +persistentvolumeclaim "cluster-example-1-wal" deleted pod "cluster-example-1" deleted ``` diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx index b544ba18c84..3c38c72b627 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx @@ -15,12 +15,12 @@ originalFilePath: 'src/installation_upgrade.md' The operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via `kubectl`. -You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.16.1.yaml) +You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.17.0.yaml) as follows: ```sh kubectl apply -f \ - https://get.enterprisedb.io/cnp/postgresql-operator-1.16.1.yaml + https://get.enterprisedb.io/cnp/postgresql-operator-1.17.0.yaml ``` Once you have run the `kubectl` command, EDB Postgres for Kubernetes will be installed in your Kubernetes cluster. diff --git a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx index 04657fdce9d..36fdddfe266 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx @@ -39,23 +39,23 @@ INFO[0000] Prep: Network INFO[0000] Created network 'k3d-k3s-default' INFO[0000] Created image volume k3d-k3s-default-images INFO[0000] Starting new tools node... -INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.1' +INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6' INFO[0001] Creating node 'k3d-k3s-default-server-0' -INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.22.7-k3s1' -INFO[0003] Starting Node 'k3d-k3s-default-tools' +INFO[0002] Starting Node 'k3d-k3s-default-tools' +INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1' INFO[0007] Creating LoadBalancer 'k3d-k3s-default-serverlb' -INFO[0008] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.1' -INFO[0011] Using the k3d-tools node to gather environment information +INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' +INFO[0010] Using the k3d-tools node to gather environment information INFO[0011] HostIP: using network gateway 172.17.0.1 address INFO[0011] Starting cluster 'k3s-default' INFO[0011] Starting servers... 
INFO[0011] Starting Node 'k3d-k3s-default-server-0' INFO[0016] All agents already running. INFO[0016] Starting helpers... -INFO[0017] Starting Node 'k3d-k3s-default-serverlb' -INFO[0024] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... -INFO[0026] Cluster 'k3s-default' created successfully! -INFO[0026] You can now use it like this: +INFO[0016] Starting Node 'k3d-k3s-default-serverlb' +INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... +INFO[0025] Cluster 'k3s-default' created successfully! +INFO[0025] You can now use it like this: kubectl cluster-info ``` @@ -66,7 +66,7 @@ Verify that it works with the following command: kubectl get nodes __OUTPUT__ NAME STATUS ROLES AGE VERSION -k3d-k3s-default-server-0 Ready control-plane,master 35s v1.22.7+k3s1 +k3d-k3s-default-server-0 Ready control-plane,master 32s v1.24.4+k3s1 ``` You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again. @@ -76,7 +76,7 @@ You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the ["Installation and upgrades"](installation_upgrade.md) section: ```shell -kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.16.0.yaml +kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.17.0.yaml __OUTPUT__ namespace/postgresql-operator-system created customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created @@ -152,8 +152,8 @@ immediately after applying the cluster configuration you'll see the status as `I ```shell kubectl get pods __OUTPUT__ -NAME READY STATUS RESTARTS AGE -cluster-example-1-initdb--1-2cqfw 0/1 Pending 0 3s +NAME READY STATUS RESTARTS AGE +cluster-example-1-initdb-sdr25 0/1 PodInitializing 0 20s ``` ...give it a minute, and then check on it again: @@ -162,9 +162,9 @@ cluster-example-1-initdb--1-2cqfw 0/1 Pending 0 3s kubectl get pods __OUTPUT__ NAME READY STATUS RESTARTS AGE -cluster-example-1 1/1 Running 0 56s -cluster-example-2 1/1 Running 0 35s -cluster-example-3 1/1 Running 0 19s +cluster-example-1 1/1 Running 0 47s +cluster-example-2 1/1 Running 0 24s +cluster-example-3 1/1 Running 0 8s ``` Now we can check the status of the cluster: @@ -179,12 +179,12 @@ metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}} - creationTimestamp: "2022-07-28T21:57:47Z" + creationTimestamp: "2022-09-06T21:18:53Z" generation: 1 name: cluster-example namespace: default - resourceVersion: "1945" - uid: 8725e025-e304-4bff-abb8-aac617d32fd6 + resourceVersion: "2037" + uid: e6d88753-e5d5-414c-a7ec-35c6c27f5a9a spec: affinity: podAntiAffinityType: preferred @@ -197,7 +197,7 @@ spec: localeCollate: C owner: app enableSuperuserAccess: true - imageName: quay.io/enterprisedb/postgresql:14.4 + imageName: quay.io/enterprisedb/postgresql:14.5 instances: 3 logLevel: info maxSyncReplicas: 0 @@ -245,9 +245,9 @@ status: certificates: clientCASecret: cluster-example-ca expirations: - cluster-example-ca: 2022-10-26 21:52:48 +0000 UTC - cluster-example-replication: 
2022-10-26 21:52:48 +0000 UTC - cluster-example-server: 2022-10-26 21:52:48 +0000 UTC + cluster-example-ca: 2022-12-05 21:13:54 +0000 UTC + cluster-example-replication: 2022-12-05 21:13:54 +0000 UTC + cluster-example-server: 2022-12-05 21:13:54 +0000 UTC replicationTLSSecret: cluster-example-replication serverAltDNSNames: - cluster-example-rw @@ -261,18 +261,34 @@ status: - cluster-example-ro.default.svc serverCASecret: cluster-example-ca serverTLSSecret: cluster-example-server - cloudNativePostgresqlCommitHash: acccf408 - cloudNativePostgresqlOperatorHash: 68df07920f226d668c98d3dc6d58d433da7242f84aae762d71f47c32d89eaee3 + cloudNativePostgresqlCommitHash: ad578cb1 + cloudNativePostgresqlOperatorHash: 9f5db5e0e804fb51c6962140c0a447766bf2dd4d96dfa8d8529b8542754a23a4 + conditions: + - lastTransitionTime: "2022-09-06T21:20:12Z" + message: Cluster is Ready + reason: ClusterIsReady + status: "True" + type: Ready configMapResourceVersion: metrics: - postgresql-operator-default-monitoring: "863" + postgresql-operator-default-monitoring: "810" currentPrimary: cluster-example-1 - currentPrimaryTimestamp: "2022-07-28T21:58:20Z" + currentPrimaryTimestamp: "2022-09-06T21:19:31.040336Z" healthyPVC: - cluster-example-1 - cluster-example-2 - cluster-example-3 instances: 3 + instancesReportedState: + cluster-example-1: + isPrimary: true + timeLineID: 1 + cluster-example-2: + isPrimary: false + timeLineID: 1 + cluster-example-3: + isPrimary: false + timeLineID: 1 instancesStatus: healthy: - cluster-example-1 @@ -282,7 +298,7 @@ status: licenseStatus: isImplicit: true isTrial: true - licenseExpiration: "2022-08-27T21:57:47Z" + licenseExpiration: "2022-10-06T21:18:53Z" licenseStatus: Implicit trial license repositoryAccess: false valid: true @@ -293,14 +309,15 @@ status: readService: cluster-example-r readyInstances: 3 secretsResourceVersion: - applicationSecretVersion: "829" - clientCaSecretVersion: "825" - replicationSecretVersion: "827" - serverCaSecretVersion: "825" - serverSecretVersion: "826" - superuserSecretVersion: "828" + applicationSecretVersion: "778" + clientCaSecretVersion: "774" + replicationSecretVersion: "776" + serverCaSecretVersion: "774" + serverSecretVersion: "775" + superuserSecretVersion: "777" targetPrimary: cluster-example-1 - targetPrimaryTimestamp: "2022-07-28T21:57:48Z" + targetPrimaryTimestamp: "2022-09-06T21:18:54.556099Z" + timelineID: 1 topology: instances: cluster-example-1: {} @@ -308,7 +325,7 @@ status: cluster-example-3: {} successfullyExtracted: true writeService: cluster-example-rw -``` + ``` !!! 
Note By default, the operator will install the latest available minor version @@ -333,7 +350,7 @@ curl -sSfL \ sudo sh -s -- -b /usr/local/bin __OUTPUT__ EnterpriseDB/kubectl-cnp info checking GitHub for latest tag -EnterpriseDB/kubectl-cnp info found version: 1.16.0 for v1.16.0/linux/x86_64 +EnterpriseDB/kubectl-cnp info found version: 1.17.0 for v1.17.0/linux/x86_64 EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp ``` @@ -345,8 +362,8 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7125546140512505874 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.4 +System ID: 7140379538380623889 +PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 Primary instance: cluster-example-1 Status: Cluster in healthy state Instances: 3 @@ -356,9 +373,9 @@ Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-10-26 21:52:48 +0000 UTC 89.99 -cluster-example-replication 2022-10-26 21:52:48 +0000 UTC 89.99 -cluster-example-server 2022-10-26 21:52:48 +0000 UTC 89.99 +cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 Continuous Backup status Not configured @@ -372,9 +389,9 @@ cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00 Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.16.0 k3d-k3s-default-server-0 -cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.16.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.16.0 k3d-k3s-default-server-0 +cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.17.0 k3d-k3s-default-server-0 +cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.17.0 k3d-k3s-default-server-0 +cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.17.0 k3d-k3s-default-server-0 ``` !!! Note "There's more" @@ -397,11 +414,12 @@ Now if we check the status... 
kubectl cnp status cluster-example __OUTPUT__ Cluster Summary +Switchover in progress Name: cluster-example Namespace: default -System ID: 7125546140512505874 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.4 -Primary instance: cluster-example-2 +System ID: 7140379538380623889 +PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 +Primary instance: cluster-example-1 (switching to cluster-example-2) Status: Failing over Failing over from cluster-example-1 to cluster-example-2 Instances: 3 Ready instances: 2 @@ -410,9 +428,9 @@ Current Write LSN: 0/6000F58 (Timeline: 2 - WAL File: 000000020000000000000006) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-10-26 21:52:48 +0000 UTC 89.99 -cluster-example-replication 2022-10-26 21:52:48 +0000 UTC 89.99 -cluster-example-server 2022-10-26 21:52:48 +0000 UTC 89.99 +cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 Continuous Backup status Not configured @@ -421,11 +439,10 @@ Streaming Replication status Not available yet Instances status -Name Database Size Current LSN Replication role Status QoS Manager Version Node ----- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.16.0 k3d-k3s-default-server-0 -cluster-example-3 - - - pod not available BestEffort - k3d-k3s-default-server-0 -cluster-example-1 - - - pod not available BestEffort - k3d-k3s-default-server-0 +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.17.0 k3d-k3s-default-server-0 +cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.17.0 k3d-k3s-default-server-0 ``` ...the failover process has begun, with the second pod promoted to primary. 
Once the failed pod has restarted, it will become a replica of the new primary: @@ -436,20 +453,20 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7125546140512505874 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.4 +System ID: 7140379538380623889 +PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 Primary instance: cluster-example-2 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/7000110 (Timeline: 2 - WAL File: 000000020000000000000007) +Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-10-26 21:52:48 +0000 UTC 89.98 -cluster-example-replication 2022-10-26 21:52:48 +0000 UTC 89.98 -cluster-example-server 2022-10-26 21:52:48 +0000 UTC 89.98 +cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 Continuous Backup status Not configured @@ -457,15 +474,15 @@ Not configured Streaming Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-1 0/7000110 0/7000110 0/7000110 0/7000110 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/7000110 0/7000110 0/7000110 0/7000110 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/7000110 Primary OK BestEffort 1.16.0 k3d-k3s-default-server-0 -cluster-example-1 33 MB 0/7000110 Standby (async) OK BestEffort 1.16.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/7000110 Standby (async) OK BestEffort 1.16.0 k3d-k3s-default-server-0 +cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.17.0 k3d-k3s-default-server-0 +cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.17.0 k3d-k3s-default-server-0 +cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.17.0 k3d-k3s-default-server-0 ``` diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx index 9ddcc67fc58..f8ad95664d9 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx @@ -467,7 +467,7 @@ Here is a short description of all the available fields: - `primary`: whether to run the query only on the primary instance - `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax - deprecated) - `runonserver`: a semantic version range to limit the versions of PostgreSQL the query should run on - (e.g. `">=10.0.0"` or `">=12.0.0 <=14.4.0"`) + (e.g. `">=10.0.0"` or `">=12.0.0 <=14.5.0"`) - `target_databases`: a list of databases to run the `query` against, or a [shell-like pattern](#example-of-a-user-defined-metric-running-on-multiple-databases) to enable auto discovery. 
Overwrites the default database if provided. diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx index 71dffe165c9..5c671435016 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx @@ -94,6 +94,9 @@ workload requirements, based on what the underlying Kubernetes environment can offer. This implies choosing a particular storage class in a public cloud environment or fine-tuning the generated PVC through a PVC template in the CR's `storage` parameter. +For better performance and finer control, you can also choose to host your +cluster's Write-Ahead Log (WAL, also known as `pg_wal`) on a separate volume, +preferably on different storage. The [`cnp-bench`](https://github.com/EnterpriseDB/cnp-bench) open source project can be used to benchmark both the storage and the database prior to production. diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx index 7e1617e3dac..d02024ad87d 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx @@ -108,7 +108,7 @@ values from the ones in this document): ```console $ kubectl exec -ti postgis-example-1 -- psql app Defaulted container "postgres" out of: postgres, bootstrap-controller (init) -psql (14.4 (Debian 14.4-1.pgdg110+1)) +psql (14.5 (Debian 14.5-1.pgdg110+1)) Type "help" for help. app=# SELECT * FROM pg_available_extensions WHERE name ~ '^postgis' ORDER BY 1; diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx new file mode 100644 index 00000000000..f8f625f7d7d --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_15_4_rel_notes.mdx @@ -0,0 +1,11 @@ +--- +title: "EDB Postgres for Kubernetes 1.15.4 release notes" +navTitle: "Version 1.15.4" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| Upstream merge | Merged with community CloudNativePG 1.15.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15/). | + diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx new file mode 100644 index 00000000000..aab481edbcc --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_16_2_rel_notes.mdx @@ -0,0 +1,11 @@ +--- +title: "EDB Postgres for Kubernetes 1.16.2 release notes" +navTitle: "Version 1.16.2" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.16.2. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.16/). 
| + diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_rel_notes.mdx new file mode 100644 index 00000000000..4bf7bb2ea99 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_17_rel_notes.mdx @@ -0,0 +1,10 @@ +--- +title: "EDB Postgres for Kubernetes 1.17.0 release notes" +navTitle: "Version 1.17.0" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.17. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.17/release_notes/#v1.17). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index da7d9da294a..0e78cd1ad1e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -2,8 +2,11 @@ title: EDB Postgres for Kubernetes Release notes navTitle: "Release notes" navigation: +- 1_17_rel_notes +- 1_16_2_rel_notes - 1_16_1_rel_notes - 1_16_rel_notes +- 1_15_4_rel_notes - 1_15_3_rel_notes - 1_15_2_rel_notes - 1_15_1_rel_notes @@ -34,11 +37,14 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | Version | Release date | Upstream merges | | -------------------------- | ------------ | ------------------------------------------------------------------------------------------- | -| [1.16.1](1_16_1_rel_notes) | 2022 Aug 12 | Upstream [1.16.1](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.16/) | +| [1.17.0](1_17_rel_notes) | 2022 Sep 6 | Upstream [1.17.0](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.17/) | +| [1.16.2](1_16_2_rel_notes) | 2022 Sep 6 | Upstream [1.16.2](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.16/) | +| [1.16.1](1_16_1_rel_notes) | 2022 Aug 12 | Upstream [1.16.1](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.16/) | | [1.16.0](1_16_rel_notes) | 2022 Jul 07 | Upstream [1.16.0](https://cloudnative-pg.io/documentation/1.16/release_notes/v1.16) | +| [1.15.4](1_15_4_rel_notes) | 2022 Sep 6 | Upstream [1.15.4](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | | [1.15.3](1_15_3_rel_notes) | 2022 Aug 12 | Upstream [1.15.3](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | | [1.15.2](1_15_2_rel_notes) | 2022 Jul 07 | Upstream [1.15.2](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | -| [1.15.1](1_15_1_rel_notes) | 2022 May 27 | Upstream [1.15.1](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | +| [1.15.1](1_15_1_rel_notes) | 2022 May 27 | Upstream [1.15.1](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | | [1.15.0](1_15_rel_notes) | 2022 Apr 21 | Upstream [1.15.0](https://cloudnative-pg.io/documentation/1.15/release_notes/v1.15) | | [1.14.0](1_14_rel_notes) | 2022 Mar 25 | NA | | [1.13.0](1_13_rel_notes) | 2022 Feb 17 | NA | diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx index 3892375c90e..f8f1b4e8e25 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx +++ 
b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx @@ -68,4 +68,12 @@ Replica cluster via backup : [`cluster-example-replica-from-backup-simple.yaml`](../samples/cluster-example-replica-from-backup-simple.yaml): a replica cluster following a cluster with backup configured. +Bootstrap cluster with SQL files +: [`cluster-example-initdb-sql-refs.yaml`](../samples/cluster-example-initdb-sql-refs.yaml): + a cluster example that will execute a set of queries defined in a Secret and a ConfigMap right after the database is created. + +Sample cluster with customized `pg_hba` configuration +: [`cluster-example-pg-hba.yaml`](../samples/cluster-example-pg-hba.yaml): + a basic cluster that enables user `app` to authenticate using certificates. + For a list of available options, please refer to the ["API Reference" page](api_reference.md). \ No newline at end of file diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml index 361ef0db561..961cfe5bd99 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml @@ -35,7 +35,7 @@ metadata: name: cluster-example-full spec: description: "Example of cluster" - imageName: quay.io/enterprisedb/postgresql:14.4 + imageName: quay.io/enterprisedb/postgresql:14.5 # imagePullSecret is only required if the images are located in a private registry # imagePullSecrets: # - name: private_registry_access diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml new file mode 100644 index 00000000000..b853658e570 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-initdb-sql-refs.yaml @@ -0,0 +1,47 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: post-init-sql-configmap +data: + configmap.sql: | + create table configmaps (i integer); + insert into configmaps (select generate_series(1,10000)); +--- +apiVersion: v1 +kind: Secret +metadata: + name: post-init-sql-secret +stringData: + secret.sql: | + create table secrets (i integer); + insert into secrets (select generate_series(1,10000)); +--- +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example-initdb +spec: + instances: 3 + + bootstrap: + initdb: + database: appdb + owner: appuser + postInitSQL: + - create table numbers (i integer) + - insert into numbers (select generate_series(1,10000)) + postInitTemplateSQL: + - create extension intarray + postInitApplicationSQL: + - create table application_numbers (i integer) + - insert into application_numbers (select generate_series(1,10000)) + postInitApplicationSQLRefs: + configMapRefs: + - name: post-init-sql-configmap + key: configmap.sql + secretRefs: + - name: post-init-sql-secret + key: secret.sql + + storage: + size: 1Gi diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml new file mode 100644 index 00000000000..253f5bdfc41 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-pg-hba.yaml @@ -0,0 +1,12 @@ +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example +spec: + instances: 3 + postgresql: + pg_hba: + - hostssl app all all 
cert + + storage: + size: 1Gi diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml new file mode 100644 index 00000000000..eb08e8fa6e2 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-wal-storage.yaml @@ -0,0 +1,10 @@ +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: cluster-example +spec: + instances: 3 + storage: + size: 1Gi + walStorage: + size: 1Gi diff --git a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx index 8ddcb6c3970..84778978e6e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx @@ -61,7 +61,7 @@ metadata: name: cluster-example spec: instances: 3 - imageName: quay.io/enterprisedb/postgresql:14.4 + imageName: quay.io/enterprisedb/postgresql:14.5 affinity: enablePodAntiAffinity: true #default value diff --git a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx index fbf7c230cd0..676ab7fabf4 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx @@ -13,10 +13,10 @@ Authority (CA) to create and sign TLS client certificates. Through the `cnp` plu issue a new TLS client certificate which can be used to authenticate a user instead of using passwords. Please refer to the following steps to authenticate via TLS/SSL certificates, which assume you have -installed a cluster using the [cluster-example.yaml](../samples/cluster-example.yaml) deployment manifest. -According to the convention over configuration paradigm, that file automatically creates an `app` database -which is owned by a user called `app` (you can change this convention through the `initdb` configuration -in the `bootstrap` section). +installed a cluster using the [cluster-example-pg-hba.yaml](../samples/cluster-example-pg-hba.yaml) +manifest. According to the convention over configuration paradigm, that file automatically creates an `app` +database which is owned by a user called `app` (you can change this convention through the `initdb` +configuration in the `bootstrap` section). ## Issuing a new certificate @@ -166,7 +166,7 @@ Output : version -------------------------------------------------------------------------------------- ------------------ -PostgreSQL 14.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat +PostgreSQL 14.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5), 64-bit (1 row) ``` \ No newline at end of file diff --git a/product_docs/docs/postgres_for_kubernetes/1/storage.mdx b/product_docs/docs/postgres_for_kubernetes/1/storage.mdx index 9462c2fa9eb..2f8ecf906cb 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/storage.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/storage.mdx @@ -70,6 +70,10 @@ Briefly, `cnp-bench` is designed to operate at two levels: The operator creates a persistent volume claim (PVC) for each PostgreSQL instance, with the goal to store the `PGDATA`, and then mounts it into each Pod. +Additionally, it supports the creation of clusters with a separate PVC +on which to store PostgreSQL Write-Ahead Log (WAL), as explained in the +["Volume for WAL" section](#volume-for-wal) below. 
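As a quick, hedged illustration (assuming a cluster named `cluster-example` created with `walStorage`, and the `[cluster-name]-[serial]` / `[cluster-name]-[serial]-wal` PVC naming shown in the failure modes changes above), you can verify that both claims were provisioned for each instance:

```sh
# Illustrative check: list the PVCs backing the cluster "cluster-example".
# With walStorage enabled, expect both cluster-example-<serial> and
# cluster-example-<serial>-wal claims, one pair per instance.
kubectl get pvc -n default | grep '^cluster-example'
```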
+ ## Configuration via a storage class The easier way to configure the storage for a PostgreSQL class is to just @@ -131,6 +135,65 @@ spec: volumeMode: Filesystem ``` +## Volume for WAL + +By default, PostgreSQL stores all its data in the so-called `PGDATA` (a directory). +One of the core directories inside `PGDATA` is `pg_wal` (historically +known as `pg_xlog` in PostgreSQL), which contains the log of transactional +changes that occurred in the database, in the form of segment files. + +!!! Info + Normally, each segment is 16 MB in size, but the size can be configured + through the `walSegmentSize` option, applied at cluster initialization time, as + described in ["Bootstrap an empty cluster"](bootstrap.md#bootstrap-an-empty-cluster-initdb). + +While in most cases having `pg_wal` on the same volume where `PGDATA` +resides is fine, there are a few benefits to having WALs stored in a separate +volume: + +- **I/O performance**: by storing WAL files on different storage than `PGDATA`, + PostgreSQL can exploit parallel I/O for WAL operations (normally + sequential writes) and for data files (tables and indexes for example), thus + improving vertical scalability + +- **more reliability**: by reserving dedicated disk space for WAL files, you + can always be sure that exhaustion of space on the `PGDATA` volume will + never interfere with WAL writing, ensuring that your PostgreSQL primary + is correctly shut down. + +- **finer control**: you can define the amount of space dedicated to both + `PGDATA` and `pg_wal`, fine-tune [WAL + configuration](https://www.postgresql.org/docs/current/wal-configuration.html) + and checkpoints, and even use a different storage class for cost optimization + +- **better I/O monitoring**: you can constantly monitor the load and disk usage + on both `PGDATA` and `pg_wal`, and set proper alerts that notify you in case, + for example, `PGDATA` requires resizing + +!!! Seealso "Write-Ahead Log (WAL)" + Please refer to the ["Reliability and the Write-Ahead Log" page](https://www.postgresql.org/docs/current/wal.html) + from the official PostgreSQL documentation for more information. + +You can add a separate volume for WAL through the `.spec.walStorage` option, +which follows the same rules described for the `storage` field and provisions a +dedicated PVC. For example: + +```yaml +apiVersion: postgresql.k8s.enterprisedb.io/v1 +kind: Cluster +metadata: + name: separate-pgwal-volume +spec: + instances: 3 + storage: + size: 1Gi + walStorage: + size: 1Gi +``` + +!!! Important + `walStorage` initialization is only supported during cluster creation. + ## Volume expansion Kubernetes exposes an API allowing [expanding PVCs](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims) @@ -298,6 +361,17 @@ As an example, to recreate the storage for `cluster-example-3` you can: $ kubectl delete pvc/cluster-example-3 pod/cluster-example-3 ``` +!!! Important + If you have created a dedicated WAL volume, both PVCs will have to be deleted during this process. + Additionally, the same procedure applies if you want to regenerate the WAL volume PVC, which can be done + by disabling `resizeInUseVolumes` also for the `.spec.walStorage` section.
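A minimal sketch of what that might look like in the cluster manifest (illustrative only; it assumes `resizeInUseVolumes` is accepted under `walStorage` exactly as it is under `storage`, both being `StorageConfiguration` objects, and the sizes shown are placeholders):

```yaml
# Illustrative fragment: with in-use resizing disabled on both sections, the
# manual delete-and-recreate procedure described above also regenerates the WAL PVC.
spec:
  instances: 3
  storage:
    size: 2Gi
    resizeInUseVolumes: false
  walStorage:
    size: 2Gi
    resizeInUseVolumes: false
```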
+ +For example (when a PVC dedicated to WAL storage is present): + +``` +$ kubectl delete pvc/cluster-example-3 pvc/cluster-example-3-wal pod/cluster-example-3 +``` + Having done that, the operator will orchestrate the creation of another replica with a resized PVC: @@ -306,6 +380,6 @@ $ kubectl get pods NAME READY STATUS RESTARTS AGE cluster-example-1 1/1 Running 0 5m58s cluster-example-2 1/1 Running 0 5m43s -cluster-example-4-join-v2bfg 0/1 Completed 0 17s +cluster-example-4-join-v2 0/1 Completed 0 17s cluster-example-4 1/1 Running 0 10s ``` \ No newline at end of file diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx index 4ba6deec1b9..67692906c05 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx @@ -127,7 +127,7 @@ Cluster in healthy state Name: cluster-example Namespace: default System ID: 7044925089871458324 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.4-3 +PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5-3 Primary instance: cluster-example-1 Instances: 3 Ready instances: 3 @@ -203,7 +203,7 @@ kubectl describe cluster -n | grep "Image Name" Output: ```shell - Image Name: quay.io/enterprisedb/postgresql:14.4-3 + Image Name: quay.io/enterprisedb/postgresql:14.5-3 ``` !!! Note @@ -405,6 +405,17 @@ event to occur instead of relying on the overall cluster health state. Available - LastBackupSucceeded - ContinuousArchiving +- Ready + +`LastBackupSucceeded` reports the status of the latest backup: it is set to `True` if the +last backup was taken correctly, and to `False` otherwise. + +`ContinuousArchiving` reports the status of WAL archiving: it is set to `True` if the +last WAL archival process terminated correctly, and to `False` otherwise. + +`Ready` is `True` when the cluster has the number of instances specified by the user +and the primary instance is ready. This condition can be used in scripts to wait for +the cluster to be created. ### How to wait for a particular condition @@ -420,6 +431,12 @@ $ kubectl wait --for=condition=LastBackupSucceeded cluster/ -n -n ``` +- Ready (Cluster is ready or not): + +```bash +$ kubectl wait --for=condition=Ready cluster/ -n +``` + Below is a snippet of a `cluster.status` that contains a failing condition.
```bash @@ -431,14 +448,21 @@ $ kubectl get cluster/ -o yaml conditions: - message: 'unexpected failure invoking barman-cloud-wal-archive: exit status 2' - reason: Continuous Archiving is Failing + reason: ContinuousArchivingFailing status: "False" type: ContinuousArchiving - message: exit status 2 - reason: Backup is failed + reason: LastBackupFailed status: "False" type: LastBackupSucceeded + + - message: Cluster Is Not Ready + reason: ClusterIsNotReady + status: "False" + type: Ready + + ``` ## Some common issues @@ -498,5 +522,5 @@ PODNAME= VOLNAME=$(kubectl get pv -o json | \ jq -r '.items[]|select(.spec.claimRef.name=='\"$PODNAME\"')|.metadata.name') -kubectl delete pod/$PODNAME pvc/$PODNAME pv/$VOLNAME +kubectl delete pod/$PODNAME pvc/$PODNAME pvc/$PODNAME-wal pv/$VOLNAME ``` \ No newline at end of file diff --git a/static/_redirects b/static/_redirects index 496ae820b55..a42e5f0b6df 100644 --- a/static/_redirects +++ b/static/_redirects @@ -36,6 +36,8 @@ /docs/harp/1.0/* /docs/harp/latest/:splat 301 # PGD +/docs/pgd/latest/overview/bdr/* /docs/pgd/latest/bdr/:splat 302 +/docs/pgd/latest/overview/harp/* /docs/pgd/latest/harp/:splat 302 /docs/bdr/latest/* /docs/pgd/latest/overview/bdr/:splat 302 /docs/bdr/4/* /docs/pgd/4/overview/bdr/:splat 301 /docs/bdr/3.7/* /docs/pgd/3.7/bdr/:splat 301