From 2e5227934aeaf578ab31ff57b6ef51996aaf0593 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Wed, 26 Jul 2023 12:28:56 -0400 Subject: [PATCH 01/19] BigAnimal: Q2 extensions --- .../docs/biganimal/release/overview/extensions_tools.mdx | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx index b3c6680c2b2..26cbc8b932d 100644 --- a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx +++ b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx @@ -19,6 +19,15 @@ EDB develops and maintains several extensions and tools. These include: - [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster) — Provides faster access to clustered data by keeping track of the last inserted row for any value in a side table. - [Refdata](/pg_extensions/advanced_storage_pack/#refdata) — Can provide performance gains of 5-10% and increased scalability. + + - [EDB Postgres Tuner](/pg_extensions/pg_tuner/) — Provides safe recommendations that maximize the use of available resources. + +- [EDB Query Advisor](/pg_extensions/query_advisor/) — Provides index recommendations by keeping statistics on predicates found in WHERE statements, JOIN clauses, and workload queries. + +- [EDB Wait States](/pg_extensions/wait_states/) — Probes each of the running sessions at regular intervals. + +- [PG Failover Slots](/pg_extensions/pg_failover_slots/) — Is an extension released as open source software under the PostgreSQL License. If you have logical replication publications on Postgres databases that are also part of a streaming replication architecture, PG Failover Slots avoids the need for you to reseed your logical replication tables when a new standby gets promoted to primary. + - [Foreign Data Wrappers](foreign_data_wrappers) — Allow you to connect your Postgres database server to external data sources. 
- [Connection poolers](poolers) — Allow you to manage your connections to your Postgres database. From eed17a1e339b7a941315ad62387755ef2b48711a Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 27 Jul 2023 12:46:12 -0400 Subject: [PATCH 02/19] Copy edits to PR4510 --- product_docs/docs/pgd/4/limitations.mdx | 6 +++--- product_docs/docs/pgd/5/limitations.mdx | 2 +- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 402b91397b5..a4325a1bfe9 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -42,12 +42,12 @@ While it is still possible to host up to ten databases in a single instance, thi ## Other limitations -This is a (non-comprehensive) list of limitations that are expected and are by design. They are not expected to be resolved in the future. +This is a (non-comprehensive) list of limitations that are expected and are by design. They aren't expected to be resolved in the future. - Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and BDR in general isn't recommended, even if otherwise possible. - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. -- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. 
The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. +- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. -- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath. +- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath. diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index 0cadbc96671..286c2829b5b 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -131,4 +131,4 @@ Consider these limitations when planning your deployment: different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible, so don't use them together. -- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath. 
+- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath. From 9a74e5e8c5ea53769faef5df343f882cc7217304 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 27 Jul 2023 13:02:46 -0400 Subject: [PATCH 03/19] edits to PR4292 --- .../docs/biganimal/release/overview/extensions_tools.mdx | 2 +- .../docs/biganimal/release/using_cluster/extensions.mdx | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx index 45bccf25877..6391f165375 100644 --- a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx +++ b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx @@ -5,7 +5,7 @@ navTitle: Supported extensions and tools BigAnimal supports a number of Postgres extensions and tools, which you can install on or alongside your cluster. -- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you are using your own cloud account. +- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you're using your own cloud account. - See [Postgres extensions](/biganimal/release/using_cluster/extensions) for the list of extensions supported when using BigAnimal's account and for more information on installing and working with extensions. 
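
As an aside to the extension documentation edited above (not part of any patch in this series): a quick way to see which extensions a given cluster can actually install is the standard `pg_available_extensions` catalog view. This is a hedged sketch; `pg_trgm` is just an illustrative extension name, not one called out by these docs:

```sql
-- List every extension the server knows how to install,
-- with the installed version (NULL if not yet installed).
SELECT name, default_version, installed_version
FROM pg_available_extensions
ORDER BY name;

-- Install one of them; IF NOT EXISTS makes the command idempotent.
-- As the docs above note, some extensions require superuser privileges.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
```

Running the first query before and after `CREATE EXTENSION` is a simple way to confirm that the install took effect on the cluster you are connected to.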
diff --git a/product_docs/docs/biganimal/release/using_cluster/extensions.mdx b/product_docs/docs/biganimal/release/using_cluster/extensions.mdx index d657307a473..bc3dfd15b9c 100644 --- a/product_docs/docs/biganimal/release/using_cluster/extensions.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/extensions.mdx @@ -7,10 +7,10 @@ redirects: BigAnimal supports many Postgres extensions. See [Postgres extensions available by deployment](/pg_extensions/) for the complete list. ## Extensions available when using your own cloud account -Many Postgres extensions require superuser privileges to be installed. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you are using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)). +Installing many Postgres extensions requires superuser privileges. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you're using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)). ## Extensions available when using BigAnimal's cloud account -If you are using BigAnimal's cloud account, you can install and use the following extensions. +If you're using BigAnimal's cloud account, you can install and use the following extensions. 
PostgreSQL contrib extensions/modules: - auth_delay @@ -78,7 +78,7 @@ Use the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createe ### Example: Installing multiple extensions -One way of installing multiple extensions simultaneously is to: +This example shows one way of installing multiple extensions simultaneously. 1. Create a text file containing the `CREATE EXTENSION` command for each of the extensions you want to install. In this example, the file is named `create_extensions.sql`. From f65034c7b410715b6d974545b0e3037cf9c6ebd3 Mon Sep 17 00:00:00 2001 From: cnp-autobot Date: Fri, 28 Jul 2023 10:39:29 +0000 Subject: [PATCH 04/19] [create-pull-request] automated change --- .../1/api_reference.mdx | 2 +- .../postgres_for_kubernetes/1/bootstrap.mdx | 15 ++++---- .../docs/postgres_for_kubernetes/1/index.mdx | 22 ++++++++--- .../1/installation_upgrade.mdx | 4 +- .../postgres_for_kubernetes/1/openshift.mdx | 6 +-- .../1/troubleshooting.mdx | 38 ++++++++++++++++++- scripts/fileProcessor/package-lock.json | 5 +-- scripts/source/package-lock.json | 5 +-- 8 files changed, 72 insertions(+), 25 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx index 697665045f8..c23f4c19fe8 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx @@ -892,7 +892,7 @@ RecoveryTarget allows to configure the moment where the recovery process will st | `targetLSN ` | The target LSN (Log Sequence Number) | string | | `targetTime ` | The target time as a timestamp in the RFC3339 standard | string | | `targetImmediate` | End recovery as soon as a consistent state is reached | \*bool | -| `exclusive ` | Set the target to be exclusive (defaults to true) | \*bool | +| `exclusive ` | Set the target to be exclusive. 
If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true | \*bool | diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx index 5eb2bba42c0..45aecfb6ae4 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx @@ -37,7 +37,7 @@ For more detailed information about this feature, please refer to the !!! Info EDB Postgres for Kubernetes is gradually introducing support for - [Kubernetes' native `VolumeSnapshot` API](https://github.com/cloudnative-pg/cloudnative-pg/issues/2081) + [Kubernetes' native `VolumeSnapshot` API](https://github.com/EnterpriseDB/cloud-native-postgres/issues/2081) for both incremental and differential copy in backup and recovery operations - if supported by the underlying storage classes. Please see ["Recovery from Volume Snapshot objects"](#recovery-from-volumesnapshot-objects) @@ -552,7 +552,7 @@ bootstrap: The `kubectl cnp snapshot` command is able to take consistent snapshots of a replica through a technique known as *cold backup*, by fencing the standby before taking a physical copy of the volumes. For details, please refer to -["Snapshotting a Postgres cluster"](kubectl-plugin/#snapshotting-a-postgres-cluster). +["Snapshotting a Postgres cluster"](#snapshotting-a-postgres-cluster). #### Additional considerations @@ -702,10 +702,11 @@ You can choose only a single one among the targets above in each Additionally, you can specify `targetTLI` force recovery to a specific timeline. -By default, the previous parameters are considered to be exclusive, stopping -just before the recovery target. 
You can request inclusive behavior, -stopping right after the recovery target, setting the `exclusive` parameter to -`false` like in the following example relying on a blob container in Azure: +By default, the previous parameters are considered to be inclusive, stopping +just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE) +You can request exclusive behavior, +stopping right before the recovery target, by setting the `exclusive` parameter to +`true` like in the following example relying on a blob container in Azure: ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -724,7 +725,7 @@ spec: recoveryTarget: backupID: 20220616T142236 targetName: "maintenance-activity" - exclusive: false + exclusive: true externalClusters: - name: clusterBackup diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx index 9d677c49bb4..6f00e5e9e10 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx @@ -67,7 +67,6 @@ full lifecycle of a highly available Postgres database clusters with a primary/standby architecture, using native streaming replication. !!! Note - The operator has been renamed from Cloud Native PostgreSQL. Existing users of Cloud Native PostgreSQL will not experience any change, as the underlying components and resources have not changed. @@ -90,7 +89,7 @@ primary/standby architecture, using native streaming replication. ## Features unique to EDB Postgres of Kubernetes -- Long Term Support for 1.18.x +- [Long Term Support](#long-term-support) for 1.18.x - Red Hat certified operator for OpenShift - Support on IBM Power - EDB Postgres for Kubernetes Plugin @@ -102,11 +101,26 @@ You can [evaluate EDB Postgres for Kubernetes for free](evaluation.md). 
You need a valid license key to use EDB Postgres for Kubernetes in production. !!! Note - Based on the [Operator Capability Levels model](operator_capability_levels.md), users can expect a **"Level V - Auto Pilot"** set of capabilities from the EDB Postgres for Kubernetes Operator. +### Long Term Support + +EDB is committed to declaring one version of EDB Postgres for Kubernetes per +year as a Long Term Support version. This version will be supported and receive +maintenance releases for an additional 12 months beyond the last release of +CloudNativePG by the community for the same version. For example, the last +version of 1.18 of CloudNativePG was released on June 12, 2023. This was +declared a LTS version of EDB Postgres for Kubernetes and it will be supported +for additional 12 months until June 12, 2024. Customers can expect that they +will have at least 6 months to move between LTS versions. So they should +expect the next LTS to be available by January 12, 2024 to allow at least 6 +months to migrate. While we encourage customers to regularly upgrade to the +latest version of the operator to take advantage of new features, having LTS +versions allows customers desiring additional stability to stay on the same +version for 12-18 months before upgrading. + ## Licensing EDB Postgres for Kubernetes works with both PostgreSQL and EDB Postgres @@ -130,7 +144,6 @@ The EDB Postgres for Kubernetes Operator container images support the multi-arch format for the following platforms: `linux/amd64`, `linux/arm64`, `linux/ppc64le`, `linux/s390x`. !!! Warning - EDB Postgres for Kubernetes requires that all nodes in a Kubernetes cluster have the same CPU architecture, thus a hybrid CPU architecture Kubernetes cluster is not supported. 
Additionally, EDB supports `linux/ppc64le` and `linux/s390x` architectures @@ -156,7 +169,6 @@ In case you are not familiar with some basic terminology on Kubernetes and Postg please consult the ["Before you start" section](before_you_start.md). !!! Note - Although the guide primarily addresses Kubernetes, all concepts can be extended to OpenShift as well. diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx index 585ff61179e..421ac797426 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx @@ -19,12 +19,12 @@ The operator can be installed using the provided [Helm chart](https://github.com The operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via `kubectl`. -You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml) +You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml) for this minor release as follows: ```sh kubectl apply -f \ - https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml + https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml ``` You can verify that with: diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx index 3104b5635c9..4dec98b6713 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx @@ -820,9 +820,9 @@ Please pay close attention to the following table and notes: | EDB Postgres for Kubernetes Version | OpenShift Versions | Supported SCC | | ----------------------------------- | ------------------ | ------------------------- | -| 1.20.x | 4.10-4.12 | restricted, restricted-v2 | -| 1.19.x | 4.10-4.12 | restricted, 
restricted-v2 | -| 1.18.x | 4.10-4.12 | restricted, restricted-v2 | +| 1.20.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.19.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.18.x | 4.10-4.13 | restricted, restricted-v2 | !!! Important Since version 4.10 only provides `restricted`, EDB Postgres for Kubernetes diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx index 296d88870cc..c65e8fd390b 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx @@ -656,4 +656,40 @@ spec: In the [networking page](networking.md) you can find a network policy file that you can customize to create a `NetworkPolicy` explicitly allowing the -operator to connect cross-namespace to cluster pods. \ No newline at end of file +operator to connect cross-namespace to cluster pods. + +### Error while bootstrapping the data directory + +If your Cluster's initialization job crashes with a "Bus error (core dumped) +child process exited with exit code 135", you likely need to fix the Cluster +hugepages settings. + +The reason is the incomplete support of hugepages in the cgroup v1 that should +be fixed in v2. For more information, check the PostgreSQL [BUG #17757: Not +honoring huge_pages setting during initdb causes DB crash in +Kubernetes](https://www.postgresql.org/message-id/17757-dbdfc1f1c954a6db%40postgresql.org). + +To check whether hugepages are enabled, run `grep HugePages /proc/meminfo` on +the Kubernetes node and check if hugepages are present, their size, and how many +are free. + +If the hugepages are present, you need to configure how much hugepages memory +every PostgreSQL pod should have available. 
+ +For example: + +```yaml + postgresql: + parameters: + shared_buffers: "128MB" + + resources: + requests: + memory: "512Mi" + limits: + hugepages-2Mi: "512Mi" +``` + +Please remember that you must have enough hugepages memory available to schedule +every Pod in the Cluster (in the example above, at least 512MiB per Pod must be +free). \ No newline at end of file diff --git a/scripts/fileProcessor/package-lock.json b/scripts/fileProcessor/package-lock.json index 29e5152b1ba..9ee4123f70e 100644 --- a/scripts/fileProcessor/package-lock.json +++ b/scripts/fileProcessor/package-lock.json @@ -2401,7 +2401,7 @@ "parse-entities": "^2.0.0", "repeat-string": "^1.5.4", "state-toggle": "^1.0.0", - "trim": ">=0.0.3", + "trim": "0.0.1", "trim-trailing-lines": "^1.0.0", "unherit": "^1.0.4", "unist-util-remove-position": "^2.0.0", @@ -2528,8 +2528,7 @@ } }, "trim": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", "integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==" }, "trim-trailing-lines": { diff --git a/scripts/source/package-lock.json b/scripts/source/package-lock.json index 43917129155..7f4b8ebd71e 100644 --- a/scripts/source/package-lock.json +++ b/scripts/source/package-lock.json @@ -3200,7 +3200,7 @@ "parse-entities": "^2.0.0", "repeat-string": "^1.5.4", "state-toggle": "^1.0.0", - "trim": ">=0.0.3", + "trim": "0.0.1", "trim-trailing-lines": "^1.0.0", "unherit": "^1.0.4", "unist-util-remove-position": "^2.0.0", @@ -3348,8 +3348,7 @@ "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==" }, "trim": { - "version": "1.0.1", - "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", "integrity": 
"sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==" }, "trim-trailing-lines": { From 5b321d3d26622a8dab00cd40ffde80ade3446acd Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 17 Jul 2023 15:01:18 -0400 Subject: [PATCH 05/19] first pass at parallel apply content --- product_docs/docs/pgd/5/parallelapply.mdx | 28 ++++++++++++----------- 1 file changed, 15 insertions(+), 13 deletions(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index aca40703a20..8cf5184f100 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -7,25 +7,31 @@ navTitle: Parallel Apply Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This generally increases the throughput of a subscription and improves replication performance. -The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction does not violate the commit order as executed on the origin node. If there is a violation, an error is generated and the transaction can be rolled back. +The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error is generated and the transaction can be rolled back. !!! Warning Possible deadlocks -It may be possible that this out-of-order application of changes could trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. 
If you experience a large number of such deadlocks, this is an indication that Parallel Apply is not a good fit for your workload and you should consider disabling it. +It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. If you experience a large number of such deadlocks, this is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. !!! ### Configuring Parallel Apply -There are two variables which control Parallel Apply in PGD 5, [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). The default settings for these are 8 and 2. +Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). The default setting for `bdr.max_writers_per_subscription` is 8. The default for `bdr.writers_per_subscription` is 2. ```plain bdr.max_writers_per_subscription = 8 bdr.writers_per_subscription = 2 ``` -This gives each subscription two writers, but in some circumstances, the system may allocate up to 8 writers for a subscription. +This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription. -[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) can only be changed with a server restart. +[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) can be changed only with a server restart. 
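
As an illustrative aside (not part of the patch above): because `bdr.max_writers_per_subscription` takes effect only after a restart, one plausible way to stage the change is `ALTER SYSTEM`, assuming your deployment lets you manage `postgresql.auto.conf` this way; the value `4` is arbitrary:

```sql
-- Stage the new value; it is written to postgresql.auto.conf
-- but does not take effect until the server restarts.
ALTER SYSTEM SET bdr.max_writers_per_subscription = 4;

-- Check whether the change is waiting on a restart
-- (pg_reload_conf() is not enough for restart-only parameters).
SELECT name, setting, pending_restart
FROM pg_settings
WHERE name = 'bdr.max_writers_per_subscription';
```

Managed environments may expose their own configuration mechanism instead; treat this as a sketch of the underlying Postgres behavior, not a PGD-specific procedure.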
-[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) can be changed, for a specific subscription, without a restart by halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable), setting the new value and then resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). First establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`. +[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) can be changed, for a specific subscription without a restart by: + +1. Halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable). +1. Setting the new value. +1. Resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). + +First, establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`. ```sql @@ -40,16 +46,12 @@ SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1'); ### When to use Parallel Apply -Parallel Apply is always on by default and for most operations, we recommend that it is left on. +Parallel Apply is always on by default. For most operations, we recommend that you leave it on. ### When not to use Parallel Apply -As of, and up to at least PGD 5.1, Parallel Apply should not be used with Group Commit, CAMO and eager replication. You should disable Parallel Apply in these scenarios. If you are experiencing a large number of deadlocks, you may also want to disable it. 
+As of, and up to at least PGD 5.1, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, you might want to disable it. ### Disabling Parallel Apply -To disable Parallel Apply set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to 1. - - - - +To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. From 274e1600d2927d4f1fa8b0ad85a55bf1ab9c70e1 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 18 Jul 2023 09:19:09 -0400 Subject: [PATCH 06/19] Update product_docs/docs/pgd/5/parallelapply.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- product_docs/docs/pgd/5/parallelapply.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index 8cf5184f100..acfba0d1765 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -14,7 +14,7 @@ It might be possible for this out-of-order application of changes to trigger a d !!! ### Configuring Parallel Apply -Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). The default setting for `bdr.max_writers_per_subscription` is 8. The default for `bdr.writers_per_subscription` is 2. +Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). 
```plain bdr.max_writers_per_subscription = 8 From 552153444b4c75e67eb87128240512bf480db20d Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 18 Jul 2023 09:40:06 -0400 Subject: [PATCH 07/19] Final edits of Parallel Apply release --- product_docs/docs/pgd/5/parallelapply.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index acfba0d1765..645344b60ee 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -3,17 +3,17 @@ title: Parallel Apply navTitle: Parallel Apply --- -### What is Parallel Apply? +## What is Parallel Apply? -Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This generally increases the throughput of a subscription and improves replication performance. +Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. -The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error is generated and the transaction can be rolled back. +The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. !!! Warning Possible deadlocks -It might be possible for this out-of-order application of changes to trigger a deadlock. 
PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. If you experience a large number of such deadlocks, this is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. +It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. !!! -### Configuring Parallel Apply +## Configuring Parallel Apply Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain @@ -23,15 +23,15 @@ bdr.writers_per_subscription = 2 This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription. -[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) can be changed only with a server restart. +You can change [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) only with a server restart. -[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) can be changed, for a specific subscription without a restart by: +You can change [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by: 1. Halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable). 1. 
Setting the new value. 1. Resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). -First, establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`. +First, though, establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`. ```sql @@ -50,7 +50,7 @@ Parallel Apply is always on by default. For most operations, we recommend that y ### When not to use Parallel Apply -As of, and up to at least PGD 5.1, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, you might want to disable it. +For PGD 5.1 and earlier, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it. ### Disabling Parallel Apply From 0518356576b2ceb51790ef6098757b57880143f9 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 18 Jul 2023 09:47:41 -0400 Subject: [PATCH 08/19] Fixed capitalization of feature name --- product_docs/docs/pgd/5/parallelapply.mdx | 28 +++++++++++------------ 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index 645344b60ee..1af0a9c6260 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -1,20 +1,20 @@ --- -title: Parallel Apply -navTitle: Parallel Apply +title: Parallel apply +navTitle: Parallel apply --- -## What is Parallel Apply? +## What is parallel apply? 
-Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. +Parallel apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. -The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. +The transactional changes from the subscription are written by the multiple parallel apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. !!! Warning Possible deadlocks -It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. +It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between parallel apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that parallel apply isn't a good fit for your workload. In this case, consider disabling it. !!! 
-## Configuring Parallel Apply -Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). +## Configuring parallel apply +Two variables control parallel apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain bdr.max_writers_per_subscription = 8 @@ -44,14 +44,14 @@ WHERE sub_name = 'bdr_bdrdb_bdrgroup_node2_node1'; SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1'); ``` -### When to use Parallel Apply +### When to use parallel apply -Parallel Apply is always on by default. For most operations, we recommend that you leave it on. +Parallel apply is always on by default. For most operations, we recommend that you leave it on. -### When not to use Parallel Apply +### When not to use parallel apply -For PGD 5.1 and earlier, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it. +For PGD 5.1 and earlier, don't use parallel apply with Group Commit, CAMO, and Eager Replication. Disable parallel apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it. -### Disabling Parallel Apply +### Disabling parallel apply -To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. +To disable parallel apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. 
From 25a594f8c44aed9a8778748346bc082d7b4c173e Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Fri, 28 Jul 2023 11:40:38 +0100 Subject: [PATCH 09/19] Refix casing on Parallel Apply TBD, adjust feature list --- product_docs/docs/pgd/5/parallelapply.mdx | 28 +++++++++++------------ 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index 1af0a9c6260..2eaf9a09e15 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -1,20 +1,20 @@ --- -title: Parallel apply -navTitle: Parallel apply +title: Parallel Apply +navTitle: Parallel Apply --- -## What is parallel apply? +## What is Parallel Apply? -Parallel apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. +Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. -The transactional changes from the subscription are written by the multiple parallel apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. +The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. !!! Warning Possible deadlocks -It might be possible for this out-of-order application of changes to trigger a deadlock. 
PGD currently resolves such deadlocks between parallel apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that parallel apply isn't a good fit for your workload. In this case, consider disabling it. +It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. !!! -## Configuring parallel apply -Two variables control parallel apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). +## Configuring Parallel Apply +Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain bdr.max_writers_per_subscription = 8 @@ -44,14 +44,14 @@ WHERE sub_name = 'bdr_bdrdb_bdrgroup_node2_node1'; SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1'); ``` -### When to use parallel apply +### When to use Parallel Apply -Parallel apply is always on by default. For most operations, we recommend that you leave it on. +Parallel Apply is always on by default. For most operations, we recommend that you leave it on. -### When not to use parallel apply +### When not to use Parallel Apply -For PGD 5.1 and earlier, don't use parallel apply with Group Commit, CAMO, and Eager Replication. Disable parallel apply in these scenarios. 
Also, if you're experiencing a large number of deadlocks, consider disabling it. +For PGD 5.1 and earlier, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it. -### Disabling parallel apply +### Disabling Parallel Apply -To disable parallel apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. +To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`. From 1328a8cb379e992e9cab3e7dfe09509a3e55bb11 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 11:52:11 -0400 Subject: [PATCH 10/19] 1.19.4 and 1.20.2 updates --- .../1/rel_notes/1_19_4_rel_notes.mdx | 10 ++++++++++ .../1/rel_notes/1_20_2_rel_notes.mdx | 10 ++++++++++ .../docs/postgres_for_kubernetes/1/rel_notes/index.mdx | 4 ++++ 3 files changed, 24 insertions(+) create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx new file mode 100644 index 00000000000..5d72401998f --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx @@ -0,0 +1,10 @@ +--- +title: "EDB Postgres for Kubernetes 1.19.4 release notes" +navTitle: "Version 1.19.4" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Enhancements | See the community [Release
Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx new file mode 100644 index 00000000000..517a6565a96 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx @@ -0,0 +1,10 @@ +--- +title: "EDB Postgres for Kubernetes 1.20.2 release notes" +navTitle: "Version 1.20.2" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Enhancements | See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index a5c64f0459d..e844ecaa47a 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -4,8 +4,10 @@ navTitle: "Release notes" redirects: - ../release_notes navigation: +- 1_20_2_rel_notes - 1_20_1_rel_notes - 1_20_0_rel_notes +- 1_19_4_rel_notes - 1_19_3_rel_notes - 1_19_2_rel_notes - 1_19_1_rel_notes @@ -60,8 +62,10 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | Version | Release date | Upstream merges | | -------------------------- | ------------ | ------------------------------------------------------------------------------------------- | +| [1.20.2](1_20_2_rel_notes) | 2023 Jul 27 | None | | [1.20.1](1_20_1_rel_notes) | 2023 Jun 13 | Upstream [1.20.1](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | | [1.20.0](1_20_0_rel_notes) | 2023 Apr 27 | Upstream 
[1.20.0](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | +| [1.19.4](1_19_4_rel_notes) | 2023 Jul 27 | None | | [1.19.3](1_19_3_rel_notes) | 2023 Jun 13 | Upstream [1.19.3](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.2](1_19_2_rel_notes) | 2023 Apr 27 | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.1](1_19_1_rel_notes) | 2023 Mar 20 | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | From d87d6b962c562ee45da8d15266682057d8bff7d1 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Fri, 28 Jul 2023 15:56:47 +0000 Subject: [PATCH 11/19] Revert some unnecessary changes --- product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx | 4 ++-- scripts/fileProcessor/package-lock.json | 5 +++-- scripts/source/package-lock.json | 5 +++-- 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx index 45aecfb6ae4..427408ac4fa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx @@ -37,7 +37,7 @@ For more detailed information about this feature, please refer to the !!! Info EDB Postgres for Kubernetes is gradually introducing support for - [Kubernetes' native `VolumeSnapshot` API](https://github.com/EnterpriseDB/cloud-native-postgres/issues/2081) + [Kubernetes' native `VolumeSnapshot` API](https://github.com/cloudnative-pg/cloudnative-pg/issues/2081) for both incremental and differential copy in backup and recovery operations - if supported by the underlying storage classes. 
Please see ["Recovery from Volume Snapshot objects"](#recovery-from-volumesnapshot-objects) @@ -552,7 +552,7 @@ bootstrap: The `kubectl cnp snapshot` command is able to take consistent snapshots of a replica through a technique known as *cold backup*, by fencing the standby before taking a physical copy of the volumes. For details, please refer to -["Snapshotting a Postgres cluster"](#snapshotting-a-postgres-cluster). +["Snapshotting a Postgres cluster"](kubectl-plugin/#snapshotting-a-postgres-cluster). #### Additional considerations diff --git a/scripts/fileProcessor/package-lock.json b/scripts/fileProcessor/package-lock.json index 9ee4123f70e..29e5152b1ba 100644 --- a/scripts/fileProcessor/package-lock.json +++ b/scripts/fileProcessor/package-lock.json @@ -2401,7 +2401,7 @@ "parse-entities": "^2.0.0", "repeat-string": "^1.5.4", "state-toggle": "^1.0.0", - "trim": "0.0.1", + "trim": ">=0.0.3", "trim-trailing-lines": "^1.0.0", "unherit": "^1.0.4", "unist-util-remove-position": "^2.0.0", @@ -2528,7 +2528,8 @@ } }, "trim": { - "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", "integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==" }, "trim-trailing-lines": { diff --git a/scripts/source/package-lock.json b/scripts/source/package-lock.json index 7f4b8ebd71e..43917129155 100644 --- a/scripts/source/package-lock.json +++ b/scripts/source/package-lock.json @@ -3200,7 +3200,7 @@ "parse-entities": "^2.0.0", "repeat-string": "^1.5.4", "state-toggle": "^1.0.0", - "trim": "0.0.1", + "trim": ">=0.0.3", "trim-trailing-lines": "^1.0.0", "unherit": "^1.0.4", "unist-util-remove-position": "^2.0.0", @@ -3348,7 +3348,8 @@ "integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw==" }, "trim": { - "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "version": "1.0.1", + 
"resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", "integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==" }, "trim-trailing-lines": { From b9c2dcf11b436d93537bafa89c44212653def703 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Fri, 28 Jul 2023 16:23:44 +0000 Subject: [PATCH 12/19] Update demo (fix broken link, bump version & output) --- .../1/interactive_demo.mdx | 251 ++++++++++++------ 1 file changed, 177 insertions(+), 74 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx index 6cd3c99fefd..2f826edd838 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx @@ -39,22 +39,22 @@ INFO[0000] Prep: Network INFO[0000] Created network 'k3d-k3s-default' INFO[0000] Created image volume k3d-k3s-default-images INFO[0000] Starting new tools node... -INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6' +INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.5.1' INFO[0001] Creating node 'k3d-k3s-default-server-0' -INFO[0002] Starting Node 'k3d-k3s-default-tools' -INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1' -INFO[0007] Creating LoadBalancer 'k3d-k3s-default-serverlb' -INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' +INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.26.4-k3s1' +INFO[0003] Starting Node 'k3d-k3s-default-tools' +INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb' +INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.5.1' INFO[0010] Using the k3d-tools node to gather environment information -INFO[0011] HostIP: using network gateway 172.17.0.1 address -INFO[0011] Starting cluster 'k3s-default' -INFO[0011] Starting servers... -INFO[0011] Starting Node 'k3d-k3s-default-server-0' -INFO[0016] All agents already running. -INFO[0016] Starting helpers... 
-INFO[0016] Starting Node 'k3d-k3s-default-serverlb' -INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... -INFO[0025] Cluster 'k3s-default' created successfully! +INFO[0010] HostIP: using network gateway 172.17.0.1 address +INFO[0010] Starting cluster 'k3s-default' +INFO[0010] Starting servers... +INFO[0010] Starting Node 'k3d-k3s-default-server-0' +INFO[0015] All agents already running. +INFO[0015] Starting helpers... +INFO[0015] Starting Node 'k3d-k3s-default-serverlb' +INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... +INFO[0024] Cluster 'k3s-default' created successfully! INFO[0025] You can now use it like this: kubectl cluster-info ``` @@ -66,7 +66,7 @@ Verify that it works with the following command: kubectl get nodes __OUTPUT__ NAME STATUS ROLES AGE VERSION -k3d-k3s-default-server-0 Ready control-plane,master 32s v1.24.4+k3s1 +k3d-k3s-default-server-0 Ready control-plane,master 17s v1.26.4+k3s1 ``` You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again. @@ -76,7 +76,7 @@ You will see one node called `k3d-k3s-default-server-0`. 
If the status isn't yet Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the ["Installation and upgrades"](installation_upgrade.md) section: ```shell -kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.18.0.yaml +kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml __OUTPUT__ namespace/postgresql-operator-system created customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created @@ -179,12 +179,12 @@ metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}} - creationTimestamp: "2022-09-06T21:18:53Z" + creationTimestamp: "2023-07-28T16:14:08Z" generation: 1 name: cluster-example namespace: default - resourceVersion: "2037" - uid: e6d88753-e5d5-414c-a7ec-35c6c27f5a9a + resourceVersion: "1115" + uid: 70e054ae-b487-41e3-941b-b7c969f950be spec: affinity: podAntiAffinityType: preferred @@ -197,7 +197,8 @@ spec: localeCollate: C owner: app enableSuperuserAccess: true - imageName: quay.io/enterprisedb/postgresql:15.0 + failoverDelay: 0 + imageName: quay.io/enterprisedb/postgresql:15.3 instances: 3 logLevel: info maxSyncReplicas: 0 @@ -232,7 +233,7 @@ spec: wal_sender_timeout: 5s syncReplicaElectionConstraint: enabled: false - primaryUpdateMethod: switchover + primaryUpdateMethod: restart primaryUpdateStrategy: unsupervised resources: {} startDelay: 30 @@ -245,9 +246,9 @@ status: certificates: clientCASecret: cluster-example-ca expirations: - cluster-example-ca: 2022-12-05 21:13:54 +0000 UTC - cluster-example-replication: 2022-12-05 21:13:54 +0000 UTC - cluster-example-server: 2022-12-05 21:13:54 +0000 UTC + cluster-example-ca: 2023-10-26 16:09:09 +0000 UTC + 
cluster-example-replication: 2023-10-26 16:09:09 +0000 UTC + cluster-example-server: 2023-10-26 16:09:09 +0000 UTC replicationTLSSecret: cluster-example-replication serverAltDNSNames: - cluster-example-rw @@ -261,23 +262,47 @@ status: - cluster-example-ro.default.svc serverCASecret: cluster-example-ca serverTLSSecret: cluster-example-server - cloudNativePostgresqlCommitHash: ad578cb1 - cloudNativePostgresqlOperatorHash: 9f5db5e0e804fb51c6962140c0a447766bf2dd4d96dfa8d8529b8542754a23a4 + cloudNativePostgresqlCommitHash: c42ca1c2 + cloudNativePostgresqlOperatorHash: 1d51c15adffb02c81dbc4e8752ddb68f709699c78d9c3384ed9292188685971b conditions: - - lastTransitionTime: "2022-09-06T21:20:12Z" + - lastTransitionTime: "2023-07-28T16:15:29Z" message: Cluster is Ready reason: ClusterIsReady status: "True" type: Ready + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: velero addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/velero + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: external-backup-adapter addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapter + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: external-backup-adapter-cluster addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapterCluster + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: kasten addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/kasten configMapResourceVersion: metrics: - postgresql-operator-default-monitoring: "810" + postgresql-operator-default-monitoring: "788" currentPrimary: cluster-example-1 - currentPrimaryTimestamp: "2022-09-06T21:19:31.040336Z" + currentPrimaryTimestamp: "2023-07-28T16:14:48.609086Z" healthyPVC: - cluster-example-1 - cluster-example-2 - cluster-example-3 + instanceNames: + - cluster-example-1 + - cluster-example-2 + - cluster-example-3 instances: 3 instancesReportedState: 
cluster-example-1: @@ -298,10 +323,11 @@ status: licenseStatus: isImplicit: true isTrial: true - licenseExpiration: "2022-10-06T21:18:53Z" + licenseExpiration: "2023-08-27T16:14:08Z" licenseStatus: Implicit trial license repositoryAccess: false valid: true + managedRolesStatus: {} phase: Cluster in healthy state poolerIntegrations: pgBouncerIntegration: {} @@ -309,23 +335,24 @@ status: readService: cluster-example-r readyInstances: 3 secretsResourceVersion: - applicationSecretVersion: "778" - clientCaSecretVersion: "774" - replicationSecretVersion: "776" - serverCaSecretVersion: "774" - serverSecretVersion: "775" - superuserSecretVersion: "777" + applicationSecretVersion: "760" + clientCaSecretVersion: "756" + replicationSecretVersion: "758" + serverCaSecretVersion: "756" + serverSecretVersion: "757" + superuserSecretVersion: "759" targetPrimary: cluster-example-1 - targetPrimaryTimestamp: "2022-09-06T21:18:54.556099Z" + targetPrimaryTimestamp: "2023-07-28T16:14:09.501164Z" timelineID: 1 topology: instances: cluster-example-1: {} cluster-example-2: {} cluster-example-3: {} + nodesUsed: 1 successfullyExtracted: true writeService: cluster-example-rw - ``` +``` !!! 
Note By default, the operator will install the latest available minor version @@ -342,7 +369,7 @@ status: ## Install the kubectl-cnp plugin -EDB Postgres for Kubernetes provides [a plugin for kubectl](cnp-plugin) to manage a cluster in Kubernetes, along with a script to install it: +EDB Postgres for Kubernetes provides [a plugin for kubectl](kubectl-plugin) to manage a cluster in Kubernetes, along with a script to install it: ```shell curl -sSfL \ @@ -350,7 +377,7 @@ curl -sSfL \ sudo sh -s -- -b /usr/local/bin __OUTPUT__ EnterpriseDB/kubectl-cnp info checking GitHub for latest tag -EnterpriseDB/kubectl-cnp info found version: 1.18.0 for v1.18.0/linux/x86_64 +EnterpriseDB/kubectl-cnp info found version: 1.20.2 for v1.20.2/linux/x86_64 EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp ``` @@ -362,20 +389,20 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-1 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005) +Current Write LSN: 0/6054B60 (Timeline: 1 - WAL File: 000000010000000000000006) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -383,15 +410,18 @@ Not configured Streaming Replication status Name Sent LSN Write LSN Flush LSN Replay LSN 
Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-2 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-2 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-3 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/6054B60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` !!! Note "There's more" @@ -414,23 +444,22 @@ Now if we check the status... 
kubectl cnp status cluster-example __OUTPUT__ Cluster Summary -Switchover in progress Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 -Primary instance: cluster-example-1 (switching to cluster-example-2) +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 Status: Failing over Failing over from cluster-example-1 to cluster-example-2 Instances: 3 Ready instances: 2 -Current Write LSN: 0/6000F58 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -438,11 +467,14 @@ Not configured Streaming Replication status Not available yet +Unmanaged Replication Slot Status +No unmanaged replication slots found + Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` ...the failover process has begun, with the second 
pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary: @@ -453,20 +485,53 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Failing over Failing over from cluster-example-1 to cluster-example-2 +Instances: 3 +Ready instances: 2 +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Not available yet + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-2 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7004D60 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration 
---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -474,15 +539,53 @@ Not configured Streaming Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-1 0/7004D60 0/7004D60 0/7004D60 0/7004D60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status -Name Database Size Current LSN Replication role Status QoS Manager Version Node ----- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK 
BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Cluster in healthy state +Instances: 3 +Ready instances: 3 +Current Write LSN: 0/7004D98 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority +---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- +cluster-example-1 0/7004D98 0/7004D98 0/7004D98 0/7004D98 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D98 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D98 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` From 183d7461a04c200fe5ba9552802388d013db9b66 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 12:37:41 -0400 Subject: [PATCH 13/19] fixed mix up with the upstream merges --- .../postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx | 2 +- 
.../postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx | 2 +- .../docs/postgres_for_kubernetes/1/rel_notes/index.mdx | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx index 5d72401998f..a996c7aef8b 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx @@ -7,4 +7,4 @@ This release of EDB Postgres for Kubernetes includes the following: | Type | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Enhancements | See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | +| Enhancements | Merged with community CloudNativePG 1.19.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx index 517a6565a96..005e4416a6f 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx @@ -7,4 +7,4 @@ This release of EDB Postgres for Kubernetes includes the following: | Type | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Enhancements | See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). | +| Enhancements | Merged with community CloudNativePG 1.20.2. 
See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index e844ecaa47a..ae4d8af0340 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -62,10 +62,10 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | Version | Release date | Upstream merges | | -------------------------- | ------------ | ------------------------------------------------------------------------------------------- | -| [1.20.2](1_20_2_rel_notes) | 2023 Jul 27 | None | +| [1.20.2](1_20_2_rel_notes) | 2023 Jul 27 | Upstream [1.20.2](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | | [1.20.1](1_20_1_rel_notes) | 2023 Jun 13 | Upstream [1.20.1](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | | [1.20.0](1_20_0_rel_notes) | 2023 Apr 27 | Upstream [1.20.0](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | -| [1.19.4](1_19_4_rel_notes) | 2023 Jul 27 | None | +| [1.19.4](1_19_4_rel_notes) | 2023 Jul 27 | Upstream [1.19.4](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.3](1_19_3_rel_notes) | 2023 Jun 13 | Upstream [1.19.3](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.2](1_19_2_rel_notes) | 2023 Apr 27 | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.1](1_19_1_rel_notes) | 2023 Mar 20 | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | From 662d4ec5559829dda7eeecf2e68043f9d35cc84c Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 12:50:01 -0400 Subject: [PATCH 14/19] 1.18.6 rel notes --- .../1/rel_notes/1_18_6_rel_notes.mdx | 26 +++++++++++++++++++ 
.../1/rel_notes/index.mdx | 1 + 2 files changed, 27 insertions(+) create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx new file mode 100644 index 00000000000..ec30ff7f084 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx @@ -0,0 +1,26 @@ +--- +title: "EDB Postgres for Kubernetes 1.18.6 release notes" +navTitle: "Version 1.18.6" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Enhancement | Added a metric and status field to monitor node usage by a CloudNativePG cluster. +| +| Enhancement | Added troubleshooting instructions relating to hugepages to the documentation. +| +| Enhancement | Extended the FAQs page in the documentation. | +| Enhancement | Added a check at the start of the restore process to ensure it can proceed; give improved error diagnostics if it cannot. + | +| Bug fix | Ensured the logic of setting the recovery target matches that of Postgres. + | +| Bug fix | Prevented taking over service accounts not owned by the cluster by setting ownerMetadata only during service account creation. + | +| Bug fix | Prevented a possible crash of the instance manager during the configuration reload. +| +| Bug fix | Prevented the LastFailedArchiveTime alert from triggering if a new backup has been successful after the failed ones. 
+| +| Security fix | Updated all project dependencies to the latest versions| + diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index ae4d8af0340..bcd85c8c7e8 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -70,6 +70,7 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | [1.19.2](1_19_2_rel_notes) | 2023 Apr 27 | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.1](1_19_1_rel_notes) | 2023 Mar 20 | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.0](1_19_0_rel_notes) | 2023 Feb 14 | Upstream [1.19.0](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | +| [1.18.6](1_18_6_rel_notes) | 2023 Jul 27 | None | | [1.18.5](1_18_5_rel_notes) | 2023 Jun 13 | Upstream [1.18.5](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | | [1.18.4](1_18_4_rel_notes) | 2023 Apr 27 | Upstream [1.18.4](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | | [1.18.3](1_18_3_rel_notes) | 2023 Mar 20 | Upstream [1.18.3](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | From 856405c8d3bffbb42e1831a00ff64f7e0b170d94 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 12:56:02 -0400 Subject: [PATCH 15/19] fixed rel note type for 1.19.4 and 1.20.2 --- .../postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx | 2 +- .../postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx index a996c7aef8b..5cde55b56b8 100644 --- 
a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx @@ -7,4 +7,4 @@ This release of EDB Postgres for Kubernetes includes the following: | Type | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Enhancements | Merged with community CloudNativePG 1.19.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | +| Upstream merge | Merged with community CloudNativePG 1.19.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx index 005e4416a6f..305d0cd0f6f 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx @@ -7,4 +7,4 @@ This release of EDB Postgres for Kubernetes includes the following: | Type | Description | | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Enhancements | Merged with community CloudNativePG 1.20.2. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). | +| Upstream merge | Merged with community CloudNativePG 1.20.2. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). 
| From 6edf74f69e7cf2e363f8e37bcf7b267c28437d61 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 12:57:08 -0400 Subject: [PATCH 16/19] fixed 1.18.6 nav order --- product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx | 1 + 1 file changed, 1 insertion(+) diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index bcd85c8c7e8..335c160cdfa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -12,6 +12,7 @@ navigation: - 1_19_2_rel_notes - 1_19_1_rel_notes - 1_19_0_rel_notes +- 1_18_6_rel_notes - 1_18_5_rel_notes - 1_18_4_rel_notes - 1_18_3_rel_notes From e9a4910a4038cedaaa65de658ae62ed04ca14809 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 12:59:11 -0400 Subject: [PATCH 17/19] fixed 1.18.6 rel note table formatting --- .../1/rel_notes/1_18_6_rel_notes.mdx | 29 +++++++------------ 1 file changed, 11 insertions(+), 18 deletions(-) diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx index ec30ff7f084..1c42537aedc 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx @@ -5,22 +5,15 @@ navTitle: "Version 1.18.6" This release of EDB Postgres for Kubernetes includes the following: -| Type | Description | -| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| Enhancement | Added a metric and status field to monitor node usage by a CloudNativePG cluster. -| -| Enhancement | Added troubleshooting instructions relating to hugepages to the documentation. 
-| -| Enhancement | Extended the FAQs page in the documentation. | -| Enhancement | Added a check at the start of the restore process to ensure it can proceed; give improved error diagnostics if it cannot. - | -| Bug fix | Ensured the logic of setting the recovery target matches that of Postgres. - | -| Bug fix | Prevented taking over service accounts not owned by the cluster by setting ownerMetadata only during service account creation. - | -| Bug fix | Prevented a possible crash of the instance manager during the configuration reload. -| -| Bug fix | Prevented the LastFailedArchiveTime alert from triggering if a new backup has been successful after the failed ones. -| -| Security fix | Updated all project dependencies to the latest versions| +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------ | +| Enhancement | Added a metric and status field to monitor node usage by a CloudNativePG cluster. | | +| Enhancement | Added troubleshooting instructions relating to hugepages to the documentation. | +| Enhancement | Extended the FAQs page in the documentation. | +| Enhancement | Added a check at the start of the restore process to ensure it can proceed; give improved error diagnostics if it cannot. | +| Bug fix | Ensured the logic of setting the recovery target matches that of Postgres. | +| Bug fix | Prevented taking over service accounts not owned by the cluster by setting ownerMetadata only during service account creation. | +| Bug fix | Prevented a possible crash of the instance manager during the configuration reload. | +| Bug fix | Prevented the LastFailedArchiveTime alert from triggering if a new backup has been successful after the failed ones. 
|
+| Security fix | Updated all project dependencies to the latest versions |

From fb7f5a23402c0cbff0bb90f7489fd5860431d8f0 Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Fri, 28 Jul 2023 13:09:00 -0400
Subject: [PATCH 18/19] PG4K: fixed product name in 1.18.6 rel notes

---
 .../postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
index 1c42537aedc..7d5eb2ad2e3 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
@@ -7,7 +7,7 @@ This release of EDB Postgres for Kubernetes includes the following:
 | Type | Description |
 | ------------ | ------------------------------------------------------------------------------------------------------------------------------ |
-| Enhancement | Added a metric and status field to monitor node usage by a CloudNativePG cluster. | |
+| Enhancement | Added a metric and status field to monitor node usage by an EDB Postgres for Kubernetes cluster. |
 | Enhancement | Added troubleshooting instructions relating to hugepages to the documentation. |
 | Enhancement | Extended the FAQs page in the documentation. |
 | Enhancement | Added a check at the start of the restore process to ensure it can proceed; give improved error diagnostics if it cannot. 
| From 624a5b477c9b9941fb3c3ebce92b9eb274722f97 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Fri, 28 Jul 2023 13:47:15 -0400 Subject: [PATCH 19/19] BigAnimal link --- .../preparing_cloud_account/preparing_gcp/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx index 2104282a32b..2e734390a1d 100644 --- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx @@ -61,7 +61,7 @@ EDB provides a shell script, called [biganimal-csp-preflight](https://github.com | `-h` or `--help`| Displays the command help. | | `-i` or `--instance-type` | Google Cloud instance type for the BigAnimal cluster. The help command provides a list of possible VM instance types. Choose the instance type that best suits your application and workload. Choose an instance type in the memory optimized M1, M2, or M3 series for large data sets. Choose from the compute-optimized C2 series for compute-bound applications. Choose from the general purpose E2, N2, and N2D series if you don't require memory or compute optimization.| | `-x` or `--cluster-architecture` | Defines the Cluster architecture and can be `single`, `ha`, or `eha`. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.| - | `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](../creating_a_cluster/01_cluster_networking) for more information. | + | `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. 
See [Cluster networking architecture](/biganimal/latest/getting_started/creating_a_cluster/01_cluster_networking/) for more information. | | `-r` or `--activate-region` | Specifies region activation if no clusters currently exist in the region. | | `--onboard` | Checks if the user and subscription are correctly configured.