diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx index 2104282a32b..2e734390a1d 100644 --- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx @@ -61,7 +61,7 @@ EDB provides a shell script, called [biganimal-csp-preflight](https://github.com | `-h` or `--help`| Displays the command help. | | `-i` or `--instance-type` | Google Cloud instance type for the BigAnimal cluster. The help command provides a list of possible VM instance types. Choose the instance type that best suits your application and workload. Choose an instance type in the memory optimized M1, M2, or M3 series for large data sets. Choose from the compute-optimized C2 series for compute-bound applications. Choose from the general purpose E2, N2, and N2D series if you don't require memory or compute optimization.| | `-x` or `--cluster-architecture` | Defines the Cluster architecture and can be `single`, `ha`, or `eha`. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.| - | `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](../creating_a_cluster/01_cluster_networking) for more information. | + | `-e` or `--networking` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](/biganimal/latest/getting_started/creating_a_cluster/01_cluster_networking/) for more information. | | `-r` or `--activate-region` | Specifies region activation if no clusters currently exist in the region. | | `--onboard` | Checks if the user and subscription are correctly configured. diff --git a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx index 45bccf25877..c4d368a395d 100644 --- a/product_docs/docs/biganimal/release/overview/extensions_tools.mdx +++ b/product_docs/docs/biganimal/release/overview/extensions_tools.mdx @@ -5,7 +5,7 @@ navTitle: Supported extensions and tools BigAnimal supports a number of Postgres extensions and tools, which you can install on or alongside your cluster. -- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you are using your own cloud account. +- See [Postgres extensions available by deployment](/pg_extensions/) for the complete list of extensions BigAnimal supports if you're using your own cloud account. - See [Postgres extensions](/biganimal/release/using_cluster/extensions) for the list of extensions supported when using BigAnimal's account and for more information on installing and working with extensions. @@ -19,6 +19,15 @@ EDB develops and maintains several extensions and tools. These include: - [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster) — Provides faster access to clustered data by keeping track of the last inserted row for any value in a side table. - [Refdata](/pg_extensions/advanced_storage_pack/#refdata) — Can provide performance gains of 5-10% and increased scalability. + + - [EDB Postgres Tuner](/pg_extensions/pg_tuner/) — Provides safe recommendations that maximize the use of available resources. 
+ +- [EDB Query Advisor](/pg_extensions/query_advisor/) — Provides index recommendations by keeping statistics on predicates found in WHERE statements, JOIN clauses, and workload queries. + +- [EDB Wait States](/pg_extensions/wait_states/) — Probes each of the running sessions at regular intervals. + +- [PG Failover Slots](/pg_extensions/pg_failover_slots/) — Is an extension released as open source software under the PostgreSQL License. If you have logical replication publications on Postgres databases that are also part of a streaming replication architecture, PG Failover Slots avoids the need for you to reseed your logical replication tables when a new standby gets promoted to primary. + - [Foreign Data Wrappers](foreign_data_wrappers) — Allow you to connect your Postgres database server to external data sources. - [Connection poolers](poolers) — Allow you to manage your connections to your Postgres database. diff --git a/product_docs/docs/biganimal/release/using_cluster/extensions.mdx b/product_docs/docs/biganimal/release/using_cluster/extensions.mdx index d657307a473..bc3dfd15b9c 100644 --- a/product_docs/docs/biganimal/release/using_cluster/extensions.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/extensions.mdx @@ -7,10 +7,10 @@ redirects: BigAnimal supports many Postgres extensions. See [Postgres extensions available by deployment](/pg_extensions/) for the complete list. ## Extensions available when using your own cloud account -Many Postgres extensions require superuser privileges to be installed. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you are using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)). +Installing many Postgres extensions requires superuser privileges. The table in [Postgres extensions available by deployment](/pg_extensions/) indicates whether an extension requires superuser privileges. If you're using your own cloud account, you can grant superuser privileges to edb_admin so that you can install these extensions on your cluster (see [superuser](/biganimal/latest/using_cluster/01_postgres_access/#superuser)). ## Extensions available when using BigAnimal's cloud account -If you are using BigAnimal's cloud account, you can install and use the following extensions. +If you're using BigAnimal's cloud account, you can install and use the following extensions. PostgreSQL contrib extensions/modules: - auth_delay @@ -78,7 +78,7 @@ Use the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createe ### Example: Installing multiple extensions -One way of installing multiple extensions simultaneously is to: +This example shows one way of installing multiple extensions simultaneously. 1. Create a text file containing the `CREATE EXTENSION` command for each of the extensions you want to install. In this example, the file is named `create_extensions.sql`. diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 402b91397b5..a4325a1bfe9 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -42,12 +42,12 @@ While it is still possible to host up to ten databases in a single instance, thi ## Other limitations -This is a (non-comprehensive) list of limitations that are expected and are by design. 
They are not expected to be resolved in the future. +This is a (non-comprehensive) list of limitations that are expected and are by design. They aren't expected to be resolved in the future. - Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and BDR in general isn't recommended, even if otherwise possible. - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. -- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. +- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. -- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath. +- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath. diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index 0cadbc96671..286c2829b5b 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -131,4 +131,4 @@ Consider these limitations when planning your deployment: different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible, so don't use them together. -- Postgres' two-phase commit (2PC) transactions (i.e. [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) cannot be used in combination with CAMO, Group Commit, nor Eager Replication, because those features use two-phase commit underneath. +- Postgres two-phase commit (2PC) transactions (that is, [`PREPARE TRANSACTION`](https://www.postgresql.org/docs/current/sql-prepare-transaction.html)) can't be used with CAMO, Group Commit, or Eager Replication because those features use two-phase commit underneath. diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index aca40703a20..2eaf9a09e15 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -3,29 +3,35 @@ title: Parallel Apply navTitle: Parallel Apply --- -### What is Parallel Apply? +## What is Parallel Apply? 
-Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This generally increases the throughput of a subscription and improves replication performance. +Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. -The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction does not violate the commit order as executed on the origin node. If there is a violation, an error is generated and the transaction can be rolled back. +The transactional changes from the subscription are written by the multiple Parallel Apply writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error occurs, and the transaction can be rolled back. !!! Warning Possible deadlocks -It may be possible that this out-of-order application of changes could trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. If you experience a large number of such deadlocks, this is an indication that Parallel Apply is not a good fit for your workload and you should consider disabling it. +It might be possible for this out-of-order application of changes to trigger a deadlock. PGD currently resolves such deadlocks between Parallel Apply writers by aborting and retrying the transactions involved. Experiencing a large number of such deadlocks is an indication that Parallel Apply isn't a good fit for your workload. In this case, consider disabling it. !!! -### Configuring Parallel Apply -There are two variables which control Parallel Apply in PGD 5, [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription). The default settings for these are 8 and 2. +## Configuring Parallel Apply +Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain bdr.max_writers_per_subscription = 8 bdr.writers_per_subscription = 2 ``` -This gives each subscription two writers, but in some circumstances, the system may allocate up to 8 writers for a subscription. +This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription. -[`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) can only be changed with a server restart. +You can change [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) only with a server restart. 
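As a minimal sketch (assuming superuser access; the value `4` is purely illustrative), the restart-only parameter can be adjusted with `ALTER SYSTEM` and then picked up at the next server restart:

```sql
-- Sketch only: raise or lower the per-subscription writer ceiling.
-- The value 4 is an arbitrary example.
ALTER SYSTEM SET bdr.max_writers_per_subscription = 4;
-- The setting is written to postgresql.auto.conf and takes effect
-- only after the PostgreSQL server is restarted.
```

Editing `postgresql.conf` directly works just as well; the key point is that a configuration reload alone isn't enough for this parameter.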
-[`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) can be changed, for a specific subscription, without a restart by halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable), setting the new value and then resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable). First establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`.
+You can change [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by:
+
+1. Halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable).
+1. Setting the new value.
+1. Resuming the subscription using [`bdr.alter_subscription_enable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_enable).
+
+First, though, establish the name of the subscription using `select * from bdr.subscription`. For this example, the subscription name is `bdr_bdrdb_bdrgroup_node2_node1`.

```sql
@@ -40,16 +46,12 @@ SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
```

### When to use Parallel Apply

-Parallel Apply is always on by default and for most operations, we recommend that it is left on.
+Parallel Apply is always on by default. For most operations, we recommend that you leave it on.

### When not to use Parallel Apply

-As of, and up to at least PGD 5.1, Parallel Apply should not be used with Group Commit, CAMO and eager replication. You should disable Parallel Apply in these scenarios. If you are experiencing a large number of deadlocks, you may also want to disable it.
+For PGD 5.1 and earlier, don't use Parallel Apply with Group Commit, CAMO, or Eager Replication. Disable Parallel Apply in these scenarios. Also, if you're experiencing a large number of deadlocks, consider disabling it.

### Disabling Parallel Apply

-To disable Parallel Apply set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to 1.
-
-
-
-
+To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
index 697665045f8..c23f4c19fe8 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
@@ -892,7 +892,7 @@ RecoveryTarget allows to configure the moment where the recovery process will st
| `targetLSN ` | The target LSN (Log Sequence Number) | string |
| `targetTime ` | The target time as a timestamp in the RFC3339 standard | string |
| `targetImmediate` | End recovery as soon as a consistent state is reached | \*bool |
-| `exclusive ` | Set the target to be exclusive (defaults to true) | \*bool |
+| `exclusive ` | Set the target to be exclusive.
If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true | \*bool | diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx index 5eb2bba42c0..427408ac4fa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx @@ -702,10 +702,11 @@ You can choose only a single one among the targets above in each Additionally, you can specify `targetTLI` force recovery to a specific timeline. -By default, the previous parameters are considered to be exclusive, stopping -just before the recovery target. You can request inclusive behavior, -stopping right after the recovery target, setting the `exclusive` parameter to -`false` like in the following example relying on a blob container in Azure: +By default, the previous parameters are considered to be inclusive, stopping +just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE) +You can request exclusive behavior, +stopping right before the recovery target, by setting the `exclusive` parameter to +`true` like in the following example relying on a blob container in Azure: ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -724,7 +725,7 @@ spec: recoveryTarget: backupID: 20220616T142236 targetName: "maintenance-activity" - exclusive: false + exclusive: true externalClusters: - name: clusterBackup diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx index 9d677c49bb4..6f00e5e9e10 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx @@ -67,7 +67,6 @@ full lifecycle of a highly available Postgres database clusters with a primary/standby architecture, using native streaming replication. !!! Note - The operator has been renamed from Cloud Native PostgreSQL. Existing users of Cloud Native PostgreSQL will not experience any change, as the underlying components and resources have not changed. @@ -90,7 +89,7 @@ primary/standby architecture, using native streaming replication. ## Features unique to EDB Postgres of Kubernetes -- Long Term Support for 1.18.x +- [Long Term Support](#long-term-support) for 1.18.x - Red Hat certified operator for OpenShift - Support on IBM Power - EDB Postgres for Kubernetes Plugin @@ -102,11 +101,26 @@ You can [evaluate EDB Postgres for Kubernetes for free](evaluation.md). You need a valid license key to use EDB Postgres for Kubernetes in production. !!! Note - Based on the [Operator Capability Levels model](operator_capability_levels.md), users can expect a **"Level V - Auto Pilot"** set of capabilities from the EDB Postgres for Kubernetes Operator. +### Long Term Support + +EDB is committed to declaring one version of EDB Postgres for Kubernetes per +year as a Long Term Support version. This version will be supported and receive +maintenance releases for an additional 12 months beyond the last release of +CloudNativePG by the community for the same version. For example, the last +version of 1.18 of CloudNativePG was released on June 12, 2023. This was +declared a LTS version of EDB Postgres for Kubernetes and it will be supported +for additional 12 months until June 12, 2024. Customers can expect that they +will have at least 6 months to move between LTS versions. 
So they should +expect the next LTS to be available by January 12, 2024 to allow at least 6 +months to migrate. While we encourage customers to regularly upgrade to the +latest version of the operator to take advantage of new features, having LTS +versions allows customers desiring additional stability to stay on the same +version for 12-18 months before upgrading. + ## Licensing EDB Postgres for Kubernetes works with both PostgreSQL and EDB Postgres @@ -130,7 +144,6 @@ The EDB Postgres for Kubernetes Operator container images support the multi-arch format for the following platforms: `linux/amd64`, `linux/arm64`, `linux/ppc64le`, `linux/s390x`. !!! Warning - EDB Postgres for Kubernetes requires that all nodes in a Kubernetes cluster have the same CPU architecture, thus a hybrid CPU architecture Kubernetes cluster is not supported. Additionally, EDB supports `linux/ppc64le` and `linux/s390x` architectures @@ -156,7 +169,6 @@ In case you are not familiar with some basic terminology on Kubernetes and Postg please consult the ["Before you start" section](before_you_start.md). !!! Note - Although the guide primarily addresses Kubernetes, all concepts can be extended to OpenShift as well. diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx index 585ff61179e..421ac797426 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx @@ -19,12 +19,12 @@ The operator can be installed using the provided [Helm chart](https://github.com The operator can be installed like any other resource in Kubernetes, through a YAML manifest applied via `kubectl`. -You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml) +You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml) for this minor release as follows: ```sh kubectl apply -f \ - https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml + https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml ``` You can verify that with: diff --git a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx index 6cd3c99fefd..2f826edd838 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx @@ -39,22 +39,22 @@ INFO[0000] Prep: Network INFO[0000] Created network 'k3d-k3s-default' INFO[0000] Created image volume k3d-k3s-default-images INFO[0000] Starting new tools node... 
-INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6' +INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.5.1' INFO[0001] Creating node 'k3d-k3s-default-server-0' -INFO[0002] Starting Node 'k3d-k3s-default-tools' -INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1' -INFO[0007] Creating LoadBalancer 'k3d-k3s-default-serverlb' -INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' +INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.26.4-k3s1' +INFO[0003] Starting Node 'k3d-k3s-default-tools' +INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb' +INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.5.1' INFO[0010] Using the k3d-tools node to gather environment information -INFO[0011] HostIP: using network gateway 172.17.0.1 address -INFO[0011] Starting cluster 'k3s-default' -INFO[0011] Starting servers... -INFO[0011] Starting Node 'k3d-k3s-default-server-0' -INFO[0016] All agents already running. -INFO[0016] Starting helpers... -INFO[0016] Starting Node 'k3d-k3s-default-serverlb' -INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... -INFO[0025] Cluster 'k3s-default' created successfully! +INFO[0010] HostIP: using network gateway 172.17.0.1 address +INFO[0010] Starting cluster 'k3s-default' +INFO[0010] Starting servers... +INFO[0010] Starting Node 'k3d-k3s-default-server-0' +INFO[0015] All agents already running. +INFO[0015] Starting helpers... +INFO[0015] Starting Node 'k3d-k3s-default-serverlb' +INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... +INFO[0024] Cluster 'k3s-default' created successfully! INFO[0025] You can now use it like this: kubectl cluster-info ``` @@ -66,7 +66,7 @@ Verify that it works with the following command: kubectl get nodes __OUTPUT__ NAME STATUS ROLES AGE VERSION -k3d-k3s-default-server-0 Ready control-plane,master 32s v1.24.4+k3s1 +k3d-k3s-default-server-0 Ready control-plane,master 17s v1.26.4+k3s1 ``` You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again. @@ -76,7 +76,7 @@ You will see one node called `k3d-k3s-default-server-0`. 
If the status isn't yet Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the ["Installation and upgrades"](installation_upgrade.md) section: ```shell -kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.18.0.yaml +kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml __OUTPUT__ namespace/postgresql-operator-system created customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created @@ -179,12 +179,12 @@ metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}} - creationTimestamp: "2022-09-06T21:18:53Z" + creationTimestamp: "2023-07-28T16:14:08Z" generation: 1 name: cluster-example namespace: default - resourceVersion: "2037" - uid: e6d88753-e5d5-414c-a7ec-35c6c27f5a9a + resourceVersion: "1115" + uid: 70e054ae-b487-41e3-941b-b7c969f950be spec: affinity: podAntiAffinityType: preferred @@ -197,7 +197,8 @@ spec: localeCollate: C owner: app enableSuperuserAccess: true - imageName: quay.io/enterprisedb/postgresql:15.0 + failoverDelay: 0 + imageName: quay.io/enterprisedb/postgresql:15.3 instances: 3 logLevel: info maxSyncReplicas: 0 @@ -232,7 +233,7 @@ spec: wal_sender_timeout: 5s syncReplicaElectionConstraint: enabled: false - primaryUpdateMethod: switchover + primaryUpdateMethod: restart primaryUpdateStrategy: unsupervised resources: {} startDelay: 30 @@ -245,9 +246,9 @@ status: certificates: clientCASecret: cluster-example-ca expirations: - cluster-example-ca: 2022-12-05 21:13:54 +0000 UTC - cluster-example-replication: 2022-12-05 21:13:54 +0000 UTC - cluster-example-server: 2022-12-05 21:13:54 +0000 UTC + cluster-example-ca: 2023-10-26 16:09:09 +0000 UTC + cluster-example-replication: 2023-10-26 16:09:09 +0000 UTC + cluster-example-server: 2023-10-26 16:09:09 +0000 UTC replicationTLSSecret: cluster-example-replication serverAltDNSNames: - cluster-example-rw @@ -261,23 +262,47 @@ status: - cluster-example-ro.default.svc serverCASecret: cluster-example-ca serverTLSSecret: cluster-example-server - cloudNativePostgresqlCommitHash: ad578cb1 - cloudNativePostgresqlOperatorHash: 9f5db5e0e804fb51c6962140c0a447766bf2dd4d96dfa8d8529b8542754a23a4 + cloudNativePostgresqlCommitHash: c42ca1c2 + cloudNativePostgresqlOperatorHash: 1d51c15adffb02c81dbc4e8752ddb68f709699c78d9c3384ed9292188685971b conditions: - - lastTransitionTime: "2022-09-06T21:20:12Z" + - lastTransitionTime: "2023-07-28T16:15:29Z" message: Cluster is Ready reason: ClusterIsReady status: "True" type: Ready + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: velero addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/velero + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: external-backup-adapter addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapter + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: external-backup-adapter-cluster addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapterCluster + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: kasten addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/kasten configMapResourceVersion: metrics: - 
postgresql-operator-default-monitoring: "810" + postgresql-operator-default-monitoring: "788" currentPrimary: cluster-example-1 - currentPrimaryTimestamp: "2022-09-06T21:19:31.040336Z" + currentPrimaryTimestamp: "2023-07-28T16:14:48.609086Z" healthyPVC: - cluster-example-1 - cluster-example-2 - cluster-example-3 + instanceNames: + - cluster-example-1 + - cluster-example-2 + - cluster-example-3 instances: 3 instancesReportedState: cluster-example-1: @@ -298,10 +323,11 @@ status: licenseStatus: isImplicit: true isTrial: true - licenseExpiration: "2022-10-06T21:18:53Z" + licenseExpiration: "2023-08-27T16:14:08Z" licenseStatus: Implicit trial license repositoryAccess: false valid: true + managedRolesStatus: {} phase: Cluster in healthy state poolerIntegrations: pgBouncerIntegration: {} @@ -309,23 +335,24 @@ status: readService: cluster-example-r readyInstances: 3 secretsResourceVersion: - applicationSecretVersion: "778" - clientCaSecretVersion: "774" - replicationSecretVersion: "776" - serverCaSecretVersion: "774" - serverSecretVersion: "775" - superuserSecretVersion: "777" + applicationSecretVersion: "760" + clientCaSecretVersion: "756" + replicationSecretVersion: "758" + serverCaSecretVersion: "756" + serverSecretVersion: "757" + superuserSecretVersion: "759" targetPrimary: cluster-example-1 - targetPrimaryTimestamp: "2022-09-06T21:18:54.556099Z" + targetPrimaryTimestamp: "2023-07-28T16:14:09.501164Z" timelineID: 1 topology: instances: cluster-example-1: {} cluster-example-2: {} cluster-example-3: {} + nodesUsed: 1 successfullyExtracted: true writeService: cluster-example-rw - ``` +``` !!! Note By default, the operator will install the latest available minor version @@ -342,7 +369,7 @@ status: ## Install the kubectl-cnp plugin -EDB Postgres for Kubernetes provides [a plugin for kubectl](cnp-plugin) to manage a cluster in Kubernetes, along with a script to install it: +EDB Postgres for Kubernetes provides [a plugin for kubectl](kubectl-plugin) to manage a cluster in Kubernetes, along with a script to install it: ```shell curl -sSfL \ @@ -350,7 +377,7 @@ curl -sSfL \ sudo sh -s -- -b /usr/local/bin __OUTPUT__ EnterpriseDB/kubectl-cnp info checking GitHub for latest tag -EnterpriseDB/kubectl-cnp info found version: 1.18.0 for v1.18.0/linux/x86_64 +EnterpriseDB/kubectl-cnp info found version: 1.20.2 for v1.20.2/linux/x86_64 EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp ``` @@ -362,20 +389,20 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-1 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005) +Current Write LSN: 0/6054B60 (Timeline: 1 - WAL File: 000000010000000000000006) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -383,15 +410,18 @@ Not configured Streaming 
Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-2 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-2 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-3 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/6054B60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` !!! Note "There's more" @@ -414,23 +444,22 @@ Now if we check the status... kubectl cnp status cluster-example __OUTPUT__ Cluster Summary -Switchover in progress Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 -Primary instance: cluster-example-1 (switching to cluster-example-2) +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 Status: Failing over Failing over from cluster-example-1 to cluster-example-2 Instances: 3 Ready instances: 2 -Current Write LSN: 0/6000F58 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -438,11 +467,14 @@ Not configured Streaming Replication status Not available yet +Unmanaged Replication Slot Status +No unmanaged replication slots found + Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` ...the failover process has begun, with the second pod 
promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary: @@ -453,20 +485,53 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Failing over Failing over from cluster-example-1 to cluster-example-2 +Instances: 3 +Ready instances: 2 +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Not available yet + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-2 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7004D60 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -474,15 +539,53 @@ Not configured Streaming Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-1 0/7004D60 0/7004D60 0/7004D60 0/7004D60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status -Name Database Size Current LSN Replication role Status QoS Manager Version Node ----- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 
1.18.0 k3d-k3s-default-server-0 +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Cluster in healthy state +Instances: 3 +Ready instances: 3 +Current Write LSN: 0/7004D98 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority +---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- +cluster-example-1 0/7004D98 0/7004D98 0/7004D98 0/7004D98 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D98 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D98 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx index 3104b5635c9..4dec98b6713 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx @@ -820,9 +820,9 @@ Please pay close attention to the following table and notes: | EDB Postgres for Kubernetes Version | OpenShift Versions | Supported SCC | | ----------------------------------- | ------------------ | ------------------------- | -| 1.20.x | 4.10-4.12 | restricted, restricted-v2 | -| 1.19.x | 4.10-4.12 | restricted, restricted-v2 | -| 1.18.x | 4.10-4.12 | restricted, restricted-v2 | +| 1.20.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.19.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.18.x | 4.10-4.13 | restricted, restricted-v2 | !!! 
Important Since version 4.10 only provides `restricted`, EDB Postgres for Kubernetes diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx new file mode 100644 index 00000000000..7d5eb2ad2e3 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx @@ -0,0 +1,19 @@ +--- +title: "EDB Postgres for Kubernetes 1.18.6 release notes" +navTitle: "Version 1.18.6" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| ------------ | ------------------------------------------------------------------------------------------------------------------------------ | +| Enhancement | Added a metric and status field to monitor node usage by an EDB Postgres for Kubernetes cluster. | | +| Enhancement | Added troubleshooting instructions relating to hugepages to the documentation. | +| Enhancement | Extended the FAQs page in the documentation. | +| Enhancement | Added a check at the start of the restore process to ensure it can proceed; give improved error diagnostics if it cannot. | +| Bug fix | Ensured the logic of setting the recovery target matches that of Postgres. | +| Bug fix | Prevented taking over service accounts not owned by the cluster by setting ownerMetadata only during service account creation. | +| Bug fix | Prevented a possible crash of the instance manager during the configuration reload. | +| Bug fix | Prevented the LastFailedArchiveTime alert from triggering if a new backup has been successful after the failed ones. | +| Security fix | Updated all project dependencies to the latest versions | + diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx new file mode 100644 index 00000000000..5cde55b56b8 --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx @@ -0,0 +1,10 @@ +--- +title: "EDB Postgres for Kubernetes 1.19.4 release notes" +navTitle: "Version 1.19.4" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.19.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/). | diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx new file mode 100644 index 00000000000..305d0cd0f6f --- /dev/null +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx @@ -0,0 +1,10 @@ +--- +title: "EDB Postgres for Kubernetes 1.20.2 release notes" +navTitle: "Version 1.20.2" +--- + +This release of EDB Postgres for Kubernetes includes the following: + +| Type | Description | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merge | Merged with community CloudNativePG 1.20.2. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/). 
| diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx index a5c64f0459d..335c160cdfa 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx @@ -4,12 +4,15 @@ navTitle: "Release notes" redirects: - ../release_notes navigation: +- 1_20_2_rel_notes - 1_20_1_rel_notes - 1_20_0_rel_notes +- 1_19_4_rel_notes - 1_19_3_rel_notes - 1_19_2_rel_notes - 1_19_1_rel_notes - 1_19_0_rel_notes +- 1_18_6_rel_notes - 1_18_5_rel_notes - 1_18_4_rel_notes - 1_18_3_rel_notes @@ -60,12 +63,15 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB | Version | Release date | Upstream merges | | -------------------------- | ------------ | ------------------------------------------------------------------------------------------- | +| [1.20.2](1_20_2_rel_notes) | 2023 Jul 27 | Upstream [1.20.2](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | | [1.20.1](1_20_1_rel_notes) | 2023 Jun 13 | Upstream [1.20.1](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | | [1.20.0](1_20_0_rel_notes) | 2023 Apr 27 | Upstream [1.20.0](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/) | +| [1.19.4](1_19_4_rel_notes) | 2023 Jul 27 | Upstream [1.19.4](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.3](1_19_3_rel_notes) | 2023 Jun 13 | Upstream [1.19.3](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.2](1_19_2_rel_notes) | 2023 Apr 27 | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.1](1_19_1_rel_notes) | 2023 Mar 20 | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | | [1.19.0](1_19_0_rel_notes) | 2023 Feb 14 | Upstream [1.19.0](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) | +| [1.18.6](1_18_6_rel_notes) | 2023 Jul 27 | None | | [1.18.5](1_18_5_rel_notes) | 2023 Jun 13 | Upstream [1.18.5](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | | [1.18.4](1_18_4_rel_notes) | 2023 Apr 27 | Upstream [1.18.4](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | | [1.18.3](1_18_3_rel_notes) | 2023 Mar 20 | Upstream [1.18.3](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/) | diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx index 296d88870cc..c65e8fd390b 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx @@ -656,4 +656,40 @@ spec: In the [networking page](networking.md) you can find a network policy file that you can customize to create a `NetworkPolicy` explicitly allowing the -operator to connect cross-namespace to cluster pods. \ No newline at end of file +operator to connect cross-namespace to cluster pods. + +### Error while bootstrapping the data directory + +If your Cluster's initialization job crashes with a "Bus error (core dumped) +child process exited with exit code 135", you likely need to fix the Cluster +hugepages settings. + +The reason is the incomplete support of hugepages in the cgroup v1 that should +be fixed in v2. 
For more information, check the PostgreSQL [BUG #17757: Not +honoring huge_pages setting during initdb causes DB crash in +Kubernetes](https://www.postgresql.org/message-id/17757-dbdfc1f1c954a6db%40postgresql.org). + +To check whether hugepages are enabled, run `grep HugePages /proc/meminfo` on +the Kubernetes node and check if hugepages are present, their size, and how many +are free. + +If the hugepages are present, you need to configure how much hugepages memory +every PostgreSQL pod should have available. + +For example: + +```yaml + postgresql: + parameters: + shared_buffers: "128MB" + + resources: + requests: + memory: "512Mi" + limits: + hugepages-2Mi: "512Mi" +``` + +Please remember that you must have enough hugepages memory available to schedule +every Pod in the Cluster (in the example above, at least 512MiB per Pod must be +free). \ No newline at end of file
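To make the node-level check above concrete, here's a rough sketch; the counter values are illustrative and will differ on your nodes:

```shell
# Run on the Kubernetes node that hosts the Postgres pods (values are illustrative).
grep HugePages /proc/meminfo
# HugePages_Total:    1024
# HugePages_Free:      896
# ...
# With a limit of hugepages-2Mi: "512Mi", each pod needs 256 free 2MiB pages,
# so a node reporting 896 free pages could still schedule up to three such pods.
```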