diff --git a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
index 697665045f8..c23f4c19fe8 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/api_reference.mdx
@@ -892,7 +892,7 @@ RecoveryTarget allows to configure the moment where the recovery process will st
| `targetLSN `      | The target LSN (Log Sequence Number)                   | string |
| `targetTime `     | The target time as a timestamp in the RFC3339 standard | string |
| `targetImmediate` | End recovery as soon as a consistent state is reached  | \*bool |
-| `exclusive `      | Set the target to be exclusive (defaults to true)      | \*bool |
+| `exclusive `      | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true | \*bool |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
index 5eb2bba42c0..427408ac4fa 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -702,10 +702,11 @@ You can choose only a single one among the targets above in each
Additionally, you can specify `targetTLI` to force recovery to a specific
timeline.

-By default, the previous parameters are considered to be exclusive, stopping
-just before the recovery target. You can request inclusive behavior,
-stopping right after the recovery target, setting the `exclusive` parameter to
-`false` like in the following example relying on a blob container in Azure:
+By default, the previous parameters are considered to be inclusive, stopping
+just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
+You can request exclusive behavior, stopping right before the recovery target,
+by setting the `exclusive` parameter to `true`, as in the following example,
+which relies on a blob container in Azure:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -724,7 +725,7 @@ spec:
      recoveryTarget:
        backupID: 20220616T142236
        targetName: "maintenance-activity"
-        exclusive: false
+        exclusive: true

  externalClusters:
    - name: clusterBackup
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
index 9d677c49bb4..6f00e5e9e10 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -67,7 +67,6 @@ full lifecycle of a highly available Postgres database clusters with a
primary/standby architecture, using native streaming replication.

!!! Note
-
    The operator has been renamed from Cloud Native PostgreSQL. Existing users
    of Cloud Native PostgreSQL will not experience any change, as the underlying
    components and resources have not changed.
@@ -90,7 +89,7 @@ primary/standby architecture, using native streaming replication.

## Features unique to EDB Postgres for Kubernetes

-- Long Term Support for 1.18.x
+- [Long Term Support](#long-term-support) for 1.18.x
- Red Hat certified operator for OpenShift
- Support on IBM Power
- EDB Postgres for Kubernetes Plugin
@@ -102,11 +101,26 @@
You can [evaluate EDB Postgres for Kubernetes for free](evaluation.md).
You need a valid license key to use EDB Postgres for Kubernetes in production.

!!! Note
-
    Based on the [Operator Capability Levels model](operator_capability_levels.md),
    users can expect a **"Level V - Auto Pilot"** set of capabilities from the
    EDB Postgres for Kubernetes Operator.

+### Long Term Support
+
+EDB is committed to declaring one version of EDB Postgres for Kubernetes per
+year as a Long Term Support (LTS) version. This version will be supported and
+receive maintenance releases for an additional 12 months beyond the last
+community release of CloudNativePG for the same version. For example, the
+last release of CloudNativePG 1.18 was published on June 12, 2023; 1.18 was
+declared an LTS version of EDB Postgres for Kubernetes and will be supported
+for an additional 12 months, until June 12, 2024. Customers can expect at
+least 6 months to move between LTS versions, so the next LTS should be
+available by January 12, 2024, allowing at least 6 months to migrate. While
+we encourage customers to regularly upgrade to the latest version of the
+operator to take advantage of new features, LTS versions allow customers who
+want additional stability to stay on the same version for 12-18 months before
+upgrading.
+
## Licensing

EDB Postgres for Kubernetes works with both PostgreSQL and EDB Postgres
@@ -130,7 +144,6 @@ The EDB Postgres for Kubernetes Operator container images support the multi-arch
format for the following platforms: `linux/amd64`, `linux/arm64`,
`linux/ppc64le`, `linux/s390x`.

!!! Warning
-
    EDB Postgres for Kubernetes requires that all nodes in a Kubernetes cluster have the
    same CPU architecture, thus a hybrid CPU architecture Kubernetes cluster is not
    supported. Additionally, EDB supports `linux/ppc64le` and `linux/s390x` architectures
@@ -156,7 +169,6 @@ In case you are not familiar with some basic terminology on Kubernetes and Postg
please consult the ["Before you start" section](before_you_start.md).

!!! Note
-
    Although the guide primarily addresses Kubernetes, all concepts can
    be extended to OpenShift as well.

diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
index 585ff61179e..421ac797426 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -19,12 +19,12 @@ The operator can be installed using the provided [Helm chart](https://github.com
The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.

-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml)
for this minor release as follows:

```sh
kubectl apply -f \
-  https://get.enterprisedb.io/cnp/postgresql-operator-1.20.1.yaml
+  https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml
```

You can verify that with:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
index 6cd3c99fefd..2f826edd838 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
@@ -39,22 +39,22 @@ INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
-INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.6' +INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.5.1' INFO[0001] Creating node 'k3d-k3s-default-server-0' -INFO[0002] Starting Node 'k3d-k3s-default-tools' -INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.24.4-k3s1' -INFO[0007] Creating LoadBalancer 'k3d-k3s-default-serverlb' -INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.6' +INFO[0002] Pulling image 'docker.io/rancher/k3s:v1.26.4-k3s1' +INFO[0003] Starting Node 'k3d-k3s-default-tools' +INFO[0006] Creating LoadBalancer 'k3d-k3s-default-serverlb' +INFO[0007] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.5.1' INFO[0010] Using the k3d-tools node to gather environment information -INFO[0011] HostIP: using network gateway 172.17.0.1 address -INFO[0011] Starting cluster 'k3s-default' -INFO[0011] Starting servers... -INFO[0011] Starting Node 'k3d-k3s-default-server-0' -INFO[0016] All agents already running. -INFO[0016] Starting helpers... -INFO[0016] Starting Node 'k3d-k3s-default-serverlb' -INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... -INFO[0025] Cluster 'k3s-default' created successfully! +INFO[0010] HostIP: using network gateway 172.17.0.1 address +INFO[0010] Starting cluster 'k3s-default' +INFO[0010] Starting servers... +INFO[0010] Starting Node 'k3d-k3s-default-server-0' +INFO[0015] All agents already running. +INFO[0015] Starting helpers... +INFO[0015] Starting Node 'k3d-k3s-default-serverlb' +INFO[0022] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... +INFO[0024] Cluster 'k3s-default' created successfully! INFO[0025] You can now use it like this: kubectl cluster-info ``` @@ -66,7 +66,7 @@ Verify that it works with the following command: kubectl get nodes __OUTPUT__ NAME STATUS ROLES AGE VERSION -k3d-k3s-default-server-0 Ready control-plane,master 32s v1.24.4+k3s1 +k3d-k3s-default-server-0 Ready control-plane,master 17s v1.26.4+k3s1 ``` You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again. @@ -76,7 +76,7 @@ You will see one node called `k3d-k3s-default-server-0`. 
If the status isn't yet Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the ["Installation and upgrades"](installation_upgrade.md) section: ```shell -kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.18.0.yaml +kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml __OUTPUT__ namespace/postgresql-operator-system created customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created @@ -179,12 +179,12 @@ metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}} - creationTimestamp: "2022-09-06T21:18:53Z" + creationTimestamp: "2023-07-28T16:14:08Z" generation: 1 name: cluster-example namespace: default - resourceVersion: "2037" - uid: e6d88753-e5d5-414c-a7ec-35c6c27f5a9a + resourceVersion: "1115" + uid: 70e054ae-b487-41e3-941b-b7c969f950be spec: affinity: podAntiAffinityType: preferred @@ -197,7 +197,8 @@ spec: localeCollate: C owner: app enableSuperuserAccess: true - imageName: quay.io/enterprisedb/postgresql:15.0 + failoverDelay: 0 + imageName: quay.io/enterprisedb/postgresql:15.3 instances: 3 logLevel: info maxSyncReplicas: 0 @@ -232,7 +233,7 @@ spec: wal_sender_timeout: 5s syncReplicaElectionConstraint: enabled: false - primaryUpdateMethod: switchover + primaryUpdateMethod: restart primaryUpdateStrategy: unsupervised resources: {} startDelay: 30 @@ -245,9 +246,9 @@ status: certificates: clientCASecret: cluster-example-ca expirations: - cluster-example-ca: 2022-12-05 21:13:54 +0000 UTC - cluster-example-replication: 2022-12-05 21:13:54 +0000 UTC - cluster-example-server: 2022-12-05 21:13:54 +0000 UTC + cluster-example-ca: 2023-10-26 16:09:09 +0000 UTC + cluster-example-replication: 2023-10-26 16:09:09 +0000 UTC + cluster-example-server: 2023-10-26 16:09:09 +0000 UTC replicationTLSSecret: cluster-example-replication serverAltDNSNames: - cluster-example-rw @@ -261,23 +262,47 @@ status: - cluster-example-ro.default.svc serverCASecret: cluster-example-ca serverTLSSecret: cluster-example-server - cloudNativePostgresqlCommitHash: ad578cb1 - cloudNativePostgresqlOperatorHash: 9f5db5e0e804fb51c6962140c0a447766bf2dd4d96dfa8d8529b8542754a23a4 + cloudNativePostgresqlCommitHash: c42ca1c2 + cloudNativePostgresqlOperatorHash: 1d51c15adffb02c81dbc4e8752ddb68f709699c78d9c3384ed9292188685971b conditions: - - lastTransitionTime: "2022-09-06T21:20:12Z" + - lastTransitionTime: "2023-07-28T16:15:29Z" message: Cluster is Ready reason: ClusterIsReady status: "True" type: Ready + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: velero addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/velero + - lastTransitionTime: "2023-07-28T16:15:29Z" + message: external-backup-adapter addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapter + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: external-backup-adapter-cluster addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/externalBackupAdapterCluster + - lastTransitionTime: "2023-07-28T16:15:30Z" + message: kasten addon is disabled + reason: Disabled + status: "False" + type: k8s.enterprisedb.io/kasten configMapResourceVersion: metrics: - 
postgresql-operator-default-monitoring: "810" + postgresql-operator-default-monitoring: "788" currentPrimary: cluster-example-1 - currentPrimaryTimestamp: "2022-09-06T21:19:31.040336Z" + currentPrimaryTimestamp: "2023-07-28T16:14:48.609086Z" healthyPVC: - cluster-example-1 - cluster-example-2 - cluster-example-3 + instanceNames: + - cluster-example-1 + - cluster-example-2 + - cluster-example-3 instances: 3 instancesReportedState: cluster-example-1: @@ -298,10 +323,11 @@ status: licenseStatus: isImplicit: true isTrial: true - licenseExpiration: "2022-10-06T21:18:53Z" + licenseExpiration: "2023-08-27T16:14:08Z" licenseStatus: Implicit trial license repositoryAccess: false valid: true + managedRolesStatus: {} phase: Cluster in healthy state poolerIntegrations: pgBouncerIntegration: {} @@ -309,23 +335,24 @@ status: readService: cluster-example-r readyInstances: 3 secretsResourceVersion: - applicationSecretVersion: "778" - clientCaSecretVersion: "774" - replicationSecretVersion: "776" - serverCaSecretVersion: "774" - serverSecretVersion: "775" - superuserSecretVersion: "777" + applicationSecretVersion: "760" + clientCaSecretVersion: "756" + replicationSecretVersion: "758" + serverCaSecretVersion: "756" + serverSecretVersion: "757" + superuserSecretVersion: "759" targetPrimary: cluster-example-1 - targetPrimaryTimestamp: "2022-09-06T21:18:54.556099Z" + targetPrimaryTimestamp: "2023-07-28T16:14:09.501164Z" timelineID: 1 topology: instances: cluster-example-1: {} cluster-example-2: {} cluster-example-3: {} + nodesUsed: 1 successfullyExtracted: true writeService: cluster-example-rw - ``` +``` !!! Note By default, the operator will install the latest available minor version @@ -342,7 +369,7 @@ status: ## Install the kubectl-cnp plugin -EDB Postgres for Kubernetes provides [a plugin for kubectl](cnp-plugin) to manage a cluster in Kubernetes, along with a script to install it: +EDB Postgres for Kubernetes provides [a plugin for kubectl](kubectl-plugin) to manage a cluster in Kubernetes, along with a script to install it: ```shell curl -sSfL \ @@ -350,7 +377,7 @@ curl -sSfL \ sudo sh -s -- -b /usr/local/bin __OUTPUT__ EnterpriseDB/kubectl-cnp info checking GitHub for latest tag -EnterpriseDB/kubectl-cnp info found version: 1.18.0 for v1.18.0/linux/x86_64 +EnterpriseDB/kubectl-cnp info found version: 1.20.2 for v1.20.2/linux/x86_64 EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp ``` @@ -362,20 +389,20 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:15.0 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-1 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/5000060 (Timeline: 1 - WAL File: 000000010000000000000005) +Current Write LSN: 0/6054B60 (Timeline: 1 - WAL File: 000000010000000000000006) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -383,15 +410,18 @@ Not configured Streaming 
Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-2 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/5000060 0/5000060 0/5000060 0/5000060 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-2 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-3 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-1 33 MB 0/5000060 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-2 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/5000060 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/6054B60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` !!! Note "There's more" @@ -414,23 +444,22 @@ Now if we check the status... kubectl cnp status cluster-example __OUTPUT__ Cluster Summary -Switchover in progress Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 -Primary instance: cluster-example-1 (switching to cluster-example-2) +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 Status: Failing over Failing over from cluster-example-1 to cluster-example-2 Instances: 3 Ready instances: 2 -Current Write LSN: 0/6000F58 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -438,11 +467,14 @@ Not configured Streaming Replication status Not available yet +Unmanaged Replication Slot Status +No unmanaged replication slots found + Instances status Name Database Size Current LSN Replication role Status QoS Manager Version Node ---- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6000F58 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/60000A0 Standby (file based) OK BestEffort 1.18.0 k3d-k3s-default-server-0 +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` ...the failover process has begun, with the second pod 
promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary: @@ -453,20 +485,53 @@ __OUTPUT__ Cluster Summary Name: cluster-example Namespace: default -System ID: 7140379538380623889 -PostgreSQL Image: quay.io/enterprisedb/postgresql:14.5 +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Failing over Failing over from cluster-example-1 to cluster-example-2 +Instances: 3 +Ready instances: 2 +Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Not available yet + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 Primary instance: cluster-example-2 Status: Cluster in healthy state Instances: 3 Ready instances: 3 -Current Write LSN: 0/6004CD8 (Timeline: 2 - WAL File: 000000020000000000000006) +Current Write LSN: 0/7004D60 (Timeline: 2 - WAL File: 000000020000000000000007) Certificates Status Certificate Name Expiration Date Days Left Until Expiration ---------------- --------------- -------------------------- -cluster-example-ca 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-replication 2022-12-05 21:13:54 +0000 UTC 89.99 -cluster-example-server 2022-12-05 21:13:54 +0000 UTC 89.99 +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 Continuous Backup status Not configured @@ -474,15 +539,53 @@ Not configured Streaming Replication status Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority ---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- -cluster-example-1 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 -cluster-example-3 0/6004CD8 0/6004CD8 0/6004CD8 0/6004CD8 00:00:00 00:00:00 00:00:00 streaming async 0 +cluster-example-1 0/7004D60 0/7004D60 0/7004D60 0/7004D60 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found Instances status -Name Database Size Current LSN Replication role Status QoS Manager Version Node ----- ------------- ----------- ---------------- ------ --- --------------- ---- -cluster-example-2 33 MB 0/6004CD8 Primary OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-1 33 MB 0/6004CD8 Standby (async) OK BestEffort 1.18.0 k3d-k3s-default-server-0 -cluster-example-3 33 MB 0/6004CD8 Standby (async) OK BestEffort 
1.18.0 k3d-k3s-default-server-0 +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +$ kubectl cnp status cluster-example +Cluster Summary +Name: cluster-example +Namespace: default +System ID: 7260903692491026447 +PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3 +Primary instance: cluster-example-2 +Status: Cluster in healthy state +Instances: 3 +Ready instances: 3 +Current Write LSN: 0/7004D98 (Timeline: 2 - WAL File: 000000020000000000000007) + +Certificates Status +Certificate Name Expiration Date Days Left Until Expiration +---------------- --------------- -------------------------- +cluster-example-ca 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-replication 2023-10-26 16:09:09 +0000 UTC 89.99 +cluster-example-server 2023-10-26 16:09:09 +0000 UTC 89.99 + +Continuous Backup status +Not configured + +Streaming Replication status +Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority +---- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- ------------- +cluster-example-1 0/7004D98 0/7004D98 0/7004D98 0/7004D98 00:00:00 00:00:00 00:00:00 streaming async 0 + +Unmanaged Replication Slot Status +No unmanaged replication slots found + +Instances status +Name Database Size Current LSN Replication role Status QoS Manager Version Node +---- ------------- ----------- ---------------- ------ --- --------------- ---- +cluster-example-2 29 MB 0/7004D98 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-1 29 MB 0/7004D98 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0 +cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0 ``` diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx index 3104b5635c9..4dec98b6713 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx @@ -820,9 +820,9 @@ Please pay close attention to the following table and notes: | EDB Postgres for Kubernetes Version | OpenShift Versions | Supported SCC | | ----------------------------------- | ------------------ | ------------------------- | -| 1.20.x | 4.10-4.12 | restricted, restricted-v2 | -| 1.19.x | 4.10-4.12 | restricted, restricted-v2 | -| 1.18.x | 4.10-4.12 | restricted, restricted-v2 | +| 1.20.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.19.x | 4.10-4.13 | restricted, restricted-v2 | +| 1.18.x | 4.10-4.13 | restricted, restricted-v2 | !!! 
Important
    Since version 4.10 only provides `restricted`, EDB Postgres for Kubernetes
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
new file mode 100644
index 00000000000..1c42537aedc
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_6_rel_notes.mdx
@@ -0,0 +1,19 @@
+---
+title: "EDB Postgres for Kubernetes 1.18.6 release notes"
+navTitle: "Version 1.18.6"
+---
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type         | Description                                                                                                                      |
+| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
+| Enhancement  | Added a metric and status field to monitor node usage by a CloudNativePG cluster.                                                 |
+| Enhancement  | Added troubleshooting instructions relating to hugepages to the documentation.                                                    |
+| Enhancement  | Extended the FAQs page in the documentation.                                                                                       |
+| Enhancement  | Added a check at the start of the restore process to ensure it can proceed, and improved the error diagnostics if it cannot.      |
+| Bug fix      | Ensured the logic of setting the recovery target matches that of Postgres.                                                        |
+| Bug fix      | Prevented taking over service accounts not owned by the cluster, by setting ownerMetadata only during service account creation.   |
+| Bug fix      | Prevented a possible crash of the instance manager during the configuration reload.                                               |
+| Bug fix      | Prevented the LastFailedArchiveTime alert from triggering if a new backup has been successful after the failed ones.              |
+| Security fix | Updated all project dependencies to the latest versions.                                                                           |
+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx
new file mode 100644
index 00000000000..5cde55b56b8
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_19_4_rel_notes.mdx
@@ -0,0 +1,10 @@
+---
+title: "EDB Postgres for Kubernetes 1.19.4 release notes"
+navTitle: "Version 1.19.4"
+---
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type           | Description                                                                                                                                       |
+| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Upstream merge | Merged with community CloudNativePG 1.19.4. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/).    |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx
new file mode 100644
index 00000000000..305d0cd0f6f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_20_2_rel_notes.mdx
@@ -0,0 +1,10 @@
+---
+title: "EDB Postgres for Kubernetes 1.20.2 release notes"
+navTitle: "Version 1.20.2"
+---
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type           | Description                                                                                                                                       |
+| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Upstream merge | Merged with community CloudNativePG 1.20.2. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/).    |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
index a5c64f0459d..335c160cdfa 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
@@ -4,12 +4,15 @@ navTitle: "Release notes"
redirects:
  - ../release_notes
navigation:
+- 1_20_2_rel_notes
- 1_20_1_rel_notes
- 1_20_0_rel_notes
+- 1_19_4_rel_notes
- 1_19_3_rel_notes
- 1_19_2_rel_notes
- 1_19_1_rel_notes
- 1_19_0_rel_notes
+- 1_18_6_rel_notes
- 1_18_5_rel_notes
- 1_18_4_rel_notes
- 1_18_3_rel_notes
@@ -60,12 +63,15 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB

| Version                    | Release date | Upstream merges                                                                               |
| -------------------------- | ------------ | --------------------------------------------------------------------------------------------- |
+| [1.20.2](1_20_2_rel_notes) | 2023 Jul 27  | Upstream [1.20.2](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/)           |
| [1.20.1](1_20_1_rel_notes) | 2023 Jun 13  | Upstream [1.20.1](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/)           |
| [1.20.0](1_20_0_rel_notes) | 2023 Apr 27  | Upstream [1.20.0](https://cloudnative-pg.io/documentation/1.20/release_notes/v1.20/)           |
+| [1.19.4](1_19_4_rel_notes) | 2023 Jul 27  | Upstream [1.19.4](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/)           |
| [1.19.3](1_19_3_rel_notes) | 2023 Jun 13  | Upstream [1.19.3](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/)           |
| [1.19.2](1_19_2_rel_notes) | 2023 Apr 27  | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/)           |
| [1.19.1](1_19_1_rel_notes) | 2023 Mar 20  | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/)           |
| [1.19.0](1_19_0_rel_notes) | 2023 Feb 14  | Upstream [1.19.0](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/)           |
+| [1.18.6](1_18_6_rel_notes) | 2023 Jul 27  | None                                                                                            |
| [1.18.5](1_18_5_rel_notes) | 2023 Jun 13  | Upstream [1.18.5](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/)           |
| [1.18.4](1_18_4_rel_notes) | 2023 Apr 27  | Upstream [1.18.4](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/)           |
| [1.18.3](1_18_3_rel_notes) | 2023 Mar 20  | Upstream [1.18.3](https://cloudnative-pg.io/documentation/1.18/release_notes/v1.18/)           |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
index 296d88870cc..c65e8fd390b 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
@@ -656,4 +656,40 @@ spec:

In the [networking page](networking.md) you can find a network policy file
that you can customize to create a `NetworkPolicy` explicitly allowing the
-operator to connect cross-namespace to cluster pods.
\ No newline at end of file
+operator to connect cross-namespace to cluster pods.
+
+### Error while bootstrapping the data directory
+
+If your Cluster's initialization job crashes with a "Bus error (core dumped)
+child process exited with exit code 135", you likely need to fix the Cluster
+hugepages settings.
+
+The likely cause is incomplete support for hugepages in cgroup v1, which
+should be fixed in cgroup v2. For more information, check the PostgreSQL
+[BUG #17757: Not honoring huge_pages setting during initdb causes DB crash in
+Kubernetes](https://www.postgresql.org/message-id/17757-dbdfc1f1c954a6db%40postgresql.org).
+
+To check whether hugepages are enabled, run `grep HugePages /proc/meminfo` on
+the Kubernetes node and check if hugepages are present, their size, and how
+many are free.
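+
+As a minimal sketch, on a node where hugepages have been reserved, the check
+might return something like the following (the figures are illustrative; with
+the default 2MiB page size, the 512 free pages below amount to 1GiB):
+
+```
+HugePages_Total:     512
+HugePages_Free:      512
+HugePages_Rsvd:        0
+HugePages_Surp:        0
+```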
+
+If hugepages are present, you need to configure how much hugepages memory
+every PostgreSQL pod should have available.
+
+For example:
+
+```yaml
+  postgresql:
+    parameters:
+      shared_buffers: "128MB"
+
+  resources:
+    requests:
+      memory: "512Mi"
+    limits:
+      hugepages-2Mi: "512Mi"
+```
+
+Please remember that you must have enough hugepages memory available to
+schedule every Pod in the Cluster (in the example above, at least 512MiB per
+Pod must be free).
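+
+For reference, here is a minimal sketch of how that fragment could fit into a
+complete `Cluster` manifest. The resource name and sizing are illustrative,
+and the example assumes 2MiB hugepages are available on the node:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example-hugepages
+spec:
+  instances: 3
+  postgresql:
+    parameters:
+      shared_buffers: "128MB"
+  resources:
+    requests:
+      memory: "512Mi"
+    limits:
+      hugepages-2Mi: "512Mi"
+  storage:
+    size: 1Gi
+```
+
+With three instances each limited to 512MiB of hugepages, the cluster as a
+whole needs at least 1.5GiB of free hugepages memory across the nodes where
+its Pods are scheduled.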