diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
index 0173c9dc96c..2e9c7e0d537 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
@@ -30,12 +30,18 @@ Below you will find a description of the defined resources:
- [BarmanObjectStoreConfiguration](#BarmanObjectStoreConfiguration)
- [BootstrapConfiguration](#BootstrapConfiguration)
- [BootstrapInitDB](#BootstrapInitDB)
+- [BootstrapPgBaseBackup](#BootstrapPgBaseBackup)
- [BootstrapRecovery](#BootstrapRecovery)
+- [CertificatesConfiguration](#CertificatesConfiguration)
+- [CertificatesStatus](#CertificatesStatus)
- [Cluster](#Cluster)
- [ClusterList](#ClusterList)
- [ClusterSpec](#ClusterSpec)
- [ClusterStatus](#ClusterStatus)
+- [ConfigMapKeySelector](#ConfigMapKeySelector)
- [DataBackupConfiguration](#DataBackupConfiguration)
+- [ExternalCluster](#ExternalCluster)
+- [LocalObjectReference](#LocalObjectReference)
- [MonitoringConfiguration](#MonitoringConfiguration)
- [NodeMaintenanceWindow](#NodeMaintenanceWindow)
- [PostgresConfiguration](#PostgresConfiguration)
@@ -46,6 +52,7 @@ Below you will find a description of the defined resources:
- [ScheduledBackupList](#ScheduledBackupList)
- [ScheduledBackupSpec](#ScheduledBackupSpec)
- [ScheduledBackupStatus](#ScheduledBackupStatus)
+- [SecretKeySelector](#SecretKeySelector)
- [SecretsResourceVersion](#SecretsResourceVersion)
- [StorageConfiguration](#StorageConfiguration)
- [WalBackupConfiguration](#WalBackupConfiguration)
@@ -57,11 +64,12 @@ Below you will find a description of the defined resources:
AffinityConfiguration contains the info we need to create the affinity rules for Pods
-Name | Description | Type
---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -----------------
-`enablePodAntiAffinity` | Activates anti-affinity for the pods. The operator will define pods anti-affinity unless this field is explicitly set to false | *bool
-`topologyKey ` | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that - *mandatory* | string
-`nodeSelector ` | NodeSelector is map of key-value pairs used to define the nodes on which the pods can run. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | map[string]string
+Name | Description | Type
+--------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------
+`enablePodAntiAffinity` | Activates anti-affinity for the pods. The operator will define pods anti-affinity unless this field is explicitly set to false | *bool
+`topologyKey ` | TopologyKey to use for anti-affinity configuration. See k8s documentation for more info on that - *mandatory* | string
+`nodeSelector ` | NodeSelector is map of key-value pairs used to define the nodes on which the pods can run. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | map[string]string
+`tolerations ` | Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run on tainted nodes. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ | []corev1.Toleration
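+
+As a hedged illustration (the field values below are examples, not defaults), the corresponding `affinity` stanza of a `Cluster` spec might look like:
+
+```yaml
+# Fragment of a Cluster spec: enable anti-affinity across nodes,
+# pin pods to labelled nodes, and tolerate a hypothetical taint.
+affinity:
+  enablePodAntiAffinity: true
+  topologyKey: kubernetes.io/hostname
+  nodeSelector:
+    workload: database
+  tolerations:
+    - key: postgres-only
+      operator: Equal
+      value: "true"
+      effect: NoSchedule
+```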
@@ -71,7 +79,7 @@ Backup is the Schema for the backups API
Name | Description | Type
-------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------
-`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#objectmeta-v1-meta)
+`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta)
`spec ` | Specification of the desired behavior of the backup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [BackupSpec](#BackupSpec)
`status ` | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [BackupStatus](#BackupStatus)
@@ -93,7 +101,7 @@ BackupList contains a list of Backup
Name | Description | Type
-------- | ---------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------
-`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#listmeta-v1-meta)
+`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
`items ` | List of backups - *mandatory* | [[]Backup](#Backup)
@@ -102,9 +110,9 @@ Name | Description
BackupSpec defines the desired state of Backup
-Name | Description | Type
-------- | --------------------- | ----------------------------------------------------------------------------------------------------------------------------
-`cluster` | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
+Name | Description | Type
+------- | --------------------- | ---------------------------------------------
+`cluster` | The cluster to backup | [LocalObjectReference](#LocalObjectReference)
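+
+For illustration, a minimal `Backup` manifest referencing a cluster by name (the names are examples):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+  name: backup-example
+spec:
+  cluster:
+    name: cluster-example
+```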
@@ -121,11 +129,15 @@ Name | Description
`encryption ` | Encryption method required to S3 API | string
`backupId ` | The ID of the Barman backup | string
`phase ` | The last backup status | BackupPhase
-`startedAt ` | When the backup was started | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
-`stoppedAt ` | When the backup was terminated | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
+`startedAt ` | When the backup was started | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
+`stoppedAt ` | When the backup was terminated | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
+`beginWal ` | The starting WAL | string
+`endWal ` | The ending WAL | string
+`beginLSN ` | The starting xlog | string
+`endLSN ` | The ending xlog | string
`error ` | The detected error | string
-`commandOutput ` | The backup command output | string
-`commandError ` | The backup command output | string
+`commandOutput ` | Unused. Retained for compatibility with old versions. | string
+`commandError ` | The backup command output in case of error | string
@@ -148,10 +160,11 @@ Name | Description
BootstrapConfiguration contains information about how to create the PostgreSQL cluster. Only a single bootstrap method can be defined among the supported ones. `initdb` will be used as the bootstrap method if left unspecified. Refer to the Bootstrap page of the documentation for more information.
-Name | Description | Type
--------- | ----------------------------------- | ----------------------------------------
-`initdb ` | Bootstrap the cluster via initdb | [*BootstrapInitDB](#BootstrapInitDB)
-`recovery` | Bootstrap the cluster from a backup | [*BootstrapRecovery](#BootstrapRecovery)
+Name | Description | Type
+------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------
+`initdb ` | Bootstrap the cluster via initdb | [*BootstrapInitDB](#BootstrapInitDB)
+`recovery ` | Bootstrap the cluster from a backup | [*BootstrapRecovery](#BootstrapRecovery)
+`pg_basebackup` | Bootstrap the cluster taking a physical backup of another compatible PostgreSQL instance | [*BootstrapPgBaseBackup](#BootstrapPgBaseBackup)
@@ -159,13 +172,23 @@ Name | Description | Type
BootstrapInitDB is the configuration of the bootstrap process when initdb is used Refer to the Bootstrap page of the documentation for more information.
-Name | Description | Type
--------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------
-`database` | Name of the database used by the application. Default: `app`. - *mandatory* | string
-`owner ` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. - *mandatory* | string
-`secret ` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | [*corev1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
-`redwood ` | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | *bool
-`options ` | The list of options that must be passed to initdb when creating the cluster | []string
+Name | Description | Type
+-------- | -------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------
+`database` | Name of the database used by the application. Default: `app`. - *mandatory* | string
+`owner ` | Name of the owner of the database in the instance to be used by applications. Defaults to the value of the `database` key. - *mandatory* | string
+`secret ` | Name of the secret containing the initial credentials for the owner of the user database. If empty a new secret will be created from scratch | [*LocalObjectReference](#LocalObjectReference)
+`redwood ` | If we need to enable/disable Redwood compatibility. Requires EPAS and for EPAS defaults to true | *bool
+`options ` | The list of options that must be passed to initdb when creating the cluster | []string
+
+
+
+## BootstrapPgBaseBackup
+
+BootstrapPgBaseBackup contains the configuration required to take a physical backup of an existing PostgreSQL cluster
+
+Name | Description | Type
+------ | ----------------------------------------------------------------- | ------
+`source` | The name of the server of which we need to take a physical backup - *mandatory* | string
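+
+As a sketch, `source` must match the `name` of an entry in the `externalClusters` section of the same spec (all names below are illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-clone
+spec:
+  instances: 3
+  bootstrap:
+    pg_basebackup:
+      source: origin-server
+  externalClusters:
+    - name: origin-server
+      connectionParameters:
+        host: origin.example.com
+        user: streaming_replica
+```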
@@ -173,10 +196,45 @@ Name | Description
BootstrapRecovery contains the configuration required to restore the backup with the specified name and, after having changed the password with the one chosen for the superuser, will use it to bootstrap a full cluster cloning all the instances from the restored primary. Refer to the Bootstrap page of the documentation for more information.
-Name | Description | Type
--------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------
-`backup ` | The backup we need to restore - *mandatory* | [corev1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
-`recoveryTarget` | By default the recovery will end as soon as a consistent state is reached: in this case that means at the end of a backup. This option allows to fine tune the recovery process | [*RecoveryTarget](#RecoveryTarget)
+Name | Description | Type
+-------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------
+`backup ` | The backup we need to restore - *mandatory* | [LocalObjectReference](#LocalObjectReference)
+`recoveryTarget` | By default, the recovery process applies all the available WAL files in the archive (full recovery). However, you can also end the recovery as soon as a consistent state is reached or recover to a point-in-time (PITR) by specifying a `RecoveryTarget` object, as expected by PostgreSQL (i.e., timestamp, transaction Id, LSN, ...). More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET | [*RecoveryTarget](#RecoveryTarget)
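+
+As an illustrative fragment, a point-in-time recovery from a `Backup` object (the backup name and timestamp are examples):
+
+```yaml
+spec:
+  bootstrap:
+    recovery:
+      backup:
+        name: backup-example
+      recoveryTarget:
+        targetTime: "2021-04-21 12:00:00.000000+00"
+```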
+
+
+
+## CertificatesConfiguration
+
+CertificatesConfiguration contains the needed configurations to handle server certificates.
+
+Name | Description | Type
+----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------
+`serverCASecret ` | The secret containing the Server CA certificate. If not defined, a new secret will be created with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret. Contains: `ca.crt`, the CA that should be used to validate the server certificate, used as `sslrootcert` in client connection strings; `ca.key`, the key used to generate server SSL certs, which can be omitted if ServerTLSSecret is provided. | string
+`serverTLSSecret ` | The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as `ssl_cert_file` and `ssl_key_file` so that clients can connect to postgres securely. If not defined, ServerCASecret must provide also `ca.key` and a new secret will be created using the provided CA. | string
+`serverAltDNSNames` | The list of the server alternative DNS names to be added to the generated server TLS certificates, when required. | []string
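+
+A sketch of a user-provided server certificate configuration, assuming both secrets were created beforehand (the secret names are illustrative):
+
+```yaml
+spec:
+  certificates:
+    # Must contain ca.crt (and ca.key, if serverTLSSecret is not given)
+    serverCASecret: my-server-ca
+    # kubernetes.io/tls secret with tls.crt and tls.key
+    serverTLSSecret: my-server-tls
+    serverAltDNSNames:
+      - pg.example.com
+```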
+
+
+
+## CertificatesStatus
+
+CertificatesStatus contains configuration certificates and related expiration dates.
+
+Name | Description | Type
+-------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -----------------
+`clientCASecret ` | The secret containing the Client CA certificate. This secret contains a self-signed CA and is used to sign TLS certificates used for client authentication. Contains: `ca.crt`, the CA that should be used to validate the client certificate, used as `ssl_ca_file`; `ca.key`, the key used to sign client SSL certs. | string
+`replicationTLSSecret` | The secret of type kubernetes.io/tls containing the TLS client certificate to authenticate as `streaming_replica` user. | string
+`expirations ` | Expiration dates for all certificates. | map[string]string
@@ -186,7 +244,7 @@ Cluster is the Schema for the PostgreSQL API
Name | Description | Type
-------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------
-`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#objectmeta-v1-meta)
+`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta)
`spec ` | Specification of the desired behavior of the cluster. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [ClusterSpec](#ClusterSpec)
`status ` | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [ClusterStatus](#ClusterStatus)
@@ -198,7 +256,7 @@ ClusterList contains a list of Cluster
Name | Description | Type
-------- | ---------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------
-`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#listmeta-v1-meta)
+`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
`items ` | List of clusters - *mandatory* | [[]Cluster](#Cluster)
@@ -207,29 +265,31 @@ Name | Description
ClusterSpec defines the desired state of Cluster
-Name | Description | Type
---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------
-`description ` | Description of this PostgreSQL cluster | string
-`imageName ` | Name of the container image | string
-`postgresUID ` | The UID of the `postgres` user inside the image, defaults to `26` | int64
-`postgresGID ` | The GID of the `postgres` user inside the image, defaults to `26` | int64
-`instances ` | Number of instances required in the cluster - *mandatory* | int32
-`minSyncReplicas ` | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. | int32
-`maxSyncReplicas ` | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. | int32
-`postgresql ` | Configuration of the PostgreSQL server | [PostgresConfiguration](#PostgresConfiguration)
-`bootstrap ` | Instructions to bootstrap this cluster | [*BootstrapConfiguration](#BootstrapConfiguration)
-`superuserSecret ` | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | [*corev1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
-`imagePullSecrets ` | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | [[]corev1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
-`storage ` | Configuration of the storage of the instances | [StorageConfiguration](#StorageConfiguration)
-`startDelay ` | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32
-`stopDelay ` | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32
-`affinity ` | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#AffinityConfiguration)
-`resources ` | Resources requirements of every generated Pod. Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information. | [corev1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#resourcerequirements-v1-core)
-`primaryUpdateStrategy` | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy
-`backup ` | The configuration to be used for backups | [*BackupConfiguration](#BackupConfiguration)
-`nodeMaintenanceWindow` | Define a maintenance window for the Kubernetes nodes | [*NodeMaintenanceWindow](#NodeMaintenanceWindow)
-`licenseKey ` | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string
-`monitoring ` | The configuration of the monitoring infrastructure of this cluster | [*MonitoringConfiguration](#MonitoringConfiguration)
+Name | Description | Type
+--------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------
+`description ` | Description of this PostgreSQL cluster | string
+`imageName ` | Name of the container image, supporting both tags (`<image>:<tag>`) and digests for deterministic and repeatable deployments (`<image>:<tag>@sha256:<digestValue>`) | string
+`postgresUID ` | The UID of the `postgres` user inside the image, defaults to `26` | int64
+`postgresGID ` | The GID of the `postgres` user inside the image, defaults to `26` | int64
+`instances ` | Number of instances required in the cluster - *mandatory* | int32
+`minSyncReplicas ` | Minimum number of instances required in synchronous replication with the primary. Undefined or 0 allow writes to complete when no standby is available. | int32
+`maxSyncReplicas ` | The target value for the synchronous replication quorum, that can be decreased if the number of ready standbys is lower than this. Undefined or 0 disable synchronous replication. | int32
+`postgresql ` | Configuration of the PostgreSQL server | [PostgresConfiguration](#PostgresConfiguration)
+`bootstrap ` | Instructions to bootstrap this cluster | [*BootstrapConfiguration](#BootstrapConfiguration)
+`superuserSecret ` | The secret containing the superuser password. If not defined a new secret will be created with a randomly generated password | [*LocalObjectReference](#LocalObjectReference)
+`certificates ` | The configuration for the CA and related certificates | [*CertificatesConfiguration](#CertificatesConfiguration)
+`imagePullSecrets ` | The list of pull secrets to be used to pull the images. If the license key contains a pull secret that secret will be automatically included. | [[]LocalObjectReference](#LocalObjectReference)
+`storage ` | Configuration of the storage of the instances | [StorageConfiguration](#StorageConfiguration)
+`startDelay ` | The time in seconds that is allowed for a PostgreSQL instance to successfully start up (default 30) | int32
+`stopDelay ` | The time in seconds that is allowed for a PostgreSQL instance node to gracefully shutdown (default 30) | int32
+`affinity ` | Affinity/Anti-affinity rules for Pods | [AffinityConfiguration](#AffinityConfiguration)
+`resources ` | Resources requirements of every generated Pod. Please refer to https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ for more information. | [corev1.ResourceRequirements](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#resourcerequirements-v1-core)
+`primaryUpdateStrategy` | Strategy to follow to upgrade the primary server during a rolling update procedure, after all replicas have been successfully updated: it can be automated (`unsupervised` - default) or manual (`supervised`) | PrimaryUpdateStrategy
+`backup ` | The configuration to be used for backups | [*BackupConfiguration](#BackupConfiguration)
+`nodeMaintenanceWindow` | Define a maintenance window for the Kubernetes nodes | [*NodeMaintenanceWindow](#NodeMaintenanceWindow)
+`licenseKey ` | The license key of the cluster. When empty, the cluster operates in trial mode and after the expiry date (default 30 days) the operator will cease any reconciliation attempt. For details, please refer to the license agreement that comes with the operator. | string
+`monitoring ` | The configuration of the monitoring infrastructure of this cluster | [*MonitoringConfiguration](#MonitoringConfiguration)
+`externalClusters ` | The list of external clusters which are used in the configuration | [[]ExternalCluster](#ExternalCluster)
@@ -255,6 +315,17 @@ Name | Description
`phase ` | Current phase of the cluster | string
`phaseReason ` | Reason for the current phase | string
`secretsResourceVersion` | The list of resource versions of the secrets managed by the operator. Every change here is done in the interest of the instance manager, which will refresh the secret data | [SecretsResourceVersion](#SecretsResourceVersion)
+`certificates ` | The configuration for the CA and related certificates, initialized with defaults. | [CertificatesStatus](#CertificatesStatus)
+
+
+
+## ConfigMapKeySelector
+
+ConfigMapKeySelector contains enough information to let you locate the key of a ConfigMap
+
+Name | Description | Type
+--- | ----------------- | ------
+`key` | The key to select - *mandatory* | string
@@ -269,16 +340,41 @@ Name | Description
`immediateCheckpoint` | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. | bool
`jobs ` | The number of parallel jobs to be used to upload the backup, defaults to 2 | *int32
+
+
+## ExternalCluster
+
+ExternalCluster represents the connection parameters of an external server which is used in the cluster configuration
+
+Name | Description | Type
+-------------------- | ---------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------
+`name ` | The server name, required - *mandatory* | string
+`connectionParameters` | The list of connection parameters, such as `dbname`, `host`, `username`, etc. | map[string]string
+`sslCert ` | The reference to an SSL certificate to be used to connect to this instance | [*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core)
+`sslKey ` | The reference to an SSL private key to be used to connect to this instance | [*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core)
+`sslRootCert ` | The reference to an SSL CA public key to be used to connect to this instance | [*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core)
+`password ` | The reference to the password to be used to connect to the server | [*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core)
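+
+An illustrative `externalClusters` entry with a password taken from a secret (all names are examples):
+
+```yaml
+spec:
+  externalClusters:
+    - name: origin-server
+      connectionParameters:
+        host: origin.example.com
+        dbname: app
+        user: postgres
+      password:
+        name: origin-server-credentials
+        key: password
+```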
+
+
+
+## LocalObjectReference
+
+LocalObjectReference contains enough information to let you locate a local object with a known type inside the same namespace
+
+Name | Description | Type
+---- | --------------------- | ------
+`name` | Name of the referent. - *mandatory* | string
+
## MonitoringConfiguration
MonitoringConfiguration is the type containing all the monitoring configuration for a certain cluster
-Name | Description | Type
----------------------- | ----------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------
-`customQueriesConfigMap` | The list of config maps containing the custom queries | [[]corev1.ConfigMapKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#configmapkeyselector-v1-core)
-`customQueriesSecret ` | The list of secrets containing the custom queries | [[]corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core)
+Name | Description | Type
+---------------------- | ----------------------------------------------------- | -----------------------------------------------
+`customQueriesConfigMap` | The list of config maps containing the custom queries | [[]ConfigMapKeySelector](#ConfigMapKeySelector)
+`customQueriesSecret ` | The list of secrets containing the custom queries | [[]SecretKeySelector](#SecretKeySelector)
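+
+For illustration, custom queries can be referenced from a ConfigMap as follows (the ConfigMap name and key are examples):
+
+```yaml
+spec:
+  monitoring:
+    customQueriesConfigMap:
+      - name: example-monitoring-queries
+        key: custom-queries.yaml
+```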
@@ -329,7 +425,7 @@ RollingUpdateStatus contains the information about an instance which is being up
Name | Description | Type
--------- | ----------------------------------- | ------------------------------------------------------------------------------------------------
`imageName` | The image which we put into the Pod - *mandatory* | string
-`startedAt` | When the update has been started | [metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
+`startedAt` | When the update has been started | [metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
@@ -337,10 +433,10 @@ Name | Description | Type
S3Credentials is the type for the credentials to be used to upload files to S3
-Name | Description | Type
---------------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------
-`accessKeyId ` | The reference to the access key id - *mandatory* | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core)
-`secretAccessKey` | The reference to the secret access key - *mandatory* | [corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#secretkeyselector-v1-core)
+Name | Description | Type
+--------------- | -------------------------------------- | ---------------------------------------
+`accessKeyId ` | The reference to the access key id - *mandatory* | [SecretKeySelector](#SecretKeySelector)
+`secretAccessKey` | The reference to the secret access key - *mandatory* | [SecretKeySelector](#SecretKeySelector)
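+
+A sketch of how these selectors appear inside a backup configuration, assuming a secret `aws-creds` holding the two keys shown (names and path are examples):
+
+```yaml
+spec:
+  backup:
+    barmanObjectStore:
+      destinationPath: s3://backups/example
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+```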
@@ -350,7 +446,7 @@ ScheduledBackup is the Schema for the scheduledbackups API
Name | Description | Type
-------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------
-`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#objectmeta-v1-meta)
+`metadata` | | [metav1.ObjectMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#objectmeta-v1-meta)
`spec ` | Specification of the desired behavior of the ScheduledBackup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [ScheduledBackupSpec](#ScheduledBackupSpec)
`status ` | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status | [ScheduledBackupStatus](#ScheduledBackupStatus)
@@ -362,7 +458,7 @@ ScheduledBackupList contains a list of ScheduledBackup
Name | Description | Type
-------- | ---------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------
-`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#listmeta-v1-meta)
+`metadata` | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | [metav1.ListMeta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#listmeta-v1-meta)
`items ` | List of clusters - *mandatory* | [[]ScheduledBackup](#ScheduledBackup)
@@ -371,11 +467,11 @@ Name | Description
ScheduledBackupSpec defines the desired state of ScheduledBackup
-Name | Description | Type
--------- | -------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------
-`suspend ` | If this backup is suspended of not | *bool
-`schedule` | The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron. - *mandatory* | string
-`cluster ` | The cluster to backup | [v1.LocalObjectReference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#localobjectreference-v1-core)
+Name | Description | Type
+-------- | -------------------------------------------------------------------- | ---------------------------------------------
+`suspend ` | If this backup is suspended or not | *bool
+`schedule` | The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron. - *mandatory* | string
+`cluster ` | The cluster to backup | [LocalObjectReference](#LocalObjectReference)
@@ -385,9 +481,19 @@ ScheduledBackupStatus defines the observed state of ScheduledBackup
Name | Description | Type
---------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------
-`lastCheckTime ` | The latest time the schedule | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
-`lastScheduleTime` | Information when was the last time that backup was successfully scheduled. | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
-`nextScheduleTime` | Next time we will run a backup | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#time-v1-meta)
+`lastCheckTime ` | The latest time the schedule was checked | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
+`lastScheduleTime` | Information when was the last time that backup was successfully scheduled. | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
+`nextScheduleTime` | Next time we will run a backup | [*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta)
+
+
+
+## SecretKeySelector
+
+SecretKeySelector contains enough information to let you locate the key of a Secret
+
+Name | Description | Type
+--- | ----------------- | ------
+`key` | The key to select - *mandatory* | string
@@ -395,13 +501,15 @@ Name | Description
SecretsResourceVersion is the resource versions of the secrets managed by the operator
-Name | Description | Type
------------------------- | ----------------------------------------------------------------- | ------
-`superuserSecretVersion ` | The resource version of the "postgres" user secret - *mandatory* | string
-`replicationSecretVersion` | The resource version of the "streaming_replication" user secret - *mandatory* | string
-`applicationSecretVersion` | The resource version of the "app" user secret - *mandatory* | string
-`caSecretVersion ` | The resource version of the "ca" secret version - *mandatory* | string
-`serverSecretVersion ` | The resource version of the PostgreSQL server-side secret version - *mandatory* | string
+Name | Description | Type
+------------------------ | -------------------------------------------------------------------- | ------
+`superuserSecretVersion ` | The resource version of the "postgres" user secret - *mandatory* | string
+`replicationSecretVersion` | The resource version of the "streaming_replication" user secret - *mandatory* | string
+`applicationSecretVersion` | The resource version of the "app" user secret - *mandatory* | string
+`caSecretVersion ` | Unused. Retained for compatibility with old versions. | string
+`clientCaSecretVersion ` | The resource version of the PostgreSQL client-side CA secret version - *mandatory* | string
+`serverCaSecretVersion ` | The resource version of the PostgreSQL server-side CA secret version - *mandatory* | string
+`serverSecretVersion ` | The resource version of the PostgreSQL server-side secret version - *mandatory* | string
@@ -414,7 +522,7 @@ Name | Description
`storageClass ` | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class | *string
`size ` | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. - *mandatory* | string
`resizeInUseVolumes` | Resize existent PVCs, defaults to true | *bool
-`pvcTemplate ` | Template to be used to generate the Persistent Volume Claim | [*corev1.PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#persistentvolumeclaim-v1-core)
+`pvcTemplate ` | Template to be used to generate the Persistent Volume Claim | [*corev1.PersistentVolumeClaimSpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#persistentvolumeclaim-v1-core)
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
index 4dccd20f717..814957ab9e9 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
@@ -124,3 +124,4 @@ The `-app` credentials are the ones that should be used by applications
connecting to the PostgreSQL cluster.
The `-superuser` ones are supposed to be used only for administrative purposes.
+
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
index a6288dce3c4..11e1806e562 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
@@ -4,6 +4,11 @@ originalFilePath: 'src/bootstrap.md'
product: 'Cloud Native Operator'
---
+!!! Note
+ When referring to "PostgreSQL cluster" in this section, the same
+ concepts apply to both PostgreSQL and EDB Postgres Advanced, unless
+ differently stated.
+
This section describes the options you have to create a new
PostgreSQL cluster and the design rationale behind them.
@@ -34,9 +39,13 @@ The `initdb` bootstrap method is used.
We currently support the following bootstrap methods:
-- `initdb`: initialise an empty PostgreSQL cluster
-- `recovery`: create a PostgreSQL cluster restoring from an existing backup
- and replaying all the available WAL files.
+- `initdb`: initialize an empty PostgreSQL cluster
+- `recovery`: create a PostgreSQL cluster by restoring from an existing backup
+ and replaying all the available WAL files or up to a given point in time
+- `pg_basebackup`: create a PostgreSQL cluster by cloning an existing one of the
+  same major version using `pg_basebackup` via the streaming replication protocol -
+ useful if you want to migrate databases to Cloud Native PostgreSQL, even
+ from outside Kubernetes.
## initdb
@@ -306,3 +315,256 @@ spec:
targetName: "maintenance-activity"
exclusive: false
```
+
+## pg_basebackup
+
+The `pg_basebackup` bootstrap mode lets you create a new cluster (*target*) as
+an exact physical copy of an existing and **binary compatible** PostgreSQL
+instance (*source*), through a valid *streaming replication* connection.
+The source instance can be either a primary or a standby PostgreSQL server.
+
+The primary use case for this method is **migration** to Cloud Native PostgreSQL,
+either from outside Kubernetes or within Kubernetes (e.g., from another operator).
+
+!!! Warning
+ The current implementation creates a *snapshot* of the origin PostgreSQL
+    instance when the cloning process terminates, and immediately starts
+ the created cluster. See ["Current limitations"](#current-limitations) below for details.
+
+Similar to the case of the `recovery` bootstrap method, once the clone operation
+completes, the operator will take ownership of the target cluster, starting from
+the first instance. This includes overriding some configuration parameters, as
+required by Cloud Native PostgreSQL, resetting the superuser password, creating
+the `streaming_replica` user, managing the replicas, and so on. The resulting
+cluster will be completely independent of the source instance.
+
+!!! Important
+ Configuring the network between the target instance and the source instance
+ goes beyond the scope of Cloud Native PostgreSQL documentation, as it depends
+ on the actual context and environment.
+
+The streaming replication client on the target instance, which will be
+transparently managed by `pg_basebackup`, can authenticate with the source
+instance in any of the following ways:
+
+1. via [username/password](#usernamepassword-authentication)
+2. via [TLS client certificate](#tls-certificate-authentication)
+
+The latter is the recommended one if you connect to a source managed
+by Cloud Native PostgreSQL or configured for TLS authentication.
+The first option is, however, the most common form of authentication to a
+PostgreSQL server in general, and might be the easiest way if the source
+instance runs in a traditional environment outside Kubernetes.
+Both cases are explained below.
+
+### Requirements
+
+The following requirements apply to the `pg_basebackup` bootstrap method:
+
+- target and source must have the same hardware architecture
+- target and source must have the same major PostgreSQL version
+- source must not have any tablespace defined (see ["Current limitations"](#current-limitations) below)
+- source must be configured with enough `max_wal_senders` to grant
+ access from the target for this one-off operation by providing at least
+ one *walsender* for the backup plus one for WAL streaming
+- the network between source and target must be configured to enable the target
+ instance to connect to the PostgreSQL port on the source instance
+- source must have a role with `REPLICATION LOGIN` privileges and must accept
+ connections from the target instance for this role in `pg_hba.conf`, preferably
+ via TLS (see ["About the replication user"](#about-the-replication-user) below)
+- target must be able to successfully connect to the source PostgreSQL instance
+ using a role with `REPLICATION LOGIN` privileges
+
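+As a sanity check before bootstrapping, you can inspect the relevant settings
+on the source instance, for example with `psql` (the host name below is just
+a placeholder for your own source):
+
+```console
+psql -h source-db.foo.com -U postgres -c 'SHOW max_wal_senders' -c 'SHOW wal_level'
+```
+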
+!!! Seealso
+ For further information, please refer to the
+ ["Planning" section for Warm Standby](https://www.postgresql.org/docs/current/warm-standby.html#STANDBY-PLANNING),
+ the
+ [`pg_basebackup` page](https://www.postgresql.org/docs/current/app-pgbasebackup.html)
+ and the
+ ["High Availability, Load Balancing, and Replication" chapter](https://www.postgresql.org/docs/current/high-availability.html)
+ in the PostgreSQL documentation.
+
+### About the replication user
+
+As explained in the requirements section, you need to have a user
+with either the `SUPERUSER` or, preferably, just the `REPLICATION`
+privilege in the source instance.
+
+If the source database is created with Cloud Native PostgreSQL, you
+can reuse the `streaming_replica` user and take advantage of client
+TLS certificate authentication (which, by default, is the only allowed
+connection method for `streaming_replica`).
+
+For all other cases, including outside Kubernetes, please verify that
+you already have a user with the `REPLICATION` privilege, or create
+a new one by following the instructions below.
+
+As the `postgres` user on the source system, run:
+
+```console
+createuser -P --replication streaming_replica
+```
+
+Enter the password at the prompt and save it for later, as you
+will need to add it to a secret in the target instance.
+
+!!! Note
+ Although the name is not important, we will use `streaming_replica`
+ for the sake of simplicity. Feel free to change it as you like,
+ provided you adapt the instructions in the following sections.
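+
+If you prefer SQL, an equivalent statement, run as a superuser on the source
+instance, would be the following (a sketch; pick your own password):
+
+```
+CREATE ROLE streaming_replica WITH REPLICATION LOGIN PASSWORD 'changeme';
+```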
+
+### Username/Password authentication
+
+The first authentication method supported by Cloud Native PostgreSQL
+with the `pg_basebackup` bootstrap is based on username and password matching.
+
+Make sure you have the following information before you start the procedure:
+
+- location of the source instance, identified by a hostname or an IP address
+ and a TCP port
+- replication username (`streaming_replica` for simplicity)
+- password
+
+You might need to add a line similar to the following to the `pg_hba.conf`
+file on the source PostgreSQL instance:
+
+```
+# A more restrictive rule for TLS and IP of origin is recommended
+host replication streaming_replica all md5
+```
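+
+A more restrictive rule, limiting replication connections to TLS and to the
+network of the target cluster, might look like the following (the CIDR is
+just an example to adapt to your environment):
+
+```
+hostssl replication streaming_replica 10.32.0.0/16 md5
+```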
+
+The following manifest creates a new PostgreSQL 13.3 cluster,
+called `target-db`, using the `pg_basebackup` bootstrap method
+to clone an external PostgreSQL cluster defined as `source-db`
+(in the `externalClusters` array). As you can see, the `source-db`
+definition points to the `source-db.foo.com` host and connects as
+the `streaming_replica` user, whose password is stored in the
+`password` key of the `source-db-replica-user` secret.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: target-db
+spec:
+ instances: 3
+ imageName: quay.io/enterprisedb/postgresql:13.3
+
+ bootstrap:
+ pg_basebackup:
+ source: source-db
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: source-db
+ connectionParameters:
+ host: source-db.foo.com
+ user: streaming_replica
+ password:
+ name: source-db-replica-user
+ key: password
+```
+
+All the requirements must be met for the clone operation to work, including
+the same PostgreSQL version (in our case 13.3).
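+
+Note that the `source-db-replica-user` secret referenced in the manifest is
+not created by the operator: you need to create it yourself, for example as
+follows (the password being the one chosen for the replication user):
+
+```
+kubectl create secret generic source-db-replica-user \
+  --from-literal=password=changeme
+```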
+
+### TLS certificate authentication
+
+The second authentication method supported by Cloud Native PostgreSQL
+with the `pg_basebackup` bootstrap is based on TLS client certificates.
+This is the recommended approach from a security standpoint.
+
+The following example clones an existing PostgreSQL cluster (`cluster-example`)
+in the same Kubernetes cluster.
+
+!!! Note
+ This example can be easily adapted to cover an instance that resides
+ outside the Kubernetes cluster.
+
+The manifest defines a new PostgreSQL 13.3 cluster called `cluster-clone-tls`,
+which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
+external cluster. The host is identified by the read/write service
+in the same cluster, while the `streaming_replica` user is authenticated
+thanks to the provided keys, certificate, and certification authority
+information (respectively in the `cluster-example-replication` and
+`cluster-example-ca` secrets).
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-clone-tls
+spec:
+ instances: 3
+ imageName: quay.io/enterprisedb/postgresql:13.3
+
+ bootstrap:
+ pg_basebackup:
+ source: cluster-example
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw.default.svc
+ user: streaming_replica
+ sslmode: verify-full
+ sslKey:
+ name: cluster-example-replication
+ key: tls.key
+ sslCert:
+ name: cluster-example-replication
+ key: tls.crt
+ sslRootCert:
+ name: cluster-example-ca
+ key: ca.crt
+```
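+
+If the source is a cluster managed by Cloud Native PostgreSQL in the same
+namespace, the referenced secrets are created by the operator with default
+names derived from the cluster name. You can verify that they exist before
+applying the manifest (secret names assumed from the defaults):
+
+```
+kubectl get secret cluster-example-replication cluster-example-ca
+```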
+
+### Current limitations
+
+#### Missing tablespace support
+
+Cloud Native PostgreSQL does not currently include full declarative management
+of PostgreSQL global objects, namely roles, databases, and tablespaces.
+While roles and databases are copied from the source instance to the target
+cluster, tablespaces require a capability that this version of
+Cloud Native PostgreSQL is missing: definition and management of additional
+persistent volumes. When dealing with base backups and tablespaces, PostgreSQL
+itself requires that the exact mount points in the source instance
+also exist in the target instance (in our case, the pods in Kubernetes
+that Cloud Native PostgreSQL manages). For this reason, you cannot directly
+migrate to Cloud Native PostgreSQL a PostgreSQL instance that takes advantage
+of tablespaces: you first need to remove them from the source or, if your
+organization requires this feature, contact EDB to prioritize it.
+
+#### Snapshot copy
+
+The `pg_basebackup` method takes a snapshot of the source instance in the form of
+a PostgreSQL base backup. All transactions written from the start of
+the backup until its successful completion will be streamed to the target
+instance using a second connection (see the `--wal-method=stream` option of
+`pg_basebackup`).
+
+Once the backup is completed, the new instance is started on a new timeline
+and diverges from the source.
+For this reason, it is advised to stop all write operations to the source database
+before migrating to the target database in Kubernetes.
+
+!!! Important
+ Before you attempt a migration, you must test both the procedure
+ and the applications. In particular, it is fundamental that you run the migration
+ procedure as many times as needed to systematically measure the downtime of your
+ applications in production. Feel free to contact EDB for assistance.
+
+Future versions of Cloud Native PostgreSQL will enable users to control
+PostgreSQL's continuous recovery mechanism via Write-Ahead Log (WAL) shipping
+by creating a new cluster that is a replica of another PostgreSQL instance.
+This will open up two main use cases:
+
+- replication over different Kubernetes clusters in Cloud Native PostgreSQL
+- *0 cutover time* migrations to Cloud Native PostgreSQL with the `pg_basebackup`
+ bootstrap method
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/certificates.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/certificates.mdx
new file mode 100644
index 00000000000..30169f5b0a1
--- /dev/null
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/certificates.mdx
@@ -0,0 +1,160 @@
+---
+title: 'Certificates'
+originalFilePath: 'src/certificates.md'
+product: 'Cloud Native Operator'
+---
+
+Cloud Native PostgreSQL has been designed to natively support TLS certificates.
+In order to set up a `Cluster`, the operator requires:
+
+- a server Certification Authority (CA) certificate
+- a server TLS certificate signed by the server Certification Authority
+- a client Certification Authority certificate
+- a streaming replication client certificate generated by the client Certification Authority
+
+!!! Note
+ You can find all the secrets used by the cluster and their expiration dates
+ in the cluster's status.
+
+## Operator managed mode
+
+By default, the operator generates a single Certification Authority and uses it
+for both client and server certificates, which are then managed and renewed
+automatically.
+
+### Server CA Secret
+
+The operator generates a self-signed CA and stores it in a generic secret
+containing the following keys:
+
+- `ca.crt`: CA certificate used to validate the server certificate, used as `sslrootcert` in clients' connection strings.
+- `ca.key`: the key used to automatically sign the server TLS certificate
+
+### Server TLS Secret
+
+The operator uses the generated self-signed CA to sign a server TLS
+certificate, stored in a Secret of type `kubernetes.io/tls` and configured to
+be used as `ssl_cert_file` and `ssl_key_file` by the instances, so that clients
+can verify the server's identity and connect securely.
+
+### Server alternative DNS names
+
+You can specify alternative DNS names for the server, which will be part of
+the generated server TLS secret, in addition to the default ones.
+
+## User-provided server certificate mode
+
+If required, you can also provide the two server certificates, generating them
+using a separate component such as [cert-manager](https://cert-manager.io/). In
+order to use a custom server TLS certificate for a Cluster, you must specify
+the following parameters:
+
+- `serverTLSSecret`: the name of a Secret of type `kubernetes.io/tls`,
+ containing the server TLS certificate. It must contain both the standard
+ `tls.crt` and `tls.key` keys.
+- `serverCASecret`: the name of a Secret containing the `ca.crt` key.
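+
+These parameters are set in the `certificates` section of the `Cluster` spec;
+a minimal sketch, assuming secrets named as in the example that follows:
+
+```yaml
+spec:
+  certificates:
+    serverTLSSecret: my-postgresql-server
+    serverCASecret: my-postgresql-server-ca
+```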
+
+!!! Note
+ The operator will still create and manage the two secrets related to client
+ certificates.
+
+See below for a complete example.
+
+### Example
+
+Given the following files:
+
+- `server-ca.crt`: the certificate of the CA that signed the server TLS certificate.
+- `server.crt`: the certificate of the server TLS certificate.
+- `server.key`: the private key of the server TLS certificate.
+
+Create a secret containing the CA certificate:
+
+```
+kubectl create secret generic my-postgresql-server-ca \
+ --from-file=ca.crt=./server-ca.crt
+```
+
+Create a secret with the TLS certificate:
+
+```
+kubectl create secret tls my-postgresql-server \
+ --cert=./server.crt --key=./server.key
+```
+
+Create a `Cluster` referencing those secrets:
+
+```bash
+kubectl apply -f - <
@@ -196,9 +198,10 @@ Native PostgreSQL's exporter:
Similarly, the `pg_version` field of a column definition is not implemented.
-# Monitoring the operator
+## Monitoring the operator
-The operator exposes [Prometheus](https://prometheus.io/) metrics via HTTP on port 8080, named `metrics`.
+The operator internally exposes [Prometheus](https://prometheus.io/) metrics
+via HTTP on port 8080, named `metrics`.
Metrics can be accessed as follows:
@@ -209,9 +212,9 @@ curl http://:8080/metrics
Currently, the operator exposes default `kubebuilder` metrics, see
[kubebuilder documentation](https://book.kubebuilder.io/reference/metrics.html) for more details.
-## Prometheus Operator example
+### Prometheus Operator example
-The deployment operator can be monitored using the
+The operator deployment can be monitored using the
[Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator) by defining the following
[PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/v0.47.1/Documentation/api.md#podmonitor)
resource:
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
index d2d4c9f44fa..14867b69545 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
@@ -58,7 +58,8 @@ Community and published on Quay.io by EnterpriseDB.
You can use any compatible image of PostgreSQL supporting the
primary/standby architecture directly by setting the `imageName`
attribute in the CR. The operator also supports `imagePullSecretsNames`
-to access private container registries.
+to access private container registries, as well as digests in addition to
+tags for finer control of container image immutability.
### Labels and annotations
@@ -130,8 +131,11 @@ allocated UID and SELinux context.
The operator supports basic pod affinity/anti-affinity rules to deploy PostgreSQL
pods on different nodes, based on the selected `topologyKey` (for example `node` or
-`zone`). Additionally, it supports node affinity through the `nodeSelector`
-configuration attribute, as [expected by Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/).
+`zone`). It supports node affinity/anti-affinity through the `nodeSelector`
+configuration attribute, to be specified as [expected by Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/),
+and tolerations through the `tolerations` configuration attribute, which will be applied to all the pods the
+operator creates for a specific Cluster, using the Kubernetes [standard syntax](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/).
+
### License keys
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
index 613db920160..df0455940ff 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
@@ -71,6 +71,17 @@ The **default parameters for PostgreSQL 10 to 12** are:
wal_keep_segments = '32'
```
+!!! Warning
+    It is your responsibility to plan for WAL segment retention in your
+    PostgreSQL cluster and to properly configure either `wal_keep_segments`
+    or `wal_keep_size`, depending on the server version, based on the expected
+    and observed workloads. Until Cloud Native PostgreSQL supports replication
+    slots, and unless you have continuous backup in place, this is currently
+    the only protection against a standby falling out of sync and returning
+    error messages like:
+    `"could not receive data from WAL stream: ERROR: requested WAL segment ************************ has already been removed"`.
+    This requires you to dedicate a part of your `PGDATA` to retain older
+    WAL segments for streaming replication purposes.
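+
+For example, on a PostgreSQL 10 to 12 cluster you can raise the retention by
+overriding the parameter in the `postgresql` section of the `Cluster` spec
+(the value below is only an illustration, size it for your workload):
+
+```yaml
+spec:
+  postgresql:
+    parameters:
+      wal_keep_segments: "64"
+```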
+
The following parameters are **fixed** and exclusively controlled by the operator:
```text
@@ -82,7 +93,7 @@ hot_standby = 'true'
listen_addresses = '*'
port = '5432'
ssl = 'on'
-ssl_ca_file = '/controller/certificates/ca.crt'
+ssl_ca_file = '/controller/certificates/client-ca.crt'
ssl_cert_file = '/controller/certificates/server.crt'
ssl_key_file = '/controller/certificates/server.key'
unix_socket_directories = '/var/run/postgresql'
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/quickstart.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/quickstart.mdx
index 3918b32167c..fcd151e0b49 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/quickstart.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/quickstart.mdx
@@ -181,3 +181,5 @@ spec:
Never use tags like `latest` or `13` in a production environment
as it might lead to unpredictable scenarios in terms of update
policies and version consistency in the cluster.
+   For strictly deterministic and repeatable deployments, you can add the digest
+   to the image name, through the `<image>:<tag>@sha256:<digestValue>` format.
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
index 666072a7322..114a8851414 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
@@ -6,6 +6,55 @@ product: 'Cloud Native Operator'
History of user-visible changes for Cloud Native PostgreSQL.
+## Version 1.5.0
+
+**Release date:** 11 June 2021
+
+Features:
+
+- Introduce the `pg_basebackup` bootstrap method to create a new PostgreSQL
+ cluster as a copy of an existing PostgreSQL instance of the same major
+ version, even outside Kubernetes
+- Add support for Kubernetes’ tolerations in the `Affinity` section of the
+ `Cluster` resource, allowing users to distribute PostgreSQL instances on
+ Kubernetes nodes with the required taint
+- Enable specification of a digest in the image name, through the
+  `<image>:<tag>@sha256:<digestValue>` format, for more deterministic and
+  repeatable deployments
+
+Security Enhancements:
+
+- Customize TLS certificates to authenticate the PostgreSQL server by defining
+ secrets for the server certificate and the related Certification Authority
+ that signed it
+- Raise the `sslmode` for the WAL receiver process of internal and
+ automatically managed streaming replicas from `require` to `verify-ca`
+
+Changes:
+
+- Enhance the `promote` subcommand of the `cnp` plugin for `kubectl` to accept
+ just the node number rather than the whole name of the pod
+- Adopt DNS-1035 validation scheme for cluster names (from which service names
+ are inherited)
+- Enforce streaming replication connection when cloning a standby instance or
+ when bootstrapping using the `pg_basebackup` method
+- Integrate the `Backup` resource with `beginWal`, `endWal`, `beginLSN`,
+ `endLSN`, `startedAt` and `stoppedAt` regarding the physical base backup
+- Documentation improvements:
+ - Provide a list of ports exposed by the operator and the operand container
+ - Introduce the `cnp-bench` helm charts and guidelines for benchmarking the
+ storage and PostgreSQL for database workloads
+- E2E tests enhancements:
+ - Test Kubernetes 1.21
+ - Add test for High Availability of the operator
+ - Add test for node draining
+- Minor bug fixes, including:
+ - Timeout to pg_ctl start during recovery operations too short
+ - Operator not watching over direct events on PVCs
+ - Fix handling of `immediateCheckpoint` and `jobs` parameter in
+ `barmanObjectStore` backups
+ - Empty logs when recovering from a backup
+
## Version 1.4.0
**Release date:** 18 May 2021
@@ -139,4 +188,3 @@ Kubernetes with the following main capabilities:
- Support for synchronous replicas
- Support for node affinity via `nodeSelector` property
- Standard output logging of PostgreSQL error messages
-
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/resource_management.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/resource_management.mdx
index 6257b775957..9fd4381fbe9 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/resource_management.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/resource_management.mdx
@@ -54,7 +54,8 @@ while creating a cluster:
- Specify your required PostgreSQL memory parameters consistently with the pod resources (as you would do
in a VM or physical machine scenario - see below).
- Set up database server pods on a dedicated node using nodeSelector.
- See the ["nodeSelector field of the affinityconfiguration resource on the API reference page"](api_reference.md#affinityconfiguration).
+ See the "nodeSelector" and "tolerations" fields of the
+  ["affinityconfiguration"](api_reference.md#affinityconfiguration) resource on the API reference page.
You can refer to the following example manifest:
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-basicauth.yaml b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-basicauth.yaml
new file mode 100644
index 00000000000..5783b0c38c1
--- /dev/null
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-basicauth.yaml
@@ -0,0 +1,28 @@
+# IMPORTANT: this configuration requires an appropriate line
+# in the host-based access rules allowing replication connections
+# for the postgres user.
+#
+# The following line meets the requirement:
+# - "host replication postgres all md5"
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-clone-basicauth
+spec:
+ instances: 3
+
+ bootstrap:
+ pg_basebackup:
+ source: cluster-example
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw.default.svc
+ user: postgres
+ password:
+ name: cluster-example-superuser
+ key: password
\ No newline at end of file
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-tls.yaml b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-tls.yaml
new file mode 100644
index 00000000000..2b509e63c7f
--- /dev/null
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-clone-tls.yaml
@@ -0,0 +1,29 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-clone-tls
+spec:
+ instances: 3
+
+ bootstrap:
+ pg_basebackup:
+ source: cluster-example
+
+ storage:
+ size: 1Gi
+
+ externalClusters:
+ - name: cluster-example
+ connectionParameters:
+ host: cluster-example-rw.default.svc
+ user: streaming_replica
+ sslmode: verify-full
+ sslKey:
+ name: cluster-example-replication
+ key: tls.key
+ sslCert:
+ name: cluster-example-replication
+ key: tls.crt
+ sslRootCert:
+ name: cluster-example-ca
+ key: ca.crt
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
index 71e497e2baf..ce7e649538f 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
@@ -33,7 +33,7 @@ metadata:
name: cluster-example-full
spec:
description: "Example of cluster"
- imageName: quay.io/enterprisedb/postgresql:13.2
+ imageName: quay.io/enterprisedb/postgresql:13.3
# imagePullSecret is only required if the images are located in a private registry
# imagePullSecrets:
# - name: private_registry_access
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/security.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/security.mdx
index c867b96398c..f811f77de14 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/security.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/security.mdx
@@ -113,13 +113,28 @@ to enable/disable inbound and outbound network access at IP and TCP level.
!!! Important
The operator needs to communicate to each instance on TCP port 8000
- to get information about the status of the PostgreSQL server. Make sure
- you keep this in mind in case you add any network policy.
+ to get information about the status of the PostgreSQL server. Make sure
+ you keep this in mind if you add any network policies, and refer to the
+ "Exposed Ports" section below for the list of ports used by
+ Cloud Native PostgreSQL, in case you need finer control.
Network policies are beyond the scope of this document.
Please refer to the ["Network policies"](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
section of the Kubernetes documentation for further information.
+#### Exposed Ports
+
+Cloud Native PostgreSQL exposes ports at the operator, instance manager, and
+operand levels, as listed in the table below:
+
+System | Port number | Exposing | Name | Certificates | Authentication
+:--------------- | :----------- | :------------------ | :------------------ | :------------ | :--------------
+operator | 9443 | webhook server | `webhook-server` | TLS | Yes
+operator | 8080 | metrics | `metrics` | no TLS | No
+instance manager | 9187 | metrics | `metrics` | no TLS | No
+instance manager | 8000 | status | `status` | no TLS | No
+operand | 5432 | PostgreSQL instance | `postgresql` | optional TLS | Yes
+
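+As a sketch, the table above can inform a `NetworkPolicy` that admits traffic
+only on the ports Cloud Native PostgreSQL actually uses. The pod selector
+label below is an assumption for illustration; adapt it to the labels on your
+own instance pods:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: allow-cnp-ports
+spec:
+  # Hypothetical label; match it to the labels on your instance pods
+  podSelector:
+    matchLabels:
+      postgresql: cluster-example
+  ingress:
+    - ports:
+        - protocol: TCP
+          port: 8000   # instance manager status, queried by the operator
+        - protocol: TCP
+          port: 9187   # instance manager metrics
+        - protocol: TCP
+          port: 5432   # PostgreSQL instance
+```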
### PostgreSQL
The current implementation of Cloud Native PostgreSQL automatically creates
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
index bda8a53b6bb..3e422be50b8 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
@@ -84,7 +84,7 @@ spec:
app: webtest
spec:
containers:
- - image: leonardoce/webtest:1.0.0
+ - image: quay.io/leonardoce/webtest:1.3.0
name: cert-test
volumeMounts:
- name: secret-volume-root-ca
@@ -163,7 +163,7 @@ Output :
version
--------------------------------------------------------------------------------------
------------------
-PostgreSQL 13.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
+PostgreSQL 13.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
8.3.1-5), 64-bit
(1 row)
```
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/storage.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/storage.mdx
index c3028831f95..a26ed1b545c 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/storage.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/storage.mdx
@@ -35,11 +35,37 @@ guarantees higher and more predictable performance.
!!! Warning
Before you deploy a PostgreSQL cluster with Cloud Native PostgreSQL,
- make sure that the storage you are using is recommended for database
+ ensure that the storage you are using is recommended for database
workloads. Our advice is to clearly set performance expectations by
first benchmarking the storage using tools such as [fio](https://fio.readthedocs.io/en/latest/fio_doc.html),
and then the database using [pgbench](https://www.postgresql.org/docs/current/pgbench.html).
+## Benchmarking Cloud Native PostgreSQL
+
+EDB maintains [cnp-bench](https://github.com/EnterpriseDB/cnp-bench),
+an open source set of guidelines and Helm charts for benchmarking Cloud Native PostgreSQL
+in a controlled Kubernetes environment, before deploying the database in production.
+
+Briefly, `cnp-bench` is designed to operate at two levels:
+
+- measuring the performance of the underlying storage using `fio`, with relevant
+ metrics for database workloads such as throughput for sequential reads, sequential
+ writes, random reads and random writes
+- measuring the performance of the database using the default benchmarking tool
+ distributed along with PostgreSQL: `pgbench`
+
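+The same two levels can also be exercised by hand. As a minimal sketch (the
+directory, service host, user, and database names below are placeholders, not
+values mandated by cnp-bench):
+
+```shell
+# Storage level: sequential write throughput with fio on the volume under test
+fio --name=seqwrite --rw=write --bs=1M --size=1G --directory=/mnt/pgdata
+
+# Database level: initialize pgbench tables, then run a 5-minute benchmark
+pgbench -h cluster-example-rw -U app -i -s 50 app
+pgbench -h cluster-example-rw -U app -c 16 -j 4 -T 300 app
+```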
+!!! Important
+ Measuring both storage and database performance is an activity that
+ must be carried out **before the database goes into production**. Such
+ results are extremely valuable not only in the planning phase (e.g.,
+ capacity planning), but also throughout the production lifecycle, especially
+ in emergency situations, when you no longer have the luxury of running this
+ kind of test. Databases change and evolve over time, and so does the
+ distribution of data, potentially affecting performance: knowing the
+ theoretical maximum throughput of sequential reads or writes proves
+ extremely useful in those situations, especially in shared-nothing contexts,
+ where results are not influenced by external workloads.
+ **Know your system, benchmark it.**
+
## Persistent Volume Claim
The operator creates a persistent volume claim (PVC) for each PostgreSQL
@@ -77,6 +103,11 @@ spec:
size: 1Gi
```
+!!! Important
+ Cloud Native PostgreSQL has been designed to be storage class agnostic.
+ As usual, our recommendation is to properly benchmark the storage class
+ in a controlled environment before going into production.
+
## Configuration via a PVC template
To further customize the generated PVCs, you can provide a PVC template inside the Custom Resource,
diff --git a/merge_sources/kubernetes/cloud_native_postgresql/interactive_demo.mdx b/merge_sources/kubernetes/cloud_native_postgresql/interactive_demo.mdx
index f943440f93b..b9d1f93f494 100644
--- a/merge_sources/kubernetes/cloud_native_postgresql/interactive_demo.mdx
+++ b/merge_sources/kubernetes/cloud_native_postgresql/interactive_demo.mdx
@@ -65,7 +65,7 @@ You will see one node called `minikube`. If the status isn't yet "Ready", wait f
Now that the Minikube cluster is running, you can proceed with Cloud Native PostgreSQL installation as described in the ["Installation"](installation_upgrade.md) section:
```shell
-kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.4.0.yaml
+kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.5.0.yaml
__OUTPUT__
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
@@ -245,7 +245,7 @@ curl -sSfL \
sudo sh -s -- -b /usr/local/bin
__OUTPUT__
EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
-EnterpriseDB/kubectl-cnp info found version: 1.4.0 for v1.4.0/linux/x86_64
+EnterpriseDB/kubectl-cnp info found version: 1.5.0 for v1.5.0/linux/x86_64
EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
```
diff --git a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/index.mdx b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/index.mdx
index 565ac395002..bc8068886c5 100644
--- a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/index.mdx
+++ b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: User Guide
-title: "Database Compatibility for Oracle Developer's Guide"
+title: "Database Compatibility for Oracle Developers Guide"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/13/index.html"
diff --git a/product_docs/docs/epas/12/epas_compat_tools_guide/02_edb_loader.mdx b/product_docs/docs/epas/12/epas_compat_tools_guide/02_edb_loader.mdx
index 8014fa788f5..d47d62c7dd1 100644
--- a/product_docs/docs/epas/12/epas_compat_tools_guide/02_edb_loader.mdx
+++ b/product_docs/docs/epas/12/epas_compat_tools_guide/02_edb_loader.mdx
@@ -24,9 +24,7 @@ These features are explained in detail in the following sections.
!!! Note
The following are important version compatibility restrictions between the EDB\*Loader client and the database server.
-- When you invoke the EDB\*Loader program (called `edbldr`), you pass in parameters and directive information to the database server. **We strongly recommend that the version 12 EDB\*Loader client (the edbldr program supplied with Advanced Server 12) be used to load data only into version 12 of the database server. In general, the EDB\*Loader client and database server should be the same version.**
-
-- Use of a version 12, 11, 10, or 9.6 EDB\*Loader client is not supported for Advanced Server with version 9.2 or earlier.
+When you invoke the EDB\*Loader program (called `edbldr`), you pass in parameters and directive information to the database server. **We strongly recommend that the version 12 EDB\*Loader client (the edbldr program supplied with Advanced Server 12) be used to load data only into version 12 of the database server. In general, the EDB\*Loader client and database server should be the same version.**
diff --git a/product_docs/docs/epas/12/epas_compat_tools_guide/index.mdx b/product_docs/docs/epas/12/epas_compat_tools_guide/index.mdx
index 42126bf0504..06f493cd0e4 100644
--- a/product_docs/docs/epas/12/epas_compat_tools_guide/index.mdx
+++ b/product_docs/docs/epas/12/epas_compat_tools_guide/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: Tools and Utilities Guide
-title: "Database Compatibility for Oracle Developer’s Tools and Utilities Guide"
+title: "Database Compatibility for Oracle Developers Tools and Utilities Guide"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/13/index.html"
diff --git a/product_docs/docs/epas/12/epas_rel_notes/index.mdx b/product_docs/docs/epas/12/epas_rel_notes/index.mdx
index b1688971515..f289efea3d8 100644
--- a/product_docs/docs/epas/12/epas_rel_notes/index.mdx
+++ b/product_docs/docs/epas/12/epas_rel_notes/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: Release Notes
-title: "EDB Postgres Advanced Server 12 Release Notes"
+title: "EDB Postgres Advanced Server Release Notes"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
diff --git a/product_docs/docs/epas/13/edb_pgadmin_linux_qs/index.mdx b/product_docs/docs/epas/13/edb_pgadmin_linux_qs/index.mdx
index 3f65778bdba..890ce29fbef 100644
--- a/product_docs/docs/epas/13/edb_pgadmin_linux_qs/index.mdx
+++ b/product_docs/docs/epas/13/edb_pgadmin_linux_qs/index.mdx
@@ -1,5 +1,5 @@
---
-title: "EDB pgAdmin4 Quickstart Linux Guide for EPAS"
+title: "EDB pgAdmin4 Quickstart Linux Guide"
legacyRedirects:
- "/edb-docs/d/pgadmin-4/quick-start/quick-start-guide/4.26/index.html"
---
diff --git a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/index.mdx b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/index.mdx
index d340bafe379..0911a75f66c 100644
--- a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/index.mdx
+++ b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: User Guide
-title: "Database Compatibility for Oracle Developer's Guide"
+title: "Database Compatibility for Oracle Developers Guide"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/13/index.html"
diff --git a/product_docs/docs/epas/13/epas_compat_tools_guide/02_edb_loader.mdx b/product_docs/docs/epas/13/epas_compat_tools_guide/02_edb_loader.mdx
index ba46c1c30da..31b6f3cfc58 100644
--- a/product_docs/docs/epas/13/epas_compat_tools_guide/02_edb_loader.mdx
+++ b/product_docs/docs/epas/13/epas_compat_tools_guide/02_edb_loader.mdx
@@ -34,8 +34,6 @@ These features are explained in detail in the following sections.
psycopg2 copy_from
```
-- Use of a version 13, 12, 11, 10, or 9.6 EDB\*Loader client is not supported for Advanced Server with version 9.2 or earlier.
-
## Data Loading Methods
diff --git a/product_docs/docs/epas/13/epas_compat_tools_guide/index.mdx b/product_docs/docs/epas/13/epas_compat_tools_guide/index.mdx
index feb3a2d897f..b0b83983c5a 100644
--- a/product_docs/docs/epas/13/epas_compat_tools_guide/index.mdx
+++ b/product_docs/docs/epas/13/epas_compat_tools_guide/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: Tools and Utilities Guide
-title: "Database Compatibility for Oracle Developer’s Tools and Utilities Guide"
+title: "Database Compatibility for Oracle Developers Tools and Utilities Guide"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/13/index.html"
diff --git a/product_docs/docs/epas/13/epas_qs_windows/index.mdx b/product_docs/docs/epas/13/epas_qs_windows/index.mdx
index 9b64d41c4d8..7bc8729be2e 100644
--- a/product_docs/docs/epas/13/epas_qs_windows/index.mdx
+++ b/product_docs/docs/epas/13/epas_qs_windows/index.mdx
@@ -30,8 +30,6 @@ Among the components that make up an Advanced Server deployment are:
**Supporting Functions, Procedures, Data Types, Index Types, Operators, Utilities, and Aggregates** - Advanced Server includes a number of features that help you manage your data.
-Please note: The `data` directory of a production database should not be stored on an NFS file system.
-
**Installation Prerequisites**
**User Privileges**
diff --git a/product_docs/docs/epas/13/epas_rel_notes/index.mdx b/product_docs/docs/epas/13/epas_rel_notes/index.mdx
index 254f0438167..d68086306a3 100644
--- a/product_docs/docs/epas/13/epas_rel_notes/index.mdx
+++ b/product_docs/docs/epas/13/epas_rel_notes/index.mdx
@@ -1,6 +1,6 @@
---
navTitle: Release Notes
-title: "EDB Postgres Advanced Server 13 Release Notes"
+title: "EDB Postgres Advanced Server Release Notes"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx
index 4c9969fe4e6..afaf5d7b204 100644
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx
@@ -14,7 +14,7 @@ The Hadoop Foreign Data Wrapper is supported on the following platforms:
> - RHEL 8.x and 7.x
> - CentOS 8.x and 7.x
-> - OEL 8.x and 7.x
+> - OL 8.x and 7.x
> - Ubuntu 20.04 and 18.04 LTS
> - Debian 10.x and 9.x
diff --git a/product_docs/docs/mongo_data_adapter/5.2.8/02_requirements_overview.mdx b/product_docs/docs/mongo_data_adapter/5.2.8/02_requirements_overview.mdx
index 810cc135c95..6078203da1c 100644
--- a/product_docs/docs/mongo_data_adapter/5.2.8/02_requirements_overview.mdx
+++ b/product_docs/docs/mongo_data_adapter/5.2.8/02_requirements_overview.mdx
@@ -14,7 +14,7 @@ The MongoDB Foreign Data Wrapper is supported on the following platforms:
> - RHEL 8.x/7.x
> - CentOS 8.x/7.x
-> - OEL 8.x/7.x
+> - OL 8.x/7.x
> - Ubuntu 20.04/18.04 LTS
> - Debian 10.x/9.x
diff --git a/product_docs/docs/mysql_data_adapter/2.5.5/02_requirements_overview.mdx b/product_docs/docs/mysql_data_adapter/2.5.5/02_requirements_overview.mdx
index 3b8ed138c80..298bdbc1073 100644
--- a/product_docs/docs/mysql_data_adapter/2.5.5/02_requirements_overview.mdx
+++ b/product_docs/docs/mysql_data_adapter/2.5.5/02_requirements_overview.mdx
@@ -14,7 +14,7 @@ The MySQL Foreign Data Wrapper is supported on the following platforms:
> - RHEL 8.x/7.x
> - CentOS 8.x/7.x
-> - OEL 8.x/7.x
+> - OL 8.x/7.x
> - Ubuntu 20.04/18.04 LTS
> - Debian 10.x/9.x
diff --git a/product_docs/docs/mysql_data_adapter/2.6.0/02_requirements_overview.mdx b/product_docs/docs/mysql_data_adapter/2.6.0/02_requirements_overview.mdx
index c5d8ce0b2f9..76e502543e2 100644
--- a/product_docs/docs/mysql_data_adapter/2.6.0/02_requirements_overview.mdx
+++ b/product_docs/docs/mysql_data_adapter/2.6.0/02_requirements_overview.mdx
@@ -14,7 +14,7 @@ The MySQL Foreign Data Wrapper is certified with EDB Postgres Advanced Server 9.
- RHEL 8.x/7.x
- CentOS 8.x/7.x
-- OEL 8.x/7.x
+- OL 8.x/7.x
- Ubuntu 20.04/18.04 LTS
- Debian 10.x/9.x
@@ -24,7 +24,7 @@ The MySQL Foreign Data Wrapper is certified with EDB Postgres Advanced Server 9.
- RHEL 7.x
- CentOS 7.x
-- OEL 7.x
+- OL 7.x
- Ubuntu 18.04 LTS
- Debian 10.x/9.x