- kind: VolumeSnapshot
- apiGroup: snapshot.storage.k8s.io
-```
-
-The `kubectl cnp snapshot` command is able to take consistent snapshots of a
-replica through a technique known as *cold backup*, by fencing the standby
-before taking a physical copy of the volumes. For details, please refer to
-["Snapshotting a Postgres cluster"](kubectl-plugin/#snapshotting-a-postgres-cluster).
-
-#### Additional considerations
-
-Whether you recover from a recovery object store or an existing `Backup`
-resource, the following considerations apply:
-
-- The application database name and the application database user are preserved
- from the backup that is being restored. The operator does not currently attempt
- to back up the underlying secrets, as this is part of the usual maintenance
- activity of the Kubernetes cluster itself.
-- In case you don't supply any `superuserSecret`, a new one is automatically
- generated with a secure and random password. The secret is then used to
- reset the password for the `postgres` user of the cluster.
-- By default, the recovery will continue up to the latest
- available WAL on the default target timeline (`current` for PostgreSQL up to
- 11, `latest` for version 12 and above).
- You can optionally specify a `recoveryTarget` to perform a point in time
- recovery (see the ["Point in time recovery" section](#point-in-time-recovery-pitr)).
-
-!!! Important
- Consider using the `barmanObjectStore.wal.maxParallel` option to speed
- up WAL fetching from the archive by concurrently downloading the transaction
- logs from the recovery object store.
-
-#### Point in time recovery (PITR)
-
-Instead of replaying all the WALs up to the latest one, we can ask PostgreSQL
-to stop replaying WALs at any given point in time, after having extracted a
-base backup. PostgreSQL uses this technique to achieve *point-in-time* recovery
-(PITR).
-
-!!! Note
- PITR is available from recovery object stores as well as `Backup` objects.
-
-The operator will generate the configuration parameters required for this
-feature to work when a recovery target is specified, as in the following
-example that uses a recovery object store in Azure and a timestamp-based
-goal:
-
-```yaml
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-metadata:
- name: cluster-restore-pitr
-spec:
- instances: 3
-
- storage:
- size: 5Gi
-
- bootstrap:
- recovery:
- source: clusterBackup
- recoveryTarget:
- targetTime: "2020-11-26 15:22:00.00000+00"
-
- externalClusters:
- - name: clusterBackup
- barmanObjectStore:
- destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
- azureCredentials:
- storageAccount:
- name: recovery-object-store-secret
- key: storage_account_name
- storageKey:
- name: recovery-object-store-secret
- key: storage_account_key
- wal:
- maxParallel: 8
-```
-
-You might have noticed that in the above example you only had to specify
-the `targetTime` in the form of a timestamp, without having to worry about
-specifying the base backup from which to start the recovery.
-
-The `backupID` option allows you to specify the base backup from which to
-initiate the recovery process. By default, this value is empty.
-
-If you assign a value to it (in the form of a Barman backup ID), the operator
-will use that backup as the base for the recovery.
-
-!!! Important
- You need to make sure that such a backup exists and is accessible.
-
-If the backup ID is not specified, the operator will automatically detect the
-base backup for the recovery as follows:
-
-- when you use `targetTime` or `targetLSN`, the operator selects the closest
- backup that was completed before that target
-- otherwise the operator selects the last available backup in chronological
- order.
-
-Here are the recovery target criteria you can use:
-
-targetTime
-: time stamp up to which recovery will proceed, expressed in
- [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) format
- (the precise stopping point is also influenced by the `exclusive` option)
-
-targetXID
-: transaction ID up to which recovery will proceed
- (the precise stopping point is also influenced by the `exclusive` option);
- keep in mind that while transaction IDs are assigned sequentially at
- transaction start, transactions can complete in a different numeric order.
- The transactions that will be recovered are those that committed before
- (and optionally including) the specified one
-
-targetName
-: named restore point (created with `pg_create_restore_point()`) to which
- recovery will proceed
-
-targetLSN
-: LSN of the write-ahead log location up to which recovery will proceed
- (the precise stopping point is also influenced by the `exclusive` option)
-
-targetImmediate
-: recovery should end as soon as a consistent state is reached - i.e. as early
- as possible. When restoring from an online backup, this means the point where
- taking the backup ended
-
-!!! Important
- While the operator is able to automatically retrieve the closest backup
- when either `targetTime` or `targetLSN` is specified, this is not possible
- for the remaining targets: `targetName`, `targetXID`, and `targetImmediate`.
- In such cases, it is important to specify `backupID`, unless you are OK with
- the last available backup in the catalog.
-
-The example below uses a `targetName` based recovery target:
-
-```yaml
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-[...]
- bootstrap:
- recovery:
- source: clusterBackup
- recoveryTarget:
- backupID: 20220616T142236
- targetName: 'restore_point_1'
-[...]
-```
-
-You can choose only a single one among the targets above in each
-`recoveryTarget` configuration.
-
-Additionally, you can specify `targetTLI` to force recovery to a specific
-timeline.
-
-By default, the previous parameters are considered to be inclusive, stopping
-just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
-You can request exclusive behavior, stopping right before the recovery
-target, by setting the `exclusive` parameter to `true`, as in the following
-example relying on a blob container in Azure:
-
-```yaml
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-metadata:
- name: cluster-restore-pitr
-spec:
- instances: 3
-
- storage:
- size: 5Gi
-
- bootstrap:
- recovery:
- source: clusterBackup
- recoveryTarget:
- backupID: 20220616T142236
- targetName: "maintenance-activity"
- exclusive: true
-
- externalClusters:
- - name: clusterBackup
- barmanObjectStore:
- destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
- azureCredentials:
- storageAccount:
- name: recovery-object-store-secret
- key: storage_account_name
- storageKey:
- name: recovery-object-store-secret
- key: storage_account_key
- wal:
- maxParallel: 8
-```
-
-#### Configure the application database
-
-For the recovered cluster, we can configure the application database name and
-credentials with additional configuration. To update the application database
-credentials, we can generate our own passwords, store them as secrets, and
-update the database to use those secrets. Alternatively, we can let the
-operator generate a secret with a secure, random password. Please refer to the
-["Bootstrap an empty cluster"](#bootstrap-an-empty-cluster-initdb)
-section for more information about secrets.
-
-The following example configures the application database `app` with owner
-`app` and the supplied secret `app-secret`.
-
-```yaml
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-[...]
-spec:
- bootstrap:
- recovery:
- database: app
- owner: app
- secret:
- name: app-secret
- [...]
-```
-
-With the above configuration, the following will happen after recovery is completed:
-
-1. If database `app` does not exist, a new database `app` will be created.
-2. If user `app` does not exist, a new user `app` will be created.
-3. If user `app` is not the owner of the database, user `app` will be granted
-   ownership of database `app`.
-4. If the value of `username` matches the value of `owner` in the secret, the
-   password of the application database will be changed to the value of
-   `password` in the secret.
-
-!!! Important
- For a replica cluster with replica mode enabled, the operator will not
- create any database or user in the PostgreSQL instance, as these will be
- recovered from the original cluster.
+Given the many possibilities, methods, and combinations that the
+EDB Postgres for Kubernetes operator provides for backup and recovery, please refer
+to the ["Recovery" section](recovery.md).
### Bootstrap from a live cluster (`pg_basebackup`)
@@ -900,7 +503,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```
-The following manifest creates a new PostgreSQL 15.3 cluster,
+The following manifest creates a new PostgreSQL 16.0 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -915,7 +518,7 @@ metadata:
name: target-db
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:15.3
+ imageName: quay.io/enterprisedb/postgresql:16.0
bootstrap:
pg_basebackup:
@@ -935,7 +538,7 @@ spec:
```
All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 15.3).
+the same PostgreSQL version (in our case 16.0).
#### TLS certificate authentication
@@ -950,7 +553,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.
-The manifest defines a new PostgreSQL 15.3 cluster called `cluster-clone-tls`,
+The manifest defines a new PostgreSQL 16.0 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -965,7 +568,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:15.3
+ imageName: quay.io/enterprisedb/postgresql:16.0
bootstrap:
pg_basebackup:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cloudnative-pg.v1.mdx b/product_docs/docs/postgres_for_kubernetes/1/cloudnative-pg.v1.mdx
new file mode 100644
index 00000000000..afada8f512f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/cloudnative-pg.v1.mdx
@@ -0,0 +1,4151 @@
+---
+title: 'API Reference'
+originalFilePath: 'src/cloudnative-pg.v1.md'
+---
+
+Package v1 contains API Schema definitions for the postgresql v1 API group
+
+## Resource Types
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+## Backup {#postgresql-k8s-enterprisedb-io-v1-Backup}
+
+Backup is the Schema for the backups API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Backup` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`BackupSpec`](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) | Specification of the desired behavior of the backup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`BackupStatus`](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) | Most recently observed status of the backup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+## Cluster {#postgresql-k8s-enterprisedb-io-v1-Cluster}
+
+Cluster is the Schema for the PostgreSQL API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Cluster` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ClusterSpec`](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) | Specification of the desired behavior of the cluster. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`ClusterStatus`](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus) | Most recently observed status of the cluster. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+## Pooler {#postgresql-k8s-enterprisedb-io-v1-Pooler}
+
+Pooler is the Schema for the poolers API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `Pooler` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`PoolerSpec`](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) | Specification of the desired behavior of the Pooler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`PoolerStatus`](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus) | Most recently observed status of the Pooler. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
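As an illustrative sketch only (the `PoolerSpec` fields referenced here are not shown in this excerpt and are assumptions), a minimal `Pooler` manifest pointing at a cluster might look like the following; all names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw        # placeholder name
spec:
  cluster:
    name: cluster-example        # the Cluster to pool connections for
  instances: 3
  type: rw                       # route traffic to the read/write service
  pgbouncer:
    poolMode: session
```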
+## ScheduledBackup {#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup}
+
+ScheduledBackup is the Schema for the scheduledbackups API
+
+
+| Field | Description |
+| ----- | ----------- |
+| `apiVersion` [Required] `string` | `postgresql.k8s.enterprisedb.io/v1` |
+| `kind` [Required] `string` | `ScheduledBackup` |
+| `metadata` [Required] `meta/v1.ObjectMeta` | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
+| `spec` [Required] [`ScheduledBackupSpec`](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) | Specification of the desired behavior of the ScheduledBackup. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+| `status` [`ScheduledBackupStatus`](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupStatus) | Most recently observed status of the ScheduledBackup. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+ |
+
+
+
+
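As a hedged sketch (the `schedule` and `cluster` fields belong to `ScheduledBackupSpec`, which is not shown in this excerpt), a recurring backup request might look like the following; names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example           # placeholder name
spec:
  # Six-field cron expression (seconds first): run every day at midnight
  schedule: "0 0 0 * * *"
  cluster:
    name: cluster-example        # the Cluster to back up
```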
+## AffinityConfiguration {#postgresql-k8s-enterprisedb-io-v1-AffinityConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+AffinityConfiguration contains the info we need to create the
+affinity rules for Pods
+
+
+Field | Description |
+
+enablePodAntiAffinity
+bool
+ |
+
+ Activates anti-affinity for the pods. The operator will define pods
+anti-affinity unless this field is explicitly set to false
+ |
+
+topologyKey
+string
+ |
+
+ TopologyKey to use for anti-affinity configuration. See k8s documentation
+for more info on that
+ |
+
+nodeSelector
+map[string]string
+ |
+
+ NodeSelector is map of key-value pairs used to define the nodes on which
+the pods can run.
+More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ |
+
+nodeAffinity
+core/v1.NodeAffinity
+ |
+
+ NodeAffinity describes node affinity scheduling rules for the pod.
+More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
+ |
+
+tolerations
+[]core/v1.Toleration
+ |
+
+ Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run
+on tainted nodes.
+More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
+ |
+
+podAntiAffinityType
+string
+ |
+
+ PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instance has to be
+considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or
+"required". Setting it to "required", could lead to instances remaining pending until new kubernetes nodes are
+added if all the existing nodes don't match the required pod anti-affinity rule.
+More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
+ |
+
+additionalPodAntiAffinity
+core/v1.PodAntiAffinity
+ |
+
+ AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated
+by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false.
+ |
+
+additionalPodAffinity
+core/v1.PodAffinity
+ |
+
+ AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods.
+ |
+
+
+
+
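To show how these fields fit together inside a `Cluster` spec, here is an illustrative sketch that only uses fields listed above; label keys and values are examples, not requirements:

```yaml
spec:
  affinity:
    enablePodAntiAffinity: true
    podAntiAffinityType: required        # "preferred" is the default
    topologyKey: kubernetes.io/hostname
    nodeSelector:
      workload: postgres                 # example node label
    tolerations:
      - key: postgres                    # allow scheduling on tainted nodes
        operator: Exists
        effect: NoSchedule
```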
+## AzureCredentials {#postgresql-k8s-enterprisedb-io-v1-AzureCredentials}
+
+**Appears in:**
+
+- [BarmanCredentials](#postgresql-k8s-enterprisedb-io-v1-BarmanCredentials)
+
+AzureCredentials is the type for the credentials to be used to upload
+files to Azure Blob Storage. The connection string contains all the needed
+information. If the connection string is not specified, we'll need the
+storage account name and also one (and only one) of `storageKey` or
+`storageSasToken`.
+
+
+
+Field | Description |
+
+connectionString
+SecretKeySelector
+ |
+
+ The connection string to be used
+ |
+
+storageAccount
+SecretKeySelector
+ |
+
+ The storage account where to upload data
+ |
+
+storageKey
+SecretKeySelector
+ |
+
+ The storage account key to be used in conjunction
+with the storage account name
+ |
+
+storageSasToken
+SecretKeySelector
+ |
+
+ A shared-access-signature to be used in conjunction with
+the storage account name
+ |
+
+inheritFromAzureAD
+bool
+ |
+
+ Use the Azure AD based authentication without providing explicitly the keys.
+ |
+
+
+
+
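As a sketch of how these credentials are typically referenced from a `barmanObjectStore` section (the secret name, account, and container are placeholders):

```yaml
barmanObjectStore:
  destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
  azureCredentials:
    storageAccount:
      name: recovery-object-store-secret   # placeholder secret
      key: storage_account_name
    storageKey:
      name: recovery-object-store-secret
      key: storage_account_key
```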
+## BackupConfiguration {#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BackupConfiguration defines how the backup of the cluster are taken.
+The supported backup methods are BarmanObjectStore and VolumeSnapshot.
+For details and examples refer to the Backup and Recovery section of the
+documentation
+
+
+Field | Description |
+
+volumeSnapshot
+VolumeSnapshotConfiguration
+ |
+
+ VolumeSnapshot provides the configuration for the execution of volume snapshot backups.
+ |
+
+barmanObjectStore
+BarmanObjectStoreConfiguration
+ |
+
+ The configuration for the barman-cloud tool suite
+ |
+
+retentionPolicy
+string
+ |
+
+ RetentionPolicy is the retention policy to be used for backups
+and WALs (i.e. '60d'). The retention policy is expressed in the form
+of XXu where XX is a positive integer and u is in [dwm] -
+days, weeks, months.
+It's currently only applicable when using the BarmanObjectStore method.
+ |
+
+target
+BackupTarget
+ |
+
+ The policy to decide which instance should perform backups. Available
+options are empty string, which will default to prefer-standby policy,
+primary to have backups run always on primary instances, prefer-standby
+to have backups run preferably on the most updated standby, if available.
+ |
+
+
+
+
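A hedged example of a cluster-level backup configuration combining the fields above (the bucket path and retention value are placeholders):

```yaml
spec:
  backup:
    retentionPolicy: "30d"        # days, weeks, or months, e.g. 60d
    target: prefer-standby        # run backups on the most updated standby
    barmanObjectStore:
      destinationPath: s3://my-bucket/backups/
```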
+## BackupMethod {#postgresql-k8s-enterprisedb-io-v1-BackupMethod}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupMethod defines the way of executing the physical base backups of
+the selected PostgreSQL instance
+
+## BackupPhase {#postgresql-k8s-enterprisedb-io-v1-BackupPhase}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupPhase is the phase of the backup
+
+## BackupSnapshotStatus {#postgresql-k8s-enterprisedb-io-v1-BackupSnapshotStatus}
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+BackupSnapshotStatus the fields exclusive to the volumeSnapshot method backup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `snapshots` `[]string` | The snapshot lists, populated if it is a snapshot type backup |
+
+
+
+
+## BackupSource {#postgresql-k8s-enterprisedb-io-v1-BackupSource}
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+BackupSource contains the backup we need to restore from, plus some
+information that could be needed to correctly restore it.
+
+
+Field | Description |
+
+LocalObjectReference
+LocalObjectReference
+ |
+(Members of LocalObjectReference are embedded into this type.)
+ No description provided. |
+
+endpointCA
+SecretKeySelector
+ |
+
+ EndpointCA store the CA bundle of the barman endpoint.
+Useful when using self-signed certificates to avoid
+errors with certificate issuer and barman-cloud-wal-archive.
+ |
+
+
+
+
+## BackupSpec {#postgresql-k8s-enterprisedb-io-v1-BackupSpec}
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupSpec defines the desired state of Backup
+
+
+Field | Description |
+
+cluster [Required]
+LocalObjectReference
+ |
+
+ The cluster to backup
+ |
+
+target
+BackupTarget
+ |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to cluster.spec.backup.target .
+Available options are empty string, primary and prefer-standby .
+primary to have backups run always on primary instances,
+prefer-standby to have backups run preferably on the most updated
+standby, if available.
+ |
+
+method
+BackupMethod
+ |
+
+ The backup method to be used, possible options are barmanObjectStore
+and volumeSnapshot . Defaults to: barmanObjectStore .
+ |
+
+
+
+
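An on-demand `Backup` request using the fields above could look like this sketch; the resource and cluster names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: backup-example
spec:
  cluster:
    name: cluster-example
  method: barmanObjectStore      # default method
  target: prefer-standby
```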
+## BackupStatus {#postgresql-k8s-enterprisedb-io-v1-BackupStatus}
+
+**Appears in:**
+
+- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
+
+BackupStatus defines the observed state of Backup
+
+
+Field | Description |
+
+BarmanCredentials
+BarmanCredentials
+ |
+(Members of BarmanCredentials are embedded into this type.)
+ The potential credentials for each cloud provider
+ |
+
+endpointCA
+SecretKeySelector
+ |
+
+ EndpointCA store the CA bundle of the barman endpoint.
+Useful when using self-signed certificates to avoid
+errors with certificate issuer and barman-cloud-wal-archive.
+ |
+
+endpointURL
+string
+ |
+
+ Endpoint to be used to upload data to the cloud,
+overriding the automatic endpoint discovery
+ |
+
+destinationPath
+string
+ |
+
+ The path where to store the backup (i.e. s3://bucket/path/to/folder)
+this path, with different destination folders, will be used for WALs
+and for data. This may not be populated in case of errors.
+ |
+
+serverName
+string
+ |
+
+ The server name on S3, the cluster name is used if this
+parameter is omitted
+ |
+
+encryption
+string
+ |
+
+ Encryption method required to S3 API
+ |
+
+backupId
+string
+ |
+
+ The ID of the Barman backup
+ |
+
+backupName
+string
+ |
+
+ The Name of the Barman backup
+ |
+
+phase
+BackupPhase
+ |
+
+ The last backup status
+ |
+
+startedAt
+meta/v1.Time
+ |
+
+ When the backup was started
+ |
+
+stoppedAt
+meta/v1.Time
+ |
+
+ When the backup was terminated
+ |
+
+beginWal
+string
+ |
+
+ The starting WAL
+ |
+
+endWal
+string
+ |
+
+ The ending WAL
+ |
+
+beginLSN
+string
+ |
+
+ The starting xlog
+ |
+
+endLSN
+string
+ |
+
+ The ending xlog
+ |
+
+error
+string
+ |
+
+ The detected error
+ |
+
+commandOutput
+string
+ |
+
+ Unused. Retained for compatibility with old versions.
+ |
+
+commandError
+string
+ |
+
+ The backup command output in case of error
+ |
+
+instanceID
+InstanceID
+ |
+
+ Information to identify the instance where the backup has been taken from
+ |
+
+snapshotBackupStatus
+BackupSnapshotStatus
+ |
+
+ Status of the volumeSnapshot backup
+ |
+
+method
+BackupMethod
+ |
+
+ The backup method being used
+ |
+
+
+
+
+## BackupTarget {#postgresql-k8s-enterprisedb-io-v1-BackupTarget}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+BackupTarget describes the preferred targets for a backup
+
+## BarmanCredentials {#postgresql-k8s-enterprisedb-io-v1-BarmanCredentials}
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+- [BarmanObjectStoreConfiguration](#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
+
+BarmanCredentials an object containing the potential credentials for each cloud provider
+
+
+Field | Description |
+
+googleCredentials
+GoogleCredentials
+ |
+
+ The credentials to use to upload data to Google Cloud Storage
+ |
+
+s3Credentials
+S3Credentials
+ |
+
+ The credentials to use to upload data to S3
+ |
+
+azureCredentials
+AzureCredentials
+ |
+
+ The credentials to use to upload data to Azure Blob Storage
+ |
+
+
+
+
+## BarmanObjectStoreConfiguration {#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration}
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+- [ExternalCluster](#postgresql-k8s-enterprisedb-io-v1-ExternalCluster)
+
+BarmanObjectStoreConfiguration contains the backup configuration
+using Barman against an S3-compatible object storage
+
+
+Field | Description |
+
+BarmanCredentials
+BarmanCredentials
+ |
+(Members of BarmanCredentials are embedded into this type.)
+ The potential credentials for each cloud provider
+ |
+
+endpointURL
+string
+ |
+
+ Endpoint to be used to upload data to the cloud,
+overriding the automatic endpoint discovery
+ |
+
+endpointCA
+SecretKeySelector
+ |
+
+ EndpointCA store the CA bundle of the barman endpoint.
+Useful when using self-signed certificates to avoid
+errors with certificate issuer and barman-cloud-wal-archive
+ |
+
+destinationPath [Required]
+string
+ |
+
+ The path where to store the backup (i.e. s3://bucket/path/to/folder)
+this path, with different destination folders, will be used for WALs
+and for data
+ |
+
+serverName
+string
+ |
+
+ The server name on S3, the cluster name is used if this
+parameter is omitted
+ |
+
+wal
+WalBackupConfiguration
+ |
+
+ The configuration for the backup of the WAL stream.
+When not defined, WAL files will be stored uncompressed and may be
+unencrypted in the object store, according to the bucket default policy.
+ |
+
+data
+DataBackupConfiguration
+ |
+
+ The configuration to be used to backup the data files
+When not defined, base backups files will be stored uncompressed and may
+be unencrypted in the object store, according to the bucket default
+policy.
+ |
+
+tags
+map[string]string
+ |
+
+ Tags is a list of key value pairs that will be passed to the
+Barman --tags option.
+ |
+
+historyTags
+map[string]string
+ |
+
+ HistoryTags is a list of key value pairs that will be passed to the
+Barman --history-tags option.
+ |
+
+
+
+
+## BootstrapConfiguration {#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+BootstrapConfiguration contains information about how to create the PostgreSQL
+cluster. Only a single bootstrap method can be defined among the supported
+ones. initdb will be used as the bootstrap method if left
+unspecified. Refer to the Bootstrap page of the documentation for more
+information.
+
+
+Field | Description |
+
+initdb
+BootstrapInitDB
+ |
+
+ Bootstrap the cluster via initdb
+ |
+
+recovery
+BootstrapRecovery
+ |
+
+ Bootstrap the cluster from a backup
+ |
+
+pg_basebackup
+BootstrapPgBaseBackup
+ |
+
+ Bootstrap the cluster taking a physical backup of another compatible
+PostgreSQL instance
+ |
+
+
+
+
+## BootstrapInitDB {#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB}
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapInitDB is the configuration of the bootstrap process when
+initdb is used
+Refer to the Bootstrap page of the documentation for more information.
+
+
+Field | Description |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app .
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
+
+redwood
+bool
+ |
+
+ If we need to enable/disable Redwood compatibility. Requires
+EPAS and for EPAS defaults to true
+ |
+
+options
+[]string
+ |
+
+ The list of options that must be passed to initdb when creating the cluster.
+Deprecated: This could lead to inconsistent configurations,
+please use the explicit provided parameters instead.
+If defined, explicit values will be ignored.
+ |
+
+dataChecksums
+bool
+ |
+
+ Whether the -k option should be passed to initdb,
+enabling checksums on data pages (default: false )
+ |
+
+encoding
+string
+ |
+
+ The value to be passed as option --encoding for initdb (default:UTF8 )
+ |
+
+localeCollate
+string
+ |
+
+ The value to be passed as option --lc-collate for initdb (default:C )
+ |
+
+localeCType
+string
+ |
+
+ The value to be passed as option --lc-ctype for initdb (default:C )
+ |
+
+walSegmentSize
+int
+ |
+
+ The value in megabytes (1 to 1024) to be passed to the --wal-segsize
+option for initdb (default: empty, resulting in PostgreSQL default: 16MB)
+ |
+
+postInitSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser immediately
+after the cluster has been created - to be used with extreme care
+(by default empty)
+ |
+
+postInitApplicationSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the application
+database right after is created - to be used with extreme care
+(by default empty)
+ |
+
+postInitTemplateSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the template1
+after the cluster has been created - to be used with extreme care
+(by default empty)
+ |
+
+import
+Import
+ |
+
+ Bootstraps the new cluster by importing data from an existing PostgreSQL
+instance using logical backup (pg_dump and pg_restore )
+ |
+
+postInitApplicationSQLRefs
+PostInitApplicationSQLRefs
+ |
+
+ PostInitApplicationSQLRefs points references to ConfigMaps or Secrets which
+contain SQL files, the general implementation order to these references is
+from all Secrets to all ConfigMaps, and inside Secrets or ConfigMaps,
+the implementation order is same as the order of each array
+(by default empty)
+ |
+
+
+
+
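As a sketch, a typical `initdb` bootstrap stanza using a subset of the fields above (the extension in `postInitSQL` is just an example):

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      dataChecksums: true
      encoding: UTF8
      postInitSQL:
        - CREATE EXTENSION IF NOT EXISTS pg_stat_statements
```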
+## BootstrapPgBaseBackup {#postgresql-k8s-enterprisedb-io-v1-BootstrapPgBaseBackup}
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapPgBaseBackup contains the configuration required to take
+a physical backup of an existing PostgreSQL cluster
+
+
+Field | Description |
+
+source [Required]
+string
+ |
+
+ The name of the server of which we need to take a physical backup
+ |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app .
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
+
+
+
+
+## BootstrapRecovery {#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery}
+
+**Appears in:**
+
+- [BootstrapConfiguration](#postgresql-k8s-enterprisedb-io-v1-BootstrapConfiguration)
+
+BootstrapRecovery contains the configuration required to restore
+from an existing cluster using 3 methodologies: external cluster,
+volume snapshots or backup objects. Full recovery and Point-In-Time
+Recovery are supported.
+The method can also be used to create clusters in continuous recovery
+(replica clusters), also supporting cascading replication when instances > 1.
+
+- Once the cluster exits recovery, the password for the superuser
+will be changed through the provided secret.
+Refer to the Bootstrap page of the documentation for more information.
+
+
+
+Field | Description |
+
+backup
+BackupSource
+ |
+
+ The backup object containing the physical base backup from which to
+initiate the recovery procedure.
+Mutually exclusive with source and volumeSnapshots .
+ |
+
+source
+string
+ |
+
+ The external cluster whose backup we will restore. This is also
+used as the name of the folder under which the backup is stored,
+so it must be set to the name of the source cluster
+Mutually exclusive with backup .
+ |
+
+volumeSnapshots
+DataSource
+ |
+
+ The static PVC data source(s) from which to initiate the
+recovery procedure. Currently supporting VolumeSnapshot
+and PersistentVolumeClaim resources that map an existing
+PVC group, compatible with EDB Postgres for Kubernetes, and taken with
+a cold backup copy on a fenced Postgres instance (limitation
+which will be removed in the future when online backup
+will be implemented).
+Mutually exclusive with backup .
+ |
+
+recoveryTarget
+RecoveryTarget
+ |
+
+ By default, the recovery process applies all the available
+WAL files in the archive (full recovery). However, you can also
+end the recovery as soon as a consistent state is reached or
+recover to a point-in-time (PITR) by specifying a RecoveryTarget object,
+as expected by PostgreSQL (i.e., timestamp, transaction Id, LSN, ...).
+More info: https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET
+ |
+
+database
+string
+ |
+
+ Name of the database used by the application. Default: app .
+ |
+
+owner
+string
+ |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the database key.
+ |
+
+secret
+LocalObjectReference
+ |
+
+ Name of the secret containing the initial credentials for the
+owner of the user database. If empty a new secret will be
+created from scratch
+ |
+
+
+
+
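A hedged sketch of a recovery bootstrap from an existing `Backup` object with a point-in-time target; the backup name and timestamp are placeholders:

```yaml
spec:
  bootstrap:
    recovery:
      backup:
        name: backup-example                     # existing Backup resource
      recoveryTarget:
        targetTime: "2023-08-11 11:14:21.00000+02"
```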
+## CertificatesConfiguration {#postgresql-k8s-enterprisedb-io-v1-CertificatesConfiguration}
+
+**Appears in:**
+
+- [CertificatesStatus](#postgresql-k8s-enterprisedb-io-v1-CertificatesStatus)
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+CertificatesConfiguration contains the needed configurations to handle server certificates.
+
+
+Field | Description |
+
+serverCASecret
+string
+ |
+
+ The secret containing the Server CA certificate. If not defined, a new secret will be created
+with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret.
+
+Contains:
+
+
+ca.crt : CA that should be used to validate the server certificate,
+used as sslrootcert in client connection strings.
+ca.key : key used to generate Server SSL certs, if ServerTLSSecret is provided,
+this can be omitted.
+
+ |
+
+serverTLSSecret
+string
+ |
+
+ The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as
+ssl_cert_file and ssl_key_file so that clients can connect to postgres securely.
+If not defined, ServerCASecret must provide also ca.key and a new secret will be
+created using the provided CA.
+ |
+
+replicationTLSSecret
+string
+ |
+
+ The secret of type kubernetes.io/tls containing the client certificate to authenticate as
+the streaming_replica user.
+If not defined, ClientCASecret must provide also ca.key , and a new secret will be
+created using the provided CA.
+ |
+
+clientCASecret
+string
+ |
+
+ The secret containing the Client CA certificate. If not defined, a new secret will be created
+with a self-signed CA and will be used to generate all the client certificates.
+
+Contains:
+
+
+ca.crt : CA that should be used to validate the client certificates,
+used as ssl_ca_file of all the instances.
+ca.key : key used to generate client certificates, if ReplicationTLSSecret is provided,
+this can be omitted.
+
+ |
+
+serverAltDNSNames
+[]string
+ |
+
+ The list of the server alternative DNS names to be added to the generated server TLS certificates, when required.
+ |
+
+
+
+
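For illustration, a cluster that supplies its own server and client certificates could reference them as follows; the secret names are placeholders:

```yaml
spec:
  certificates:
    serverCASecret: my-server-ca
    serverTLSSecret: my-server-tls
    clientCASecret: my-client-ca
    replicationTLSSecret: my-replication-tls
```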
+## CertificatesStatus {#postgresql-k8s-enterprisedb-io-v1-CertificatesStatus}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+CertificatesStatus contains configuration certificates and related expiration dates.
+
+
+Field | Description |
+
+CertificatesConfiguration
+CertificatesConfiguration
+ |
+(Members of CertificatesConfiguration are embedded into this type.)
+ Needed configurations to handle server certificates, initialized with default values, if needed.
+ |
+
+expirations
+map[string]string
+ |
+
+ Expiration dates for all certificates.
+ |
+
+
+
+
+## ClusterSpec {#postgresql-k8s-enterprisedb-io-v1-ClusterSpec}
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterSpec defines the desired state of Cluster
+
+
+Field | Description |
+
+description
+string
+ |
+
+ Description of this PostgreSQL cluster
+ |
+
+inheritedMetadata
+EmbeddedObjectMetadata
+ |
+
+ Metadata that will be inherited by all objects related to the Cluster
+ |
+
+imageName
+string
+ |
+
+ Name of the container image, supporting both tags (<image>:<tag> )
+and digests for deterministic and repeatable deployments
+(<image>:<tag>@sha256:<digestValue> )
+ |
+
+imagePullPolicy
+core/v1.PullPolicy
+ |
+
+ Image pull policy.
+One of Always , Never or IfNotPresent .
+If not defined, it defaults to IfNotPresent .
+Cannot be updated.
+More info: https://kubernetes.io/docs/concepts/containers/images#updating-images
+ |
+
+schedulerName
+string
+ |
+
+ If specified, the pod will be dispatched by specified Kubernetes
+scheduler. If not specified, the pod will be dispatched by the default
+scheduler. More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
+ |
+
+postgresUID
+int64
+ |
+
+ The UID of the postgres user inside the image, defaults to 26
+ |
+
+postgresGID
+int64
+ |
+
+ The GID of the postgres user inside the image, defaults to 26
+ |
+
+instances [Required]
+int
+ |
+
+ Number of instances required in the cluster
+ |
+
+minSyncReplicas
+int
+ |
+
+ Minimum number of instances required in synchronous replication with the
+primary. Undefined or 0 allow writes to complete when no standby is
+available.
+ |
+
+maxSyncReplicas
+int
+ |
+
+ The target value for the synchronous replication quorum, that can be
+decreased if the number of ready standbys is lower than this.
+Undefined or 0 disable synchronous replication.
+ |
+
+postgresql
+PostgresConfiguration
+ |
+
+ Configuration of the PostgreSQL server
+ |
+
+replicationSlots
+ReplicationSlotsConfiguration
+ |
+
+ Replication slots management configuration
+ |
+
+bootstrap
+BootstrapConfiguration
+ |
+
+ Instructions to bootstrap this cluster
+ |
+
+replica
+ReplicaClusterConfiguration
+ |
+
+ Replica cluster configuration
+ |
+
+superuserSecret
+LocalObjectReference
+ |
+
+ The secret containing the superuser password. If not defined a new
+secret will be created with a randomly generated password
+ |
+
+enableSuperuserAccess
+bool
+ |
+
+ When this option is enabled, the operator will use the SuperuserSecret
+to update the postgres user password (if the secret is
+not present, the operator will automatically create one). When this
+option is disabled, the operator will ignore the SuperuserSecret content, delete
+it when automatically created, and then blank the password of the postgres
+user by setting it to NULL . Enabled by default.
+ |
+
+certificates
+CertificatesConfiguration
+ |
+
+ The configuration for the CA and related certificates
+ |
+
+imagePullSecrets
+[]LocalObjectReference
+ |
+
+ The list of pull secrets to be used to pull the images. If the license key
+contains a pull secret that secret will be automatically included.
+ |
+
+storage
+StorageConfiguration
+ |
+
+ Configuration of the storage of the instances
+ |
+
+serviceAccountTemplate
+ServiceAccountTemplate
+ |
+
+ Configure the generation of the service account
+ |
+
+walStorage
+StorageConfiguration
+ |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log)
+ |
+
+startDelay
+int32
+ |
+
+ The time in seconds that is allowed for a PostgreSQL instance to
+successfully start up (default 3600).
+The startup probe failure threshold is derived from this value using the formula:
+ceiling(startDelay / 10).
+ |
+
+stopDelay
+int32
+ |
+
+ The time in seconds that is allowed for a PostgreSQL instance to
+gracefully shutdown (default 1800)
+ |
+
+smartStopDelay
+int32
+ |
+
+ The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete.
+this formula to compute the timeout of smart shutdown is max(stopDelay - smartStopDelay, 30) ,
+ |
+
+switchoverDelay
+int32
+ |
+
+ The time in seconds that is allowed for a primary PostgreSQL instance
+to gracefully shutdown during a switchover.
+Default value is 3600 seconds (1 hour).
+ |
+
+failoverDelay
+int32
+ |
+
+ The amount of time (in seconds) to wait before triggering a failover
+after the primary PostgreSQL instance in the cluster was detected
+to be unhealthy
+ |
+
+affinity
+AffinityConfiguration
+ |
+
+ Affinity/Anti-affinity rules for Pods
+ |
+
+topologySpreadConstraints
+[]core/v1.TopologySpreadConstraint
+ |
+
+ TopologySpreadConstraints specifies how to spread matching pods among the given topology.
+More info:
+https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
+ |
+
+resources
+core/v1.ResourceRequirements
+ |
+
+ Resources requirements of every generated Pod. Please refer to
+https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
+for more information.
+ |
+
+priorityClassName
+string
+ |
+
+ Name of the priority class which will be used in every generated Pod, if the PriorityClass
+specified does not exist, the pod will not be able to schedule. Please refer to
+https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
+for more information
+ |
+
+primaryUpdateStrategy
+PrimaryUpdateStrategy
+ |
+
+ Deployment strategy to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be automated (unsupervised - default) or manual (supervised )
+ |
+
+primaryUpdateMethod
+PrimaryUpdateMethod
+ |
+
+ Method to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be with a switchover (switchover ) or in-place (restart - default)
+ |
+
+backup
+BackupConfiguration
+ |
+
+ The configuration to be used for backups
+ |
+
+nodeMaintenanceWindow
+NodeMaintenanceWindow
+ |
+
+ Define a maintenance window for the Kubernetes nodes
+ |
+
+licenseKey
+string
+ |
+
+ The license key of the cluster. When empty, the cluster operates in
+trial mode and after the expiry date (default 30 days) the operator
+will cease any reconciliation attempt. For details, please refer to
+the license agreement that comes with the operator.
+ |
+
+licenseKeySecret
+core/v1.SecretKeySelector
+ |
+
+ The reference to the license key. When this is set it take precedence over LicenseKey.
+ |
+
+monitoring
+MonitoringConfiguration
+ |
+
+ The configuration of the monitoring infrastructure of this cluster
+ |
+
+externalClusters
+[]ExternalCluster
+ |
+
+ The list of external clusters which are used in the configuration
+ |
+
+logLevel
+string
+ |
+
+ The instances' log level, one of the following values: error, warning, info (default), debug, trace
+ |
+
+projectedVolumeTemplate
+core/v1.ProjectedVolumeSource
+ |
+
+ Template to be used to define projected volumes, projected volumes will be mounted
+under /projected base folder
+ |
+
+env
+[]core/v1.EnvVar
+ |
+
+ Env follows the Env format to pass environment variables
+to the pods created in the cluster
+ |
+
+envFrom
+[]core/v1.EnvFromSource
+ |
+
+ EnvFrom follows the EnvFrom format to pass environment variables
+sources to the pods to be used by Env
+ |
+
+managed
+ManagedConfiguration
+ |
+
+ The configuration that is used by the portions of PostgreSQL that are managed by the instance manager
+ |
+
+seccompProfile
+core/v1.SeccompProfile
+ |
+
+ The SeccompProfile applied to every Pod and Container.
+Defaults to: RuntimeDefault
+ |
+
+
+
+
+## ClusterStatus {#postgresql-k8s-enterprisedb-io-v1-ClusterStatus}
+
+**Appears in:**
+
+- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+
+ClusterStatus defines the observed state of Cluster
+
+
+Field | Description |
+
+instances
+int
+ |
+
+ The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods.
+ |
+
+readyInstances
+int
+ |
+
+ The total number of ready instances in the cluster. It is equal to the number of ready instance pods.
+ |
+
+instancesStatus
+map[github.com/EnterpriseDB/cloud-native-postgres/pkg/utils.PodStatus][]string
+ |
+
+ InstancesStatus indicates in which status the instances are
+ |
+
+instancesReportedState
+map[github.com/EnterpriseDB/cloud-native-postgres/api/v1.PodName]github.com/EnterpriseDB/cloud-native-postgres/api/v1.InstanceReportedState
+ |
+
+ The reported state of the instances during the last reconciliation loop
+ |
+
+managedRolesStatus
+ManagedRoles
+ |
+
+ ManagedRolesStatus reports the state of the managed roles in the cluster
+ |
+
+timelineID
+int
+ |
+
+ The timeline of the Postgres cluster
+ |
+
+topology
+Topology
+ |
+
+ Instances topology.
+ |
+
+latestGeneratedNode
+int
+ |
+
+ ID of the latest generated node (used to avoid node name clashing)
+ |
+
+currentPrimary
+string
+ |
+
+ Current primary instance
+ |
+
+targetPrimary
+string
+ |
+
+ Target primary instance, this is different from the previous one
+during a switchover or a failover
+ |
+
+pvcCount
+int32
+ |
+
+ How many PVCs have been created by this cluster
+ |
+
+jobCount
+int32
+ |
+
+ How many Jobs have been created by this cluster
+ |
+
+danglingPVC
+[]string
+ |
+
+ List of all the PVCs created by this cluster and still available
+which are not attached to a Pod
+ |
+
+resizingPVC
+[]string
+ |
+
+ List of all the PVCs that have ResizingPVC condition.
+ |
+
+initializingPVC
+[]string
+ |
+
+ List of all the PVCs that are being initialized by this cluster
+ |
+
+healthyPVC
+[]string
+ |
+
+ List of all the PVCs not dangling nor initializing
+ |
+
+unusablePVC
+[]string
+ |
+
+ List of all the PVCs that are unusable because another PVC is missing
+ |
+
+licenseStatus
+github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status
+ |
+
+ Status of the license
+ |
+
+writeService
+string
+ |
+
+ Current write pod
+ |
+
+readService
+string
+ |
+
+ Current list of read pods
+ |
+
+phase
+string
+ |
+
+ Current phase of the cluster
+ |
+
+phaseReason
+string
+ |
+
+ Reason for the current phase
+ |
+
+secretsResourceVersion
+SecretsResourceVersion
+ |
+
+ The list of resource versions of the secrets
+managed by the operator. Every change here is done in the
+interest of the instance manager, which will refresh the
+secret data
+ |
+
+configMapResourceVersion
+ConfigMapResourceVersion
+ |
+
+ The list of resource versions of the configmaps,
+managed by the operator. Every change here is done in the
+interest of the instance manager, which will refresh the
+configmap data
+ |
+
+certificates
+CertificatesStatus
+ |
+
+ The configuration for the CA and related certificates, initialized with defaults.
+ |
+
+firstRecoverabilityPoint
+string
+ |
+
+ The first recoverability point, stored as a date in RFC3339 format
+ |
+
+lastSuccessfulBackup
+string
+ |
+
+ Stored as a date in RFC3339 format
+ |
+
+lastFailedBackup
+string
+ |
+
+ Stored as a date in RFC3339 format
+ |
+
+cloudNativePostgresqlCommitHash
+string
+ |
+
+ The commit hash number of which this operator running
+ |
+
+currentPrimaryTimestamp
+string
+ |
+
+ The timestamp when the last actual promotion to primary has occurred
+ |
+
+currentPrimaryFailingSinceTimestamp
+string
+ |
+
+ The timestamp when the primary was detected to be unhealthy
+This field is reported when spec.failoverDelay is populated or during online upgrades
+ |
+
+targetPrimaryTimestamp
+string
+ |
+
+ The timestamp when the last request for a new primary has occurred
+ |
+
+poolerIntegrations
+PoolerIntegrations
+ |
+
+ The integration needed by poolers referencing the cluster
+ |
+
+cloudNativePostgresqlOperatorHash
+string
+ |
+
+ The hash of the binary of the operator
+ |
+
+conditions
+[]meta/v1.Condition
+ |
+
+ Conditions for cluster object
+ |
+
+instanceNames
+[]string
+ |
+
+ List of instance names in the cluster
+ |
+
+onlineUpdateEnabled
+bool
+ |
+
+ OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster
+ |
+
+azurePVCUpdateEnabled
+bool
+ |
+
+ AzurePVCUpdateEnabled shows if the PVC online upgrade is enabled for this cluster
+ |
+
+
+
+
+## CompressionType {#postgresql-k8s-enterprisedb-io-v1-CompressionType}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DataBackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-DataBackupConfiguration)
+
+- [WalBackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-WalBackupConfiguration)
+
+CompressionType encapsulates the available types of compression
+
+## ConfigMapKeySelector {#postgresql-k8s-enterprisedb-io-v1-ConfigMapKeySelector}
+
+**Appears in:**
+
+- [MonitoringConfiguration](#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration)
+
+- [PostInitApplicationSQLRefs](#postgresql-k8s-enterprisedb-io-v1-PostInitApplicationSQLRefs)
+
+ConfigMapKeySelector contains enough information to let you locate
+the key of a ConfigMap
+
+
+Field | Description |
+
+LocalObjectReference
+LocalObjectReference
+ |
+(Members of LocalObjectReference are embedded into this type.)
+ The name of the secret in the pod's namespace to select from.
+ |
+
+key [Required]
+string
+ |
+
+ The key to select
+ |
+
+
+
+
+## ConfigMapResourceVersion {#postgresql-k8s-enterprisedb-io-v1-ConfigMapResourceVersion}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ConfigMapResourceVersion is the resource versions of the secrets
+managed by the operator
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metrics` `map[string]string` | A map with the versions of all the config maps used to pass metrics. Map keys are the config map names, map values are the versions |
+
+
+
+
+## DataBackupConfiguration {#postgresql-k8s-enterprisedb-io-v1-DataBackupConfiguration}
+
+**Appears in:**
+
+- [BarmanObjectStoreConfiguration](#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
+
+DataBackupConfiguration is the configuration of the backup of
+the data directory
+
+
+Field | Description |
+
+compression
+CompressionType
+ |
+
+ Compress a backup file (a tar file per tablespace) while streaming it
+to the object store. Available options are empty string (no
+compression, default), gzip , bzip2 or snappy .
+ |
+
+encryption
+EncryptionType
+ |
+
+ Whenever to force the encryption of files (if the bucket is
+not already configured for that).
+Allowed options are empty string (use the bucket policy, default),
+AES256 and aws:kms
+ |
+
+jobs
+int32
+ |
+
+ The number of parallel jobs to be used to upload the backup, defaults
+to 2
+ |
+
+immediateCheckpoint
+bool
+ |
+
+ Control whether the I/O workload for the backup initial checkpoint will
+be limited, according to the checkpoint_completion_target setting on
+the PostgreSQL server. If set to true, an immediate checkpoint will be
+used, meaning PostgreSQL will complete the checkpoint as soon as
+possible. false by default.
+ |
+
+
+
+
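A sketch showing these data-backup options inside a `barmanObjectStore` configuration (the bucket path is a placeholder):

```yaml
barmanObjectStore:
  destinationPath: s3://my-bucket/backups/
  data:
    compression: gzip
    encryption: AES256
    jobs: 4                      # parallel upload jobs, defaults to 2
    immediateCheckpoint: true
```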
+## DataSource {#postgresql-k8s-enterprisedb-io-v1-DataSource}
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+DataSource contains the configuration required to bootstrap a
+PostgreSQL cluster from an existing storage
+
+
+
+## EPASConfiguration {#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration}
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+
+
+| Field | Description |
+| ----- | ----------- |
+| `audit` `bool` | If true enables edb_audit logging |
+| `tde` `TDEConfiguration` | TDE configuration |
+
+
+
+
+## EmbeddedObjectMetadata {#postgresql-k8s-enterprisedb-io-v1-EmbeddedObjectMetadata}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `labels` `map[string]string` | No description provided. |
+| `annotations` `map[string]string` | No description provided. |
+
+
+
+
+## EncryptionType {#postgresql-k8s-enterprisedb-io-v1-EncryptionType}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [DataBackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-DataBackupConfiguration)
+
+- [WalBackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-WalBackupConfiguration)
+
+EncryptionType encapsulated the available types of encryption
+
+## EnsureOption {#postgresql-k8s-enterprisedb-io-v1-EnsureOption}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration)
+
+EnsureOption represents whether we should enforce the presence or absence of
+a Role in a PostgreSQL instance
+
+## ExternalCluster {#postgresql-k8s-enterprisedb-io-v1-ExternalCluster}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ExternalCluster represents the connection parameters to an
+external cluster which is used in the other sections of the configuration
+
+
+Field | Description |
+
+name [Required]
+string
+ |
+
+ The server name, required
+ |
+
+connectionParameters
+map[string]string
+ |
+
+ The list of connection parameters, such as dbname, host, username, etc
+ |
+
+sslCert
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL certificate to be used to connect to this
+instance
+ |
+
+sslKey
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL private key to be used to connect to this
+instance
+ |
+
+sslRootCert
+core/v1.SecretKeySelector
+ |
+
+ The reference to an SSL CA public key to be used to connect to this
+instance
+ |
+
+password
+core/v1.SecretKeySelector
+ |
+
+ The reference to the password to be used to connect to the server
+ |
+
+barmanObjectStore
+BarmanObjectStoreConfiguration
+ |
+
+ The configuration for the barman-cloud tool suite
+ |
+
+
+
+
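An `externalClusters` entry combining connection parameters with a password secret, as a hedged sketch; the host and secret names are placeholders:

```yaml
spec:
  externalClusters:
    - name: source-db
      connectionParameters:
        host: source-db.default.svc
        user: streaming_replica
        dbname: postgres
      password:
        name: source-db-password   # placeholder secret
        key: password
```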
+## GoogleCredentials {#postgresql-k8s-enterprisedb-io-v1-GoogleCredentials}
+
+**Appears in:**
+
+- [BarmanCredentials](#postgresql-k8s-enterprisedb-io-v1-BarmanCredentials)
+
+GoogleCredentials is the type for the Google Cloud Storage credentials.
+This needs to be specified even if we run inside a GKE environment.
+
+
+Field | Description |
+
+applicationCredentials
+SecretKeySelector
+ |
+
+ The secret containing the Google Cloud Storage JSON file with the credentials
+ |
+
+gkeEnvironment
+bool
+ |
+
+ If set to true, will presume that it's running inside a GKE environment,
+default to false.
+ |
+
+
+
+
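A sketch of a Google Cloud Storage destination using an explicit service-account secret; the bucket, secret name, and key are placeholders:

```yaml
barmanObjectStore:
  destinationPath: gs://my-bucket/backups/
  googleCredentials:
    applicationCredentials:
      name: backup-creds           # placeholder secret holding the JSON file
      key: gcsCredentials          # placeholder key
```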
+## Import {#postgresql-k8s-enterprisedb-io-v1-Import}
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+Import contains the configuration to init a database from a logic snapshot of an externalCluster
+
+
+Field | Description |
+
+source [Required]
+ImportSource
+ |
+
+ The source of the import
+ |
+
+type [Required]
+SnapshotType
+ |
+
+ The import type. Can be microservice or monolith .
+ |
+
+databases [Required]
+[]string
+ |
+
+ The databases to import
+ |
+
+roles
+[]string
+ |
+
+ The roles to import
+ |
+
+postImportApplicationSQL
+[]string
+ |
+
+ List of SQL queries to be executed as a superuser in the application
+database right after is imported - to be used with extreme care
+(by default empty). Only available in microservice type.
+ |
+
+schemaOnly
+bool
+ |
+
+ When set to true, only the pre-data and post-data sections of
+pg_restore are invoked, avoiding data import. Default: false .
+ |
+
+
+
+
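A hedged sketch of a `microservice` import inside an `initdb` bootstrap; the cluster and database names are placeholders:

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      import:
        type: microservice
        databases:
          - app
        source:
          externalCluster: source-cluster
```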
+## ImportSource {#postgresql-k8s-enterprisedb-io-v1-ImportSource}
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+ImportSource describes the source for the logical snapshot
+
+
+| Field | Description |
+| ----- | ----------- |
+| `externalCluster` [Required] `string` | The name of the externalCluster used for import |
+
+
+
+
+## InstanceID {#postgresql-k8s-enterprisedb-io-v1-InstanceID}
+
+**Appears in:**
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+InstanceID contains the information to identify an instance
+
+
+| Field | Description |
+| ----- | ----------- |
+| `podName`<br/>*string* | The pod name |
+| `ContainerID`<br/>*string* | The container ID |
+
+
+
+
+## InstanceReportedState {#postgresql-k8s-enterprisedb-io-v1-InstanceReportedState}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+InstanceReportedState describes the last reported state of an instance during a reconciliation loop
+
+
+| Field | Description |
+| ----- | ----------- |
+| `isPrimary` **[Required]**<br/>*bool* | Indicates if an instance is the primary one |
+| `timeLineID`<br/>*int* | Indicates which TimelineID the instance is on |
+
+
+
+
+## LDAPBindAsAuth {#postgresql-k8s-enterprisedb-io-v1-LDAPBindAsAuth}
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindAsAuth provides the required fields to use the
+bind authentication for LDAP
+
+
+| Field | Description |
+| ----- | ----------- |
+| `prefix`<br/>*string* | Prefix for the bind authentication option |
+| `suffix`<br/>*string* | Suffix for the bind authentication option |
+
+
+
+
+## LDAPBindSearchAuth {#postgresql-k8s-enterprisedb-io-v1-LDAPBindSearchAuth}
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPBindSearchAuth provides the required fields to use
+the bind+search LDAP authentication process
+
+
+| Field | Description |
+| ----- | ----------- |
+| `baseDN`<br/>*string* | Root DN to begin the user search |
+| `bindDN`<br/>*string* | DN of the user to bind to the directory |
+| `bindPassword`<br/>*core/v1.SecretKeySelector* | Secret with the password for the user to bind to the directory |
+| `searchAttribute`<br/>*string* | Attribute to match against the username |
+| `searchFilter`<br/>*string* | Search filter to use when doing the search+bind authentication |
+
+
+
+
+## LDAPConfig {#postgresql-k8s-enterprisedb-io-v1-LDAPConfig}
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+LDAPConfig contains the parameters needed for LDAP authentication
+
+
+| Field | Description |
+| ----- | ----------- |
+| `server`<br/>*string* | LDAP hostname or IP address |
+| `port`<br/>*int* | LDAP server port |
+| `scheme`<br/>*LDAPScheme* | LDAP scheme to be used, possible options are `ldap` and `ldaps` |
+| `bindAsAuth`<br/>*LDAPBindAsAuth* | Bind as authentication configuration |
+| `bindSearchAuth`<br/>*LDAPBindSearchAuth* | Bind+Search authentication configuration |
+| `tls`<br/>*bool* | Set to `true` to enable LDAP over TLS. `false` is the default |
+
+
+
+
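+As an illustrative sketch, a bind+search LDAP configuration under
+`spec.postgresql` might look as follows (the server, DNs, and secret name are
+assumptions made up for this example):
+
+```yaml
+  postgresql:
+    ldap:
+      server: openldap.default.svc.cluster.local
+      scheme: ldap
+      bindSearchAuth:
+        baseDN: ou=org,dc=example,dc=com
+        bindDN: cn=admin,dc=example,dc=com
+        bindPassword:
+          name: ldap-bind-password
+          key: data
+        searchAttribute: uid
+```
+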
+## LDAPScheme {#postgresql-k8s-enterprisedb-io-v1-LDAPScheme}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [LDAPConfig](#postgresql-k8s-enterprisedb-io-v1-LDAPConfig)
+
+LDAPScheme defines the possible schemes for LDAP
+
+## LocalObjectReference {#postgresql-k8s-enterprisedb-io-v1-LocalObjectReference}
+
+**Appears in:**
+
+- [BackupSource](#postgresql-k8s-enterprisedb-io-v1-BackupSource)
+
+- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec)
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+- [BootstrapPgBaseBackup](#postgresql-k8s-enterprisedb-io-v1-BootstrapPgBaseBackup)
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+- [ConfigMapKeySelector](#postgresql-k8s-enterprisedb-io-v1-ConfigMapKeySelector)
+
+- [PgBouncerSpec](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+- [RoleConfiguration](#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration)
+
+- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec)
+
+- [SecretKeySelector](#postgresql-k8s-enterprisedb-io-v1-SecretKeySelector)
+
+LocalObjectReference contains enough information to let you locate a
+local object with a known type inside the same namespace
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` **[Required]**<br/>*string* | Name of the referent. |
+
+
+
+
+## ManagedConfiguration {#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ManagedConfiguration represents the portions of PostgreSQL that are managed
+by the instance manager
+
+
+| Field | Description |
+| ----- | ----------- |
+| `roles`<br/>*[]RoleConfiguration* | Database roles managed by the Cluster |
+
+
+
+
+## ManagedRoles {#postgresql-k8s-enterprisedb-io-v1-ManagedRoles}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+ManagedRoles tracks the status of a cluster's managed roles
+
+
+| Field | Description |
+| ----- | ----------- |
+| `byStatus`<br/>*map[github.com/EnterpriseDB/cloud-native-postgres/api/v1.RoleStatus][]string* | ByStatus gives the list of roles in each state |
+| `cannotReconcile`<br/>*map[string][]string* | CannotReconcile lists roles that cannot be reconciled in PostgreSQL, with an explanation of the cause |
+| `passwordStatus`<br/>*map[string]github.com/EnterpriseDB/cloud-native-postgres/api/v1.PasswordState* | PasswordStatus gives the last transaction id and password secret version for each managed role |
+
+
+
+
+## Metadata {#postgresql-k8s-enterprisedb-io-v1-Metadata}
+
+**Appears in:**
+
+- [PodTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-PodTemplateSpec)
+
+- [ServiceAccountTemplate](#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate)
+
+Metadata is a structure similar to the metav1.ObjectMeta, but still
+parseable by controller-gen to create a suitable CRD for the user.
+The comment of PodTemplateSpec has an explanation of why we are
+not using the core data types.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `labels`<br/>*map[string]string* | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels |
+| `annotations`<br/>*map[string]string* | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations |
+
+
+
+
+## MonitoringConfiguration {#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+MonitoringConfiguration is the type containing all the monitoring
+configuration for a certain cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `disableDefaultQueries`<br/>*bool* | Whether the default queries should be injected. Set it to `true` if you don't want to inject default queries into the cluster. Default: `false`. |
+| `customQueriesConfigMap`<br/>*[]ConfigMapKeySelector* | The list of config maps containing the custom queries |
+| `customQueriesSecret`<br/>*[]SecretKeySelector* | The list of secrets containing the custom queries |
+| `enablePodMonitor`<br/>*bool* | Enable or disable the PodMonitor |
+
+
+
+
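+As an illustrative sketch, a monitoring configuration in a `Cluster` spec might
+look as follows (the ConfigMap name and key are assumptions made up for this
+example):
+
+```yaml
+  monitoring:
+    enablePodMonitor: true
+    customQueriesConfigMap:
+      - name: example-monitoring
+        key: custom-queries
+```
+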
+## NodeMaintenanceWindow {#postgresql-k8s-enterprisedb-io-v1-NodeMaintenanceWindow}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+NodeMaintenanceWindow contains information that the operator
+will use while upgrading the underlying node.
+This option is only useful when the chosen storage prevents the Pods
+from being freely moved across nodes.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `reusePVC`<br/>*bool* | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere - when instances >1) |
+| `inProgress`<br/>*bool* | Is there a node maintenance activity in progress? |
+
+
+
+
+## PasswordState {#postgresql-k8s-enterprisedb-io-v1-PasswordState}
+
+**Appears in:**
+
+- [ManagedRoles](#postgresql-k8s-enterprisedb-io-v1-ManagedRoles)
+
+PasswordState represents the state of the password of a managed RoleConfiguration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `transactionID`<br/>*int64* | The last transaction ID to affect the role definition in PostgreSQL |
+| `resourceVersion`<br/>*string* | The resource version of the password secret |
+
+
+
+
+## PgBouncerIntegrationStatus {#postgresql-k8s-enterprisedb-io-v1-PgBouncerIntegrationStatus}
+
+**Appears in:**
+
+- [PoolerIntegrations](#postgresql-k8s-enterprisedb-io-v1-PoolerIntegrations)
+
+PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secrets`<br/>*[]string* | *No description provided.* |
+
+
+
+
+## PgBouncerPoolMode {#postgresql-k8s-enterprisedb-io-v1-PgBouncerPoolMode}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PgBouncerSpec](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
+
+PgBouncerPoolMode is the mode of PgBouncer
+
+## PgBouncerSecrets {#postgresql-k8s-enterprisedb-io-v1-PgBouncerSecrets}
+
+**Appears in:**
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+PgBouncerSecrets contains the versions of the secrets used
+by pgbouncer
+
+
+| Field | Description |
+| ----- | ----------- |
+| `authQuery`<br/>*SecretVersion* | The auth query secret version |
+
+
+
+
+## PgBouncerSpec {#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec}
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PgBouncerSpec defines how to configure PgBouncer
+
+
+| Field | Description |
+| ----- | ----------- |
+| `poolMode`<br/>*PgBouncerPoolMode* | The pool mode. Default: `session`. |
+| `authQuerySecret`<br/>*LocalObjectReference* | The credentials of the user that need to be used for the authentication query. In case it is specified, also an AuthQuery (e.g. "SELECT usename, passwd FROM pg_shadow WHERE usename=$1") has to be specified and no automatic CNP Cluster integration will be triggered. |
+| `authQuery`<br/>*string* | The query that will be used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM user_search($1)". In case it is specified, also an AuthQuerySecret has to be specified and no automatic CNP Cluster integration will be triggered. |
+| `parameters`<br/>*map[string]string* | Additional parameters to be passed to PgBouncer - please check the CNP documentation for a list of options you can configure |
+| `pg_hba`<br/>*[]string* | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
+| `paused`<br/>*bool* | When set to `true`, PgBouncer will disconnect from the PostgreSQL server, first waiting for all queries to complete, and pause all new client connections until this value is set to `false` (default). Internally, the operator calls PgBouncer's `PAUSE` and `RESUME` commands. |
+
+
+
+
+## PodTemplateSpec {#postgresql-k8s-enterprisedb-io-v1-PodTemplateSpec}
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PodTemplateSpec is a structure allowing the user to set
+a template for Pod generation.
+Unfortunately we can't use the corev1.PodTemplateSpec
+type because the generated CRD won't have the field for the
+metadata section.
+References:
+https://github.com/kubernetes-sigs/controller-tools/issues/385
+https://github.com/kubernetes-sigs/controller-tools/issues/448
+https://github.com/prometheus-operator/prometheus-operator/issues/3041
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata`<br/>*Metadata* | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
+| `spec`<br/>*core/v1.PodSpec* | Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
+
+
+
+
+## PodTopologyLabels {#postgresql-k8s-enterprisedb-io-v1-PodTopologyLabels}
+
+(Alias of `map[string]string`)
+
+**Appears in:**
+
+- [Topology](#postgresql-k8s-enterprisedb-io-v1-Topology)
+
+PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue
+
+## PoolerIntegrations {#postgresql-k8s-enterprisedb-io-v1-PoolerIntegrations}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster
+
+
+
+## PoolerMonitoringConfiguration {#postgresql-k8s-enterprisedb-io-v1-PoolerMonitoringConfiguration}
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerMonitoringConfiguration is the type containing all the monitoring
+configuration for a certain Pooler.
+Mirrors the Cluster's MonitoringConfiguration but without the custom queries
+part for now.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enablePodMonitor`<br/>*bool* | Enable or disable the PodMonitor |
+
+
+
+
+## PoolerSecrets {#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets}
+
+**Appears in:**
+
+- [PoolerStatus](#postgresql-k8s-enterprisedb-io-v1-PoolerStatus)
+
+PoolerSecrets contains the versions of all the secrets used
+
+
+| Field | Description |
+| ----- | ----------- |
+| `serverTLS`<br/>*SecretVersion* | The server TLS secret version |
+| `serverCA`<br/>*SecretVersion* | The server CA secret version |
+| `clientCA`<br/>*SecretVersion* | The client CA secret version |
+| `pgBouncerSecrets`<br/>*PgBouncerSecrets* | The version of the secrets used by PgBouncer |
+
+
+
+
+## PoolerSpec {#postgresql-k8s-enterprisedb-io-v1-PoolerSpec}
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerSpec defines the desired state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `cluster` **[Required]**<br/>*LocalObjectReference* | This is the cluster reference on which the Pooler will work. Pooler name should never match with any cluster name within the same namespace. |
+| `type`<br/>*PoolerType* | Type of service to forward traffic to. Default: `rw`. |
+| `instances`<br/>*int32* | The number of replicas we want. Default: 1. |
+| `template`<br/>*PodTemplateSpec* | The template of the Pod to be created |
+| `pgbouncer` **[Required]**<br/>*PgBouncerSpec* | The PgBouncer configuration |
+| `deploymentStrategy`<br/>*apps/v1.DeploymentStrategy* | The deployment strategy to use for pgbouncer to replace existing pods with new ones |
+| `monitoring`<br/>*PoolerMonitoringConfiguration* | The configuration of the monitoring infrastructure of this pooler. |
+
+
+
+
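+As an illustrative sketch, a `Pooler` targeting a cluster named
+`cluster-example` might look as follows (names and parameter values are
+assumptions made up for this example):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 3
+  type: rw
+  pgbouncer:
+    poolMode: session
+    parameters:
+      max_client_conn: "1000"
+      default_pool_size: "10"
+```
+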
+## PoolerStatus {#postgresql-k8s-enterprisedb-io-v1-PoolerStatus}
+
+**Appears in:**
+
+- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
+
+PoolerStatus defines the observed state of Pooler
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secrets`<br/>*PoolerSecrets* | The resource version of the config object |
+| `instances`<br/>*int32* | The number of pods trying to be scheduled |
+
+
+
+
+## PoolerType {#postgresql-k8s-enterprisedb-io-v1-PoolerType}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)
+
+PoolerType is the type of the connection pool, meaning the service
+we are targeting. Allowed values are `rw` and `ro`.
+
+## PostInitApplicationSQLRefs {#postgresql-k8s-enterprisedb-io-v1-PostInitApplicationSQLRefs}
+
+**Appears in:**
+
+- [BootstrapInitDB](#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB)
+
+PostInitApplicationSQLRefs holds references to ConfigMaps or Secrets
+containing SQL files. The references are processed in this order: all Secrets
+first, then all ConfigMaps; within each Secret or ConfigMap list, the
+references are processed in the order of the array.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `secretRefs`<br/>*[]SecretKeySelector* | SecretRefs holds a list of references to Secrets |
+| `configMapRefs`<br/>*[]ConfigMapKeySelector* | ConfigMapRefs holds a list of references to ConfigMaps |
+
+
+
+
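+As an illustrative sketch, these references might be used in the `initdb`
+bootstrap stanza as follows (the Secret, ConfigMap, and key names are
+assumptions made up for this example):
+
+```yaml
+  bootstrap:
+    initdb:
+      postInitApplicationSQLRefs:
+        secretRefs:
+          - name: post-init-sql-secret
+            key: create_schema.sql
+        configMapRefs:
+          - name: post-init-sql-configmap
+            key: seed_data.sql
+```
+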
+## PostgresConfiguration {#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PostgresConfiguration defines the PostgreSQL configuration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `parameters`<br/>*map[string]string* | PostgreSQL configuration options (postgresql.conf) |
+| `pg_hba`<br/>*[]string* | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
+| `epas`<br/>*EPASConfiguration* | EDB Postgres Advanced Server specific configurations |
+| `syncReplicaElectionConstraint`<br/>*SyncReplicaElectionConstraints* | Requirements to be met by sync replicas. This will affect how the `synchronous_standby_names` parameter will be set up. |
+| `shared_preload_libraries`<br/>*[]string* | Lists of shared preload libraries to add to the default ones |
+| `ldap`<br/>*LDAPConfig* | Options to specify LDAP configuration |
+| `promotionTimeout`<br/>*int32* | Specifies the maximum number of seconds to wait when promoting an instance to primary. Default value is 40000000, greater than one year in seconds, big enough to simulate an infinite timeout |
+
+
+
+
+## PrimaryUpdateMethod {#postgresql-k8s-enterprisedb-io-v1-PrimaryUpdateMethod}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateMethod contains the method to use when upgrading
+the primary server of the cluster as part of rolling updates
+
+## PrimaryUpdateStrategy {#postgresql-k8s-enterprisedb-io-v1-PrimaryUpdateStrategy}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+PrimaryUpdateStrategy contains the strategy to follow when upgrading
+the primary server of the cluster as part of rolling updates
+
+## RecoveryTarget {#postgresql-k8s-enterprisedb-io-v1-RecoveryTarget}
+
+**Appears in:**
+
+- [BootstrapRecovery](#postgresql-k8s-enterprisedb-io-v1-BootstrapRecovery)
+
+RecoveryTarget allows you to configure the point at which the recovery process
+will stop. All the target options except TargetTLI are mutually exclusive.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `backupID`<br/>*string* | The ID of the backup from which to start the recovery process. If empty (default) the operator will automatically detect the backup based on targetTime or targetLSN if specified. Otherwise use the latest available backup in chronological order. |
+| `targetTLI`<br/>*string* | The target timeline ("latest" or a positive integer) |
+| `targetXID`<br/>*string* | The target transaction ID |
+| `targetName`<br/>*string* | The target name (to be previously created with `pg_create_restore_point`) |
+| `targetLSN`<br/>*string* | The target LSN (Log Sequence Number) |
+| `targetTime`<br/>*string* | The target time as a timestamp in the RFC3339 standard |
+| `targetImmediate`<br/>*bool* | End recovery as soon as a consistent state is reached |
+| `exclusive`<br/>*bool* | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true |
+
+
+
+
+## ReplicaClusterConfiguration {#postgresql-k8s-enterprisedb-io-v1-ReplicaClusterConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicaClusterConfiguration encapsulates the configuration of a replica
+cluster
+
+
+| Field | Description |
+| ----- | ----------- |
+| `source` **[Required]**<br/>*string* | The name of the external cluster which is the replication origin |
+| `enabled` **[Required]**<br/>*bool* | If replica mode is enabled, this cluster will be a replica of an existing cluster. Replica cluster can be created from a recovery object store or via streaming through pg_basebackup. Refer to the Replica clusters page of the documentation for more information. |
+
+
+
+
+## ReplicationSlotsConfiguration {#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ReplicationSlotsConfiguration encapsulates the configuration
+of replication slots
+
+
+| Field | Description |
+| ----- | ----------- |
+| `highAvailability`<br/>*ReplicationSlotsHAConfiguration* | Replication slots for high availability configuration |
+| `updateInterval`<br/>*int* | Standby will update the status of the local replication slots every `updateInterval` seconds (default 30). |
+
+
+
+
+## ReplicationSlotsHAConfiguration {#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsHAConfiguration}
+
+**Appears in:**
+
+- [ReplicationSlotsConfiguration](#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration)
+
+ReplicationSlotsHAConfiguration encapsulates the configuration
+of the replication slots that are automatically managed by
+the operator to control the streaming replication connections
+with the standby instances for high availability (HA) purposes.
+Replication slots are a PostgreSQL feature that makes sure
+that PostgreSQL automatically keeps WAL files in the primary
+when a streaming client (in this specific case a replica that
+is part of the HA cluster) gets disconnected.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled`<br/>*bool* | If enabled, the operator will automatically manage replication slots on the primary instance and use them in streaming replication connections with all the standby instances that are part of the HA cluster. If disabled (default), the operator will not take advantage of replication slots in streaming connections with the replicas. This feature also controls replication slots in replica clusters, from the designated primary to its cascading replicas. This can only be set at creation time. |
+| `slotPrefix`<br/>*string* | Prefix for replication slots managed by the operator for HA. It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to `_cnp_`. |
+
+
+
+
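+As an illustrative sketch, HA replication slots might be enabled in a `Cluster`
+spec as follows:
+
+```yaml
+  replicationSlots:
+    highAvailability:
+      enabled: true
+      slotPrefix: _cnp_
+    updateInterval: 30
+```
+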
+## RoleConfiguration {#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration}
+
+**Appears in:**
+
+- [ManagedConfiguration](#postgresql-k8s-enterprisedb-io-v1-ManagedConfiguration)
+
+RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role
+with the additional field Ensure specifying whether to ensure the presence or
+absence of the role in the database.
+The defaults of the `CREATE ROLE` command are applied.
+Reference: https://www.postgresql.org/docs/current/sql-createrole.html
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name` **[Required]**<br/>*string* | Name of the role |
+| `comment`<br/>*string* | Description of the role |
+| `ensure`<br/>*EnsureOption* | Ensure the role is present or absent - defaults to "present" |
+| `passwordSecret`<br/>*LocalObjectReference* | Secret containing the password of the role (if present). If null, the password will be ignored unless DisablePassword is set |
+| `connectionLimit`<br/>*int64* | If the role can log in, this specifies how many concurrent connections the role can make. -1 (the default) means no limit. |
+| `validUntil`<br/>*meta/v1.Time* | Date and time after which the role's password is no longer valid. When omitted, the password will never expire (default). |
+| `inRoles`<br/>*[]string* | List of one or more existing roles to which this role will be immediately added as a new member. Default empty. |
+| `inherit`<br/>*bool* | Whether a role "inherits" the privileges of roles it is a member of. Default is `true`. |
+| `disablePassword`<br/>*bool* | DisablePassword indicates that a role's password should be set to NULL in Postgres |
+| `superuser`<br/>*bool* | Whether the role is a superuser who can override all access restrictions within the database - superuser status is dangerous and should be used only when really needed. You must yourself be a superuser to create a new superuser. Default is `false`. |
+| `createdb`<br/>*bool* | When set to `true`, the role being defined will be allowed to create new databases. Specifying `false` (default) will deny a role the ability to create databases. |
+| `createrole`<br/>*bool* | Whether the role will be permitted to create, alter, drop, comment on, change the security label for, and grant or revoke membership in other roles. Default is `false`. |
+| `login`<br/>*bool* | Whether the role is allowed to log in. A role having the login attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges, but are not users in the usual sense of the word. Default is `false`. |
+| `replication`<br/>*bool* | Whether a role is a replication role. A role must have this attribute (or be a superuser) in order to be able to connect to the server in replication mode (physical or logical replication) and in order to be able to create or drop replication slots. A role having the replication attribute is a very highly privileged role, and should only be used on roles actually used for replication. Default is `false`. |
+| `bypassrls`<br/>*bool* | Whether a role bypasses every row-level security (RLS) policy. Default is `false`. |
+
+
+
+
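+As an illustrative sketch, a managed role using this structure might be
+declared in a `Cluster` spec as follows (the role and secret names are
+assumptions made up for this example):
+
+```yaml
+  managed:
+    roles:
+      - name: dante
+        ensure: present
+        comment: Dante Alighieri
+        login: true
+        inRoles:
+          - pg_monitor
+        passwordSecret:
+          name: cluster-example-dante
+```
+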
+## S3Credentials {#postgresql-k8s-enterprisedb-io-v1-S3Credentials}
+
+**Appears in:**
+
+- [BarmanCredentials](#postgresql-k8s-enterprisedb-io-v1-BarmanCredentials)
+
+S3Credentials is the type for the credentials to be used to upload
+files to S3. It can be provided in two alternative ways:
+
+- explicitly, by referencing the access key id and the secret access key
+- implicitly, by using IAM role-based authentication (`inheritFromIAMRole`)
+
+
+| Field | Description |
+| ----- | ----------- |
+| `accessKeyId`<br/>*SecretKeySelector* | The reference to the access key id |
+| `secretAccessKey`<br/>*SecretKeySelector* | The reference to the secret access key |
+| `region`<br/>*SecretKeySelector* | The reference to the secret containing the region name |
+| `sessionToken`<br/>*SecretKeySelector* | The reference to the session key |
+| `inheritFromIAMRole`<br/>*bool* | Use the role based authentication without providing explicitly the keys. |
+
+
+
+
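+As an illustrative sketch, explicit S3 credentials might be referenced in a
+`barmanObjectStore` stanza as follows (the bucket path, secret name, and keys
+are assumptions made up for this example):
+
+```yaml
+  backup:
+    barmanObjectStore:
+      destinationPath: "s3://BUCKET_NAME/path/to/folder"
+      s3Credentials:
+        accessKeyId:
+          name: aws-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: aws-creds
+          key: ACCESS_SECRET_KEY
+```
+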
+## ScheduledBackupSpec {#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec}
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupSpec defines the desired state of ScheduledBackup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `suspend`<br/>*bool* | If this backup is suspended or not |
+| `immediate`<br/>*bool* | If the first backup has to start immediately after creation or not |
+| `schedule` **[Required]**<br/>*string* | The schedule does not follow the same format used in Kubernetes CronJobs as it includes an additional seconds specifier, see https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format |
+| `cluster` **[Required]**<br/>*LocalObjectReference* | The cluster to backup |
+| `backupOwnerReference`<br/>*string* | Indicates which ownerReference should be put inside the created backup resources. Options: `none` (no owner reference for created backup objects, same behavior as before the field was introduced), `self` (sets the ScheduledBackup object as owner of the backup), `cluster` (sets the cluster as owner of the backup). |
+| `target`<br/>*BackupTarget* | The policy to decide which instance should perform this backup. If empty, it defaults to `cluster.spec.backup.target`. Available options are empty string, `primary` and `prefer-standby`. `primary` to have backups run always on primary instances, `prefer-standby` to have backups run preferably on the most updated standby, if available. |
+| `method`<br/>*BackupMethod* | The backup method to be used, possible options are `barmanObjectStore` and `volumeSnapshot`. Defaults to: `barmanObjectStore`. |
+
+
+
+
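+As an illustrative sketch, a daily scheduled backup of a cluster named
+`cluster-example` might look as follows (note the six-field, seconds-aware
+cron format):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ScheduledBackup
+metadata:
+  name: backup-example
+spec:
+  schedule: "0 0 0 * * *"
+  backupOwnerReference: self
+  cluster:
+    name: cluster-example
+```
+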
+## ScheduledBackupStatus {#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupStatus}
+
+**Appears in:**
+
+- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
+
+ScheduledBackupStatus defines the observed state of ScheduledBackup
+
+
+| Field | Description |
+| ----- | ----------- |
+| `lastCheckTime`<br/>*meta/v1.Time* | The latest time the schedule was checked |
+| `lastScheduleTime`<br/>*meta/v1.Time* | Information about the last time a backup was successfully scheduled. |
+| `nextScheduleTime`<br/>*meta/v1.Time* | Next time we will run a backup |
+
+
+
+
+## SecretKeySelector {#postgresql-k8s-enterprisedb-io-v1-SecretKeySelector}
+
+**Appears in:**
+
+- [AzureCredentials](#postgresql-k8s-enterprisedb-io-v1-AzureCredentials)
+
+- [BackupSource](#postgresql-k8s-enterprisedb-io-v1-BackupSource)
+
+- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus)
+
+- [BarmanObjectStoreConfiguration](#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
+
+- [GoogleCredentials](#postgresql-k8s-enterprisedb-io-v1-GoogleCredentials)
+
+- [MonitoringConfiguration](#postgresql-k8s-enterprisedb-io-v1-MonitoringConfiguration)
+
+- [PostInitApplicationSQLRefs](#postgresql-k8s-enterprisedb-io-v1-PostInitApplicationSQLRefs)
+
+- [S3Credentials](#postgresql-k8s-enterprisedb-io-v1-S3Credentials)
+
+SecretKeySelector contains enough information to let you locate
+the key of a Secret
+
+
+| Field | Description |
+| ----- | ----------- |
+| `LocalObjectReference`<br/>*LocalObjectReference* | (Members of `LocalObjectReference` are embedded into this type.) The name of the secret in the pod's namespace to select from. |
+| `key` **[Required]**<br/>*string* | The key to select |
+
+
+
+
+## SecretVersion {#postgresql-k8s-enterprisedb-io-v1-SecretVersion}
+
+**Appears in:**
+
+- [PgBouncerSecrets](#postgresql-k8s-enterprisedb-io-v1-PgBouncerSecrets)
+
+- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)
+
+SecretVersion contains a secret name and its ResourceVersion
+
+
+| Field | Description |
+| ----- | ----------- |
+| `name`<br/>*string* | The name of the secret |
+| `version`<br/>*string* | The ResourceVersion of the secret |
+
+
+
+
+## SecretsResourceVersion {#postgresql-k8s-enterprisedb-io-v1-SecretsResourceVersion}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+SecretsResourceVersion is the resource versions of the secrets
+managed by the operator
+
+
+| Field | Description |
+| ----- | ----------- |
+| `superuserSecretVersion`<br/>*string* | The resource version of the "postgres" user secret |
+| `replicationSecretVersion`<br/>*string* | The resource version of the "streaming_replica" user secret |
+| `applicationSecretVersion`<br/>*string* | The resource version of the "app" user secret |
+| `managedRoleSecretVersion`<br/>*map[string]string* | The resource versions of the managed roles secrets |
+| `caSecretVersion`<br/>*string* | Unused. Retained for compatibility with old versions. |
+| `clientCaSecretVersion`<br/>*string* | The resource version of the PostgreSQL client-side CA secret version |
+| `serverCaSecretVersion`<br/>*string* | The resource version of the PostgreSQL server-side CA secret version |
+| `serverSecretVersion`<br/>*string* | The resource version of the PostgreSQL server-side secret version |
+| `barmanEndpointCA`<br/>*string* | The resource version of the Barman Endpoint CA if provided |
+| `metrics`<br/>*map[string]string* | A map with the versions of all the secrets used to pass metrics. Map keys are the secret names, map values are the versions |
+
+
+
+
+## ServiceAccountTemplate {#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+ServiceAccountTemplate contains the template needed to generate the service accounts
+
+
+| Field | Description |
+| ----- | ----------- |
+| `metadata` **[Required]**<br/>*Metadata* | Metadata are the metadata to be used for the generated service account |
+
+
+
+
+## SnapshotOwnerReference {#postgresql-k8s-enterprisedb-io-v1-SnapshotOwnerReference}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)
+
+SnapshotOwnerReference defines the reference type for the owner of the snapshot.
+This specifies which owner the processed resources should relate to.
+
+## SnapshotType {#postgresql-k8s-enterprisedb-io-v1-SnapshotType}
+
+(Alias of `string`)
+
+**Appears in:**
+
+- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)
+
+SnapshotType is a type of allowed import
+
+## StorageConfiguration {#postgresql-k8s-enterprisedb-io-v1-StorageConfiguration}
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+StorageConfiguration is the configuration of the storage of the PostgreSQL instances
+
+
+| Field | Description |
+| ----- | ----------- |
+| `storageClass`<br/>*string* | StorageClass to use for database data (`PGDATA`). Applied after evaluating the PVC template, if available. If not specified, generated PVCs will be satisfied by the default storage class |
+| `size`<br/>*string* | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. |
+| `resizeInUseVolumes`<br/>*bool* | Resize existing PVCs, defaults to `true` |
+| `pvcTemplate`<br/>*core/v1.PersistentVolumeClaimSpec* | Template to be used to generate the Persistent Volume Claim |
+
+
+
+
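+As an illustrative sketch, a storage configuration in a `Cluster` spec might
+look as follows (the storage class name is an assumption made up for this
+example):
+
+```yaml
+  storage:
+    storageClass: standard
+    size: 1Gi
+    resizeInUseVolumes: true
+```
+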
+## SyncReplicaElectionConstraints {#postgresql-k8s-enterprisedb-io-v1-SyncReplicaElectionConstraints}
+
+**Appears in:**
+
+- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration)
+
+SyncReplicaElectionConstraints contains the constraints for sync replicas election.
+For anti-affinity parameters, two instances are considered in the same location
+if all the label values match.
+In the future, synchronous replica election restriction by name will be supported.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `nodeLabelsAntiAffinity`<br/>*[]string* | A list of node labels values to extract and compare to evaluate if the pods reside in the same topology or not |
+| `enabled` **[Required]**<br/>*bool* | This flag enables the constraints for sync replicas |
+
+
+
+
+## TDEConfiguration {#postgresql-k8s-enterprisedb-io-v1-TDEConfiguration}
+
+**Appears in:**
+
+- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration)
+
+TDEConfiguration contains the Transparent Data Encryption configuration
+
+
+| Field | Description |
+| ----- | ----------- |
+| `enabled`<br/>*bool* | True if we want to have TDE enabled |
+| `secretKeyRef`<br/>*core/v1.SecretKeySelector* | Reference to the secret that contains the encryption key |
+| `wrapCommand`<br/>*core/v1.SecretKeySelector* | WrapCommand is the encrypt command provided by the user |
+| `unwrapCommand`<br/>*core/v1.SecretKeySelector* | UnwrapCommand is the decryption command provided by the user |
+| `passphraseCommand`<br/>*core/v1.SecretKeySelector* | PassphraseCommand is the command executed to get the passphrase that will be passed to the OpenSSL command to encrypt and decrypt |
+
+
+
+
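+As an illustrative sketch, TDE might be enabled under the `epas` stanza of a
+`Cluster` spec as follows (the secret name and key are assumptions made up for
+this example):
+
+```yaml
+  postgresql:
+    epas:
+      tde:
+        enabled: true
+        secretKeyRef:
+          name: tde-key-secret
+          key: key
+```
+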
+## Topology {#postgresql-k8s-enterprisedb-io-v1-Topology}
+
+**Appears in:**
+
+- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)
+
+Topology contains the cluster topology
+
+
+| Field | Description |
+| ----- | ----------- |
+| `instances`<br/>*map[github.com/EnterpriseDB/cloud-native-postgres/api/v1.PodName]github.com/EnterpriseDB/cloud-native-postgres/api/v1.PodTopologyLabels* | Instances contains the pod topology of the instances |
+| `nodesUsed`<br/>*int32* | NodesUsed represents the count of distinct nodes accommodating the instances. A value of '1' suggests that all instances are hosted on a single node, implying the absence of High Availability (HA). Ideally, this value should be the same as the number of instances in the Postgres HA cluster, implying shared nothing architecture on the compute side. |
+| `successfullyExtracted`<br/>*bool* | SuccessfullyExtracted indicates if the topology data was extracted. It is useful to enact fallback behaviors in synchronous replica election in case of failures |
+
+
+
+
+## VolumeSnapshotConfiguration {#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration}
+
+**Appears in:**
+
+- [BackupConfiguration](#postgresql-k8s-enterprisedb-io-v1-BackupConfiguration)
+
+VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.
+
+
+| Field | Description |
+| ----- | ----------- |
+| `labels`<br/>*map[string]string* | Labels are key-value pairs that will be added to `.metadata.labels` of the snapshot resources. |
+| `annotations`<br/>*map[string]string* | Annotations are key-value pairs that will be added to `.metadata.annotations` of the snapshot resources. |
+| `className`<br/>*string* | ClassName specifies the Snapshot Class to be used for the PG_DATA PersistentVolumeClaim. It is the default class for the other types if no specific class is present |
+| `walClassName`<br/>*string* | WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim. |
+| `snapshotOwnerReference`<br/>*SnapshotOwnerReference* | SnapshotOwnerReference indicates the type of owner reference the snapshot should have. |
+
+
+
+
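+As an illustrative sketch, a volume snapshot backup configuration might look as
+follows (the snapshot class name and label are assumptions made up for this
+example):
+
+```yaml
+  backup:
+    volumeSnapshot:
+      className: csi-hostpath-snapclass
+      labels:
+        environment: production
+```
+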
+## WalBackupConfiguration {#postgresql-k8s-enterprisedb-io-v1-WalBackupConfiguration}
+
+**Appears in:**
+
+- [BarmanObjectStoreConfiguration](#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
+
+WalBackupConfiguration is the configuration of the backup of the
+WAL stream
+
+
+| Field | Description |
+| ----- | ----------- |
+| `compression`<br/>*CompressionType* | Compress a WAL file before sending it to the object store. Available options are empty string (no compression, default), `gzip`, `bzip2` or `snappy`. |
+| `encryption`<br/>*EncryptionType* | Whether to force the encryption of files (if the bucket is not already configured for that). Allowed options are empty string (use the bucket policy, default), `AES256` and `aws:kms` |
+| `maxParallel`<br/>*int* | Number of WAL files to be either archived in parallel (when the PostgreSQL instance is archiving to a backup object store) or restored in parallel (when a PostgreSQL standby is fetching WAL files from a recovery object store). If not specified, WAL files will be processed one at a time. It accepts a positive integer as a value - with 1 being the minimum accepted value. |
+
+
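+As an illustrative sketch, WAL compression and parallelism might be configured
+in a `barmanObjectStore` stanza as follows (the destination path is an
+assumption made up for this example):
+
+```yaml
+  backup:
+    barmanObjectStore:
+      destinationPath: "s3://BUCKET_NAME/path/to/folder"
+      wal:
+        compression: gzip
+        maxParallel: 8
+```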
+
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
index 63521eb0afc..103c6fb197f 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
@@ -71,7 +71,7 @@ Additionally, EDB Postgres for Kubernetes automatically creates a secret with th
same name of the pooler containing the configuration files used with PgBouncer.
!!! Seealso "API reference"
- For details, please refer to [`PgBouncerSpec` section](api_reference.md#PgBouncerSpec)
+ For details, please refer to [`PgBouncerSpec` section](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-PgBouncerSpec)
in the API reference.
## Pooler resource lifecycle
@@ -177,7 +177,7 @@ GRANT EXECUTE ON FUNCTION user_search(text)
You can take advantage of pod templates specification in the `template`
section of a `Pooler` resource. For details, please refer to [`PoolerSpec`
-section](api_reference.md#PoolerSpec) in the API reference.
+section](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-PoolerSpec) in the API reference.
Through templates you can configure pods as you like, including fine
control over affinity and anti-affinity rules for pods and nodes.
@@ -344,12 +344,13 @@ metrics having the `cnp_pgbouncer_` prefix, by running:
Similarly to the EDB Postgres for Kubernetes instance, the exporter runs on port
`9127` of each pod running PgBouncer, and also provides metrics related to the
-Go runtime (with prefix `go_*`). You can debug the exporter on a pod running
-PgBouncer through the following command:
+Go runtime (with prefix `go_*`).
-```console
-kubectl exec -ti -- curl 127.0.0.1:9127/metrics
-```
+!!! Info
+ You can inspect the exported metrics on a pod running PgBouncer, by following
+ the instructions provided in the
+ ["How to inspect the exported metrics" section from the "Monitoring" page](monitoring.md/#how-to-inspect-the-exported-metrics),
+ making sure that you use the correct IP and the `9127` port.
An example of the output for `cnp_pgbouncer` metrics:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
index bd6906061e8..5dacb1ae67f 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
@@ -61,7 +61,7 @@ $ kubectl cnp status
Cluster Summary
Name: cluster-example
Namespace: default
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.0
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
index 44b34bd9470..2d963716e9f 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_role_management.mdx
@@ -42,7 +42,7 @@ spec:
The role specification in `spec.managed.roles` adheres to the
[PostgreSQL structure and naming conventions](https://www.postgresql.org/docs/current/sql-createrole.html).
-Please refer to the [API reference](api_reference.md#RoleConfiguration) for
+Please refer to the [API reference](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-RoleConfiguration) for
the full list of attributes you can define for each role.
A few points are worth noting:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx b/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
index aa23a285ee7..f7b69a34488 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/fencing.mdx
@@ -81,8 +81,8 @@ kubectl cnp fencing off cluster-example "*"
Once an instance is set for fencing, the procedure to shut down the
`postmaster` process is initiated. This consists of an initial smart shutdown
-with a timeout set to `.spec.stopDelay`, followed by a fast shutdown if
-required. Then:
+with a timeout set to `.spec.smartStopDelay`, followed by a fast shutdown if
+required for up to `.spec.stopDelay` seconds. Then:
- the Pod will be kept alive
diff --git a/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png b/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
index 740b8cd6dee..8ba6940cd99 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
+++ b/product_docs/docs/postgres_for_kubernetes/1/images/grafana-local.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:ef0f2c974fe4037fe0e43d6bf2dcb6318cc251524b8e4cd05fc9518906a13a59
-size 303983
+oid sha256:1b6fd7597138faadf132fd13dce4df89bbef2e771a45241d2defa32607f029a5
+size 241795
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
index 554cedde64e..3fdbf36f667 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -96,12 +96,11 @@ primary/standby architecture, using native streaming replication.
## Features unique to EDB Postgres for Kubernetes
-- [Long Term Support](#long-term-support)
+- [Long Term Support](#long-term-support) for 1.18.x
- Red Hat certified operator for OpenShift
-- Support on IBM Power and z/Linux through partnership with IBM
-- [Oracle compatibility](https://www.enterprisedb.com/docs/epas/latest/fundamentals/epas_fundamentals/epas_compat_ora_dev_guide/) through EDB Postgres Advanced Sever
-- [Transparent Data Encryption (TDE)](https://www.enterprisedb.com/docs/tde/latest/) through EDB Postgres Advanced Server
+- Support on IBM Power
- EDB Postgres for Kubernetes Plugin
+- Oracle compatibility through EDB Postgres Advanced Server
- Velero/OADP cold backup support
- Generic adapter for third-party Kubernetes backup tools
@@ -115,20 +114,18 @@ You need a valid license key to use EDB Postgres for Kubernetes in production.
### Long Term Support
-EDB is committed to declaring a Long Term Support (LTS) version of EDB
-Postgres for Kubernetes annually (1.18 was our first). Each LTS version will
-receive maintenance releases and be supported for an additional 12 months beyond
-the last community release of CloudNativePG for the same version.
-
-For example, the last version of 1.18 of CloudNativePG was released on June 12, 2023.
-Because this was declared an LTS version of EDB Postgres for Kubernetes, it will be supported
-for additional 12 months until June 12, 2024.
-
-In addition, customers will always have at least 6 months to move between LTS versions. This
-means a new LTS version will be available by January 12, 2024 at the latest.
-
-While we encourage customers to regularly upgrade to the latest version of the operator to take
-advantage of new features, having LTS versions allows customers desiring additional stability to stay on the same
+EDB is committed to declaring one version of EDB Postgres for Kubernetes per
+year as a Long Term Support version. This version will be supported and receive
+maintenance releases for an additional 12 months beyond the last release of
+CloudNativePG by the community for the same version. For example, the last
+version of 1.18 of CloudNativePG was released on June 12, 2023. This was
+declared an LTS version of EDB Postgres for Kubernetes, so it will be supported
+for an additional 12 months, until June 12, 2024. Customers always have at
+least 6 months to move between LTS versions, which means the next LTS version
+will be available by January 12, 2024 at the latest.
+While we encourage customers to regularly upgrade to the
+latest version of the operator to take advantage of new features, having LTS
+versions allows customers desiring additional stability to stay on the same
version for 12-18 months before upgrading.
## Licensing
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
index 0cfb10c663b..0df44f4443d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -261,7 +261,7 @@ convention over configuration.
#### Backup from a standby
-[Backup from a standby](backup_recovery.md#backup-from-a-standby)
+[Backup from a standby](backup.md#backup-from-a-standby)
was introduced in EDB Postgres for Kubernetes 1.19, but disabled by default - meaning that
the base backup is taken from the primary unless the target is explicitly
set to prefer standby.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
index b89463072e4..27475b519ba 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/instance_manager.mdx
@@ -51,20 +51,22 @@ When a Pod running Postgres is deleted, either manually or by Kubernetes
following a node drain operation, the kubelet will send a termination signal to the
instance manager, and the instance manager will take care of shutting down
PostgreSQL in an appropriate way.
-The `.spec.stopDelay`, expressed in seconds, is the amount of time
-given to PostgreSQL to shut down. The value defaults to 30 seconds.
+The `.spec.smartStopDelay` and `.spec.stopDelay` options, expressed in seconds,
+control the amount of time given to PostgreSQL to shut down. The values default
+to 180 and 1800 seconds, respectively.
The shutdown procedure is composed of two steps:
1. The instance manager requests a **smart** shut down, disallowing any
- new connection to PostgreSQL. This step will last for half of the
- time set in `.spec.stopDelay`.
+ new connection to PostgreSQL. This step will last for up to
+ `.spec.smartStopDelay` seconds.
2. If PostgreSQL is still up, the instance manager requests a **fast**
shut down, terminating any existing connection and exiting promptly.
If the instance is archiving and/or streaming WAL files, the process
- will wait for up to the remaining half of the time set in `.spec.stopDelay`
- to complete the operation and then forcibly shut down.
+ will wait for up to the remaining time set in `.spec.stopDelay` to complete the
+ operation and then forcibly shut down. Such timeout is calculated using the
+ following formula: `max(stopDelay - smartStopDelay, 30)`.
!!! Important
In order to avoid any data loss in the Postgres cluster, which impacts
@@ -80,10 +82,7 @@ in order to ensure that all the data are available on the new primary.
For this reason, the `.spec.switchoverDelay`, expressed in seconds, controls
the time given to the former primary to shut down gracefully and archive all
-the WAL files.
-During this time frame, the primary instance does not accept connections.
-The value defaults is greater than one year in seconds, big enough to simulate
-an infinite delay and therefore preserve data durability.
+the WAL files. By default, it is set to `3600` seconds (1 hour).
!!! Warning
The `.spec.switchoverDelay` option affects the RPO and RTO of your
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
index 84a5744a4e3..2f1cace2e97 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -862,7 +862,7 @@ it from the actual pod. This means that you will be using the `postgres` user.
```shell
kubectl cnp psql cluster-example
-psql (15.3)
+psql (15.3 (Debian 15.3-1.pgdg110+1))
Type "help" for help.
postgres=#
@@ -873,7 +873,7 @@ select to work against a replica by using the `--replica` option:
```shell
kubectl cnp psql --replica cluster-example
-psql (15.3)
+psql (15.3 (Debian 15.3-1.pgdg110+1))
Type "help" for help.
@@ -889,16 +889,6 @@ postgres=# \q
This command will start `kubectl exec`, and the `kubectl` executable must be
reachable in your `PATH` variable to correctly work.
-!!! Note
-When connecting to instances running on OpenShift, you must explicitly
-pass a username to the `psql` command, because of a [security measure built into
-OpenShift](https://cloud.redhat.com/blog/a-guide-to-openshift-and-uids):
-
-```shell
-kubectl cnp psql cluster-example -- -U postgres
-```
-!!!
-
### Snapshotting a Postgres cluster
The `kubectl cnp snapshot` creates consistent snapshots of a Postgres
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
index 901043803e7..cb33a4d534b 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
@@ -29,6 +29,126 @@ they are automatically inherited by all resources created by it (including pods)
Label and annotation inheritance is the technique adopted by EDB Postgres for Kubernetes
in lieu of alternative approaches such as pod templates.
+## Predefined labels
+
+Below is a list of predefined labels that are managed by EDB Postgres for Kubernetes.
+
+`k8s.enterprisedb.io/backupName`
+: Backup identifier, only available on `Backup` and `VolumeSnapshot`
+ resources
+
+`k8s.enterprisedb.io/cluster`
+: Name of the cluster
+
+`k8s.enterprisedb.io/immediateBackup`
+: Applied to a `Backup` resource if the backup is the first one created from
+ a `ScheduledBackup` object having `immediate` set to `true`.
+
+`k8s.enterprisedb.io/instanceName`
+: Name of the PostgreSQL instance - this label replaces the old and
+ deprecated `postgresql` label
+
+`k8s.enterprisedb.io/jobRole`
+: Role of the job (e.g. `import`, `initdb`, `join`, ...)
+
+`k8s.enterprisedb.io/podRole`
+: Currently fixed to `instance` to identify a pod running PostgreSQL
+
+`k8s.enterprisedb.io/poolerName`
+: Name of the PgBouncer pooler
+
+`k8s.enterprisedb.io/pvcRole`
+: Purpose of the PVC, such as `PG_DATA` or `PG_WAL`
+
+`k8s.enterprisedb.io/reload`
+: Available on `ConfigMap` and `Secret` resources. When set to `true`,
+ a change in the resource will be automatically reloaded by the operator.
+
+`k8s.enterprisedb.io/scheduled-backup`
+: When available, name of the `ScheduledBackup` resource that created a given
+ `Backup` object.
+
+`role`
+: Whether the instance running in a pod is a `primary` or a `replica`
+
+## Predefined annotations
+
+Below is a list of predefined annotations that are managed by EDB Postgres for Kubernetes.
+
+`container.apparmor.security.beta.kubernetes.io/*`
+: Name of the AppArmor profile to apply to the named container.
+ See [AppArmor](security.md#restricting-pod-access-using-apparmor)
+ documentation for details
+
+`k8s.enterprisedb.io/coredumpFilter`
+: Filter to control the coredump of Postgres processes, expressed with a
+ bitmask. By default it is set to `0x31` in order to exclude shared memory
+ segments from the dump. Please refer to ["PostgreSQL core dumps"](troubleshooting.md#postgresql-core-dumps)
+ for more information.
+
+`k8s.enterprisedb.io/clusterManifest`
+: Manifest of the `Cluster` owning this resource (such as a PVC) - this annotation
+  replaces the old and deprecated `k8s.enterprisedb.io/hibernateClusterManifest` annotation
+
+`k8s.enterprisedb.io/fencedInstances`
+: List, expressed in JSON format, of the instances that need to be fenced.
+ The whole cluster is fenced if the list contains the `*` element.
+
+`k8s.enterprisedb.io/forceLegacyBackup`
+: Applied to a `Cluster` resource for testing purposes only, in order to
+ simulate the behavior of `barman-cloud-backup` prior to version 3.4 (Jan 2023)
+ when the `--name` option was not available.
+
+`k8s.enterprisedb.io/hash`
+: The hash value of the resource
+
+`k8s.enterprisedb.io/hibernation`
+: Applied to a `Cluster` resource to control the [declarative hibernation feature](declarative_hibernation.md).
+ Allowed values are `on` and `off`.
+
+`k8s.enterprisedb.io/managedSecrets`
+: Pull secrets managed by the operator and automatically set in the
+ `ServiceAccount` resources for each Postgres cluster
+
+`k8s.enterprisedb.io/nodeSerial`
+: On a pod resource, identifies the serial number of the instance within the
+ Postgres cluster
+
+`k8s.enterprisedb.io/operatorVersion`
+: Version of the operator
+
+`k8s.enterprisedb.io/pgControldata`
+: Output of the `pg_controldata` command - this annotation replaces the old and
+ deprecated `k8s.enterprisedb.io/hibernatePgControlData` annotation
+
+`k8s.enterprisedb.io/podEnvHash`
+: *Deprecated* as the `k8s.enterprisedb.io/podSpec` annotation now also contains the pod environment
+
+`k8s.enterprisedb.io/podSpec`
+: Snapshot of the `spec` of the Pod generated by the operator - this annotation replaces
+ the old and deprecated `k8s.enterprisedb.io/podEnvHash` annotation
+
+`k8s.enterprisedb.io/poolerSpecHash`
+: Hash of the pooler resource
+
+`k8s.enterprisedb.io/pvcStatus`
+: Current status of the PVC, one of `initializing`, `ready`, `detached`
+
+`k8s.enterprisedb.io/reconciliationLoop`
+: When set to `disabled` on a `Cluster`, the operator prevents the
+ reconciliation loop from running
+
+`k8s.enterprisedb.io/reloadedAt`
+: Contains the latest cluster `reload` time. A `reload` is triggered by the user through the plugin
+
+`k8s.enterprisedb.io/skipEmptyWalArchiveCheck`
+: When set to `true` on a `Cluster` resource, the operator disables the check
+ that ensures that the WAL archive is empty before writing data. Use at your own
+ risk.
+
+`kubectl.kubernetes.io/restartedAt`
+: When available, the time of last requested restart of a Postgres cluster
+
## Pre-requisites
By default, no label or annotation defined in the cluster's metadata is
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
index 2221bd3cb0d..80daa8fc175 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
@@ -176,7 +176,7 @@ cnp_collector_up{cluster="cluster-example"} 1
# HELP cnp_collector_postgres_version Postgres version
# TYPE cnp_collector_postgres_version gauge
-cnp_collector_postgres_version{cluster="cluster-example",full="15.3"} 15.3
+cnp_collector_postgres_version{cluster="cluster-example",full="16.0"} 16.0
# HELP cnp_collector_last_failed_backup_timestamp The last failed backup as a unix timestamp
# TYPE cnp_collector_last_failed_backup_timestamp gauge
@@ -689,7 +689,7 @@ metadata:
spec:
containers:
- name: curl
- image: curlimages/curl:7.84.0
+ image: curlimages/curl:8.2.1
command: ['sleep', '3600']
```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
index cb417fda042..4dec98b6713 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/openshift.mdx
@@ -984,7 +984,7 @@ enabled, so you can peek the `cnp_` prefix:
![Prometheus queries](./images/openshift/prometheus-queries.png)
It is easy to define Alerts based on the default metrics as `PrometheusRules`.
-You can find some examples of rules in the [prometheusrule.yaml](../samples/monitoring/prometheusrule.yaml)
+You can find some examples of rules in the [cnp-prometheusrule.yaml](../samples/monitoring/cnp-prometheusrule.yaml)
file, which you can download.
Before applying the rules, again, some OpenShift setup may be necessary.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
index 8a3cc8edb00..254526d27ae 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
@@ -335,20 +335,18 @@ failover and switchover operations. This area includes enhancements in:
- connection pooling, to improve performance and control through a
connection pooling layer with pgBouncer.
-### PostgreSQL Hot Backups
+### PostgreSQL WAL archive
-The operator has been designed to provide application-level backups using
-PostgreSQL’s native continuous hot backup technology based on
-physical base backups and continuous WAL archiving. Specifically,
-the operator currently supports only backups on object stores (AWS S3 and
-S3-compatible, Azure Blob Storage, Google Cloud Storage, and gateways like
-MinIO).
-
-WAL archiving and base backups are defined at the cluster level, declaratively,
-through the `backup` parameter in the cluster definition, by specifying
-an S3 protocol destination URL (for example, to point to a specific folder in
-an AWS S3 bucket) and, optionally, a generic endpoint URL. WAL archiving,
-a prerequisite for continuous backup, does not require any further
+The operator supports PostgreSQL continuous archiving of WAL files
+to an object store (AWS S3 and S3-compatible, Azure Blob Storage, Google Cloud
+Storage, and gateways like MinIO).
+
+WAL archiving is defined at the cluster level, declaratively, through the
+`backup` parameter in the cluster definition, by specifying an S3 protocol
+destination URL (for example, to point to a specific folder in an AWS S3
+bucket) and, optionally, a generic endpoint URL.
+
+WAL archiving, a prerequisite for continuous backup, does not require any further
action from the user: the operator will automatically and transparently set
the `archive_command` to rely on `barman-cloud-wal-archive` to ship WAL
files to the defined endpoint. Users can decide the compression algorithm,
@@ -357,11 +355,31 @@ in the archive. In addition to that `Instance Manager` automatically checks
the correctness of the archive destination, by performing `barman-cloud-check-wal-archive`
command before beginning to ship the very first set of WAL files.
+### PostgreSQL Hot Backups
+
+The operator has been designed to provide application-level backups using
+PostgreSQL’s native continuous hot backup technology based on
+physical base backups and continuous WAL archiving.
+Base backups can be saved on:
+
+- Kubernetes Volume Snapshots
+- object stores (AWS S3 and S3-compatible, Azure Blob Storage, Google Cloud
+ Storage, and gateways like MinIO)
+
+Base backups are defined at the cluster level, declaratively,
+through the `backup` parameter in the cluster definition.
+
You can define base backups in two ways: on-demand (through the `Backup`
custom resource definition) or scheduled (through the `ScheduledBackup`
-customer resource definition, using a cron-like syntax). They both rely on
-`barman-cloud-backup` for the job (distributed as part of the application
-container image) to relay backups in the same endpoint, alongside WAL files.
+custom resource definition, using a cron-like syntax).
+
+Volume Snapshots rely directly on the Kubernetes API, which delegates this
+capability to the underlying storage classes and CSI drivers. Volume snapshot
+backups are suitable for Very Large Database (VLDB) contexts.
+
+Object store backups rely on `barman-cloud-backup` for the job (distributed as
+part of the application container image) to relay backups in the same endpoint,
+alongside WAL files.
Both `barman-cloud-wal-restore` and `barman-cloud-backup` are distributed in
the application container image under GNU GPL 3 terms.
@@ -375,10 +393,12 @@ particular I/O, for standard database operations.
### Full restore from a backup
The operator enables you to bootstrap a new cluster (with its settings)
-starting from an existing and accessible backup taken using
-`barman-cloud-backup`. Once the bootstrap process is completed, the operator
-initiates the instance in recovery mode and replays all available WAL files
-from the specified archive, exiting recovery and starting as a primary.
+starting from an existing and accessible backup, either on a volume snapshot
+or in an object store.
+
+Once the bootstrap process is completed, the operator initiates the instance in
+recovery mode and replays all available WAL files from the specified archive,
+exiting recovery and starting as a primary.
Subsequently, the operator will clone the requested number of standby instances
from the primary.
EDB Postgres for Kubernetes supports parallel WAL fetching from the archive.
@@ -389,7 +409,7 @@ The operator enables you to create a new PostgreSQL cluster by recovering
an existing backup to a specific point-in-time, defined with a timestamp, a
label or a transaction ID. This capability is built on top of the full restore
one and supports all the options available in
-[PostgreSQL for PITR](https://www.postgresql.org/docs/13/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET).
+[PostgreSQL for PITR](https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET).
### Zero Data Loss clusters through synchronous replication
@@ -414,9 +434,9 @@ version: such a source can be anywhere, as long as a direct streaming
connection via TLS is allowed from the two endpoints.
Moreover, the source can be even outside Kubernetes, running in a physical or
virtual environment.
-Replica clusters can be created from a recovery object store (backup in Barman
-Cloud format) or via streaming through `pg_basebackup`. Both WAL file shipping
-and WAL streaming are allowed.
+Replica clusters can be created from a volume snapshot, a recovery object store
+(backup in Barman Cloud format) or via streaming through `pg_basebackup`.
+Both WAL file shipping and WAL streaming are allowed.
Replica clusters dramatically improve the business continuity posture of your
PostgreSQL databases in Kubernetes, spanning over multiple datacenters and
opening up for hybrid and multi-cloud setups (currently, manual switchover
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
index c018111c7be..9cdc8f7ff83 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/postgis.mdx
@@ -98,7 +98,7 @@ both the template database and the application database, ready for use.
!!! Info
Take some time and look at the available options in `.spec.bootstrap.initdb`
- from the [API reference](api_reference.md#BootstrapInitDB), such as
+ from the [API reference](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-BootstrapInitDB), such as
`postInitApplicationSQL`.
You can easily verify the available version of PostGIS that is in the
@@ -108,7 +108,7 @@ values from the ones in this document):
```console
$ kubectl exec -ti postgis-example-1 -- psql app
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
-psql (15.3 (Debian 15.3-1.pgdg110+1))
+psql (16.0 (Debian 16.0-1.pgdg110+1))
Type "help" for help.
app=# SELECT * FROM pg_available_extensions WHERE name ~ '^postgis' ORDER BY 1;
diff --git a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
index 881fb86b8fa..3046c0ccb02 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
@@ -17,8 +17,7 @@ using EDB Postgres for Kubernetes on a local Kubernetes cluster in [Kind](https:
Red Hat OpenShift Container Platform users can test the certified operator for
-EDB Postgres for Kubernetes on the [Red Hat CodeReady Containers (CRC)](https://developers.redhat.com/products/codeready-containers/overview)
-for OpenShift.
+EDB Postgres for Kubernetes on the [Red Hat OpenShift Local](https://developers.redhat.com/products/openshift-local/overview) (formerly Red Hat CodeReady Containers).
!!! Warning
The instructions contained in this section are for demonstration,
@@ -32,7 +31,7 @@ cluster on your local Kubernetes/Openshift installation and experiment with it.
!!! Important
Make sure that you have `kubectl` installed on your machine in order
- to connect to the Kubernetes cluster, or `oc` if using CRC for OpenShift.
+ to connect to the Kubernetes cluster, or `oc` if using OpenShift Local.
Please follow the Kubernetes documentation on [how to install `kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
or the Openshift documentation on [how to install `oc`](https://docs.openshift.com/container-platform/4.6/cli_reference/openshift_cli/getting-started-cli.html).
@@ -40,9 +39,9 @@ cluster on your local Kubernetes/Openshift installation and experiment with it.
If you are running Openshift, use `oc` every time `kubectl` is mentioned
in this documentation. `kubectl` commands are compatible with `oc` ones.
-## Part 1: Setup the local Kubernetes/Openshift playground
+## Part 1 - Setup the local Kubernetes/Openshift Local playground
-The first part is about installing Minikube, Kind, or CRC. Please spend some time
+The first part is about installing Minikube, Kind, or OpenShift Local. Please spend some time
reading about the systems and decide which one to proceed with.
After setting up one of them, please proceed with part 2.
@@ -85,9 +84,9 @@ then create a Kubernetes cluster with:
kind create cluster --name pg
```
-### CodeReady Containers (CRC)
+### OpenShift Local (formerly CodeReady Containers (CRC))
-1. [Download Red Hat CRC](https://developers.redhat.com/products/codeready-containers/overview)
+1. [Download OpenShift Local](https://developers.redhat.com/products/openshift-local/overview)
and move the binary inside a directory in your `PATH`.
2. Run the following commands:
@@ -106,7 +105,7 @@ kind create cluster --name pg
command. You can also open the web console running `crc console`.
User and password are the same as for the `oc login` command.
-5. CRC doesn't come with a StorageClass, so one has to be configured.
+5. OpenShift Local doesn't come with a StorageClass, so one has to be configured.
Follow the [Dynamic volume provisioning wiki page](https://github.com/code-ready/crc/wiki/Dynamic-volume-provisioning)
and install `rancher/local-path-provisioner`.
@@ -150,7 +149,7 @@ spec:
!!! Note "There's more"
For more detailed information about the available options, please refer
- to the ["API Reference" section](api_reference.md).
+ to the ["API Reference" section](cloudnative-pg.v1.md).
In order to create the 3-node PostgreSQL cluster, you need to run the following command:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
new file mode 100644
index 00000000000..d5cfcc9251a
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/recovery.mdx
@@ -0,0 +1,591 @@
+---
+title: 'Recovery'
+originalFilePath: 'src/recovery.md'
+---
+
+In PostgreSQL terminology, recovery is the process of starting a PostgreSQL
+instance using a previously taken backup. The PostgreSQL recovery mechanism
+is solid and feature-rich. It also supports Point In Time Recovery, which allows
+you to restore a given cluster up to any point in time from the first available
+backup in your catalog to the last archived WAL (as you can see, the WAL
+archive is mandatory in this case).
+
+In EDB Postgres for Kubernetes, recovery cannot be performed "in-place" on an existing
+cluster. Recovery is rather a way to bootstrap a new Postgres cluster
+starting from an available physical backup.
+
+!!! Note
+ For details on the `bootstrap` stanza, please refer to the
+ ["Bootstrap" section](bootstrap.md).
+
+The `recovery` bootstrap mode lets you create a new cluster from an existing
+physical base backup, and then reapply the WAL files containing the REDO log
+from the archive.
+
+WAL files are pulled from the defined *recovery object store*.
+
+Base backups, instead, are retrieved either from an object store or from
+volume snapshots, depending on the method used to take them.
+
+
+
+Recovery from an *object store* can be achieved in two ways:
+
+- using a recovery object store, that is a backup of another cluster
+ created by Barman Cloud and defined via the `barmanObjectStore` option
+ in the `externalClusters` section (*recommended*)
+- using an existing `Backup` object in the same namespace (this was the
+ only option available before version 1.8.0).
+
+Both recovery methods enable either full recovery (up to the last
+available WAL) or up to a [point in time](#point-in-time-recovery-pitr).
+When performing a full recovery, the cluster can also be started
+in replica mode. Also, make sure that the PostgreSQL configuration
+(`.spec.postgresql.parameters`) of the recovered cluster is
+compatible, from a physical replication standpoint, with the original one.
+
+EDB Postgres for Kubernetes is also introducing support for Kubernetes' volume snapshots.
+With the current version of EDB Postgres for Kubernetes, you can:
+
+- take a consistent cold backup of the Postgres cluster from a standby through
+ the `kubectl cnp snapshot` command - which creates the necessary
+ `VolumeSnapshot` objects (currently one or two, if you have WALs in a separate
+ volume)
+- recover from the above *VolumeSnapshot* objects through the `volumeSnapshots`
+ option in the `.spec.bootstrap.recovery` stanza, as described in
+ ["Recovery from `VolumeSnapshot` objects"](#recovery-from-volumesnapshot-objects)
+ below
+
+## Recovery from an object store
+
+You can recover from a backup created by Barman Cloud and stored on a supported
+object storage. Once you have defined the external cluster, including all the
+required configuration in the `barmanObjectStore` section, you need to
+reference it in the `.spec.recovery.source` option. The following example
+defines a recovery object store in a blob container in Azure:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ [...]
+
+ superuserSecret:
+ name: superuser-secret
+
+ bootstrap:
+ recovery:
+ source: clusterBackup
+
+ externalClusters:
+ - name: clusterBackup
+ barmanObjectStore:
+ destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
+ azureCredentials:
+ storageAccount:
+ name: recovery-object-store-secret
+ key: storage_account_name
+ storageKey:
+ name: recovery-object-store-secret
+ key: storage_account_key
+ wal:
+ maxParallel: 8
+```
+
+!!! Important
+ By default the `recovery` method strictly uses the `name` of the
+ cluster in the `externalClusters` section to locate the main folder
+ of the backup data within the object store, which is normally reserved
+ for the name of the server. You can specify a different one with the
+ `barmanObjectStore.serverName` property (by default assigned to the
+ value of `name` in the external clusters definition).
+
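+As a minimal sketch (the folder name `pg-production` is purely hypothetical),
+pointing the recovery at a folder that differs from the new cluster's name
+could look like this:
+
+```yaml
+  externalClusters:
+    - name: clusterBackup
+      barmanObjectStore:
+        # Hypothetical folder name used by the source cluster in the object store
+        serverName: pg-production
+        destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
+        # [...] same credentials and WAL settings as in the example above
+```
+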
+!!! Note
+ In the above example we are taking advantage of the parallel WAL restore
+ feature, dedicating up to 8 jobs to concurrently fetch the required WAL
+ files from the archive. This feature can appreciably reduce the recovery time.
+ Make sure that you plan ahead for this scenario and correctly tune the
+ value of this parameter for your environment. It will certainly make a
+ difference **when** (not if) you'll need it.
+
+## Recovery from `VolumeSnapshot` objects
+
+EDB Postgres for Kubernetes can create a new cluster from a `VolumeSnapshot` of a PVC of an
+existing `Cluster` that's been taken using the declarative API for
+[volume snapshot backups](backup_volumesnapshot.md).
+You need to specify the name of the snapshot as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ [...]
+
+  bootstrap:
+    recovery:
+      volumeSnapshots:
+        storage:
+          name:
+          kind: VolumeSnapshot
+          apiGroup: snapshot.storage.k8s.io
+```
+
+!!! Warning
+ As the development of declarative support for Kubernetes' `VolumeSnapshot` API
+ progresses, you'll be able to use this technique in conjunction with a WAL
+ archive for Point In Time Recovery operations or replica clusters.
+
+In case the backed-up cluster was using a separate PVC to store the WAL files,
+the recovery must include that too:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ [...]
+
+  bootstrap:
+    recovery:
+      volumeSnapshots:
+        storage:
+          name:
+          kind: VolumeSnapshot
+          apiGroup: snapshot.storage.k8s.io
+
+        walStorage:
+          name:
+          kind: VolumeSnapshot
+          apiGroup: snapshot.storage.k8s.io
+```
+
+## Recovery from a `Backup` object
+
+In case a Backup resource is already available in the namespace in which the
+cluster should be created, you can specify its name through
+`.spec.bootstrap.recovery.backup.name`, as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-initdb
+spec:
+ instances: 3
+
+ superuserSecret:
+ name: superuser-secret
+
+ bootstrap:
+ recovery:
+ backup:
+ name: backup-example
+
+ storage:
+ size: 1Gi
+```
+
+This bootstrap method allows you to specify just a reference to the
+backup that needs to be restored.
+
+## Additional considerations
+
+Whether you recover from a recovery object store, a volume snapshot, or an
+existing `Backup` resource, the following considerations apply:
+
+- The application database name and the application database user are preserved
+ from the backup that is being restored. The operator does not currently attempt
+ to back up the underlying secrets, as this is part of the usual maintenance
+ activity of the Kubernetes cluster itself.
+- In case you don't supply any `superuserSecret`, a new one is automatically
+ generated with a secure and random password. The secret is then used to
+ reset the password for the `postgres` user of the cluster.
+- By default, the recovery will continue up to the latest
+ available WAL on the default target timeline (`current` for PostgreSQL up to
+ 11, `latest` for version 12 and above).
+ You can optionally specify a `recoveryTarget` to perform a point in time
+ recovery (see the ["Point in time recovery" section](#point-in-time-recovery-pitr)).
+
+!!! Important
+ Consider using the `barmanObjectStore.wal.maxParallel` option to speed
+ up WAL fetching from the archive by concurrently downloading the transaction
+ logs from the recovery object store.
+
+## Point in time recovery (PITR)
+
+Instead of replaying all the WALs up to the latest one, we can ask PostgreSQL
+to stop replaying WALs at any given point in time, after having extracted a
+base backup. PostgreSQL uses this technique to achieve *point-in-time* recovery
+(PITR). The presence of a WAL archive is mandatory.
+
+!!! Important
+ PITR requires you to specify a **recovery target**, by using the options
+ described in the ["Recovery targets" section](#recovery-targets) below.
+
+The operator will generate the configuration parameters required for this
+feature to work in case a recovery target is specified.
+
+### PITR from an object store
+
+The example below uses a recovery object store in Azure that contains both
+the base backups and the WAL archive. The recovery target is based on a
+requested timestamp:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore-pitr
+spec:
+ instances: 3
+
+ storage:
+ size: 5Gi
+
+ bootstrap:
+ recovery:
+ # Recovery object store containing WAL archive and base backups
+ source: clusterBackup
+ recoveryTarget:
+ # Time base target for the recovery
+ targetTime: "2023-08-11 11:14:21.00000+02"
+
+ externalClusters:
+ - name: clusterBackup
+ barmanObjectStore:
+ destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
+ azureCredentials:
+ storageAccount:
+ name: recovery-object-store-secret
+ key: storage_account_name
+ storageKey:
+ name: recovery-object-store-secret
+ key: storage_account_key
+ wal:
+ maxParallel: 8
+```
+
+You might have noticed that in the above example you only had to specify
+the `targetTime` in the form of a timestamp, without having to worry about
+specifying the base backup from which to start the recovery.
+
+The `backupID` option is the one that allows you to specify the base backup
+from which to initiate the recovery process. By default, this value is
+empty.
+
+If you assign a value to it (in the form of a Barman backup ID), the operator
+will use that backup as base for the recovery.
+
+!!! Important
+ You need to make sure that such a backup exists and is accessible.
+
+If the backup ID is not specified, the operator will automatically detect the
+base backup for the recovery as follows:
+
+- when you use `targetTime` or `targetLSN`, the operator selects the closest
+ backup that was completed before that target
+- otherwise the operator selects the last available backup in chronological
+ order.
+
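+As a hedged sketch (the backup ID below is purely illustrative), explicitly
+pinning the base backup while keeping a time-based target could look like this:
+
+```yaml
+  bootstrap:
+    recovery:
+      source: clusterBackup
+      recoveryTarget:
+        # Hypothetical Barman backup ID of a base backup completed before the target time
+        backupID: 20230808T120000
+        targetTime: "2023-08-11 11:14:21.00000+02"
+```
+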
+### PITR from `VolumeSnapshot` objects
+
+The example below uses:
+
+- a Kubernetes volume snapshot for the `PGDATA` containing the base backup from
+ which to start the recovery process, identified in the
+ `recovery.volumeSnapshots` section and called `test-snapshot-1`
+- a recovery object store in MinIO containing the WAL archive, identified by
+ the `recovery.source` option in the form of an external cluster definition
+
+The recovery target is based on a requested timestamp.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-snapshot
+spec:
+ # ...
+ bootstrap:
+ recovery:
+ source: cluster-example-with-backup
+ volumeSnapshots:
+ storage:
+ name: test-snapshot-1
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ recoveryTarget:
+ targetTime: "2023-07-06T08:00:39"
+ externalClusters:
+ - name: cluster-example-with-backup
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+```
+
+!!! Note
+    In case the backed-up Cluster had `walStorage` enabled, you must also
+    specify the volume snapshot containing the `PGWAL` directory, as mentioned
+    in the [Recovery from VolumeSnapshot objects](#recovery-from-volumesnapshot-objects)
+    section.
+
+!!! Warning
+ It is your responsibility to ensure that the end time of the base backup in
+ the volume snapshot is prior to the recovery target timestamp.
+
+### Recovery targets
+
+Here are the recovery target criteria you can use:
+
+targetTime
+: time stamp up to which recovery will proceed, expressed in
+ [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339) format
+ (the precise stopping point is also influenced by the `exclusive` option)
+
+targetXID
+: transaction ID up to which recovery will proceed
+ (the precise stopping point is also influenced by the `exclusive` option);
+ keep in mind that while transaction IDs are assigned sequentially at
+ transaction start, transactions can complete in a different numeric order.
+ The transactions that will be recovered are those that committed before
+ (and optionally including) the specified one
+
+targetName
+: named restore point (created with `pg_create_restore_point()`) to which
+ recovery will proceed
+
+targetLSN
+: LSN of the write-ahead log location up to which recovery will proceed
+ (the precise stopping point is also influenced by the `exclusive` option)
+
+targetImmediate
+: recovery should end as soon as a consistent state is reached - i.e. as early
+ as possible. When restoring from an online backup, this means the point where
+ taking the backup ended
+
+!!! Important
+ While the operator is able to automatically retrieve the closest backup
+ when either `targetTime` or `targetLSN` is specified, this is not possible
+ for the remaining targets: `targetName`, `targetXID`, and `targetImmediate`.
+ In such cases, it is important to specify `backupID`, unless you are OK with
+ the last available backup in the catalog.
+
+The example below uses a `targetName` based recovery target:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+ bootstrap:
+ recovery:
+ source: clusterBackup
+ recoveryTarget:
+ backupID: 20220616T142236
+ targetName: 'restore_point_1'
+[...]
+```
+
+You can choose only a single one among the targets above in each
+`recoveryTarget` configuration.
+
+Additionally, you can specify `targetTLI` to force recovery to a specific
+timeline.
+
+By default, the previous parameters are considered to be inclusive, stopping
+just after the recovery target, matching [the behavior in PostgreSQL](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
+You can request exclusive behavior, stopping right before the recovery target,
+by setting the `exclusive` parameter to `true`, as in the following example
+relying on a blob container in Azure for both base backups and the WAL archive:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore-pitr
+spec:
+ instances: 3
+
+ storage:
+ size: 5Gi
+
+ bootstrap:
+ recovery:
+ source: clusterBackup
+ recoveryTarget:
+ backupID: 20220616T142236
+ targetName: "maintenance-activity"
+ exclusive: true
+
+ externalClusters:
+ - name: clusterBackup
+ barmanObjectStore:
+ destinationPath: https://STORAGEACCOUNTNAME.blob.core.windows.net/CONTAINERNAME/
+ azureCredentials:
+ storageAccount:
+ name: recovery-object-store-secret
+ key: storage_account_name
+ storageKey:
+ name: recovery-object-store-secret
+ key: storage_account_key
+ wal:
+ maxParallel: 8
+```
+
+## Configure the application database
+
+For the recovered cluster, we can configure the application database name and
+credentials with additional configuration. To update the application database
+credentials, we can generate our own passwords, store them as secrets, and
+update the database to use those secrets. Alternatively, we can let the
+operator generate a secret with a secure, random password. Please refer to the
+["Bootstrap an empty cluster"](bootstrap.md#bootstrap-an-empty-cluster-initdb)
+section for more information about secrets.
+
+The following example configures the application database `app` with owner
+`app` and the supplied secret `app-secret`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ bootstrap:
+ recovery:
+ database: app
+ owner: app
+ secret:
+ name: app-secret
+ [...]
+```
+
+With the above configuration, the following will happen after recovery is completed:
+
+1. If database `app` does not exist, a new database `app` will be created.
+2. If user `app` does not exist, a new user `app` will be created.
+3. If user `app` is not the owner of the database, user `app` will be made
+   the owner of database `app`.
+4. If the value of `username` matches the value of `owner` in the secret, the
+   password of the application database will be changed to the value of
+   `password` in the secret.
+
+!!! Important
+ For a replica cluster with replica mode enabled, the operator will not
+ create any database or user in the PostgreSQL instance, as these will be
+ recovered from the original cluster.
+
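+The password for the application database is read from the referenced secret.
+A minimal sketch of the `app-secret` used above, assuming a standard
+`kubernetes.io/basic-auth` secret with `username` and `password` keys, could
+look like this:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: app-secret
+type: kubernetes.io/basic-auth
+stringData:
+  # Must match the owner of the application database
+  username: app
+  # Hypothetical value - generate your own secure password
+  password: ChangeMe-NotARealPassword
+```
+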
+## How recovery works under the hood
+
+
+
+You can use the data uploaded to the object storage to *bootstrap* a
+new cluster from a previously taken backup.
+The operator will orchestrate the recovery process using the
+`barman-cloud-restore` tool (for the base backup) and the
+`barman-cloud-wal-restore` tool (for WAL files, including parallel support, if
+requested).
+
+For details and instructions on the `recovery` bootstrap method, please refer
+to the ["Bootstrap from a backup" section](bootstrap.md#bootstrap-from-a-backup-recovery).
+
+!!! Important
+ If you are not familiar with how [PostgreSQL PITR](https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-PITR-RECOVERY)
+ works, we suggest that you configure the recovery cluster as the original
+ one when it comes to `.spec.postgresql.parameters`. Once the new cluster is
+ restored, you can then change the settings as desired.
+
+Under the hood, the operator will inject an init container in the first
+instance of the new cluster, and the init container will start recovering the
+backup from the object storage.
+
+!!! Important
+ The duration of the base backup copy in the new PVC depends on
+ the size of the backup, as well as the speed of both the network and the
+ storage.
+
+When the base backup recovery process is completed, the operator starts the
+Postgres instance in recovery mode: in this phase, PostgreSQL is up, albeit not
+able to accept connections, and the pod is healthy according to the
+liveness probe. Through the `restore_command`, PostgreSQL starts fetching WAL
+files from the archive (you can speed up this phase by setting the
+`maxParallel` option and enabling the parallel WAL restore capability).
+
+This phase terminates when PostgreSQL reaches the target (either the end of the
+WAL or the required target in case of Point-In-Time-Recovery). Indeed, you can
+optionally specify a `recoveryTarget` to perform a point in time recovery. If
+left unspecified, the recovery will continue up to the latest available WAL on
+the default target timeline (`current` for PostgreSQL up to 11, `latest` for
+version 12 and above).
+
+Once the recovery is complete, the operator will set the required
+superuser password into the instance. The new primary instance will start
+as usual, and the remaining instances will join the cluster as replicas.
+
+The process is transparent for the user and it is managed by the instance
+manager running in the Pods.
+
+## Restoring into a cluster with a backup section
+
+
+
+A manifest for a cluster restore may include a `backup` section.
+This means that the new cluster, after recovery, will start archiving WAL files
+and taking backups if configured to do so.
+
+For example, the section below could be part of a manifest for a Cluster
+bootstrapping from Cluster `cluster-example-backup`, and would create a
+new folder in the storage bucket named `recoveredCluster`, where the base backups
+and WAL files of the recovered cluster would be stored.
+
+```yaml
+ backup:
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ serverName: "recoveredCluster"
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ retentionPolicy: "30d"
+
+ externalClusters:
+ - name: cluster-example-backup
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+```
+
+You should not reuse the exact same `barmanObjectStore` configuration
+for different clusters, as the new cluster could overwrite the existing
+information in the storage bucket.
+
+!!! Warning
+ The operator includes a safety check to ensure a cluster will not
+ overwrite a storage bucket that contained information. A cluster that would
+ overwrite existing storage will remain in state `Setting up primary` with
+ Pods in an Error state.
+ The pod logs will show:
+ `ERROR: WAL archive check failed for server recoveredCluster: Expected empty archive`
+
+!!! Important
+ If you set the `k8s.enterprisedb.io/skipEmptyWalArchiveCheck` annotation to `enabled` in
+ the recovered cluster, you can skip the above check. This is not recommended
+    as the above check works fine for the general use case. Please don't do
+    this unless you are familiar with the PostgreSQL recovery system, as it can
+    lead to severe data loss.
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
index e568ba94361..1372ffb7b86 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
@@ -25,29 +25,39 @@ and kept synchronized through the
[replica cluster](architecture.md#deployments-across-kubernetes-clusters) feature. The source
can be a primary cluster or another replica cluster (cascading replica cluster).
-The available options in terms of replication, both at bootstrap and continuous
-recovery level, are:
+The first step is to bootstrap the replica cluster, choosing among one of the
+available methods:
+
+- streaming replication, via `pg_basebackup`
+- recovery from a volume snapshot
+- recovery from a Barman Cloud backup in an object store
+
+Please refer to the ["Bootstrap" section](bootstrap.md#bootstrap-from-another-cluster)
+for information on how to clone a PostgreSQL server using either
+`pg_basebackup` (streaming) or `recovery` (volume snapshot or object store).
+
+Once the replica cluster's base backup is available, you need to define how
+changes are replicated from the origin, through PostgreSQL continuous recovery.
+There are two options:
- use streaming replication between the replica cluster and the source
(this will certainly require some administrative and security related
work to be done to make sure that the network connection between the
two clusters are correctly setup)
-- use a Barman Cloud object store for recovery of the base backups and
- the WAL files that are regularly shipped from the source to the object
- store and pulled by `barman-cloud-wal-restore` in the replica cluster
+- use the WAL archive (on an object store) to fetch the WAL files that are
+ regularly shipped from the source to the object store and pulled by
+ `barman-cloud-wal-restore` in the replica cluster
- any of the two
All you have to do is actually define an external cluster.
-Please refer to the ["Bootstrap" section](bootstrap.md#bootstrap-from-another-cluster)
-for information on how to clone a PostgreSQL server using either
-`pg_basebackup` (streaming) or `recovery` (object store).
If the external cluster contains a `barmanObjectStore` section:
+- you'll be able to use the WAL archive, and EDB Postgres for Kubernetes will automatically
+ set the `restore_command` in the designated primary instance
- you'll be able to bootstrap the replica cluster from an object store
- using the `recovery` section
-- EDB Postgres for Kubernetes will automatically set the `restore_command`
- in the designated primary instance
+ using the `recovery` section, in case you cannot take advantage of
+ volume snapshots
If the external cluster contains a `connectionParameters` section:
@@ -79,12 +89,14 @@ file and define the following parts accordingly:
- define the `externalClusters` section in the replica cluster
- define the bootstrap part for the replica cluster. We can either bootstrap via
- streaming using the `pg_basebackup` section, or bootstrap from an object store
- using the `recovery` section
+ streaming using the `pg_basebackup` section, or bootstrap from a volume snapshot
+ or an object store using the `recovery` section
- define the continuous recovery part (`spec.replica`) in the replica cluster. All
we need to do is to enable the replica mode through option `spec.replica.enabled`
and set the `externalClusters` name in option `spec.replica.source`
+#### Example using pg_basebackup
+
This **first example** defines a replica cluster using streaming replication in
both bootstrap and continuous recovery. The replica cluster connects to the
source cluster using TLS authentication.
@@ -128,6 +140,8 @@ in case the replica cluster is in a separate namespace.
key: ca.crt
```
+#### Example using a Backup from an object store
+
The **second example** defines a replica cluster that bootstraps from an object
store using the `recovery` section and continuous recovery using both streaming
replication and the given object store. For streaming replication, the replica
@@ -176,6 +190,21 @@ a backup of the source cluster has been created already.
clusters, and that all the necessary secrets which hold passwords or
certificates are properly created in advance.
+#### Example using a Volume Snapshot
+
+If you use volume snapshots and your storage class provides
+cross-cluster snapshot availability, you can leverage that to
+bootstrap a replica cluster through a volume snapshot of the
+source cluster.
+
+The **third example** defines a replica cluster that bootstraps
+from a volume snapshot using the `recovery` section. It uses
+streaming replication (via basic authentication) and the object
+store to fetch the WAL files.
+
+You can check the [sample YAML](../samples/cluster-example-replica-from-volume-snapshot.yaml)
+for it in the `samples/` subdirectory.
+
## Promoting the designated primary in the replica cluster
To promote the **designated primary** to **primary**, all we need to do is to
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
index 21277446ce8..4cd121efa84 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
@@ -229,7 +229,7 @@ In EDB Postgres for Kubernetes, we use the terms:
This feature, introduced in EDB Postgres for Kubernetes 1.18, is now enabled by default and
can be disabled via configuration. For details, please refer to the
-["replicationSlots" section in the API reference](api_reference.md#ReplicationSlotsConfiguration).
+["replicationSlots" section in the API reference](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration).
Here follows a brief description of the main options:
`.spec.replicationSlots.highAvailability.enabled`
@@ -305,4 +305,28 @@ the lag from the primary.
!!! Seealso "Monitoring"
Please refer to the ["Monitoring" section](monitoring.md) for details on
- how to monitor a EDB Postgres for Kubernetes deployment.
\ No newline at end of file
+    how to monitor an EDB Postgres for Kubernetes deployment.
+
+### Capping the WAL size retained for replication slots
+
+When replication slots are enabled, you might end up running out of disk
+space due to PostgreSQL trying to retain WAL files requested by a replication
+slot. This might happen when a standby is down (perhaps temporarily), is
+lagging, or simply because of an orphan replication slot.
+
+Starting with PostgreSQL 13, you can take advantage of the
+[`max_slot_wal_keep_size`](https://www.postgresql.org/docs/current/runtime-config-replication.html#GUC-MAX-SLOT-WAL-KEEP-SIZE)
+configuration option controlling the maximum size of WAL files that replication
+slots are allowed to retain in the `pg_wal` directory at checkpoint time.
+By default, in PostgreSQL `max_slot_wal_keep_size` is set to `-1`, meaning that
+replication slots may retain an unlimited amount of WAL files.
+As a result, our recommendation is to explicitly set `max_slot_wal_keep_size`
+when replication slots support is enabled. For example:
+
+```yaml
+ # ...
+ postgresql:
+ parameters:
+ max_slot_wal_keep_size: "10GB"
+ # ...
+```
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx b/product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx
index 42e2d78e0fc..704ef4eb44c 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/resource_management.mdx
@@ -54,7 +54,7 @@ while creating a cluster:
in a VM or physical machine scenario - see below).
- Set up database server pods on a dedicated node using nodeSelector.
See the "nodeSelector" and "tolerations" fields of the
- [“affinityconfiguration"](api_reference.md#affinityconfiguration) resource on the API reference page.
+ [“affinityconfiguration"](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-AffinityConfiguration) resource on the API reference page.
You can refer to the following example manifest:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
index 3b97553f228..7add2d6e076 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
@@ -48,7 +48,7 @@ PostGIS example
: [`postgis-example.yaml`](../samples/postgis-example.yaml):
an example of "PostGIS cluster" (see the [PostGIS section](postgis.md) for details.)
-Replica cluster via streaming
+Replica cluster via streaming (pg_basebackup)
: **Prerequisites**: [`cluster-example.yaml`](../samples/cluster-example.yaml)
applied and Healthy
: [`cluster-example-replica-streaming.yaml`](../samples/cluster-example-replica-streaming.yaml): a replica cluster following `cluster-example` with streaming replication.
@@ -59,7 +59,7 @@ Simple cluster with backup configured
: [`cluster-example-with-backup.yaml`](../samples/cluster-example-with-backup.yaml)
a basic cluster with backups configured.
-Replica cluster via backup
+Replica cluster via Backup from an object store
: **Prerequisites**:
[`cluster-storage-class-with-backup.yaml`](../samples/cluster-storage-class-with-backup.yaml) applied and Healthy.
And a backup
@@ -68,6 +68,15 @@ Replica cluster via backup
: [`cluster-example-replica-from-backup-simple.yaml`](../samples/cluster-example-replica-from-backup-simple.yaml):
a replica cluster following a cluster with backup configured.
+Replica cluster via Volume Snapshot
+: **Prerequisites**:
+ [`cluster-example-with-volume-snapshot.yaml`](../samples/cluster-example-with-volume-snapshot.yaml) applied and Healthy.
+ And a volume snapshot
+ [`backup-with-volume-snapshot.yaml`](../samples/backup-with-volume-snapshot.yaml)
+ applied and Completed.
+: [`cluster-example-replica-from-volume-snapshot.yaml`](../samples/cluster-example-replica-from-volume-snapshot.yaml):
+ a replica cluster following a cluster with volume snapshot configured.
+
Bootstrap cluster with SQL files
: [`cluster-example-initdb-sql-refs.yaml`](../samples/cluster-example-initdb-sql-refs.yaml):
a cluster example that will execute a set of queries defined in a Secret and a ConfigMap right after the database is created.
@@ -90,4 +99,4 @@ Cluster with TDE enabled
an EPAS 15 cluster with TDE. Note that you will need access credentials
to download the image used.
-For a list of available options, please refer to the ["API Reference" page](api_reference.md).
\ No newline at end of file
+For a list of available options, please refer to the ["API Reference" page](cloudnative-pg.v1.md).
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml
new file mode 100644
index 00000000000..371c8f0beba
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/backup-with-volume-snapshot.yaml
@@ -0,0 +1,8 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Backup
+metadata:
+ name: backup-with-volume-snapshot
+spec:
+ method: volumeSnapshot
+ cluster:
+ name: cluster-example-with-volume-snapshot
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
index 79b833f6912..f7e772b6c02 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
@@ -35,7 +35,7 @@ metadata:
name: cluster-example-full
spec:
description: "Example of cluster"
- imageName: quay.io/enterprisedb/postgresql:15.3
+ imageName: quay.io/enterprisedb/postgresql:16.0
# imagePullSecret is only required if the images are located in a private registry
# imagePullSecrets:
# - name: private_registry_access
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml
index 42834b2f7fd..88e6951652a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-monitoring.yaml
@@ -25,38 +25,6 @@ metadata:
k8s.enterprisedb.io/reload: ""
data:
custom-queries: |
- pg_replication:
- query: "SELECT CASE WHEN NOT pg_is_in_recovery()
- THEN 0
- ELSE GREATEST (0,
- EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())))
- END AS lag,
- pg_is_in_recovery() AS in_recovery,
- EXISTS (TABLE pg_stat_wal_receiver) AS is_wal_receiver_up,
- (SELECT count(*) FROM pg_stat_replication) AS streaming_replicas"
-
- metrics:
- - lag:
- usage: "GAUGE"
- description: "Replication lag behind primary in seconds"
- - in_recovery:
- usage: "GAUGE"
- description: "Whether the instance is in recovery"
- - is_wal_receiver_up:
- usage: "GAUGE"
- description: "Whether the instance wal_receiver is up"
- - streaming_replicas:
- usage: "GAUGE"
- description: "Number of streaming replicas connected to the instance"
-
- pg_postmaster:
- query: "SELECT pg_postmaster_start_time as start_time from pg_postmaster_start_time()"
- primary: true
- metrics:
- - start_time:
- usage: "GAUGE"
- description: "Time at which postgres started"
-
pg_stat_user_tables:
target_databases:
- "*"
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml
new file mode 100644
index 00000000000..ca3bc3dc2eb
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-from-volume-snapshot.yaml
@@ -0,0 +1,54 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-replica-from-snapshot
+spec:
+ instances: 1
+
+ storage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+ walStorage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+
+ bootstrap:
+ recovery:
+ source: cluster-example-with-volume-snapshot
+ volumeSnapshots:
+ storage:
+ name: cluster-example-with-volume-snapshot-2-1692618163
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ walStorage:
+ name: cluster-example-with-volume-snapshot-2-wal-1692618163
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+
+ replica:
+ enabled: true
+ source: cluster-example-with-volume-snapshot
+
+ externalClusters:
+ - name: cluster-example-with-volume-snapshot
+
+ connectionParameters:
+ host: cluster-example-with-volume-snapshot-rw.default.svc
+ user: postgres
+ dbname: postgres
+ password:
+ name: cluster-example-with-volume-snapshot-superuser
+ key: password
+
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ wal:
+ maxParallel: 8
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml
index 847a9d4dbe3..63eba35085d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-replica-streaming.yaml
@@ -10,7 +10,7 @@ spec:
source: cluster-example
replica:
- enabled: false
+ enabled: true
source: cluster-example
storage:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml
index 9caf09bb71d..a0a99d90b41 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-backup.yaml
@@ -8,7 +8,7 @@ spec:
# Persistent storage configuration
storage:
- storageClass: standard
+ storageClass: csi-hostpath-sc
size: 1Gi
# Backup properties
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml
new file mode 100644
index 00000000000..ef58162a061
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-with-volume-snapshot.yaml
@@ -0,0 +1,32 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example-with-volume-snapshot
+spec:
+ instances: 3
+ primaryUpdateStrategy: unsupervised
+
+ # Persistent storage configuration
+ storage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+ walStorage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+
+ # Backup properties
+ backup:
+ volumeSnapshot:
+ className: csi-hostpath-snapclass
+ barmanObjectStore:
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ wal:
+ compression: gzip
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml
new file mode 100644
index 00000000000..04017896e4c
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-full.yaml
@@ -0,0 +1,37 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore-pitr
+spec:
+ instances: 3
+
+ storage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+
+ externalClusters:
+ - name: origin
+
+ barmanObjectStore:
+ serverName: cluster-example-with-backup
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ wal:
+ maxParallel: 8
+
+ bootstrap:
+ recovery:
+ source: origin
+
+ volumeSnapshots:
+ storage:
+ name: cluster-example-with-backup-3-1692618163
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml
new file mode 100644
index 00000000000..67890530b5f
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot-pitr.yaml
@@ -0,0 +1,40 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore-pitr
+spec:
+ instances: 3
+
+ storage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+
+ externalClusters:
+ - name: origin
+
+ barmanObjectStore:
+ serverName: cluster-example-with-backup
+ destinationPath: s3://backups/
+ endpointURL: http://minio:9000
+ s3Credentials:
+ accessKeyId:
+ name: minio
+ key: ACCESS_KEY_ID
+ secretAccessKey:
+ name: minio
+ key: ACCESS_SECRET_KEY
+ wal:
+ maxParallel: 8
+
+ bootstrap:
+ recovery:
+ source: origin
+
+ recoveryTarget:
+ targetTime: "2023-08-21 12:00:00.00000+00"
+
+ volumeSnapshots:
+ storage:
+ name: cluster-example-with-backup-3-1692618163
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml
index 5a2f24f2883..8c0acbf8a14 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-restore-snapshot.yaml
@@ -7,12 +7,13 @@ spec:
storage:
size: 1Gi
+ storageClass: csi-hostpath-sc
bootstrap:
recovery:
volumeSnapshots:
storage:
- name: my-backup
+ name: cluster-example-with-backup-3-1692618163
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-configmap.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-configmap.yaml
index 1e14184ae1f..c93bc112053 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-configmap.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-configmap.yaml
@@ -77,8 +77,8 @@ data:
},
"id": 334,
"options": {
- "alertInstanceLabelFilter": "",
- "alertName": "Database",
+ "alertInstanceLabelFilter": "{namespace=~\"$namespace\",pod=~\"$cluster-[0-9]+$\"}",
+ "alertName": "",
"dashboardAlerts": false,
"folder": "",
"groupBy": [],
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-dashboard.json b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-dashboard.json
index f389574ea43..07a7a146fdc 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-dashboard.json
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/monitoring/grafana-dashboard.json
@@ -135,8 +135,8 @@
},
"id": 334,
"options": {
- "alertInstanceLabelFilter": "",
- "alertName": "Database",
+ "alertInstanceLabelFilter": "{namespace=~\"$namespace\",pod=~\"$cluster-[0-9]+$\"}",
+ "alertName": "",
"dashboardAlerts": false,
"folder": "",
"groupBy": [],
diff --git a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
index df12107675f..aee1552fdb6 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
@@ -14,7 +14,7 @@ the best node possible, based on several criteria.
anti-affinity, node selectors, and so on.
You can control how the EDB Postgres for Kubernetes cluster's instances should be
-scheduled through the [`affinity`](api_reference.md#AffinityConfiguration)
+scheduled through the [`affinity`](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-AffinityConfiguration)
section in the definition of the cluster, which supports:
- pod affinity/anti-affinity
@@ -61,7 +61,7 @@ metadata:
name: cluster-example
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:15.3
+ imageName: quay.io/enterprisedb/postgresql:16.0
affinity:
enablePodAntiAffinity: true #default value
diff --git a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
index ca9bba80492..8d7bc013078 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
@@ -166,7 +166,7 @@ Output :
version
--------------------------------------------------------------------------------------
------------------
-PostgreSQL 15.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
+PostgreSQL 16.0 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
8.3.1-5), 64-bit
(1 row)
```
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
index c65e8fd390b..7709366978a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
@@ -183,7 +183,7 @@ Cluster in healthy state
Name: cluster-example
Namespace: default
System ID: 7044925089871458324
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3-3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.0-3
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
@@ -259,7 +259,7 @@ kubectl describe cluster -n | grep "Image Name"
Output:
```shell
- Image Name: quay.io/enterprisedb/postgresql:15.3-3
+ Image Name: quay.io/enterprisedb/postgresql:16.0-3
```
!!! Note
@@ -547,6 +547,61 @@ allow-prometheus k8s.enterprisedb.io/cluster=cluster-example 47m
default-deny-ingress 57m
```
+## PostgreSQL core dumps
+
+Although rare, PostgreSQL can sometimes crash and generate a core dump
+in the `PGDATA` folder. When that happens, normally it is a bug in PostgreSQL
+(and most likely it has already been solved - this is why it is important
+to always run the latest minor version of PostgreSQL).
+
+EDB Postgres for Kubernetes allows you to control what to include in the core dump through
+the `k8s.enterprisedb.io/coredumpFilter` annotation.
+
+!!! Info
+ Please refer to ["Labels and annotations"](labels_annotations.md)
+ for more details on the standard annotations that EDB Postgres for Kubernetes provides.
+
+By default, the `k8s.enterprisedb.io/coredumpFilter` annotation is set to `0x31`
+to exclude shared memory segments from the dump, as this is the safest
+approach in most cases.
+
+!!! Info
+    Please refer to the
+    ["Core dump filtering settings" section of "The `/proc` Filesystem" page of the Linux Kernel documentation](https://docs.kernel.org/filesystems/proc.html#proc-pid-coredump-filter-core-dump-filtering-settings)
+    for more details on how to set the bitmask that controls the core dump filter.
+
+!!! Important
+ Beware that this setting only takes effect during Pod startup and that changing
+ the annotation doesn't trigger an automated rollout of the instances.
+
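+For reference, the following minimal sketch shows how the annotation could be
+set in the `metadata` section of a `Cluster` resource (the cluster name and
+the storage size are purely illustrative, while `0x31` matches the default
+described above):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    # Bitmask controlling what is included in the core dump
+    k8s.enterprisedb.io/coredumpFilter: "0x31"
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+```
+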
+Although you might not personally be involved in inspecting core dumps,
+you might be asked to provide them so that a Postgres expert can look
+into them. First, verify that you have a core dump in the `PGDATA`
+directory with the following command (please run it against the
+correct pod where the Postgres instance is running):
+
+```sh
+kubectl exec -ti POD -c postgres \
+ -- find /var/lib/postgresql/data/pgdata -name 'core.*'
+```
+
+Under normal circumstances, this command should return no results. Suppose,
+for example, that it finds a core dump file:
+
+```
+/var/lib/postgresql/data/pgdata/core.14177
+```
+
+Once you have verified that there is enough space on your local disk, you can
+copy the core dump to your machine with `kubectl cp` as follows:
+
+```sh
+kubectl cp POD:/var/lib/postgresql/data/pgdata/core.14177 core.14177
+```
+
+You now have the file on your machine. Make sure you then free the space on
+the server by removing the core dumps.
+
## Some common issues
### Storage is full
diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
new file mode 100644
index 00000000000..c3de7ae5f3a
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
@@ -0,0 +1,79 @@
+---
+title: 'WAL archiving'
+originalFilePath: 'src/wal_archiving.md'
+---
+
+WAL archiving is the process that feeds a [WAL archive](backup.md#wal-archive)
+in EDB Postgres for Kubernetes.
+
+!!! Important
+    EDB Postgres for Kubernetes currently only supports WAL archives on object stores. Such
+    WAL archives are used by both object store backups and volume snapshot backups.
+
+The WAL archive is defined in the `.spec.backup.barmanObjectStore` stanza of
+a `Cluster` resource. Follow the instructions in the
+["Backup on object stores" section](backup_barmanobjectstore.md) to set up
+the WAL archive.
+
+!!! Info
+ Please refer to [`BarmanObjectStoreConfiguration`](cloudnative-pg.v1.md#postgresql-k8s-enterprisedb-io-v1-barmanobjectstoreconfiguration)
+ in the API reference for a full list of options.
+
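+For reference only, a minimal WAL archive definition might look like the
+following sketch, which assumes an S3 bucket and a `backup-creds` secret
+holding the access credentials (bucket, path, and secret names are
+hypothetical):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  backup:
+    barmanObjectStore:
+      # Hypothetical bucket and folder
+      destinationPath: s3://BUCKET_NAME/path/to/folder
+      s3Credentials:
+        accessKeyId:
+          name: backup-creds
+          key: ACCESS_KEY_ID
+        secretAccessKey:
+          name: backup-creds
+          key: ACCESS_SECRET_KEY
+```
+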
+If required, you can choose to compress WAL files as soon as they
+are uploaded and/or encrypt them:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ wal:
+ compression: gzip
+ encryption: AES256
+```
+
+You can configure the encryption directly in your bucket, and the operator
+will use it unless you override it in the cluster configuration.
+
+PostgreSQL implements a sequential archiving scheme, where the
+`archive_command` is executed for one WAL segment at a time.
+
+!!! Important
+ By default, EDB Postgres for Kubernetes sets `archive_timeout` to `5min`, ensuring
+ that WAL files, even in case of low workloads, are closed and archived
+ at least every 5 minutes, providing a deterministic time-based value for
+    your Recovery Point Objective (RPO). Even though you can change the value
+ of the [`archive_timeout` setting in the PostgreSQL configuration](https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT),
+ our experience suggests that the default value set by the operator is
+ suitable for most use cases.
+
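+Should you need a different value, the following sketch shows where
+`archive_timeout` could be overridden in the cluster configuration (the
+`10min` value is purely an example):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  postgresql:
+    parameters:
+      # Example override of the operator's 5min default
+      archive_timeout: "10min"
+```
+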
+When the bandwidth between the PostgreSQL instance and the object
+store allows archiving more than one WAL file in parallel, you
+can use the parallel WAL archiving feature of the instance manager
+as in the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ [...]
+ wal:
+ compression: gzip
+ maxParallel: 8
+ encryption: AES256
+```
+
+In the previous example, the instance manager optimizes the WAL
+archiving process by archiving in parallel at most eight ready
+WALs, including the one requested by PostgreSQL.
+
+When PostgreSQL requests the archiving of a WAL that the instance
+manager has already archived as an optimization, that archival
+request is simply dismissed with a positive status.
\ No newline at end of file
diff --git a/scripts/fileProcessor/package-lock.json b/scripts/fileProcessor/package-lock.json
index 29e5152b1ba..9ee4123f70e 100644
--- a/scripts/fileProcessor/package-lock.json
+++ b/scripts/fileProcessor/package-lock.json
@@ -2401,7 +2401,7 @@
"parse-entities": "^2.0.0",
"repeat-string": "^1.5.4",
"state-toggle": "^1.0.0",
- "trim": ">=0.0.3",
+ "trim": "0.0.1",
"trim-trailing-lines": "^1.0.0",
"unherit": "^1.0.4",
"unist-util-remove-position": "^2.0.0",
@@ -2528,8 +2528,7 @@
}
},
"trim": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz",
+ "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz",
"integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w=="
},
"trim-trailing-lines": {
diff --git a/scripts/source/package-lock.json b/scripts/source/package-lock.json
index 43917129155..7f4b8ebd71e 100644
--- a/scripts/source/package-lock.json
+++ b/scripts/source/package-lock.json
@@ -3200,7 +3200,7 @@
"parse-entities": "^2.0.0",
"repeat-string": "^1.5.4",
"state-toggle": "^1.0.0",
- "trim": ">=0.0.3",
+ "trim": "0.0.1",
"trim-trailing-lines": "^1.0.0",
"unherit": "^1.0.4",
"unist-util-remove-position": "^2.0.0",
@@ -3348,8 +3348,7 @@
"integrity": "sha512-N3WMsuqV66lT30CrXNbEjx4GEwlow3v6rr4mCcv6prnfwhS01rkgyFdjPNBYd9br7LpXV1+Emh01fHnq2Gdgrw=="
},
"trim": {
- "version": "1.0.1",
- "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz",
+ "version": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz",
"integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w=="
},
"trim-trailing-lines": {