diff --git a/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx b/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
index deb0cecaec6..29420b58970 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/backup_barmanobjectstore.mdx
@@ -99,9 +99,9 @@ algorithms via `barman-cloud-backup` (for backups) and
- snappy
The compression settings for backups and WALs are independent. See the
-[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#BarmanObjectStoreConfiguration) and
+[DataBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#DataBackupConfiguration) and
[WALBackupConfiguration](https://pkg.go.dev/github.com/cloudnative-pg/barman-cloud/pkg/api#WalBackupConfiguration) sections in
-the API reference.
+the barman-cloud API reference.
It is important to note that archival time, restore time, and size change
between the algorithms, so the compression algorithm should be chosen according
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
index 650b6c544b3..10b4c70cc0e 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -32,7 +32,7 @@ For more detailed information about this feature, please refer to the
EDB Postgres for Kubernetes requires both the `postgres` user and database to
always exist. Using the local Unix Domain Socket, it needs to connect
as `postgres` user to the `postgres` database via `peer` authentication in
- order to perform administrative tasks on the cluster.
+ order to perform administrative tasks on the cluster.
**DO NOT DELETE** the `postgres` user or the `postgres` database!!!
!!! Info
@@ -212,36 +212,87 @@ The user that owns the database defaults to the database name instead.
The application user is not used internally by the operator, which instead
relies on the superuser to reconcile the cluster with the desired status.
-### Passing options to `initdb`
+### Passing Options to `initdb`
-The actual PostgreSQL data directory is created via an invocation of the
-`initdb` PostgreSQL command. If you need to add custom options to that command
-(i.e., to change the `locale` used for the template databases or to add data
-checksums), you can use the following parameters:
+The PostgreSQL data directory is initialized using the
+[`initdb` PostgreSQL command](https://www.postgresql.org/docs/current/app-initdb.html).
+
+EDB Postgres for Kubernetes enables you to customize the behavior of `initdb` to modify
+settings such as default locale configurations and data checksums.
+
+!!! Warning
+ EDB Postgres for Kubernetes acts only as a direct proxy to `initdb` for locale-related
+ options, due to the ongoing and significant enhancements in PostgreSQL's locale
+ support. It is your responsibility to ensure that the correct options are
+ provided, following the PostgreSQL documentation, and to verify that the
+ bootstrap process completes successfully.
+
+To include custom options in the `initdb` command, you can use the following
+parameters:
+
+builtinLocale
+: When `builtinLocale` is set to a value, EDB Postgres for Kubernetes passes it to the
+ `--builtin-locale` option in `initdb`. This option controls the builtin locale, as
+ defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: empty). Note that this option requires
+ `localeProvider` to be set to `builtin`. Available from PostgreSQL 17.
dataChecksums
-: When `dataChecksums` is set to `true`, CNP invokes the `-k` option in
+: When `dataChecksums` is set to `true`, EDB Postgres for Kubernetes invokes the `-k` option in
`initdb` to enable checksums on data pages and help detect corruption by the
I/O system - that would otherwise be silent (default: `false`).
encoding
-: When `encoding` set to a value, CNP passes it to the `--encoding` option in `initdb`,
- which selects the encoding of the template database (default: `UTF8`).
+: When `encoding` is set to a value, EDB Postgres for Kubernetes passes it to the `--encoding`
+ option in `initdb`, which selects the encoding of the template database
+ (default: `UTF8`).
+
+icuLocale
+: When `icuLocale` is set to a value, EDB Postgres for Kubernetes passes it to the
+ `--icu-locale` option in `initdb`. This option controls the ICU locale, as
+ defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
+ from the PostgreSQL documentation (default: empty).
+ Note that this option requires `localeProvider` to be set to `icu`.
+ Available from PostgreSQL 15.
+
+icuRules
+: When `icuRules` is set to a value, EDB Postgres for Kubernetes passes it to the
+ `--icu-rules` option in `initdb`. This option controls the ICU collation rules, as
+ defined in ["Locale
+ Support"](https://www.postgresql.org/docs/current/locale.html) from the
+ PostgreSQL documentation (default: empty). Note that this option requires
+ `localeProvider` to be set to `icu`. Available from PostgreSQL 16.
+
+locale
+: When `locale` is set to a value, EDB Postgres for Kubernetes passes it to the `--locale`
+ option in `initdb`. This option controls the locale, as defined in
+ ["Locale Support"](https://www.postgresql.org/docs/current/locale.html) from
+ the PostgreSQL documentation. By default, the locale parameter is empty. In
+ this case, environment variables such as `LANG` are used to determine the
+ locale. Be aware that these variables can vary between container images,
+ potentially leading to inconsistent behavior.
localeCollate
-: When `localeCollate` is set to a value, CNP passes it to the `--lc-collate`
+: When `localeCollate` is set to a value, EDB Postgres for Kubernetes passes it to the `--lc-collate`
option in `initdb`. This option controls the collation order (`LC_COLLATE`
subcategory), as defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
from the PostgreSQL documentation (default: `C`).
localeCType
-: When `localeCType` is set to a value, CNP passes it to the `--lc-ctype` option in
+: When `localeCType` is set to a value, EDB Postgres for Kubernetes passes it to the `--lc-ctype` option in
  `initdb`. This option controls the character classification (`LC_CTYPE` subcategory), as
defined in ["Locale Support"](https://www.postgresql.org/docs/current/locale.html)
from the PostgreSQL documentation (default: `C`).
+localeProvider
+: When `localeProvider` is set to a value, EDB Postgres for Kubernetes passes it to the `--locale-provider`
+ option in `initdb`. This option controls the locale provider, as defined in
+ ["Locale Support"](https://www.postgresql.org/docs/current/locale.html) from the
+ PostgreSQL documentation (default: empty, which means `libc` for PostgreSQL).
+ Available from PostgreSQL 15.
+
walSegmentSize
-: When `walSegmentSize` is set to a value, CNP passes it to the `--wal-segsize`
+: When `walSegmentSize` is set to a value, EDB Postgres for Kubernetes passes it to the `--wal-segsize`
option in `initdb` (default: not set - defined by PostgreSQL as 16 megabytes).
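+
+For illustration, here is a minimal `Cluster` sketch that combines a few of
+these options under `bootstrap.initdb`. The cluster name and the chosen
+locale, checksum, and WAL segment values are placeholders, not
+recommendations:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-custom-initdb
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+  bootstrap:
+    initdb:
+      database: app
+      owner: app
+      dataChecksums: true
+      encoding: 'UTF8'
+      localeCollate: 'en_US.utf8'
+      localeCType: 'en_US.utf8'
+      walSegmentSize: 64
+```
+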
!!! Note
@@ -430,44 +481,59 @@ to the ["Recovery" section](recovery.md).
### Bootstrap from a live cluster (`pg_basebackup`)
-The `pg_basebackup` bootstrap mode lets you create a new cluster (*target*) as
-an exact physical copy of an existing and **binary compatible** PostgreSQL
-instance (*source*), through a valid *streaming replication* connection.
-The source instance can be either a primary or a standby PostgreSQL server.
+The `pg_basebackup` bootstrap mode allows you to create a new cluster
+(*target*) as an exact physical copy of an existing and **binary-compatible**
+PostgreSQL instance (*source*) managed by EDB Postgres for Kubernetes, using a valid
+*streaming replication* connection. The source instance can either be a primary
+or a standby PostgreSQL server. It’s crucial to thoroughly review the
+requirements section below, as the pros and cons of PostgreSQL physical
+replication fully apply.
-The primary use case for this method is represented by **migrations** to EDB Postgres for Kubernetes,
-either from outside Kubernetes or within Kubernetes (e.g., from another operator).
+The primary use cases for this method include:
-!!! Warning
- The current implementation creates a *snapshot* of the origin PostgreSQL
- instance when the cloning process terminates and immediately starts
- the created cluster. See ["Current limitations"](#current-limitations) below for details.
+- Reporting and business intelligence clusters that need to be regenerated
+ periodically (daily, weekly)
+- Test databases containing live data that require periodic regeneration
+ (daily, weekly, monthly) and anonymization
+- Rapid spin-up of a standalone replica cluster
+- Physical migrations of EDB Postgres for Kubernetes clusters to different namespaces or
+ Kubernetes clusters
-Similar to the case of the `recovery` bootstrap method, once the clone operation
-completes, the operator will take ownership of the target cluster, starting from
-the first instance. This includes overriding some configuration parameters, as
-required by EDB Postgres for Kubernetes, resetting the superuser password, creating
-the `streaming_replica` user, managing the replicas, and so on. The resulting
-cluster will be completely independent of the source instance.
+!!! Important
+ Avoid using this method, based on physical replication, to migrate an
+ existing PostgreSQL cluster outside of Kubernetes into EDB Postgres for Kubernetes unless you
+ are completely certain that all requirements are met and the operation has been
+ thoroughly tested. The EDB Postgres for Kubernetes community does not endorse this approach
+ for such use cases and recommends using logical import instead. It is
+ exceedingly rare that all requirements for physical replication are met in a
+ way that seamlessly works with EDB Postgres for Kubernetes.
+
+!!! Warning
+ In its current implementation, this method clones the source PostgreSQL
+ instance, thereby creating a *snapshot*. Once the cloning process has finished,
+ the new cluster is immediately started.
+ Refer to ["Current limitations"](#current-limitations) for more details.
+
+Similar to the `recovery` bootstrap method, once the cloning operation is
+complete, the operator takes full ownership of the target cluster, starting
+from the first instance. This includes overriding certain configuration
+parameters as required by EDB Postgres for Kubernetes, resetting the superuser password,
+creating the `streaming_replica` user, managing replicas, and more. The
+resulting cluster operates independently from the source instance.
!!! Important
- Configuring the network between the target instance and the source instance
- goes beyond the scope of EDB Postgres for Kubernetes documentation, as it depends
- on the actual context and environment.
+ Configuring the network connection between the target and source instances
+ lies outside the scope of EDB Postgres for Kubernetes documentation, as it depends heavily on
+ the specific context and environment.
-The streaming replication client on the target instance, which will be
-transparently managed by `pg_basebackup`, can authenticate itself on the source
-instance in any of the following ways:
+The streaming replication client on the target instance, managed transparently
+by `pg_basebackup`, can authenticate on the source instance using one of the
+following methods:
-1. via [username/password](#usernamepassword-authentication)
-2. via [TLS client certificate](#tls-certificate-authentication)
+1. [Username/password](#usernamepassword-authentication)
+2. [TLS client certificate](#tls-certificate-authentication)
-The latter is the recommended one if you connect to a source managed
-by EDB Postgres for Kubernetes or configured for TLS authentication.
-The first option is, however, the most common form of authentication to a
-PostgreSQL server in general, and might be the easiest way if the source
-instance is on a traditional environment outside Kubernetes.
-Both cases are explained below.
+Both authentication methods are detailed below.
#### Requirements
@@ -545,7 +611,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```
-The following manifest creates a new PostgreSQL 17.0 cluster,
+The following manifest creates a new PostgreSQL 17.2 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -560,7 +626,7 @@ metadata:
name: target-db
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:17.0
+ imageName: quay.io/enterprisedb/postgresql:17.2
bootstrap:
pg_basebackup:
@@ -580,7 +646,7 @@ spec:
```
All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 17.0).
+the same PostgreSQL version (in our case 17.2).
#### TLS certificate authentication
@@ -595,7 +661,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.
-The manifest defines a new PostgreSQL 17.0 cluster called `cluster-clone-tls`,
+The manifest defines a new PostgreSQL 17.2 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -610,7 +676,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:17.0
+ imageName: quay.io/enterprisedb/postgresql:17.2
bootstrap:
pg_basebackup:
@@ -691,7 +757,7 @@ instance using a second connection (see the `--wal-method=stream` option for
Once the backup is completed, the new instance will be started on a new timeline
and diverge from the source.
For this reason, it is advised to stop all write operations to the source database
-before migrating to the target database in Kubernetes.
+before migrating to the target database.
!!! Important
Before you attempt a migration, you must test both the procedure
diff --git a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
index 829e682b172..e19687881f4 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
@@ -132,14 +132,14 @@ Given the following files:
Create a secret containing the CA certificate:
-```
+```sh
kubectl create secret generic my-postgresql-server-ca \
--from-file=ca.crt=./server-ca.crt
```
Create a secret with the TLS certificate:
-```
+```sh
kubectl create secret tls my-postgresql-server \
--cert=./server.crt --key=./server.key
```
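+
+Assuming the secret names above, a `Cluster` can then reference them through
+its `certificates` section. This is a minimal sketch; the cluster name is a
+placeholder:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  certificates:
+    serverCASecret: my-postgresql-server-ca
+    serverTLSSecret: my-postgresql-server
+  storage:
+    size: 1Gi
+```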
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
index 3943ef00288..2e52773f714 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
@@ -335,13 +335,17 @@ are the ones directly set by PgBouncer.
- [`application_name_add_host`](https://www.pgbouncer.org/config.html#application_name_add_host)
- [`autodb_idle_timeout`](https://www.pgbouncer.org/config.html#autodb_idle_timeout)
+- [`cancel_wait_timeout`](https://www.pgbouncer.org/config.html#cancel_wait_timeout)
- [`client_idle_timeout`](https://www.pgbouncer.org/config.html#client_idle_timeout)
- [`client_login_timeout`](https://www.pgbouncer.org/config.html#client_login_timeout)
- [`default_pool_size`](https://www.pgbouncer.org/config.html#default_pool_size)
- [`disable_pqexec`](https://www.pgbouncer.org/config.html#disable_pqexec)
+- [`dns_max_ttl`](https://www.pgbouncer.org/config.html#dns_max_ttl)
+- [`dns_nxdomain_ttl`](https://www.pgbouncer.org/config.html#dns_nxdomain_ttl)
- [`idle_transaction_timeout`](https://www.pgbouncer.org/config.html#idle_transaction_timeout)
- [`ignore_startup_parameters`](https://www.pgbouncer.org/config.html#ignore_startup_parameters):
- to be appended to `extra_float_digits,options` - required by CNP
+ to be appended to `extra_float_digits,options` - required by EDB Postgres for Kubernetes
+- [`listen_backlog`](https://www.pgbouncer.org/config.html#listen_backlog)
- [`log_connections`](https://www.pgbouncer.org/config.html#log_connections)
- [`log_disconnections`](https://www.pgbouncer.org/config.html#log_disconnections)
- [`log_pooler_errors`](https://www.pgbouncer.org/config.html#log_pooler_errors)
@@ -350,13 +354,16 @@ are the ones directly set by PgBouncer.
export as described in the ["Monitoring"](#monitoring) section below
- [`max_client_conn`](https://www.pgbouncer.org/config.html#max_client_conn)
- [`max_db_connections`](https://www.pgbouncer.org/config.html#max_db_connections)
+- [`max_packet_size`](https://www.pgbouncer.org/config.html#max_packet_size)
- [`max_prepared_statements`](https://www.pgbouncer.org/config.html#max_prepared_statements)
- [`max_user_connections`](https://www.pgbouncer.org/config.html#max_user_connections)
- [`min_pool_size`](https://www.pgbouncer.org/config.html#min_pool_size)
+- [`pkt_buf`](https://www.pgbouncer.org/config.html#pkt_buf)
- [`query_timeout`](https://www.pgbouncer.org/config.html#query_timeout)
- [`query_wait_timeout`](https://www.pgbouncer.org/config.html#query_wait_timeout)
- [`reserve_pool_size`](https://www.pgbouncer.org/config.html#reserve_pool_size)
- [`reserve_pool_timeout`](https://www.pgbouncer.org/config.html#reserve_pool_timeout)
+- [`sbuf_loopcnt`](https://www.pgbouncer.org/config.html#sbuf_loopcnt)
- [`server_check_delay`](https://www.pgbouncer.org/config.html#server_check_delay)
- [`server_check_query`](https://www.pgbouncer.org/config.html#server_check_query)
- [`server_connect_timeout`](https://www.pgbouncer.org/config.html#server_connect_timeout)
@@ -367,12 +374,18 @@ are the ones directly set by PgBouncer.
- [`server_reset_query`](https://www.pgbouncer.org/config.html#server_reset_query)
- [`server_reset_query_always`](https://www.pgbouncer.org/config.html#server_reset_query_always)
- [`server_round_robin`](https://www.pgbouncer.org/config.html#server_round_robin)
+- [`server_tls_ciphers`](https://www.pgbouncer.org/config.html#server_tls_ciphers)
+- [`server_tls_protocols`](https://www.pgbouncer.org/config.html#server_tls_protocols)
- [`stats_period`](https://www.pgbouncer.org/config.html#stats_period)
+- [`suspend_timeout`](https://www.pgbouncer.org/config.html#suspend_timeout)
+- [`tcp_defer_accept`](https://www.pgbouncer.org/config.html#tcp_defer_accept)
- [`tcp_keepalive`](https://www.pgbouncer.org/config.html#tcp_keepalive)
- [`tcp_keepcnt`](https://www.pgbouncer.org/config.html#tcp_keepcnt)
- [`tcp_keepidle`](https://www.pgbouncer.org/config.html#tcp_keepidle)
- [`tcp_keepintvl`](https://www.pgbouncer.org/config.html#tcp_keepintvl)
- [`tcp_user_timeout`](https://www.pgbouncer.org/config.html#tcp_user_timeout)
+- [`tcp_socket_buffer`](https://www.pgbouncer.org/config.html#tcp_socket_buffer)
+- [`track_extra_parameters`](https://www.pgbouncer.org/config.html#track_extra_parameters)
- [`verbose`](https://www.pgbouncer.org/config.html#verbose)
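+
+As an illustration, here is a minimal `Pooler` sketch that overrides a couple
+of the parameters above. The pooler and cluster names, and the chosen values,
+are placeholders:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+  name: pooler-example-rw
+spec:
+  cluster:
+    name: cluster-example
+  instances: 3
+  type: rw
+  pgbouncer:
+    poolMode: session
+    parameters:
+      max_client_conn: "1000"
+      default_pool_size: "10"
+```
+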
Customizations of the PgBouncer configuration are written declaratively in the
diff --git a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
index 2dc53155226..d203baffb82 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/database_import.mdx
@@ -76,7 +76,7 @@ performed in 4 steps:
- `initdb` bootstrap of the new cluster
- export of the selected database (in `initdb.import.databases`) using
- `pg_dump -Fc`
+ `pg_dump -Fd`
- import of the database using `pg_restore --no-acl --no-owner` into the
`initdb.database` (application database) owned by the `initdb.owner` user
- cleanup of the database dump file
@@ -148,7 +148,7 @@ There are a few things you need to be aware of when using the `microservice` typ
`externalCluster` during the operation
- Connection to the source database must be granted with the specified user
that needs to run `pg_dump` and read roles information (*superuser* is OK)
-- Currently, the `pg_dump -Fc` result is stored temporarily inside the `dumps`
+- Currently, the `pg_dump -Fd` result is stored temporarily inside the `dumps`
folder in the `PGDATA` volume, so there should be enough available space to
temporarily contain the dump result on the assigned node, as well as the
restored data and indexes. Once the import operation is completed, this
@@ -165,7 +165,7 @@ The operation is performed in the following steps:
- `initdb` bootstrap of the new cluster
- export and import of the selected roles
- export of the selected databases (in `initdb.import.databases`), one at a time,
- using `pg_dump -Fc`
+ using `pg_dump -Fd`
- create each of the selected databases and import data using `pg_restore`
- run `ANALYZE` on each imported database
- cleanup of the database dump files
@@ -225,7 +225,7 @@ There are a few things you need to be aware of when using the `monolith` type:
- Connection to the source database must be granted with the specified user
that needs to run `pg_dump` and retrieve roles information (*superuser* is
OK)
-- Currently, the `pg_dump -Fc` result is stored temporarily inside the `dumps`
+- Currently, the `pg_dump -Fd` result is stored temporarily inside the `dumps`
folder in the `PGDATA` volume, so there should be enough available space to
temporarily contain the dump result on the assigned node, as well as the
restored data and indexes. Once the import operation is completed, this
@@ -270,3 +270,51 @@ topic is beyond the scope of EDB Postgres for Kubernetes, we recommend that you
unnecessary writes in the checkpoint area by tuning Postgres GUCs like
`shared_buffers`, `max_wal_size`, `checkpoint_timeout` directly in the
`Cluster` configuration.
+
+## Customizing `pg_dump` and `pg_restore` Behavior
+
+You can customize the behavior of `pg_dump` and `pg_restore` by specifying
+additional options using the `pgDumpExtraOptions` and `pgRestoreExtraOptions`
+parameters. For instance, you can enable parallel jobs to speed up data
+import/export processes, as shown in the following example:
+
+```yaml
+  # Within the Cluster spec: run pg_dump and pg_restore with parallel jobs
+  bootstrap:
+    initdb:
+      import:
+        type: microservice
+        databases:
+          - app
+        source:
+          externalCluster: cluster-example
+        pgDumpExtraOptions:
+          - '--jobs=2'
+        pgRestoreExtraOptions:
+          - '--jobs=2'
+```
+
+| Command | Permissions |
+|---------|-------------|
+| backup | clusters: get<br/>backups: create |
+| certificate | clusters: get<br/>secrets: get,create |
+| destroy | pods: get,delete<br/>jobs: delete,list<br/>PVCs: list,delete,update |
+| fencing | clusters: get,patch<br/>pods: get |
+| fio | PVCs: create<br/>configmaps: create<br/>deployment: create |
+| hibernate | clusters: get,patch,delete<br/>pods: list,get,delete<br/>pods/exec: create<br/>jobs: list<br/>PVCs: get,list,update,patch,delete |
+| install | none |
+| logs | clusters: get<br/>pods: list<br/>pods/log: get |
+| maintenance | clusters: get,patch,list |
+| pgadmin4 | clusters: get<br/>configmaps: create<br/>deployments: create<br/>services: create<br/>secrets: create |
+| pgbench | clusters: get<br/>jobs: create |
+| promote | clusters: get<br/>clusters/status: patch<br/>pods: get |
+| psql | pods: get,list<br/>pods/exec: create |
+| publication | clusters: get<br/>pods: get,list<br/>pods/exec: create |
+| reload | clusters: get,patch |
+| report cluster | clusters: get<br/>pods: list<br/>pods/log: get<br/>jobs: list<br/>events: list<br/>PVCs: list |
+| report operator | configmaps: get<br/>deployments: get<br/>events: list<br/>pods: list<br/>pods/log: get<br/>secrets: get<br/>services: get<br/>mutatingwebhookconfigurations: list[^1]<br/>validatingwebhookconfigurations: list[^1]<br/>If OLM is present on the K8s cluster, also:<br/>clusterserviceversions: list<br/>installplans: list<br/>subscriptions: list |
+| restart | clusters: get,patch<br/>pods: get,delete |
+| status | clusters: get<br/>pods: list<br/>pods/exec: create<br/>pods/proxy: create<br/>PDBs: list |
+| subscription | clusters: get<br/>pods: get,list<br/>pods/exec: create |
+| version | none |
+
+[^1]: The permissions are cluster scoped and must be granted through a `ClusterRole`.
+
+///Footnotes Go Here///
+
+Additionally, assigning the `list` permission on the `clusters` resource enables
+autocompletion for multiple commands.
+
+### Role examples
+
+It is possible to create roles with restricted permissions.
+The following example creates a role that only has access to the cluster logs:
+
+```yaml
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: cnp-log
+rules:
+ - verbs:
+ - get
+ apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ - verbs:
+ - list
+ apiGroups:
+ - ''
+ resources:
+ - pods
+ - verbs:
+ - get
+ apiGroups:
+ - ''
+ resources:
+ - pods/log
+```
+
+The next example shows a role with the minimal permissions required to get
+the cluster status using the plugin's `status` command:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: cnp-status
+rules:
+ - verbs:
+ - get
+ apiGroups:
+ - postgresql.k8s.enterprisedb.io
+ resources:
+ - clusters
+ - verbs:
+ - list
+ apiGroups:
+ - ''
+ resources:
+ - pods
+ - verbs:
+ - create
+ apiGroups:
+ - ''
+ resources:
+ - pods/exec
+ - verbs:
+ - create
+ apiGroups:
+ - ''
+ resources:
+ - pods/proxy
+ - verbs:
+ - list
+ apiGroups:
+ - policy
+ resources:
+ - poddisruptionbudgets
+```
+
+!!! Important
+ Keeping the verbs restricted per `resources` and per `apiGroups` helps to
+ prevent inadvertently granting more than intended permissions.
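+
+To grant one of these roles to a user or service account, bind it with a
+standard `RoleBinding`. In the following sketch, the `log-reader` service
+account and the `default` namespace are placeholders:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: cnp-log-binding
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: cnp-log
+subjects:
+  - kind: ServiceAccount
+    name: log-reader
+    namespace: default
+```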
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
index cf7c1cef737..d397620c9ea 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
@@ -198,7 +198,7 @@ These predefined annotations are managed by EDB Postgres for Kubernetes.
risk.
`k8s.enterprisedb.io/skipWalArchiving`
-: When set to `true` on a `Cluster` resource, the operator disables WAL archiving.
+: When set to `enabled` on a `Cluster` resource, the operator disables WAL archiving.
This will set `archive_mode` to `off` and require a restart of all PostgreSQL
instances. Use at your own risk.
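+
+For example, a `Cluster` carrying this annotation could look like the
+following sketch (the cluster name is a placeholder):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+  annotations:
+    k8s.enterprisedb.io/skipWalArchiving: "enabled"
+spec:
+  instances: 3
+  storage:
+    size: 1Gi
+```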
diff --git a/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx b/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
index ef6c3994c98..e0c378f213a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/license_keys.mdx
@@ -3,11 +3,11 @@ title: 'License and License keys'
originalFilePath: 'src/license_keys.md'
---
-A license key is always required for the operator to work.
+License keys are a legacy management mechanism for EDB Postgres for Kubernetes. You do not need a license key if you have installed using an EDB subscription token, and in this case, the licensing commands in this section can be ignored.
-The only exception is when you run the operator with Community PostgreSQL:
-in this case, if the license key is unset, a cluster will be started with the default
-trial license - which automatically expires after 30 days.
+If you are not using an EDB subscription token and installing from public repositories, then you will need a license key. The only exception is when you run the operator with Community PostgreSQL: in this case, if the license key is unset, a cluster will be started with the default trial license - which automatically expires after 30 days. This is not the recommended way of trialing EDB Postgres for Kubernetes - see the [installation guide](installation_upgrade.md) for the recommended options.
+
+The following documentation is only for users who have installed the operator using a license key.
## Company level license keys
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx
index ace768a7e17..6a2cdd3a3f1 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx
@@ -39,10 +39,10 @@ You can configure the log level for the instance pods in the cluster
specification using the `logLevel` option. Available log levels are: `error`,
`warning`, `info` (default), `debug`, and `trace`.
-!!!Important
- Currently, the log level can only be set at the time the instance starts.
- Changes to the log level in the cluster specification after the cluster has
- started will only apply to new pods, not existing ones.
+!!! Important
+ Currently, the log level can only be set at the time the instance starts.
+ Changes to the log level in the cluster specification after the cluster has
+ started will only apply to new pods, not existing ones.
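+
+For instance, the following sketch raises the log level of a cluster's
+instance pods to `debug` (the cluster name is a placeholder):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  logLevel: debug
+  storage:
+    size: 1Gi
+```
+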
## Operator Logs
@@ -303,6 +303,6 @@ the `logger` field indicating the process that produced them. The possible
- `wal-restore`: logs from the `wal-restore` subcommand of the instance manager
- `instance-manager`: from the [PostgreSQL instance manager](./instance_manager.md)
-With the exception of `postgres` and `edb_audit`, which follows a specific structure, all other
-`logger` values contain the `msg` field with the escaped message that is
+With the exception of `postgres` and `edb_audit`, which follow a specific structure,
+all other `logger` values contain the `msg` field with the escaped message that is
logged.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
new file mode 100644
index 00000000000..14ce50caae0
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/logical_replication.mdx
@@ -0,0 +1,447 @@
+---
+title: 'Logical Replication'
+originalFilePath: 'src/logical_replication.md'
+---
+
+PostgreSQL extends its replication capabilities beyond physical replication,
+which operates at the level of exact block addresses and byte-by-byte copying,
+by offering [logical replication](https://www.postgresql.org/docs/current/logical-replication.html).
+Logical replication replicates data objects and their changes based on a
+defined replication identity, typically the primary key.
+
+Logical replication uses a publish-and-subscribe model, where subscribers
+connect to publications on a publisher node. Subscribers pull data changes from
+these publications and can re-publish them, enabling cascading replication and
+complex topologies.
+
+This flexible model is particularly useful for:
+
+- Online data migrations
+- Live PostgreSQL version upgrades
+- Data distribution across systems
+- Real-time analytics
+- Integration with external applications
+
+!!! Info
+ For more details, examples, and limitations, please refer to the
+ [official PostgreSQL documentation on Logical Replication](https://www.postgresql.org/docs/current/logical-replication.html).
+
+**EDB Postgres for Kubernetes** enhances this capability by providing declarative support for
+key PostgreSQL logical replication objects:
+
+- **Publications** via the `Publication` resource
+- **Subscriptions** via the `Subscription` resource
+
+## Publications
+
+In PostgreSQL's publish-and-subscribe replication model, a
+[**publication**](https://www.postgresql.org/docs/current/logical-replication-publication.html)
+is the source of data changes. It acts as a logical container for the change
+sets (also known as *replication sets*) generated from one or more tables within
+a database. Publications can be defined on any PostgreSQL 10+ instance acting
+as the *publisher*, including instances managed by popular DBaaS solutions in the
+public cloud. Each publication is tied to a single database and provides
+fine-grained control over which tables and changes are replicated.
+
+For publishers outside Kubernetes, you can [create publications using SQL](https://www.postgresql.org/docs/current/sql-createpublication.html)
+or leverage the [`cnp publication create` plugin command](kubectl-plugin.md#logical-replication-publications).
+
+When managing `Cluster` objects with **EDB Postgres for Kubernetes**, PostgreSQL publications
+can be defined declaratively through the `Publication` resource.
+
+!!! Info
+ Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-Publication)
+ for the full list of attributes you can define for each `Publication` object.
+
+Suppose you have a cluster named `freddie` and want to replicate all tables in
+the `app` database. Here's a `Publication` manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+```
+
+In the above example:
+
+- The publication object is named `freddie-publisher` (`metadata.name`).
+- The publication is created via the primary of the `freddie` cluster
+ (`spec.cluster.name`) with name `publisher` (`spec.name`).
+- It includes all tables (`spec.target.allTables: true`) from the `app`
+ database (`spec.dbname`).
+
+!!! Important
+ While `allTables` simplifies configuration, PostgreSQL offers fine-grained
+ control for replicating specific tables or targeted data changes. For advanced
+ configurations, consult the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication.html).
+ Additionally, refer to the [EDB Postgres for Kubernetes API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-PublicationTarget)
+ for details on declaratively customizing replication targets.
+
+### Required Fields in the `Publication` Manifest
+
+The following fields are required for a `Publication` object:
+
+- `metadata.name`: Unique name for the Kubernetes `Publication` object.
+- `spec.cluster.name`: Name of the PostgreSQL cluster.
+- `spec.dbname`: Database name where the publication is created.
+- `spec.name`: Publication name in PostgreSQL.
+- `spec.target`: Specifies the tables or changes to include in the publication.
+
+The `Publication` object must reference a specific `Cluster`, determining where
+the publication will be created. It is managed by the cluster's primary instance,
+ensuring the publication is created or updated as needed.
+
+### Reconciliation and Status
+
+After creating a `Publication`, EDB Postgres for Kubernetes manages it on the primary
+instance of the specified cluster. Following a successful reconciliation cycle,
+the `Publication` status will reflect the following:
+
+- `applied: true` indicates that the configuration has been successfully applied.
+- `observedGeneration` matches `metadata.generation`, confirming the applied
+ configuration corresponds to the most recent changes.
+
+If an error occurs during reconciliation, `status.applied` will be `false`, and
+an error message will be included in the `status.message` field.
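+
+For example, assuming the `freddie-publisher` object defined above, you can
+check the reconciliation outcome with `kubectl` (a sketch; the output depends
+on the actual state of the object):
+
+```console
+kubectl get publications.postgresql.k8s.enterprisedb.io freddie-publisher \
+  -o jsonpath='{.status.applied}'
+```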
+
+### Removing a publication
+
+The `publicationReclaimPolicy` field controls the behavior when deleting a
+`Publication` object:
+
+- `retain` (default): Leaves the publication in PostgreSQL for manual
+ management.
+- `delete`: Automatically removes the publication from PostgreSQL.
+
+Consider the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+ publicationReclaimPolicy: delete
+```
+
+In this case, deleting the `Publication` object also removes the `publisher`
+publication from the `app` database of the `freddie` cluster.
+
+## Subscriptions
+
+In PostgreSQL's publish-and-subscribe replication model, a
+[**subscription**](https://www.postgresql.org/docs/current/logical-replication-subscription.html)
+represents the downstream component that consumes data changes.
+A subscription establishes the connection to a publisher's database and
+specifies the set of publications (one or more) it subscribes to. Subscriptions
+can be created on any supported PostgreSQL instance acting as the *subscriber*.
+
+!!! Important
+ Since schema definitions are not replicated, the subscriber must have the
+ corresponding tables already defined before data replication begins.
+
+EDB Postgres for Kubernetes simplifies subscription management by enabling you to define them
+declaratively using the `Subscription` resource.
+
+!!! Info
+ Please refer to the [API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-Subscription)
+ for the full list of attributes you can define for each `Subscription` object.
+
+Suppose you want to replicate changes from the `publisher` publication on the
+`app` database of the `freddie` cluster (*publisher*) to the `app` database of
+the `king` cluster (*subscriber*). Here's an example of a `Subscription`
+manifest:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+```
+
+In the above example:
+
+- The subscription object is named `freddie-to-king-subscription` (`metadata.name`).
+- The subscription is created in the `app` database (`spec.dbname`) of the
+ `king` cluster (`spec.cluster.name`), with name `subscriber` (`spec.name`).
+- It connects to the `publisher` publication in the external `freddie` cluster,
+ referenced by `spec.externalClusterName`.
+
+To facilitate this setup, the `freddie` external cluster must be defined in the
+`king` cluster's configuration. Below is an example excerpt showing how to
+define the external cluster in the `king` manifest:
+
+```yaml
+externalClusters:
+ - name: freddie
+ connectionParameters:
+ host: freddie-rw.default.svc
+ user: postgres
+ dbname: app
+```
+
+!!! Info
+ For more details on configuring the `externalClusters` section, see the
+ ["Bootstrap" section](bootstrap.md#the-externalclusters-section) of the
+ documentation.
+
+As you can see, a subscription can connect to any PostgreSQL database
+accessible over the network. This flexibility allows you to seamlessly migrate
+your data into Kubernetes with nearly zero downtime. It’s an excellent option
+for transitioning from various environments, including popular cloud-based
+Database-as-a-Service (DBaaS) platforms.
+
+### Required Fields in the `Subscription` Manifest
+
+The following fields are mandatory for defining a `Subscription` object:
+
+- `metadata.name`: A unique name for the Kubernetes `Subscription` object
+ within its namespace.
+- `spec.cluster.name`: The name of the PostgreSQL cluster where the
+ subscription will be created.
+- `spec.dbname`: The name of the database in which the subscription will be
+ created.
+- `spec.name`: The name of the subscription as it will appear in PostgreSQL.
+- `spec.externalClusterName`: The name of the external cluster, as defined in
+ the `spec.cluster.name` cluster's configuration. This references the
+ publisher database.
+- `spec.publicationName`: The name of the publication in the publisher database
+ to which the subscription will connect.
+
+The `Subscription` object must reference a specific `Cluster`, determining
+where the subscription will be managed. EDB Postgres for Kubernetes ensures that the
+subscription is created or updated on the primary instance of the specified
+cluster.
+
+### Reconciliation and Status
+
+After creating a `Subscription`, EDB Postgres for Kubernetes manages it on the primary
+instance of the specified cluster. Following a successful reconciliation cycle,
+the `Subscription` status will reflect the following:
+
+- `applied: true` indicates that the configuration has been successfully applied.
+- `observedGeneration` matches `metadata.generation`, confirming the applied
+ configuration corresponds to the most recent changes.
+
+If an error occurs during reconciliation, `status.applied` will be `false`, and
+an error message will be included in the `status.message` field.
+
+### Removing a subscription
+
+The `subscriptionReclaimPolicy` field controls the behavior when deleting a
+`Subscription` object:
+
+- `retain` (default): Leaves the subscription in PostgreSQL for manual
+ management.
+- `delete`: Automatically removes the subscription from PostgreSQL.
+
+Consider the following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+ subscriptionReclaimPolicy: delete
+```
+
+In this case, deleting the `Subscription` object also removes the `subscriber`
+subscription from the `app` database of the `king` cluster.
+
+## Limitations
+
+Logical replication in PostgreSQL has some inherent limitations, as outlined in
+the [official documentation](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+Notably, the following objects are not replicated:
+
+- **Database schema and DDL commands**
+- **Sequence data**
+- **Large objects**
+
+### Addressing Schema Replication
+
+The first limitation, related to schema replication, can be easily addressed
+using EDB Postgres for Kubernetes' capabilities. For instance, you can leverage the `import`
+bootstrap feature to copy the schema of the tables you need to replicate.
+Alternatively, you can manually create the schema as you would for any
+PostgreSQL database.
+
+### Handling Sequences
+
+While sequences are not automatically kept in sync through logical replication,
+EDB Postgres for Kubernetes provides a solution to be used in live migrations.
+You can use the [`cnp` plugin](kubectl-plugin.md#synchronizing-sequences)
+to synchronize sequence values, ensuring consistency between the publisher and
+subscriber databases.
+
+## Example of live migration and major Postgres upgrade with logical replication
+
+To highlight the powerful capabilities of logical replication, this example
+demonstrates how to replicate data from a publisher database (`freddie`)
+running PostgreSQL 16 to a subscriber database (`king`) running the latest
+PostgreSQL version. This setup can be deployed in your Kubernetes cluster for
+evaluation and hands-on learning.
+
+This example illustrates how logical replication facilitates live migrations
+and upgrades between PostgreSQL versions while ensuring data consistency. By
+combining logical replication with EDB Postgres for Kubernetes, you can easily set up,
+manage, and evaluate such scenarios in a Kubernetes environment.
+
+### Step 1: Setting Up the Publisher (`freddie`)
+
+The first step involves creating a `freddie` PostgreSQL cluster with version 16.
+The cluster contains a single instance and includes an `app` database
+initialized with a table, `n`, storing 10,000 numbers. A logical replication
+publication named `publisher` is also configured to include all tables in the
+database.
+
+Here’s the manifest for setting up the `freddie` cluster and its publication
+resource:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: freddie
+spec:
+ instances: 1
+
+ imageName: quay.io/enterprisedb/postgresql:16
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ postInitApplicationSQL:
+ - CREATE TABLE n (i SERIAL PRIMARY KEY, m INTEGER)
+ - INSERT INTO n (m) (SELECT generate_series(1, 10000))
+ - ALTER TABLE n OWNER TO app
+
+ managed:
+ roles:
+ - name: app
+ login: true
+ replication: true
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Publication
+metadata:
+ name: freddie-publisher
+spec:
+ cluster:
+ name: freddie
+ dbname: app
+ name: publisher
+ target:
+ allTables: true
+```
+
+### Step 2: Setting Up the Subscriber (`king`)
+
+Next, create the `king` PostgreSQL cluster, running the latest version of
+PostgreSQL. This cluster initializes by importing the schema from the `app`
+database on the `freddie` cluster using the external cluster configuration. A
+`Subscription` resource, `freddie-to-king-subscription`, is then configured to
+consume changes published by the `publisher` on `freddie`.
+
+Below is the manifest for setting up the `king` cluster and its subscription:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: king
+spec:
+ instances: 1
+
+ storage:
+ size: 1Gi
+
+ bootstrap:
+ initdb:
+ import:
+ type: microservice
+ schemaOnly: true
+ databases:
+ - app
+ source:
+ externalCluster: freddie
+
+ externalClusters:
+ - name: freddie
+ connectionParameters:
+ host: freddie-rw.default.svc
+ user: app
+ dbname: app
+ password:
+ name: freddie-app
+ key: password
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Subscription
+metadata:
+ name: freddie-to-king-subscription
+spec:
+ cluster:
+ name: king
+ dbname: app
+ name: subscriber
+ externalClusterName: freddie
+ publicationName: publisher
+```
+
+Once the `king` cluster is running, you can verify that the replication is
+working by connecting to the `app` database and counting the records in the `n`
+table. The following example uses the `psql` command provided by the `cnp`
+plugin for simplicity:
+
+```console
+kubectl cnp psql king -- app -qAt -c 'SELECT count(*) FROM n'
+10000
+```
+
+This command should return `10000`, confirming that the data from the `freddie`
+cluster has been successfully replicated to the `king` cluster.
+
+Using the `cnp` plugin, you can also synchronize existing sequences to ensure
+consistency between the publisher and subscriber. The example below
+demonstrates how to synchronize a sequence for the `king` cluster:
+
+```console
+kubectl cnp subscription sync-sequences king --subscription=subscriber
+SELECT setval('"public"."n_i_seq"', 10000);
+
+10000
+```
+
+This command updates the sequence `n_i_seq` in the `king` cluster to match the
+current value, ensuring it is in sync with the source database.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
index 73a3ff65112..4e0c2a1958a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
@@ -220,7 +220,7 @@ cnp_collector_up{cluster="cluster-example"} 1
# HELP cnp_collector_postgres_version Postgres version
# TYPE cnp_collector_postgres_version gauge
-cnp_collector_postgres_version{cluster="cluster-example",full="17.0"} 17.0
+cnp_collector_postgres_version{cluster="cluster-example",full="17.2"} 17.2
# HELP cnp_collector_last_failed_backup_timestamp The last failed backup as a unix timestamp
# TYPE cnp_collector_last_failed_backup_timestamp gauge
diff --git a/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx b/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
index abde40cb577..d74b8942080 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/object_stores.mdx
@@ -146,10 +146,31 @@ spec:
[...]
```
-!!! Important
- Suppose you configure an Object Storage provider which uses a certificate signed with a private CA,
- like when using OpenShift or MinIO via HTTPS. In that case, you need to set the option `endpointCA`
- referring to a secret containing the CA bundle so that Barman can verify the certificate correctly.
+### Using Object Storage with a private CA
+
+Suppose you configure an Object Storage provider which uses a certificate
+signed with a private CA, for example when using OpenShift or MinIO via HTTPS. In that case,
+you need to set the option `endpointCA` inside `barmanObjectStore` referring
+to a secret containing the CA bundle, so that Barman can verify the certificate
+correctly.
+You can find instructions on creating a secret using your cert files in the
+[certificates](certificates.md#example) document.
+Once you have created the secret, you can populate the `endpointCA` as in the
+following example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ [...]
+ backup:
+ barmanObjectStore:
+      endpointURL: <endpoint secured by the private CA>
+      endpointCA:
+        name: my-postgresql-server-ca
+        key: ca.crt
+```
## ImageCatalog
@@ -276,6 +317,40 @@ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-
+
+
+## Publication
+
+**Appears in:**
+
+
+
+Field Description
+apiVersion
[Required]
stringpostgresql.k8s.enterprisedb.io/v1
+kind
[Required]
stringDatabase
+
+metadata
[Required]
+meta/v1.ObjectMeta
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the
+metadata
field.
+
+spec
[Required]
+DatabaseSpec
+
+
+
+
+
+status
+DatabaseStatus
+
+
+
+
+
## ScheduledBackup
@@ -313,6 +388,40 @@ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-
+
+
+## Subscription
+
+**Appears in:**
+
+
+
+Field Description
+apiVersion
[Required]
stringpostgresql.k8s.enterprisedb.io/v1
+kind
[Required]
stringPublication
+
+metadata
[Required]
+meta/v1.ObjectMeta
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the
+metadata
field.
+
+spec
[Required]
+PublicationSpec
+
+ No description provided.
+
+
+
+status
[Required]
+PublicationStatus
+
+ No description provided.
+
+
+
## AffinityConfiguration
@@ -573,7 +682,7 @@ plugin for this backup
+
+Field Description
+apiVersion
[Required]
stringpostgresql.k8s.enterprisedb.io/v1
+kind
[Required]
stringSubscription
+
+metadata
[Required]
+meta/v1.ObjectMeta
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the
+metadata
field.
+
+spec
[Required]
+SubscriptionSpec
+
+ No description provided.
+
+
+
+status
[Required]
+SubscriptionStatus
+
+ No description provided.
+
Type is tho role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE
-tablespaceName
[Required]tablespaceName
The backup method being used
online
[Required]online
Whether the backup was online/hot (true
) or offline/cold (false
)
pluginMetadata
A map containing the plugin metadata
+false
)
The value to be passed as option --lc-ctype
for initdb (default:C
)
locale
Sets the default collation order and character classification in the new database.
+localeProvider
This option sets the locale provider for databases created in the new cluster. +Available from PostgreSQL 16.
+icuLocale
Specifies the ICU locale when the ICU provider is used.
+This option requires localeProvider
to be set to icu
.
+Available from PostgreSQL 15.
icuRules
Specifies additional collation rules to customize the behavior of the default collation.
+This option requires localeProvider
to be set to icu
.
+Available from PostgreSQL 16.
builtinLocale
Specifies the locale name when the builtin provider is used.
+This option requires localeProvider
to be set to builtin
.
+Available from PostgreSQL 17.
walSegmentSize
ephemeralVolumesSizeLimit
[Required]ephemeralVolumesSizeLimit
externalClusters
The list of external clusters which are used in the configuration
@@ -1863,8 +2021,8 @@ advisable for any PostgreSQL cluster employed for development/staging purposes.plugins
[Required]plugins
The plugins configuration, containing @@ -1965,7 +2123,7 @@ any plugin to be loaded with the corresponding configuration
during a switchover or a failoverlastPromotionToken
[Required]lastPromotionToken
.spec.failoverDelay
is populated or dur
Image contains the image name used by the pods
pluginStatus
[Required]pluginStatus
DataDurabilityLevel specifies how strictly to enforce synchronous replication
+when cluster instances are unavailable. Options are required
or preferred
.
DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.
+ ## DatabaseRoleRef @@ -2319,197 +2502,312 @@ PostgreSQL cluster from an existing storage - + -## EPASConfiguration +## DatabaseSpec **Appears in:** -- [PostgresConfiguration](#postgresql-k8s-enterprisedb-io-v1-PostgresConfiguration) +- [Database](#postgresql-k8s-enterprisedb-io-v1-Database) -EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+DatabaseSpec is the specification of a Postgresql Database
Field | Description |
---|---|
audit -bool + | |
cluster [Required]+core/v1.LocalObjectReference |
- If true enables edb_audit logging +The corresponding cluster |
tde -TDEConfiguration + | |
ensure +EnsureOption |
- TDE configuration +Ensure the PostgreSQL database is |
EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
- -Field | Description |
---|---|
labels -map[string]string + | |
name [Required]+string |
- No description provided. | +
annotations -map[string]string + | |
owner [Required]+string |
- No description provided. | +
EnsureOption represents whether we should enforce the presence or absence of
-a Role in a PostgreSQL instance
-
-## EphemeralVolumesSizeLimitConfiguration
-
-**Appears in:**
-
-- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
-
-EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral
-storage
-
-Field | Description |
----|---|
-shm [Required] k8s.io/apimachinery/pkg/api/resource.Quantity | Shm is the size limit of the shared memory volume |
-temporaryData [Required] k8s.io/apimachinery/pkg/api/resource.Quantity | TemporaryData is the size limit of the temporary data volume |
-
-ExternalCluster represents the connection parameters to an
-external cluster which is used in the other sections of the configuration
-
-Field | Description |
----|---|
-name [Required] | The server name, required |
-connectionParameters map[string]string | The list of connection parameters, such as dbname, host, username, etc |
-sslCert core/v1.SecretKeySelector | The reference to an SSL certificate to be used to connect to this instance |
-sslKey core/v1.SecretKeySelector | The reference to an SSL private key to be used to connect to this instance |
-sslRootCert core/v1.SecretKeySelector | The reference to an SSL CA public key to be used to connect to this instance |
-password core/v1.SecretKeySelector | The reference to the password to be used to connect to the server. If a password is provided, EDB Postgres for Kubernetes creates a PostgreSQL passfile at |
-barmanObjectStore github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration | The configuration for the barman-cloud tool suite |
+template string | The name of the template from which to create the new database |
+encoding string | The encoding (cannot be changed) |
+locale string | The locale (cannot be changed) |
+locale_provider string | The locale provider (cannot be changed) |
+lc_collate string | The LC_COLLATE (cannot be changed) |
+lc_ctype string | The LC_CTYPE (cannot be changed) |
+icu_locale string | The ICU_LOCALE (cannot be changed) |
+icu_rules string | The ICU_RULES (cannot be changed) |
+builtin_locale string | The BUILTIN_LOCALE (cannot be changed) |
+collation_version string | The COLLATION_VERSION (cannot be changed) |
+isTemplate bool | True when the database is a template |
+allowConnections bool | True when connections to this database are allowed |
+connectionLimit int | Connection limit, -1 means no limit and -2 means the database is not valid |
+tablespace string | The default tablespace of this database |
+databaseReclaimPolicy DatabaseReclaimPolicy | The policy for end-of-life maintenance of this database |

DatabaseStatus defines the observed state of Database

+Field | Description |
---|---|
+observedGeneration int64 | A sequence number representing the latest desired state that was synchronized |
+applied bool | Applied is true if the database was reconciled correctly |
+message string | Message is the reconciliation output message |

EPASConfiguration contains EDB Postgres Advanced Server specific configurations

+Field | Description |
---|---|
+audit bool | If true enables edb_audit logging |
+tde TDEConfiguration | TDE configuration |

EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster

+Field | Description |
---|---|
+labels map[string]string | No description provided. |
+annotations map[string]string | No description provided. |

EnsureOption represents whether we should enforce the presence or absence of
+a Role in a PostgreSQL instance
+
+## EphemeralVolumesSizeLimitConfiguration
+
+**Appears in:**
+
+- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)
+
+EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral
+storage
+
+Field | Description |
---|---|
+shm k8s.io/apimachinery/pkg/api/resource.Quantity | Shm is the size limit of the shared memory volume |
+temporaryData k8s.io/apimachinery/pkg/api/resource.Quantity | TemporaryData is the size limit of the temporary data volume |

ImageCatalogRef defines the reference to a major version in an ImageCatalog

Field | Description |
---|---|
TypedLocalObjectReference core/v1.TypedLocalObjectReference |
(Members of TypedLocalObjectReference are embedded into this type.)
@@ -2608,6 +2906,26 @@ database right after is imported - to be used with extreme care
pg_restore are invoked, avoiding data import. Default: false .
|
pgDumpExtraOptions +[]string + |
+
+ List of custom options to pass to the pg_dump command |
+
pgRestoreExtraOptions +[]string + |
+
+ List of custom options to pass to the pg_restore command |
+
-updateStrategy [Required]
+updateStrategy
-additional [Required]
+additional

Field | Description |
---|---|
-name [Required] string
+name string
@@ -3342,6 +3660,55 @@ the operator calls PgBouncer's PAUSE and RESUME comman
|
PluginConfiguration specifies a plugin that needs to be loaded for this +cluster to be reconciled
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the plugin name + |
+
enabled +bool + |
+
+ Enabled is true if this plugin will be used + |
+
parameters +map[string]string + |
+
+ Parameters is the configuration of the plugin + |
+
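For orientation, the sketch below shows how the PluginConfiguration fields above are set through the `plugins` stanza of a Cluster manifest. The cluster name, plugin name, and parameter keys are illustrative placeholders, not a shipped plugin.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example                # illustrative name
spec:
  instances: 3
  plugins:
    - name: example-plugin.my-org.io   # hypothetical plugin name
      enabled: true
      parameters:                      # free-form plugin configuration
        logLevel: debug
```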
PluginConfigurationList represents a set of plugins with their
+configuration parameters

## PluginStatus

@@ -3370,7 +3737,7 @@ the operator calls PgBouncer's PAUSE and RESUME commands
latest reconciliation loop
-capabilities [Required]
+capabilities
-operatorCapabilities [Required]
+operatorCapabilities
-walCapabilities [Required]
+walCapabilities
-backupCapabilities [Required]
+backupCapabilities
-status [Required]
+restoreJobHookCapabilities
+
+ RestoreJobHookCapabilities are the list of capabilities of the
+plugin regarding the RestoreJobHook management
+
+status
PrimaryUpdateStrategy contains the strategy to follow when upgrading the primary server of the cluster as part of rolling updates
+ + +## PublicationReclaimPolicy + +(Alias of `string`) + +**Appears in:** + +- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec) + +PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.
## PublicationSpec

**Appears in:**

- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)

PublicationSpec defines the desired state of Publication
+ +Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The name of the PostgreSQL cluster that identifies the "publisher" + |
+
name [Required]+string + |
+
+ The name of the publication inside PostgreSQL + |
+
dbname [Required]+string + |
+
+ The name of the database where the publication will be installed in +the "publisher" cluster + |
+
parameters +map[string]string + |
+
+ Publication parameters part of the |
+
target [Required]+PublicationTarget + |
+
+ Target of the publication as expected by PostgreSQL |
+
publicationReclaimPolicy +PublicationReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this publication + |
+
PublicationStatus defines the observed state of Publication
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the publication was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
PublicationTarget is what this publication should publish
+ +Field | Description |
---|---|
allTables +bool + |
+
+ Marks the publication as one that replicates changes for all tables
+in the database, including tables created in the future.
+Corresponding to |
+
objects +[]PublicationTargetObject + |
+
+ Just the following schema objects + |
+
PublicationTargetObject is an object to publish
+ +Field | Description |
---|---|
tablesInSchema +string + |
+
+ Marks the publication as one that replicates changes for all tables
+in the specified list of schemas, including tables created in the
+future. Corresponding to |
+
table +PublicationTargetTable + |
+
+ Specifies a list of tables to add to the publication. Corresponding
+to |
+
PublicationTargetTable is a table to publish
+ +Field | Description |
---|---|
only +bool + |
+
+ Whether to limit to the table only or include all its descendants + |
+
name [Required]+string + |
+
+ The table name + |
+
schema +string + |
+
+ The schema name + |
+
columns +[]string + |
+
+ The columns to publish + |
+
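Putting the PublicationSpec and PublicationTarget fields above together, a minimal Publication manifest could look like the following sketch; the cluster, database, and table names are placeholders.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Publication
metadata:
  name: publication-example       # illustrative name
spec:
  cluster:
    name: cluster-publisher       # the "publisher" cluster
  name: pub_app                   # publication name inside PostgreSQL
  dbname: app                     # database hosting the publication
  target:
    objects:
      - table:
          name: orders            # publish a single table
          schema: public
```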
Field | Description |
---|---|
-self [Required] string
+self string
@@ -3906,7 +4500,7 @@ cluster
or a replica cluster, comparing it with primary
-primary [Required] string
+primary string
@@ -3921,7 +4515,7 @@ topology specified in externalClusters
The name of the external cluster which is the replication origin
-enabled [Required] bool
+enabled bool
@@ -3931,7 +4525,7 @@ object store or via streaming through pg_basebackup. Refer to the Replica clusters page of the documentation for more information.
-promotionToken [Required] string
+promotionToken string
@@ -3939,7 +4533,7 @@ Refer to the Replica clusters page of the documentation for more information.
-minApplyDelay [Required] meta/v1.Duration
+minApplyDelay meta/v1.Duration
@@ -4647,6 +5241,132 @@ Size cannot be decreased. |
SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.
## SubscriptionSpec

**Appears in:**

- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)

SubscriptionSpec defines the desired state of Subscription
+ +Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The name of the PostgreSQL cluster that identifies the "subscriber" + |
+
name [Required]+string + |
+
+ The name of the subscription inside PostgreSQL + |
+
dbname [Required]+string + |
+
+ The name of the database where the publication will be installed in +the "subscriber" cluster + |
+
parameters +map[string]string + |
+
+ Subscription parameters part of the |
+
publicationName [Required]+string + |
+
+ The name of the publication inside the PostgreSQL database in the +"publisher" + |
+
publicationDBName +string + |
+
+ The name of the database containing the publication on the external +cluster. Defaults to the one in the external cluster definition. + |
+
externalClusterName [Required]+string + |
+
+ The name of the external cluster with the publication ("publisher") + |
+
subscriptionReclaimPolicy +SubscriptionReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this subscription + |
+
SubscriptionStatus defines the observed state of Subscription
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the subscription was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
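On the subscriber side, a minimal Subscription manifest built from the SubscriptionSpec fields above might look like this sketch; all names are placeholders and `cluster-publisher` is assumed to be defined under the subscriber's `externalClusters`.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Subscription
metadata:
  name: subscription-example      # illustrative name
spec:
  cluster:
    name: cluster-subscriber      # the "subscriber" cluster
  name: sub_app                   # subscription name inside PostgreSQL
  dbname: app                     # database where the subscription is created
  publicationName: pub_app        # publication on the "publisher"
  externalClusterName: cluster-publisher   # externalClusters entry for the publisher
```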
dataDurability

If set to "required", data durability is strictly enforced. Write operations
with synchronous commit settings (`on`, `remote_write`, or `remote_apply`) will
block if there are insufficient healthy replicas, ensuring data persistence.
If set to "preferred", data durability is maintained when healthy replicas
are available, but the required number of instances will adjust dynamically
if replicas become unavailable. This setting relaxes strict durability enforcement
to allow for operational continuity. This setting is only applicable if both
`standbyNamesPre` and `standbyNamesPost` are unset (empty).
Package v1 contains API Schema definitions for the postgresql v1 API group
## Resource Types

- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
- [Database](#postgresql-k8s-enterprisedb-io-v1-Database)
- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)
- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)

## Backup

Backup is the Schema for the backups API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Backup |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+BackupSpec + |
+
+ Specification of the desired behavior of the backup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +BackupStatus + |
+
+ Most recently observed status of the backup. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Cluster is the Schema for the PostgreSQL API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Cluster |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ClusterSpec + |
+
+ Specification of the desired behavior of the cluster. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +ClusterStatus + |
+
+ Most recently observed status of the cluster. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
ClusterImageCatalog is the Schema for the clusterimagecatalogs API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ClusterImageCatalog |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ImageCatalogSpec + |
+
+ Specification of the desired behavior of the ClusterImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Database is the Schema for the databases API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Database |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+DatabaseSpec + |
+
+ Specification of the desired Database. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +DatabaseStatus + |
+
+ Most recently observed status of the Database. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
ImageCatalog is the Schema for the imagecatalogs API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ImageCatalog |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ImageCatalogSpec + |
+
+ Specification of the desired behavior of the ImageCatalog. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Pooler is the Schema for the poolers API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Pooler |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+PoolerSpec + |
+
+ Specification of the desired behavior of the Pooler. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +PoolerStatus + |
+
+ Most recently observed status of the Pooler. This data may not be up to +date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Publication is the Schema for the publications API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Publication |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+PublicationSpec + |
++ No description provided. | +
status [Required]+PublicationStatus + |
++ No description provided. | +
ScheduledBackup is the Schema for the scheduledbackups API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | ScheduledBackup |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+ScheduledBackupSpec + |
+
+ Specification of the desired behavior of the ScheduledBackup. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
status +ScheduledBackupStatus + |
+
+ Most recently observed status of the ScheduledBackup. This data may not be up +to date. Populated by the system. Read-only. +More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status + |
+
Subscription is the Schema for the subscriptions API
+ +Field | Description |
---|---|
apiVersion [Required]string | postgresql.k8s.enterprisedb.io/v1 |
kind [Required]string | Subscription |
metadata [Required]+meta/v1.ObjectMeta + |
+
+ No description provided.Refer to the Kubernetes API documentation for the fields of the metadata field. |
+
spec [Required]+SubscriptionSpec + |
++ No description provided. | +
status [Required]+SubscriptionStatus + |
++ No description provided. | +
AffinityConfiguration contains the info we need to create the +affinity rules for Pods
+ +Field | Description |
---|---|
enablePodAntiAffinity +bool + |
+
+ Activates anti-affinity for the pods. The operator will define pods +anti-affinity unless this field is explicitly set to false + |
+
topologyKey +string + |
+
+ TopologyKey to use for anti-affinity configuration. See k8s documentation +for more info on that + |
+
nodeSelector +map[string]string + |
+
+ NodeSelector is map of key-value pairs used to define the nodes on which +the pods can run. +More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ + |
+
nodeAffinity +core/v1.NodeAffinity + |
+
+ NodeAffinity describes node affinity scheduling rules for the pod. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + |
+
tolerations +[]core/v1.Toleration + |
+
+ Tolerations is a list of Tolerations that should be set for all the pods, in order to allow them to run +on tainted nodes. +More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ + |
+
podAntiAffinityType +string + |
+
+ PodAntiAffinityType allows the user to decide whether pod anti-affinity between cluster instance has to be +considered a strong requirement during scheduling or not. Allowed values are: "preferred" (default if empty) or +"required". Setting it to "required", could lead to instances remaining pending until new kubernetes nodes are +added if all the existing nodes don't match the required pod anti-affinity rule. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + |
+
additionalPodAntiAffinity +core/v1.PodAntiAffinity + |
+
+ AdditionalPodAntiAffinity allows to specify pod anti-affinity terms to be added to the ones generated +by the operator if EnablePodAntiAffinity is set to true (default) or to be used exclusively if set to false. + |
+
additionalPodAffinity +core/v1.PodAffinity + |
+
+ AdditionalPodAffinity allows to specify pod affinity terms to be passed to all the cluster's pods. + |
+
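To make the AffinityConfiguration fields above concrete, here is a hedged fragment of a Cluster `.spec.affinity` stanza; the node label and toleration values are placeholders.

```yaml
spec:
  affinity:
    enablePodAntiAffinity: true
    podAntiAffinityType: required        # "preferred" is the default
    topologyKey: kubernetes.io/hostname
    nodeSelector:
      workload.example.com/postgres: ""  # illustrative node label
    tolerations:
      - key: dedicated                   # illustrative taint
        operator: Equal
        value: postgres
        effect: NoSchedule
```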
AvailableArchitecture represents the state of a cluster's architecture
+ +Field | Description |
---|---|
goArch [Required]+string + |
+
+ GoArch is the name of the executable architecture + |
+
hash [Required]+string + |
+
+ Hash is the hash of the executable + |
+
BackupConfiguration defines how the backup of the cluster are taken. +The supported backup methods are BarmanObjectStore and VolumeSnapshot. +For details and examples refer to the Backup and Recovery section of the +documentation
+ +Field | Description |
---|---|
volumeSnapshot +VolumeSnapshotConfiguration + |
+
+ VolumeSnapshot provides the configuration for the execution of volume snapshot backups. + |
+
barmanObjectStore +github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanObjectStoreConfiguration + |
+
+ The configuration for the barman-cloud tool suite + |
+
retentionPolicy +string + |
+
+ RetentionPolicy is the retention policy to be used for backups
+and WALs (i.e. '60d'). The retention policy is expressed in the form
+of |
+
target +BackupTarget + |
+
+ The policy to decide which instance should perform backups. Available
+options are empty string, which will default to |
+
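As a sketch, the BackupConfiguration fields above sit under `.spec.backup` of a Cluster. The `target` value and the volume snapshot class below are assumptions, since their allowed values are documented in the backup pages rather than in this table.

```yaml
spec:
  backup:
    retentionPolicy: "30d"          # expressed like '60d' in the description above
    target: prefer-standby          # assumed value; empty string falls back to the default
    volumeSnapshot:
      className: csi-snapclass      # assumed VolumeSnapshotConfiguration field and class name
```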
BackupMethod defines the way of executing the physical base backups of +the selected PostgreSQL instance
+ + + +## BackupPhase + +(Alias of `string`) + +**Appears in:** + +- [BackupStatus](#postgresql-k8s-enterprisedb-io-v1-BackupStatus) + +BackupPhase is the phase of the backup
+ + + +## BackupPluginConfiguration + +**Appears in:** + +- [BackupSpec](#postgresql-k8s-enterprisedb-io-v1-BackupSpec) + +- [ScheduledBackupSpec](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackupSpec) + +BackupPluginConfiguration contains the backup configuration used by +the backup plugin
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the name of the plugin managing this backup + |
+
parameters +map[string]string + |
+
+ Parameters are the configuration parameters passed to the backup +plugin for this backup + |
+
BackupSnapshotElementStatus is a volume snapshot that is part of a volume snapshot method backup
+ +Field | Description |
---|---|
name [Required]+string + |
+
+ Name is the snapshot resource name + |
+
type [Required]+string + |
+
+ Type is the role of the snapshot in the cluster, such as PG_DATA, PG_WAL and PG_TABLESPACE + |
+
tablespaceName +string + |
+
+ TablespaceName is the name of the snapshotted tablespace. Only set +when type is PG_TABLESPACE + |
+
BackupSnapshotStatus the fields exclusive to the volumeSnapshot method backup
+ +Field | Description |
---|---|
elements +[]BackupSnapshotElementStatus + |
+
+ The elements list, populated with the gathered volume snapshots + |
+
BackupSource contains the backup we need to restore from, plus some +information that could be needed to correctly restore it.
+ +Field | Description |
---|---|
LocalObjectReference +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+(Members of LocalObjectReference are embedded into this type.)
+ No description provided. |
+
endpointCA +github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ EndpointCA store the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive. + |
+
BackupSpec defines the desired state of Backup
+ +Field | Description |
---|---|
cluster [Required]+github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The cluster to backup + |
+
target +BackupTarget + |
+
+ The policy to decide which instance should perform this backup. If empty,
+it defaults to |
+
method +BackupMethod + |
+
+ The backup method to be used, possible options are |
+
pluginConfiguration +BackupPluginConfiguration + |
+
+ Configuration parameters passed to the plugin managing this backup + |
+
online +bool + |
+
+ Whether the default type of backup with volume snapshots is
+online/hot ( |
+
onlineConfiguration +OnlineConfiguration + |
+
+ Configuration parameters to control the online/hot backup with volume snapshots +Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza + |
+
BackupStatus defines the observed state of Backup
+ +Field | Description |
---|---|
BarmanCredentials +github.com/cloudnative-pg/barman-cloud/pkg/api.BarmanCredentials + |
+(Members of BarmanCredentials are embedded into this type.)
+ The potential credentials for each cloud provider + |
+
endpointCA +github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector + |
+
+ EndpointCA store the CA bundle of the barman endpoint. +Useful when using self-signed certificates to avoid +errors with certificate issuer and barman-cloud-wal-archive. + |
+
endpointURL +string + |
+
+ Endpoint to be used to upload data to the cloud, +overriding the automatic endpoint discovery + |
+
destinationPath +string + |
+
+ The path where to store the backup (i.e. s3://bucket/path/to/folder) +this path, with different destination folders, will be used for WALs +and for data. This may not be populated in case of errors. + |
+
serverName +string + |
+
+ The server name on S3, the cluster name is used if this +parameter is omitted + |
+
encryption +string + |
+
+ Encryption method required to S3 API + |
+
backupId +string + |
+
+ The ID of the Barman backup + |
+
backupName +string + |
+
+ The Name of the Barman backup + |
+
phase +BackupPhase + |
+
+ The last backup status + |
+
startedAt +meta/v1.Time + |
+
+ When the backup was started + |
+
stoppedAt +meta/v1.Time + |
+
+ When the backup was terminated + |
+
beginWal +string + |
+
+ The starting WAL + |
+
endWal +string + |
+
+ The ending WAL + |
+
beginLSN +string + |
+
+ The starting xlog + |
+
endLSN +string + |
+
+ The ending xlog + |
+
error +string + |
+
+ The detected error + |
+
commandOutput +string + |
+
+ Unused. Retained for compatibility with old versions. + |
+
commandError +string + |
+
+ The backup command output in case of error + |
+
backupLabelFile +[]byte + |
+
+ Backup label file content as returned by Postgres in case of online (hot) backups + |
+
tablespaceMapFile +[]byte + |
+
+ Tablespace map file content as returned by Postgres in case of online (hot) backups + |
+
instanceID +InstanceID + |
+
+ Information to identify the instance where the backup has been taken from + |
+
snapshotBackupStatus +BackupSnapshotStatus + |
+
+ Status of the volumeSnapshot backup + |
+
method +BackupMethod + |
+
+ The backup method being used + |
+
online +bool + |
+
+ Whether the backup was online/hot ( |
+
pluginMetadata +map[string]string + |
+
+ A map containing the plugin metadata + |
+
BackupTarget describes the preferred targets for a backup
+ + + +## BootstrapConfiguration + +**Appears in:** + +- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec) + +BootstrapConfiguration contains information about how to create the PostgreSQL
+cluster. Only a single bootstrap method can be defined among the supported
+ones. initdb
will be used as the bootstrap method if left
+unspecified. Refer to the Bootstrap page of the documentation for more
+information.
Field | Description |
---|---|
initdb +BootstrapInitDB + |
+
+ Bootstrap the cluster via initdb + |
+
recovery +BootstrapRecovery + |
+
+ Bootstrap the cluster from a backup + |
+
pg_basebackup +BootstrapPgBaseBackup + |
+
+ Bootstrap the cluster taking a physical backup of another compatible +PostgreSQL instance + |
+
BootstrapInitDB is the configuration of the bootstrap process when +initdb is used +Refer to the Bootstrap page of the documentation for more information.
+ +Field | Description |
---|---|
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
redwood +bool + |
+
+ If we need to enable/disable Redwood compatibility. Requires +EPAS and for EPAS defaults to true + |
+
options +[]string + |
+
+ The list of options that must be passed to initdb when creating the cluster. +Deprecated: This could lead to inconsistent configurations, +please use the explicit provided parameters instead. +If defined, explicit values will be ignored. + |
+
dataChecksums +bool + |
+
+ Whether the |
+
encoding +string + |
+
+ The value to be passed as option |
+
localeCollate +string + |
+
+ The value to be passed as option |
+
localeCType +string + |
+
+ The value to be passed as option |
+
locale +string + |
+
+ Sets the default collation order and character classification in the new database. + |
+
localeProvider +string + |
+
+ This option sets the locale provider for databases created in the new cluster. +Available from PostgreSQL 16. + |
+
icuLocale +string + |
+
+ Specifies the ICU locale when the ICU provider is used.
+This option requires |
+
icuRules +string + |
+
+ Specifies additional collation rules to customize the behavior of the default collation.
+This option requires |
+
builtinLocale +string + |
+
+ Specifies the locale name when the builtin provider is used.
+This option requires |
+
walSegmentSize +int + |
+
+ The value in megabytes (1 to 1024) to be passed to the |
+
postInitSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the |
+
postInitApplicationSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the application +database right after the cluster has been created - to be used with extreme care +(by default empty) + |
+
postInitTemplateSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the |
+
import +Import + |
+
+ Bootstraps the new cluster by importing data from an existing PostgreSQL
+instance using logical backup ( |
+
postInitApplicationSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files +to be executed as a superuser in the application database right after +the cluster has been created. The references are processed in a specific order: +first, all Secrets are processed, followed by all ConfigMaps. +Within each group, the processing order follows the sequence specified +in their respective arrays. +(by default empty) + |
+
postInitTemplateSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the |
+
postInitSQLRefs +SQLRefs + |
+
+ List of references to ConfigMaps or Secrets containing SQL files
+to be executed as a superuser in the |
+
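Tying the BootstrapInitDB fields above together, a hedged sketch of an `initdb` bootstrap stanza follows; the database and owner names are placeholders, and the locale provider value is an assumption taken from PostgreSQL's locale support.

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      dataChecksums: true
      encoding: UTF8
      walSegmentSize: 32            # megabytes, must be between 1 and 1024
      localeProvider: icu           # available from PostgreSQL 16
      icuLocale: en
      postInitSQL:
        - CREATE EXTENSION IF NOT EXISTS pg_stat_statements
```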
BootstrapPgBaseBackup contains the configuration required to take +a physical backup of an existing PostgreSQL cluster
+ +Field | Description |
---|---|
source [Required]+string + |
+
+ The name of the server of which we need to take a physical backup + |
+
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
BootstrapRecovery contains the configuration required to restore
+from an existing cluster using 3 methodologies: external cluster,
+volume snapshots or backup objects. Full recovery and Point-In-Time
+Recovery are supported.
+The method can also be used to create clusters in continuous recovery
+(replica clusters), also supporting cascading replication when instances
+> 1.
Field | Description |
---|---|
backup +BackupSource + |
+
+ The backup object containing the physical base backup from which to
+initiate the recovery procedure.
+Mutually exclusive with |
+
source +string + |
+
+ The external cluster whose backup we will restore. This is also
+used as the name of the folder under which the backup is stored,
+so it must be set to the name of the source cluster
+Mutually exclusive with |
+
volumeSnapshots +DataSource + |
+
+ The static PVC data source(s) from which to initiate the
+recovery procedure. Currently supporting |
+
recoveryTarget +RecoveryTarget + |
+
+ By default, the recovery process applies all the available
+WAL files in the archive (full recovery). However, you can also
+end the recovery as soon as a consistent state is reached or
+recover to a point-in-time (PITR) by specifying a |
+
database +string + |
+
+ Name of the database used by the application. Default: |
+
owner +string + |
+
+ Name of the owner of the database in the instance to be used
+by applications. Defaults to the value of the |
+
secret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ Name of the secret containing the initial credentials for the +owner of the user database. If empty a new secret will be +created from scratch + |
+
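For comparison, a minimal recovery bootstrap from an existing Backup object, using only the BootstrapRecovery fields listed above (remember that `backup`, `source`, and `volumeSnapshots` are mutually exclusive); the object names are placeholders.

```yaml
spec:
  bootstrap:
    recovery:
      backup:
        name: backup-example        # an existing Backup resource in the same namespace
      database: app
      owner: app
```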
CatalogImage defines the image and major version
+ +Field | Description |
---|---|
image [Required]+string + |
+
+ The image reference + |
+
major [Required]+int + |
+
+ The PostgreSQL major version of the image. Must be unique within the catalog. + |
+
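An ImageCatalog (or ClusterImageCatalog) is essentially a list of the CatalogImage entries described above; a sketch with illustrative image references:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ImageCatalog
metadata:
  name: postgresql-catalog          # illustrative name
spec:
  images:
    - major: 16
      image: registry.example.com/postgresql:16   # placeholder reference
    - major: 17
      image: registry.example.com/postgresql:17   # placeholder reference
```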
CertificatesConfiguration contains the needed configurations to handle server certificates.
+ +Field | Description |
---|---|
serverCASecret +string + |
+
+ The secret containing the Server CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate the TLS certificate ServerTLSSecret. + +Contains: + + +
|
+
serverTLSSecret +string + |
+
+ The secret of type kubernetes.io/tls containing the server TLS certificate and key that will be set as
+ |
+
replicationTLSSecret +string + |
+
+ The secret of type kubernetes.io/tls containing the client certificate to authenticate as
+the |
+
clientCASecret +string + |
+
+ The secret containing the Client CA certificate. If not defined, a new secret will be created +with a self-signed CA and will be used to generate all the client certificates. + +Contains: + + +
|
+
serverAltDNSNames +[]string + |
+
+ The list of the server alternative DNS names to be added to the generated server TLS certificates, when required. + |
+
CertificatesStatus contains configuration certificates and related expiration dates.
+ +Field | Description |
---|---|
CertificatesConfiguration +CertificatesConfiguration + |
+(Members of CertificatesConfiguration are embedded into this type.)
+ Needed configurations to handle server certificates, initialized with default values, if needed. + |
+
expirations +map[string]string + |
+
+ Expiration dates for all certificates. + |
+
ClusterMonitoringTLSConfiguration is the type containing the TLS configuration +for the cluster's monitoring
+ +Field | Description |
---|---|
enabled +bool + |
+
+ Enable TLS for the monitoring endpoint. +Changing this option will force a rollout of all instances. + |
+
ClusterSpec defines the desired state of Cluster
+ +Field | Description |
---|---|
description +string + |
+
+ Description of this PostgreSQL cluster + |
+
inheritedMetadata +EmbeddedObjectMetadata + |
+
+ Metadata that will be inherited by all objects related to the Cluster + |
+
imageName +string + |
+
+ Name of the container image, supporting both tags ( |
+
imageCatalogRef +ImageCatalogRef + |
+
+ Defines the major PostgreSQL version we want to use within an ImageCatalog + |
+
imagePullPolicy +core/v1.PullPolicy + |
+
+ Image pull policy.
+One of |
+
schedulerName +string + |
+
+ If specified, the pod will be dispatched by specified Kubernetes +scheduler. If not specified, the pod will be dispatched by the default +scheduler. More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/ + |
+
postgresUID +int64 + |
+
+ The UID of the |
+
postgresGID +int64 + |
+
+ The GID of the |
+
instances [Required]+int + |
+
+ Number of instances required in the cluster + |
+
minSyncReplicas +int + |
+
+ Minimum number of instances required in synchronous replication with the +primary. Undefined or 0 allow writes to complete when no standby is +available. + |
+
maxSyncReplicas +int + |
+
+ The target value for the synchronous replication quorum, that can be +decreased if the number of ready standbys is lower than this. +Undefined or 0 disable synchronous replication. + |
+
postgresql +PostgresConfiguration + |
+
+ Configuration of the PostgreSQL server + |
+
replicationSlots +ReplicationSlotsConfiguration + |
+
+ Replication slots management configuration + |
+
bootstrap +BootstrapConfiguration + |
+
+ Instructions to bootstrap this cluster + |
+
replica +ReplicaClusterConfiguration + |
+
+ Replica cluster configuration + |
+
superuserSecret +github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The secret containing the superuser password. If not defined a new +secret will be created with a randomly generated password + |
+
enableSuperuserAccess +bool + |
+
+ When this option is enabled, the operator will use the |
+
certificates +CertificatesConfiguration + |
+
+ The configuration for the CA and related certificates + |
+
imagePullSecrets +[]github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference + |
+
+ The list of pull secrets to be used to pull the images. If the license key +contains a pull secret that secret will be automatically included. + |
+
storage +StorageConfiguration + |
+
+ Configuration of the storage of the instances + |
+
serviceAccountTemplate +ServiceAccountTemplate + |
+
+ Configure the generation of the service account + |
+
walStorage +StorageConfiguration + |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) + |
+
ephemeralVolumeSource +core/v1.EphemeralVolumeSource + |
+
+ EphemeralVolumeSource allows the user to configure the source of ephemeral volumes. + |
+
startDelay +int32 + |
+
+ The time in seconds that is allowed for a PostgreSQL instance to +successfully start up (default 3600). +The startup probe failure threshold is derived from this value using the formula: +ceiling(startDelay / 10). + |
+
stopDelay +int32 + |
+
+ The time in seconds that is allowed for a PostgreSQL instance to +gracefully shutdown (default 1800) + |
+
smartStopDelay +int32 + |
+
+ Deprecated: please use SmartShutdownTimeout instead + |
+
smartShutdownTimeout +int32 + |
+
+ The time in seconds that controls the window of time reserved for the smart shutdown of Postgres to complete.
+Make sure you reserve enough time for the operator to request a fast shutdown of Postgres
+(that is: |
+
switchoverDelay +int32 + |
+
+ The time in seconds that is allowed for a primary PostgreSQL instance +to gracefully shutdown during a switchover. +Default value is 3600 seconds (1 hour). + |
+
failoverDelay +int32 + |
+
+ The amount of time (in seconds) to wait before triggering a failover +after the primary PostgreSQL instance in the cluster was detected +to be unhealthy + |
+
livenessProbeTimeout +int32 + |
+
+ LivenessProbeTimeout is the time (in seconds) that is allowed for a PostgreSQL instance +to successfully respond to the liveness probe (default 30). +The Liveness probe failure threshold is derived from this value using the formula: +ceiling(livenessProbe / 10). + |
+
affinity +AffinityConfiguration + |
+
+ Affinity/Anti-affinity rules for Pods + |
+
topologySpreadConstraints +[]core/v1.TopologySpreadConstraint + |
+
+ TopologySpreadConstraints specifies how to spread matching pods among the given topology. +More info: +https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ + |
+
resources +core/v1.ResourceRequirements + |
+
+ Resources requirements of every generated Pod. Please refer to +https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ +for more information. + |
+
ephemeralVolumesSizeLimit +EphemeralVolumesSizeLimitConfiguration + |
+
+ EphemeralVolumesSizeLimit allows the user to set the limits for the ephemeral +volumes + |
+
priorityClassName +string + |
+
+ Name of the priority class which will be used in every generated Pod, if the PriorityClass +specified does not exist, the pod will not be able to schedule. Please refer to +https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass +for more information + |
+
primaryUpdateStrategy +PrimaryUpdateStrategy + |
+
+ Deployment strategy to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be automated ( |
+
primaryUpdateMethod +PrimaryUpdateMethod + |
+
+ Method to follow to upgrade the primary server during a rolling
+update procedure, after all replicas have been successfully updated:
+it can be with a switchover ( |
+
backup +BackupConfiguration + |
+
+ The configuration to be used for backups + |
+
nodeMaintenanceWindow +NodeMaintenanceWindow + |
+
+ Define a maintenance window for the Kubernetes nodes + |
+
licenseKey +string + |
+
+ The license key of the cluster. When empty, the cluster operates in +trial mode and after the expiry date (default 30 days) the operator +will cease any reconciliation attempt. For details, please refer to +the license agreement that comes with the operator. + |
+
licenseKeySecret +core/v1.SecretKeySelector + |
+
+ The reference to the license key. When this is set it takes precedence over LicenseKey. + |
+
monitoring +MonitoringConfiguration + |
+
+ The configuration of the monitoring infrastructure of this cluster + |
+
externalClusters +ExternalClusterList + |
+
+ The list of external clusters which are used in the configuration + |
+
logLevel +string + |
+
+ The instances' log level, one of the following values: error, warning, info (default), debug, trace + |
+
projectedVolumeTemplate +core/v1.ProjectedVolumeSource + |
+
+ Template to be used to define projected volumes, projected volumes will be mounted
+under |
+
env +[]core/v1.EnvVar + |
+
+ Env follows the Env format to pass environment variables +to the pods created in the cluster + |
+
envFrom +[]core/v1.EnvFromSource + |
+
+ EnvFrom follows the EnvFrom format to pass environment variables +sources to the pods to be used by Env + |
+
managed +ManagedConfiguration + |
+
+ The configuration that is used by the portions of PostgreSQL that are managed by the instance manager + |
+
seccompProfile +core/v1.SeccompProfile + |
+
+ The SeccompProfile applied to every Pod and Container.
+Defaults to: |
+
tablespaces +[]TablespaceConfiguration + |
+
+ The tablespaces configuration + |
+
enablePDB +bool + |
+
+ Manage the |
+
plugins +PluginConfigurationList + |
+
+ The plugins configuration, containing +any plugin to be loaded with the corresponding configuration + |
+
ClusterStatus defines the observed state of Cluster
+ +Field | Description |
---|---|
instances +int + |
+
+ The total number of PVC Groups detected in the cluster. It may differ from the number of existing instance pods. + |
+
readyInstances +int + |
+
+ The total number of ready instances in the cluster. It is equal to the number of ready instance pods. + |
+
instancesStatus +map[PodStatus][]string + |
+
+ InstancesStatus indicates in which status the instances are + |
+
instancesReportedState +map[PodName]InstanceReportedState + |
+
+ The reported state of the instances during the last reconciliation loop + |
+
managedRolesStatus +ManagedRoles + |
+
+ ManagedRolesStatus reports the state of the managed roles in the cluster + |
+
tablespacesStatus +[]TablespaceState + |
+
+ TablespacesStatus reports the state of the declarative tablespaces in the cluster + |
+
timelineID +int + |
+
+ The timeline of the Postgres cluster + |
+
topology +Topology + |
+
+ Instances topology. + |
+
latestGeneratedNode +int + |
+
+ ID of the latest generated node (used to avoid node name clashing) + |
+
currentPrimary +string + |
+
+ Current primary instance + |
+
targetPrimary +string + |
+
+ Target primary instance, this is different from the previous one +during a switchover or a failover + |
+
lastPromotionToken +string + |
+
+ LastPromotionToken is the last verified promotion token that +was used to promote a replica cluster + |
+
pvcCount +int32 + |
+
+ How many PVCs have been created by this cluster + |
+
jobCount +int32 + |
+
+ How many Jobs have been created by this cluster + |
+
danglingPVC +[]string + |
+
+ List of all the PVCs created by this cluster and still available +which are not attached to a Pod + |
+
resizingPVC +[]string + |
+
+ List of all the PVCs that have ResizingPVC condition. + |
+
initializingPVC +[]string + |
+
+ List of all the PVCs that are being initialized by this cluster + |
+
healthyPVC +[]string + |
+
+ List of all the PVCs not dangling nor initializing + |
+
unusablePVC +[]string + |
+
+ List of all the PVCs that are unusable because another PVC is missing + |
+
licenseStatus +github.com/EnterpriseDB/cloud-native-postgres/pkg/licensekey.Status + |
+
+ Status of the license + |
+
writeService +string + |
+
+ Current write pod + |
+
readService +string + |
+
+ Current list of read pods + |
+
phase +string + |
+
+ Current phase of the cluster + |
+
phaseReason +string + |
+
+ Reason for the current phase + |
+
secretsResourceVersion +SecretsResourceVersion + |
+
+ The list of resource versions of the secrets +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +secret data + |
+
configMapResourceVersion +ConfigMapResourceVersion + |
+
+ The list of resource versions of the configmaps, +managed by the operator. Every change here is done in the +interest of the instance manager, which will refresh the +configmap data + |
+
certificates +CertificatesStatus + |
+
+ The configuration for the CA and related certificates, initialized with defaults. + |
+
firstRecoverabilityPoint +string + |
+
+ The first recoverability point, stored as a date in RFC3339 format. +This field is calculated from the content of FirstRecoverabilityPointByMethod + |
+
firstRecoverabilityPointByMethod +map[BackupMethod]meta/v1.Time + |
+
+ The first recoverability point, stored as a date in RFC3339 format, per backup method type + |
+
lastSuccessfulBackup +string + |
+
+ Last successful backup, stored as a date in RFC3339 format +This field is calculated from the content of LastSuccessfulBackupByMethod + |
+
lastSuccessfulBackupByMethod +map[BackupMethod]meta/v1.Time + |
+
+ Last successful backup, stored as a date in RFC3339 format, per backup method type + |
+
lastFailedBackup +string + |
+
+ Stored as a date in RFC3339 format + |
+
cloudNativePostgresqlCommitHash +string + |
+
+ The commit hash of the operator that is running + |
+
currentPrimaryTimestamp +string + |
+
+ The timestamp when the last actual promotion to primary has occurred + |
+
currentPrimaryFailingSinceTimestamp +string + |
+
+ The timestamp when the primary was detected to be unhealthy
+This field is reported when |
+
targetPrimaryTimestamp +string + |
+
+ The timestamp when the last request for a new primary has occurred + |
+
poolerIntegrations +PoolerIntegrations + |
+
+ The integration needed by poolers referencing the cluster + |
+
cloudNativePostgresqlOperatorHash +string + |
+
+ The hash of the binary of the operator + |
+
availableArchitectures +[]AvailableArchitecture + |
+
+ AvailableArchitectures reports the available architectures of a cluster + |
+
conditions +[]meta/v1.Condition + |
+
+ Conditions for cluster object + |
+
instanceNames +[]string + |
+
+ List of instance names in the cluster + |
+
onlineUpdateEnabled +bool + |
+
+ OnlineUpdateEnabled shows if the online upgrade is enabled inside the cluster + |
+
azurePVCUpdateEnabled +bool + |
+
+ AzurePVCUpdateEnabled shows if the PVC online upgrade is enabled for this cluster + |
+
image +string + |
+
+ Image contains the image name used by the pods + |
+
pluginStatus +[]PluginStatus + |
+
+ PluginStatus is the status of the loaded plugins + |
+
switchReplicaClusterStatus +SwitchReplicaClusterStatus + |
+
+ SwitchReplicaClusterStatus is the status of the switch to replica cluster + |
+
demotionToken +string + |
+
+ DemotionToken is a JSON token containing the information +from pg_controldata such as Database system identifier, Latest checkpoint's +TimeLineID, Latest checkpoint's REDO location, Latest checkpoint's REDO +WAL file, and Time of latest checkpoint + |
+
ConfigMapResourceVersion is the resource versions of the secrets +managed by the operator
+ +Field | Description |
---|---|
metrics +map[string]string + |
+
+ A map with the versions of all the config maps used to pass metrics. +Map keys are the config map names, map values are the versions + |
+
DataDurabilityLevel specifies how strictly to enforce synchronous replication
when cluster instances are unavailable. Options are `required` or `preferred`.
DataSource contains the configuration required to bootstrap a +PostgreSQL cluster from an existing storage
+ +Field | Description |
---|---|
storage [Required]+core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage of the instances + |
+
walStorage +core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage for PostgreSQL WAL (Write-Ahead Log) + |
+
tablespaceStorage +map[string]core/v1.TypedLocalObjectReference + |
+
+ Configuration of the storage for PostgreSQL tablespaces + |
+
DatabaseReclaimPolicy describes a policy for end-of-life maintenance of databases.
## DatabaseRoleRef

**Appears in:**

- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)

DatabaseRoleRef is a reference to a role available inside PostgreSQL
+ +Field | Description |
---|---|
name +string + |
++ No description provided. | +
DatabaseSpec is the specification of a Postgresql Database
+ +Field | Description |
---|---|
cluster [Required]+core/v1.LocalObjectReference + |
+
+ The corresponding cluster + |
+
ensure +EnsureOption + |
+
+ Ensure the PostgreSQL database is |
+
name [Required]+string + |
+
+ The name inside PostgreSQL + |
+
owner [Required]+string + |
+
+ The owner + |
+
template +string + |
+
+ The name of the template from which to create the new database + |
+
encoding +string + |
+
+ The encoding (cannot be changed) + |
+
locale +string + |
+
+ The locale (cannot be changed) + |
+
locale_provider +string + |
+
+ The locale provider (cannot be changed) + |
+
lc_collate +string + |
+
+ The LC_COLLATE (cannot be changed) + |
+
lc_ctype +string + |
+
+ The LC_CTYPE (cannot be changed) + |
+
icu_locale +string + |
+
+ The ICU_LOCALE (cannot be changed) + |
+
icu_rules +string + |
+
+ The ICU_RULES (cannot be changed) + |
+
builtin_locale +string + |
+
+ The BUILTIN_LOCALE (cannot be changed) + |
+
collation_version +string + |
+
+ The COLLATION_VERSION (cannot be changed) + |
+
isTemplate +bool + |
+
+ True when the database is a template + |
+
allowConnections +bool + |
+
+ True when connections to this database are allowed + |
+
connectionLimit +int + |
+
+ Connection limit, -1 means no limit and -2 means the +database is not valid + |
+
tablespace +string + |
+
+ The default tablespace of this database + |
+
databaseReclaimPolicy +DatabaseReclaimPolicy + |
+
+ The policy for end-of-life maintenance of this database + |
+
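A hedged sketch of a declarative Database resource using the DatabaseSpec fields above; the names are placeholders and the `ensure` value is an assumption, as the allowed EnsureOption values are not spelled out in this table.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Database
metadata:
  name: app-database              # illustrative name
spec:
  cluster:
    name: cluster-example         # the owning Cluster
  name: app                       # database name inside PostgreSQL
  owner: app
  ensure: present                 # assumed EnsureOption value
  encoding: UTF8                  # cannot be changed after creation
  connectionLimit: -1             # -1 means no limit
```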
DatabaseStatus defines the observed state of Database
+ +Field | Description |
---|---|
observedGeneration +int64 + |
+
+ A sequence number representing the latest +desired state that was synchronized + |
+
applied +bool + |
+
+ Applied is true if the database was reconciled correctly + |
+
message +string + |
+
+ Message is the reconciliation output message + |
+
EPASConfiguration contains EDB Postgres Advanced Server specific configurations
+ +Field | Description |
---|---|
audit +bool + |
+
+ If true enables edb_audit logging + |
+
tde +TDEConfiguration + |
+
+ TDE configuration + |
+
EmbeddedObjectMetadata contains metadata to be inherited by all resources related to a Cluster
+ +Field | Description |
---|---|
labels +map[string]string + |
++ No description provided. | +
annotations +map[string]string + |
++ No description provided. | +
EnsureOption represents whether we should enforce the presence or absence of +a Role in a PostgreSQL instance
## EphemeralVolumesSizeLimitConfiguration

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

EphemeralVolumesSizeLimitConfiguration contains the configuration of the ephemeral
storage
+ +Field | Description |
---|---|
shm +k8s.io/apimachinery/pkg/api/resource.Quantity + |
+
+ Shm is the size limit of the shared memory volume + |
+
temporaryData +k8s.io/apimachinery/pkg/api/resource.Quantity + |
+
+ TemporaryData is the size limit of the temporary data volume + |
+
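In a Cluster manifest the two quantities above are set under `.spec.ephemeralVolumesSizeLimit`; a minimal fragment:

```yaml
spec:
  ephemeralVolumesSizeLimit:
    shm: 256Mi            # size limit of the shared memory volume
    temporaryData: 1Gi    # size limit of the temporary data volume
```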
ImageCatalogRef defines the reference to a major version in an ImageCatalog
+ +Field | Description |
---|---|
TypedLocalObjectReference +core/v1.TypedLocalObjectReference + |
+(Members of TypedLocalObjectReference are embedded into this type.)
+ No description provided. |
+
major [Required]+int + |
+
+ The major version of PostgreSQL we want to use from the ImageCatalog + |
+
ImageCatalogSpec defines the desired ImageCatalog
+ +Field | Description |
---|---|
images [Required]+[]CatalogImage + |
+
+ List of CatalogImages available in the catalog + |
+
Import contains the configuration to init a database from a logic snapshot of an externalCluster
+ +Field | Description |
---|---|
source [Required]+ImportSource + |
+
+ The source of the import + |
+
type [Required]+SnapshotType + |
+
+ The import type. Can be |
+
databases [Required]+[]string + |
+
+ The databases to import + |
+
roles +[]string + |
+
+ The roles to import + |
+
postImportApplicationSQL +[]string + |
+
+ List of SQL queries to be executed as a superuser in the application +database right after is imported - to be used with extreme care +(by default empty). Only available in microservice type. + |
+
schemaOnly +bool + |
+
+ When set to true, only the pre-data and post-data sections of pg_dump and pg_restore are invoked, avoiding data import. Default: false. + |
+
pgDumpExtraOptions +[]string + |
+
+ List of custom options to pass to the pg_dump command |
+
pgRestoreExtraOptions +[]string + |
+
+ List of custom options to pass to the pg_restore command |
+
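Combining the Import fields above with an `initdb` bootstrap, a hedged sketch of a microservice-type import follows; the external cluster name and the extra `pg_dump` option are placeholders.

```yaml
spec:
  bootstrap:
    initdb:
      database: app
      owner: app
      import:
        type: microservice
        databases:
          - app
        source:
          externalCluster: cluster-origin   # must match an externalClusters entry
        pgDumpExtraOptions:
          - "--jobs=2"                      # illustrative pg_dump option
```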
ImportSource describes the source for the logical snapshot
+ +Field | Description |
---|---|
externalCluster [Required]+string + |
+
+ The name of the externalCluster used for import + |
+
InstanceID contains the information to identify an instance
+ +Field | Description |
---|---|
podName +string + |
+
+ The pod name + |
+
ContainerID +string + |
+
+ The container ID + |
+
InstanceReportedState describes the last reported state of an instance during a reconciliation loop
+ +Field | Description |
---|---|
isPrimary [Required]+bool + |
+
+ indicates if an instance is the primary one + |
+
timeLineID +int + |
+
+ indicates on which TimelineId the instance is + |
+
LDAPBindAsAuth provides the required fields to use the +bind authentication for LDAP
+ +Field | Description |
---|---|
prefix +string + |
+
+ Prefix for the bind authentication option + |
+
suffix +string + |
+
+ Suffix for the bind authentication option + |
+
LDAPBindSearchAuth provides the required fields to use +the bind+search LDAP authentication process
+ +Field | Description |
---|---|
baseDN +string + |
+
+ Root DN to begin the user search + |
+
bindDN +string + |
+
+ DN of the user to bind to the directory + |
+
bindPassword +core/v1.SecretKeySelector + |
+
+ Secret with the password for the user to bind to the directory + |
+
searchAttribute +string + |
+
+ Attribute to match against the username + |
+
searchFilter +string + |
+
+ Search filter to use when doing the search+bind authentication + |
+
LDAPConfig contains the parameters needed for LDAP authentication
+ +Field | Description |
---|---|
server +string + |
+
+ LDAP hostname or IP address + |
+
port +int + |
+
+ LDAP server port + |
+
scheme +LDAPScheme + |
+
+ LDAP schema to be used, possible options are |
+
bindAsAuth +LDAPBindAsAuth + |
+
+ Bind as authentication configuration + |
+
bindSearchAuth +LDAPBindSearchAuth + |
+
+ Bind+Search authentication configuration + |
+
tls +bool + |
+
+ Set to 'true' to enable LDAP over TLS. 'false' is default + |
+
LDAPScheme defines the possible schemes for LDAP
## ManagedConfiguration

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

ManagedConfiguration represents the portions of PostgreSQL that are managed
by the instance manager
+ +Field | Description |
---|---|
roles +[]RoleConfiguration + |
+
+ Database roles managed by the |
+
services +ManagedServices + |
+
+ Services roles managed by the |
+
ManagedRoles tracks the status of a cluster's managed roles
+ +Field | Description |
---|---|
byStatus +map[RoleStatus][]string + |
+
+ ByStatus gives the list of roles in each state + |
+
cannotReconcile +map[string][]string + |
+
+ CannotReconcile lists roles that cannot be reconciled in PostgreSQL, +with an explanation of the cause + |
+
passwordStatus +map[string]PasswordState + |
+
+ PasswordStatus gives the last transaction id and password secret version for each managed role + |
+
ManagedService represents a specific service managed by the cluster. +It includes the type of service and its associated template specification.
+ +Field | Description |
---|---|
selectorType [Required]+ServiceSelectorType + |
+
+ SelectorType specifies the type of selectors that the service will have. +Valid values are "rw", "r", and "ro", representing read-write, read, and read-only services. + |
+
updateStrategy +ServiceUpdateStrategy + |
+
+ UpdateStrategy describes how the service differences should be reconciled + |
+
serviceTemplate [Required]+ServiceTemplateSpec + |
+
+ ServiceTemplate is the template specification for the service. + |
+
ManagedServices represents the services managed by the cluster.
+ +Field | Description |
---|---|
disabledDefaultServices +[]ServiceSelectorType + |
+
+ DisabledDefaultServices is a list of service types that are disabled by default. +Valid values are "r", and "ro", representing read, and read-only services. + |
+
additional +[]ManagedService + |
+
+ Additional is a list of additional managed services specified by the user. + |
+
Metadata is a structure similar to the metav1.ObjectMeta, but still +parseable by controller-gen to create a suitable CRD for the user. +The comment of PodTemplateSpec has an explanation of why we are +not using the core data types.
+ +Field | Description |
---|---|
name +string + |
+
+ The name of the resource. Only supported for certain types + |
+
labels +map[string]string + |
+
+ Map of string keys and values that can be used to organize and categorize +(scope and select) objects. May match selectors of replication controllers +and services. +More info: http://kubernetes.io/docs/user-guide/labels + |
+
annotations +map[string]string + |
+
+ Annotations is an unstructured key value map stored with a resource that may be +set by external tools to store and retrieve arbitrary metadata. They are not +queryable and should be preserved when modifying objects. +More info: http://kubernetes.io/docs/user-guide/annotations + |
+
## MonitoringConfiguration

MonitoringConfiguration is the type containing all the monitoring
configuration for a certain cluster

Field | Description |
---|---|
`disableDefaultQueries` *bool* | Whether the default queries should be injected. Set it to `true` if you don't want to inject default queries into the cluster. Default: false. |
`customQueriesConfigMap` *[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector* | The list of config maps containing the custom queries |
`customQueriesSecret` *[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector* | The list of secrets containing the custom queries |
`enablePodMonitor` *bool* | Enable or disable the `PodMonitor` |
`tls` *ClusterMonitoringTLSConfiguration* | Configure TLS communication for the metrics endpoint. Changing tls.enabled option will force a rollout of all instances. |
`podMonitorMetricRelabelings` *[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig* | The list of metric relabelings for the `PodMonitor`. Applied to samples before ingestion. |
`podMonitorRelabelings` *[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig* | The list of relabelings for the `PodMonitor`. Applied to samples before scraping. |
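A minimal sketch of a `spec.monitoring` stanza; the ConfigMap name and key are hypothetical:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  monitoring:
    enablePodMonitor: true
    disableDefaultQueries: false
    customQueriesConfigMap:
      - name: example-monitoring   # hypothetical ConfigMap holding custom queries
        key: custom-queries
```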
## NodeMaintenanceWindow

NodeMaintenanceWindow contains information that the operator
will use while upgrading the underlying node.

This option is only useful when the chosen storage prevents the Pods
from being freely moved across nodes.

Field | Description |
---|---|
`reusePVC` *bool* | Reuse the existing PVC (wait for the node to come up again) or not (recreate it elsewhere - when `instances` >1) |
`inProgress` *bool* | Is there a node maintenance activity in progress? |

## OnlineConfiguration

OnlineConfiguration contains the configuration parameters for the online volume snapshot

Field | Description |
---|---|
`waitForArchive` *bool* | If false, the function will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful with backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. By default, or when this parameter is true, pg_backup_stop will wait for WAL to be archived when archiving is enabled. On a standby, this means that it will wait only when archive_mode = always. If write activity on the primary is low, it may be useful to run pg_switch_wal on the primary in order to trigger an immediate segment switch. |
`immediateCheckpoint` *bool* | Control whether the I/O workload for the backup initial checkpoint will be limited, according to the `checkpoint_completion_target` setting on the PostgreSQL server. If set to true, an immediate checkpoint will be used, meaning PostgreSQL will complete the checkpoint as soon as possible. `false` by default. |
## PasswordState

PasswordState represents the state of the password of a managed RoleConfiguration

Field | Description |
---|---|
`transactionID` *int64* | the last transaction ID to affect the role definition in PostgreSQL |
`resourceVersion` *string* | the resource version of the password secret |

## PgBouncerIntegrationStatus

PgBouncerIntegrationStatus encapsulates the needed integration for the pgbouncer poolers referencing the cluster

Field | Description |
---|---|
`secrets` *[]string* | No description provided. |

## PgBouncerPoolMode

PgBouncerPoolMode is the mode of PgBouncer

## PgBouncerSecrets

**Appears in:**

- [PoolerSecrets](#postgresql-k8s-enterprisedb-io-v1-PoolerSecrets)

PgBouncerSecrets contains the versions of the secrets used
by pgbouncer

Field | Description |
---|---|
`authQuery` *SecretVersion* | The auth query secret version |
## PgBouncerSpec

PgBouncerSpec defines how to configure PgBouncer

Field | Description |
---|---|
`poolMode` *PgBouncerPoolMode* | The pool mode. Default: `session`. |
`authQuerySecret` *github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference* | The credentials of the user that need to be used for the authentication query. In case it is specified, also an AuthQuery (e.g. "SELECT usename, passwd FROM pg_catalog.pg_shadow WHERE usename=$1") has to be specified and no automatic CNP Cluster integration will be triggered. |
`authQuery` *string* | The query that will be used to download the hash of the password of a certain user. Default: "SELECT usename, passwd FROM public.user_search($1)". In case it is specified, also an AuthQuerySecret has to be specified and no automatic CNP Cluster integration will be triggered. |
`parameters` *map[string]string* | Additional parameters to be passed to PgBouncer - please check the CNP documentation for a list of options you can configure |
`pg_hba` *[]string* | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
`paused` *bool* | When set to `true`, PgBouncer will disconnect from the PostgreSQL server, first waiting for all queries to complete, and pause all new client connections until this value is set to `false` (default). Internally, the operator calls PgBouncer's `PAUSE` and `RESUME` commands. |
## PluginConfiguration

PluginConfiguration specifies a plugin that needs to be loaded for this
cluster to be reconciled

Field | Description |
---|---|
`name` [Required] *string* | Name is the plugin name |
`enabled` *bool* | Enabled is true if this plugin will be used |
`parameters` *map[string]string* | Parameters is the configuration of the plugin |

## PluginConfigurationList

PluginConfigurationList represents a set of plugins with their
configuration parameters

## PluginStatus

**Appears in:**

- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)

PluginStatus is the status of a loaded plugin

Field | Description |
---|---|
`name` [Required] *string* | Name is the name of the plugin |
`version` [Required] *string* | Version is the version of the plugin loaded by the latest reconciliation loop |
`capabilities` *[]string* | Capabilities are the list of capabilities of the plugin |
`operatorCapabilities` *[]string* | OperatorCapabilities are the list of capabilities of the plugin regarding the reconciler |
`walCapabilities` *[]string* | WALCapabilities are the list of capabilities of the plugin regarding the WAL management |
`backupCapabilities` *[]string* | BackupCapabilities are the list of capabilities of the plugin regarding the Backup management |
`restoreJobHookCapabilities` *[]string* | RestoreJobHookCapabilities are the list of capabilities of the plugin regarding the RestoreJobHook management |
`status` *string* | Status contains the status reported by the plugin through the SetStatusInCluster interface |
## PodTemplateSpec

PodTemplateSpec is a structure allowing the user to set
a template for Pod generation.

Unfortunately we can't use the corev1.PodTemplateSpec
type because the generated CRD won't have the field for the
metadata section.

References:
https://github.com/kubernetes-sigs/controller-tools/issues/385
https://github.com/kubernetes-sigs/controller-tools/issues/448
https://github.com/prometheus-operator/prometheus-operator/issues/3041

Field | Description |
---|---|
`metadata` *Metadata* | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
`spec` *core/v1.PodSpec* | Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |

## PodTopologyLabels

PodTopologyLabels represent the topology of a Pod. map[labelName]labelValue

## PoolerIntegrations

**Appears in:**

- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)

PoolerIntegrations encapsulates the needed integration for the poolers referencing the cluster

Field | Description |
---|---|
`pgBouncerIntegration` *PgBouncerIntegrationStatus* | No description provided. |
## PoolerMonitoringConfiguration

PoolerMonitoringConfiguration is the type containing all the monitoring
configuration for a certain Pooler.

Mirrors the Cluster's MonitoringConfiguration but without the custom queries
part for now.

Field | Description |
---|---|
`enablePodMonitor` *bool* | Enable or disable the `PodMonitor` |
`podMonitorMetricRelabelings` *[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig* | The list of metric relabelings for the `PodMonitor`. Applied to samples before ingestion. |
`podMonitorRelabelings` *[]github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1.RelabelConfig* | The list of relabelings for the `PodMonitor`. Applied to samples before scraping. |

## PoolerSecrets

PoolerSecrets contains the versions of all the secrets used

Field | Description |
---|---|
`serverTLS` *SecretVersion* | The server TLS secret version |
`serverCA` *SecretVersion* | The server CA secret version |
`clientCA` *SecretVersion* | The client CA secret version |
`pgBouncerSecrets` *PgBouncerSecrets* | The version of the secrets used by PgBouncer |
## PoolerSpec

PoolerSpec defines the desired state of Pooler

Field | Description |
---|---|
`cluster` [Required] *github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference* | This is the cluster reference on which the Pooler will work. Pooler name should never match with any cluster name within the same namespace. |
`type` *PoolerType* | Type of service to forward traffic to. Default: `rw`. |
`instances` *int32* | The number of replicas we want. Default: 1. |
`template` *PodTemplateSpec* | The template of the Pod to be created |
`pgbouncer` [Required] *PgBouncerSpec* | The PgBouncer configuration |
`deploymentStrategy` *apps/v1.DeploymentStrategy* | The deployment strategy to use for pgbouncer to replace existing pods with new ones |
`monitoring` *PoolerMonitoringConfiguration* | The configuration of the monitoring infrastructure of this pooler. |
`serviceTemplate` *ServiceTemplateSpec* | Template for the Service to be created |
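A minimal `Pooler` sketch tying the fields above together; the cluster and pooler names, as well as the PgBouncer parameter values, are illustrative only:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example      # hypothetical Cluster in the same namespace
  instances: 3
  type: rw
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
```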
## PoolerStatus

PoolerStatus defines the observed state of Pooler

Field | Description |
---|---|
`secrets` *PoolerSecrets* | The resource version of the config object |
`instances` *int32* | The number of pods trying to be scheduled |

## PoolerType

PoolerType is the type of the connection pool, meaning the service
we are targeting. Allowed values are `rw` and `ro`.
## PostgresConfiguration

PostgresConfiguration defines the PostgreSQL configuration

Field | Description |
---|---|
`parameters` *map[string]string* | PostgreSQL configuration options (postgresql.conf) |
`synchronous` *SynchronousReplicaConfiguration* | Configuration of the PostgreSQL synchronous replication feature |
`pg_hba` *[]string* | PostgreSQL Host Based Authentication rules (lines to be appended to the pg_hba.conf file) |
`pg_ident` *[]string* | PostgreSQL User Name Maps rules (lines to be appended to the pg_ident.conf file) |
`epas` *EPASConfiguration* | EDB Postgres Advanced Server specific configurations |
`syncReplicaElectionConstraint` *SyncReplicaElectionConstraints* | Requirements to be met by sync replicas. This will affect how the "synchronous_standby_names" parameter will be set up. |
`shared_preload_libraries` *[]string* | Lists of shared preload libraries to add to the default ones |
`ldap` *LDAPConfig* | Options to specify LDAP configuration |
`promotionTimeout` *int32* | Specifies the maximum number of seconds to wait when promoting an instance to primary. Default value is 40000000, greater than one year in seconds, big enough to simulate an infinite timeout |
`enableAlterSystem` *bool* | If this parameter is true, the user will be able to invoke `ALTER SYSTEM` on this EDB Postgres for Kubernetes Cluster. This should only be used for debugging and troubleshooting. Defaults to false. |
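As an illustrative sketch, a `spec.postgresql` stanza might combine several of these fields; the parameter values and HBA rule below are examples, not recommendations:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    parameters:
      max_connections: "200"
      shared_buffers: "256MB"
    pg_hba:
      # Appended to the generated pg_hba.conf
      - host app app 10.0.0.0/8 scram-sha-256
    shared_preload_libraries:
      - pg_stat_statements
```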
## PrimaryUpdateMethod

PrimaryUpdateMethod contains the method to use when upgrading
the primary server of the cluster as part of rolling updates

## PrimaryUpdateStrategy

(Alias of `string`)

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

PrimaryUpdateStrategy contains the strategy to follow when upgrading
the primary server of the cluster as part of rolling updates

## PublicationReclaimPolicy

(Alias of `string`)

**Appears in:**

- [PublicationSpec](#postgresql-k8s-enterprisedb-io-v1-PublicationSpec)

PublicationReclaimPolicy defines a policy for end-of-life maintenance of Publications.

## PublicationSpec

**Appears in:**

- [Publication](#postgresql-k8s-enterprisedb-io-v1-Publication)

PublicationSpec defines the desired state of Publication

Field | Description |
---|---|
`cluster` [Required] *core/v1.LocalObjectReference* | The name of the PostgreSQL cluster that identifies the "publisher" |
`name` [Required] *string* | The name of the publication inside PostgreSQL |
`dbname` [Required] *string* | The name of the database where the publication will be installed in the "publisher" cluster |
`parameters` *map[string]string* | Publication parameters part of the `WITH` clause as expected by PostgreSQL `CREATE PUBLICATION` command |
`target` [Required] *PublicationTarget* | Target of the publication as expected by PostgreSQL `CREATE PUBLICATION` command |
`publicationReclaimPolicy` *PublicationReclaimPolicy* | The policy for end-of-life maintenance of this publication |
## PublicationStatus

PublicationStatus defines the observed state of Publication

Field | Description |
---|---|
`observedGeneration` *int64* | A sequence number representing the latest desired state that was synchronized |
`applied` *bool* | Applied is true if the publication was reconciled correctly |
`message` *string* | Message is the reconciliation output message |

## PublicationTarget

PublicationTarget is what this publication should publish

Field | Description |
---|---|
`allTables` *bool* | Marks the publication as one that replicates changes for all tables in the database, including tables created in the future. Corresponding to `FOR ALL TABLES` in PostgreSQL. |
`objects` *[]PublicationTargetObject* | Just the following schema objects |

## PublicationTargetObject

PublicationTargetObject is an object to publish

Field | Description |
---|---|
`tablesInSchema` *string* | Marks the publication as one that replicates changes for all tables in the specified list of schemas, including tables created in the future. Corresponding to `FOR TABLES IN SCHEMA` in PostgreSQL. |
`table` *PublicationTargetTable* | Specifies a list of tables to add to the publication. Corresponding to `FOR TABLE` in PostgreSQL. |

## PublicationTargetTable

PublicationTargetTable is a table to publish

Field | Description |
---|---|
`only` *bool* | Whether to limit to the table only or include all its descendants |
`name` [Required] *string* | The table name |
`schema` *string* | The schema name |
`columns` *[]string* | The columns to publish |
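A sketch of a `Publication` resource using the target types above; names such as `cluster-publisher` and the table definition are hypothetical:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Publication
metadata:
  name: publication-example
spec:
  cluster:
    name: cluster-publisher
  name: pub_app
  dbname: app
  target:
    objects:
      - table:
          schema: public
          name: orders
          columns: ["id", "created_at"]
```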
## RecoveryTarget

RecoveryTarget allows to configure the moment where the recovery process
will stop. All the target options except TargetTLI are mutually exclusive.

Field | Description |
---|---|
`backupID` *string* | The ID of the backup from which to start the recovery process. If empty (default) the operator will automatically detect the backup based on targetTime or targetLSN if specified. Otherwise use the latest available backup in chronological order. |
`targetTLI` *string* | The target timeline ("latest" or a positive integer) |
`targetXID` *string* | The target transaction ID |
`targetName` *string* | The target name (to be previously created with `pg_create_restore_point()`) |
`targetLSN` *string* | The target LSN (Log Sequence Number) |
`targetTime` *string* | The target time as a timestamp in the RFC3339 standard |
`targetImmediate` *bool* | End recovery as soon as a consistent state is reached |
`exclusive` *bool* | Set the target to be exclusive. If omitted, defaults to false, so that in Postgres, `recovery_target_inclusive` will be true |
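A hypothetical recovery bootstrap showing where `RecoveryTarget` fits; the external cluster name and the timestamp are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-restore
spec:
  instances: 3
  bootstrap:
    recovery:
      source: origin              # hypothetical entry in externalClusters
      recoveryTarget:
        targetTime: "2024-05-01T10:00:00Z"
        targetTLI: "latest"
  externalClusters:
    - name: origin
      # object store or streaming connection details go here
```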
## ReplicaClusterConfiguration

ReplicaClusterConfiguration encapsulates the configuration of a replica
cluster

Field | Description |
---|---|
`self` *string* | Self defines the name of this cluster. It is used to determine if this is a primary or a replica cluster, comparing it with `primary` |
`primary` *string* | Primary defines which Cluster is defined to be the primary in the distributed PostgreSQL cluster, based on the topology specified in externalClusters |
`source` [Required] *string* | The name of the external cluster which is the replication origin |
`enabled` *bool* | If replica mode is enabled, this cluster will be a replica of an existing cluster. Replica cluster can be created from a recovery object store or via streaming through pg_basebackup. Refer to the Replica clusters page of the documentation for more information. |
`promotionToken` *string* | A demotion token generated by an external cluster used to check if the promotion requirements are met. |
`minApplyDelay` *meta/v1.Duration* | When replica mode is enabled, this parameter allows you to replay transactions only when the system time is at least the configured time past the commit time. This provides an opportunity to correct data loss errors. Note that when this parameter is set, a promotion token cannot be used. |
## ReplicationSlotsConfiguration

ReplicationSlotsConfiguration encapsulates the configuration
of replication slots

Field | Description |
---|---|
`highAvailability` *ReplicationSlotsHAConfiguration* | Replication slots for high availability configuration |
`updateInterval` *int* | Standby will update the status of the local replication slots every `updateInterval` seconds (default 30). |
`synchronizeReplicas` *SynchronizeReplicasConfiguration* | Configures the synchronization of the user defined physical replication slots |

## ReplicationSlotsHAConfiguration

ReplicationSlotsHAConfiguration encapsulates the configuration
of the replication slots that are automatically managed by
the operator to control the streaming replication connections
with the standby instances for high availability (HA) purposes.
Replication slots are a PostgreSQL feature that makes sure
that PostgreSQL automatically keeps WAL files in the primary
when a streaming client (in this specific case a replica that
is part of the HA cluster) gets disconnected.

Field | Description |
---|---|
`enabled` *bool* | If enabled (default), the operator will automatically manage replication slots on the primary instance and use them in streaming replication connections with all the standby instances that are part of the HA cluster. If disabled, the operator will not take advantage of replication slots in streaming connections with the replicas. This feature also controls replication slots in replica cluster, from the designated primary to its cascading replicas. |
`slotPrefix` *string* | Prefix for replication slots managed by the operator for HA. It may only contain lower case letters, numbers, and the underscore character. This can only be set at creation time. By default set to `_cnp_`. |
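An illustrative `spec.replicationSlots` stanza combining the two types above; the exclusion pattern is an arbitrary example:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  replicationSlots:
    highAvailability:
      enabled: true
      slotPrefix: _cnp_
    updateInterval: 30
    synchronizeReplicas:
      enabled: true
      excludePatterns:
        - "^temp_"       # do not synchronize slots whose name starts with temp_
```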
## RoleConfiguration

RoleConfiguration is the representation, in Kubernetes, of a PostgreSQL role
with the additional field Ensure specifying whether to ensure the presence or
absence of the role in the database

The defaults of the CREATE ROLE command are applied
Reference: https://www.postgresql.org/docs/current/sql-createrole.html

Field | Description |
---|---|
`name` [Required] *string* | Name of the role |
`comment` *string* | Description of the role |
`ensure` *EnsureOption* | Ensure the role is `present` or `absent` - defaults to "present" |
`passwordSecret` *github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference* | Secret containing the password of the role (if present). If null, the password will be ignored unless DisablePassword is set |
`connectionLimit` *int64* | If the role can log in, this specifies how many concurrent connections the role can make. `-1` (the default) means no limit. |
`validUntil` *meta/v1.Time* | Date and time after which the role's password is no longer valid. When omitted, the password will never expire (default). |
`inRoles` *[]string* | List of one or more existing roles to which this role will be immediately added as a new member. Default empty. |
`inherit` *bool* | Whether a role "inherits" the privileges of roles it is a member of. Default is `true`. |
`disablePassword` *bool* | DisablePassword indicates that a role's password should be set to NULL in Postgres |
`superuser` *bool* | Whether the role is a `superuser` who can override all access restrictions within the database - superuser status is dangerous and should be used only when really needed. Default is `false`. |
`createdb` *bool* | When set to `true`, the role being defined will be allowed to create new databases. Specifying `false` (default) will deny a role the ability to create databases. |
`createrole` *bool* | Whether the role will be permitted to create, alter, drop, comment on, change the security label for, and grant or revoke membership in other roles. Default is `false`. |
`login` *bool* | Whether the role is allowed to log in. A role having the `login` attribute can be thought of as a user. Roles without this attribute are useful for managing database privileges, but are not users in the usual sense of the word. Default is `false`. |
`replication` *bool* | Whether a role is a replication role. A role must have this attribute (or be a superuser) in order to be able to connect to the server in replication mode (physical or logical replication) and in order to be able to create or drop replication slots. A role having the `replication` attribute is a very highly privileged role, and should only be used on roles actually used for replication. Default is `false`. |
`bypassrls` *bool* | Whether a role bypasses every row-level security (RLS) policy. Default is `false`. |
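A sketch of a managed role using a subset of these fields; the role name and the referenced password Secret are hypothetical:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  managed:
    roles:
      - name: dante
        ensure: present
        comment: Application owner
        login: true
        connectionLimit: 10
        inRoles:
          - pg_monitor
        passwordSecret:
          name: dante-password   # hypothetical Secret holding the role credentials
```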
## SQLRefs

SQLRefs holds references to ConfigMaps or Secrets
containing SQL files. The references are processed in a specific order:
first, all Secrets are processed, followed by all ConfigMaps.
Within each group, the processing order follows the sequence specified
in their respective arrays.

Field | Description |
---|---|
`secretRefs` *[]github.com/cloudnative-pg/machinery/pkg/api.SecretKeySelector* | SecretRefs holds a list of references to Secrets |
`configMapRefs` *[]github.com/cloudnative-pg/machinery/pkg/api.ConfigMapKeySelector* | ConfigMapRefs holds a list of references to ConfigMaps |
## ScheduledBackupSpec

ScheduledBackupSpec defines the desired state of ScheduledBackup

Field | Description |
---|---|
`suspend` *bool* | If this backup is suspended or not |
`immediate` *bool* | If the first backup has to be immediately started after creation or not |
`schedule` [Required] *string* | The schedule does not follow the same format used in Kubernetes CronJobs as it includes an additional seconds specifier, see https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format |
`cluster` [Required] *github.com/cloudnative-pg/machinery/pkg/api.LocalObjectReference* | The cluster to backup |
`backupOwnerReference` *string* | Indicates which ownerReference should be put inside the created backup resources.<br/>- none: no owner reference for created backup objects (same behavior as before the field was introduced)<br/>- self: sets the Scheduled backup object as owner of the backup<br/>- cluster: set the cluster as owner of the backup |
`target` *BackupTarget* | The policy to decide which instance should perform this backup. If empty, it defaults to `cluster.spec.backup.target`. Available options are empty string, `primary` and `prefer-standby`. `primary` to have backups run always on primary instances, `prefer-standby` to have backups run preferably on the most updated standby, if available. |
`method` *BackupMethod* | The backup method to be used, possible options are `barmanObjectStore`, `volumeSnapshot` or `plugin`. Defaults to: `barmanObjectStore`. |
`pluginConfiguration` *BackupPluginConfiguration* | Configuration parameters passed to the plugin managing this backup |
`online` *bool* | Whether the default type of backup with volume snapshots is online/hot (`true`, default) or offline/cold (`false`). Overrides the default setting specified in the cluster field '.spec.backup.volumeSnapshot.online' |
`onlineConfiguration` *OnlineConfiguration* | Configuration parameters to control the online/hot backup with volume snapshots. Overrides the default settings specified in the cluster '.backup.volumeSnapshot.onlineConfiguration' stanza |
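A minimal `ScheduledBackup` sketch; the six-field cron expression (note the leading seconds field) and the resource names are examples:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: ScheduledBackup
metadata:
  name: backup-example
spec:
  schedule: "0 0 0 * * *"      # every day at midnight (seconds included)
  immediate: true
  backupOwnerReference: self
  cluster:
    name: cluster-example      # hypothetical target Cluster
  method: barmanObjectStore
```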
## ScheduledBackupStatus

ScheduledBackupStatus defines the observed state of ScheduledBackup

Field | Description |
---|---|
`lastCheckTime` *meta/v1.Time* | The latest time the schedule was checked |
`lastScheduleTime` *meta/v1.Time* | Information when was the last time that backup was successfully scheduled. |
`nextScheduleTime` *meta/v1.Time* | Next time we will run a backup |

## SecretVersion

SecretVersion contains a secret name and its ResourceVersion

Field | Description |
---|---|
`name` *string* | The name of the secret |
`version` *string* | The ResourceVersion of the secret |
## SecretsResourceVersion

SecretsResourceVersion is the resource versions of the secrets
managed by the operator

Field | Description |
---|---|
`superuserSecretVersion` *string* | The resource version of the "postgres" user secret |
`replicationSecretVersion` *string* | The resource version of the "streaming_replica" user secret |
`applicationSecretVersion` *string* | The resource version of the "app" user secret |
`managedRoleSecretVersion` *map[string]string* | The resource versions of the managed roles secrets |
`caSecretVersion` *string* | Unused. Retained for compatibility with old versions. |
`clientCaSecretVersion` *string* | The resource version of the PostgreSQL client-side CA secret version |
`serverCaSecretVersion` *string* | The resource version of the PostgreSQL server-side CA secret version |
`serverSecretVersion` *string* | The resource version of the PostgreSQL server-side secret version |
`barmanEndpointCA` *string* | The resource version of the Barman Endpoint CA if provided |
`externalClusterSecretVersion` *map[string]string* | The resource versions of the external cluster secrets |
`metrics` *map[string]string* | A map with the versions of all the secrets used to pass metrics. Map keys are the secret names, map values are the versions |
## ServiceAccountTemplate

ServiceAccountTemplate contains the template needed to generate the service accounts

Field | Description |
---|---|
`metadata` [Required] *Metadata* | Metadata are the metadata to be used for the generated service account |

## ServiceSelectorType

ServiceSelectorType describes a valid value for generating the service selectors.
It indicates which type of service the selector applies to, such as read-write, read, or read-only

## ServiceTemplateSpec

**Appears in:**

- [ManagedService](#postgresql-k8s-enterprisedb-io-v1-ManagedService)

- [PoolerSpec](#postgresql-k8s-enterprisedb-io-v1-PoolerSpec)

ServiceTemplateSpec is a structure allowing the user to set
a template for Service generation.

Field | Description |
---|---|
`metadata` *Metadata* | Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata |
`spec` *core/v1.ServiceSpec* | Specification of the desired behavior of the service. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status |
## ServiceUpdateStrategy

ServiceUpdateStrategy describes how the changes to the managed service should be handled

## SnapshotOwnerReference

(Alias of `string`)

**Appears in:**

- [VolumeSnapshotConfiguration](#postgresql-k8s-enterprisedb-io-v1-VolumeSnapshotConfiguration)

SnapshotOwnerReference defines the reference type for the owner of the snapshot.
This specifies which owner the processed resources should relate to.

## SnapshotType

(Alias of `string`)

**Appears in:**

- [Import](#postgresql-k8s-enterprisedb-io-v1-Import)

SnapshotType is a type of allowed import

## StorageConfiguration

**Appears in:**

- [ClusterSpec](#postgresql-k8s-enterprisedb-io-v1-ClusterSpec)

- [TablespaceConfiguration](#postgresql-k8s-enterprisedb-io-v1-TablespaceConfiguration)

StorageConfiguration is the configuration used to create and reconcile PVCs,
usable for WAL volumes, PGDATA volumes, or tablespaces

Field | Description |
---|---|
`storageClass` *string* | StorageClass to use for PVCs. Applied after evaluating the PVC template, if available. If not specified, the generated PVCs will use the default storage class |
`size` *string* | Size of the storage. Required if not already specified in the PVC template. Changes to this field are automatically reapplied to the created PVCs. Size cannot be decreased. |
`resizeInUseVolumes` *bool* | Resize existent PVCs, defaults to true |
`pvcTemplate` *core/v1.PersistentVolumeClaimSpec* | Template to be used to generate the Persistent Volume Claim |
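An illustrative storage stanza for a `Cluster`; the storage class name and sizes are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    storageClass: standard     # hypothetical storage class
    size: 10Gi
    resizeInUseVolumes: true
  walStorage:
    size: 5Gi
```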
## SubscriptionReclaimPolicy

SubscriptionReclaimPolicy describes a policy for end-of-life maintenance of Subscriptions.

## SubscriptionSpec

**Appears in:**

- [Subscription](#postgresql-k8s-enterprisedb-io-v1-Subscription)

SubscriptionSpec defines the desired state of Subscription

Field | Description |
---|---|
`cluster` [Required] *core/v1.LocalObjectReference* | The name of the PostgreSQL cluster that identifies the "subscriber" |
`name` [Required] *string* | The name of the subscription inside PostgreSQL |
`dbname` [Required] *string* | The name of the database where the subscription will be installed in the "subscriber" cluster |
`parameters` *map[string]string* | Subscription parameters part of the `WITH` clause as expected by PostgreSQL `CREATE SUBSCRIPTION` command |
`publicationName` [Required] *string* | The name of the publication inside the PostgreSQL database in the "publisher" |
`publicationDBName` *string* | The name of the database containing the publication on the external cluster. Defaults to the one in the external cluster definition. |
`externalClusterName` [Required] *string* | The name of the external cluster with the publication ("publisher") |
`subscriptionReclaimPolicy` *SubscriptionReclaimPolicy* | The policy for end-of-life maintenance of this subscription |
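A sketch of a `Subscription` pointing at the publication from the earlier example; all names are illustrative, and the external cluster must be listed in the subscriber's `externalClusters`:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Subscription
metadata:
  name: subscription-example
spec:
  cluster:
    name: cluster-subscriber
  name: sub_app
  dbname: app
  publicationName: pub_app
  externalClusterName: cluster-publisher   # hypothetical externalClusters entry
```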
## SubscriptionStatus

SubscriptionStatus defines the observed state of Subscription

Field | Description |
---|---|
`observedGeneration` *int64* | A sequence number representing the latest desired state that was synchronized |
`applied` *bool* | Applied is true if the subscription was reconciled correctly |
`message` *string* | Message is the reconciliation output message |

## SwitchReplicaClusterStatus

SwitchReplicaClusterStatus contains all the statuses regarding the switch of a cluster to a replica cluster

Field | Description |
---|---|
`inProgress` *bool* | InProgress indicates if there is an ongoing procedure of switching a cluster to a replica cluster. |

## SyncReplicaElectionConstraints

SyncReplicaElectionConstraints contains the constraints for sync replicas election.

For anti-affinity parameters two instances are considered in the same location
if all the labels values match.

In future synchronous replica election restriction by name will be supported.

Field | Description |
---|---|
`nodeLabelsAntiAffinity` *[]string* | A list of node labels values to extract and compare to evaluate if the pods reside in the same topology or not |
`enabled` [Required] *bool* | This flag enables the constraints for sync replicas |
## SynchronizeReplicasConfiguration

SynchronizeReplicasConfiguration contains the configuration for the synchronization of user defined
physical replication slots

Field | Description |
---|---|
`enabled` [Required] *bool* | When set to true, every replication slot that is on the primary is synchronized on each standby |
`excludePatterns` *[]string* | List of regular expression patterns to match the names of replication slots to be excluded (by default empty) |

## SynchronousReplicaConfiguration

SynchronousReplicaConfiguration contains the configuration of the
PostgreSQL synchronous replication feature.
Important: at this moment, also `.spec.minSyncReplicas` and `.spec.maxSyncReplicas`
need to be considered.

Field | Description |
---|---|
`method` [Required] *SynchronousReplicaConfigurationMethod* | Method to select synchronous replication standbys from the listed servers, accepting 'any' (quorum-based synchronous replication) or 'first' (priority-based synchronous replication) as values. |
`number` [Required] *int* | Specifies the number of synchronous standby servers that transactions must wait for responses from. |
`maxStandbyNamesFromCluster` *int* | Specifies the maximum number of local cluster pods that can be automatically included in the `synchronous_standby_names` option in PostgreSQL. |
`standbyNamesPre` *[]string* | A user-defined list of application names to be added to `synchronous_standby_names` before local cluster pods (the order is only useful for priority-based synchronous replication). |
`standbyNamesPost` *[]string* | A user-defined list of application names to be added to `synchronous_standby_names` after local cluster pods (the order is only useful for priority-based synchronous replication). |
`dataDurability` *DataDurabilityLevel* | If set to "required", data durability is strictly enforced. Write operations with synchronous commit settings (`on`, `remote_write`, or `remote_apply`) will block if there are insufficient healthy replicas, ensuring data persistence at all costs. If set to "preferred", data durability is maintained when healthy replicas are available, but the required number of instances will adjust dynamically if replicas become unavailable, favoring operational continuity over strict durability. This setting is only applicable if both standbyNamesPre and standbyNamesPost are unset (empty). |
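An illustrative `spec.postgresql.synchronous` stanza; the values shown are examples, not recommendations:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  postgresql:
    synchronous:
      method: any          # quorum-based synchronous replication
      number: 1
      dataDurability: required
```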
## SynchronousReplicaConfigurationMethod

SynchronousReplicaConfigurationMethod configures whether to use
quorum based replication or a priority list

## TDEConfiguration

**Appears in:**

- [EPASConfiguration](#postgresql-k8s-enterprisedb-io-v1-EPASConfiguration)

TDEConfiguration contains the Transparent Data Encryption configuration

Field | Description |
---|---|
`enabled` *bool* | True if we want to have TDE enabled |
`secretKeyRef` *core/v1.SecretKeySelector* | Reference to the secret that contains the encryption key |
`wrapCommand` *core/v1.SecretKeySelector* | WrapCommand is the encrypt command provided by the user |
`unwrapCommand` *core/v1.SecretKeySelector* | UnwrapCommand is the decryption command provided by the user |
`passphraseCommand` *core/v1.SecretKeySelector* | PassphraseCommand is the command executed to get the passphrase that will be passed to the OpenSSL command to encrypt and decrypt |
## TablespaceConfiguration

TablespaceConfiguration is the configuration of a tablespace, and includes
the storage specification for the tablespace

Field | Description |
---|---|
`name` [Required] *string* | The name of the tablespace |
`storage` [Required] *StorageConfiguration* | The storage configuration for the tablespace |
`owner` *DatabaseRoleRef* | Owner is the PostgreSQL user owning the tablespace |
`temporary` *bool* | When set to true, the tablespace will be added as a `temp_tablespaces` entry in PostgreSQL, and will be available to automatically store temp database objects |
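A sketch of a `spec.tablespaces` list; tablespace names, owner, and sizes are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  tablespaces:
    - name: analytics
      owner:
        name: app
      storage:
        size: 5Gi
    - name: scratch
      temporary: true        # added to temp_tablespaces
      storage:
        size: 1Gi
```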
## TablespaceState

TablespaceState represents the state of a tablespace in a cluster

Field | Description |
---|---|
`name` [Required] *string* | Name is the name of the tablespace |
`owner` *string* | Owner is the PostgreSQL user owning the tablespace |
`state` [Required] *TablespaceStatus* | State is the latest reconciliation state |
`error` *string* | Error is the reconciliation error, if any |

## TablespaceStatus

TablespaceStatus represents the status of a tablespace in the cluster

## Topology

**Appears in:**

- [ClusterStatus](#postgresql-k8s-enterprisedb-io-v1-ClusterStatus)

Topology contains the cluster topology

Field | Description |
---|---|
`instances` *map[PodName]PodTopologyLabels* | Instances contains the pod topology of the instances |
`nodesUsed` *int32* | NodesUsed represents the count of distinct nodes accommodating the instances. A value of '1' suggests that all instances are hosted on a single node, implying the absence of High Availability (HA). Ideally, this value should be the same as the number of instances in the Postgres HA cluster, implying shared nothing architecture on the compute side. |
`successfullyExtracted` *bool* | SuccessfullyExtracted indicates if the topology data was extracted. It is useful to enact fallback behaviors in synchronous replica election in case of failures |
## VolumeSnapshotConfiguration

VolumeSnapshotConfiguration represents the configuration for the execution of snapshot backups.

Field | Description |
---|---|
`labels` *map[string]string* | Labels are key-value pairs that will be added to .metadata.labels snapshot resources. |
`annotations` *map[string]string* | Annotations key-value pairs that will be added to .metadata.annotations snapshot resources. |
`className` *string* | ClassName specifies the Snapshot Class to be used for PG_DATA PersistentVolumeClaim. It is the default class for the other types if no specific class is present |
`walClassName` *string* | WalClassName specifies the Snapshot Class to be used for the PG_WAL PersistentVolumeClaim. |
`tablespaceClassName` *map[string]string* | TablespaceClassName specifies the Snapshot Class to be used for the tablespaces. Defaults to the PGDATA Snapshot Class, if set |
`snapshotOwnerReference` *SnapshotOwnerReference* | SnapshotOwnerReference indicates the type of owner reference the snapshot should have |
`online` *bool* | Whether the default type of backup with volume snapshots is online/hot (`true`, default) or offline/cold (`false`) |
`onlineConfiguration` *OnlineConfiguration* | Configuration parameters to control the online/hot backup with volume snapshots |
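An illustrative `spec.backup.volumeSnapshot` stanza; the snapshot class name is hypothetical:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  backup:
    volumeSnapshot:
      className: csi-snapshot-class    # hypothetical VolumeSnapshotClass
      snapshotOwnerReference: cluster
      online: true
      onlineConfiguration:
        immediateCheckpoint: false
        waitForArchive: true
```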