Commit

Merge pull request #5988 from EnterpriseDB/automatic_docs_update/repo…_EnterpriseDB/cloud-native-postgres/ref_refs/tags/v1.24.0
josh-heyer authored Aug 26, 2024
2 parents c0ff531 + 3f426e5 commit 8ef2ef7
Showing 46 changed files with 2,342 additions and 873 deletions.
17 changes: 7 additions & 10 deletions product_docs/docs/postgres_for_kubernetes/1/applications.mdx
Original file line number Diff line number Diff line change
@@ -4,15 +4,10 @@ originalFilePath: 'src/applications.md'
---

Applications are supposed to work with the services created by EDB Postgres for Kubernetes
in the same Kubernetes cluster:
in the same Kubernetes cluster.

- `[cluster name]-rw`
- `[cluster name]-ro`
- `[cluster name]-r`

Those services are entirely managed by the Kubernetes cluster and
implement a form of Virtual IP as described in the
["Service" page of the Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies).
For more information on services and how to manage them, please refer to the
["Service management"](service_management.md) section.
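
As a purely illustrative sketch, an application Deployment could point at the
read-write service of a cluster named `cluster-example` through standard
`libpq` environment variables (the application image and cluster name are
assumptions, not part of the operator's defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: my-webapp:latest # hypothetical application image
          env:
            # Read-write service created by the operator: [cluster name]-rw
            - name: PGHOST
              value: cluster-example-rw
            - name: PGPORT
              value: "5432"
```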

!!! Hint
    It is highly recommended that you use those services in your applications,
@@ -85,5 +80,7 @@ connecting to the PostgreSQL cluster, and correspond to the user *owning* the
database.

The `-superuser` ones are supposed to be used only for administrative purposes,
and correspond to the `postgres` user. Since version 1.21, superuser access
over the network is disabled by default.
and correspond to the `postgres` user.

!!! Important
Superuser access over the network is disabled by default.
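
If superuser access over the network is needed, for example during an
evaluation, it can be re-enabled in the cluster specification. The following is
a minimal sketch, assuming a cluster named `cluster-example`:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  # Re-enable network access for the postgres superuser (disabled by default)
  enableSuperuserAccess: true
  storage:
    size: 1Gi
```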
270 changes: 198 additions & 72 deletions product_docs/docs/postgres_for_kubernetes/1/architecture.mdx

Large diffs are not rendered by default.

8 changes: 0 additions & 8 deletions product_docs/docs/postgres_for_kubernetes/1/backup.mdx
@@ -3,14 +3,6 @@ title: 'Backup'
originalFilePath: 'src/backup.md'
---

!!! Important
With version 1.21, backup and recovery capabilities in EDB Postgres for Kubernetes
have sensibly changed due to the introduction of native support for
[Kubernetes Volume Snapshots](backup_volumesnapshot.md).
Up to that point, backup and recovery were available only for object
stores. Please carefully read this section and the [recovery](recovery.md)
one if you have been a user of EDB Postgres for Kubernetes 1.15 through 1.20.

PostgreSQL natively provides first class backup and recovery capabilities based
on file system level (physical) copy. These have been successfully used for
more than 15 years in mission critical production databases, helping
@@ -151,24 +151,28 @@ spec:
backupRetentionPolicy: "keep"
```

## Extra options for the backup command
## Extra options for the backup and WAL commands

You can append additional options to the `barman-cloud-backup` command by using
You can append additional options to the `barman-cloud-backup` and `barman-cloud-wal-archive` commands by using
the `additionalCommandArgs` property in the
`.spec.backup.barmanObjectStore.data` section.
This property is a list of strings that will be appended to the
`barman-cloud-backup` command.
`.spec.backup.barmanObjectStore.data` and `.spec.backup.barmanObjectStore.wal` sections respectively.
These properties are lists of strings that will be appended to the
`barman-cloud-backup` and `barman-cloud-wal-archive` commands.

For example, you can use the `--read-timeout=60` to customize the connection
reading timeout.
For additional options supported by `barman-cloud-backup` you can refer to the
official barman documentation [here](https://www.pgbarman.org/documentation/).

For additional options supported by the `barman-cloud-backup` and `barman-cloud-wal-archive` commands, refer to the
official Barman documentation [here](https://www.pgbarman.org/documentation/).

If an option provided in `additionalCommandArgs` is already present in the
declared options in the `barmanObjectStore` section, the extra option will be
declared options in its section (`.spec.backup.barmanObjectStore.data` or `.spec.backup.barmanObjectStore.wal`), the extra option will be
ignored.

The following is an example of how to use this property:

For backups:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
@@ -182,3 +186,19 @@ spec:
- "--min-chunk-size=5MB"
- "--read-timeout=60"
```

For WAL files:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
[...]
spec:
backup:
barmanObjectStore:
[...]
wal:
additionalCommandArgs:
- "--max-concurrency=1"
- "--read-timeout=60"
```
10 changes: 1 addition & 9 deletions product_docs/docs/postgres_for_kubernetes/1/backup_recovery.mdx
@@ -3,12 +3,4 @@ title: 'Backup and Recovery'
originalFilePath: 'src/backup_recovery.md'
---

Until EDB Postgres for Kubernetes 1.20, this page used to contain both the backup and
recovery phases of a PostgreSQL cluster. The reason was that EDB Postgres for Kubernetes
supported only backup and recovery object stores.

Version 1.21 introduces support for the Kubernetes `VolumeSnapshot` API,
providing more possibilities for the end user.

As a result, [backup](backup.md) and [recovery](recovery.md) are now in two
separate sections.
[Backup](backup.md) and [recovery](recovery.md) are in two separate sections.
@@ -3,15 +3,6 @@ title: 'Backup on volume snapshots'
originalFilePath: 'src/backup_volumesnapshot.md'
---

!!! Warning
The initial release of volume snapshots (version 1.21.0) only supported
cold backups, which required fencing of the instance. This limitation
has been waived starting with version 1.21.1. Given the minimal impact of
the change on the code, maintainers have decided to backport this feature
immediately instead of waiting for version 1.22.0 to be out, and make online
backups the default behavior on volume snapshots too. If you are planning
to rely instead on cold backups, make sure you follow the instructions below.

!!! Warning
As noted in the [backup document](backup.md), a cold snapshot explicitly
set to target the primary will result in the primary being fenced for
@@ -12,6 +12,12 @@ specific to Kubernetes and PostgreSQL.
: A *node* is a worker machine in Kubernetes, either virtual or physical, where
all services necessary to run pods are managed by the control plane node(s).

[Postgres Node](architecture.md#reserving-nodes-for-postgresql-workloads)
: A *Postgres node* is a Kubernetes worker node dedicated to running PostgreSQL
workloads. This is achieved by applying the `node-role.kubernetes.io` label and
taint, as [proposed by EDB Postgres for Kubernetes](architecture.md#reserving-nodes-for-postgresql-workloads).
It is also referred to as a `postgres` node.
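
As a sketch of how such a node could be prepared (the node name is an
assumption; refer to the architecture section for the recommended procedure):

```shell
# Label the worker node so it can be selected for PostgreSQL workloads
kubectl label node worker-1 node-role.kubernetes.io/postgres=

# Taint it so that only pods tolerating the taint can be scheduled there
kubectl taint node worker-1 node-role.kubernetes.io/postgres=:NoSchedule
```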

[Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
: A *pod* is the smallest computing unit that can be deployed in a Kubernetes
cluster and is composed of one or more containers that share network and
81 changes: 48 additions & 33 deletions product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -279,10 +279,43 @@ spec:
`-d`), this technique is deprecated and will be removed from future versions of
the API.

You can also specify a custom list of queries that will be executed
once, just after the database is created and configured. These queries will
be executed as the *superuser* (`postgres`), connected to the `postgres`
database:
### Executing Queries After Initialization

You can specify a custom list of queries that will be executed once,
immediately after the cluster is created and configured. These queries will be
executed as the *superuser* (`postgres`) against three different databases, in
this specific order:

1. The `postgres` database (`postInit` section)
2. The `template1` database (`postInitTemplate` section)
3. The application database (`postInitApplication` section)

For each of these sections, EDB Postgres for Kubernetes provides two ways to specify custom
queries, executed in the following order:

- As a list of SQL queries in the cluster's definition (`postInitSQL`,
`postInitTemplateSQL`, and `postInitApplicationSQL` stanzas)
- As a list of Secrets and/or ConfigMaps, each containing a SQL script to be
executed (`postInitSQLRefs`, `postInitTemplateSQLRefs`, and
`postInitApplicationSQLRefs` stanzas). Secrets are processed before ConfigMaps.

Objects in each list will be processed sequentially.

!!! Warning
Use the `postInit`, `postInitTemplate`, and `postInitApplication` options
with extreme care, as queries are run as a superuser and can disrupt the entire
cluster. An error in any of those queries will interrupt the bootstrap phase,
leaving the cluster incomplete and requiring manual intervention.

!!! Important
Ensure the existence of entries inside the ConfigMaps or Secrets specified
in `postInitSQLRefs`, `postInitTemplateSQLRefs`, and
`postInitApplicationSQLRefs`, otherwise the bootstrap will fail. Errors in any
of those SQL files will prevent the bootstrap phase from completing
successfully.

The following example runs a single SQL query as part of the `postInitSQL`
stanza:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -305,18 +338,9 @@ spec:
size: 1Gi
```

!!! Warning
Please use the `postInitSQL`, `postInitApplicationSQL` and
`postInitTemplateSQL` options with extreme care, as queries are run as a
superuser and can disrupt the entire cluster. An error in any of those queries
interrupts the bootstrap phase, leaving the cluster incomplete.

### Executing queries after initialization

Moreover, you can specify a list of Secrets and/or ConfigMaps which contains
SQL script that will be executed after the database is created and configured.
These SQL script will be executed using the **superuser** role (`postgres`),
connected to the database specified in the `initdb` section:
The example below relies on `postInitApplicationSQLRefs` to specify a Secret
and a ConfigMap containing the queries to run on the application database
after the initialization:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
@@ -342,18 +366,9 @@ spec:
```

!!! Note
The SQL scripts referenced in `secretRefs` will be executed before the ones
referenced in `configMapRefs`. For both sections the SQL scripts will be
executed respecting the order in the list. Inside SQL scripts, each SQL
statement is executed in a single exec on the server according to the
[PostgreSQL semantics](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT),
comments can be included, but internal command like `psql` cannot.

!!! Warning
Please make sure the existence of the entries inside the ConfigMaps or
Secrets specified in `postInitApplicationSQLRefs`, otherwise the bootstrap will
fail. Errors in any of those SQL files will prevent the bootstrap phase to
complete successfully.
Within SQL scripts, each SQL statement is executed in a single exec on the
server according to the [PostgreSQL semantics](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT).
Comments can be included, but internal commands like `psql` cannot.

### Compatibility Features

@@ -530,7 +545,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```

The following manifest creates a new PostgreSQL 16.3 cluster,
The following manifest creates a new PostgreSQL 16.4 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -545,7 +560,7 @@ metadata:
name: target-db
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:16.3
imageName: quay.io/enterprisedb/postgresql:16.4
bootstrap:
pg_basebackup:
@@ -565,7 +580,7 @@ spec:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 16.3).
the same PostgreSQL version (in our case 16.4).

#### TLS certificate authentication

@@ -580,7 +595,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 16.3 cluster called `cluster-clone-tls`,
The manifest defines a new PostgreSQL 16.4 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -595,7 +610,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
imageName: quay.io/enterprisedb/postgresql:16.3
imageName: quay.io/enterprisedb/postgresql:16.4
bootstrap:
pg_basebackup:
14 changes: 13 additions & 1 deletion product_docs/docs/postgres_for_kubernetes/1/certificates.mdx
@@ -29,6 +29,12 @@ primarily operates in two modes:
You can also choose a hybrid approach, where only part of the certificates is
generated outside CNP.

!!! Note
The operator and instances verify server certificates against the CA only,
disregarding the DNS name. This approach is due to the typical absence of DNS
names in user-provided certificates for the `<cluster>-rw` service used for
communication within the cluster.
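
For example, certificates generated outside CNP can be wired in through the
`certificates` stanza. The following is a sketch only; the secret names are
assumptions, and the secrets are assumed to exist already:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  certificates:
    # The server certificate is verified against this CA only,
    # regardless of the DNS names it carries
    serverCASecret: my-server-ca   # hypothetical secret name
    serverTLSSecret: my-server-tls # hypothetical secret name
  storage:
    size: 1Gi
```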

## Operator-managed mode

By default, the operator generates a single CA and uses it for both client and
@@ -66,7 +72,7 @@ is passed as `ssl_ca_file` to all the instances so it can verify client
certificates it signed. The private key is stored in the same secret and used
to sign client certificates generated by the `kubectl cnp` plugin.

#### Client \`streaming_replica\`\` certificate
#### Client `streaming_replica` certificate

The operator uses the generated self-signed CA to sign a client certificate for
the user `streaming_replica`, storing it in a secret of type
@@ -92,6 +98,12 @@ the following parameters:
The operator still creates and manages the two secrets related to client
certificates.

!!! Note
The operator and instances verify server certificates against the CA only,
disregarding the DNS name. This approach is due to the typical absence of DNS
names in user-provided certificates for the `<cluster>-rw` service used for
communication within the cluster.

!!! Note
If you want ConfigMaps and secrets to be reloaded by instances, you can add
a label with the key `k8s.enterprisedb.io/reload` to it. Otherwise you must reload the
@@ -61,7 +61,7 @@ $ kubectl cnp status <cluster-name>
Cluster Summary
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:16.3
PostgreSQL Image: quay.io/enterprisedb/postgresql:16.4
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3
@@ -153,14 +153,8 @@ never expires, mirroring the behavior of PostgreSQL. Specifically:
allowing `VALID UNTIL NULL` in the `ALTER ROLE` SQL statement)

!!! Warning
The declarative role management feature has changed behavior since its
initial version (1.20.0). In 1.20.0, a role without a `passwordSecret` would
lead to setting the password to NULL in PostgreSQL.
In practice there is little difference from 1.20.0.
New roles created without `passwordSecret` will have a `NULL` password.
The relevant change is when using the managed roles to manage roles that
had been previously created. In 1.20.0, doing this might inadvertently
result in setting existing passwords to `NULL`.
New roles created without `passwordSecret` will have a `NULL` password
inside PostgreSQL.
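
A minimal sketch of a declaratively managed role with a password secret (the
role and secret names are assumptions):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  managed:
    roles:
      - name: app_reader # hypothetical role name
        ensure: present
        login: true
        # Without passwordSecret, the role would have a NULL password
        passwordSecret:
          name: app-reader-pass # hypothetical basic-auth secret
  storage:
    size: 1Gi
```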

### Password hashed

33 changes: 19 additions & 14 deletions product_docs/docs/postgres_for_kubernetes/1/evaluation.mdx
@@ -9,31 +9,36 @@ The process is different between Community PostgreSQL and EDB Postgres Advanced

## Evaluating using PostgreSQL

By default, EDB Postgres for Kubernetes installs the latest available
version of Community PostgreSQL.
By default, EDB Postgres for Kubernetes installs the latest available version of Community PostgreSQL.

No license key is required. The operator automatically generates an implicit trial license for the cluster that lasts for
30 days. This trial license is ideal for evaluation, proof of concept, integration with CI/CD pipelines, and so on.
No license key is required. The operator automatically generates an implicit trial license for the cluster that lasts for 30 days. This trial license is ideal for evaluation, proof of concept, integration with CI/CD pipelines, and so on.

PostgreSQL container images are available at [quay.io/enterprisedb/postgresql](https://quay.io/repository/enterprisedb/postgresql).
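
For example, a minimal `Cluster` manifest for an evaluation could look like the
following (the name and storage size are arbitrary choices):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-eval
spec:
  instances: 3
  # Omitting imageName selects the default Community PostgreSQL image
  storage:
    size: 1Gi
```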

## Evaluating using EDB Postgres Advanced Server

You can use EDB Postgres for Kubernetes with EDB Postgres Advanced Server. You will need a trial key to use EDB Postgres Advanced Server.
There are two ways to obtain the EDB Postgres Advanced Server image for evaluation purposes. The easiest is through the EDB Image Repository, where all you'll need is an EDB account to auto-generate a repository access token. The other way is to download the image through [quay.io](http://quay.io) and request a trial license key from EDB support.

!!! Note Obtaining your trial key
You can request a key from the **[EDB Postgres for Kubernetes Trial License Request](https://cloud-native.enterprisedb.com/trial/)** page. You will also need to be signed into your EDB Account. If you do not have an EDB Account, you can [register for one](https://www.enterprisedb.com/accounts/register) on the EDB site.
### EDB Image Repository

Once you have received the license key, you can use EDB Postgres Advanced Server
by setting in the `spec` section of the `Cluster` deployment configuration file:
You can use EDB Postgres for Kubernetes with EDB Postgres Advanced Server. You can access the image by obtaining a repository access token for EDB's image repositories.

- `imageName` to point to the `quay.io/enterprisedb/edb-postgres-advanced` repository
- `licenseKey` to your license key (in the form of a string)
### Obtaining your access token

You can request a repository access token from the [EDB Repositories Download](https://www.enterprisedb.com/repos-downloads) page. You will also need to be signed into your EDB account. If you don't have an EDB Account, you can [register for one](https://www.enterprisedb.com/accounts/register) on the EDB site.

### Quay Image Repository

If you want to use the Quay image repository, you'll need a trial license key to access and use the images. To request a trial license key for EDB Postgres for Kubernetes, contact your sales representative, or reach the EDB Technical Support Team by email at [[email protected]](mailto:[email protected]) or by filing a ticket on the support portal at <https://techsupport.enterprisedb.com>. Please allow 24 hours for your license to be generated and delivered to you; if you need any additional support, do not hesitate to contact us.

EDB Postgres Advanced container images are available at
[quay.io/enterprisedb/edb-postgres-advanced](https://quay.io/repository/enterprisedb/edb-postgres-advanced).
Once you have your license key, EDB Postgres Advanced container images will be available at <https://quay.io/repository/enterprisedb/edb-postgres-advanced>.

You can then use EDB Postgres Advanced Server by setting the following in the `spec` section of the `Cluster` deployment configuration file:

- `imageName` to point to the `quay.io/enterprisedb/edb-postgres-advanced` repository
- `licenseKey` to your license key (in the form of a string)
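
For example (the image tag and license key value below are placeholders, not
values to copy verbatim):

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-epas-eval
spec:
  instances: 3
  imageName: quay.io/enterprisedb/edb-postgres-advanced:16.4
  licenseKey: <your trial license key> # placeholder
  storage:
    size: 1Gi
```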

To see how `imageName` and `licenseKey` is set, refer to the [cluster-full-example](../samples/cluster-example-full.yaml) file from the the [configuration samples](samples.md) section.
To see how `imageName` and `licenseKey` are set, refer to the [cluster-full-example](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/samples/cluster-example-full.yaml) file from the [configuration samples](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/samples/) section.

## Further Information
