Merge pull request #1967 from EnterpriseDB/release/2021-10-26
Release: 2021-10-26
josh-heyer authored Oct 26, 2021
2 parents 7639012 + a1b31b9 commit 1c28217
Showing 68 changed files with 5,310 additions and 1,276 deletions.
3 changes: 3 additions & 0 deletions .github/workflows/sync-and-process-files.yml
@@ -37,6 +37,9 @@ jobs:
        with:
          node-version: '14'

      - name: update npm
        run: npm install -g npm@7

      - name: Process changes
        run: |
          case ${{ github.event.client_payload.repo }} in
603 changes: 301 additions & 302 deletions advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx

Large diffs are not rendered by default.

80 changes: 40 additions & 40 deletions advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
@@ -18,17 +18,17 @@ Cloud Native PostgreSQL supports clusters based on asynchronous and synchronous
streaming replication to manage multiple hot standby replicas within the same
Kubernetes cluster, with the following specifications:

- One primary, with optional multiple hot standby replicas for High Availability
- Available services for applications:
- `-rw`: applications connect to the only primary instance of the cluster
- `-ro`: applications connect only to the hot standby replicas, for read-only workloads
- `-r`: applications connect to any of the instances for read-only workloads
- Shared-nothing architecture recommended for better resilience of the PostgreSQL cluster:
- PostgreSQL instances should reside on different Kubernetes worker nodes
and share only the network
- PostgreSQL instances can reside in different
availability zones in the same region
- All nodes of a PostgreSQL cluster should reside in the same region
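
A minimal `Cluster` manifest reflecting this layout might look like the following sketch; the cluster name, storage size, and topology key are placeholders rather than values taken from this page:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  # One primary plus two hot standby replicas
  instances: 3
  # Shared-nothing: keep instances on different worker nodes,
  # sharing only the network
  affinity:
    enablePodAntiAffinity: true
    topologyKey: kubernetes.io/hostname
  storage:
    size: 1Gi
```

Once such a cluster is up, the operator exposes the `-rw`, `-ro`, and `-r` services described above (for example `cluster-example-rw`).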

!!! Seealso "Replication"
Please refer to the ["Replication" section](replication.md) for more
@@ -73,9 +73,9 @@ Applications can also access any PostgreSQL instance through the
Applications are supposed to work with the services created by Cloud Native PostgreSQL
in the same Kubernetes cluster:

- `[cluster name]-rw`
- `[cluster name]-ro`
- `[cluster name]-r`

Those services are entirely managed by the Kubernetes cluster and
implement a form of Virtual IP as described in the
@@ -88,8 +88,8 @@ implement a form of Virtual IP as described in the

You can use these services in your applications through:

- DNS resolution
- environment variables

For the credentials to connect to PostgreSQL, you can
use the secrets generated by the operator.
@@ -118,22 +118,22 @@ PostgreSQL cluster, you can also use environment variables to connect to the dat
For example, if your PostgreSQL cluster is called `pg-database`,
you can use the following environment variables in your applications:

- `PG_DATABASE_R_SERVICE_HOST`: the IP address of the service
pointing to all the PostgreSQL instances for read-only workloads

- `PG_DATABASE_RO_SERVICE_HOST`: the IP address of the
service pointing to all hot-standby replicas of the cluster

- `PG_DATABASE_RW_SERVICE_HOST`: the IP address of the
service pointing to the *primary* instance of the cluster
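
As a rough sketch (not taken from this page), a Pod in the same namespace can use one of these injected variables to reach the primary; the Pod name, image, and database/user names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: connectivity-check
spec:
  restartPolicy: Never
  containers:
    - name: psql
      image: postgres:14
      command: ["sh", "-c"]
      args:
        # PG_DATABASE_RW_SERVICE_HOST is injected by Kubernetes because the
        # `pg-database-rw` service exists in the same namespace; the DNS name
        # `pg-database-rw` works as an alternative host value.
        # Credentials are omitted here; in practice they come from the
        # operator-generated secrets described in the next section.
        - psql "host=$PG_DATABASE_RW_SERVICE_HOST user=app dbname=app" -c 'SELECT 1'
```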

### Secrets

The PostgreSQL operator will generate two `basic-auth` type secrets for every
PostgreSQL cluster it deploys:

- `[cluster name]-superuser`
- `[cluster name]-app`
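
To illustrate (a sketch, not content from this page), an application Pod for a cluster named `pg-database` could read its credentials straight from the `pg-database-app` secret; the Pod name and image are placeholders, while `username` and `password` are the standard keys of a `basic-auth` secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-registry/my-app:latest
      env:
        # Application credentials generated by the operator
        - name: PGUSER
          valueFrom:
            secretKeyRef:
              name: pg-database-app
              key: username
        - name: PGPASSWORD
          valueFrom:
            secretKeyRef:
              name: pg-database-app
              key: password
        # Reach the primary through the read-write service
        - name: PGHOST
          value: pg-database-rw
```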

The secrets contain the username, password, and a working
[`.pgpass file`](https://www.postgresql.org/docs/current/libpq-pgpass.html)
@@ -162,11 +162,11 @@ only write inside a single Kubernetes cluster, at any time.

However, for business continuity objectives it is fundamental to:

- reduce global **recovery point objectives** (RPO) by storing PostgreSQL backup data
in multiple locations, regions and possibly using different providers
(**Disaster Recovery**)
- reduce global **recovery time objectives** (RTO) by taking advantage of PostgreSQL
replication beyond the primary Kubernetes cluster (**High Availability**)

In order to address the above concerns, Cloud Native PostgreSQL introduces the
concept of a *PostgreSQL Replica Cluster*. Replica clusters are the Cloud
@@ -175,17 +175,17 @@ hybrid, and multi-cloud contexts.

A replica cluster is a separate `Cluster` resource:

1. having either `pg_basebackup` or full `recovery` as the `bootstrap`
option from a defined external source cluster
2. having the `replica.enabled` option set to `true`
3. replicating from a defined external cluster identified by `replica.source`,
normally located outside the Kubernetes cluster
4. replaying WAL information received from the recovery object store
(using PostgreSQL's `restore_command` parameter), or via streaming
replication (using PostgreSQL's `primary_conninfo` parameter), or any of
the two (in case both the `barmanObjectStore` and `connectionParameters`
are defined in the external cluster)
5. accepting only read connections, as supported by PostgreSQL's Hot Standby
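
Putting these points together, a replica `Cluster` resource might look like the following sketch; cluster names, the storage size, and the object store path are placeholders, and authentication details for the streaming connection are omitted:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-replica
spec:
  instances: 3
  storage:
    size: 1Gi
  bootstrap:
    # Either `recovery` (from the object store) or `pg_basebackup` can be used
    recovery:
      source: cluster-origin
  replica:
    enabled: true
    source: cluster-origin
  externalClusters:
    - name: cluster-origin
      # WAL is replayed from the origin's backup object store...
      barmanObjectStore:
        destinationPath: s3://backups/cluster-origin/
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: ACCESS_SECRET_KEY
      # ...and/or streamed directly when connection parameters are provided
      connectionParameters:
        host: cluster-origin-rw.origin-namespace.svc
        user: streaming_replica
```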

!!! Seealso
Please refer to the ["Bootstrap" section](bootstrap.md) for more information
@@ -33,8 +33,8 @@ for more information about designated primary instances).
You can archive the backup files in any service that is supported
by the Barman Cloud infrastructure. That is:

- [AWS S3](https://aws.amazon.com/s3/)
- [Microsoft Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/).

You can also use any compatible implementation of the
supported services.
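
For instance, pointing a cluster's backups at an S3-compatible store is typically a matter of the `Cluster` spec itself; in this sketch the bucket, endpoint, and secret names are placeholders:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://my-backup-bucket/cluster-example/
      # For S3-compatible services (e.g. MinIO), set the endpoint explicitly
      endpointURL: https://s3.example.com
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
```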
@@ -46,12 +46,12 @@ discussed in the following sections.

You will need the following information about your environment:

- `ACCESS_KEY_ID`: the ID of the access key that will be used
to upload files in S3

- `ACCESS_SECRET_KEY`: the secret part of the previous access key

- `ACCESS_SESSION_TOKEN`: the optional session token in case it is required
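
A Kubernetes Secret carrying these values might look like the following sketch (the secret name and the literal values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
type: Opaque
stringData:
  ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE                 # placeholder
  ACCESS_SECRET_KEY: wJalrXUtnFEMI/K7MDENGEXAMPLEKEY  # placeholder
  # ACCESS_SESSION_TOKEN: <token>                     # only when required
```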

The access key used must have permission to upload files in
the bucket. Given that, you must create a k8s secret with the
@@ -249,9 +249,9 @@ proceeding with a backup.
In order to access your storage account, you will need one of the following combinations
of credentials:

- [**Connection String**](https://docs.microsoft.com/en-us/azure/storage/common/storage-configure-connection-string#configure-a-connection-string-for-an-azure-storage-account)
- **Storage account name** and [**Storage account access key**](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)
- **Storage account name** and [**Storage account SAS Token**](https://docs.microsoft.com/en-us/azure/storage/blobs/sas-service-create).
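
As an illustration only, a Secret for the storage-account-name-plus-key combination could look like the sketch below; the secret name and the key names are placeholders, not necessarily those used by the official command:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-creds
type: Opaque
stringData:
  AZURE_STORAGE_ACCOUNT: mystorageaccount            # placeholder
  AZURE_STORAGE_KEY: <storage account access key>    # placeholder
```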

The credentials need to be stored inside a Kubernetes Secret, adding data entries only when
needed. The following command performs that: