From 6d340d16a76627d5c8b593b92c4b4bf0315a12f3 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Tue, 23 Apr 2024 11:07:04 -0400
Subject: [PATCH 01/26] Edits to BigAnimal PR5510
---
.../third_party_integrations/index.mdx | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx
index bdccf9712b6..8ab5f23ccc8 100644
--- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx
@@ -17,13 +17,13 @@ The third-party integrations available in BigAnimal are:
## Metric naming
-When metrics from BigAnimal are exported to third-party monitoring services, they are renamed in accordance with the naming conventions of the target platform.
+When metrics from BigAnimal are exported to third-party monitoring services, they're renamed according to the naming conventions of the target platform.
-The name below provides a mapping between [BigAnimal metric names](/biganimal/release/using_cluster/05_monitoring_and_logging/metrics/)
-and the name that metric will be assigned when exported to a third-party services.
+The following table provides a mapping between [BigAnimal metric names](/biganimal/release/using_cluster/05_monitoring_and_logging/metrics/)
+and the name each metric is assigned when exported to a third-party service.
!!! Note Kubernetes metrics
-In addition to the metrics listed below, which pertain to the Postgres instances, BigAnimal also exports metrics from the underlying Kubernetes infrastructure. These are prefixed with `k8s.`.
+In addition to these metrics, which pertain to the Postgres instances, BigAnimal also exports metrics from the underlying Kubernetes infrastructure. These are prefixed with `k8s.`.
!!!
| BigAnimal metric name | Metric name for third-party integrations |
From e40939bd231d8dd54859c7ee1f0e246ebb9d29da Mon Sep 17 00:00:00 2001
From: William Ivanski
Date: Tue, 23 Apr 2024 16:20:28 -0300
Subject: [PATCH 02/26] Lasso release 4.15.0
Signed-off-by: William Ivanski
---
product_docs/docs/lasso/4/describe.mdx | 49 +++++++++++++++++++++
product_docs/docs/lasso/4/release-notes.mdx | 16 +++++++
2 files changed, 65 insertions(+)
diff --git a/product_docs/docs/lasso/4/describe.mdx b/product_docs/docs/lasso/4/describe.mdx
index ced6d7e9624..3afa32155f6 100644
--- a/product_docs/docs/lasso/4/describe.mdx
+++ b/product_docs/docs/lasso/4/describe.mdx
@@ -257,6 +257,22 @@ Hardware info through `lspci`.
**Security impact:** Low —
No known security impact.
+### HTTP(s) proxies in use for package downloads (`linux_http_proxy_configuration`)
+
+Gathers information about HTTP(s) proxies in use for package
+downloads. Passwords are redacted.
+
+**Report output:**
+
+ * File `/linux/packages-yum-config-manager.data`: YUM configuration
+ * File `/linux/packages-dnf-config-manager.data`: DNF configuration
+ * File `/linux/etc_environment.data`: Contents of /etc/environment
+
+**Depth:** Surface
+
+**Security impact:** Low —
+No known security impact.
+
### Hypervisor (`linux_hypervisor_collector`)
Information about the type of virtualization used, as returned by the
@@ -344,6 +360,26 @@ Information about the system packages installed using `rpm` or `dpkg`.
**Security impact:** Low —
No known security impact.
+### Installed packages origins (`linux_packages_origin_info`)
+
+Information about the origins of installed packages.
+
+**Report output:**
+
+ * File `/linux/packages-apt_conf.data`: `apt` configuration
+ * File `/linux/packages-apt-cache-policy.data`: `apt` configuration
+ * File `/linux/packages-apt-list-installed.data`: Repositories that were used to install packages
+ * File `/linux/packages-yum-repolist.data`: Repositories that are enabled in `yum`
+ * File `/linux/packages-dnf-module-list.data`: Repositories that are enabled in `dnf`
+ * File `/linux/packages-dnf-repolist.data`: Repositories that are enabled in `dnf`
+ * File `/linux/packages-yum-list-installed.data`: Repositories that were used to install packages
+ * File `/linux/packages-dnf-list-installed.data`: Repositories that were used to install packages
+
+**Depth:** Surface
+
+**Security impact:** Low —
+No known security impact.
+
### PostgreSQL disk layout (`linux_postgresql_disk_layout`)
List all files in the PostgreSQL data directory using `find` for
@@ -1958,6 +1994,19 @@ List of tables replicated by pglogical.
**Security impact:** Low —
No known security impact.
+### Database packages (`postgresql_db_pkgs`)
+
+Database packages/functions/procedures with arguments.
+
+**Report output:**
+
+ * File `pkgs.out`
+
+**Depth:** Shallow
+
+**Security impact:** Low —
+No known security impact.
+
### Database functions (`postgresql_db_procs`)
Functions in the database.
diff --git a/product_docs/docs/lasso/4/release-notes.mdx b/product_docs/docs/lasso/4/release-notes.mdx
index 83c31725125..81a68506c72 100644
--- a/product_docs/docs/lasso/4/release-notes.mdx
+++ b/product_docs/docs/lasso/4/release-notes.mdx
@@ -2,6 +2,22 @@
title: Release notes
---
+## Lasso - Version 4.15.0
+
+Released: 23 Apr 2024
+
+Lasso Version 4.15.0 includes the following enhancements and bug fixes:
+
+| Type | Description | Addresses |
+|-----------------|-------------|-----------|
+| Feature | Lasso now gathers information about the package origins: the list of repositories, the repository configuration, and any HTTP(S) proxies in use for package downloads. | DC-31 |
+| Feature | Lasso now gathers information about the EPAS code packages, including functions and procedures inside the packages. | DC-320 |
+| Feature | Packages are now available for Debian 12 ("Bookworm"). | DC-888 |
+| Improvement | Lasso now shows a hint message when connecting to the database with a user that doesn't have access to the custom schema where the `edb_wait_states` extension is installed. | DC-977 |
+| Bug fix | Fixed an issue where Lasso tried to set `lock_timeout` on PostgreSQL versions older than 9.3. | DC-219 |
+| Doc improvement | The Lasso bundle is no longer mentioned in the Lasso documentation and Knowledge Base articles. | DC-885 |
+
+
## Lasso - Version 4.14.0
Released: 05 Mar 2024
From fdc9ea007bd3f0bc77b971397be114f305cceb55 Mon Sep 17 00:00:00 2001
From: gvasquezvargas
Date: Wed, 24 Apr 2024 14:50:09 +0200
Subject: [PATCH 03/26] pgd4k as abbr and other minor fixes
---
.../postgres_distributed_for_kubernetes/1/group_cleanup.mdx | 2 +-
.../docs/postgres_distributed_for_kubernetes/1/index.mdx | 2 +-
.../docs/postgres_distributed_for_kubernetes/1/known_issues.mdx | 2 +-
.../1/supported_versions.mdx | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/group_cleanup.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/group_cleanup.mdx
index a280d91bdb9..e4280110dd3 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/group_cleanup.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/group_cleanup.mdx
@@ -40,7 +40,7 @@ the PGDGroup is being deleted, and the nodes will not be parted from the PGD clu
### Cleanup parted node
Once the PGDGroup is deleted, its metadata will remain in the catalog in `PARTED`
-state in the `bdr.node_summary` table. The PG4k-PGD operator
+state in the `bdr.node_summary` table. The PGD4K operator
defines a CRD named `PGDGroupCleanup` to help clean up the `PARTED` PGDGroup.
In the example below, the `PGDGroupCleanup` executes locally from `region-a`,
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/index.mdx
index 009e13027cf..83a66f83b78 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/index.mdx
@@ -34,7 +34,7 @@ directoryDefaults:
---
-EDB Postgres Distributed for Kubernetes (`pg4k-pgd`) is an
+EDB Postgres Distributed for Kubernetes (`PGD4K`) is an
operator designed to manage EDB Postgres Distributed (PGD) workloads on
Kubernetes, with traffic routed by PGD Proxy.
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx
index f15f968f9a9..29f1a464dfe 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx
@@ -59,4 +59,4 @@ All issues and limitations known for the EDB Postgres Distributed version that y
your EDB Postgres Distributed for Kubernetes instance.
For example, if the EDB Postgres Distributed version you are using is 5.x, your EDB Postgres Distributed for Kubernetes
-instance will be affected by these [5.x known issues](/pgd/latest/known_issues/) and [5.x limitations](/pgd/latest/limitations/).
+instance will be affected by these [5.x known issues](/pgd/5/known_issues/) and [5.x limitations](/pgd/5/limitations/).
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/supported_versions.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/supported_versions.mdx
index 4e18e4c8e42..625853ccc7f 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/supported_versions.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/supported_versions.mdx
@@ -17,6 +17,6 @@ The Postgres (operand) versions are limited to those supported by
!!! Important
Please be aware that this page is informative only.
- The ["Platform Compatibility"](https://www.enterprisedb.com/product-compatibility#cnp) page
+ The ["Platform Compatibility"](https://www.enterprisedb.com/product-compatibility#bdrk8s) page
from the EDB website contains the official list of supported software and
Kubernetes distributions.
From b8a7fc980de5ec71a82002a835558e04b1b7e7ae Mon Sep 17 00:00:00 2001
From: cnp-autobot <85171364+cnp-autobot@users.noreply.github.com>
Date: Thu, 25 Apr 2024 10:58:26 +0000
Subject: [PATCH 04/26] [create-pull-request] automated change
---
.../docs/postgres_for_kubernetes/1/addons.mdx | 16 +
.../postgres_for_kubernetes/1/bootstrap.mdx | 10 +-
.../1/cluster_conf.mdx | 2 +-
.../1/connection_pooling.mdx | 28 +
.../1/container_images.mdx | 35 +-
.../1/declarative_hibernation.mdx | 2 +-
.../1/default-monitoring.yaml | 1 +
.../1/failure_modes.mdx | 6 +-
.../1/image_catalog.mdx | 110 ++++
.../docs/postgres_for_kubernetes/1/index.mdx | 2 -
.../1/installation_upgrade.mdx | 183 +++---
.../1/kubectl-plugin.mdx | 547 ++----------------
.../1/kubernetes_upgrade.mdx | 145 +++--
.../1/labels_annotations.mdx | 8 +-
.../postgres_for_kubernetes/1/monitoring.mdx | 2 +-
.../1/operator_capability_levels.mdx | 7 +-
.../postgres_for_kubernetes/1/pg4k.v1.mdx | 416 ++++++++++++-
.../1/postgresql_conf.mdx | 6 +-
.../1/replica_cluster.mdx | 54 +-
.../postgres_for_kubernetes/1/replication.mdx | 103 +++-
.../1/rolling_update.mdx | 2 +
.../postgres_for_kubernetes/1/samples.mdx | 5 +
.../cluster-example-bis-restore-cr.yaml | 26 +
.../samples/cluster-example-bis-restore.yaml | 43 ++
.../1/samples/cluster-example-bis.yaml | 29 +
.../1/samples/cluster-example-catalog.yaml | 24 +
.../1/samples/cluster-example-full.yaml | 2 +-
.../1/samples/pooler-external.yaml | 21 +
.../postgres_for_kubernetes/1/scheduling.mdx | 2 +-
.../postgres_for_kubernetes/1/security.mdx | 3 +-
.../1/ssl_connections.mdx | 2 +-
.../docs/postgres_for_kubernetes/1/tde.mdx | 12 +-
.../1/troubleshooting.mdx | 4 +-
.../1/wal_archiving.mdx | 2 +-
34 files changed, 1144 insertions(+), 716 deletions(-)
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
diff --git a/product_docs/docs/postgres_for_kubernetes/1/addons.mdx b/product_docs/docs/postgres_for_kubernetes/1/addons.mdx
index ec049e6fdf9..1f95ab6f6c4 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/addons.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/addons.mdx
@@ -76,6 +76,7 @@ to be defined as a YAML object having the following keys:
- `electedResourcesDecorators`
- `excludedResourcesDecorators`
+- `excludedResourcesSelector`
- `backupInstanceDecorators`
- `preBackupHookConfiguration`
- `postBackupHookConfiguration`
@@ -107,6 +108,12 @@ will be placed on every excluded pod and PVC.
Each element of the array must have the same fields as the
`electedResourcesDecorators` section above.
+#### The `excludedResourcesSelector` section
+
+This section selects the Pods and PVCs to which the
+`excludedResourcesDecorators` are applied. It accepts a [label selector rule](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors)
+as its value. When empty, every Pod and every PVC that is not elected is excluded.
+
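+For example, a minimal sketch (the label values are illustrative only) that limits
+the excluded-resource decoration to Pods and PVCs carrying both labels:
+
+```yaml
+excludedResourcesSelector: app=xyz,env=prod
+excludedResourcesDecorators:
+  - key: "app.example.com/excluded"
+    metadataType: "label"
+    value: "true"
+```
+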
#### The `backupInstanceDecorators` section
This section allows you to configure an array of labels and/or annotations that
@@ -188,6 +195,7 @@ data:
- key: "app.example.com/elected"
metadataType: "label"
value: "true"
+ excludedResourcesSelector: app=xyz,env=prod
excludedResourcesDecorators:
- key: "app.example.com/excluded"
metadataType: "label"
@@ -239,6 +247,7 @@ metadata:
- key: "app.example.com/elected"
metadataType: "label"
value: "true"
+ excludedResourcesSelector: app=xyz,env=prod
excludedResourcesDecorators:
- key: "app.example.com/excluded"
metadataType: "label"
@@ -342,6 +351,13 @@ excludedResourcesDecorators:
metadataType: "annotation"
value: "Not necessary for backup"
+# A LabelSelector containing the labels being used to filter Pods
+# and PVCs to decorate with excludedResourcesDecorators.
+# It accepts a label selector rule as value.
+# See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
+# When empty, all the Pods and every PVC that is not elected will be excluded.
+excludedResourcesSelector: app=xyz,env=prod
+
# An array of labels and/or annotations that will be placed
# on the instance pod that's been selected for the backup by
# the operator and which contains the hooks.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
index 9bced338a72..6194e5f30c6 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx
@@ -530,7 +530,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```
-The following manifest creates a new PostgreSQL 16.1 cluster,
+The following manifest creates a new PostgreSQL 16.2 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -545,7 +545,7 @@ metadata:
name: target-db
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:16.1
+ imageName: quay.io/enterprisedb/postgresql:16.2
bootstrap:
pg_basebackup:
@@ -565,7 +565,7 @@ spec:
```
All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 16.1).
+the same PostgreSQL version (in our case 16.2).
#### TLS certificate authentication
@@ -580,7 +580,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.
-The manifest defines a new PostgreSQL 16.1 cluster called `cluster-clone-tls`,
+The manifest defines a new PostgreSQL 16.2 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -595,7 +595,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:16.1
+ imageName: quay.io/enterprisedb/postgresql:16.2
bootstrap:
pg_basebackup:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
index 8b550eb893d..0a515fb9465 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
@@ -50,7 +50,7 @@ EDB Postgres for Kubernetes relies on [ephemeral volumes](https://kubernetes.io/
for part of the internal activities. Ephemeral volumes exist for the sole
duration of a pod's life, without persisting across pod restarts.
-### Volume Claim Template for Temporary Storage
+# Volume Claim Template for Temporary Storage
The operator uses by default an `emptyDir` volume, which can be customized by using the `.spec.ephemeralVolumesSizeLimit field`.
This can be overridden by specifying a volume claim template in the `.spec.ephemeralVolumeSource` field.
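+
+For example, a minimal sketch (the sizes are illustrative, and the `shm` and
+`temporaryData` sub-fields are assumed here) that caps the default ephemeral
+volume:
+
+```yaml
+spec:
+  ephemeralVolumesSizeLimit:
+    shm: 256Mi
+    temporaryData: 2Gi
+```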
diff --git a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
index 57531ecd032..b2ac5abdc19 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/connection_pooling.mdx
@@ -258,6 +258,34 @@ spec:
memory: 500Mi
```
+## Service Template
+
+Sometimes your pooler requires different labels or annotations, or even a
+different service type. You can achieve that by using the `serviceTemplate` field:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+ instances: 3
+ type: rw
+ serviceTemplate:
+ metadata:
+ labels:
+ app: pooler
+ spec:
+ type: LoadBalancer
+ pgbouncer:
+ poolMode: session
+ parameters:
+ max_client_conn: "1000"
+ default_pool_size: "10"
+```
+
## High availability (HA)
Because of Kubernetes' deployments, you can configure your pooler to run on a
diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
index 5fad160a6d6..689f6f2d8e6 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
@@ -43,24 +43,35 @@ for EDB Postgres for Kubernetes, and publishes them on
## Image tag requirements
-While the image name can be anything valid for Docker, the EDB Postgres for Kubernetes
-operator relies on the *image tag* to detect the Postgres major
-version contained in the image.
-The image tag must start with a valid PostgreSQL major version number (e.g.
-14.5 or 15) optionally followed by a dot and the patch level.
-This can be followed by any character combination that is valid and
+The operator needs to accurately detect the PostgreSQL major version of the
+image in use. This detection can occur in two ways:
+
+1. Utilizing the `major` field of the `imageCatalogRef`, if defined.
+2. Auto-detecting the major version from the image tag of the `imageName` if
+ not explicitly specified.
+
+For auto-detection to work, the image tag must adhere to a specific format. It
+must start with a valid PostgreSQL major version number (e.g., 15.6 or
+16), optionally followed by a dot and the patch level.
+
+Following this, the tag can include any character combination valid and
accepted in a Docker tag, preceded by a dot, an underscore, or a minus sign.
Examples of accepted image tags:
-- `11.1`
-- `12.3.2.1-1`
-- `12.4`
-- `13`
-- `14.5-10`
-- `15.0`
+- `12.1`
+- `13.3.2.1-1`
+- `13.4`
+- `14`
+- `15.5-10`
+- `16.0`
!!! Warning
`latest` is not considered a valid tag for the image.
+
+!!! Note
+ Image tag requirements don't apply to images defined in a catalog.
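+
+For example, a minimal sketch (the image and storage size are illustrative) of a
+`Cluster` whose PostgreSQL major version is auto-detected from the tag of
+`imageName`:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  imageName: quay.io/enterprisedb/postgresql:16.2
+  storage:
+    size: 1Gi
+```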
diff --git a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
index ef3d5664ee7..5b56275699d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/declarative_hibernation.mdx
@@ -61,7 +61,7 @@ $ kubectl cnp status
Cluster Summary
Name: cluster-example
Namespace: default
-PostgreSQL Image: quay.io/enterprisedb/postgresql:16.1
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3
diff --git a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
index bc2a4fa4877..309f6fd341a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/default-monitoring.yaml
@@ -202,6 +202,7 @@ data:
description: "Time at which these statistics were last reset"
pg_stat_bgwriter:
+ runonserver: "<17.0.0"
query: |
SELECT checkpoints_timed
, checkpoints_req
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
index a1aab1641cf..24771b9e34e 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
@@ -8,7 +8,7 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime.
!!! Important
In case the failure scenario you are experiencing is not covered by this
- section, please immediately contact EDB for support and assistance.
+ section, please immediately seek [professional support](https://cloudnative-pg.io/support/).
!!! Seealso "Postgres instance manager"
Please refer to the ["Postgres instance manager" section](instance_manager.md)
@@ -175,8 +175,8 @@ In the case of undocumented failure, it might be necessary to intervene
to solve the problem manually.
!!! Important
- In such cases, please do not perform any manual operation without the
- support and assistance of EDB engineering team.
+ In such cases, please do not perform any manual operation without
+ [professional support](https://cloudnative-pg.io/support/).
From version 1.11.0 of the operator, you can use the
`k8s.enterprisedb.io/reconciliationLoop` annotation to temporarily disable the
diff --git a/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx b/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
new file mode 100644
index 00000000000..b14443967df
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx
@@ -0,0 +1,110 @@
+---
+title: 'Image Catalog'
+originalFilePath: 'src/image_catalog.md'
+---
+
+`ImageCatalog` and `ClusterImageCatalog` are essential resources that empower
+you to define images for creating a `Cluster`.
+
+The key distinction lies in their scope: an `ImageCatalog` is namespaced, while
+a `ClusterImageCatalog` is cluster-scoped.
+
+Both share a common structure, comprising a list of images, each equipped with
+a `major` field indicating the major version of the image.
+
+!!! Warning
+ The operator places trust in the user-defined major version and refrains
+ from conducting any PostgreSQL version detection. It is the user's
+ responsibility to ensure alignment between the declared major version in
+ the catalog and the PostgreSQL image.
+
+The `major` field's value must remain unique within a catalog, preventing
+duplication across images. Distinct catalogs, however, may
+expose different images under the same `major` value.
+
+**Example of a Namespaced `ImageCatalog`:**
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+ name: postgresql
+ namespace: default
+spec:
+ images:
+ - major: 15
+ image: quay.io/enterprisedb/postgresql:15.6
+ - major: 16
+ image: quay.io/enterprisedb/postgresql:16.2
+```
+
+**Example of a Cluster-Wide Catalog using `ClusterImageCatalog` Resource:**
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ClusterImageCatalog
+metadata:
+ name: postgresql
+spec:
+ images:
+ - major: 15
+ image: quay.io/enterprisedb/postgresql:15.6
+ - major: 16
+ image: quay.io/enterprisedb/postgresql:16.2
+```
+
+A `Cluster` resource has the flexibility to reference either an `ImageCatalog`
+or a `ClusterImageCatalog` to precisely specify the desired image.
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imageCatalogRef:
+ apiGroup: postgresql.k8s.enterprisedb.io
+ kind: ImageCatalog
+ name: postgresql
+ major: 16
+ storage:
+ size: 1Gi
+```
+
+Clusters that reference a catalog are continuously monitored:
+any change to an image within the catalog triggers an automatic update of
+**all associated clusters** referencing that specific entry.
+
+## EDB Postgres for Kubernetes Catalogs
+
+The EDB Postgres for Kubernetes project maintains `ClusterImageCatalogs` for the images it
+provides. These catalogs are regularly updated with the latest images for each
+major version. By applying the `ClusterImageCatalog.yaml` file from the
+EDB Postgres for Kubernetes project's GitHub repositories, cluster administrators can ensure
+that their clusters are automatically updated to the latest version within the
+specified major release.
+
+### PostgreSQL Container Images
+
+You can install the
+[latest version of the cluster catalog for the PostgreSQL Container Images](https://raw.githubusercontent.com/cloudnative-pg/postgres-containers/main/Debian/ClusterImageCatalog.yaml)
+([cloudnative-pg/postgres-containers](https://github.com/enterprisedb/docker-postgres) repository)
+with:
+
+```shell
+kubectl apply \
+ -f https://raw.githubusercontent.com/cloudnative-pg/postgres-containers/main/Debian/ClusterImageCatalog.yaml
+```
+
+### PostGIS Container Images
+
+You can install the
+[latest version of the cluster catalog for the PostGIS Container Images](https://raw.githubusercontent.com/cloudnative-pg/postgis-containers/main/PostGIS/ClusterImageCatalog.yaml)
+([cloudnative-pg/postgis-containers](https://github.com/cloudnative-pg/postgis-containers) repository)
+with:
+
+```shell
+kubectl apply \
+ -f https://raw.githubusercontent.com/cloudnative-pg/postgis-containers/main/PostGIS/ClusterImageCatalog.yaml
+```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
index 7ddf1e5649b..c9d73b9b8d0 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -80,8 +80,6 @@ and OpenShift. It is designed, developed, and supported by EDB and covers the
full lifecycle of a highly available Postgres database clusters with a
primary/standby architecture, using native streaming replication.
-EDB Postgres for Kubernetes was made generally available on February 4, 2021. Earlier versions were made available to selected customers prior to the GA release.
-
!!! Note
The operator has been renamed from Cloud Native PostgreSQL. Existing users
diff --git a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
index 30b0aad876c..038fc04854a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/installation_upgrade.mdx
@@ -19,12 +19,12 @@ The operator can be installed using the provided [Helm chart](https://github.com
The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.
-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.22.2.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.23.0.yaml)
for this minor release as follows:
```sh
kubectl apply --server-side -f \
- https://get.enterprisedb.io/cnp/postgresql-operator-1.22.2.yaml
+ https://get.enterprisedb.io/cnp/postgresql-operator-1.23.0.yaml
```
You can verify that with:
@@ -84,7 +84,7 @@ specific minor release, you can just run:
```sh
curl -sSfL \
- https://raw.githubusercontent.com/cloudnative-pg/artifacts/release-1.22/manifests/operator-manifest.yaml | \
+ https://raw.githubusercontent.com/cloudnative-pg/artifacts/release-1.23/manifests/operator-manifest.yaml | \
kubectl apply --server-side -f -
```
@@ -159,10 +159,6 @@ plane for self-managed Kubernetes installations).
before performing an upgrade as some versions might require
extra steps.
-!!! Warning
- If you are upgrading to version 1.20, please read carefully
- the [dedicated section below](#upgrading-to-120-from-a-previous-minor-version).
-
Upgrading EDB Postgres for Kubernetes operator is a two-step process:
1. upgrade the controller and the related Kubernetes resources
@@ -250,54 +246,51 @@ When versions are not directly upgradable, the old version needs to be
removed before installing the new one. This won't affect user data but
only the operator itself.
-### Upgrading to 1.22 from a previous minor version
+### Upgrading to 1.23.0, 1.22.3 or 1.21.5
!!! Important
- If you are transitioning from a prior minor version to version 1.22, please
- ensure that you are using the latest available patch version, which is
- currently 1.22.2. This guarantees that you benefit from the most recent bug
- fixes, security updates, and improvements associated with the 1.22 series.
+ We encourage all existing users of EDB Postgres for Kubernetes to upgrade to version
+ 1.23.0 or at least to the latest stable version of the minor release you are
+ currently using (namely 1.22.2 or 1.21.4).
!!! Warning
Every time you are upgrading to a higher minor release, make sure you
go through the release notes and upgrade instructions of all the
intermediate minor releases. For example, if you want to move
- from 1.20.x to 1.22, make sure you go through the release notes
- and upgrade instructions for 1.21 and 1.22.
+ from 1.21.x to 1.23, make sure you go through the release notes
+ and upgrade instructions for 1.22 and 1.23.
-EDB Postgres for Kubernetes continues to adhere to the security-by-default approach. As of
-version 1.22, the usage of the `ALTER SYSTEM` command is now disabled by
-default.
+#### User defined replication slots
-The reason behind this choice is to ensure that, by default, changes to the
-PostgreSQL configuration in a database cluster controlled by EDB Postgres for Kubernetes are
-allowed only through the Kubernetes API.
+EDB Postgres for Kubernetes now offers automated synchronization of all replication slots
+defined on the primary to any standby within the High Availability (HA)
+cluster.
-At the same time, we are providing an option to enable `ALTER SYSTEM` if you
-need to use it, even temporarily, from versions 1.22.0, 1.21.2, and 1.20.5,
-by setting `.spec.postgresql.enableAlterSystem` to `true`, as in the following
-excerpt:
+If you manually manage replication slots on a standby, it is essential to
+exclude those replication slots from synchronization. Failure to do so may
+result in EDB Postgres for Kubernetes removing them from the standby. To implement this
+exclusion, utilize the following YAML configuration. In this example,
+replication slots with a name starting with 'foo' are prevented from
+synchronization:
```yaml
...
- postgresql:
- enableAlterSystem: true
-...
+ replicationSlots:
+ synchronizeReplicas:
+ enabled: true
+ excludePatterns:
+ - "^foo"
```
-Clusters in 1.22 will have `enableAlterSystem` set to `false` by default.
-If you want to retain the existing behavior, in 1.22, you need to explicitly
-set `enableAlterSystem` to `true` as shown above.
+Alternatively, if you prefer to disable the synchronization mechanism entirely,
+use the following configuration:
-In versions 1.21.2 and 1.20.5, and later patch releases in the 1.20 and 1.21
-branches, `enableAlterSystem` will be set to `true` by default, keeping with
-the existing behavior. If you don't need to use `ALTER SYSTEM`, we recommend
-that you set `enableAlterSystem` explicitly to `false`.
-
-!!! Important
- You can set the desired value for `enableAlterSystem` immediately
- following your upgrade to version 1.22.0, 1.21.2, or 1.20.5, as shown in
- the example above.
+```yaml
+...
+ replicationSlots:
+ synchronizeReplicas:
+ enabled: false
+```
#### Server-side apply of manifests
@@ -325,6 +318,42 @@ Henceforth, `kube-apiserver` will be automatically acknowledged as a recognized
manager for the CRDs, eliminating the need for any further manual intervention
on this matter.
+### Upgrading to 1.22 from a previous minor version
+
+EDB Postgres for Kubernetes continues to adhere to the security-by-default approach. As of
+version 1.22, the usage of the `ALTER SYSTEM` command is now disabled by
+default.
+
+The reason behind this choice is to ensure that, by default, changes to the
+PostgreSQL configuration in a database cluster controlled by EDB Postgres for Kubernetes are
+allowed only through the Kubernetes API.
+
+At the same time, we are providing an option to enable `ALTER SYSTEM` if you
+need to use it, even temporarily, from versions 1.22.0, 1.21.2, and 1.20.5,
+by setting `.spec.postgresql.enableAlterSystem` to `true`, as in the following
+excerpt:
+
+```yaml
+...
+ postgresql:
+ enableAlterSystem: true
+...
+```
+
+Clusters in 1.22 will have `enableAlterSystem` set to `false` by default.
+If you want to retain the existing behavior, in 1.22, you need to explicitly
+set `enableAlterSystem` to `true` as shown above.
+
+In versions 1.21.2 and 1.20.5, and later patch releases in the 1.20 and 1.21
+branches, `enableAlterSystem` will be set to `true` by default, keeping with
+the existing behavior. If you don't need to use `ALTER SYSTEM`, we recommend
+that you set `enableAlterSystem` explicitly to `false`.
+
+!!! Important
+ You can set the desired value for `enableAlterSystem` immediately
+ following your upgrade to version 1.22.0, 1.21.2, or 1.20.5, as shown in
+ the example above.
+
### Upgrading to 1.21 from a previous minor version
With the goal to keep improving out-of-the-box the *convention over
@@ -498,79 +527,3 @@ spec:
...
smartShutdownTimeout: 15
```
-
-### Upgrading to 1.20 from a previous minor version
-
-EDB Postgres for Kubernetes 1.20 introduces some changes from previous versions of the
-operator in the default behavior of a few features, with the goal to improve
-resilience and usability of a Postgres cluster out of the box, through
-convention over configuration.
-
-!!! Important
- These changes all involve cases where at least one replica is present, and
- **only affect new `Cluster` resources**.
-
-#### Backup from a standby
-
-[Backup from a standby](backup.md#backup-from-a-standby)
-was introduced in EDB Postgres for Kubernetes 1.19, but disabled by default - meaning that
-the base backup is taken from the primary unless the target is explicitly
-set to prefer standby.
-
-From version 1.20, if one or more replicas are available, the operator
-will prefer the most aligned standby to take a full base backup.
-
-If you are upgrading your EDB Postgres for Kubernetes deployment to 1.20 and are concerned that
-this feature might impact your production environment for the new `Cluster` resources
-that you create, you can explicitly set the target to the primary by adding the
-following line to all your `Cluster` resources:
-
-```yaml
-spec:
- ...
- backup:
- target: "primary"
-```
-
-#### Restart of a primary after a rolling update
-
-[Automated rolling updates](rolling_update.md#automated-updates-unsupervised)
-have been always available in EDB Postgres for Kubernetes, and by default they update the
-primary after having performed a switchover to the most aligned replica.
-
-From version 1.20, we are changing the default update method
-of the primary from switchover to restart as, in most cases, this is
-the fastest and safest way.
-
-If you are upgrading your EDB Postgres for Kubernetes deployment to 1.20 and are concerned that
-this feature might impact your production environment for the new `Cluster`
-resources that you create, you can explicitly set the update method of the
-primary to switchover by adding the following line to all your `Cluster`
-resources:
-
-```yaml
-spec:
- ...
- primaryUpdateMethod: switchover
-```
-
-#### Replication slots for High Availability
-
-[Replication slots for High Availability](replication.md#replication-slots-for-high-availability)
-were introduced in EDB Postgres for Kubernetes in version 1.18, but disabled by default.
-
-Version 1.20 prepares the ground for enabling this feature by default in any
-future release, as replication slots enhance the resilience and robustness of a
-High Availability cluster.
-
-For future compatibility, if you already know that your environments won't ever
-need replication slots, our recommendation is that you explicitly disable their
-management by adding from now the following lines to your `Cluster` resources:
-
-```yaml
-spec:
- ...
- replicationSlots:
- highAvailability:
- enabled: false
-```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
index b2b010faac1..e1982245b0a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -34,67 +34,52 @@ them in your systems.
#### Debian packages
-For example, let's install the 1.22.2 release of the plugin, for an Intel based
+For example, let's install the 1.18.1 release of the plugin, for an Intel based
64 bit server. First, we download the right `.deb` file.
```sh
-wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.deb
+$ wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb
```
Then, install from the local file using `dpkg`:
```sh
-dpkg -i kubectl-cnp_1.22.2_linux_x86_64.deb
-__OUTPUT__
+$ dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb
(Reading database ... 16102 files and directories currently installed.)
-Preparing to unpack kubectl-cnp_1.22.2_linux_x86_64.deb ...
-Unpacking cnp (1.22.2) over (1.22.2) ...
-Setting up cnp (1.22.2) ...
+Preparing to unpack kubectl-cnp_1.18.1_linux_x86_64.deb ...
+Unpacking cnp (1.18.1) over (1.18.1) ...
+Setting up cnp (1.18.1) ...
```
#### RPM packages
-As in the example for `.deb` packages, let's install the 1.22.2 release for an
+As in the example for `.deb` packages, let's install the 1.18.1 release for an
Intel 64 bit machine. Note the `--output` flag to provide a file name.
-``` sh
-curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.rpm \
- --output kube-plugin.rpm
+```sh
+curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm --output cnp-plugin.rpm
```
Then install with `yum`, and you're ready to use:
```sh
-yum --disablerepo=* localinstall kube-plugin.rpm
-__OUTPUT__
+$ yum --disablerepo=* localinstall cnp-plugin.rpm
+yum --disablerepo=* localinstall cnp-plugin.rpm
+Failed to set locale, defaulting to C.UTF-8
Dependencies resolved.
-========================================================================================================================
- Package Architecture Version Repository Size
-========================================================================================================================
+====================================================================================================
+ Package Architecture Version Repository Size
+====================================================================================================
Installing:
- kubectl-cnp x86_64 1.22.2-1 @commandline 17 M
+ cnpg x86_64 1.18.1-1 @commandline 14 M
Transaction Summary
-========================================================================================================================
+====================================================================================================
Install 1 Package
-Total size: 17 M
-Installed size: 62 M
+Total size: 14 M
+Installed size: 43 M
Is this ok [y/N]: y
-Downloading Packages:
-Running transaction check
-Transaction check succeeded.
-Running transaction test
-Transaction test succeeded.
-Running transaction
- Preparing : 1/1
- Installing : kubectl-cnp-1.22.2-1.x86_64 1/1
- Verifying : kubectl-cnp-1.22.2-1.x86_64 1/1
-
-Installed:
- kubectl-cnp-1.22.2-1.x86_64
-
-Complete!
```
### Supported Architectures
@@ -117,29 +102,6 @@ operating system and architectures:
- arm 5/6/7
- arm64
-### Configuring auto-completion
-
-To configure [auto-completion](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/) for the plugin, a helper shell script needs to be
-installed into your current PATH. Assuming the latter contains `/usr/local/bin`,
-this can be done with the following commands:
-
-```shell
-cat > kubectl_complete-cnp <<EOF
-- `--version`: version of the operator to be installed in `<major>.<minor>.<patch>` format (e.g. `1.22.2`). The default empty value installs the version of the operator that matches the version of the plugin.
+- `--version`: minor version of the operator to be installed, such as `1.17`.
+ If a minor version is specified, the plugin will install the latest patch
+ version of that minor version. If no version is supplied, the plugin will
+ install the latest `MAJOR.MINOR.PATCH` version of the operator.
- `--watch-namespace`: comma separated string containing the namespaces to
watch (by default all namespaces)
@@ -175,7 +140,7 @@ will install the operator, is as follows:
```shell
kubectl cnp install generate \
-n king \
- --version 1.22.2 \
+ --version 1.17 \
--replicas 3 \
--watch-namespace "albert, bb, freddie" \
> operator.yaml
@@ -184,9 +149,9 @@ kubectl cnp install generate \
The flags in the above command have the following meaning:
- `-n king` install the CNP operator into the `king` namespace
-- `--version 1.22.2` install operator version 1.22.2
+- `--version 1.17` install the latest patch version for minor version 1.17
- `--replicas 3` install the operator with 3 replicas
-- `--watch-namespace "albert, bb, freddie"` have the operator watch for
+- `--watch-namespace "albert, bb, freddie"` have the operator watch for
changes in the `albert`, `bb` and `freddie` namespaces only
### Status
@@ -222,7 +187,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2
+PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -267,7 +232,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2
+PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -757,89 +722,6 @@ items:
"apiVersion": "postgresql.k8s.enterprisedb.io/v1",
```
-### Logs
-
-The `kubectl cnp logs` command allows to follow the logs of a collection
-of pods related to EDB Postgres for Kubernetes in a single go.
-
-It has at the moment one available sub-command: `cluster`.
-
-#### Cluster logs
-
-The `cluster` sub-command gathers all the pod logs for a cluster in a single
-stream or file.
-This means that you can get all the pod logs in a single terminal window, with a
-single invocation of the command.
-
-As in all the cnp plugin sub-commands, you can get instructions and help with
-the `-h` flag:
-
-`kubectl cnp logs cluster -h`
-
-The `logs` command will display logs in JSON-lines format, unless the
-`--timestamps` flag is used, in which case, a human readable timestamp will be
-prepended to each line. In this case, lines will no longer be valid JSON,
-and tools such as `jq` may not work as desired.
-
-If the `logs cluster` sub-command is given the `-f` flag (aka `--follow`), it
-will follow the cluster pod logs, and will also watch for any new pods created
-in the cluster after the command has been invoked.
-Any new pods found, including pods that have been restarted or re-created,
-will also have their pods followed.
-The logs will be displayed in the terminal's standard-out.
-This command will only exit when the cluster has no more pods left, or when it
-is interrupted by the user.
-
-If `logs` is called without the `-f` option, it will read the logs from all
-cluster pods until the time of invocation and display them in the terminal's
-standard-out, then exit.
-The `-o` or `--output` flag can be provided, to specify the name
-of the file where the logs should be saved, instead of displaying over
-standard-out.
-The `--tail` flag can be used to specify how many log lines will be retrieved
-from each pod in the cluster. By default, the `logs cluster` sub-command will
-display all the logs from each pod in the cluster. If combined with the "follow"
-flag `-f`, the number of logs specified by `--tail` will be retrieved until the
-current time, and and from then the new logs will be followed.
-
-NOTE: unlike other `cnp` plugin commands, the `-f` is used to denote "follow"
-rather than specify a file. This keeps with the convention of `kubectl logs`,
-which takes `-f` to mean the logs should be followed.
-
-Usage:
-
-```shell
-kubectl cnp logs cluster [flags]
-```
-
-Using the `-f` option to follow:
-
-```shell
-kubectl cnp report cluster cluster-example -f
-```
-
-Using `--tail` option to display 3 lines from each pod and the `-f` option
-to follow:
-
-```shell
-kubectl cnp report cluster cluster-example -f --tail 3
-```
-
-``` json
-{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
-{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] HINT: Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
-…
-…
-```
-
-With the `-o` option omitted, and with `--output` specified:
-
-``` sh
-kubectl cnp logs cluster cluster-example --output my-cluster.log
-
-Successfully written logs to "my-cluster.log"
-```
-
### Destroy
The `kubectl cnp destroy` command helps remove an instance and all the
@@ -944,16 +826,11 @@ kubectl cnp fio -n
Refer to the [Benchmarking fio section](benchmarking.md#fio) for more details.
-### Requesting a new physical backup
+### Requesting a new base backup
The `kubectl cnp backup` command requests a new physical base backup for
an existing Postgres cluster by creating a new `Backup` resource.
-!!! Info
- From release 1.21, the `backup` command accepts a new flag, `-m`
- to specify the backup method.
- To request a backup using volume snapshots, set `-m volumeSnapshot`
-
The following example requests an on-demand backup for a given cluster:
```shell
@@ -967,17 +844,10 @@ kubectl cnp backup cluster-example
backup/cluster-example-20230121002300 created
```
-By default, a newly created backup will use the backup target policy defined
-in the cluster to choose which instance to run on.
-However, you can override this policy with the `--backup-target` option.
-
-In the case of volume snapshot backups, you can also use the `--online` option
-to request an online/hot backup or an offline/cold one: additionally, you can
-also tune online backups by explicitly setting the `--immediate-checkpoint` and
-`--wait-for-archive` options.
-
-The ["Backup" section](./backup.md#backup) contains more information about
-the configuration settings.
+By default, a newly created backup uses the backup target policy defined
+in the cluster to choose which instance to run on. You can also use the `--backup-target`
+option to override this policy. Please refer to [Backup and Recovery](backup_recovery.md)
+for more information about the backup target.
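+
+For example, a sketch (the `primary` value assumes the standard backup target
+options) that overrides the cluster policy and runs the backup on the primary:
+
+```shell
+kubectl cnp backup cluster-example --backup-target primary
+```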
### Launching psql
@@ -992,7 +862,7 @@ it from the actual pod. This means that you will be using the `postgres` user.
```shell
kubectl cnp psql cluster-example
-psql (16.2 (Debian 16.2-1.pgdg110+1))
+psql (15.3)
Type "help" for help.
postgres=#
@@ -1003,7 +873,7 @@ select to work against a replica by using the `--replica` option:
```shell
kubectl cnp psql --replica cluster-example
-psql (16.2 (Debian 16.2-1.pgdg110+1))
+psql (15.3)
Type "help" for help.
@@ -1031,335 +901,44 @@ kubectl cnp psql cluster-example -- -U postgres
### Snapshotting a Postgres cluster
-!!! Warning
- The `kubectl cnp snapshot` command has been removed.
- Please use the [`backup` command](#requesting-a-new-physical-backup) to request
- backups using volume snapshots.
-
-### Using pgAdmin4 for evaluation/demonstration purposes only
-
-[pgAdmin](https://www.pgadmin.org/) stands as the most popular and feature-rich
-open-source administration and development platform for PostgreSQL.
-For more information on the project, please refer to the official
-[documentation](https://www.pgadmin.org/docs/).
-
-Given that the pgAdmin Development Team maintains official Docker container
-images, you can install pgAdmin in your environment as a standard
-Kubernetes deployment.
-
-!!! Important
- Deployment of pgAdmin in Kubernetes production environments is beyond the
- scope of this document and, more broadly, of the EDB Postgres for Kubernetes project.
-
-However, **for the purposes of demonstration and evaluation**, EDB Postgres for Kubernetes
-offers a suitable solution. The `cnp` plugin implements the `pgadmin4`
-command, providing a straightforward method to connect to a given database
-`Cluster` and navigate its content in a local environment such as `kind`.
-
-For example, you can install a demo deployment of pgAdmin4 for the
-`cluster-example` cluster as follows:
-
-```sh
-kubectl cnp pgadmin4 cluster-example
-```
-
-This command will produce:
-
-```output
-ConfigMap/cluster-example-pgadmin4 created
-Deployment/cluster-example-pgadmin4 created
-Service/cluster-example-pgadmin4 created
-Secret/cluster-example-pgadmin4 created
-
-[...]
-```
-
-After deploying pgAdmin, forward the port using kubectl and connect
-through your browser by following the on-screen instructions.
-
-![Screenshot of desktop installation of pgAdmin](images/pgadmin4.png)
+The `kubectl cnp snapshot` command creates consistent snapshots of a Postgres
+`Cluster` by:
-As usual, you can use the `--dry-run` option to generate the YAML file:
-
-```sh
-kubectl cnp pgadmin4 --dry-run cluster-example
-```
-
-pgAdmin4 can be installed in either desktop or server mode, with the default
-being server.
-
-In `server` mode, authentication is required using a randomly generated password,
-and users must manually specify the database to connect to.
-
-On the other hand, `desktop` mode initiates a pgAdmin web interface without
-requiring authentication. It automatically connects to the `app` database as the
-`app` user, making it ideal for quick demos, such as on a local deployment using
-`kind`:
-
-```sh
-kubectl cnp pgadmin4 --mode desktop cluster-example
-```
-
-After concluding your demo, ensure the termination of the pgAdmin deployment by
-executing:
-
-```sh
-kubectl cnp pgadmin4 --dry-run cluster-example | kubectl delete -f -
-```
-
-!!! Warning
- Never deploy pgAdmin in production using the plugin.
-
-### Logical Replication Publications
-
-The `cnp publication` command group is designed to streamline the creation and
-removal of [PostgreSQL logical replication publications](https://www.postgresql.org/docs/current/logical-replication-publication.html).
-Be aware that these commands are primarily intended for assisting in the
-creation of logical replication publications, particularly on remote PostgreSQL
-databases.
+1. choosing a replica Pod to work on
+2. fencing the replica
+3. taking the snapshot
+4. unfencing the replica
!!! Warning
- It is crucial to have a solid understanding of both the capabilities and
- limitations of PostgreSQL's native logical replication system before using
- these commands.
- In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
-
-#### Creating a new publication
-
-To create a logical replication publication, use the `cnp publication create`
-command. The basic structure of this command is as follows:
+ A cluster already having a fenced instance cannot be snapshotted.
-```sh
-kubectl cnp publication create \
- --publication \
- [--external-cluster ]
- [options]
-```
+At the moment, this command can be used only for clusters having at least one
+replica: that replica will be shut down by the fencing procedure to ensure that the
+snapshot is consistent (cold backup). As the development of
+declarative support for Kubernetes' `VolumeSnapshot` API continues,
+this limitation will be removed, allowing you to take online backups
+as business continuity requires.
-There are two primary use cases:
-
-- With `--external-cluster`: Use this option to create a publication on an
- external cluster (i.e. defined in the `externalClusters` stanza). The commands
- will be issued from the ``, but the publication will be for the
- data in ``.
-
-- Without `--external-cluster`: Use this option to create a publication in the
- `` PostgreSQL `Cluster` (by default, the `app` database).
-
-!!! Warning
- When connecting to an external cluster, ensure that the specified user has
- sufficient permissions to execute the `CREATE PUBLICATION` command.
-
-You have several options, similar to the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html)
-command, to define the group of tables to replicate. Notable options include:
-
-- If you specify the `--all-tables` option, you create a publication `FOR ALL TABLES`.
-- Alternatively, you can specify multiple occurrences of:
- - `--table`: Add a specific table (with an expression) to the publication.
- - `--schema`: Include all tables in the specified database schema (available
- from PostgreSQL 15).
-
-The `--dry-run` option enables you to preview the SQL commands that the plugin
-will execute.
-
-For additional information and detailed instructions, type the following
-command:
-
-```sh
-kubectl cnp publication create --help
-```
-
-##### Example
-
-Given a `source-cluster` and a `destination-cluster`, we would like to create a
-publication for the data on `source-cluster`.
-The `destination-cluster` has an entry in the `externalClusters` stanza pointing
-to `source-cluster`.
-
-We can run:
-
-``` sh
-kubectl cnp publication create destination-cluster \
- --external-cluster=source-cluster --all-tables
-```
-
-which will create a publication for all tables on `source-cluster`, running
-the SQL commands on the `destination-cluster`.
-
-Or instead, we can run:
-
-``` sh
-kubectl cnp publication create source-cluster \
- --publication=app --all-tables
-```
-
-which will create a publication named `app` for all the tables in the
-`source-cluster`, running the SQL commands on the source cluster.
-
-!!! Info
- There are two sample files that have been provided for illustration and inspiration:
- [logical-source](../samples/cluster-example-logical-source.yaml) and
- [logical-destination](../samples/cluster-example-logical-destination.yaml).
-
-#### Dropping a publication
-
-The `cnp publication drop` command seamlessly complements the `create` command
-by offering similar key options, including the publication name, cluster name,
-and an optional external cluster. You can drop a `PUBLICATION` with the
-following command structure:
-
-```sh
-kubectl cnp publication drop \
- --publication \
- [--external-cluster ]
- [options]
-```
-
-To access further details and precise instructions, use the following command:
-
-```sh
-kubectl cnp publication drop --help
-```
-
-### Logical Replication Subscriptions
-
-The `cnp subscription` command group is a dedicated set of commands designed
-to simplify the creation and removal of
-[PostgreSQL logical replication subscriptions](https://www.postgresql.org/docs/current/logical-replication-subscription.html).
-These commands are specifically crafted to aid in the establishment of logical
-replication subscriptions, especially when dealing with remote PostgreSQL
-databases.
-
-!!! Warning
- Before using these commands, it is essential to have a comprehensive
- understanding of both the capabilities and limitations of PostgreSQL's
- native logical replication system.
- In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
-
-In addition to subscription management, we provide a helpful command for
-synchronizing all sequences from the source cluster. While its applicability
-may vary, this command can be particularly useful in scenarios involving major
-upgrades or data import from remote servers.
-
-#### Creating a new subscription
-
-To create a logical replication subscription, use the `cnp subscription create`
-command. The basic structure of this command is as follows:
-
-```sh
-kubectl cnp subscription create \
- --subscription \
- --publication \
- --external-cluster \
- [options]
-```
-
-This command configures a subscription directed towards the specified
-publication in the designated external cluster, as defined in the
-`externalClusters` stanza of the ``.
-
-For additional information and detailed instructions, type the following
-command:
-
-```sh
-kubectl cnp subscription create --help
-```
-
-##### Example
-
-As in the section on publications, we have a `source-cluster` and a
-`destination-cluster`, and we have already created a publication called
-`app`.
-
-The following command:
-
-``` sh
-kubectl cnp subscription create destination-cluster \
- --external-cluster=source-cluster \
- --publication=app --subscription=app
-```
-
-will create a subscription for `app` on the destination cluster.
-
-!!! Warning
- Prioritize testing subscriptions in a non-production environment to ensure
- their effectiveness and identify any potential issues before implementing them
- in a production setting.
-
-!!! Info
- There are two sample files that have been provided for illustration and inspiration:
- [logical-source](../samples/cluster-example-logical-source.yaml) and
- [logical-destination](../samples/cluster-example-logical-destination.yaml).
-
-#### Dropping a subscription
-
-The `cnp subscription drop` command seamlessly complements the `create` command.
-You can drop a `SUBSCRIPTION` with the following command structure:
-
-```sh
-kubectl cnp subcription drop \
- --subscription \
- [options]
-```
-
-To access further details and precise instructions, use the following command:
-
-```sh
-kubectl cnp subscription drop --help
-```
-
-#### Synchronizing sequences
-
-One notable constraint of PostgreSQL logical replication, implemented through
-publications and subscriptions, is the lack of sequence synchronization. This
-becomes particularly relevant when utilizing logical replication for live
-database migration, especially to a higher version of PostgreSQL. A crucial
-step in this process involves updating sequences before transitioning
-applications to the new database (*cutover*).
-
-To address this limitation, the `cnp subscription sync-sequences` command
-offers a solution. This command establishes a connection with the source
-database, retrieves all relevant sequences, and subsequently updates local
-sequences with matching identities (based on database schema and sequence
-name).
-
-You can use the command as shown below:
+!!! Important
+ Even though the procedure shuts down a replica, the primary
+ Pod is not involved.
-```sh
-kubectl cnp subscription sync-sequences \
- --subscription \
-
-```
+The `kubectl cnp snapshot` command requires the cluster name:
-For comprehensive details and specific instructions, utilize the following
-command:
+```shell
+kubectl cnp snapshot cluster-example
-```sh
-kubectl cnp subscription sync-sequences --help
+waiting for cluster-example-3 to be fenced
+waiting for VolumeSnapshot cluster-example-3-1682539624 to be ready to use
+unfencing pod cluster-example-3
```
-##### Example
+The `VolumeSnapshot` resource will be created with an empty
+`VolumeSnapshotClass` reference, which means it will use the
+`VolumeSnapshotClass` configured as the default.
-As in the previous sections for publication and subscription, we have
-a `source-cluster` and a `destination-cluster`. The publication and the
-subscription, both called `app`, are already present.
+A specific `VolumeSnapshotClass` can be requested via the `-c` option:
-The following command will synchronize the sequences involved in the
-`app` subscription, from the source cluster into the destination cluster.
-
-``` sh
-kubectl cnp subscription sync-sequences destination-cluster \
- --subscription=app
+```shell
+kubectl cnp snapshot cluster-example -c longhorn
```
-
-!!! Warning
- Prioritize testing subscriptions in a non-production environment to
- guarantee their effectiveness and detect any potential issues before deploying
- them in a production setting.
-
-## Integration with K9s
-
-The `cnp` plugin can be easily integrated in [K9s](https://k9scli.io/), a
-popular terminal-based UI to interact with Kubernetes clusters.
-
-See [`k9s/plugins.yml`](../samples/k9s/plugins.yml) for details.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
index 321a8dd6b29..956e2e2813a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubernetes_upgrade.mdx
@@ -1,57 +1,112 @@
---
-title: 'Kubernetes Upgrade'
+title: 'Kubernetes Upgrade and Maintenance'
originalFilePath: 'src/kubernetes_upgrade.md'
---
-Kubernetes clusters must be kept updated. This becomes even more
-important if you are self-managing your Kubernetes clusters, especially
-on **bare metal**.
-
-Planning and executing regular updates is a way for your organization
-to clean up the technical debt and reduce the business risks, despite
-the introduction in your Kubernetes infrastructure of controlled
-downtimes that temporarily take out a node from the cluster for
-maintenance reasons (recommended reading:
+Maintaining an up-to-date Kubernetes cluster is crucial for optimal
+performance and security, particularly for self-managed clusters running on
+bare metal infrastructure. Regular updates help address
+technical debt and mitigate business risks, despite the controlled downtimes
+associated with temporarily removing a node from the cluster for maintenance
+purposes. For further insights on embracing risk in operations, refer to the
["Embracing Risk"](https://landing.google.com/sre/sre-book/chapters/embracing-risk/)
-from the Site Reliability Engineering book).
+chapter from the Site Reliability Engineering book.
+
+## Importance of Regular Updates
-For example, you might need to apply security updates on the Linux
-servers where Kubernetes is installed, or to replace a malfunctioning
-hardware component such as RAM, CPU, or RAID controller, or even upgrade
-the cluster to the latest version of Kubernetes.
+Updating Kubernetes involves planning and executing maintenance tasks, such as
+applying security updates to underlying Linux servers, replacing malfunctioning
+hardware components, or upgrading the cluster to the latest Kubernetes version.
+These activities are essential for maintaining a robust and secure
+infrastructure.
-Usually, maintenance operations in a cluster are performed one node
-at a time by:
+## Maintenance Operations in a Cluster
-1. evicting the workloads from the node to be updated (`drain`)
-2. performing the actual operation (for example, system update)
-3. re-joining the node to the cluster (`uncordon`)
+Typically, maintenance operations are carried out on one node at a time, following a [structured process](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/):
-The above process requires workloads to be either stopped for the
-entire duration of the upgrade or migrated to another node.
+1. eviction of workloads (`drain`): workloads are gracefully moved away from
+ the node to be updated, ensuring a smooth transition.
+2. performing the operation: the actual maintenance operation, such as a
+ system update or hardware replacement, is executed.
+3. rejoining the node to the cluster (`uncordon`): the updated node is
+ reintegrated into the cluster, ready to resume its responsibilities.
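+
+As a sketch, steps 1 and 3 typically map to standard `kubectl` commands like
+the following (the node name is illustrative):
+
+```shell
+# Step 1: evict the workloads from the node under maintenance
+kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
+
+# Step 2: perform the maintenance operation (system update, hardware swap, ...)
+
+# Step 3: reintegrate the node into the cluster
+kubectl uncordon worker-node-1
+```
+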
-While the latest case is the expected one in terms of service
-reliability and self-healing capabilities of Kubernetes, there can
-be situations where it is advised to operate with a temporarily
-degraded cluster and wait for the upgraded node to be up again.
+This process requires either stopping workloads for the entire upgrade duration
+or migrating them to other nodes in the cluster.
-In particular, if your PostgreSQL cluster relies on **node-local storage**
-\- that is *storage which is local to the Kubernetes worker node where
-the PostgreSQL database is running*.
-Node-local storage (or simply *local storage*) is used to enhance performance.
+## Temporary PostgreSQL Cluster Degradation
+
+While the standard approach ensures service reliability and leverages
+Kubernetes' self-healing capabilities, there are scenarios where operating with
+a temporarily degraded cluster may be acceptable. This is particularly relevant
+for PostgreSQL clusters relying on **node-local storage**, where the storage is
+local to the Kubernetes worker node running the PostgreSQL database. Node-local
+storage, or simply *local storage*, is employed to enhance performance.
!!! Note
- If your database files are on shared storage over the network,
- you may not need to define a maintenance window. If the volumes currently
- used by the pods can be reused by pods running on different nodes after
- the drain, the default self-healing behavior of the operator will work
- fine (you can then skip the rest of this section).
-
-When using local storage for PostgreSQL, you are advised to temporarily
-put the cluster in **maintenance mode** through the `nodeMaintenanceWindow`
-option to avoid standard self-healing procedures to kick in,
-while, for example, enlarging the partition on the physical node or
-updating the node itself.
+ If your database files reside on shared storage accessible over the
+ network, the default self-healing behavior of the operator can efficiently
+ handle scenarios where volumes are reused by pods on different nodes after a
+ drain operation. In such cases, you can skip the remaining sections of this
+ document.
+
+## Pod Disruption Budgets
+
+By default, EDB Postgres for Kubernetes safeguards Postgres cluster operations. If a node is
+to be drained and contains a cluster's primary instance, a switchover happens
+ahead of the drain. Once the instance in the node is demoted to a replica, the
+draining can resume.
+For single-instance clusters, a switchover is not possible, so EDB Postgres for Kubernetes
+will prevent draining the node where the instance is housed.
+Additionally, in multi-instance clusters, EDB Postgres for Kubernetes guarantees that only
+one replica at a time is gracefully shut down during a drain operation.
+
+Each PostgreSQL `Cluster` is equipped with two associated `PodDisruptionBudget`
+resources. You can easily confirm this with the `kubectl get pdb` command.
+
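+For example, running the following in the cluster's namespace should list two
+budgets for each `Cluster` resource:
+
+```shell
+# List the pod disruption budgets managed by the operator
+kubectl get pdb
+```
+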
+Our recommendation is to leave pod disruption budgets enabled for every
+production Postgres cluster. This can be effortlessly managed by toggling the
+`.spec.enablePDB` option, as detailed in the
+[API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ClusterSpec).
+
+## PostgreSQL Clusters used for Development or Testing
+
+For PostgreSQL clusters used for development purposes, often consisting of
+a single instance, it is essential to disable pod disruption budgets. Failure
+to do so will prevent the node hosting that cluster from being drained.
+
+The following example illustrates how to disable pod disruption budgets for a
+1-instance development cluster:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: dev
+spec:
+ instances: 1
+ enablePDB: false
+
+ storage:
+ size: 1Gi
+```
+
+This configuration ensures smoother maintenance procedures without restrictions
+on draining the node during development activities.
+
+## Node Maintenance Window
+
+!!! Important
+ While EDB Postgres for Kubernetes will continue supporting the node maintenance window,
+ it is currently recommended to transition to direct control of pod disruption
+ budgets, as explained in the previous section. This section is retained
+ mainly for backward compatibility.
+
+Prior to release 1.23, EDB Postgres for Kubernetes had just one declarative mechanism to manage
+Kubernetes upgrades when dealing with local storage: you had to temporarily put
+the cluster in **maintenance mode** through the `nodeMaintenanceWindow` option
+to prevent standard self-healing procedures from kicking in while, for example,
+enlarging the partition on the physical node or updating the node itself.
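+
+As a minimal, illustrative sketch (assuming the `inProgress` and `reusePVC`
+fields documented in the API reference), maintenance mode can be declared as
+follows:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  nodeMaintenanceWindow:
+    inProgress: true
+    reusePVC: true
+  storage:
+    size: 1Gi
+```
+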
!!! Warning
Limit the duration of the maintenance window to the shortest
@@ -90,7 +145,13 @@ reusePVC disabled: see section below.
Don't be afraid: it refers to another volume internally used
by the operator - not the PostgreSQL data directory.
-## Single instance clusters with `reusePVC` set to `false`
+!!! Important
+ `PodDisruptionBudget` management can be disabled by setting the
+ `.spec.enablePDB` field to `false`. In that case, the operator won't
+ create `PodDisruptionBudgets` and will delete them if they were
+ previously created.
+
+### Single instance clusters with `reusePVC` set to `false`
!!! Important
We recommend to always create clusters with more
diff --git a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
index 9bb673bd5ab..55805a60e80 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/labels_annotations.mdx
@@ -71,7 +71,8 @@ These predefined labels are managed by EDB Postgres for Kubernetes.
instead
`k8s.enterprisedb.io/podRole`
-: Role of the pod: `instance`, or `pooler`
+: Distinguishes pods dedicated to pooler deployment from those used for
+ database instances
`k8s.enterprisedb.io/poolerName`
: Name of the PgBouncer pooler
@@ -85,12 +86,15 @@ instead
`role` - **deprecated**
: Whether the instance running in a pod is a `primary` or a `replica`.
- This label is deprecated, you should use `k8s.enterprisedb.io/podRole` instead.
+ This label is deprecated, you should use `k8s.enterprisedb.io/instanceRole` instead.
`k8s.enterprisedb.io/scheduled-backup`
: When available, name of the `ScheduledBackup` resource that created a given
`Backup` object
+`k8s.enterprisedb.io/instanceRole`
+: Whether the instance running in a pod is a `primary` or a `replica`.
+
## Predefined annotations
These predefined annotations are managed by EDB Postgres for Kubernetes.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
index b06db08c8f4..4dca75edd84 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/monitoring.mdx
@@ -176,7 +176,7 @@ cnp_collector_up{cluster="cluster-example"} 1
# HELP cnp_collector_postgres_version Postgres version
# TYPE cnp_collector_postgres_version gauge
-cnp_collector_postgres_version{cluster="cluster-example",full="16.1"} 16.1
+cnp_collector_postgres_version{cluster="cluster-example",full="16.2"} 16.2
# HELP cnp_collector_last_failed_backup_timestamp The last failed backup as a unix timestamp
# TYPE cnp_collector_last_failed_backup_timestamp gauge
diff --git a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
index a28a1b1fba9..83a061bf6b4 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/operator_capability_levels.mdx
@@ -64,6 +64,10 @@ primary/standby architecture directly by setting the `imageName`
attribute in the CR. The operator also supports `imagePullSecrets`
to access private container registries, and it supports digests and
tags for finer control of container image immutability.
+If you prefer not to specify an image name, you can leverage
+[image catalogs](image_catalog.md) by simply referencing the PostgreSQL
+major version. Moreover, image catalogs enable you to create custom catalogs
+that point to images matching your specific requirements.
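+
+For instance, a minimal sketch mirroring the `cluster-example-catalog.yaml`
+sample, referencing an image catalog by major version:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  imageCatalogRef:
+    apiGroup: postgresql.k8s.enterprisedb.io
+    kind: ImageCatalog
+    name: image-catalog-example
+    major: 15
+  storage:
+    size: 1Gi
+```
+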
### Labels and annotations
@@ -115,7 +119,8 @@ switchover operations.
EDB Postgres for Kubernetes manages replication slots for all the replicas
in the HA cluster. The implementation is inspired by the previously
proposed patch for PostgreSQL, called
-[failover slots](https://wiki.postgresql.org/wiki/Failover_slots).
+[failover slots](https://wiki.postgresql.org/wiki/Failover_slots), and
+also supports user-defined physical replication slots on the primary.
### Database configuration
diff --git a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx
index 6a363d2d707..0c0aed6dd60 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/pg4k.v1.mdx
@@ -9,6 +9,8 @@ originalFilePath: 'src/pg4k.v1.md'
- [Backup](#postgresql-k8s-enterprisedb-io-v1-Backup)
- [Cluster](#postgresql-k8s-enterprisedb-io-v1-Cluster)
+- [ClusterImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ClusterImageCatalog)
+- [ImageCatalog](#postgresql-k8s-enterprisedb-io-v1-ImageCatalog)
- [Pooler](#postgresql-k8s-enterprisedb-io-v1-Pooler)
- [ScheduledBackup](#postgresql-k8s-enterprisedb-io-v1-ScheduledBackup)
@@ -86,6 +88,62 @@ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-
+
+
+## ClusterImageCatalog
+
+
ClusterImageCatalog is the Schema for the clusterimagecatalogs API
Specification of the desired behavior of the ClusterImageCatalog.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+
+
+
+
+
+
+
+## ImageCatalog
+
+
ImageCatalog is the Schema for the imagecatalogs API
Specification of the desired behavior of the ImageCatalog.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
@@ -1754,6 +1890,28 @@ Defaults to: RuntimeDefault
The tablespaces configuration
+
enablePDB
+bool
+
+
+
Manage the PodDisruptionBudget resources within the cluster. When
+configured as true (default setting), the pod disruption budgets
+will safeguard the primary node from being terminated. Conversely,
+setting it to false will result in the absence of any
+PodDisruptionBudget resource, permitting the shutdown of all nodes
+hosting the PostgreSQL cluster. This latter configuration is
+advisable for any PostgreSQL cluster employed for
+development/staging purposes.
+
## Import
@@ -2921,6 +3154,8 @@ with an explanation of the cause
- [ServiceAccountTemplate](#postgresql-k8s-enterprisedb-io-v1-ServiceAccountTemplate)
+- [ServiceTemplateSpec](#postgresql-k8s-enterprisedb-io-v1-ServiceTemplateSpec)
+
Metadata is a structure similar to the metav1.ObjectMeta, but still
parseable by controller-gen to create a suitable CRD for the user.
The comment of PodTemplateSpec has an explanation of why we are
@@ -3250,6 +3485,69 @@ the operator calls PgBouncer's PAUSE and RESUME comman
+
Specification of the desired behavior of the service.
+More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
+
## TDEConfiguration
diff --git a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx
index 5df10fe24ca..35c66161607 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/postgresql_conf.mdx
@@ -71,6 +71,7 @@ operator by applying the following sections in this order:
The **global default parameters** are:
```text
+archive_mode = 'on'
dynamic_shared_memory_type = 'posix'
logging_collector = 'on'
log_destination = 'csvlog'
@@ -86,6 +87,7 @@ shared_memory_type = 'mmap' # for PostgreSQL >= 12 only
wal_keep_size = '512MB' # for PostgreSQL >= 13 only
wal_keep_segments = '32' # for PostgreSQL <= 12 only
wal_level = 'logical'
+wal_log_hints = 'on'
wal_sender_timeout = '5s'
wal_receiver_timeout = '5s'
```
@@ -116,7 +118,6 @@ The following parameters are **fixed** and exclusively controlled by the operato
```text
archive_command = '/controller/manager wal-archive %p'
-archive_mode = 'on'
full_page_writes = 'on'
hot_standby = 'true'
listen_addresses = '*'
@@ -127,8 +128,6 @@ ssl_ca_file = '/controller/certificates/client-ca.crt'
ssl_cert_file = '/controller/certificates/server.crt'
ssl_key_file = '/controller/certificates/server.key'
unix_socket_directories = '/controller/run'
-wal_level = 'logical'
-wal_log_hints = 'on'
```
Since the fixed parameters are added at the end, they can't be overridden by the
@@ -653,4 +652,3 @@ Users are not allowed to set the following configuration parameters in the
- `unix_socket_directories`
- `unix_socket_group`
- `unix_socket_permissions`
-- `wal_log_hints`
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
index 8aa82012334..14be10f32c4 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/replica_cluster.mdx
@@ -205,33 +205,61 @@ store to fetch the WAL files.
You can check the [sample YAML](../samples/cluster-example-replica-from-volume-snapshot.yaml)
for it in the `samples/` subdirectory.
+## Demoting a Primary to a Replica Cluster
+
+EDB Postgres for Kubernetes provides the functionality to demote a primary cluster to a
+replica cluster. This action is typically planned when transitioning the
+primary role from one data center to another. The process involves demoting the
+current primary cluster (e.g., `cluster-eu-south`) to a replica cluster and
+subsequently promoting the designated replica cluster (e.g.,
+`cluster-eu-central`) to primary when fully synchronized.
+Provided you have defined an external cluster in the current primary `Cluster`
+resource that points to the replica cluster that's been selected to become the
+new primary, all you need to do is to enable replica mode and define the source
+as follows:
+
+```yaml
+ replica:
+ enabled: true
+ source: cluster-eu-central
+```
+
## Promoting the designated primary in the replica cluster
-To promote the **designated primary** to **primary**, all we need to do is to
+To promote a replica cluster (e.g. `cluster-eu-central`) to a primary cluster
+and make the designated primary a real primary, all you need to do is to
disable the replica mode in the replica cluster through the option
-`.spec.replica.enabled`
+`.spec.replica.enabled`:
```yaml
replica:
enabled: false
- source: cluster-example
+ source: cluster-eu-south
```
-Once the replica mode is disabled, the replica cluster and the source cluster
-will become two separate clusters, and the **designated primary** in the replica
-cluster will be promoted to be that cluster's **primary**. We can verify the role
-change using the cnp plugin, checking the status of the cluster which was
-previously the replica:
+If you have first demoted `cluster-eu-south` and waited for
+`cluster-eu-central` to be in sync, once `cluster-eu-central` starts as
+primary, the `cluster-eu-south` cluster will seamlessly start as a replica
+cluster, without the need to re-clone it.
+
+If you disable replica mode without prior demotion, the replica cluster and the
+source cluster will become two separate clusters.
+
+When replica mode is disabled, the **designated primary** in the replica
+cluster will be promoted to be that cluster's **primary**.
+
+You can verify the role change using the `cnp` plugin, checking the status of
+the cluster which was previously the replica:
```shell
-kubectl cnp -n status cluster-replica-example
+kubectl cnp -n status cluster-eu-central
```
!!! Note
- Disabling replication is an **irreversible** operation: once replication is
- disabled and the **designated primary** is promoted to **primary**, the
- replica cluster and the source cluster will become two independent clusters
- definitively.
+ Disabling replication is an **irreversible** operation. Once replication is
+ disabled and the designated primary is promoted to primary, the replica cluster
+ and the source cluster become two independent clusters definitively. Be sure
+ to follow the demotion procedure correctly to avoid unintended consequences.
## Delayed replicas
diff --git a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
index 19af9d44327..ff24b536e26 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/replication.mdx
@@ -195,7 +195,7 @@ As you can imagine, the availability zone is just an example, but you could
customize this behavior based on other labels that describe the node, such
as storage, CPU, or memory.
-## Replication slots for High Availability
+## Replication slots
[Replication slots](https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS)
are a native PostgreSQL feature introduced in 9.4 that provides an automated way
@@ -207,9 +207,19 @@ standby is (temporarily) disconnected.
A replication slot exists solely on the instance that created it, and PostgreSQL
does not replicate it on the standby servers. As a result, after a failover
or a switchover, the new primary does not contain the replication slot from
-the old primary. This can create problems for
-the streaming replication clients that were connected to the old
-primary and have lost their slot.
+the old primary. This can create problems for the streaming replication clients
+that were connected to the old primary and have lost their slot.
+
+EDB Postgres for Kubernetes provides a turn-key solution to synchronize the content of
+physical replication slots from the primary to each standby, addressing two use
+cases:
+
+- the replication slots automatically created for the High Availability of the
+ Postgres cluster (see ["Replication slots for High Availability" below](#replication-slots-for-high-availability) for details)
+- [user-defined replication slots](#user-defined-replication-slots) created on
+ the primary
+
+### Replication slots for High Availability
EDB Postgres for Kubernetes fills this gap by introducing the concept of cluster-managed
replication slots, starting with high availability clusters. This feature
@@ -227,13 +237,13 @@ In EDB Postgres for Kubernetes, we use the terms:
content of the `pg_replication_slots` view in the primary, and updated at regular
intervals using `pg_replication_slot_advance()`.
-This feature, introduced in EDB Postgres for Kubernetes 1.18, is now enabled by default and
-can be disabled via configuration. For details, please refer to the
+This feature is enabled by default and can be disabled via configuration.
+For details, please refer to the
["replicationSlots" section in the API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration).
Here follows a brief description of the main options:
`.spec.replicationSlots.highAvailability.enabled`
-: if true, the feature is enabled (`true` is the default since 1.21)
+: if `true`, the feature is enabled (`true` is the default since 1.21)
`.spec.replicationSlots.highAvailability.slotPrefix`
: the prefix that identifies replication slots managed by the operator
@@ -277,6 +287,63 @@ spec:
size: 1Gi
```
+### User-defined replication slots
+
+Although EDB Postgres for Kubernetes doesn't support a way to declaratively define physical
+replication slots, you can still [create your own slots via SQL](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION).
+
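+For example, a user-defined slot can be created on the primary with the
+built-in function (the slot name here is purely illustrative):
+
+```sql
+-- Create a physical replication slot on the primary
+SELECT pg_create_physical_replication_slot('my_app_slot');
+```
+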
+!!! Info
+ At the moment, we don't have any plans to manage replication slots
+ in a declarative way, but it might change depending on the feedback
+ we receive from users. The reason is that replication slots exist
+ for a specific purpose and each should be managed by a specific application
+ that oversees the entire lifecycle of the slot on the primary.
+
+EDB Postgres for Kubernetes can manage the synchronization of any user-managed physical
+replication slots between the primary and standbys, similarly to what it does
+for the HA replication slots explained above (the only difference is that you
+need to create the replication slot).
+
+This feature is enabled by default (meaning that any replication slot is
+synchronized), but you can disable it or further customize its behavior (for
+example by excluding some slots using regular expressions) through the
+`synchronizeReplicas` stanza. For example:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ replicationSlots:
+ synchronizeReplicas:
+ enabled: true
+ excludePatterns:
+ - "^foo"
+```
+
+For details, please refer to the
+["replicationSlots" section in the API reference](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-ReplicationSlotsConfiguration).
+Here follows a brief description of the main options:
+
+`.spec.replicationSlots.synchronizeReplicas.enabled`
+: When true or not specified, every user-defined replication slot on the
+ primary is synchronized on each standby. If changed to false, the operator will
+ remove any replication slot previously created by itself on each standby.
+
+`.spec.replicationSlots.synchronizeReplicas.excludePatterns`
+: A list of regular expression patterns to match the names of user-defined
+ replication slots to be excluded from synchronization. This can be useful to
+ exclude specific slots based on naming conventions.
+
+!!! Warning
+ Users utilizing this feature should carefully monitor user-defined replication
+ slots to ensure they align with their operational requirements and do not
+ interfere with the failover process.
+
+### Synchronization frequency
+
You can also control the frequency with which a standby queries the
`pg_replication_slots` view on the primary, and updates its local copy of
the replication slots, like in this example:
@@ -290,23 +357,12 @@ spec:
instances: 3
# Reduce the frequency of standby HA slots updates to once every 5 minutes
replicationSlots:
- highAvailability:
- enabled: true
updateInterval: 300
storage:
size: 1Gi
```
-Replication slots must be carefully monitored in your infrastructure. By default,
-we provide the `pg_replication_slots` metric in our Prometheus exporter with
-key information such as the name of the slot, the type, whether it is active,
-the lag from the primary.
-
-!!! Seealso "Monitoring"
- Please refer to the ["Monitoring" section](monitoring.md) for details on
- how to monitor a EDB Postgres for Kubernetes deployment.
-
### Capping the WAL size retained for replication slots
When replication slots is enabled, you might end up running out of disk
@@ -330,3 +386,14 @@ when replication slots support is enabled. For example:
max_slot_wal_keep_size: "10GB"
# ...
```
+
+### Monitoring replication slots
+
+Replication slots must be carefully monitored in your infrastructure. By default,
+we provide the `pg_replication_slots` metric in our Prometheus exporter with
+key information such as the name of the slot, the type, whether it is active,
+and the lag from the primary.
+
+!!! Seealso "Monitoring"
+ Please refer to the ["Monitoring" section](monitoring.md) for details on
+ how to monitor an EDB Postgres for Kubernetes deployment.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
index e68d22cae34..2c23ed4fe4c 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/rolling_update.mdx
@@ -13,6 +13,8 @@ Rolling upgrades are started when:
- the user changes the `imageName` attribute of the cluster specification;
+- the [image catalog](image_catalog.md) is updated with a new image for the major version used by the cluster;
+
- a change in the PostgreSQL configuration requires a restart to be
applied;
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
index f04675c44cb..bf370afd530 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples.mdx
@@ -131,3 +131,8 @@ your PostgreSQL cluster.
: [`cluster-restore-with-tablespaces.yaml`](../samples/cluster-restore-with-tablespaces.yaml)
For a list of available options, see [API reference](pg4k.v1.md).
+
+## Pooler configuration
+
+**Pooler with custom service config**
+: [`pooler-external.yaml`](../samples/pooler-external.yaml)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml
new file mode 100644
index 00000000000..3d033ade9dd
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore-cr.yaml
@@ -0,0 +1,26 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ instances: 3
+
+ storage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+ walStorage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+
+ bootstrap:
+ recovery:
+ volumeSnapshots:
+ storage:
+ name: cluster-example-20231031161103
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ walStorage:
+ name: cluster-example-20231031161103-wal
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml
new file mode 100644
index 00000000000..a9f14f917d1
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis-restore.yaml
@@ -0,0 +1,43 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-restore
+spec:
+ instances: 3
+ imageName: registry.dev:5000/postgresql:16
+
+ storage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+ walStorage:
+ size: 1Gi
+ storageClass: csi-hostpath-sc
+
+ # Backup properties
+ # This assumes a local minio setup
+# backup:
+# barmanObjectStore:
+# destinationPath: s3://backups/
+# endpointURL: http://minio:9000
+# s3Credentials:
+# accessKeyId:
+# name: minio
+# key: ACCESS_KEY_ID
+# secretAccessKey:
+# name: minio
+# key: ACCESS_SECRET_KEY
+# wal:
+# compression: gzip
+
+ bootstrap:
+ recovery:
+ volumeSnapshots:
+ storage:
+ name: snapshot-0bc6095db42768c7a1fe897494a966f541ef5fb29b2eb8e9399d80bd0a32408a-2023-11-13-7.41.53
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+ walStorage:
+ name: snapshot-a67084ba08097fd8c3e34c6afef8110091da67e5895f0379fd2df5b9f73ff524-2023-11-13-7.41.53
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml
new file mode 100644
index 00000000000..0a5ae32f7d9
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-bis.yaml
@@ -0,0 +1,29 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imageName: registry.dev:5000/postgresql:16
+
+ backup:
+ volumeSnapshot:
+ className: csi-hostpath-groupsnapclass
+ #className: csi-hostpath-snapclass
+ groupSnapshot: true
+
+ storage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+ walStorage:
+ storageClass: csi-hostpath-sc
+ size: 1Gi
+ # tablespaces:
+ # first:
+ # storage:
+ # storageClass: csi-hostpath-sc
+ # size: 1Gi
+ # second:
+ # storage:
+ # storageClass: csi-hostpath-sc
+ # size: 1Gi
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml
new file mode 100644
index 00000000000..bbf9232c28b
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-catalog.yaml
@@ -0,0 +1,24 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: ImageCatalog
+metadata:
+ name: image-catalog-example
+spec:
+ images:
+ - image: quay.io/enterprisedb/postgresql:16
+ major: 16
+ - image: quay.io/enterprisedb/postgresql:15
+ major: 15
+---
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: cluster-example
+spec:
+ instances: 3
+ imageCatalogRef:
+ apiGroup: postgresql.k8s.enterprisedb.io
+ kind: ImageCatalog
+ name: image-catalog-example
+ major: 15
+ storage:
+ size: 1Gi
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
index a1c8bb7d269..39a5794da7e 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/cluster-example-full.yaml
@@ -35,7 +35,7 @@ metadata:
name: cluster-example-full
spec:
description: "Example of cluster"
- imageName: quay.io/enterprisedb/postgresql:16.1
+ imageName: quay.io/enterprisedb/postgresql:16.2
# imagePullSecret is only required if the images are located in a private registry
# imagePullSecrets:
# - name: private_registry_access
diff --git a/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml b/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
new file mode 100644
index 00000000000..227fdb61423
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/samples/pooler-external.yaml
@@ -0,0 +1,21 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Pooler
+metadata:
+ name: pooler-example-rw
+spec:
+ cluster:
+ name: cluster-example
+ instances: 3
+ type: rw
+ serviceTemplate:
+ metadata:
+ labels:
+ app: pooler
+ spec:
+ type: LoadBalancer
+ pgbouncer:
+ poolMode: session
+ parameters:
+ max_client_conn: "1000"
+ default_pool_size: "10"
+
\ No newline at end of file
diff --git a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
index 970845a668b..02385d14111 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/scheduling.mdx
@@ -61,7 +61,7 @@ metadata:
name: cluster-example
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:16.1
+ imageName: quay.io/enterprisedb/postgresql:16.2
affinity:
enablePodAntiAffinity: true #default value
diff --git a/product_docs/docs/postgres_for_kubernetes/1/security.mdx b/product_docs/docs/postgres_for_kubernetes/1/security.mdx
index b354c6eb880..02f92ffe2a8 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/security.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/security.mdx
@@ -83,7 +83,8 @@ For OpenShift specificities on this matter, please consult the
The above permissions are exclusively reserved for the operator's service
account to interact with the Kubernetes API server. They are not directly
accessible by the users of the operator that interact only with `Cluster`,
- `Pooler`, `Backup`, and `ScheduledBackup` resources.
+ `Pooler`, `Backup`, `ScheduledBackup`, `ImageCatalog` and
+ `ClusterImageCatalog` resources.
Below we provide some examples and, most importantly, the reasons why
EDB Postgres for Kubernetes requires full or partial management of standard Kubernetes
diff --git a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
index baafac3efb7..36059f2c09a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/ssl_connections.mdx
@@ -176,7 +176,7 @@ Output:
version
--------------------------------------------------------------------------------------
------------------
-PostgreSQL 16.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
+PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
8.3.1-5), 64-bit
(1 row)
```
diff --git a/product_docs/docs/postgres_for_kubernetes/1/tde.mdx b/product_docs/docs/postgres_for_kubernetes/1/tde.mdx
index 13f1850542a..4140eb0767a 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/tde.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/tde.mdx
@@ -5,14 +5,15 @@ originalFilePath: 'src/tde.md'
!!! Important
TDE is available *only* for operands that support it:
- EPAS versions 15 and newer.
+ EPAS and PG Extended, versions 15 and newer.
Transparent Data Encryption, or TDE, is a technology used by several database
vendors to **encrypt data at rest**, i.e. database files on disk.
TDE does not however encrypt data in use.
-TDE is included in EDB Postgres Advanced Server (EPAS), starting with version
-15, and it is supported by the EDB Postgres for Kubernetes operator.
+TDE is included in EDB Postgres Advanced Server and EDB Postgres Extended
+Server from version 15, and is supported by the EDB Postgres for Kubernetes
+operator.
!!! Important
Before you proceed, please take some time to familiarize with the
@@ -23,6 +24,11 @@ Data encryption/decryption is entirely transparent to the user, as it is
managed by the database without requiring any application changes or updated
client drivers.
+!!! Note
+ In the code samples shown below, the `epas` sub-section of `postgresql` in
+ the YAML manifests is used to activate TDE. The `epas` section can be used
+ to enable TDE for PG Extended images as well as for EPAS images.
+
EDB Postgres for Kubernetes provides 3 ways to use TDE:
- using a secret containing the passphrase
diff --git a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
index f826127e879..d2be118ac9d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/troubleshooting.mdx
@@ -221,7 +221,7 @@ Cluster in healthy state
Name: cluster-example
Namespace: default
System ID: 7044925089871458324
-PostgreSQL Image: quay.io/enterprisedb/postgresql:16.1-3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2-3
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
@@ -297,7 +297,7 @@ kubectl describe cluster -n | grep "Image Name"
Output:
```shell
- Image Name: quay.io/enterprisedb/postgresql:16.1-3
+ Image Name: quay.io/enterprisedb/postgresql:16.2-3
```
!!! Note
diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
index 26385fee0ef..b75441ee1b2 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
@@ -16,7 +16,7 @@ the ["Backup on object stores" section](backup_barmanobjectstore.md) to set up
the WAL archive.
!!! Info
- Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
+ Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-barmanobjectstoreconfiguration)
in the API reference for a full list of options.
If required, you can choose to compress WAL files as soon as they
From b80e93d70ff81c5e03faee602a2044e2685dd0ef Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Thu, 25 Apr 2024 11:58:57 -0400
Subject: [PATCH 05/26] Edits to pgd4k PR5524
---
.../1/rel_notes/1_0_rel_notes.mdx | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/rel_notes/1_0_rel_notes.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/rel_notes/1_0_rel_notes.mdx
index 02dfe0fe07d..4e1ab7c13ec 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/rel_notes/1_0_rel_notes.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/rel_notes/1_0_rel_notes.mdx
@@ -21,16 +21,16 @@ to create and manage EDB Postgres Distributed clusters inside Kubernetes with ca
The EDB Postgres Distributed for Kubernetes operator leverages
[EDB Postgres for Kubernetes](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/) (PG4K) and inherits many
of that project's capabilities. EDB Postgres Distributed for Kubernetes version 1.0.0 is based, specifically, on release 1.22 of PG4K.
-Please refer to the [PG4K release notes](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/rel_notes/) for more details.
+See the [PG4K release notes](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/rel_notes/) for more details.
!!!
## Features
-| Component | Description |
-|-----------|----------------------------------------------------------------------------------------------|
-| PGD4K | Deployment of EDB Postgres Distributed clusters with versions 5 and later inside Kubernetes. |
-| PGD4K | Self-healing capabilities such as recovery and restart of failed PGD nodes. |
-| PGD4K | Defined services that allow applications to connect to the write leader of each PGD group. |
-| PGD4K | Implementation of Raft subgroups. |
-| PGD4K | TLS connections and client certificate authentication. |
-| PGD4K | Continuous backup to an S3 compatible object store. |
+| Component | Description |
+|-----------|---------------------------------------------------------------------------------------------|
+| PGD4K | Deployment of EDB Postgres Distributed clusters with versions 5 and later inside Kubernetes |
+| PGD4K | Self-healing capabilities such as recovery and restart of failed PGD nodes |
+| PGD4K | Defined services that allow applications to connect to the write leader of each PGD group |
+| PGD4K | Implementation of Raft subgroups |
+| PGD4K | TLS connections and client certificate authentication |
+| PGD4K | Continuous backup to an S3-compatible object store |
From 89aea0d917354bef2330464ebb841656410add25 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Thu, 25 Apr 2024 19:44:28 +0000
Subject: [PATCH 06/26] Misc corrections and rollbacks
---
.../1/cluster_conf.mdx | 2 +-
.../1/container_images.mdx | 4 -
.../1/failure_modes.mdx | 6 +-
.../docs/postgres_for_kubernetes/1/index.mdx | 2 +
.../1/kubectl-plugin.mdx | 547 ++++++++++++++++--
.../1/wal_archiving.mdx | 2 +-
6 files changed, 491 insertions(+), 72 deletions(-)
diff --git a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
index 0a515fb9465..8b550eb893d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/cluster_conf.mdx
@@ -50,7 +50,7 @@ EDB Postgres for Kubernetes relies on [ephemeral volumes](https://kubernetes.io/
for part of the internal activities. Ephemeral volumes exist for the sole
duration of a pod's life, without persisting across pod restarts.
-# Volume Claim Template for Temporary Storage
+### Volume Claim Template for Temporary Storage
The operator uses by default an `emptyDir` volume, which can be customized by using the `.spec.ephemeralVolumesSizeLimit field`.
This can be overridden by specifying a volume claim template in the `.spec.ephemeralVolumeSource` field.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
index 689f6f2d8e6..6d57d72929f 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/container_images.mdx
@@ -43,10 +43,6 @@ for EDB Postgres for Kubernetes, and publishes them on
## Image tag requirements
-Certainly! Here's an improved version:
-
-## Image Tag Requirements
-
To ensure the operator makes informed decisions, it must accurately detect the
PostgreSQL major version. This detection can occur in two ways:
diff --git a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
index 24771b9e34e..a1aab1641cf 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/failure_modes.mdx
@@ -8,7 +8,7 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime.
!!! Important
In case the failure scenario you are experiencing is not covered by this
- section, please immediately seek for [professional support](https://cloudnative-pg.io/support/).
+ section, please immediately contact EDB for support and assistance.
!!! Seealso "Postgres instance manager"
Please refer to the ["Postgres instance manager" section](instance_manager.md)
@@ -175,8 +175,8 @@ In the case of undocumented failure, it might be necessary to intervene
to solve the problem manually.
!!! Important
- In such cases, please do not perform any manual operation without
- [professional support](https://cloudnative-pg.io/support/).
+ In such cases, please do not perform any manual operation without the
+ support and assistance of the EDB engineering team.
From version 1.11.0 of the operator, you can use the
`k8s.enterprisedb.io/reconciliationLoop` annotation to temporarily disable the
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
index c9d73b9b8d0..7ddf1e5649b 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -80,6 +80,8 @@ and OpenShift. It is designed, developed, and supported by EDB and covers the
full lifecycle of a highly available Postgres database clusters with a
primary/standby architecture, using native streaming replication.
+EDB Postgres for Kubernetes was made generally available on February 4, 2021. Earlier versions were made available to selected customers prior to the GA release.
+
!!! Note
The operator has been renamed from Cloud Native PostgreSQL. Existing users
diff --git a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
index e1982245b0a..b2b010faac1 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/kubectl-plugin.mdx
@@ -34,52 +34,67 @@ them in your systems.
#### Debian packages
-For example, let's install the 1.18.1 release of the plugin, for an Intel based
+For example, let's install the 1.22.2 release of the plugin, for an Intel based
64 bit server. First, we download the right `.deb` file.
```sh
-$ wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.deb
+wget https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.deb
```
Then, install from the local file using `dpkg`:
```sh
-$ dpkg -i kubectl-cnp_1.18.1_linux_x86_64.deb
+dpkg -i kubectl-cnp_1.22.2_linux_x86_64.deb
+__OUTPUT__
(Reading database ... 16102 files and directories currently installed.)
-Preparing to unpack kubectl-cnp_1.18.1_linux_x86_64.deb ...
-Unpacking cnp (1.18.1) over (1.18.1) ...
-Setting up cnp (1.18.1) ...
+Preparing to unpack kubectl-cnp_1.22.2_linux_x86_64.deb ...
+Unpacking cnp (1.22.2) over (1.22.2) ...
+Setting up cnp (1.22.2) ...
```
#### RPM packages
-As in the example for `.deb` packages, let's install the 1.18.1 release for an
+As in the example for `.deb` packages, let's install the 1.22.2 release for an
Intel 64 bit machine. Note the `--output` flag to provide a file name.
-```sh
-curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.18.1/kubectl-cnp_1.18.1_linux_x86_64.rpm --output cnp-plugin.rpm
+``` sh
+curl -L https://github.com/EnterpriseDB/kubectl-cnp/releases/download/v1.22.2/kubectl-cnp_1.22.2_linux_x86_64.rpm \
+ --output kube-plugin.rpm
```
Then install with `yum`, and you're ready to use:
```sh
-$ yum --disablerepo=* localinstall cnp-plugin.rpm
-yum --disablerepo=* localinstall cnp-plugin.rpm
-Failed to set locale, defaulting to C.UTF-8
+yum --disablerepo=* localinstall kube-plugin.rpm
+__OUTPUT__
Dependencies resolved.
-====================================================================================================
- Package Architecture Version Repository Size
-====================================================================================================
+========================================================================================================================
+ Package Architecture Version Repository Size
+========================================================================================================================
Installing:
- cnpg x86_64 1.18.1-1 @commandline 14 M
+ kubectl-cnp x86_64 1.22.2-1 @commandline 17 M
Transaction Summary
-====================================================================================================
+========================================================================================================================
Install 1 Package
-Total size: 14 M
-Installed size: 43 M
+Total size: 17 M
+Installed size: 62 M
Is this ok [y/N]: y
+Downloading Packages:
+Running transaction check
+Transaction check succeeded.
+Running transaction test
+Transaction test succeeded.
+Running transaction
+ Preparing : 1/1
+ Installing : kubectl-cnp-1.22.2-1.x86_64 1/1
+ Verifying : kubectl-cnp-1.22.2-1.x86_64 1/1
+
+Installed:
+ kubectl-cnp-1.22.2-1.x86_64
+
+Complete!
```
### Supported Architectures
@@ -102,6 +117,29 @@ operating system and architectures:
- arm 5/6/7
- arm64
+### Configuring auto-completion
+
+To configure [auto-completion](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_completion/) for the plugin, a helper shell script needs to be
+installed in a directory in your `PATH`. Assuming the latter contains `/usr/local/bin`,
+this can be done with the following commands:
+
+```shell
+cat > kubectl_complete-cnp <<EOF
+#!/usr/bin/env sh
+
+# Call the __complete command passing it all arguments
+kubectl cnp __complete "\$@"
+EOF
+
+chmod +x kubectl_complete-cnp
+
+# Important: the following command may require superuser permission
+sudo mv kubectl_complete-cnp /usr/local/bin
+```
+
+- `--version`: version of the operator to be installed in the
+  `<major>.<minor>.<patch>` format (e.g. `1.22.2`). The default empty value
+  installs the version of the operator that matches the version of the plugin.
- `--watch-namespace`: comma separated string containing the namespaces to
watch (by default all namespaces)
@@ -140,7 +175,7 @@ will install the operator, is as follows:
```shell
kubectl cnp install generate \
-n king \
- --version 1.17 \
+ --version 1.22.2 \
--replicas 3 \
--watch-namespace "albert, bb, freddie" \
> operator.yaml
@@ -149,9 +184,9 @@ kubectl cnp install generate \
The flags in the above command have the following meaning:
- `-n king` install the CNP operator into the `king` namespace
-- `--version 1.17` install the latest patch version for minor version 1.17
+- `--version 1.22.2` install operator version 1.22.2
- `--replicas 3` install the operator with 3 replicas
-- `--watch-namespaces "albert, bb, freddie"` have the operator watch for
+- `--watch-namespace "albert, bb, freddie"` have the operator watch for
changes in the `albert`, `bb` and `freddie` namespaces only
### Status
@@ -187,7 +222,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -232,7 +267,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:16.2
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -722,6 +757,89 @@ items:
"apiVersion": "postgresql.k8s.enterprisedb.io/v1",
```
+### Logs
+
+The `kubectl cnp logs` command allows you to follow the logs of a collection
+of pods related to EDB Postgres for Kubernetes in a single go.
+
+At the moment, it has one available sub-command: `cluster`.
+
+#### Cluster logs
+
+The `cluster` sub-command gathers all the pod logs for a cluster in a single
+stream or file.
+This means that you can get all the pod logs in a single terminal window, with a
+single invocation of the command.
+
+As in all the cnp plugin sub-commands, you can get instructions and help with
+the `-h` flag:
+
+`kubectl cnp logs cluster -h`
+
+The `logs` command will display logs in JSON-lines format, unless the
+`--timestamps` flag is used, in which case a human-readable timestamp will be
+prepended to each line. The lines will then no longer be valid JSON,
+and tools such as `jq` may not work as desired.
+
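+For example, assuming a cluster named `cluster-example`, you can request
+human-readable timestamps as follows:
+
+```shell
+kubectl cnp logs cluster cluster-example --timestamps
+```
+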
+If the `logs cluster` sub-command is given the `-f` flag (aka `--follow`), it
+will follow the cluster pod logs, and will also watch for any new pods created
+in the cluster after the command has been invoked.
+Any new pods found, including pods that have been restarted or re-created,
+will also have their logs followed.
+The logs will be displayed in the terminal's standard-out.
+This command will only exit when the cluster has no more pods left, or when it
+is interrupted by the user.
+
+If `logs` is called without the `-f` option, it will read the logs from all
+cluster pods until the time of invocation and display them in the terminal's
+standard-out, then exit.
+The `-o` or `--output` flag can be provided to specify the name
+of the file where the logs should be saved, instead of displaying over
+standard-out.
+The `--tail` flag can be used to specify how many log lines will be retrieved
+from each pod in the cluster. By default, the `logs cluster` sub-command will
+display all the logs from each pod in the cluster. If combined with the "follow"
+flag `-f`, the number of log lines specified by `--tail` will be retrieved up to the
+current time, and from then on the new logs will be followed.
+
+NOTE: unlike other `cnp` plugin commands, here `-f` is used to denote "follow"
+rather than to specify a file. This follows the convention of `kubectl logs`,
+which takes `-f` to mean the logs should be followed.
+
+Usage:
+
+```shell
+kubectl cnp logs cluster [flags]
+```
+
+Using the `-f` option to follow:
+
+```shell
+kubectl cnp logs cluster cluster-example -f
+```
+
+Using the `--tail` option to display 3 lines from each pod and the `-f` option
+to follow:
+
+```shell
+kubectl cnp logs cluster cluster-example -f --tail 3
+```
+
+``` json
+{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] LOG: ending log output to stderr","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
+{"level":"info","ts":"2023-06-30T13:37:33Z","logger":"postgres","msg":"2023-06-30 13:37:33.142 UTC [26] HINT: Future log output will go to log destination \"csvlog\".","source":"/controller/log/postgres","logging_pod":"cluster-example-3"}
+…
+…
+```
+
+Using the `--output` option to write the logs to a file instead of standard-out:
+
+``` sh
+kubectl cnp logs cluster cluster-example --output my-cluster.log
+
+Successfully written logs to "my-cluster.log"
+```
+
### Destroy
The `kubectl cnp destroy` command helps remove an instance and all the
@@ -826,11 +944,16 @@ kubectl cnp fio -n
Refer to the [Benchmarking fio section](benchmarking.md#fio) for more details.
-### Requesting a new base backup
+### Requesting a new physical backup
The `kubectl cnp backup` command requests a new physical base backup for
an existing Postgres cluster by creating a new `Backup` resource.
+!!! Info
+ From release 1.21, the `backup` command accepts a new flag, `-m`,
+ to specify the backup method.
+ To request a backup using volume snapshots, set `-m volumeSnapshot`.
+
The following example requests an on-demand backup for a given cluster:
```shell
@@ -844,10 +967,17 @@ kubectl cnp backup cluster-example
backup/cluster-example-20230121002300 created
```
-By default, new created backup will use the backup target policy defined
-in cluster to choose which instance to run on. You can also use `--backup-target`
-option to override this policy. please refer to [Backup and Recovery](backup_recovery.md)
-for more information about backup target.
+By default, a newly created backup will use the backup target policy defined
+in the cluster to choose which instance to run on.
+However, you can override this policy with the `--backup-target` option.
+
+In the case of volume snapshot backups, you can also use the `--online` option
+to request an online (hot) backup or an offline (cold) one. Additionally, you can
+tune online backups by explicitly setting the `--immediate-checkpoint` and
+`--wait-for-archive` options.
+
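+For example, assuming a cluster named `cluster-example`, a request for an
+online backup taken as volume snapshots from the primary (the flag values here
+are illustrative) might look like this:
+
+```shell
+kubectl cnp backup cluster-example \
+ -m volumeSnapshot \
+ --online \
+ --backup-target primary
+```
+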
+The ["Backup" section](./backup.md#backup) contains more information about
+the configuration settings.
### Launching psql
@@ -862,7 +992,7 @@ it from the actual pod. This means that you will be using the `postgres` user.
```shell
kubectl cnp psql cluster-example
-psql (15.3)
+psql (16.2 (Debian 16.2-1.pgdg110+1))
Type "help" for help.
postgres=#
@@ -873,7 +1003,7 @@ select to work against a replica by using the `--replica` option:
```shell
kubectl cnp psql --replica cluster-example
-psql (15.3)
+psql (16.2 (Debian 16.2-1.pgdg110+1))
Type "help" for help.
@@ -901,44 +1031,335 @@ kubectl cnp psql cluster-example -- -U postgres
### Snapshotting a Postgres cluster
-The `kubectl cnp snapshot` creates consistent snapshots of a Postgres
-`Cluster` by:
+!!! Warning
+ The `kubectl cnp snapshot` command has been removed.
+ Please use the [`backup` command](#requesting-a-new-physical-backup) to request
+ backups using volume snapshots.
-1. choosing a replica Pod to work on
-2. fencing the replica
-3. taking the snapshot
-4. unfencing the replica
+### Using pgAdmin4 for evaluation/demonstration purposes only
-!!! Warning
- A cluster already having a fenced instance cannot be snapshotted.
+[pgAdmin](https://www.pgadmin.org/) stands as the most popular and feature-rich
+open-source administration and development platform for PostgreSQL.
+For more information on the project, please refer to the official
+[documentation](https://www.pgadmin.org/docs/).
-At the moment, this command can be used only for clusters having at least one
-replica: that replica will be shut down by the fencing procedure to ensure the
-snapshot to be consistent (cold backup). As the development of
-declarative support for Kubernetes' `VolumeSnapshot` API continues,
-this limitation will be removed, allowing you to take online backups
-as business continuity requires.
+Given that the pgAdmin Development Team maintains official Docker container
+images, you can install pgAdmin in your environment as a standard
+Kubernetes deployment.
!!! Important
- Even if the procedure will shut down a replica, the primary
- Pod will not be involved.
+ Deployment of pgAdmin in Kubernetes production environments is beyond the
+ scope of this document and, more broadly, of the EDB Postgres for Kubernetes project.
-The `kubectl cnp snapshot` command requires the cluster name:
+However, **for the purposes of demonstration and evaluation**, EDB Postgres for Kubernetes
+offers a suitable solution. The `cnp` plugin implements the `pgadmin4`
+command, providing a straightforward method to connect to a given database
+`Cluster` and navigate its content in a local environment such as `kind`.
-```shell
-kubectl cnp snapshot cluster-example
+For example, you can install a demo deployment of pgAdmin4 for the
+`cluster-example` cluster as follows:
-waiting for cluster-example-3 to be fenced
-waiting for VolumeSnapshot cluster-example-3-1682539624 to be ready to use
-unfencing pod cluster-example-3
+```sh
+kubectl cnp pgadmin4 cluster-example
```
-The `VolumeSnapshot` resource will be created with an empty
-`VolumeSnapshotClass` reference. That resource is intended by be used by the
-`VolumeSnapshotClass` configured as default.
+This command will produce:
-A specific `VolumeSnapshotClass` can be requested via the `-c` option:
+```output
+ConfigMap/cluster-example-pgadmin4 created
+Deployment/cluster-example-pgadmin4 created
+Service/cluster-example-pgadmin4 created
+Secret/cluster-example-pgadmin4 created
-```shell
-kubectl cnp snapshot cluster-example -c longhorn
+[...]
+```
+
+After deploying pgAdmin, forward the port using kubectl and connect
+through your browser by following the on-screen instructions.
+
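+The on-screen instructions provide the exact steps for your deployment; purely
+as an illustration, a port-forward invocation (the service name and ports are
+assumptions) might look like this:
+
+```sh
+# Forward local port 8080 to the pgAdmin service created by the plugin
+kubectl port-forward service/cluster-example-pgadmin4 8080:80
+```
+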
+![Screenshot of desktop installation of pgAdmin](images/pgadmin4.png)
+
+As usual, you can use the `--dry-run` option to generate the YAML file:
+
+```sh
+kubectl cnp pgadmin4 --dry-run cluster-example
+```
+
+pgAdmin4 can be installed in either desktop or server mode, with the default
+being server.
+
+In `server` mode, authentication is required using a randomly generated password,
+and users must manually specify the database to connect to.
+
+On the other hand, `desktop` mode initiates a pgAdmin web interface without
+requiring authentication. It automatically connects to the `app` database as the
+`app` user, making it ideal for quick demos, such as on a local deployment using
+`kind`:
+
+```sh
+kubectl cnp pgadmin4 --mode desktop cluster-example
```
+
+After concluding your demo, terminate the pgAdmin deployment by
+executing:
+
+```sh
+kubectl cnp pgadmin4 --dry-run cluster-example | kubectl delete -f -
+```
+
+!!! Warning
+ Never deploy pgAdmin in production using the plugin.
+
+### Logical Replication Publications
+
+The `cnp publication` command group is designed to streamline the creation and
+removal of [PostgreSQL logical replication publications](https://www.postgresql.org/docs/current/logical-replication-publication.html).
+Be aware that these commands are primarily intended for assisting in the
+creation of logical replication publications, particularly on remote PostgreSQL
+databases.
+
+!!! Warning
+ It is crucial to have a solid understanding of both the capabilities and
+ limitations of PostgreSQL's native logical replication system before using
+ these commands.
+ In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+
+#### Creating a new publication
+
+To create a logical replication publication, use the `cnp publication create`
+command. The basic structure of this command is as follows:
+
+```sh
+kubectl cnp publication create \
+ --publication <publication_name> \
+ [--external-cluster <external_cluster>]
+ <local_cluster> [options]
+```
+
+There are two primary use cases:
+
+- With `--external-cluster`: Use this option to create a publication on an
+ external cluster (i.e. defined in the `externalClusters` stanza). The commands
+ will be issued from the `<local_cluster>`, but the publication will be for the
+ data in `<external_cluster>`.
+
+- Without `--external-cluster`: Use this option to create a publication in the
+ `<local_cluster>` PostgreSQL `Cluster` (by default, the `app` database).
+
+!!! Warning
+ When connecting to an external cluster, ensure that the specified user has
+ sufficient permissions to execute the `CREATE PUBLICATION` command.
+
+You have several options, similar to the [`CREATE PUBLICATION`](https://www.postgresql.org/docs/current/sql-createpublication.html)
+command, to define the group of tables to replicate. Notable options include:
+
+- If you specify the `--all-tables` option, you create a publication `FOR ALL TABLES`.
+- Alternatively, you can specify multiple occurrences of:
+ - `--table`: Add a specific table (with an expression) to the publication.
+ - `--schema`: Include all tables in the specified database schema (available
+ from PostgreSQL 15).
+
+The `--dry-run` option enables you to preview the SQL commands that the plugin
+will execute.
+
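+For instance, a hypothetical invocation (the publication and table names are
+illustrative) that previews the SQL for a publication limited to two tables on
+the `cluster-example` cluster could be:
+
+```sh
+kubectl cnp publication create cluster-example \
+ --publication=orders_pub \
+ --table=public.orders \
+ --table=public.order_items \
+ --dry-run
+```
+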
+For additional information and detailed instructions, type the following
+command:
+
+```sh
+kubectl cnp publication create --help
+```
+
+##### Example
+
+Given a `source-cluster` and a `destination-cluster`, we would like to create a
+publication for the data on `source-cluster`.
+The `destination-cluster` has an entry in the `externalClusters` stanza pointing
+to `source-cluster`.
+
+We can run:
+
+``` sh
+kubectl cnp publication create destination-cluster \
+ --external-cluster=source-cluster --all-tables
+```
+
+which will create a publication for all tables on `source-cluster`, running
+the SQL commands on the `destination-cluster`.
+
+Or instead, we can run:
+
+``` sh
+kubectl cnp publication create source-cluster \
+ --publication=app --all-tables
+```
+
+which will create a publication named `app` for all the tables in the
+`source-cluster`, running the SQL commands on the source cluster.
+
+!!! Info
+ Two sample files are provided for illustration and inspiration:
+ [logical-source](../samples/cluster-example-logical-source.yaml) and
+ [logical-destination](../samples/cluster-example-logical-destination.yaml).
+
+#### Dropping a publication
+
+The `cnp publication drop` command seamlessly complements the `create` command
+by offering similar key options, including the publication name, cluster name,
+and an optional external cluster. You can drop a `PUBLICATION` with the
+following command structure:
+
+```sh
+kubectl cnp publication drop \
+ --publication <publication_name> \
+ [--external-cluster <external_cluster>]
+ <local_cluster> [options]
+```
+
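+For example, continuing the scenario from the previous section, a hypothetical
+invocation that drops the `app` publication created on `source-cluster` could be:
+
+```sh
+kubectl cnp publication drop source-cluster \
+ --publication=app
+```
+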
+To access further details and precise instructions, use the following command:
+
+```sh
+kubectl cnp publication drop --help
+```
+
+### Logical Replication Subscriptions
+
+The `cnp subscription` command group is a dedicated set of commands designed
+to simplify the creation and removal of
+[PostgreSQL logical replication subscriptions](https://www.postgresql.org/docs/current/logical-replication-subscription.html).
+These commands are specifically crafted to aid in the establishment of logical
+replication subscriptions, especially when dealing with remote PostgreSQL
+databases.
+
+!!! Warning
+ Before using these commands, it is essential to have a comprehensive
+ understanding of both the capabilities and limitations of PostgreSQL's
+ native logical replication system.
+ In particular, be mindful of the [logical replication restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).
+
+In addition to subscription management, we provide a helpful command for
+synchronizing all sequences from the source cluster. While its applicability
+may vary, this command can be particularly useful in scenarios involving major
+upgrades or data import from remote servers.
+
+#### Creating a new subscription
+
+To create a logical replication subscription, use the `cnp subscription create`
+command. The basic structure of this command is as follows:
+
+```sh
+kubectl cnp subscription create \
+ --subscription <subscription_name> \
+ --publication <publication_name> \
+ --external-cluster <external_cluster> \
+ <local_cluster> [options]
+```
+
+This command configures a subscription directed towards the specified
+publication in the designated external cluster, as defined in the
+`externalClusters` stanza of the `<local_cluster>`.
+
+For additional information and detailed instructions, type the following
+command:
+
+```sh
+kubectl cnp subscription create --help
+```
+
+##### Example
+
+As in the section on publications, we have a `source-cluster` and a
+`destination-cluster`, and we have already created a publication called
+`app`.
+
+The following command:
+
+``` sh
+kubectl cnp subscription create destination-cluster \
+ --external-cluster=source-cluster \
+ --publication=app --subscription=app
+```
+
+will create a subscription for `app` on the destination cluster.
+
+!!! Warning
+ Prioritize testing subscriptions in a non-production environment to ensure
+ their effectiveness and identify any potential issues before implementing them
+ in a production setting.
+
+!!! Info
+ Two sample files are provided for illustration and inspiration:
+ [logical-source](../samples/cluster-example-logical-source.yaml) and
+ [logical-destination](../samples/cluster-example-logical-destination.yaml).
+
+#### Dropping a subscription
+
+The `cnp subscription drop` command seamlessly complements the `create` command.
+You can drop a `SUBSCRIPTION` with the following command structure:
+
+```sh
+kubectl cnp subscription drop \
+ --subscription <subscription_name> \
+ <local_cluster> [options]
+```
+
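+For example, continuing the scenario from the previous sections, a hypothetical
+invocation that drops the `app` subscription on `destination-cluster` could be:
+
+```sh
+kubectl cnp subscription drop destination-cluster \
+ --subscription=app
+```
+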
+To access further details and precise instructions, use the following command:
+
+```sh
+kubectl cnp subscription drop --help
+```
+
+#### Synchronizing sequences
+
+One notable constraint of PostgreSQL logical replication, implemented through
+publications and subscriptions, is the lack of sequence synchronization. This
+becomes particularly relevant when utilizing logical replication for live
+database migration, especially to a higher version of PostgreSQL. A crucial
+step in this process involves updating sequences before transitioning
+applications to the new database (*cutover*).
+
+To address this limitation, the `cnp subscription sync-sequences` command
+offers a solution. This command establishes a connection with the source
+database, retrieves all relevant sequences, and subsequently updates local
+sequences with matching identities (based on database schema and sequence
+name).
+
+You can use the command as shown below:
+
+```sh
+kubectl cnp subscription sync-sequences \
+ --subscription <subscription_name> \
+ <local_cluster>
+```
+
+For comprehensive details and specific instructions, use the following
+command:
+
+```sh
+kubectl cnp subscription sync-sequences --help
+```
+
+##### Example
+
+As in the previous sections for publication and subscription, we have
+a `source-cluster` and a `destination-cluster`. The publication and the
+subscription, both called `app`, are already present.
+
+The following command will synchronize the sequences involved in the
+`app` subscription from the source cluster into the destination cluster:
+
+``` sh
+kubectl cnp subscription sync-sequences destination-cluster \
+ --subscription=app
+```
+
+!!! Warning
+ Prioritize testing subscriptions in a non-production environment to
+ guarantee their effectiveness and detect any potential issues before deploying
+ them in a production setting.
+
+## Integration with K9s
+
+The `cnp` plugin can be easily integrated in [K9s](https://k9scli.io/), a
+popular terminal-based UI to interact with Kubernetes clusters.
+
+See [`k9s/plugins.yml`](../samples/k9s/plugins.yml) for details.
diff --git a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
index b75441ee1b2..26385fee0ef 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/wal_archiving.mdx
@@ -16,7 +16,7 @@ the ["Backup on object stores" section](backup_barmanobjectstore.md) to set up
the WAL archive.
!!! Info
- Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-barmanobjectstoreconfiguration)
+ Please refer to [`BarmanObjectStoreConfiguration`](pg4k.v1.md#postgresql-k8s-enterprisedb-io-v1-BarmanObjectStoreConfiguration)
in the API reference for a full list of options.
If required, you can choose to compress WAL files as soon as they
From 1b7363473d0e0b691799d362d12bd1406c61235e Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Thu, 25 Apr 2024 18:59:09 +0000
Subject: [PATCH 07/26] Remove interactive demo - Katacoda is truly dead
---
.../docs/postgres_for_kubernetes/1/index.mdx | 1 -
.../1/interactive_demo.mdx | 536 ------------------
.../postgres_for_kubernetes/1/quickstart.mdx | 10 +-
scripts/source/process-cnp-docs.sh | 4 -
4 files changed, 2 insertions(+), 549 deletions(-)
delete mode 100644 product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
index 7ddf1e5649b..26179ee202d 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/index.mdx
@@ -14,7 +14,6 @@ navigation:
- architecture
- installation_upgrade
- quickstart
- - interactive_demo
- '#Configuration'
- postgresql_conf
- operator_conf
diff --git a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx b/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
deleted file mode 100644
index ad1f860d26f..00000000000
--- a/product_docs/docs/postgres_for_kubernetes/1/interactive_demo.mdx
+++ /dev/null
@@ -1,536 +0,0 @@
----
-title: "Installation, Configuration and Deployment Demo"
-description: "Walk through the process of installing, configuring and deploying the EDB Postgres for Kubernetes Operator via a browser-hosted Kubernetes environment"
-navTitle: Install, Configure, Deploy
-platform: ubuntu
-tags:
- - postgresql
- - cloud-native-postgresql-operator
- - kubernetes
- - k3d
- - live-demo
-katacodaPanel:
- scenario: ubuntu:2004
- initializeCommand: clear; echo -e \\\\033[1mPreparing k3d and kubectl...\\\\n\\\\033[0m; snap install kubectl --classic; wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash; clear; echo -e \\\\033[2mk3d is ready\\ - enjoy Kubernetes\\!\\\\033[0m;
- codelanguages: shell, yaml
-showInteractiveBadge: true
----
-
-Want to see what it takes to get the EDB Postgres for Kubernetes Operator up and running? This section will demonstrate the following:
-
-1. Installing the EDB Postgres for Kubernetes Operator
-2. Deploying a three-node PostgreSQL cluster
-3. Installing and using the kubectl-cnp plugin
-4. Testing failover to verify the resilience of the cluster
-
-It will take roughly 5-10 minutes to work through.
-
-!!!interactive This demo is interactive
- You can follow along right in your browser by clicking the button below. Once the environment initializes, you'll see a terminal open at the bottom of the screen.
-
-
-
-Once [k3d](https://k3d.io/) is ready, we need to start a cluster:
-
-```shell
-k3d cluster create
-__OUTPUT__
-INFO[0000] Prep: Network
-INFO[0000] Created network 'k3d-k3s-default'
-INFO[0000] Created image volume k3d-k3s-default-images
-INFO[0000] Starting new tools node...
-INFO[0001] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.6.0'
-INFO[0001] Creating node 'k3d-k3s-default-server-0'
-INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.27.4-k3s1'
-INFO[0003] Starting Node 'k3d-k3s-default-tools'
-INFO[0005] Creating LoadBalancer 'k3d-k3s-default-serverlb'
-INFO[0006] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.6.0'
-INFO[0011] Using the k3d-tools node to gather environment information
-INFO[0011] HostIP: using network gateway 172.17.0.1 address
-INFO[0011] Starting cluster 'k3s-default'
-INFO[0011] Starting servers...
-INFO[0011] Starting Node 'k3d-k3s-default-server-0'
-INFO[0016] All agents already running.
-INFO[0016] Starting helpers...
-INFO[0016] Starting Node 'k3d-k3s-default-serverlb'
-INFO[0023] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap...
-INFO[0025] Cluster 'k3s-default' created successfully!
-INFO[0025] You can now use it like this:
-kubectl cluster-info
-```
-
-This will create the Kubernetes cluster, and you will be ready to use it.
-Verify that it works with the following command:
-
-```shell
-kubectl get nodes
-__OUTPUT__
-NAME STATUS ROLES AGE VERSION
-k3d-k3s-default-server-0 Ready control-plane,master 17s v1.27.4+k3s1
-```
-
-You will see one node called `k3d-k3s-default-server-0`. If the status isn't yet "Ready", wait for a few seconds and run the command above again.
-
-## Install EDB Postgres for Kubernetes
-
-Now that the Kubernetes cluster is running, you can proceed with EDB Postgres for Kubernetes installation as described in the ["Installation and upgrades"](installation_upgrade.md) section:
-
-```shell
-kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.21.0.yaml
-__OUTPUT__
-namespace/postgresql-operator-system created
-customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
-customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
-customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
-customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
-serviceaccount/postgresql-operator-manager created
-clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
-clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
-configmap/postgresql-operator-default-monitoring created
-service/postgresql-operator-webhook-service created
-deployment.apps/postgresql-operator-controller-manager created
-mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
-validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
-```
-
-And then verify that it was successfully installed:
-
-```shell
-kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
-__OUTPUT__
-NAME READY UP-TO-DATE AVAILABLE AGE
-postgresql-operator-controller-manager 1/1 1 1 52s
-```
-
-## Deploy a PostgreSQL cluster
-
-As with any other deployment in Kubernetes, to deploy a PostgreSQL cluster
-you need to apply a configuration file that defines your desired `Cluster`.
-
-The [`cluster-example.yaml`](../samples/cluster-example.yaml) sample file
-defines a simple `Cluster` using the default storage class to allocate
-disk space:
-
-```yaml
-cat < cluster-example.yaml
-# Example of PostgreSQL cluster
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-metadata:
- name: cluster-example
-spec:
- instances: 3
-
- # Example of rolling update strategy:
- # - unsupervised: automated update of the primary once all
- # replicas have been upgraded (default)
- # - supervised: requires manual supervision to perform
- # the switchover of the primary
- primaryUpdateStrategy: unsupervised
-
- # Require 1Gi of space
- storage:
- size: 1Gi
-EOF
-```
-
-!!! Note "There's more"
- For more detailed information about the available options, please refer
- to the ["API Reference" section](pg4k.v1.md).
-
-In order to create the 3-node PostgreSQL cluster, you need to run the following command:
-
-```shell
-kubectl apply -f cluster-example.yaml
-__OUTPUT__
-cluster.postgresql.k8s.enterprisedb.io/cluster-example created
-```
-
-You can check that the pods are being created with the `get pods` command. It'll take a bit to initialize, so if you run that
-immediately after applying the cluster configuration you'll see the status as `Init:` or `PodInitializing`:
-
-```shell
-kubectl get pods
-__OUTPUT__
-NAME READY STATUS RESTARTS AGE
-cluster-example-1-initdb-sdr25 0/1 PodInitializing 0 20s
-```
-
-...give it a minute, and then check on it again:
-
-```shell
-kubectl get pods
-__OUTPUT__
-NAME READY STATUS RESTARTS AGE
-cluster-example-1 1/1 Running 0 47s
-cluster-example-2 1/1 Running 0 24s
-cluster-example-3 1/1 Running 0 8s
-```
-
-Now we can check the status of the cluster:
-
-
-```shell
-kubectl get cluster cluster-example -o yaml
-__OUTPUT__
-apiVersion: postgresql.k8s.enterprisedb.io/v1
-kind: Cluster
-metadata:
- annotations:
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"postgresql.k8s.enterprisedb.io/v1","kind":"Cluster","metadata":{"annotations":{},"name":"cluster-example","namespace":"default"},"spec":{"instances":3,"primaryUpdateStrategy":"unsupervised","storage":{"size":"1Gi"}}}
- creationTimestamp: "2023-10-18T19:53:06Z"
- generation: 1
- name: cluster-example
- namespace: default
- resourceVersion: "1201"
- uid: 9d712b83-f2ea-4835-8de1-c2cee75bd3c7
-spec:
- affinity:
- podAntiAffinityType: preferred
- topologyKey: ""
- bootstrap:
- initdb:
- database: app
- encoding: UTF8
- localeCType: C
- localeCollate: C
- owner: app
- enableSuperuserAccess: true
- failoverDelay: 0
- imageName: quay.io/enterprisedb/postgresql:15.3
- instances: 3
- logLevel: info
- maxSyncReplicas: 0
- minSyncReplicas: 0
- monitoring:
- customQueriesConfigMap:
- - key: queries
- name: postgresql-operator-default-monitoring
- disableDefaultQueries: false
- enablePodMonitor: false
- postgresGID: 26
- postgresUID: 26
- postgresql:
- parameters:
- archive_mode: "on"
- archive_timeout: 5min
- dynamic_shared_memory_type: posix
- log_destination: csvlog
- log_directory: /controller/log
- log_filename: postgres
- log_rotation_age: "0"
- log_rotation_size: "0"
- log_truncate_on_rotation: "false"
- logging_collector: "on"
- max_parallel_workers: "32"
- max_replication_slots: "32"
- max_worker_processes: "32"
- shared_memory_type: mmap
- shared_preload_libraries: ""
- wal_keep_size: 512MB
- wal_receiver_timeout: 5s
- wal_sender_timeout: 5s
- syncReplicaElectionConstraint:
- enabled: false
- primaryUpdateMethod: restart
- primaryUpdateStrategy: unsupervised
- resources: {}
- startDelay: 30
- stopDelay: 30
- storage:
- resizeInUseVolumes: true
- size: 1Gi
- switchoverDelay: 40000000
-status:
- certificates:
- clientCASecret: cluster-example-ca
- expirations:
- cluster-example-ca: 2024-01-16 19:48:06 +0000 UTC
- cluster-example-replication: 2024-01-16 19:48:06 +0000 UTC
- cluster-example-server: 2024-01-16 19:48:06 +0000 UTC
- replicationTLSSecret: cluster-example-replication
- serverAltDNSNames:
- - cluster-example-rw
- - cluster-example-rw.default
- - cluster-example-rw.default.svc
- - cluster-example-r
- - cluster-example-r.default
- - cluster-example-r.default.svc
- - cluster-example-ro
- - cluster-example-ro.default
- - cluster-example-ro.default.svc
- serverCASecret: cluster-example-ca
- serverTLSSecret: cluster-example-server
- cloudNativePostgresqlCommitHash: c42ca1c2
- cloudNativePostgresqlOperatorHash: 1d51c15adffb02c81dbc4e8752ddb68f709699c78d9c3384ed9292188685971b
- conditions:
- - lastTransitionTime: "2023-10-18T19:54:30Z"
- message: Cluster is Ready
- reason: ClusterIsReady
- status: "True"
- type: Ready
- - lastTransitionTime: "2023-10-18T19:54:30Z"
- message: velero addon is disabled
- reason: Disabled
- status: "False"
- type: k8s.enterprisedb.io/velero
- - lastTransitionTime: "2023-10-18T19:54:30Z"
- message: external-backup-adapter addon is disabled
- reason: Disabled
- status: "False"
- type: k8s.enterprisedb.io/externalBackupAdapter
- - lastTransitionTime: "2023-10-18T19:54:30Z"
- message: external-backup-adapter-cluster addon is disabled
- reason: Disabled
- status: "False"
- type: k8s.enterprisedb.io/externalBackupAdapterCluster
- - lastTransitionTime: "2023-10-18T19:54:31Z"
- message: kasten addon is disabled
- reason: Disabled
- status: "False"
- type: k8s.enterprisedb.io/kasten
- configMapResourceVersion:
- metrics:
- postgresql-operator-default-monitoring: "860"
- currentPrimary: cluster-example-1
- currentPrimaryTimestamp: "2023-10-18T19:53:49.065241Z"
- healthyPVC:
- - cluster-example-1
- - cluster-example-2
- - cluster-example-3
- instanceNames:
- - cluster-example-1
- - cluster-example-2
- - cluster-example-3
- instances: 3
- instancesReportedState:
- cluster-example-1:
- isPrimary: true
- timeLineID: 1
- cluster-example-2:
- isPrimary: false
- timeLineID: 1
- cluster-example-3:
- isPrimary: false
- timeLineID: 1
- instancesStatus:
- healthy:
- - cluster-example-1
- - cluster-example-2
- - cluster-example-3
- latestGeneratedNode: 3
- licenseStatus:
- isImplicit: true
- isTrial: true
- licenseExpiration: "2023-11-17T19:53:06Z"
- licenseStatus: Implicit trial license
- repositoryAccess: false
- valid: true
- managedRolesStatus: {}
- phase: Cluster in healthy state
- poolerIntegrations:
- pgBouncerIntegration: {}
- pvcCount: 3
- readService: cluster-example-r
- readyInstances: 3
- secretsResourceVersion:
- applicationSecretVersion: "832"
- clientCaSecretVersion: "828"
- replicationSecretVersion: "830"
- serverCaSecretVersion: "828"
- serverSecretVersion: "829"
- superuserSecretVersion: "831"
- targetPrimary: cluster-example-1
- targetPrimaryTimestamp: "2023-10-18T19:53:06.981792Z"
- timelineID: 1
- topology:
- instances:
- cluster-example-1: {}
- cluster-example-2: {}
- cluster-example-3: {}
- nodesUsed: 1
- successfullyExtracted: true
- writeService: cluster-example-rw
-```
-
-!!! Note
- By default, the operator will install the latest available minor version
- of the latest major version of PostgreSQL when the operator was released.
- You can override this by setting [the `imageName` key in the `spec` section of
- the `Cluster` definition](pg4k.v1/#clusterspec).
-
-!!! Important
- The immutable infrastructure paradigm requires that you always
- point to a specific version of the container image.
- Never use tags like `latest` or `13` in a production environment
- as it might lead to unpredictable scenarios in terms of update
- policies and version consistency in the cluster.
-
-## Install the kubectl-cnp plugin
-
-EDB Postgres for Kubernetes provides [a plugin for kubectl](kubectl-plugin) to manage a cluster in Kubernetes, along with a script to install it:
-
-```shell
-curl -sSfL \
- https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
- sudo sh -s -- -b /usr/local/bin
-__OUTPUT__
-EnterpriseDB/kubectl-cnp info checking GitHub for latest tag
-EnterpriseDB/kubectl-cnp info found version: 1.21.0 for v1.21.0/linux/x86_64
-EnterpriseDB/kubectl-cnp info installed /usr/local/bin/kubectl-cnp
-```
-
-The `cnp` command is now available in kubectl:
-
-```shell
-kubectl cnp status cluster-example
-__OUTPUT__
-Cluster Summary
-Name: cluster-example
-Namespace: default
-System ID: 7291389121501601807
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
-Primary instance: cluster-example-1
-Primary start time: 2023-10-18 19:53:49 +0000 UTC (uptime 2m32s)
-Status: Cluster in healthy state
-Instances: 3
-Ready instances: 3
-Current Write LSN: 0/6054B60 (Timeline: 1 - WAL File: 000000010000000000000006)
-
-Certificates Status
-Certificate Name Expiration Date Days Left Until Expiration
----------------- --------------- --------------------------
-cluster-example-ca 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-replication 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-server 2024-01-16 19:48:06 +0000 UTC 89.99
-
-Continuous Backup status
-Not configured
-
-Streaming Replication status
-Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
----- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
-cluster-example-2 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0
-cluster-example-3 0/6054B60 0/6054B60 0/6054B60 0/6054B60 00:00:00 00:00:00 00:00:00 streaming async 0
-
-Unmanaged Replication Slot Status
-No unmanaged replication slots found
-
-Instances status
-Name Database Size Current LSN Replication role Status QoS Manager Version Node
----- ------------- ----------- ---------------- ------ --- --------------- ----
-cluster-example-1 29 MB 0/6054B60 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-2 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-3 29 MB 0/6054B60 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0
-```
-
-!!! Note "There's more"
- See [the Cloud Native PostgreSQL Plugin page](kubectl-plugin/) for more commands and options.
-
-## Testing failover
-
-As our status checks show, we're running two replicas - if something happens to the primary instance of PostgreSQL, the cluster will fail over to one of them. Let's demonstrate this by killing the primary pod:
-
-```shell
-kubectl delete pod --wait=false cluster-example-1
-__OUTPUT__
-pod "cluster-example-1" deleted
-```
-
-This simulates a hard shutdown of the server - a scenario where something has gone wrong.
-
-Now if we check the status...
-```shell
-kubectl cnp status cluster-example
-__OUTPUT__
-Cluster Summary
-Name: cluster-example
-Namespace: default
-System ID: 7291389121501601807
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
-Primary instance: cluster-example-2
-Primary start time: 2023-10-18 19:57:07 +0000 UTC (uptime 5s)
-Status: Failing over Failing over from cluster-example-1 to cluster-example-2
-Instances: 3
-Ready instances: 2
-Current Write LSN: 0/7001000 (Timeline: 2 - WAL File: 000000020000000000000007)
-
-Certificates Status
-Certificate Name Expiration Date Days Left Until Expiration
----------------- --------------- --------------------------
-cluster-example-ca 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-replication 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-server 2024-01-16 19:48:06 +0000 UTC 89.99
-
-Continuous Backup status
-Not configured
-
-Streaming Replication status
-Not available yet
-
-Unmanaged Replication Slot Status
-No unmanaged replication slots found
-
-Instances status
-Name Database Size Current LSN Replication role Status QoS Manager Version Node
----- ------------- ----------- ---------------- ------ --- --------------- ----
-cluster-example-2 29 MB 0/7001000 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-3 29 MB 0/70000A0 Standby (file based) OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-1 - - - pod not available BestEffort - k3d-k3s-default-server-0
-```
-
-...the failover process has begun, with the second pod promoted to primary. Once the failed pod has restarted, it will become a replica of the new primary:
-
-```shell
-kubectl cnp status cluster-example
-__OUTPUT__
-Cluster Summary
-Name: cluster-example
-Namespace: default
-System ID: 7291389121501601807
-PostgreSQL Image: quay.io/enterprisedb/postgresql:15.3
-Primary instance: cluster-example-2
-Primary start time: 2023-10-18 19:57:07 +0000 UTC (uptime 1m14s)
-Status: Cluster in healthy state
-Instances: 3
-Ready instances: 3
-Current Write LSN: 0/7004D98 (Timeline: 2 - WAL File: 000000020000000000000007)
-
-Certificates Status
-Certificate Name Expiration Date Days Left Until Expiration
----------------- --------------- --------------------------
-cluster-example-ca 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-replication 2024-01-16 19:48:06 +0000 UTC 89.99
-cluster-example-server 2024-01-16 19:48:06 +0000 UTC 89.99
-
-Continuous Backup status
-Not configured
-
-Streaming Replication status
-Name Sent LSN Write LSN Flush LSN Replay LSN Write Lag Flush Lag Replay Lag State Sync State Sync Priority
----- -------- --------- --------- ---------- --------- --------- ---------- ----- ---------- -------------
-cluster-example-1 0/7004D98 0/7004D98 0/7004D98 0/7004D98 00:00:00 00:00:00 00:00:00 streaming async 0
-cluster-example-3 0/7004D98 0/7004D98 0/7004D98 0/7004D98 00:00:00 00:00:00 00:00:00 streaming async 0
-
-Unmanaged Replication Slot Status
-No unmanaged replication slots found
-
-Instances status
-Name Database Size Current LSN Replication role Status QoS Manager Version Node
----- ------------- ----------- ---------------- ------ --- --------------- ----
-cluster-example-2 29 MB 0/7004D98 Primary OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-1 29 MB 0/7004D98 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0
-cluster-example-3 29 MB 0/7004D98 Standby (async) OK BestEffort 1.20.2 k3d-k3s-default-server-0
-```
-
-
-### Further reading
-
-This is all it takes to get a PostgreSQL cluster up and running, but of course there's a lot more possible - and certainly much more that is prudent before you should ever deploy in a production environment!
-
-- Design goals and possibilities offered by the CloudNativePG Operator: check out the [Architecture](architecture) and [Use cases](use_cases) sections.
-
-- Configuring a secure and reliable system: read through the [Security](security), [Failure Modes](failure_modes) and [Backup and Recovery](backup_recovery) sections.
-
-
diff --git a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
index eb19791ad1f..6d9b01f5d00 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx
@@ -1,6 +1,8 @@
---
title: 'Quickstart'
originalFilePath: 'src/quickstart.md'
+redirects:
+ - ../interactive_demo/
---
This section describes how to test a PostgreSQL cluster on your laptop/computer
@@ -8,14 +10,6 @@ using EDB Postgres for Kubernetes on a local Kubernetes cluster in [Kind](https:
[Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/).
-
-!!! Tip "Live demonstration"
- Don't want to install anything locally just yet? Try a demonstration directly in your browser:
-
- [EDB Postgres for Kubernetes Operator Interactive Quickstart](interactive_demo)
-
-
-
Red Hat OpenShift Container Platform users can test the certified operator for
EDB Postgres for Kubernetes on the [Red Hat OpenShift Local](https://developers.redhat.com/products/openshift-local/overview) (formerly Red Hat CodeReady Containers).
diff --git a/scripts/source/process-cnp-docs.sh b/scripts/source/process-cnp-docs.sh
index 7e8419b1d4a..7f666edb5b7 100755
--- a/scripts/source/process-cnp-docs.sh
+++ b/scripts/source/process-cnp-docs.sh
@@ -30,10 +30,6 @@ cd $SOURCE_CHECKOUT/docs-import/docs
# grab key bit of source for use in docs
cp $SOURCE_CHECKOUT/docs-import/config/manager/default-monitoring.yaml $SOURCE_CHECKOUT/docs-import/docs/src/
-node $DESTINATION_CHECKOUT/scripts/fileProcessor/main.mjs \
- -f "src/**/quickstart.md" \
- -p cnp/add-quickstart-content
-
node $DESTINATION_CHECKOUT/scripts/fileProcessor/main.mjs \
-f "src/**/*.md" \
-p "cnp/add-frontmatters" \
From 878c1472b10af90ae5ac19bb84de8b5c018f9e9e Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Thu, 25 Apr 2024 20:26:19 +0000
Subject: [PATCH 08/26] Release notes for PG4K 1.18.12, 1.21.5, 1.22.3, 1.23.0
---
.../1/rel_notes/1_18_12_rel_notes.mdx | 22 +++++++++++++++++++
.../1/rel_notes/1_21_5_rel_notes.mdx | 12 ++++++++++
.../1/rel_notes/1_22_3_rel_notes.mdx | 12 ++++++++++
.../1/rel_notes/1_23_0_rel_notes.mdx | 12 ++++++++++
.../1/rel_notes/index.mdx | 8 +++++++
5 files changed, 66 insertions(+)
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx
create mode 100644 product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx
new file mode 100644
index 00000000000..a429ce20374
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_18_12_rel_notes.mdx
@@ -0,0 +1,22 @@
+---
+title: "EDB Postgres for Kubernetes 1.18.12 release notes"
+navTitle: "Version 1.18.12"
+---
+
+Released: 24 Apr 2024
+
+EDB Postgres for Kubernetes version 1.18.12 is an LTS release of EDB Postgres for Kubernetes; there is no corresponding upstream release of CloudNativePG.
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type | Description |
+| ------------ | ------------------------------------------------------------------------------------------------------------------------------ |
+| Enhancement | Added upgrade process from 1.18.x LTS to 1.22.x LTS |
+| Enhancement | Documentation for Kubernetes 1.29.x or above ([#3729](https://github.com/cloudnative-pg/cloudnative-pg/pull/3729)) |
+| Bug fix | Properly handle LSN sorting when the LSN is empty on a replica ([#4283](https://github.com/cloudnative-pg/cloudnative-pg/pull/4283)) |
+| Bug fix | Avoids stopping reconciliation loop when there is no instance status available ([#4132](https://github.com/cloudnative-pg/cloudnative-pg/pull/4132)) |
+| Bug fix | Waits for elected replica to be in streaming mode before a switchover ([#4288](https://github.com/cloudnative-pg/cloudnative-pg/pull/4288)) |
+| Bug fix | Allow backup hooks to be called while using Velero backup |
+| Bug fix | Waits for the Restic init container to be completed |
+| Bug fix | Ensure pods with no ownership are deleted during cluster restore ([#4141](https://github.com/cloudnative-pg/cloudnative-pg/pull/4141)) |
+| Security | Updated all Go dependencies to fix any latest security issues |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx
new file mode 100644
index 00000000000..6134ee0fe34
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_21_5_rel_notes.mdx
@@ -0,0 +1,12 @@
+---
+title: "EDB Postgres for Kubernetes 1.21.5 release notes"
+navTitle: "Version 1.21.5"
+---
+
+Released: 23 Apr 2024
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type | Description |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Upstream merge | Merged with community CloudNativePG 1.21.5. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.21/release_notes/v1.21/). |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx
new file mode 100644
index 00000000000..58a233181e6
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_22_3_rel_notes.mdx
@@ -0,0 +1,12 @@
+---
+title: "EDB Postgres for Kubernetes 1.22.3 release notes"
+navTitle: "Version 1.22.3"
+---
+
+Released: 24 Apr 2024
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type | Description |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Upstream merge | Merged with community CloudNativePG 1.22.3. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/). |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx
new file mode 100644
index 00000000000..f143b0c5594
--- /dev/null
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/1_23_0_rel_notes.mdx
@@ -0,0 +1,12 @@
+---
+title: "EDB Postgres for Kubernetes 1.23.0 release notes"
+navTitle: "Version 1.23.0"
+---
+
+Released: 24 Apr 2024
+
+This release of EDB Postgres for Kubernetes includes the following:
+
+| Type | Description |
+| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Upstream merge | Merged with community CloudNativePG 1.23.0. See the community [Release Notes](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/). |
diff --git a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
index 9653bb2319a..5e1ef89fc17 100644
--- a/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
+++ b/product_docs/docs/postgres_for_kubernetes/1/rel_notes/index.mdx
@@ -4,9 +4,12 @@ navTitle: "Release notes"
redirects:
- ../release_notes
navigation:
+- 1_23_0_rel_notes
+- 1_22_3_rel_notes
- 1_22_2_rel_notes
- 1_22_1_rel_notes
- 1_22_0_rel_notes
+- 1_21_5_rel_notes
- 1_21_4_rel_notes
- 1_21_3_rel_notes
- 1_21_2_rel_notes
@@ -26,6 +29,7 @@ navigation:
- 1_19_2_rel_notes
- 1_19_1_rel_notes
- 1_19_0_rel_notes
+- 1_18_12_rel_notes
- 1_18_11_rel_notes
- 1_18_10_rel_notes
- 1_18_9_rel_notes
@@ -91,9 +95,12 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB
| Version | Release date | Upstream merges |
| -------------------------- | ------------ | ------------------------------------------------------------------------------------------- |
+| [1.23.0](1_23_0_rel_notes) | 24 Apr 2024 | Upstream [1.23.0](https://cloudnative-pg.io/documentation/1.23/release_notes/v1.23/) |
+| [1.22.3](1_22_3_rel_notes) | 24 Apr 2024 | Upstream [1.22.3](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) |
| [1.22.2](1_22_2_rel_notes) | 22 Mar 2024 | Upstream [1.22.2](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) |
| [1.22.1](1_22_1_rel_notes) | 02 Feb 2024 | Upstream [1.22.1](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) |
| [1.22.0](1_22_0_rel_notes) | 22 Dec 2023 | Upstream [1.22.0](https://cloudnative-pg.io/documentation/1.22/release_notes/v1.22/) |
+| [1.21.5](1_21_5_rel_notes) | 24 Apr 2024 | Upstream [1.21.5](https://cloudnative-pg.io/documentation/1.21/release_notes/v1.21/) |
| [1.21.4](1_21_4_rel_notes) | 22 Mar 2024 | Upstream [1.21.4](https://cloudnative-pg.io/documentation/1.21/release_notes/v1.21/) |
| [1.21.3](1_21_3_rel_notes) | 02 Feb 2024 | Upstream [1.21.3](https://cloudnative-pg.io/documentation/1.21/release_notes/v1.21/) |
| [1.21.2](1_21_2_rel_notes) | 22 Dec 2023 | Upstream [1.21.2](https://cloudnative-pg.io/documentation/1.21/release_notes/v1.21/) |
@@ -113,6 +120,7 @@ The EDB Postgres for Kubernetes documentation describes the major version of EDB
| [1.19.2](1_19_2_rel_notes) | 27 Apr 2023 | Upstream [1.19.2](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) |
| [1.19.1](1_19_1_rel_notes) | 20 Mar 2023 | Upstream [1.19.1](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) |
| [1.19.0](1_19_0_rel_notes) | 14 Feb 2023 | Upstream [1.19.0](https://cloudnative-pg.io/documentation/1.19/release_notes/v1.19/) |
+| [1.18.12](1_18_12_rel_notes) | 24 Apr 2024 | None |
| [1.18.11](1_18_11_rel_notes) | 22 Mar 2024 | None |
| [1.18.10](1_18_10_rel_notes) | 02 Feb 2024 | None |
| [1.18.9](1_18_9_rel_notes) | 22 Dec 2023 | None |
From 4185eca1ca2b6b94f84d91426adee8a8190cfc18 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Thu, 25 Apr 2024 22:34:24 +0000
Subject: [PATCH 09/26] New featured topics: PG4K-PGD, EPAS AMI instructions
---
src/pages/index.js | 36 +++++++++++++++++++++++-------------
1 file changed, 23 insertions(+), 13 deletions(-)
diff --git a/src/pages/index.js b/src/pages/index.js
index 5e5c889fae8..c395726312a 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -68,24 +68,27 @@ const Page = () => {
- Documentation for the latest version of PGD includes an
- all-new section covering manual configuration and
- installation.
+ EDB Postgres Distributed for Kubernetes is an operator
+ designed to manage PGD workloads on Kubernetes, with traffic
+ routed by PGD Proxy.
Find out more →
@@ -105,21 +108,28 @@ const Page = () => {
-
- Trusted Postgres Architect 23.30
+
+ Advanced Server AWS AMI deployment
- TPA now provides a custom Execution Environment image to be
- used on RedHat Ansible Automation Controller.
+ EDB Postgres Advanced Server Amazon Machine Image (AMI) is a
+ preconfigured template with EDB Postgres Advanced Server 15
+ installed on RHEL 8.
-
+
Find out more →
From 68b465db58d0954458eaf623c508328a716b1f60 Mon Sep 17 00:00:00 2001
From: Josh Heyer
Date: Thu, 25 Apr 2024 16:04:34 -0700
Subject: [PATCH 10/26] Fix link to notifications
---
.../docs/biganimal/release/administering_cluster/projects.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx
index be6464ffa39..61f89f0c03c 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx
@@ -23,7 +23,7 @@ To add a user:
4. Depending on the level of access you want for the user, select the appropriate role.
5. Select **Submit**.
-You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](../notifications/#manage-notifications).
+You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#manage-notifications).
## Creating a project
From aff3972989c733cd9a8958f77d00ecfe8a796822 Mon Sep 17 00:00:00 2001
From: gvasquezvargas
Date: Fri, 26 Apr 2024 11:05:18 +0200
Subject: [PATCH 11/26] Replacing epas 15 mentions in PGD docs
---
product_docs/docs/pgd/5/admin-tpa/installing.mdx | 4 ++--
.../docs/pgd/5/quickstart/connecting_applications.mdx | 4 ++--
.../docs/pgd/5/quickstart/further_explore_failover.mdx | 2 +-
product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx | 8 ++++----
product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx | 8 ++++----
product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx | 8 ++++----
product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx | 2 +-
.../docs/pgd/5/routing/raft/01_raft_subgroups_and_tpa.mdx | 2 +-
8 files changed, 19 insertions(+), 19 deletions(-)
diff --git a/product_docs/docs/pgd/5/admin-tpa/installing.mdx b/product_docs/docs/pgd/5/admin-tpa/installing.mdx
index a351d3e1923..0fd47939242 100644
--- a/product_docs/docs/pgd/5/admin-tpa/installing.mdx
+++ b/product_docs/docs/pgd/5/admin-tpa/installing.mdx
@@ -62,7 +62,7 @@ For example:
[tpa]$ tpaexec configure ~/clusters/speedy \
--architecture PGD-Always-ON \
--platform aws \
- --edb-postgres-advanced 15 \
+ --edb-postgres-advanced 16 \
--redwood \
--location-names eu-west-1 eu-north-1 eu-central-1 \
--data-nodes-per-location 3 \
@@ -76,7 +76,7 @@ The command creates a directory named `~/clusters/speedy` and generates a config
In the example, the options select:
- An AWS deployment (`--platform aws`)
-- EDB Postgres Advanced Server, version 15 and Oracle compatibility (`--edb-postgres-advanced 15` and `--redwood`)
+- EDB Postgres Advanced Server, version 16 and Oracle compatibility (`--edb-postgres-advanced 16` and `--redwood`)
- Three locations (`--location-names eu-west-1 eu-north-1 eu-central-1`)
- Three data nodes at each location (`--data-nodes-per-location 3`)
- Proxy routing policy of global (`--pgd-proxy-routing global`)
diff --git a/product_docs/docs/pgd/5/quickstart/connecting_applications.mdx b/product_docs/docs/pgd/5/quickstart/connecting_applications.mdx
index af13cf1f1e1..b7d18c0cd8b 100644
--- a/product_docs/docs/pgd/5/quickstart/connecting_applications.mdx
+++ b/product_docs/docs/pgd/5/quickstart/connecting_applications.mdx
@@ -144,7 +144,7 @@ After you install `psql` or a similar client, you can connect to the cluster. Ru
```shell
psql -h -p 6432 -U enterprisedb bdrdb
__OUTPUT__
-psql (15.2, server 15.2.0 (Debian 15.2.0-2.buster))
+psql (16.2, server 16.2.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
@@ -161,7 +161,7 @@ By listing all the addresses of proxies as the host, you can ensure that the cli
```shell
psql -h ,, -U enterprisedb -p 6432 bdrdb
__OUTPUT__
-psql (15.2, server 15.2.0 (Debian 15.2.0-2.buster))
+psql (16.2, server 16.2.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
diff --git a/product_docs/docs/pgd/5/quickstart/further_explore_failover.mdx b/product_docs/docs/pgd/5/quickstart/further_explore_failover.mdx
index d556cc179b4..1e2c93ee793 100644
--- a/product_docs/docs/pgd/5/quickstart/further_explore_failover.mdx
+++ b/product_docs/docs/pgd/5/quickstart/further_explore_failover.mdx
@@ -134,7 +134,7 @@ If you want to be sure that this table is replicated, you can connect to another
You'll see a login message similar to this:
```console
-psql.bin (15.2.0 (Debian 15.2.0-2.buster), server 15.2.0 (Debian 15.2.0-2.buster)) SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
+psql.bin (16.2.0, server 16.2.0) SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
You are now connected to database "bdrdb" as user "enterprisedb" on host "kaftan" (address "10.33.25.233") at port "5444".
bdrdb=#
```
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
index b76520c7465..0b390f14746 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx
@@ -139,7 +139,7 @@ tpaexec configure democluster \
--architecture PGD-Always-ON \
--platform aws \
--region eu-west-1 \
- --edb-postgres-advanced 15 \
+ --edb-postgres-advanced 16 \
--redwood \
--location-names dc1 \
--pgd-proxy-routing local \
@@ -164,7 +164,7 @@ By default, TPA configures Debian as the default OS for all nodes on AWS.
Observe that you don't have to deploy PGD to the same platform you're using to run TPA!
-Specify that the data nodes will be running [EDB Postgres Advanced Server v15](/epas/latest/) (`--edb-postgres-advanced 15`) with Oracle compatibility (`--redwood`).
+Specify that the data nodes will be running [EDB Postgres Advanced Server v16](/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes within each location.
@@ -250,7 +250,7 @@ You can now run the `psql` command to access the bdrdb database:
```shell
psql bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
Type "help" for help.
bdrdb=#
@@ -309,7 +309,7 @@ The proxies provide high-availability connections to the cluster of data nodes f
```shell
psql -h kaboom,kaftan,kaolin -p 6432 bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
index a5592cbe1a1..247f411daba 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx
@@ -171,7 +171,7 @@ Run the [`tpaexec configure`](/tpa/latest/tpaexec-configure/) command to generat
tpaexec configure democluster \
--architecture PGD-Always-ON \
--platform docker \
- --edb-postgres-advanced 15 \
+ --edb-postgres-advanced 16 \
--redwood \
--location-names dc1 \
--pgd-proxy-routing local \
@@ -194,7 +194,7 @@ Linux as the default image for all nodes.
Observe that you don't have to deploy PGD to the same platform you're using to run TPA!
-Specify that the data nodes will be running [EDB Postgres Advanced Server v15](/epas/latest/) (`--edb-postgres-advanced 15`) with Oracle compatibility (`--redwood`).
+Specify that the data nodes will be running [EDB Postgres Advanced Server v16](/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes within each location.
@@ -275,7 +275,7 @@ You can now run the `psql` command to access the `bdrdb` database:
```shell
psql bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
Type "help" for help.
bdrdb=#
@@ -334,7 +334,7 @@ The proxies provide high-availability connections to the cluster of data nodes f
```
psql -h kaboom,kaftan,kaolin -p 6432 bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
index ddb994dadd3..dde5ebfa7d6 100644
--- a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
+++ b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx
@@ -124,7 +124,7 @@ Run the [`tpaexec configure`](/tpa/latest/tpaexec-configure/) command to generat
tpaexec configure democluster \
--architecture PGD-Always-ON \
--platform bare \
- --edb-postgres-advanced 15 \
+ --edb-postgres-advanced 16 \
--redwood \
--no-git \
--location-names dc1 \
@@ -136,7 +136,7 @@ You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), whi
For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA will determine the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems.
-Specify that the data nodes will be running [EDB Postgres Advanced Server v15](https://www.enterprisedb.com/docs/epas/latest/) (`--edb-postgres-advanced 15`) with Oracle compatibility (`--redwood`).
+Specify that the data nodes will be running [EDB Postgres Advanced Server v16](https://www.enterprisedb.com/docs/epas/latest/) (`--edb-postgres-advanced 16`) with Oracle compatibility (`--redwood`).
You set the notional location of the nodes to `dc1` using `--location-names`. You then set `--pgd-proxy-routing` to `local` so that proxy routing can route traffic to all nodes within each location.
@@ -297,7 +297,7 @@ You can now run the `psql` command to access the `bdrdb` database:
```shell
psql bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
Type "help" for help.
bdrdb=#
@@ -356,7 +356,7 @@ The proxies provide high-availability connections to the cluster of data nodes f
```shell
psql -h kaboom,kaftan,kaolin -p 6432 bdrdb
__OUTPUT__
-psql (15.2.0, server 15.2.0)
+psql (16.2.0, server 16.2.0)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
index 8de3704586c..be9edc30740 100644
--- a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
+++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
@@ -14,7 +14,7 @@ The highlights of this release include:
* Enhanced routing capabilities
* Unified replication durability configuration
* Support for EDB Advanced Storage Pack
- * Support for TDE with EDB Postgres Advanced 15 and EDB Postgres Extended 15
+ * Support for TDE with EDB Postgres Advanced and EDB Postgres Extended 16
* Integration with OpenTelemetry
* Improved transaction tracking performance (Group Commit, CAMO)
  * Postgres 12 to 15 compatibility
diff --git a/product_docs/docs/pgd/5/routing/raft/01_raft_subgroups_and_tpa.mdx b/product_docs/docs/pgd/5/routing/raft/01_raft_subgroups_and_tpa.mdx
index 33926e7a478..7a93eebbae9 100644
--- a/product_docs/docs/pgd/5/routing/raft/01_raft_subgroups_and_tpa.mdx
+++ b/product_docs/docs/pgd/5/routing/raft/01_raft_subgroups_and_tpa.mdx
@@ -21,7 +21,7 @@ The barman nodes don't participate in the subgroup and, by extension, the Raft g
To create this configuration, you run:
```
-tpaexec configure pgdgroup --architecture PGD-Always-ON --location-names us_east us_west --data-nodes-per-location 3 --epas 15 --no-redwood --enable_proxy_routing local --hostnames-from hostnames.txt
+tpaexec configure pgdgroup --architecture PGD-Always-ON --location-names us_east us_west --data-nodes-per-location 3 --epas 16 --no-redwood --enable_proxy_routing local --hostnames-from hostnames.txt
```
Where `hostnames.txt` contains:
From 7fad33f71845718c56e4bb7e8af1e95b70504f8d Mon Sep 17 00:00:00 2001
From: gvasquezvargas
Date: Fri, 26 Apr 2024 11:10:36 +0200
Subject: [PATCH 12/26] Returning pgd_5.0.0_rel_notes.mdx to original state
---
product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
index be9edc30740..8de3704586c 100644
--- a/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
+++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.0.0_rel_notes.mdx
@@ -14,7 +14,7 @@ The highlights of this release include:
* Enhanced routing capabilities
* Unified replication durability configuration
* Support for EDB Advanced Storage Pack
- * Support for TDE with EDB Postgres Advanced and EDB Postgres Extended 16
+ * Support for TDE with EDB Postgres Advanced 15 and EDB Postgres Extended 15
* Integration with OpenTelemetry
* Improved transaction tracking performance (Group Commit, CAMO)
  * Postgres 12 to 15 compatibility
From 7f53e7b1293695a03cda62b02625d084e9785442 Mon Sep 17 00:00:00 2001
From: Josh Earlenbaugh
Date: Fri, 26 Apr 2024 14:03:49 -0400
Subject: [PATCH 13/26] PGD CLI in BA docs (#5520)
Added a page highlighting the PGD CLI more visibly in the BA docs. This shows explicitly how to connect to a BA cluster using the PGD CLI. It also shows the most common PGD CLI commands that a BA user might be interested in.
---------
Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
---
.../release/using_cluster/pgd_cli_ba.mdx | 114 ++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 product_docs/docs/biganimal/release/using_cluster/pgd_cli_ba.mdx
diff --git a/product_docs/docs/biganimal/release/using_cluster/pgd_cli_ba.mdx b/product_docs/docs/biganimal/release/using_cluster/pgd_cli_ba.mdx
new file mode 100644
index 00000000000..0b517c8340c
--- /dev/null
+++ b/product_docs/docs/biganimal/release/using_cluster/pgd_cli_ba.mdx
@@ -0,0 +1,114 @@
+---
+title: PGD CLI on BigAnimal
+navTitle: PGD CLI on BigAnimal
+deepToC: true
+---
+
+When running a distributed high-availability cluster on BigAnimal, you can use the [PGD CLI](../../../pgd/latest/cli/) to manage cluster operations, such as switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster.
+
+## Installing the PGD CLI
+
+To [install the PGD CLI](../../../pgd/latest/cli/installing_cli/), replace `<your-token>` with your EDB subscription token in the following command for Debian and Ubuntu machines:
+
+```bash
+curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.deb.sh' | sudo -E bash
+sudo apt-get install edb-pgd5-cli
+```
+
+or in this command for RHEL, Rocky, AlmaLinux, or Oracle Linux machines:
+
+```bash
+curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.rpm.sh' | sudo -E bash
+sudo yum install edb-pgd5-cli
+```
+
+## Connecting to your BigAnimal cluster
+
+### Discovering your database connection string
+
+To connect to your distributed high-availability BigAnimal cluster via the PGD CLI, you need to [discover the database connection string](../../../pgd/latest/cli/discover_connections/) from your BigAnimal console:
+
+1. Log into the [BigAnimal clusters](https://portal.biganimal.com/clusters) view.
+1. In the filter, set **Cluster Type** to **Distributed High Availability** to show only clusters that work with PGD CLI.
+1. Select your cluster.
+1. In the view of your cluster, select the **Connect** tab.
+1. Copy the read/write URI from the connection info. This is your connection string.
+
+### Using the PGD CLI with your database connection string
+
+!!! Important
+PGD doesn't prompt for interactive passwords. Accordingly, you need a [`.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) properly configured to allow access to the cluster. Your BigAnimal cluster's connection information page has all the information needed for the file.
+
+Without a properly configured `.pgpass`, you receive a database connection error when using a PGD CLI command, even when using the correct database connection string with the `--dsn` flag.
+!!!
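+
+For example, a `.pgpass` entry for the cluster used in the examples on this page might look like the following sketch. The hostname, database, and user are taken from those examples; substitute your own cluster's host, port, database, user, and password.
+
+```bash
+# Append an entry in hostname:port:database:username:password format,
+# then apply the restrictive permissions libpq requires.
+echo "p-mbx2p83u9n-a.pg.biganimal.io:5432:bdrdb:edb_admin:<your-password>" >> ~/.pgpass
+chmod 0600 ~/.pgpass
+```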
+
+To use the PGD CLI with your database connection string, use the `--dsn` flag with your PGD CLI command:
+
+```bash
+pgd show-nodes --dsn "<your-connection-string>"
+```
+
+## PGD commands in BigAnimal
+
+!!! Note
+There are three EDB Postgres Distributed CLI commands that don't work with distributed high-availability BigAnimal clusters: `create-proxy`, `delete-proxy`, and `alter-proxy-option`. BigAnimal manages these functions, because BigAnimal runs on Kubernetes and it's best practice to have the Kubernetes operator handle them.
+!!!
+
+The following are examples of the most common PGD CLI commands used with a BigAnimal cluster.
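+
+To avoid repeating the full connection string, you can optionally keep it in a shell variable and pass that to each command. This is a convenience only, not a PGD CLI requirement; the DSN shown is the one used throughout these examples.
+
+```bash
+# Store the cluster's read/write URI once and reuse it with the --dsn flag.
+export PGD_DSN="postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require"
+pgd check-health --dsn "$PGD_DSN"
+```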
+
+### `pgd check-health`
+
+`pgd check-health` reports status and relevant messages for the clock skew of node pairs, node accessibility, the current Raft leader, replication slot health, and version consistency:
+
+```
+$ pgd check-health --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require"
+__OUTPUT__
+Check Status Message
+----- ------ -------
+ClockSkew Ok All BDR node pairs have clockskew within permissible limit
+Connection Ok All BDR nodes are accessible
+Raft Warning There is no RAFT_LEADER, an election might be in progress
+Replslots Ok All BDR replication slots are working correctly
+Version Ok All nodes are running same BDR versions
+```
+
+### `pgd show-nodes`
+
+`pgd show-nodes` returns all the nodes in the DHA cluster and their summaries, including name, node ID, group, and current/target state:
+
+```
+$ pgd show-nodes --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require"
+__OUTPUT__
+Node Node ID Group Type Current State Target State Status Seq ID
+---- ------- ----- ---- ------------- ------------ ------ ------
+p-mbx2p83u9n-a-1 3537039754 dc1 data ACTIVE ACTIVE Up 1
+p-mbx2p83u9n-a-2 3155790619 p-mbx2p83u9n-a data ACTIVE ACTIVE Up 2
+p-mbx2p83u9n-a-3 2604177211 p-mbx2p83u9n-a data ACTIVE ACTIVE Up 3
+```
+
+### `pgd show-groups`
+
+`pgd show-groups` returns all groups in your DHA BigAnimal cluster. It also notes which node is the current write leader of each group:
+
+
+```
+$ pgd show-groups --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require"
+__OUTPUT__
+Group Group ID Type Parent Group Location Raft Routing Write Leader
+----- -------- ---- ------------ -------- ---- ------- ------------
+world 3239291720 global true true p-mbx2p83u9n-a-1
+dc1 4269540889 data p-mbx2p83u9n-a false false
+p-mbx2p83u9n-a 2800873689 data world true true p-mbx2p83u9n-a-3
+```
+
+### `pgd switchover`
+
+`pgd switchover` manually changes the write leader of the group, and can be used to simulate a [failover](../../../pgd/latest/quickstart/further_explore_failover).
+
+```
+$ pgd switchover --group-name world --node-name p-mbx2p83u9n-a-2 --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require"
+__OUTPUT__
+switchover is complete
+```
+
+See the [PGD CLI command reference](../../../pgd/latest/cli/command_ref/) for the full range of PGD CLI commands and their descriptions.
From 6f337a60c6123b5137d619e0da47d1a3a1a888c7 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Mon, 29 Apr 2024 07:41:55 -0400
Subject: [PATCH 14/26] Fix release years that were changed for 4.0/4.1
Back in commit 0fcdb9f1c37e24e8ca7e2e80c228e368dd5c7237, the release years for efm versions 4.0 and 4.1 got bumped up by a year (so that 4.0 and 4.1 were actually released AFTER 4.2). Fixing this based on the diff of the commit above and the dates of the changes in efm code itself.
---
product_docs/docs/efm/4/efm_rel_notes/index.mdx | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/efm/4/efm_rel_notes/index.mdx b/product_docs/docs/efm/4/efm_rel_notes/index.mdx
index 0d1e8d7c6a6..47785741f2c 100644
--- a/product_docs/docs/efm/4/efm_rel_notes/index.mdx
+++ b/product_docs/docs/efm/4/efm_rel_notes/index.mdx
@@ -16,8 +16,8 @@ about the release that introduced the feature.
| [4.4](06_efm_44_rel_notes) | 05 Jan 2022|
| [4.3](07_efm_43_rel_notes) | 18 Dec 2021|
| [4.2](08_efm_42_rel_notes) | 19 Apr 2021 |
-| [4.1](09_efm_41_rel_notes) | 11 Dec 2021|
-| [4.0](10_efm_40_rel_notes) | 02 Sep 2021 |
+| [4.1](09_efm_41_rel_notes) | 11 Dec 2020|
+| [4.0](10_efm_40_rel_notes) | 02 Sep 2020 |
From cb019c838a643c3c99a83dbb22b7d2a450bd9d5a Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Mon, 29 Apr 2024 15:33:33 +0100
Subject: [PATCH 15/26] Typo fix - no ticket
---
product_docs/docs/pgd/4/bdr/nodes.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/4/bdr/nodes.mdx b/product_docs/docs/pgd/4/bdr/nodes.mdx
index dd194998393..db806f1ae03 100644
--- a/product_docs/docs/pgd/4/bdr/nodes.mdx
+++ b/product_docs/docs/pgd/4/bdr/nodes.mdx
@@ -348,7 +348,7 @@ For these reasons, we generally recommend to use either logical standby nodes
or a subscribe-only group instead of physical standby nodes. They both
have better operational characteristics in comparison.
-You can can manually ensure the group slot is advanced on all nodes
+You can manually ensure the group slot is advanced on all nodes
(as much as possible), which helps hasten the creation of BDR-related
replication slots on a physical standby using the following SQL syntax:
From 737ef74877bf8d627c43161fa6cbc5010269b6ff Mon Sep 17 00:00:00 2001
From: Josh Earlenbaugh
Date: Mon, 29 Apr 2024 14:53:30 -0400
Subject: [PATCH 16/26] added --wait flag to helm upgrade command. (#5558)
---
.../1/installation_upgrade.mdx | 1 +
1 file changed, 1 insertion(+)
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
index 58770407271..dade65a4f22 100644
--- a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
+++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx
@@ -34,6 +34,7 @@ Make sure to replace your repo and token in the following command:
```console
helm upgrade --dependency-update \
--install edb-pg4k-pgd \
+ --wait \
--namespace pgd-operator-system \
--create-namespace \
edb/edb-postgres-distributed-for-kubernetes \
From 14e74c4f86e06aa2dd17a537bd8543876be035d6 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Mon, 4 Mar 2024 12:16:35 +0000
Subject: [PATCH 17/26] First commit with initial changes for PGD 4
Signed-off-by: Dj Walker-Morgan
---
.../docs/pgd/4/admin-manual/index.mdx | 18 +
.../installing/01-provisioning-hosts.mdx | 76 ++++
.../installing/02-install-postgres.mdx | 66 ++++
.../03-configuring-repositories.mdx | 156 ++++++++
.../installing/04-installing-software.mdx | 332 ++++++++++++++++++
.../installing/05-creating-cluster.mdx | 161 +++++++++
.../installing/06-check-cluster.mdx | 210 +++++++++++
.../installing/07-configure-proxies.mdx | 265 ++++++++++++++
.../installing/08-using-pgd-cli.mdx | 251 +++++++++++++
.../installing/images/edbrepos2.0.png | 3 +
.../pgd/4/admin-manual/installing/index.mdx | 53 +++
11 files changed, 1591 insertions(+)
create mode 100644 product_docs/docs/pgd/4/admin-manual/index.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/08-using-pgd-cli.mdx
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/images/edbrepos2.0.png
create mode 100644 product_docs/docs/pgd/4/admin-manual/installing/index.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/index.mdx b/product_docs/docs/pgd/4/admin-manual/index.mdx
new file mode 100644
index 00000000000..3e14d6d1f40
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/index.mdx
@@ -0,0 +1,18 @@
+---
+title: Manual Installation and Administration
+navTitle: Manually
+---
+
+This section of the manual covers how to manually deploy and administer EDB Postgres Distributed 4.
+
+* [Installing](installing) works through the steps needed to:
+ * Provision hosts
+ * Install Postgres
+ * Configure repositories
+ * Install the PGD software
+ * Create a cluster
+ * Check a cluster
+ * Configure PGD proxies
+ * Install and use PGD CLI
+
+The installing section builds an example cluster that's used in later examples.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx b/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
new file mode 100644
index 00000000000..79b0a10d8a4
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
@@ -0,0 +1,76 @@
+---
+title: Step 1 - Provisioning Hosts
+navTitle: Provisioning Hosts
+deepToC: true
+---
+
+## Provisioning hosts
+
+The first step in the process of deploying PGD is to provision and configure hosts.
+
+You can deploy to cloud virtual machine instances, on-premises virtual machines, or on-premises physical hardware, in each case with Linux installed.
+
+Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and deployment platform you select, the result of provisioning a machine must be a Linux system that you can access over SSH as a user with superuser, administrator, or sudo privileges.
+
+Each provisioned machine must be able to connect to every other machine you're provisioning for your cluster.
+
+In cloud deployments, this connectivity can be over the public network or over a VPC.
+
+On-premises deployments should connect over the local network.
+
+!!! Note Cloud provisioning guides
+
+If you are new to cloud provisioning, these guides may provide assistance:
+
+ Vendor | Platform | Guide
+ ------ | -------- | ------
+ Amazon | AWS | [Tutorial: Get started with Amazon EC2 Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html)
+ Microsoft | Azure | [Quickstart: Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu)
+ Google | GCP | [Create a Linux VM instance in Compute Engine](https://cloud.google.com/compute/docs/create-linux-vm-instance)
+
+!!!
+
+### Configuring hosts
+
+#### Create an admin user
+
+We recommend that you configure an admin user for each provisioned instance.
+The admin user must have superuser or sudo (to superuser) privileges.
+We also recommend that the admin user be configured for passwordless SSH access using certificates.
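+
+As a minimal sketch of this setup, assuming a RHEL-style host and SSH keys rather than certificates (adjust the user name, host name, and key handling to your environment):
+
+```
+# On the provisioned host: create the admin user and grant passwordless sudo.
+sudo useradd -m admin
+echo "admin ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/admin
+sudo chmod 0440 /etc/sudoers.d/admin
+
+# From your workstation: copy your public key to enable passwordless SSH.
+ssh-copy-id admin@host-one
+```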
+
+#### Ensure networking connectivity
+
+With the admin user created, ensure that each machine can communicate with the other machines you are provisioning.
+
+In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced
+Server, 5432 for EDB Postgres Extended and Community PostgreSQL) should be open
+to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be
+open to any applications which will connect to the cluster. Port 6432 is typically
+used for PGD Proxy.
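+
+If the hosts run a local firewall, you also need to open these ports there. As a sketch, assuming firewalld (the default on RHEL-family systems):
+
+```
+# Open the Postgres and PGD Proxy ports, then reload the firewall rules.
+sudo firewall-cmd --permanent --add-port=5444/tcp   # EDB Postgres Advanced Server
+sudo firewall-cmd --permanent --add-port=6432/tcp   # PGD Proxy
+sudo firewall-cmd --reload
+```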
+
+## Worked example
+
+For the example in this section, we have provisioned three hosts with Red Hat Enterprise Linux 9.
+
+* host-one
+* host-two
+* host-three
+
+Each is configured with an admin user named `admin`.
+
+These hosts were configured in the cloud, so each host has both a public and a private IP address.
+
+ Name | Public IP | Private IP
+------|-----------|----------------------
+ host-one | 172.24.117.204 | 192.168.254.166
+ host-two | 172.24.113.247 | 192.168.254.247
+ host-three | 172.24.117.23 | 192.168.254.135
+
+For our example cluster, we have also edited `/etc/hosts` to use those private IP addresses:
+
+```
+192.168.254.166 host-one
+192.168.254.247 host-two
+192.168.254.135 host-three
+```
+
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
new file mode 100644
index 00000000000..7622c3be8bb
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
@@ -0,0 +1,66 @@
+---
+title: Step 2 - Installing Postgres
+navTitle: Installing Postgres
+deepToC: true
+---
+
+## Installing Postgres
+
+You will need to install Postgres on all the hosts.
+
+An EDB account is required to use the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can get installation instructions.
+Select your platform and Postgres edition.
+You're presented with two steps of instructions: the first covers how to configure the required package repository, and the second covers how to install the packages from that repository.
+
+Run both steps.
+
+## Worked example
+
+In our example, we will be installing EDB Postgres Advanced Server 14 on Red Hat Enterprise Linux 9 (RHEL 9).
+
+### EDB account
+
+You'll need an EDB account to install both Postgres and PGD.
+
+Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform and then scroll down the list to select the Postgres version you wish to install:
+
+* EDB Postgres Advanced Server (up to and including version 14)
+* EDB Postgres Extended (up to and including version 14)
+* PostgreSQL (up to and including version 14)
+
+!!! Note
+PGD 4 does not support Postgres versions 15, 16, and later
+!!!
+
+Upon selecting the version of the Postgres server you want, two steps will be displayed.
+
+
+### 1: Configuring repositories
+
+For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. The EDB Repos 2.0 site automatically inserts your EDB repository token into these scripts.
+In our examples, it's shown as `XXXXXXXXXXXXXXXX`.
+
+On each provisioned host, either run the automatic repository installation script which will look like this:
+
+```shell
+curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/setup.rpm.sh' | sudo -E bash
+```
+
+Or use the manual installation steps which look like this:
+
+```shell
+dnf install yum-utils
+rpm --import 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/gpg.E71EB0829F1EF813.key'
+curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/config.rpm.txt?distro=el&codename=9' > /tmp/enterprise.repo
+dnf config-manager --add-repo '/tmp/enterprise.repo'
+dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-enterprise'
+```
+
+### 2: Install Postgres
+
+For step 2, we just run the command to install the packages.
+
+```
+sudo dnf -y install edb-as14-server
+```
+
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx b/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
new file mode 100644
index 00000000000..835259f024e
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
@@ -0,0 +1,156 @@
+---
+title: Step 3 - Configuring PGD repositories
+navTitle: Configuring PGD repositories
+deepToC: true
+---
+
+## Configuring PGD repositories
+
+Installing and running PGD requires that you configure repositories so that the system can download and install the appropriate packages.
+
+The following operations should be carried out on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types) such as witness or subscriber-only nodes.
+
+* Use your EDB account.
+ * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
+
+* Set environment variables.
+ * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token:
+
+ ```
+ export EDB_SUBSCRIPTION_TOKEN=
+ ```
+
+* Configure the repository.
+ * Run the automated installer to install the repositories:
+
+ !!! Note Red Hat
+ ```
+ curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.rpm.sh" | sudo -E bash
+ ```
+ !!!
+
+ !!! Note Ubuntu/Debian
+ ```
+ curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.deb.sh" | sudo -E bash
+ ```
+ !!!
+
+## Worked example
+
+### Use your EDB account
+
+You'll need an EDB account to install Postgres Distributed.
+
+Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token.
+
+On your first visit to this page, select **Request Access** to generate your repo token.
+
+![EDB Repos 2.0](images/edbrepos2.0.png)
+
+Copy the token to your clipboard using the **Copy Token** button and store it safely.
+
+
+### Set environment variables
+
+Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the value of your EDB repo token, obtained in the [EDB account](#use-your-edb-account) step.
+
+```
+export EDB_SUBSCRIPTION_TOKEN=<your-token>
+```
+
+You can add this to your `.bashrc` script or similar shell profile to ensure it's always set.
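+
+For example, assuming bash and using a placeholder for the token value:
+
+```
+echo 'export EDB_SUBSCRIPTION_TOKEN=<your-token>' >> ~/.bashrc
+```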
+
+!!! Note
+Your preferred platform may support storing this variable as a secret, which can appear as an environment variable. If so, don't add the setting to `.bashrc`. Instead, add it to your platform's secret manager.
+!!!
+
+### Configure the repository
+
+All the software you need is available from the EDB Postgres Distributed package repository.
+You can simply download and run a script to configure the EDB Postgres Distributed repository,
+or you can download, inspect, and then run that same script.
+The following instructions also include the essential steps the scripts take, for anyone who wants to run or automate the installation manually.
+
+
+#### RHEL/Other RHEL-based
+
+You can autoinstall with automated OS detection:
+
+```
+curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.rpm.sh" | sudo -E bash
+```
+
+If you wish to inspect the script that's generated for you, run:
+
+```
+curl -1sLfO "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.rpm.sh"
+```
+
+Then inspect the resulting `setup.rpm.sh` file. When you are happy to proceed, run:
+
+```
+sudo -E bash setup.rpm.sh
+```
+
+If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide:
+
+You need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you're using. Here we set it for RHEL-compatible Linux version 9:
+
+```
+export DISTRO="el"
+export CODENAME="9"
+```
+
+Now install the yum-utils package:
+
+```
+sudo dnf install -y yum-utils
+```
+
+The next step will import a GPG key for the repositories:
+
+```
+sudo rpm --import "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/gpg.44D45428437EAD1B.key"
+```
+
+Now we can import the repository details, add them to the local configuration, and enable the repository:
+
+```
+curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/config.rpm.txt?distro=$DISTRO&codename=$CODENAME" > /tmp/enterprise.repo
+sudo dnf config-manager --add-repo '/tmp/enterprise.repo'
+sudo dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-postgres_distributed'
+```
+
+
+
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
new file mode 100644
index 00000000000..3f5279c71fd
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
@@ -0,0 +1,332 @@
+---
+title: Step 4 - Installing the PGD software
+navTitle: Installing PGD software
+deepToC: true
+---
+
+## Installing the PGD software
+
+With the repositories configured, you can now install the Postgres Distributed software.
+These steps must be carried out on each host before proceeding to the next step.
+
+* **Install the packages**
+  * Install the PGD packages, which include a server-specific BDR package and the generic PGD CLI package (`edb-bdr4-` and `edb-pgd-cli`).
+
+
+* **Ensure the Postgres database server has been initialized and started.**
+ * Use `systemctl status ` to check the service is running
+ * If not, initialize the database and start the service
+
+
+* **Configure the BDR extension**
+ * Add the BDR extension (`$libdir/bdr`) at the start of the shared_preload_libraries setting in `postgresql.conf`.
+ * Set the `wal_level` GUC variable to `logical` in `postgresql.conf`.
+ * Turn on commit timestamp tracking by setting `track_commit_timestamp` to `'on'` in `postgresql.conf`.
+ * Raise the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.
+ !!! Note The `max_worker_processes` value
+ The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors.
+ To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+ The value of 16 was calculated for the size of cluster we are deploying and must be raised for larger clusters.
+ !!!
+  * Set a password on the EnterpriseDB/Postgres user.
+ * Add rules to `pg_hba.conf` to allow nodes to connect to each other.
+    * Ensure that these lines are present in `pg_hba.conf`:
+ ```
+ host all all all md5
+ host replication all all md5
+ ```
+ * Add a `.pgpass` file to allow nodes to authenticate each other.
+ * Configure a user with sufficient privileges to be able to log into the other nodes.
+ * See [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the Postgres documentation for more on the `.pgpass` file.
+
+
+* **Restart the server.**
+ * Verify the restarted server is running with the modified settings and the bdr extension is available
+
+
+* **Create the replicated database.**
+ * Log into the server's default database (`edb` for EPAS, `postgres` for PGE and Community).
+ * Use `CREATE DATABASE bdrdb` to create the default PGD replicated database.
+ * Log out and then log back in to `bdrdb`.
+ * Use `CREATE EXTENSION bdr` to enable the BDR extension and PGD to run on that database.
+
+
+We will look in detail at the steps for EDB Postgres Advanced Server in the worked example below.
+
+If you are installing PGD with EDB Postgres Extended Server or Community PostgreSQL, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for PostgreSQL](#installing-pgd-for-postgresql).
+
+## Worked example
+
+### Install the packages
+
+The first step is to install the packages. For each Postgres package, there is an `edb-bdr4-` package to go with it.
+For example, if we are installing EDB Postgres Advanced Server (EPAS) version 14, we install `edb-bdr4-epas14`.
+
+There is one other package to install:
+
+- `edb-pgd-cli` for the PGD command line tool.
+
+To install all of these packages on a RHEL or RHEL compatible Linux, run:
+
+```
+sudo dnf -y install edb-bdr4-epas14 edb-pgd-cli
+```
+
+### Ensure the database is initialized and started
+
+If it wasn't initialized and started by the database's package initialization (or you are repeating the process), you will need to initialize and start the server.
+
+To see if the server is running, you can check the service. The service name for EDB Postgres Advanced Server is `edb-as-14`, so run:
+
+```
+sudo systemctl status edb-as-14
+```
+
+If the server is not running, this will respond with:
+
+```
+○ edb-as-14.service - EDB Postgres Advanced Server 14
+     Loaded: loaded (/usr/lib/systemd/system/edb-as-14.service; disabled; preset: disabled)
+     Active: inactive (dead)
+```
+
+The "Active: inactive (dead)" tells us we will need to initialize and start the server.
+
+You will need to know the path to the setup script for your particular Postgres flavor.
+
+For EDB Postgres Advanced Server, this script can be found in `/usr/edb/as14/bin` as `edb-as-14-setup`.
+This command needs to be run with the `initdb` parameter and we need to pass an option setting the database to use UTF-8.
+
+```
+sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as14/bin/edb-as-14-setup initdb
+```
+
+Once the database is initialized, we start it, which enables us to continue configuring the BDR extension.
+
+```
+sudo systemctl start edb-as-14
+```
+
+### Configure the BDR extension
+
+Installing EDB Postgres Advanced Server creates a system user `enterprisedb` with admin capabilities when connected to the database. We will be using this user to configure the BDR extension.
+
+#### Preload the BDR library
+
+We want the bdr library to be preloaded with other libraries.
+EPAS has a number of libraries already preloaded, so we have to prefix the existing list with the BDR library.
+
+```
+echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+```
+
+!!!tip
+This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the postgresql.conf file, which is owned by another user.
+!!!
+
+#### Set the `wal_level`
+
+The BDR extension needs to set the server to perform logical replication. We do this by setting `wal_level` to `logical`.
+
+```
+echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+
+```
+
+#### Enable commit timestamp tracking
+
+The BDR extension also needs the commit timestamp tracking enabled.
+
+```
+echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+
+```
+
+#### Raise `max_worker_processes`
+
+To communicate between multiple nodes, Postgres Distributed nodes run more worker processes than usual.
+The default limit (8) is too low even for a small cluster.
+
+The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors.
+To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+
+For this example, with a 3 node cluster, we are using the value of 16.
+
+Raise the maximum number of worker processes to 16 with this command:
+
+```
+echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+
+```
+
+
+This value must be raised for larger clusters.
+
+#### Add a password to the Postgres enterprisedb user
+
+To allow connections between nodes, a password needs to be set on the Postgres enterprisedb user.
+For this example, we are using the password `secret`.
+Select a different password for your deployments.
+You will need this password when we get to [Creating the PGD Cluster](05-creating-cluster).
+
+```
+sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'"
+
+```
+
+#### Enable inter-node authentication in pg_hba.conf
+
+Out of the box, Postgres allows local authentication and connections with the database but not external network connections.
+To enable external connections, edit `pg_hba.conf` and add appropriate rules, including rules for the replication users.
+To simplify the process, use this command:
+
+```
+echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/edb/as14/data/pg_hba.conf
+
+```
+
+This appends
+
+```
+host all all all md5
+host replication all all md5
+
+```
+
+to `pg_hba.conf`, which enables the nodes to replicate.
+
+#### Enable authentication between nodes
+
+As part of the process of connecting nodes for replication, PGD logs into other nodes.
+It performs that login as the user that Postgres is running under.
+For EPAS, this is the `enterprisedb` user.
+That user needs credentials to log into the other nodes.
+We supply these credentials using the `.pgpass` file, which needs to reside in the user's home directory.
+The home directory for `enterprisedb` is `/var/lib/edb`.
+
+Run this command to create the file:
+
+```
+echo -e "*:*:*:enterprisedb:secret" | sudo -u enterprisedb tee /var/lib/edb/.pgpass; sudo chmod 0600 /var/lib/edb/.pgpass
+
+```
+
+You can read more about the `.pgpass` file in [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the PostgreSQL documentation.
+
+### Restart the server
+
+After all these configuration changes, we recommend restarting the server:
+
+```
+sudo systemctl restart edb-as-14
+
+```
+
+#### Check the extension has been installed
+
+At this point, it's worth checking that the extension is available and that the configuration has been loaded correctly. You can query the `pg_available_extensions` table for the bdr extension like this:
+
+```
+sudo -u enterprisedb psql edb -c "select * from pg_available_extensions where name like 'bdr'"
+
+```
+
+This should return an entry for the extension and its version.
+
+```
+ name | default_version | installed_version | comment
+------+-----------------+-------------------+-------------------------------------------
+ bdr | 4.3.3 | | Bi-Directional Replication for PostgreSQL
+(1 row)
+```
+
+You can also confirm the other server settings using this command:
+
+```
+sudo -u enterprisedb psql edb -c "show all" | grep -e wal_level -e track_commit_timestamp -e max_worker_processes
+
+```
+
+### Create the replicated database
+
+The server is now prepared for PGD.
+Next, we need to create a database named `bdrdb` and, once logged into it, install the bdr extension.
+
+```
+sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb"
+sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr"
+
+```
+
+Finally, test the connection by logging into the server.
+
+```
+sudo -u enterprisedb psql bdrdb
+```
+
+You should be connected to the server.
+Execute the `\dx` command to list the installed extensions.
+
+```
+bdrdb=# \dx
+ List of installed extensions
+ Name | Version | Schema | Description
+------------------+---------+------------+--------------------------------------------------
+ bdr | 4.3.3 | pg_catalog | Bi-Directional Replication for PostgreSQL
+ edb_dblink_libpq | 1.0 | pg_catalog | EnterpriseDB Foreign Data Wrapper for PostgreSQL
+ edb_dblink_oci | 1.0 | pg_catalog | EnterpriseDB Foreign Data Wrapper for Oracle
+ edbspl | 1.0 | pg_catalog | EDB-SPL procedural language
+ plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
+```
+
+Notice that the bdr extension is listed in the table, showing it is installed.
+
+## Summaries
+
+### Installing PGD for EDB Postgres Advanced Server
+
+These are all the commands used in this section gathered together for your convenience.
+
+```
+sudo dnf -y install edb-bdr4-epas14 edb-pgd-cli
+sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as14/bin/edb-as-14-setup initdb
+sudo systemctl start edb-as-14
+echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
+sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'"
+echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/lib/edb/as14/data/pg_hba.conf
+echo -e "*:*:*:enterprisedb:secret" | sudo -u enterprisedb tee /var/lib/edb/.pgpass; sudo chmod 0600 /var/lib/edb/.pgpass
+sudo systemctl restart edb-as-14
+sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb"
+sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr"
+sudo -u enterprisedb psql bdrdb
+
+```
+
+### Installing PGD for EDB Postgres Extended Server
+
+If installing PGD with EDB Postgres Extended Server, there are a number of differences from the EPAS installation.
+
+* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number).
+* A different setup utility should be called: `/usr/edb/pgeNN/bin/edb-pge-NN-setup`.
+* The service name is `edb-pge-NN`.
+* The system user is `postgres` (not `enterprisedb`).
+* The home directory for the `postgres` user is `/var/lib/pgsql`.
+* There are no pre-existing libraries to be added to `shared_preload_libraries`.
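+
+As a rough sketch, here's how these differences apply to the EPAS commands above for PGE version 14. The package, path, and service names are inferred from the patterns listed in this section (and `PGSETUP_INITDB_OPTIONS` is assumed to work the same way), so verify them against your repository before use:
+
+```
+sudo dnf -y install edb-bdr4-pgextended14 edb-pgd-cli
+sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/pge14/bin/edb-pge-14-setup initdb
+sudo systemctl start edb-pge-14
+```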
+
+
+### Installing PGD for PostgreSQL
+
+If installing PGD with PostgreSQL, there are a number of differences from the EPAS installation.
+
+* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number).
+* A different setup utility should be called: `/usr/pgsql-NN/bin/postgresql-NN-setup`.
+* The service name is `postgresql-NN`.
+* The system user is `postgres` (not `enterprisedb`).
+* The home directory for the `postgres` user is `/var/lib/pgsql`.
+* There are no pre-existing libraries to be added to `shared_preload_libraries`.
+
+
+
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
new file mode 100644
index 00000000000..b14caa5dac4
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
@@ -0,0 +1,161 @@
+---
+title: Step 5 - Creating the PGD Cluster
+navTitle: Creating the Cluster
+deepToC: true
+---
+
+## Creating the PGD cluster
+
+* **Create connection strings for each node**.
+For each node, we want to create a connection string that allows PGD to perform replication.
+
+ The connection string is a key/value string which starts with a `host=` and the IP address of the host (or if you have resolvable named hosts, the name of the host).
+
+ That is followed by the name of the database; `dbname=bdrdb` as we created a `bdrdb` database when [installing the software](04-installing-software).
+
+ We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and Community PostgreSQL.
+
+
+* **Prepare the first node.**
+To create the cluster, we select one of the hosts and log into its Postgres server's `bdrdb` database.
+
+
+* **Create the first node.**
+ Run `bdr.create_node` and give the node a name and its connection string where *other* nodes may connect to it.
+ * Create the top-level group.
+ Create a top-level group for the cluster with `bdr.create_node_group` giving it a single parameter, the name of the top-level group.
+ * Create a sub-group.
+ Create a sub-group as a child of the top-level group with `bdr.create_node_group` giving it two parameters, the name of the sub-group and the name of the parent (and top-level) group.
+ This initializes the first node.
+
+
+* **Adding the second node.**
+ * Create the second node.
+ Log into another initialized node's `bdrdb` database.
+ Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
+ * Join the second node to the cluster
+ Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
+
+
+* **Adding the third node.**
+ * Create the third node
+ Log into another initialized node's `bdrdb` database.
+ Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
+ * Join the third node to the cluster
+ Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
+
+
+## Worked example
+
+So far, we have:
+
+* Created three hosts.
+* Installed a Postgres server on each host.
+* Installed Postgres Distributed on each host.
+* Configured the Postgres server to work with PGD on each host.
+
+To create the cluster, we tell `host-one`'s Postgres instance that it's a PGD node named `node-one` and create PGD groups on that node.
+Then we tell `host-two`'s and `host-three`'s Postgres instances that they're PGD nodes named `node-two` and `node-three` and that they should join a group on `node-one`.
+
+### Create connection strings for each node
+
+We calculate the connection strings for each of the nodes in advance.
+Below are the connection strings for our three-node example:
+
+| Name | Node Name | Private IP | Connection string |
+| ---------- | ---------- | --------------- | -------------------------------------- |
+| host-one | node-one | 192.168.254.166 | host=host-one dbname=bdrdb port=5444 |
+| host-two | node-two | 192.168.254.247 | host=host-two dbname=bdrdb port=5444 |
+| host-three | node-three | 192.168.254.135 | host=host-three dbname=bdrdb port=5444 |
+
+### Preparing the first node
+
+Log into host-one's Postgres server.
+
+```
+ssh admin@host-one
+sudo -iu enterprisedb psql bdrdb
+```
+
+### Create the first node
+
+Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it.
+
+```
+select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
+```
+
+#### Create the top-level group
+
+Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. For our example, we will create a top-level group named `pgd`.
+
+```
+select bdr.create_node_group('pgd');
+```
+
+#### Create a sub-group
+
+Using sub-groups to organize your nodes is preferred, as it allows services like PGD Proxy, which we configure later, to coordinate their operations.
+In a larger PGD installation, multiple sub-groups can exist, providing organizational grouping that enables geographical mapping of clusters and localized resilience.
+For that reason, in this example, we create a sub-group for our first nodes to enable simpler expansion and use of PGD Proxy.
+
+Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a sub-group of the top-level group.
+The sub-group name is the first parameter, and the parent group is the second.
+For our example, we will create a sub-group `dc1` as a child of `pgd`.
+
+
+```
+select bdr.create_node_group('dc1','pgd');
+```
+
+### Adding the second node
+
+Log into host-two's Postgres server
+
+```
+ssh admin@host-two
+sudo -iu enterprisedb psql bdrdb
+```
+
+#### Create the second node
+
+We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+
+```
+select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
+```
+
+#### Join the second node to the cluster
+
+Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), we can ask node-two to join node-one's `dc1` group. The function takes the connection string of a node already in the group as its first parameter and the group name as its second.
+
+```
+select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
+```
+
+### Adding the third node
+
+Log into host-three's Postgres server
+
+```
+ssh admin@host-three
+sudo -iu enterprisedb psql bdrdb
+```
+
+#### Create the third node
+
+We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+
+```
+select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
+```
+
+#### Join the third node to the cluster
+
+Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), we can ask node-three to join node-one's `dc1` group. The function takes the connection string of a node already in the group as its first parameter and the group name as its second.
+
+```
+select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
+```
+
+We have now created a PGD cluster.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
new file mode 100644
index 00000000000..df92b8e1f99
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
@@ -0,0 +1,210 @@
+---
+title: Step 6 - Checking the cluster
+navTitle: Checking the cluster
+deepToC: true
+---
+
+## Checking the cluster
+
+
+With the cluster up and running, it's worth running some basic checks to see how effectively it's replicating.
+
+The following example shows one quick way to do this, but make sure that any testing you perform is appropriate for your use case.
+
+* **Preparation**
+ * Ensure the cluster is ready
+ * Log into the database on host-one/node-one
+ * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`
+ * When the query returns the cluster is ready
+
+
+* **Create data**
+  The simplest way to test that the cluster is replicating is to log into one node, create a table, and populate it.
+ * On node-one create a table
+ ```sql
+ CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
+ ```
+ * On node-one populate the table
+ ```sql
+ INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
+ ```
+ * On node-one monitor performance
+ ```sql
+ select * from bdr.node_replication_rates;
+ ```
+ * On node-one get a sum of the value column (for checking)
+ ```sql
+ select COUNT(*),SUM(value) from quicktest;
+ ```
+* **Check data**
+ * Log into node-two
+ Log into the database on host-two/node-two
+ * On node-two get a sum of the value column (for checking)
+ ```sql
+ select COUNT(*),SUM(value) from quicktest;
+ ```
+ * Compare with the result from node-one
+ * Log into node-three
+ Log into the database on host-three/node-three
+ * On node-three get a sum of the value column (for checking)
+ ```sql
+ select COUNT(*),SUM(value) from quicktest;
+ ```
+ * Compare with the result from node-one and node-two
+
+## Worked example
+
+### Preparation
+
+Log into host-one's Postgres server.
+```
+ssh admin@host-one
+sudo -iu enterprisedb psql bdrdb
+```
+
+This is your connection to PGD's node-one.
+
+#### Ensure the cluster is ready
+
+To ensure that the cluster is ready to go, run:
+
+```
+select bdr.wait_slot_confirm_lsn(NULL, NULL);
+```
+
+This query will block while the cluster is busy initializing and return when the cluster is ready.
+
+In another window, log into host-two's Postgres server
+
+```
+ssh admin@host-two
+sudo -iu enterprisedb psql bdrdb
+```
+
+### Create data
+
+#### On node-one create a table
+
+Run
+
+```sql
+CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
+```
+
+#### On node-one populate the table
+
+```
+INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
+```
+
+This will generate a table of 10000 rows of random values.
+
+#### On node-one monitor performance
+
+As soon as possible, run:
+
+```sql
+select * from bdr.node_replication_rates;
+```
+
+You should see statistics on how quickly that data has been replicated to the other two nodes.
+
+```console
+bdrdb=# select * from bdr.node_replication_rates;
+ peer_node_id | target_name | sent_lsn | replay_lsn | replay_lag | replay_lag_bytes | replay_lag_size | apply_rate | catchup_interv
+al
+--------------+-------------+-----------+------------+------------+------------------+-----------------+------------+---------------
+---
+ 1954860017 | node-three | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13682 | 00:00:00
+ 2299992455 | node-two | 0/DDAA908 | 0/DDAA908 | 00:00:00 | 0 | 0 bytes | 13763 | 00:00:00
+(2 rows)
+```
+
+And it's already replicated.
+
+#### On node-one get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+to get some values from the generated data:
+
+```sql
+bdrdb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count | sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+### Check data
+
+#### Log into host-two's Postgres server.
+```
+ssh admin@host-two
+sudo -iu enterprisedb psql bdrdb
+```
+
+This is your connection to PGD's node-two.
+
+#### On node-two get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+to get node-two's values for the generated data:
+
+```sql
+bdrdb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count | sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+#### Compare with the result from node-one
+
+The values should be identical.
+
+You can repeat the process with node-three, or generate new data on any node and see it replicate to the other nodes.
+
+#### Log into host-three's Postgres server.
+```
+ssh admin@host-three
+sudo -iu enterprisedb psql bdrdb
+```
+
+This is your connection to PGD's node-three.
+
+#### On node-three get a checksum
+
+Run:
+
+```sql
+select COUNT(*),SUM(value) from quicktest;
+```
+
+to get node-three's values for the generated data:
+
+```sql
+bdrdb=# select COUNT(*),SUM(value) from quicktest;
+__OUTPUT__
+ count | sum
+--------+-----------
+ 100000 | 498884606
+(1 row)
+```
+
+#### Compare with the result from node-one and node-two
+
+The values should be identical on all three nodes.
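+
+To see replication in the other direction, generate new data on node-two or node-three and check it from node-one. Here's a minimal sketch that reuses the same `quicktest` table and the same `INSERT` used earlier:
+
+```sql
+-- On node-two (or node-three), add more rows
+INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
+-- Then, back on node-one, rerun the checksum query and compare the results
+select COUNT(*),SUM(value) from quicktest;
+```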
+
+
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx b/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
new file mode 100644
index 00000000000..fe05ab07d0a
--- /dev/null
+++ b/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
@@ -0,0 +1,265 @@
+---
+title: Step 7 - Configure proxies
+navTitle: Configure proxies
+deepToC: true
+---
+
+
+## Configure proxies
+
+PGD can use proxies to direct traffic to one of the cluster's nodes, selected automatically by the cluster.
+There are performance and availability reasons for using a proxy:
+
+* Performance: By directing all traffic, and in particular write traffic, to one node, the node can resolve write conflicts locally and more efficiently.
+* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can automatically direct new traffic to a newly selected write leader.
+
+It's best practice to configure PGD Proxy for clusters to enable this behavior.
+
+### Configure the cluster for proxies
+
+To set up a proxy, you first need to prepare the cluster and the sub-group the proxies will work with (a condensed SQL sketch follows this list):
+
+* Log in and set the `enable_raft` and `enable_proxy_routing` node group options to `true` for the sub-group. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the sub-group name, option name, and new value as parameters.
+* Create as many uniquely named proxies as you plan to deploy, using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the sub-group it should be attached to.
+* Create a `pgdproxy` user on the cluster with a password (or other authentication).
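+
+In outline, and using placeholder names for the sub-group, proxy, and password, the SQL involved looks like this (the worked example below uses the example cluster's actual names):
+
+```
+SELECT bdr.alter_node_group_option('sub-group-name', 'enable_raft', 'true');
+SELECT bdr.alter_node_group_option('sub-group-name', 'enable_proxy_routing', 'true');
+SELECT bdr.create_proxy('proxy-name','sub-group-name');
+CREATE USER pgdproxy PASSWORD 'proxypassword';
+GRANT bdr_superuser TO pgdproxy;
+```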
+
+### Configure each host as a proxy
+
+Once the cluster is ready, configure each host to run pgd-proxy (a condensed shell sketch follows this list):
+
+* Create a `pgdproxy` local user.
+* Create a `.pgpass` file for that user, which allows it to log into the cluster as `pgdproxy`.
+* Modify the systemd service file for pgd-proxy to use the pgdproxy user.
+* Create a proxy config file for the host that lists the connection strings for all the nodes in the sub-group, specifies the name the proxy uses when connected, and gives the endpoint connection string the proxy accepts connections on.
+* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`.
+* Restart the systemd service and check its status.
+* Log into the proxy and verify its operation.
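+
+In shell terms, the host setup amounts to something like the following sketch, where `proxypassword` stands in for the password you chose for the `pgdproxy` database user:
+
+```
+sudo adduser pgdproxy
+echo -e "*:*:*:pgdproxy:proxypassword" | sudo tee /home/pgdproxy/.pgpass
+sudo chown pgdproxy /home/pgdproxy/.pgpass
+sudo chmod 0600 /home/pgdproxy/.pgpass
+sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service
+sudo systemctl daemon-reload
+# ...write the proxy configuration to /etc/edb/pgd-proxy/pgd-proxy-config.yml...
+sudo systemctl restart pgd-proxy
+sudo systemctl status pgd-proxy
+```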
+
+Further detail on all these steps is included in the worked example.
+
+## Worked example
+
+## Preparing for proxies
+
+For proxies to function, the `dc1` subgroup must enable Raft and routing.
+
+Log into any node in the cluster, using psql to connect to the bdrdb database as the `enterprisedb` user, and execute:
+
+```
+SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true');
+SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true');
+```
+
+The [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view can be used to check the status of options previously set with `bdr.alter_node_group_option()`:
+
+```sql
+SELECT node_group_name, enable_proxy_routing, enable_raft
+ FROM bdr.node_group_summary
+ WHERE parent_group_name IS NOT NULL;
+__OUTPUT__
+ node_group_name | enable_proxy_routing | enable_raft
+-----------------+----------------------+-------------
+ dc1 | t | t
+(1 row)
+
+bdrdb=#
+```
+
+
+Next, create a PGD proxy within the cluster using the `bdr.create_proxy` function.
+This function takes two parameters: the proxy's unique name and the group it should be a proxy for.
+
+In our example, we want a proxy on each host in the `dc1` sub-group:
+
+```
+SELECT bdr.create_proxy('pgd-proxy-one','dc1');
+SELECT bdr.create_proxy('pgd-proxy-two','dc1');
+SELECT bdr.create_proxy('pgd-proxy-three','dc1');
+```
+
+The [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view can be used to check that the proxies were created:
+
+```sql
+SELECT proxy_name, node_group_name
+ FROM bdr.proxy_config_summary;
+__OUTPUT__
+ proxy_name | node_group_name
+-----------------+-----------------
+ pgd-proxy-one | dc1
+ pgd-proxy-two | dc1
+ pgd-proxy-three | dc1
+
+bdrdb=#
+```
+
+## Create a pgdproxy user on the database
+
+Create a user named `pgdproxy` and give it a password. In this example, we use `proxysecret`.
+
+On any node, log into the `bdrdb` database as the enterprisedb (or postgres) user and run:
+
+```
+CREATE USER pgdproxy PASSWORD 'proxysecret';
+GRANT bdr_superuser TO pgdproxy;
+```
+
+## Create a pgdproxy user on each host
+
+```
+sudo adduser pgdproxy
+```
+
+This user needs credentials to connect to the server.
+Create a `.pgpass` file with the `proxysecret` password in it,
+then lock down the `.pgpass` file so that it's accessible only by its owner:
+
+```
+echo -e "*:*:*:pgdproxy:proxysecret" | sudo tee /home/pgdproxy/.pgpass
+sudo chown pgdproxy /home/pgdproxy/.pgpass
+sudo chmod 0600 /home/pgdproxy/.pgpass
+```
+
+## Configure the systemd service on each host
+
+Switch the service file from using root to using the pgdproxy user:
+
+```
+sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service
+```
+
+Reload the systemd daemon.
+
+```
+sudo systemctl daemon-reload
+```
+
+## Create a proxy config file for each host
+
+The proxy configuration file is slightly different for each host.
+It's a YAML file that contains a cluster object, which in turn has three properties:
+
+* The name of the PGD cluster's top-level group (as `name`)
+* An array of endpoints of databases (as `endpoints`)
+* The proxy definition object with a name and endpoint (as `proxy`)
+
+The first two properties are the same for all hosts:
+
+```
+cluster:
+ name: pgd
+ endpoints:
+ - host=host-one dbname=bdrdb port=5444
+ - host=host-two dbname=bdrdb port=5444
+ - host=host-three dbname=bdrdb port=5444
+```
+
+Remember that host-one, host-two, and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running.
+We use the name of the host, not the node, for the endpoint connection.
+
+Also note that the endpoints in this example specify port=5444.
+This is necessary for EDB Postgres Advanced Server instances.
+For EDB Postgres Extended and Community PostgreSQL, this can be omitted.
+
+
+The third property, `proxy`, has a `name` property and an `endpoint` property.
+The `name` property must be one of the names created with `bdr.create_proxy` earlier, and it's different on each host.
+The `endpoint` property is a connection string that defines how the proxy presents itself to connecting clients.
+A proxy can't listen on the same port as the Postgres server and, ideally, should be on a commonly used port that's different from the one used for direct connections, even when no Postgres server is running on the host.
+We typically use port 6432 for PGD proxies.
+
+```
+ proxy:
+ name: pgd-proxy-one
+ endpoint: "host=localhost dbname=bdrdb port=6432"
+```
+
+In this case, by using `localhost` in the endpoint, we specify that this proxy listens on the host where it's running.
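+
+Putting the two fragments together, host-one's complete configuration file would look something like this sketch (assembled from the fragments above; only the proxy `name` value changes from host to host):
+
+```
+cluster:
+  name: pgd
+  endpoints:
+    - host=host-one dbname=bdrdb port=5444
+    - host=host-two dbname=bdrdb port=5444
+    - host=host-three dbname=bdrdb port=5444
+  proxy:
+    name: pgd-proxy-one
+    endpoint: "host=localhost dbname=bdrdb port=6432"
+```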
+
+## Install a PGD proxy configuration on each host
+
+For each host, create the `/etc/edb/pgd-proxy` directory:
+
+```
+sudo mkdir -p /etc/edb/pgd-proxy
+```
+
+Then on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory.
+
+For our example, this could be run on host-one to create the file.
+
+```
+cat <
Date: Mon, 4 Mar 2024 15:04:19 +0000
Subject: [PATCH 18/26] Updates to the cli docs removing features
Signed-off-by: Dj Walker-Morgan
---
.../installing/08-using-pgd-cli.mdx | 131 +++++-------------
1 file changed, 32 insertions(+), 99 deletions(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/08-using-pgd-cli.mdx b/product_docs/docs/pgd/4/admin-manual/installing/08-using-pgd-cli.mdx
index e0c1ff043b0..ac0dc433666 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/08-using-pgd-cli.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/08-using-pgd-cli.mdx
@@ -26,14 +26,16 @@ We recommend the first option, as the other options don't scale well with multip
* If PGD CLI has already been installed move to the next step.
* For any system, repeat the [configure repositories](03-configuring-repositories) step on that system.
* Then run the package installation command appropriate for that platform.
- * RHEL and derivatives: `sudo dnf install edb-pgd5-cli`
- * Debian, Ubuntu and derivatives: `sudo apt-get install edb-pgd5-cli`
+ * RHEL and derivatives: `sudo dnf install edb-pgd-cli`
+ * Debian, Ubuntu and derivatives: `sudo apt-get install edb-pgd-cli`
* Create a configuration file
* YAML file which specifies the cluster and endpoints the PGD CLI application should use.
* Install the configuration file.
- * Copy the YAML configuraiton file to a default config directory /etc/edb/pgd-cli/ as pgd-cli-config.yml.
+  * Copy the YAML configuration file to a default config directory /etc/edb/ as pgd-config.yml.
* Repeat this on any system where you want to run PGD CLI.
-* Run pgd-cli.
+* Add `/usr/local/bin/` to the PATH for any user wanting to use PGD CLI.
+  * Add `/usr/local/bin` to the path in your `.bashrc` file (see the example after this list).
+* Run pgd-cli with the `pgd` command.
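+
+For example, adding a line like the following to `.bashrc` puts `/usr/local/bin` on the path (a sketch; adjust it to your own shell setup):
+
+```
+export PATH=$PATH:/usr/local/bin
+```
+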
### Use PGD CLI to explore the cluster
* Check the health of the cluster with the `check-health` command.
@@ -52,7 +54,12 @@ Also consult the [PGD CLI documentation](../../cli/) for details of other config
### Ensure PGD CLI is installed
In this worked example, we will be configuring and using PGD CLI on host-one, where we've already installed Postgres and PGD.
-There is no need to install PGD CLI again.
+There's no need to install PGD CLI again. We'll use the `enterprisedb` account, as it's already configured for access to Postgres.
+If you aren't logged in as `enterprisedb`, switch to it using:
+
+```
+sudo -iu enterprisedb
+```
### Create a configuration file
@@ -80,15 +87,15 @@ For EDB Postgres Extended and Community PostgreSQL, this can be omitted.
Create the PGD CLI configuration directory.
```
-sudo mkdir -p /etc/edb/pgd-cli
+sudo mkdir -p /etc/edb/
```
-Then write the configuration to the `pgd-cli-config.yml` file in the `/etc/edb/pgd-cli` directory.
+Then write the configuration to the `pgd-config.yml` file in the `/etc/edb/` directory.
For our example, this could be run on host-one to create the file.
```
-cat <
Date: Tue, 5 Mar 2024 10:48:11 +0000
Subject: [PATCH 19/26] Remove proxies from this version
Signed-off-by: Dj Walker-Morgan
---
.../installing/07-configure-proxies.mdx | 265 ------------------
...using-pgd-cli.mdx => 07-using-pgd-cli.mdx} | 4 -
.../pgd/4/admin-manual/installing/index.mdx | 7 +-
3 files changed, 2 insertions(+), 274 deletions(-)
delete mode 100644 product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
rename product_docs/docs/pgd/4/admin-manual/installing/{08-using-pgd-cli.mdx => 07-using-pgd-cli.mdx} (96%)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx b/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
deleted file mode 100644
index fe05ab07d0a..00000000000
--- a/product_docs/docs/pgd/4/admin-manual/installing/07-configure-proxies.mdx
+++ /dev/null
@@ -1,265 +0,0 @@
----
-title: Step 7 - Configure proxies
-navTitle: Configure proxies
-deepToC: true
----
-
-
-~~## Configure proxies
-
-PGD can use proxies to direct traffic to one of the clusters nodes, selected automatically by the cluster.
-There are performance and availabilty reasons for using a proxy:
-
-* Performance: By directing all traffic and in particular write traffic, to one node, the node can resolve write conflicts locally and more efficiently.
-* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can automatically direct new traffic to a new, automatically selected, write leader.
-
-It is best practice to configure PGD Proxy for clusters to enable this behavior.
-
-### Configure the cluster for proxies
-
-To set up a proxy, you will need to first prepare the cluster and sub-group the proxies will be working with by:
-
-* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the sub-group. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the sub-group name, option name and new value as parameters.
-* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the sub-group it should be attached to.
-* Create a `pgdproxy` user on the cluster with a password (or other authentication)
-
-### Configure each host as a proxy
-
-Once the cluster is ready, you will need to configure each host to run pgd-proxy by:
-
-* Creating a `pgdproxy` local user
-* Creating a `.pgpass` file for that user which will allow it to log into the cluster as `pgdproxy`.
-* Modify the systemd service file for pgdproxy to use the pgdproxy user.
-* Create a proxy config file for the host which lists the connection strings for all the nodes in the sub-group, specifies the name that the proxy should use when connected and gives the endpoint connection string the proxy will accept connections on.
-* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`
-* Restart the systemd service and check its status.
-* Log into the proxy and verify its operation.
-
-Further detail on all these steps is included in the worked example.
-
-## Worked example
-
-## Preparing for proxies
-
-For proxies to function, the `dc1` subgroup must enable Raft and routing.
-
-Log into any node in the cluster, using psql to connect to the bdrdb database as the `enterprisedb` user, and execute:
-
-```
-SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true');
-SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true');
-```
-
-The [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view can be used to check the status of options previously set with bdr.alter_node_group_option():
-
-```sql
-SELECT node_group_name, enable_proxy_routing, enable_raft
- FROM bdr.node_group_summary
- WHERE parent_group_name IS NOT NULL;
-__OUTPUT__
- node_group_name | enable_proxy_routing | enable_raft
------------------+----------------------+-------------
- dc1 | t | t
-(1 row)
-
-bdrdb=#
-```
-
-
-Next, create a PGD proxy within the cluster using the `bdr.create_proxy` function.
-This function takes two parameters, the proxy's unique name and the group it should be a proxy for.
-
-In our example, we want a proxy on each host in the dc1 sub-group:
-
-```
-SELECT bdr.create_proxy('pgd-proxy-one','dc1');
-SELECT bdr.create_proxy('pgd-proxy-two','dc1');
-SELECT bdr.create_proxy('pgd-proxy-three','dc1');
-```
-
-The [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view can be used to check that the proxies were created:
-
-```sql
-SELECT proxy_name, node_group_name
- FROM bdr.proxy_config_summary;
-__OUTPUT__
- proxy_name | node_group_name
------------------+-----------------
- pgd-proxy-one | dc1
- pgd-proxy-two | dc1
- pgd-proxy-three | dc1
-
- bdrdb=#
- ```
-
-## Create a pgdproxy user on the database
-
-Create a user named pgdproxy and give it a password. In this example we will use `proxysecret`
-
-On any node, log into the bdrdb database as enterprisedb/postgres.
-
-```
-CREATE USER pgdproxy PASSWORD 'proxysecret';
-GRANT bdr_superuser TO pgdproxy;
-```
-
-## Create a pgdproxy user on each host
-
-```
-sudo adduser pgdproxy
-```
-
-This user will need credentials to connect to the server.
-We will create a .pgpass file with the `proxysecret` password in it.
-Then we will lock down the `.pgpass` file so it is only accessible by its owner.
-
-```
-echo -e "*:*:*:pgdproxy:proxysecret" | sudo tee /home/pgdproxy/.pgpass
-sudo chown pgdproxy /home/pgdproxy/.pgpass
-sudo chmod 0600 /home/pgdproxy/.pgpass
-```
-
-## Configure the systemd service on each host
-
-Switch the service file from using root to using the pgdproxy user
-
-```
-sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service
-```
-
-Reload the systemd daemon.
-
-```
-sudo systemctl daemon-reload
-```
-
-## Create a proxy config file for each host
-
-The proxy configuration file will be slightly different for each host.
-It is a YAML file which contains a cluster object. This in turn has three
-properties:
-
-The name of the PGD cluster's top-level group (as `name`).
-An array of endpoints of databases (as `endpoints`).
-The proxy definition object with a name and endpoint (as `proxy`).
-
-The first two properties will be the same for all hosts:
-
-```
-cluster:
- name: pgd
- endpoints:
- - host=host-one dbname=bdrdb port=5444
- - host=host-two dbname=bdrdb port=5444
- - host=host-three dbname=bdrdb port=5444
-```
-
-Remember that host-one, host-two and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running.
-We use the name of the host, not the node, for the endpoint connection.
-
-Also note that the endpoints in this example specify port=5444.
-This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and Community PostgreSQL, this can be omitted.
-
-
-The third property, `proxy`, has a `name` property and an `endpoint` property.
-The `name` property should be a name created with `bdr.create_proxy` earlier, and it will be different on each host.
-The `endpoint` property is a string which defines how the proxy presents itself as a connection string.
-A proxy cannot be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host.
-We typically use port 6432 for PGD proxies.
-
-```
- proxy:
- name: pgd-proxy-one
- endpoint: "host=localhost dbname=bdrdb port=6432"
-```
-
-In this case, by using 'localhost' in the endpoint, we specify that this proxy will listen on the host where the proxy is running.
-
-## Install a PGD proxy configuration on each host
-
-For each host, create the `/etc/edb/pgd-proxy` directory:
-
-```
-sudo mkdir -p /etc/edb/pgd-proxy
-```
-
-Then on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory.
-
-For our example, this could be run on host-one to create the file.
-
-```
-cat <
Date: Tue, 5 Mar 2024 10:48:55 +0000
Subject: [PATCH 20/26] Remove proxies from top page
Signed-off-by: Dj Walker-Morgan
---
product_docs/docs/pgd/4/admin-manual/index.mdx | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/index.mdx b/product_docs/docs/pgd/4/admin-manual/index.mdx
index 3e14d6d1f40..c6116407b62 100644
--- a/product_docs/docs/pgd/4/admin-manual/index.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/index.mdx
@@ -12,7 +12,7 @@ This section of the manual covers how to manually deploy and administer EDB Post
* Install the PGD software
* Create a cluster
* Check a cluster
- * Configure PGD proxies
* Install and use PGD CLI
-The installing section provides an example cluster which will be used in future examples.
\ No newline at end of file
+The installing section provides an example cluster which will be used in future examples.
+
From 7b95cd21ab132be8d03a58767c72a0a4e44d059b Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Wed, 27 Mar 2024 09:15:18 +0000
Subject: [PATCH 21/26] Apply suggestions from review
---
.../installing/04-installing-software.mdx | 2 +-
.../installing/05-creating-cluster.mdx | 24 +++++++++----------
.../installing/06-check-cluster.mdx | 2 +-
.../installing/07-using-pgd-cli.mdx | 6 ++---
.../pgd/4/admin-manual/installing/index.mdx | 2 +-
5 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
index 3f5279c71fd..4bcc9bf98e7 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
@@ -25,7 +25,7 @@ These steps must be carried out on each host before proceeding to the next step.
* Raise the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.
!!! Note The `max_worker_processes` value
The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors.
- To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+ To calculate the needed value see [Postgres configuration/settings](/pgd/4/bdr/configuration/#postgresql-settings-for-bdr).
The value of 16 was calculated for the size of cluster we are deploying and must be raised for larger clusters.
!!!
* Set a password on the EnterprisedDB/Postgres user.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
index b14caa5dac4..2e71721c619 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
@@ -17,7 +17,7 @@ For each node we want to create a connection string which will allow PGD to perf
* **Prepare the first node.**
-To create the cluster, we select and log into one of the hosts Postgres server's `bdrdb` database.
+To create the cluster, we log into the `bdrdb` database on one of the nodes.
* **Create the first node.**
@@ -33,15 +33,15 @@ To create the cluster, we select and log into one of the hosts Postgres server's
* Create the second node.
Log into another initialized node's `bdrdb` database.
Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
- * Join the second node to the cluster
+ * Join the second node to the cluster.
Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
* **Adding the third node.**
- * Create the third node
+ * Create the third node.
Log into another initialized node's `bdrdb` database.
Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
- * Join the third node to the cluster
+ * Join the third node to the cluster.
Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
@@ -49,7 +49,7 @@ To create the cluster, we select and log into one of the hosts Postgres server's
So far, we have:
-* Created three Hosts.
+* Created three hosts.
* Installed a Postgres server on each host.
* Installed Postgres Distributed on each host.
* Configured the Postgres server to work with PGD on each host.
@@ -79,7 +79,7 @@ sudo -iu enterprisedb psql bdrdb
### Create the first node
-Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it.
+Call the [`bdr.create_node`](/pgd/4/bdr/nodes#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it.
```
select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
@@ -87,7 +87,7 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
#### Create the top-level group
-Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. For our example, we will create a top-level group named `pgd`.
+Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. For our example, we will create a top-level group named `pgd`.
```
select bdr.create_node_group('pgd');
@@ -99,7 +99,7 @@ Using sub-groups to organize your nodes is preferred as it allows services like
In a larger PGD installation, multiple sub-groups can exist providing organizational grouping that enables geographical mapping of clusters and localized resilience.
For that reason, in this example, we are creating a sub-group for our first nodes to enable simpler expansion and use of PGD proxy.
-Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a sub-group of the top-level group.
+Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes#bdrcreate_node_group) function again to create a sub-group of the top-level group.
The sub-group name is the first parameter, the parent group is the second parameter.
For our example, we will create a sub-group `dc1` as a child of `pgd`.
@@ -119,7 +119,7 @@ sudo -iu enterprisedb psql bdrdb
#### Create the second node
-We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+We call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
```
select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
@@ -127,7 +127,7 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
#### Join the second node to the cluster
-Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
+Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group) we can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
```
select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
@@ -144,7 +144,7 @@ sudo -iu enterprisedb psql bdrdb
#### Create the third node
-We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+We call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
```
select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
@@ -152,7 +152,7 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
#### Join the third node to the cluster
-Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
+Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group) we can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
```
select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
index df92b8e1f99..c8d93bf49bc 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
@@ -176,7 +176,7 @@ And the values will be identical.
You can repeat the process with node-three, or generate new data on any node and see it replicate to the other nodes.
-#### Log into host-threes's Postgres server.
+#### Log into host-three's Postgres server.
```
ssh admin@host-three
sudo -iu enterprisedb psql bdrdb
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx b/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
index 66f5da9ad08..ff514ace3e9 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
@@ -7,12 +7,12 @@ deepToC: true
## Using PGD CLI
-The PGD CLI command uses a configuration file to work out which hosts to connect to.
+The PGD CLI client uses a configuration file to work out which hosts to connect to.
There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.
The connection to the database is authenticated in the same way as other command line utilities, like the psql command, are authenticated.
-Unlike other commands, PGD CLI doesn't interactively prompt for your password. Therefore, You must pass your password using one of the following methods:
+Unlike other commands, PGD CLI doesn't interactively prompt for your password. Therefore, you must pass your password using one of the following methods:
- Adding an entry to your [`.pgpass` password file](https://www.postgresql.org/docs/current/libpq-pgpass.html), which includes the host, port, database name, user name, and password.
- Setting the password in the `PGPASSWORD` environment variable.
@@ -76,7 +76,7 @@ cluster:
Note that the endpoints in this example specify port=5444.
This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and Community PostgreSQL, this can be omitted.
+For EDB Postgres Extended and Community PostgreSQL, this can be set to `port=5432`.
### Install the configuration file
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/index.mdx b/product_docs/docs/pgd/4/admin-manual/installing/index.mdx
index fe9d88d2cba..de1f4f4e841 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/index.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/index.mdx
@@ -10,7 +10,7 @@ navigation:
- 07-using-pgd-cli
---
-EDB offers automated PGD deployment using TPA (Trusted Postgres Architect) because its generally more reliable than manual processes.
+EDB offers automated PGD deployment using TPA (Trusted Postgres Architect) because it's generally more reliable than manual processes.
Consult [Deploying with TPA](../../admin-tpa/installing.mdx) for how to install TPA and use its automated best-practice driven PGD deployment options for full details.
To complement automated installation, and to enable alternative installation and deployment processes, this section of the documentation looks at the basic operations needed to manually configure a three-node PGD 4 cluster (with a local sub-group) and PGD CLI.
From 854e075b618bb0110df4c3380d637c2b82010eb5 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Tue, 2 Apr 2024 13:20:57 +0100
Subject: [PATCH 22/26] Fixes from review
Signed-off-by: Dj Walker-Morgan
---
.../4/admin-manual/installing/02-install-postgres.mdx | 9 +++++----
.../4/admin-manual/installing/04-installing-software.mdx | 4 ++--
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
index 7622c3be8bb..15744b3e313 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
@@ -16,7 +16,11 @@ Run both steps.
## Worked example
-In our example, we will be installing EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9).
+In our example, we will be installing EDB Postgres Advanced Server 14 on Red Hat Enterprise Linux 9 (RHEL 9).
+
+!!! Note
+PGD 4 does not support Postgres versions 15, 16, or later
+!!!
### EDB account
@@ -28,9 +32,6 @@ Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.
* EDB Postgres Extended (up to and including version 14)
* PostgreSQL (up to and including version 14)
-!!! Note
-PGD 4 does not support Postgres versions 15, 16, and later
-!!!
Upon selecting the version of the Postgres server you want, two steps will be displayed.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
index 4bcc9bf98e7..4d7e7b0dae1 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
@@ -314,7 +314,7 @@ If installing PGD with EDB Postgres Extended Server, there are a number of diffe
* The service name is edb-pge-NN.
* The system user is postgres (not enterprisedb)
* The home directory for the postgres user is `/var/lib/pgqsl`
-* There are no pre-existing libraries to be added to `shared_preload_libraries`
+* `shared_preload_libraries` is empty by default and will only need `$libdir/bdr` added to it.
### Installing PGD for Postgresql
@@ -326,7 +326,7 @@ If installing PGD with PostgresSQL, there are a number of differences from the E
* The service name is postgresql-NN.
* The system user is postgres (not enterprisedb)
* The home directory for the postgres user is `/var/lib/pgqsl`
-* There are no pre-existing libraries to be added to `shared_preload_libraries`
+* `shared_preload_libraries` is empty by default and will only need `$libdir/bdr` added to it.
From b65c50e499bf89e3685ad78c4cb97a99405a3ad8 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman
Date: Thu, 4 Apr 2024 14:57:20 -0400
Subject: [PATCH 23/26] Editorial review of manual install
---
.../installing/01-provisioning-hosts.mdx | 39 +++--
.../installing/02-install-postgres.mdx | 23 ++-
.../03-configuring-repositories.mdx | 26 ++--
.../installing/04-installing-software.mdx | 138 +++++++++---------
.../installing/05-creating-cluster.mdx | 84 +++++------
.../installing/06-check-cluster.mdx | 77 +++++-----
.../installing/07-using-pgd-cli.mdx | 64 ++++----
.../pgd/4/admin-manual/installing/index.mdx | 29 ++--
8 files changed, 229 insertions(+), 251 deletions(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx b/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
index 79b0a10d8a4..a761d960750 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
@@ -1,6 +1,6 @@
---
-title: Step 1 - Provisioning Hosts
-navTitle: Provisioning Hosts
+title: Step 1 - Provisioning hosts
+navTitle: Provisioning hosts
deepToC: true
---
@@ -8,19 +8,19 @@ deepToC: true
The first step in the process of deploying PGD is to provision and configure hosts.
-You can deploy to virtual machine instances in the cloud with Linux installed, on-premise virtual machines with Linux installed or on-premise physical hardware also with Linux installed.
+You can deploy to virtual machine instances in the cloud with Linux installed, on-premises virtual machines with Linux installed, or on-premises physical hardware, also with Linux installed.
-Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that can be accessed by you using SSH with a user that has superuser, administrator or sudo privileges.
+Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges.
-Each machine provisioned should be able to make connections to any other machine you are provisioning for your cluster.
+Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster.
-On cloud deployments, this may be done over the public network or over a VPC.
+On cloud deployments, this can be done over the public network or over a VPC.
-On-premise deployments should be able to connect over the local network.
+On-premises deployments must be able to connect over the local network.
!!! Note Cloud provisioning guides
-If you are new to cloud provisioning, these guides may provide assistance:
+If you're new to cloud provisioning, these guides may provide assistance:
Vendor | Platform | Guide
------ | -------- | ------
@@ -36,29 +36,29 @@ If you are new to cloud provisioning, these guides may provide assistance:
We recommend that you configure an admin user for each provisioned instance.
The admin user must have superuser or sudo (to superuser) privileges.
-We also recommend that the admin user should be configured for passwordless SSH access using certificates.
+We also recommend that the admin user be configured for passwordless SSH access using certificates.
#### Ensure networking connectivity
-With the admin user created, ensure that each machine can communicate with the other machines you are provisioning.
+With the admin user created, ensure that each machine can communicate with the other machines you're provisioning.
In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced
-Server, 5432 for EDB Postgres Extended and Community PostgreSQL) should be open
+Server, 5432 for EDB Postgres Extended and community PostgreSQL) must be open
to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be
-open to any applications which will connect to the cluster. Port 6432 is typically
+open to any applications that will connect to the cluster. Port 6432 is typically
used for PGD Proxy.
## Worked example
-For the example in this section, we have provisioned three hosts with Red Hat Enterprise Linux 9.
+For the example in this section, three hosts are provisioned with Red Hat Enterprise Linux 9.
-* host-one
-* host-two
-* host-three
+* `host-one`
+* `host-two`
+* `host-three`
-Each is configured with a "admin" admin user.
+Each is configured with an admin user named admin.
-These hosts have been configured in the cloud and as such each host has both a public and private IP address.
+These hosts have been configured in the cloud. As such, each host has both a public and private IP address.
Name | Public IP | Private IP
------|-----------|----------------------
@@ -66,11 +66,10 @@ These hosts have been configured in the cloud and as such each host has both a p
host-two | 172.24.113.247 | 192.168.254.247
host-three | 172.24.117.23 | 192.168.254.135
-For our example cluster, we have also edited `/etc/hosts` to use those private IP addresses:
+For this example, the cluster's `/etc/hosts` file was edited to use those private IP addresses:
```
192.168.254.166 host-one
192.168.254.247 host-two
192.168.254.135 host-three
```
-
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
index 15744b3e313..9cb4dec42cd 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
@@ -6,48 +6,48 @@ deepToC: true
## Installing Postgres
-You will need to install Postgres on all the hosts.
+You need to install Postgres on all the hosts.
An EDB account is required to use the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can get installation instructions.
Select your platform and Postgres edition.
-You will be presented with 2 steps of instructions, the first covering how to configure the required package repository and the second covering how to install the packages from that repository.
+You're presented with two steps of instructions. The first covers how to configure the required package repository and the second covers how to install the packages from that repository.
Run both steps.
## Worked example
-In our example, we will be installing EDB Postgres Advanced Server 14 on Red Hat Enterprise Linux 9 (RHEL 9).
+In this example, EDB Postgres Advanced Server 14 is installed on Red Hat Enterprise Linux 9 (RHEL 9).
!!! Note
-PGD 4 does not support Postgres versions 15, 16, or later
+PGD 4 doesn't support Postgres versions 15, 16, or later.
!!!
### EDB account
-You'll need an EDB account to install both Postgres and PGD.
+You need an EDB account to install both Postgres and PGD.
-Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform and then scroll down the list to select the Postgres version you wish to install:
+Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform. Then scroll down the list to select the Postgres version you want to install:
* EDB Postgres Advanced Server (up to and including version 14)
* EDB Postgres Extended (up to and including version 14)
* PostgreSQL (up to and including version 14)
-Upon selecting the version of the Postgres server you want, two steps will be displayed.
+When you select the version of the Postgres server you want, two steps are displayed.
### 1: Configuring repositories
For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token will be automatically inserted by the EDB Repos 2.0 site into these scripts.
-In our examples, it will be shown as `XXXXXXXXXXXXXXXX`.
+In these examples, the token is shown as `XXXXXXXXXXXXXXXX`.
-On each provisioned host, either run the automatic repository installation script which will look like this:
+On each provisioned host, you can run the automatic repository installation script, which looks like this:
```shell
curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/setup.rpm.sh' | sudo -E bash
```
-Or use the manual installation steps which look like this:
+Or you can use the manual installation steps, which look like this:
```shell
dnf install yum-utils
@@ -59,9 +59,8 @@ dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-enterprise'
### 2: Install Postgres
-For step 2, we just run the command to install the packages.
+For step 2, just run the command to install the packages:
```
sudo dnf -y install edb-as14-server
```
-
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx b/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
index 835259f024e..3e805a34713 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
@@ -8,7 +8,7 @@ deepToC: true
To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages.
-The following operations should be carried out on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types) such as witness or subscriber-only nodes.
+Perform the following operations on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types), such as witness or subscriber-only nodes.
* Use your EDB account.
* Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page.
@@ -39,7 +39,7 @@ The following operations should be carried out on each host. For the purposes of
### Use your EDB account
-You'll need an EDB account to install Postgres Distributed.
+You need an EDB account to install Postgres Distributed.
Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token.
@@ -47,7 +47,7 @@ On your first visit to this page, select **Request Access** to generate your rep
![EDB Repos 2.0](images/edbrepos2.0.png)
-Copy the token to your clipboard using the **Copy Token** button and store it safely.
+Copy the token to your clipboard using **Copy Token**, and store it safely.
### Set environment variables
@@ -61,59 +61,59 @@ export EDB_SUBSCRIPTION_TOKEN=
You can add this to your `.bashrc` script or similar shell profile to ensure it's always set.
!!! Note
-Your preferred platform may support storing this variable as a secret which can appear as an environment variable. If this is the case, don't add the setting to `.bashrc` and instead add it to your platform's secret manager.
+Your preferred platform may support storing this variable as a secret, which can appear as an environment variable. If this is the case, don't add the setting to `.bashrc`. Instead add it to your platform's secret manager.
!!!
### Configure the repository
All the software you need is available from the EDB Postgres Distributed package repository.
You have the option to simply download and run a script to configure the EDB Postgres Distributed repository.
-You can also download, inspect and then run that same script.
+You can also download, inspect, and then run that same script.
The following instructions also include the essential steps that the scripts take for any user wanting to manually run, or automate, the installation process.
#### RHEL/Other RHEL-based
-You can autoinstall with automated OS detection
+You can autoinstall with automated OS detection:
```
curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.rpm.sh" | sudo -E bash
```
-If you wish to inspect the script that is generated for you run:
+If you want to inspect the script that's generated for you, run:
```
curl -1sLfO "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/setup.rpm.sh"
```
-Then inspect the resulting `setup.rpm.sh` file. When you are happy to proceed, run:
+Then, inspect the resulting `setup.rpm.sh` file. When you're ready to proceed, run:
```
sudo -E bash setup.rpm.sh
```
-If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide:
+If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide.
-You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you are using. Here we set it for RHEL compatible Linux version 9:
+You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you're using. This example sets it for RHEL-compatible Linux version 9:
```
export DISTRO="el"
export CODENAME="9"
```
-Now install the yum-utils package:
+Now install the `yum-utils` package:
```
sudo dnf install -y yum-utils
```
-The next step will import a GPG key for the repositories:
+The next step imports a GPG key for the repositories:
```
sudo rpm --import "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/gpg.44D45428437EAD1B.key"
```
-Now, we can import the repository details, add them to the local configuration and enable the repository.
+Now, you can import the repository details, add them to the local configuration, and enable the repository.
```
curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed_4/config.rpm.txt?distro=$DISTRO&codename=$CODENAME" > /tmp/enterprise.repo
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
index 4d7e7b0dae1..b176b9d89fb 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
@@ -7,30 +7,30 @@ deepToC: true
## Installing the PGD software
With the repositories configured, you can now install the Postgres Distributed software.
-These steps must be carried out on each host before proceeding to the next step.
+You must perform these steps on each host before proceeding to the next step.
-* **Install the packages**
- * Install the PGD packages which include a server specific BDR package and generic PGD cli packages. (`edb-bdr4-`, and `edb-pgd-cli`)
+* **Install the packages.**
+  * Install the PGD packages, which include a server-specific BDR package and generic PGD CLI packages (`edb-bdr4-` and `edb-pgd-cli`).
-* **Ensure the Postgres database server has been initialized and started.**
- * Use `systemctl status ` to check the service is running
- * If not, initialize the database and start the service
+* **Ensure the Postgres database server was initialized and started.**
+ * Use `systemctl status` to check the service is running.
+ * If it isn't, initialize the database and start the service.
-* **Configure the BDR extension**
- * Add the BDR extension (`$libdir/bdr`) at the start of the shared_preload_libraries setting in `postgresql.conf`.
+* **Configure the BDR extension.**
+ * Add the BDR extension (`$libdir/bdr`) at the start of the `shared_preload_libraries` setting in `postgresql.conf`.
* Set the `wal_level` GUC variable to `logical` in `postgresql.conf`.
* Turn on commit timestamp tracking by setting `track_commit_timestamp` to `'on'` in `postgresql.conf`.
* Raise the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.
!!! Note The `max_worker_processes` value
- The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors.
- To calculate the needed value see [Postgres configuration/settings](/pgd/4/bdr/configuration/#postgresql-settings-for-bdr).
- The value of 16 was calculated for the size of cluster we are deploying and must be raised for larger clusters.
+ The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
+ To calculate the needed value, see [Postgres configuration/settings](/pgd/4/bdr/configuration/#postgresql-settings-for-bdr).
+ The value of 16 was calculated for the size of cluster being deployed in this example and must be raised for larger clusters.
!!!
* Set a password on the EnterprisedDB/Postgres user.
* Add rules to `pg_hba.conf` to allow nodes to connect to each other.
- * Ensure that these lines are present in `pg_hba.conf:
+ * Ensure that these lines are present in `pg_hba.conf`:
```
host all all all md5
host replication all all md5
@@ -41,32 +41,32 @@ These steps must be carried out on each host before proceeding to the next step.
* **Restart the server.**
- * Verify the restarted server is running with the modified settings and the bdr extension is available
+ * Verify the restarted server is running with the modified settings and that the BDR extension is available.
* **Create the replicated database.**
- * Log into the server's default database (`edb` for EPAS, `postgres` for PGE and Community).
+ * Log in to the server's default database (`edb` for EDB Postgres Advanced Server, `postgres` for EDB Postgres Extended Server and community Postgres).
* Use `CREATE DATABASE bdrdb` to create the default PGD replicated database.
* Log out and then log back in to `bdrdb`.
* Use `CREATE EXTENSION bdr` to enable the BDR extension and PGD to run on that database.
-We will look in detail at the steps for EDB Postgres Advanced Server in the worked example below.
+The worked example that follows explores these steps in detail for EDB Postgres Advanced Server.
-If you are installing PGD with EDB Postgres Extended Server or Community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for Postgresql](#installing-pgd-for-postgresql).
+If you're installing PGD with EDB Postgres Extended Server or community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for Postgresql](#installing-pgd-for-postgresql).
## Worked example
### Install the packages
-The first step is to install the packages. For each Postgres package, there is a `edb-bdr4-` package to go with it.
-For example, if we are installing EDB Postgres Advanced Server (epas) version 14, we would install `edb-bdr4-epas14`.
+The first step is to install the packages. For each Postgres package, there's an `edb-bdr4-` package to go with it.
+For example, if you're installing EDB Postgres Advanced Server (epas) version 14, you install `edb-bdr4-epas14`.
There are two other packages to also install:
-- `edb-pgd-cli` for the PGD command line tool.
+- `edb-pgd-cli` for the PGD command line tool
-To install all of these packages on a RHEL or RHEL compatible Linux, run:
+To install all of these packages on a RHEL or RHEL-compatible Linux, run:
```
sudo dnf -y install edb-bdr4-epas14 edb-pgd-cli
@@ -74,15 +74,15 @@ sudo dnf -y install edb-bdr4-epas14 edb-pgd-cli
### Ensure the database is initialized and started
-If it wasn't initialized and started by the database's package initialisation (or you are repeating the process), you will need to initialize and start the server.
+If it wasn't initialized and started by the database's package initialization (or you're repeating the process), you need to initialize and start the server.
-To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-14` so run:
+To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-14`, so run:
```
sudo systemctl status edb-as-14
```
-If the server is not running, this will respond with:
+If the server isn't running, the response is:
```
○ edb-as-14.service - EDB Postgres Advanced Server 14
@@ -90,18 +90,18 @@ If the server is not running, this will respond with:
Active: inactive (dead)
```
-The "Active: inactive (dead)" tells us we will need to initialize and start the server.
+The "Active: inactive (dead)" tells you that you need to initialize and start the server.
-You will need to know the path to the setup script for your particular Postgres flavor.
+You need to know the path to the setup script for your particular Postgres flavor.
For EDB Postgres Advanced Server, this script can be found in `/usr/edb/as14/bin` as `edb-as-14-setup`.
-This command needs to be run with the `initdb` parameter and we need to pass an option setting the database to use UTF-8.
+This command needs to be run with the `initdb` parameter, passing an option setting the database to use UTF-8:
```
sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as14/bin/edb-as-14-setup initdb
```
-Once the database is initialized, we will start it which will enable us to continue configuring the BDR extension.
+Once the database is initialized, start it, which enables you to continue configuring the BDR extension:
```
sudo systemctl start edb-as-14
@@ -109,24 +109,24 @@ sudo systemctl start edb-as-14
### Configure the BDR extension
-Installing EDB Postgres Advanced Server creates a system user `enterprisedb` with admin capabilities when connected to the database. We will be using this user to configure the BDR extension.
+Installing EDB Postgres Advanced Server creates a system user enterprisedb with admin capabilities when connected to the database. In this example, this user configures the BDR extension.
#### Preload the BDR library
-We want the bdr library to be preloaded with other libraries.
-EPAS has a number of libraries already preloaded, so we have to prefix the existing list with the BDR library.
+The BDR library needs to be preloaded with other libraries.
+EDB Postgres Advanced Server has a number of libraries already preloaded, so you have to prefix the existing list with the BDR library:
```
echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
```
!!!tip
-This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the postgresql.conf file, which is owned by another user.
+This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the `postgresql.conf` file, which is owned by another user.
!!!
#### Set the `wal_level`
-The BDR extension needs to set the server to perform logical replication. We do this by setting `wal_level` to `logical`.
+The BDR extension needs the server to perform logical replication. You do this by setting `wal_level` to `logical`:
```
echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
@@ -135,7 +135,7 @@ echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/
#### Enable commit timestamp tracking
-The BDR extension also needs the commit timestamp tracking enabled.
+The BDR extension also needs commit timestamp tracking enabled:
```
echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
@@ -147,12 +147,12 @@ echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/e
To communicate between multiple nodes, Postgres Distributed nodes run more worker processes than usual.
The default limit (8) is too low even for a small cluster.
-The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors.
-To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
+The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors.
+To calculate the needed value, see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings).
-For this example, with a 3 node cluster, we are using the value of 16.
+This example, with a 3-node cluster, uses the value of 16.
-Raise the maximum number of worker processes to 16 with this commmand:
+Raise the maximum number of worker processes to 16 with this command:
```
echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as14/data/postgresql.conf >/dev/null
@@ -165,9 +165,9 @@ This value must be raised for larger clusters.
#### Add a password to the Postgres enterprisedb user
To allow connections between nodes, a password needs to be set on the Postgres enterprisedb user.
-For this example, we are using the password `secret`.
+This example uses the password `secret`.
Select a different password for your deployments.
-You will need this password when we get to [Creating the PGD Cluster](05-creating-cluster).
+You will need this password when you get to [Creating the PGD cluster](05-creating-cluster).
```
sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'"
@@ -185,7 +185,7 @@ echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/
```
-It will append
+It appends the following to `pg_hba.conf`, which enables the nodes to replicate:
```
host all all all md5
@@ -193,16 +193,15 @@ host replication all all md5
```
-to `pg_hba.conf` which will enable the nodes to replicate.
#### Enable authentication between nodes
As part of the process of connecting nodes for replication, PGD logs into other nodes.
-It will perform that log in as the user that Postgres is running under.
-For epas, this is the `enterprisedb` user.
-That user will need credentials to log into the other nodes.
-We will supply these credentials using the `.pgpass` file which needs to reside in the user's home directory.
-The home directory for `enterprisedb` is `/var/lib/edb`.
+It performs that login as the user that Postgres is running under.
+For EDB Postgres Advanced Server, this is the enterprisedb user.
+That user needs credentials to log into the other nodes.
+This example supplies these credentials using the `.pgpass` file, which needs to reside in the user's home directory.
+The home directory for enterprisedb is `/var/lib/edb`.
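+A `.pgpass` file uses the standard format `hostname:port:database:username:password`, one entry per line. As a sketch, assuming this example's three hosts, port 5444, and the example password `secret`, the finished file contains entries like these:
+```
+host-one:5444:*:enterprisedb:secret
+host-two:5444:*:enterprisedb:secret
+host-three:5444:*:enterprisedb:secret
+```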
Run this command to create the file:
@@ -215,7 +214,7 @@ You can read more about the `.pgpass` file in [The Password File](https://www.po
### Restart the server
-After all these configuration changes, it is recommended that the server is restarted with:
+After all these configuration changes, we recommend that you restart the server with:
```
sudo systemctl restart edb-as-14
@@ -224,14 +223,14 @@ sudo systemctl restart edb-as-14
#### Check the extension has been installed
-At this point, it is worth checking the extension is actually available and our configuration has been correctly loaded. You can query the pg_available_extensions table for the bdr extension like this:
+At this point, it's worth checking that the extension is available and that the configuration was loaded correctly. You can query the `pg_available_extensions` table for the BDR extension like this:
```
sudo -u enterprisedb psql edb -c "select * from pg_available_extensions where name like 'bdr'"
```
-Which should return an entry for the extension and its version.
+This command returns an entry for the extension and its version:
```
name | default_version | installed_version | comment
@@ -250,7 +249,7 @@ sudo -u enterprisedb psql edb -c "show all" | grep -e wal_level -e track_commit_
### Create the replicated database
The server is now prepared for PGD.
-We need to next create a database named `bdrdb` and install the bdr extension when logged into it.
+Next, create a database named `bdrdb` and, once logged into it, install the BDR extension.
```
sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb"
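+# Sketch (an assumption, not necessarily this guide's exact command): the BDR
+# extension is then installed in the new bdrdb database with CREATE EXTENSION.
+sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr"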
@@ -264,8 +263,9 @@ Finally, test the connection by logging into the server.
sudo -u enterprisedb psql bdrdb
```
-You should be connected to the server.
-Execute the command "\\dx" to list extensions installed.
+You're connected to the server.
+
+Execute the command `\dx` to list extensions installed.
```
bdrdb=# \dx
@@ -279,7 +279,7 @@ bdrdb=# \dx
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
```
-Notice that the bdr extension is listed in the table, showing it is installed.
+Notice that the BDR extension is listed in the table, showing that it's installed.
## Summaries
@@ -307,26 +307,24 @@ sudo -u enterprisedb psql bdrdb
### Installing PGD for EDB Postgres Extended Server
-If installing PGD with EDB Postgres Extended Server, there are a number of differences from the EPAS installation.
-
-* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number)
-* A different setup utility should be called: /usr/edb/pgeNN/bin/edb-pge-NN-setup
-* The service name is edb-pge-NN.
-* The system user is postgres (not enterprisedb)
-* The home directory for the postgres user is `/var/lib/pgqsl`
-* `shared_preload_libraries` is empty by default and will only need `$libdir/bdr` added to it.
-
+If you're installing PGD with EDB Postgres Extended Server, there are a number of differences from the EDB Postgres Advanced Server installation:
-### Installing PGD for Postgresql
+* The BDR package to install is named `edb-bdrV-pgextendedNN`, where V is the PGD version and NN is the PGE version number.
+* Call a different setup utility: `/usr/edb/pgeNN/bin/edb-pge-NN-setup`.
+* The service name is `edb-pge-NN`.
+* The system user is postgres, not enterprisedb.
+* The home directory for the postgres user is `/var/lib/pgsql`.
+* `shared_preload_libraries` is empty by default and needs only `$libdir/bdr` added to it.
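+For example, assuming PGD 4 with EDB Postgres Extended 14 (substitute your own version numbers), the install and setup commands on a RHEL-compatible system might look like this sketch:
+```
+sudo dnf -y install edb-bdr4-pgextended14 edb-pgd-cli
+sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/pge14/bin/edb-pge-14-setup initdb
+sudo systemctl start edb-pge-14
+```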
-If installing PGD with PostgresSQL, there are a number of differences from the EPAS installation.
-* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number)
-* A different setup utility should be called: /usr/pgsql-NN/bin/postgresql-NN-setup
-* The service name is postgresql-NN.
-* The system user is postgres (not enterprisedb)
-* The home directory for the postgres user is `/var/lib/pgqsl`
-* `shared_preload_libraries` is empty by default and will only need `$libdir/bdr` added to it.
+### Installing PGD for PostgreSQL
+If you're installing PGD with PostgreSQL, there are a number of differences from the EDB Postgres Advanced Server installation:
+* The BDR package to install is named `edb-bdrV-pgNN`, where V is the PGD version and NN is the PostgreSQL version number.
+* Call a different setup utility: `/usr/pgsql-NN/bin/postgresql-NN-setup`.
+* The service name is `postgresql-NN`.
+* The system user is postgres, not enterprisedb.
+* The home directory for the postgres user is `/var/lib/pgsql`.
+* `shared_preload_libraries` is empty by default and needs only `$libdir/bdr` added to it.
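+For example, assuming PGD 4 with PostgreSQL 14 (substitute your own version numbers), the equivalent commands might look like this sketch:
+```
+sudo dnf -y install edb-bdr4-pg14 edb-pgd-cli
+sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/pgsql-14/bin/postgresql-14-setup initdb
+sudo systemctl start postgresql-14
+```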
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
index 2e71721c619..e7b586cab5c 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
@@ -1,66 +1,66 @@
---
-title: Step 5 - Creating the PGD Cluster
+title: Step 5 - Creating the PGD cluster
navTitle: Creating the Cluster
deepToC: true
---
## Creating the PGD cluster
-* **Create connection strings for each node**.
-For each node we want to create a connection string which will allow PGD to perform replication.
+* **Create connection strings for each node.**
+For each node, create a connection string that will allow PGD to perform replication.
- The connection string is a key/value string which starts with a `host=` and the IP address of the host (or if you have resolvable named hosts, the name of the host).
+ The connection string is a key/value string that starts with `host=` and the IP address of the host (or, if you have resolvable named hosts, the name of the host).
- That is followed by the name of the database; `dbname=bdrdb` as we created a `bdrdb` database when [installing the software](04-installing-software).
+ That's followed by the name of the database: `dbname=bdrdb`. (The `bdrdb` database was created while [installing the software](04-installing-software).)
- We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and Community PostgreSQL.
+ We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and community PostgreSQL.
* **Prepare the first node.**
-To create the cluster, we log into the `bdrdb` database on one of the nodes.
+To create the cluster, log into the `bdrdb` database on one of the nodes.
* **Create the first node.**
- Run `bdr.create_node` and give the node a name and its connection string where *other* nodes may connect to it.
+ Run `bdr.create_node` and give the node a name and its connection string where *other* nodes can connect to it.
* Create the top-level group.
- Create a top-level group for the cluster with `bdr.create_node_group` giving it a single parameter, the name of the top-level group.
- * Create a sub-group.
- Create a sub-group as a child of the top-level group with `bdr.create_node_group` giving it two parameters, the name of the sub-group and the name of the parent (and top-level) group.
- This initializes the first node.
+ Create a top-level group for the cluster with `bdr.create_node_group`, giving it a single parameter: the name of the top-level group.
+ * Create a subgroup.
+ Create a subgroup as a child of the top-level group with `bdr.create_node_group`, giving it two parameters: the name of the subgroup and the name of the parent (and top-level) group.
+ This sequence initializes the first node.
-* **Adding the second node.**
+* **Add the second node.**
* Create the second node.
Log into another initialized node's `bdrdb` database.
- Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
+ Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it.
* Join the second node to the cluster.
- Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
+ Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join.
-* **Adding the third node.**
+* **Add the third node.**
* Create the third node.
Log into another initialized node's `bdrdb` database.
- Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it.
+ Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it.
* Join the third node to the cluster.
- Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join.
+ Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join.
## Worked example
-So far, we have:
+So far, this example has:
* Created three hosts.
* Installed a Postgres server on each host.
* Installed Postgres Distributed on each host.
* Configured the Postgres server to work with PGD on each host.
-To create the cluster, we will tell `host-one`'s Postgres instance that it is a PGD node - `node-one` and create PGD groups on that node.
-Then we will tell `host-two` and `host-three`'s Postgres instances that they are PGD nodes - `node-two` and `node-three` and that they should join a group on `node-one`.
+To create the cluster, tell the `host-one` Postgres instance that it's a PGD node (`node-one`) and create PGD groups on that node.
+Then tell the `host-two` and `host-three` Postgres instances that they're PGD nodes (`node-two` and `node-three`) and that they should join a group on `node-one`.
### Create connection strings for each node
-We calculate the connection strings for each of the node in advance.
-Below are the connection strings for our 3 node example:
+Calculate the connection strings for each of the nodes in advance.
+Following are the connection strings for this 3-node example:
| Name | Node Name | Private IP | Connection string |
| ---------- | ---------- | --------------- | -------------------------------------- |
@@ -70,7 +70,7 @@ Below are the connection strings for our 3 node example:
### Preparing the first node
-Log into host-one's Postgres server.
+Log into the `host-one` Postgres server:
```
ssh admin@host-one
@@ -79,7 +79,7 @@ sudo -iu enterprisedb psql bdrdb
### Create the first node
-Call the [`bdr.create_node`](/pgd/4/bdr/nodes#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it.
+Call the [`bdr.create_node`](/pgd/4/bdr/nodes#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it.
```
select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
@@ -87,30 +87,30 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444');
#### Create the top-level group
-Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. For our example, we will create a top-level group named `pgd`.
+Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. For this example, create a top-level group named `pgd`:
```
select bdr.create_node_group('pgd');
```
-#### Create a sub-group
+#### Create a subgroup
-Using sub-groups to organize your nodes is preferred as it allows services like PGD proxy, which we will be configuring later, to coordinate their operations.
-In a larger PGD installation, multiple sub-groups can exist providing organizational grouping that enables geographical mapping of clusters and localized resilience.
-For that reason, in this example, we are creating a sub-group for our first nodes to enable simpler expansion and use of PGD proxy.
+Using subgroups to organize your nodes is preferred because it allows services like PGD Proxy, which you'll configure later in this example, to coordinate their operations.
+In a larger PGD installation, multiple subgroups can exist, providing organizational grouping that enables geographical mapping of clusters and localized resilience.
+For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and use of PGD Proxy.
-Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes/#bdrcreate_node) function again to create a sub-group of the top-level group.
-The sub-group name is the first parameter, the parent group is the second parameter.
-For our example, we will create a sub-group `dc1` as a child of `pgd`.
+Call the [`bdr.create_node_group`](/pgd/4/bdr/nodes/#bdrcreate_node) function again to create a subgroup of the top-level group.
+The subgroup name is the first parameter, and the parent group is the second parameter.
+This example creates a subgroup `dc1` as a child of `pgd`:
```
select bdr.create_node_group('dc1','pgd');
```
-### Adding the second node
+### Add the second node
-Log into host-two's Postgres server
+Log into the `host-two` Postgres server:
```
ssh admin@host-two
@@ -119,7 +119,7 @@ sudo -iu enterprisedb psql bdrdb
#### Create the second node
-We call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+Call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it:
```
select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
@@ -127,15 +127,15 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
#### Join the second node to the cluster
-Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group) we can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
+Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group), you can ask `node-two` to join the `dc1` group on `node-one`. The function takes as its first parameter the connection string of a node already in the group. It takes the group name as its second parameter.
```
select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
```
-### Adding the third node
+### Add the third node
-Log into host-three's Postgres server
+Log into the `host-three` Postgres server:
```
ssh admin@host-three
@@ -144,7 +144,7 @@ sudo -iu enterprisedb psql bdrdb
#### Create the third node
-We call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+Call the [`bdr.create_node`](/pgd/4/bdr/nodes/#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it:
```
select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
@@ -152,10 +152,10 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444');
#### Join the third node to the cluster
-Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group) we can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
+Using [`bdr.join_node_group`](/pgd/4/bdr/nodes/#bdrjoin_node_group), you can ask `node-three` to join the `dc1` group on `node-one`. The function takes as its first parameter the connection string of a node already in the group. It takes the group name as its second parameter.
```
select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
```
-We have now created a PGD cluster.
+You've now created a PGD cluster.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
index c8d93bf49bc..070d3ca1b83 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
@@ -7,62 +7,62 @@ deepToC: true
## Checking the cluster
-With the cluster up and running, it is worthwhile running some basic checks on how effectively it is replicating.
+With the cluster up and running, it's worthwhile to run some basic checks to see how effectively it's replicating.
-In the following example, we show one quick way to do this but you should ensure that any testing you perform is appropriate for your use case.
+The following example shows one quick way to do this, but make sure that any testing you perform is appropriate for your use case.
* **Preparation**
- * Ensure the cluster is ready
- * Log into the database on host-one/node-one
- * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`
- * When the query returns the cluster is ready
+ * Ensure the cluster is ready.
+ * Log into the database on `host-one`/`node-one`.
+ * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`.
+ * When the query returns, the cluster is ready.
* **Create data**
- The simplest way to test the cluster is replicating is to log into one node, create a table and populate it.
- * On node-one create a table
+ The simplest way to test the cluster is replicating is to log into one node, create a table, and populate it.
+ * On `node-one`, create a table:
```sql
CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
```
- * On node-one populate the table
+ * On `node-one`, populate the table:
```sql
INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
```
- * On node-one monitor performance
+ * On `node-one`, monitor performance:
```sql
select * from bdr.node_replication_rates;
```
- * On node-one get a sum of the value column (for checking)
+ * On `node-one`, get a sum of the value column (for checking):
```sql
select COUNT(*),SUM(value) from quicktest;
```
* **Check data**
- * Log into node-two
- Log into the database on host-two/node-two
- * On node-two get a sum of the value column (for checking)
+ * Log into `node-two`.
+ Log into the database on `host-two`/`node-two`.
+ * On `node-two`, get a sum of the value column (for checking):
```sql
select COUNT(*),SUM(value) from quicktest;
```
- * Compare with the result from node-one
- * Log into node-three
- Log into the database on host-three/node-three
- * On node-three get a sum of the value column (for checking)
+ * Compare with the result from `node-one`.
+ * Log into `node-three`.
+ Log into the database on `host-three`/`node-three`.
+ * On `node-three`, get a sum of the value column (for checking):
```sql
select COUNT(*),SUM(value) from quicktest;
```
- * Compare with the result from node-one and node-two
+ * Compare with the result from `node-one` and `node-two`.
## Worked example
### Preparation
-Log into host-one's Postgres server.
+Log into the `host-one` Postgres server:
```
ssh admin@host-one
sudo -iu enterprisedb psql bdrdb
```
-This is your connection to PGD's node-one.
+This is your connection to PGD's `node-one`.
#### Ensure the cluster is ready
@@ -74,7 +74,7 @@ select bdr.wait_slot_confirm_lsn(NULL, NULL)
This query will block while the cluster is busy initializing and return when the cluster is ready.
-In another window, log into host-two's Postgres server
+In another window, log into the `host-two` Postgres server:
```
ssh admin@host-two
@@ -85,7 +85,7 @@ sudo -iu enterprisedb psql bdrdb
#### On node-one create a table
-Run
+Run:
```sql
CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
@@ -97,7 +97,7 @@ CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
```
-This will generate a table of 10000 rows of random values.
+This command populates the table with 10,000 rows of random values.
#### On node-one monitor performance
@@ -107,7 +107,7 @@ As soon as possible, run:
select * from bdr.node_replication_rates;
```
-And you should see statistics on how quickly that data has been replicated to the other two nodes.
+This command returns statistics on how quickly that data was replicated to the other two nodes:
```console
bdrdb=# select * from bdr.node_replication_rates;
@@ -124,13 +124,13 @@ And it's already replicated.
#### On node-one get a checksum
-Run:
+To get some values from the generated data, run:
```sql
select COUNT(*),SUM(value) from quicktest;
```
-to get some values from the generated data:
+This command returns:
```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -149,17 +149,17 @@ ssh admin@host-two
sudo -iu enterprisedb psql bdrdb
```
-This is your connection to PGD's node-two.
+This is your connection to PGD's `node-two`.
#### On node-two get a checksum
-Run:
+To get the `node-two` values for the generated data, run:
```sql
select COUNT(*),SUM(value) from quicktest;
```
-to get node-two's values for the generated data:
+This command returns:
```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -172,27 +172,26 @@ __OUTPUT__
#### Compare with the result from node-one
-And the values will be identical.
+When you compare with the result from `node-one`, the values are identical.
-You can repeat the process with node-three, or generate new data on any node and see it replicate to the other nodes.
+You can repeat the process with `node-three` or generate new data on any node and see it replicate to the other nodes.
-#### Log into host-three's Postgres server.
+#### Log into host-three's Postgres server
```
ssh admin@host-two
sudo -iu enterprisedb psql bdrdb
```
-This is your connection to PGD's node-three.
+This is your connection to PGD's `node-three`.
#### On node-three get a checksum
-Run:
+To get the `node-three` values for the generated data, run:
```sql
select COUNT(*),SUM(value) from quicktest;
```
-
-to get node-three's values for the generated data:
+This command returns:
```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -205,6 +204,4 @@ __OUTPUT__
#### Compare with the result from node-one and node-two
-And the values will be identical.
-
-
+When you compare the results, the values are identical.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx b/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
index ff514ace3e9..c30e3e68911 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
@@ -6,11 +6,10 @@ deepToC: true
## Using PGD CLI
-
The PGD CLI client uses a configuration file to work out which hosts to connect to.
There are [options](../../cli/using_cli) that allow you to override this to use alternative configuration files or explicitly point at a server, but by default PGD CLI looks for a configuration file in preset locations.
-The connection to the database is authenticated in the same way as other command line utilities, like the psql command, are authenticated.
+The connection to the database is authenticated in the same way as for other command line utilities, like the psql command.
Unlike other commands, PGD CLI doesn't interactively prompt for your password. Therefore, you must pass your password using one of the following methods:
@@ -22,26 +21,26 @@ We recommend the first option, as the other options don't scale well with multip
### Configuring and connecting PGD CLI
-* Ensure PGD-CLI is installed
- * If PGD CLI has already been installed move to the next step.
+* Ensure PGD CLI is installed.
+ * If PGD CLI is already installed, move to the next step.
* For any system, repeat the [configure repositories](03-configuring-repositories) step on that system.
- * Then run the package installation command appropriate for that platform.
+ * Then run the package installation command appropriate for that platform:
* RHEL and derivatives: `sudo dnf install edb-pgd-cli`
- * Debian, Ubuntu and derivatives: `sudo apt-get install edb-pgd-cli`
-* Create a configuration file
- * YAML file which specifies the cluster and endpoints the PGD CLI application should use.
+ * Debian, Ubuntu, and derivatives: `sudo apt-get install edb-pgd-cli`
+* Create a configuration file.
+ * YAML file that specifies the cluster and endpoints for the PGD CLI application to use.
* Install the configuration file.
- * Copy the YAML configuraiton file to a default config directory /etc/edb/ as pgd-config.yml.
- * Repeat this on any system where you want to run PGD CLI.
+ * Copy the YAML configuration file to a default config directory `/etc/edb/` as `pgd-config.yml`.
+ * Repeat this step on any system where you want to run PGD CLI.
* Add `/usr/local/bin/` to the PATH for any user wanting to use PGD CLI.
- * Add adding `/usr/local/bin` to the path in to your `.bashrc` file
+ * Add `/usr/local/bin` to the PATH in your `.bashrc` file, as shown in the sketch after this list.
* Run pgd-cli with the `pgd` command.
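+As a minimal sketch of the PATH step, one way to make the `pgd` command available for a user is to append the path to that user's `.bashrc`:
+```
+echo 'export PATH=$PATH:/usr/local/bin' >> ~/.bashrc
+source ~/.bashrc
+```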
### Use PGD CLI to explore the cluster
* Check the health of the cluster with the `check-health` command.
* Show the nodes in the cluster with the `show-nodes` command.
-We go into more details of these command in the worked example below.
+More details of these commands are shown in the worked example that follows.
Also consult the [PGD CLI documentation](../../cli/) for details of other configuration options and a full command reference.
@@ -49,9 +48,9 @@ Also consult the [PGD CLI documentation](../../cli/) for details of other config
### Ensure PGD CLI is installed
-In this worked example, we will be configuring and using PGD CLI on host-one, where we've already installed Postgres and PGD.
-There is no need to install PGD CLI again. We will be using the `enterprisedb` account as this is already configured for access to Postgres.
-If you are not logged in as `enterprisedb` switch to it using:
+This worked example configures and uses PGD CLI on `host-one`, where Postgres and PGD are already installed.
+You don't need to install PGD CLI again. The example uses the `enterprisedb` account because it's already configured for access to Postgres.
+If you aren't logged in as `enterprisedb`, switch to it:
```
sudo -iu enterprisedb
@@ -59,11 +58,11 @@ sudo -iu enterprisedb
### Create a configuration file
-The PGD CLI configuration file is similar to the PGD proxy configuration filer.
-It is a YAML file which contains a cluster object. This has two properties:
+The PGD CLI configuration file is similar to the PGD Proxy configuration file.
+It's a YAML file that contains a cluster object. This has two properties:
-The name of the PGD cluster's top-level group (as `name`).
-An array of endpoints of databases (as `endpoints`).
+- The name of the PGD cluster's top-level group (as `name`)
+- An array of endpoints of databases (as `endpoints`)
```
cluster:
@@ -74,21 +73,21 @@ cluster:
- host=host-three dbname=bdrdb port=5444
```
-Note that the endpoints in this example specify port=5444.
+Note that the endpoints in this example specify `port=5444`.
This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and Community PostgreSQL, this can be set to `port=5432`.
+For EDB Postgres Extended and community PostgreSQL, this can be set to `port=5432`.
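+For reference, a complete file for this example, assuming the top-level group `pgd` and the three endpoints used throughout, looks something like this sketch:
+```
+cluster:
+  name: pgd
+  endpoints:
+    - host=host-one dbname=bdrdb port=5444
+    - host=host-two dbname=bdrdb port=5444
+    - host=host-three dbname=bdrdb port=5444
+```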
### Install the configuration file
-Create the PGD CLI configuration directory.
+Create the PGD CLI configuration directory:
```
sudo mkdir -p /etc/edb/
```
-Then write the configuration to the `pgd-config.yml` file in the `/etc/edb/` directory.
+Then, write the configuration to the `pgd-config.yml` file in the `/etc/edb/` directory.
-For our example, this could be run on host-one to create the file.
+For this example, you can run the following command on `host-one` to create the file:
```
cat <
Date: Wed, 10 Apr 2024 16:29:29 +0100
Subject: [PATCH 24/26] Fix two packages to one package
(Removal of proxy section)
---
.../pgd/4/admin-manual/installing/04-installing-software.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
index b176b9d89fb..d752fe6f18c 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
+++ b/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
@@ -62,7 +62,7 @@ If you're installing PGD with EDB Postgres Extended Server or community Postgres
The first step is to install the packages. For each Postgres package, there's an `edb-bdr4-` package to go with it.
For example, if you're installing EDB Postgres Advanced Server (epas) version 14, you install `edb-bdr4-epas14`.
-There are two other packages to also install:
+There is one other package to install:
- `edb-pgd-cli` for the PGD command line tool
From 256f8344a24c63fa3f6d476459c90bf3abe71435 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 24 Apr 2024 10:37:51 +0100
Subject: [PATCH 25/26] Positioned content, made requested changes
Signed-off-by: Dj Walker-Morgan
---
product_docs/docs/pgd/4/admin-manual/index.mdx | 18 ------------------
product_docs/docs/pgd/4/deployments/index.mdx | 10 ++++++----
.../manually}/01-provisioning-hosts.mdx | 0
.../manually}/02-install-postgres.mdx | 0
.../manually}/03-configuring-repositories.mdx | 0
.../manually}/04-installing-software.mdx | 0
.../manually}/05-creating-cluster.mdx | 0
.../manually}/06-check-cluster.mdx | 0
.../manually}/07-using-pgd-cli.mdx | 0
.../manually}/images/edbrepos2.0.png | 0
.../manually}/index.mdx | 9 +++++++--
11 files changed, 13 insertions(+), 24 deletions(-)
delete mode 100644 product_docs/docs/pgd/4/admin-manual/index.mdx
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/01-provisioning-hosts.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/02-install-postgres.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/03-configuring-repositories.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/04-installing-software.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/05-creating-cluster.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/06-check-cluster.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/07-using-pgd-cli.mdx (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/images/edbrepos2.0.png (100%)
rename product_docs/docs/pgd/4/{admin-manual/installing => deployments/manually}/index.mdx (84%)
diff --git a/product_docs/docs/pgd/4/admin-manual/index.mdx b/product_docs/docs/pgd/4/admin-manual/index.mdx
deleted file mode 100644
index c6116407b62..00000000000
--- a/product_docs/docs/pgd/4/admin-manual/index.mdx
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Manual Installation and Administration
-navTitle: Manually
----
-
-This section of the manual covers how to manually deploy and administer EDB Postgres Distributed 4.
-
-* [Installing](installing) works through the steps needed to:
- * Provision hosts
- * Install Postgres
- * Configure repositories
- * Install the PGD software
- * Create a cluster
- * Check a cluster
- * Install and use PGD CLI
-
-The installing section provides an example cluster which will be used in future examples.
-
diff --git a/product_docs/docs/pgd/4/deployments/index.mdx b/product_docs/docs/pgd/4/deployments/index.mdx
index 8a5c9f0110e..ac88b08e341 100644
--- a/product_docs/docs/pgd/4/deployments/index.mdx
+++ b/product_docs/docs/pgd/4/deployments/index.mdx
@@ -1,16 +1,18 @@
---
title: "Deployment options"
indexCards: simple
-
+navigation:
+- tpaexec
+- manually
---
You can deploy and install EDB Postgres Distributed products using the following methods:
-- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments.
+- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](/admin-tpa/installing/).
-- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
+- Manual installation is also available where TPA is not an option. Details of how to deploy PGD manually are in the [manual installation](/pgd/4/deployments/manually/) section of the documentation.
-Coming soon:
+- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postgres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
- EDB Postgres Distributed for Kubernetes will be a Kubernetes operator is designed, developed, and supported by EDB that covers the full lifecycle of a highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx b/product_docs/docs/pgd/4/deployments/manually/01-provisioning-hosts.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/01-provisioning-hosts.mdx
rename to product_docs/docs/pgd/4/deployments/manually/01-provisioning-hosts.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/4/deployments/manually/02-install-postgres.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/02-install-postgres.mdx
rename to product_docs/docs/pgd/4/deployments/manually/02-install-postgres.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx b/product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/03-configuring-repositories.mdx
rename to product_docs/docs/pgd/4/deployments/manually/03-configuring-repositories.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/04-installing-software.mdx
rename to product_docs/docs/pgd/4/deployments/manually/04-installing-software.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/4/deployments/manually/05-creating-cluster.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/05-creating-cluster.mdx
rename to product_docs/docs/pgd/4/deployments/manually/05-creating-cluster.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/4/deployments/manually/06-check-cluster.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/06-check-cluster.mdx
rename to product_docs/docs/pgd/4/deployments/manually/06-check-cluster.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx b/product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/07-using-pgd-cli.mdx
rename to product_docs/docs/pgd/4/deployments/manually/07-using-pgd-cli.mdx
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/images/edbrepos2.0.png b/product_docs/docs/pgd/4/deployments/manually/images/edbrepos2.0.png
similarity index 100%
rename from product_docs/docs/pgd/4/admin-manual/installing/images/edbrepos2.0.png
rename to product_docs/docs/pgd/4/deployments/manually/images/edbrepos2.0.png
diff --git a/product_docs/docs/pgd/4/admin-manual/installing/index.mdx b/product_docs/docs/pgd/4/deployments/manually/index.mdx
similarity index 84%
rename from product_docs/docs/pgd/4/admin-manual/installing/index.mdx
rename to product_docs/docs/pgd/4/deployments/manually/index.mdx
index b8a07786e38..d44954b6be0 100644
--- a/product_docs/docs/pgd/4/admin-manual/installing/index.mdx
+++ b/product_docs/docs/pgd/4/deployments/manually/index.mdx
@@ -1,5 +1,6 @@
---
-title: Deploying manually
+title: Deploying PGD 4 manually
+navTitle: Manually
navigation:
- 01-provisioning-hosts
- 02-install-postgres
@@ -11,7 +12,7 @@ navigation:
---
EDB offers automated PGD deployment using Trusted Postgres Architect (TPA) because it's generally more reliable than manual processes.
-Consult [Deploying with TPA](../../admin-tpa/installing.mdx) for full details on how to install TPA and use its automated best-practice-driven PGD deployment options.
+Consult [Deploying with TPA](../tpaexec/) for full details on how to install TPA and use its automated best-practice-driven PGD deployment options.
To complement automated installation, and to enable alternative installation and deployment processes, this section of the documentation looks at the basic operations needed to manually configure a three-node PGD 4 cluster (with a local subgroup) and PGD CLI.
@@ -20,6 +21,10 @@ This section includes, for completeness, instructions for installing PostgreSQL.
Each step is outlined and followed by a worked example with further detail.
This documentation is not a quick start but an exploration of PGD installation. It shows how to configure a basic deployment that will be used for additional examples of PGD administration tasks.
+!!! Note
+Installation of HARP proxies is not covered in this guide. For information on how to install HARP proxies, see the [HARP Proxy Installation Guide](/pgd/4/harp/03_installation).
+!!!
+
The examples deploy a 3-node cluster of EDB Postgres Advanced Server 14 on Red Hat Enterprise Linux 9. These instructions also apply to RHEL derivatives like Alma Linux, Rocky Linux, or Oracle Linux.
At the highest level, manually deploying PGD involves the following steps:
From a06d924633353c937830fb14334f5be8a333cb3c Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Wed, 24 Apr 2024 11:50:13 +0100
Subject: [PATCH 26/26] Fix tense in
product_docs/docs/pgd/4/deployments/index.mdx
---
product_docs/docs/pgd/4/deployments/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/4/deployments/index.mdx b/product_docs/docs/pgd/4/deployments/index.mdx
index ac88b08e341..3f85b9230f5 100644
--- a/product_docs/docs/pgd/4/deployments/index.mdx
+++ b/product_docs/docs/pgd/4/deployments/index.mdx
@@ -14,5 +14,5 @@ You can deploy and install EDB Postgres Distributed products using the following
- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
-- EDB Postgres Distributed for Kubernetes will be a Kubernetes operator is designed, developed, and supported by EDB that covers the full lifecycle of a highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.
+- EDB Postgres Distributed for Kubernetes is a Kubernetes operator designed, developed, and supported by EDB that covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.