From e88b6bd8b94f5a68a20b93cfc8f1651b9dade75a Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 9 Oct 2023 11:29:57 +0530
Subject: [PATCH 1/9] updated the content
---
.../biganimal/release/overview/02_high_availability.mdx | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
index d85a4212719..10ec6e78934 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
@@ -76,6 +76,13 @@ A true active-active solution that protects against regional failures, a two dat
![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/Multi-Region-3Nodes.png)
+
+### Cross-Cloud Service Providers (CSP) witness groups
+
+Cross-CSP witness groups are available with AWS, Azure, and GCP, on both BYOA and BAH. They're enabled by default and apply to both multi-region configurations available with PGD. The database price isn't charged for witness groups; you're charged only for the infrastructure they use. The cost of the cross-CSP witness group is reflected in the pricing estimate footer.
+
+By default, the CSP selected for the data groups is pre-selected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
+
## For more information
For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster/).
From c29c7614b6fdb0215f683fc4a1f7fab8493691c3 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 16 Oct 2023 18:45:12 +0530
Subject: [PATCH 2/9] Supported cluster types section broken into smaller
 sections
---
.../distributed_highavailability.mdx | 61 +++++++++++++++++++
.../overview/02_high_availability/index.mdx | 17 ++++++
.../primary_standby_highavailability.mdx | 27 ++++++++
.../02_high_availability/single_node.mdx | 9 +++
...ility.mdx => 02_high_availability_old.mdx} | 0
5 files changed, 114 insertions(+)
create mode 100644 product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
create mode 100644 product_docs/docs/biganimal/release/overview/02_high_availability/index.mdx
create mode 100644 product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
create mode 100644 product_docs/docs/biganimal/release/overview/02_high_availability/single_node.mdx
rename product_docs/docs/biganimal/release/overview/{02_high_availability.mdx => 02_high_availability_old.mdx} (100%)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
new file mode 100644
index 00000000000..abb18b813e7
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -0,0 +1,61 @@
+---
+title: "Distributed high availability"
+---
+
+Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/) using multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters offer the ability to deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide one region with three data nodes, another region with the same configuration, and one group with a witness node in a third region for a true active-active solution.
+
+Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
+
+Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. One of these data nodes is the leader at any given time, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+
+[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
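+The routing-and-failover behavior described above can be sketched in a few lines. This is an illustrative model only, not PGD Proxy code; `Node`, `elect_leader`, and `route_write` are hypothetical names standing in for the consensus and routing machinery:
+
```python
# Illustrative model of leader-based routing with re-election on failure.
# Not PGD Proxy code: Node, elect_leader, and route_write are hypothetical.

class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def elect_leader(nodes):
    # Stand-in for the distributed consensus step: the first healthy
    # data node becomes the write leader.
    for node in nodes:
        if node.healthy:
            return node
    raise RuntimeError("no healthy data node available")

def route_write(nodes, leader):
    # All application writes go to the leader; if the leader is
    # unavailable, elect a new one and redirect traffic to it.
    if not leader.healthy:
        leader = elect_leader(nodes)
    return leader

nodes = [Node("data-1"), Node("data-2"), Node("data-3")]
leader = elect_leader(nodes)         # "data-1" leads initially
nodes[0].healthy = False             # leader becomes unavailable
leader = route_write(nodes, leader)  # traffic is redirected to "data-2"
```
+
+In the real cluster, the decision comes from distributed consensus among the nodes — the witness contributes to the quorum but hosts no data — not from list order as in this sketch.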
+
+The witness node/witness group doesn't host data but exists for management purposes, supporting operations that require a consensus, for example, in case of an availability zone failure.
+
+!!!Note
+ Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed switchover](/pgd/latest/cli/command_ref/pgd_switchover/) feature which provides sub-second interruptions during planned lifecycle operations.
+
+## Single data location
+
+A single data location configuration has one data group and either:
+
+- two data nodes with one lead and one shadow, and a witness node each in separate availability zones
+
+ ![region(2 data + 1 witness) + (1 witness)](images/1dg-2dn-1wn-1wg.png)
+
+- three data nodes with one lead and two shadow nodes each in separate availability zones
+
+ ![region(3 data) + (1 witness)](images/1dg-3dn-1wg.png)
+
+## Multiple data locations
+
+A multiple data location configuration has two data groups that contain either:
+
+- Two data nodes
+
+ - A data node, shadow node, and a witness node in one region
+ - The same configuration in another region
+ - A witness node in a third region
+
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/2dn-1wn-2dn-1wn-1wg.png)
+
+- Three data nodes
+
+ - A data node and two shadow nodes in one region
+ - The same configuration in another region
+ - A witness node in a third region
+
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/eha.png)
+
+
+### Cross-Cloud Service Providers(CSP) witness groups
+
+Cross-CSP witness groups are available with AWS, Azure and GCP, on BYOA and BAH. It is enabled by default and applies to both multi-region configurations available with PGD. The database price is not charged for witness groups, it is charged for the infrastructure it uses. The cost of the cross-CSP witness group is reflected in the pricing estimate footer.
+
+By default, the CSP selected for the data groups is pre-selected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
+
+## For more information
+
+For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster/).
+
+For instructions on creating, retrieving information from, and managing a distributed high-availability cluster using the BigAnimal CLI, see [Using the BigAnimal CLI](/biganimal/latest/reference/cli/managing_clusters/#managing-distributed-high-availability-clusters).
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/index.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/index.mdx
new file mode 100644
index 00000000000..dedf22f9f31
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/index.mdx
@@ -0,0 +1,17 @@
+---
+title: "Supported cluster types"
+deepToC: true
+redirects:
+ - 02_high_availability
+navigation:
+- single_node
+- primary_standby_highavailability
+- distributed_highavailability
+---
+
+BigAnimal supports three cluster types:
+- [Single node](./single_node)
+- [Primary/standby high availability](./primary_standby_highavailability)
+- [Distributed high availability](./distributed_highavailability)
+
+You choose the type of cluster you want on the [Create Cluster](https://portal.biganimal.com/create-cluster) page in the [BigAnimal](https://portal.biganimal.com) portal.
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
new file mode 100644
index 00000000000..f50402f0d46
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
@@ -0,0 +1,27 @@
+---
+title: "Primary/standby high availability"
+---
+
+The primary/standby high availability option is provided to minimize downtime in case of failures. Primary/standby high-availability clusters—one *primary* and one or two *standby replicas*—are configured automatically, with standby replicas staying up to date through physical streaming replication.
+
+If read-only workloads are enabled, then standby replicas serve the read-only workloads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.
+
+In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a datacenter failure.
+
+In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
+
+![BigAnimal Cluster4](../images/high-availability.png)
+
+Incoming client connections are always routed to the current primary. In case of failure of the primary, a standby replica is promoted to primary, and new connections are routed to the new primary. When the old primary recovers, it rejoins the cluster as a standby replica.
+
+## Standby replicas
+
+By default, replication is synchronous to one standby replica and asynchronous to the other. That is, one standby replica must confirm that a transaction record was written to disk before the client receives acknowledgment of a successful commit.
+
+In a cluster with one primary and one replica (a two-node primary/standby high-availability cluster), you run the risk of the cluster being unavailable for writes because it doesn't have the same level of reliability as a three-node cluster. BigAnimal automatically disables synchronous replication during maintenance operations of a two-node cluster to ensure write availability. You can also change from the default synchronous replication for a two-node cluster to asynchronous replication on a per-session/per-transaction basis.
+
+In PostgreSQL terms, `synchronous_commit` is set to `on`, and `synchronous_standby_names` is set to `ANY 1 (replica-1, replica-2)`. You can modify this behavior on a per-transaction, per-session, per-user, or per-database basis with appropriate `SET` or `ALTER` commands.
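+As a sketch of the overrides described above — `synchronous_commit` is a standard PostgreSQL setting, but the role and database names here (`app_user`, `app_db`) are hypothetical:
+
```sql
-- Per-transaction or per-session: accept asynchronous commit for this session
SET synchronous_commit = off;

-- Per-user: applies to future sessions of this role (hypothetical role name)
ALTER ROLE app_user SET synchronous_commit = off;

-- Per-database (hypothetical database name)
ALTER DATABASE app_db SET synchronous_commit = off;
```
+
+Settings changed with `SET` last only for the current session (or the current transaction with `SET LOCAL`); the `ALTER` forms persist until reset.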
+
+Since BigAnimal replicates to only one node synchronously, some standby replicas in three-node clusters might experience replication lag. Also, if you override the BigAnimal synchronous replication configuration, then the standby replicas are inconsistent.
\ No newline at end of file
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/single_node.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/single_node.mdx
new file mode 100644
index 00000000000..cd95d96aed1
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/single_node.mdx
@@ -0,0 +1,9 @@
+---
+title: "Single node"
+---
+
+For nonproduction use cases where high availability isn't a primary concern, a cluster deployment with high availability not enabled provides one primary with no standby replicas for failover or read-only workloads.
+
+In case of unrecoverable failure of the primary, a restore from a backup is required.
+
+![BigAnimal Cluster4](../images/single-node.png)
\ No newline at end of file
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
similarity index 100%
rename from product_docs/docs/biganimal/release/overview/02_high_availability.mdx
rename to product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
From d0262eae86a9c6988d8794ddab1f9d0bc0337874 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 19 Oct 2023 11:36:04 +0530
Subject: [PATCH 3/9] Added images
---
.../overview/images/1dg-2dn-1wn-1wg.png | 3 +
.../overview/images/1dg-2dn-1wn-1wg.svg | 170 +++++++++++
.../release/overview/images/1dg-3dn-1wg.png | 3 +
.../release/overview/images/1dg-3dn-1wg.svg | 150 ++++++++++
.../overview/images/2dn-1wn-2dn-1wn-1wg.png | 3 +
.../overview/images/2dn-1wn-2dn-1wn-1wg.svg | 278 ++++++++++++++++++
.../biganimal/release/overview/images/EHA.svg | 240 +++++++++++++++
.../biganimal/release/overview/images/eha.png | 3 +
8 files changed, 850 insertions(+)
create mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
create mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
create mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
create mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
create mode 100644 product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
create mode 100644 product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.svg
create mode 100644 product_docs/docs/biganimal/release/overview/images/EHA.svg
create mode 100644 product_docs/docs/biganimal/release/overview/images/eha.png
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
new file mode 100644
index 00000000000..05d6c476bfe
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2c023d630df3366839c9c179869fa6dd7ade2a216fb6d41d2aaebdb6445ebc3
+size 175264
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
new file mode 100644
index 00000000000..42c2343832d
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
@@ -0,0 +1,170 @@
+
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
new file mode 100644
index 00000000000..5e056886c4e
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:037cc9f8e016a9ebb0a458c3db6866a6372395053c9302b250ee99f279dcc7f9
+size 183150
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
new file mode 100644
index 00000000000..2d5848c0448
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
@@ -0,0 +1,150 @@
+
diff --git a/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
new file mode 100644
index 00000000000..2e7e3b43e31
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2045016bae9699cb8ba41c68ab80631ef46d155edacb73e10bbbe5b5f5933cab
+size 233198
diff --git a/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.svg b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.svg
new file mode 100644
index 00000000000..8be40a52eb2
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.svg
@@ -0,0 +1,278 @@
+
diff --git a/product_docs/docs/biganimal/release/overview/images/EHA.svg b/product_docs/docs/biganimal/release/overview/images/EHA.svg
new file mode 100644
index 00000000000..b9e8ff12f8a
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/EHA.svg
@@ -0,0 +1,240 @@
+
diff --git a/product_docs/docs/biganimal/release/overview/images/eha.png b/product_docs/docs/biganimal/release/overview/images/eha.png
new file mode 100644
index 00000000000..b38c429dd45
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/eha.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4cf9c6fb91cf9fce3487bd3884947cf335096ceb9fe94de73825b86c19a60c85
+size 246162
From e1c9a14cfeb7a5222126b3220bccc9b67d1dcfde Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 19 Oct 2023 15:27:34 +0530
Subject: [PATCH 4/9] Updated images and cross-csp witness node content
---
.../distributed_highavailability.mdx | 23 ++-
.../overview/images/1dg-2dn-1wn-1wg.png | 3 -
.../overview/images/1dg-2dn-1wn-1wg.svg | 170 ------------------
.../release/overview/images/1dg-3dn-1wg.png | 3 -
.../release/overview/images/1dg-3dn-1wg.svg | 150 ----------------
.../overview/images/2dn-1wn-2dn-1wn-1wg.png | 4 +-
.../release/overview/images/image3.png | 3 +
.../release/overview/images/image5.png | 3 +
8 files changed, 22 insertions(+), 337 deletions(-)
delete mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
delete mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
delete mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
delete mode 100644 product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
create mode 100644 product_docs/docs/biganimal/release/overview/images/image3.png
create mode 100644 product_docs/docs/biganimal/release/overview/images/image5.png
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index abb18b813e7..5ec2054b7a4 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -21,23 +21,23 @@ A single data location configuration has one data group and either:
- two data nodes with one lead and one shadow, and a witness node each in separate availability zones
- ![region(2 data + 1 witness) + (1 witness)](images/1dg-2dn-1wn-1wg.png)
+ ![region(2 data + 1 witness)](../images/image5.png)
- three data nodes with one lead and two shadow nodes each in separate availability zones
- ![region(3 data) + (1 witness)](images/1dg-3dn-1wg.png)
+ ![region(3 data)](../images/image3.png)
-## Multiple data locations
+## Multiple data locations and witness node
A multiple data location configuration has two data groups that contain either:
-- Two data nodes
+- Two data nodes (not recommended for production)
- A data node, shadow node, and a witness node in one region
- The same configuration in another region
- A witness node in a third region
- ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/2dn-1wn-2dn-1wn-1wg.png)
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
- Three data nodes
@@ -45,14 +45,19 @@ A multiple data location configuration has two data groups that contain either:
- The same configuration in another region
- A witness node in a third region
- ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/eha.png)
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
-### Cross-Cloud Service Providers(CSP) witness groups
+### Cross-Cloud Service Providers(CSP) witness node
-Cross-CSP witness groups are available with AWS, Azure and GCP, on BYOA and BAH. It is enabled by default and applies to both multi-region configurations available with PGD. The database price is not charged for witness groups, it is charged for the infrastructure it uses. The cost of the cross-CSP witness group is reflected in the pricing estimate footer.
+By default, the cloud service provider selected for the data groups is pre-selected for the witness node.
+
+To further improve availability, siting a witness node on a different cloud service provider can enable the witness to be available to the cluster when there are only two regions available to the cluster on one cloud service provider.
+
+Users can change the cloud service provider of the witness node to be one of the other two cloud service provider.
+
+Cross-cloud service provider witness node are available with AWS, Azure and GCP, using your own cloud account and BigAnimal's cloud account. It is enabled by default and applies to both multi-region configurations available with PGD. The database price is not charged for witness groups, it is charged for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate footer.
-By default, the CSP selected for the data groups is pre-selected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
## For more information
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
deleted file mode 100644
index 05d6c476bfe..00000000000
--- a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a2c023d630df3366839c9c179869fa6dd7ade2a216fb6d41d2aaebdb6445ebc3
-size 175264
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg b/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
deleted file mode 100644
index 42c2343832d..00000000000
--- a/product_docs/docs/biganimal/release/overview/images/1dg-2dn-1wn-1wg.svg
+++ /dev/null
@@ -1,170 +0,0 @@
-
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
deleted file mode 100644
index 5e056886c4e..00000000000
--- a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:037cc9f8e016a9ebb0a458c3db6866a6372395053c9302b250ee99f279dcc7f9
-size 183150
diff --git a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg b/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
deleted file mode 100644
index 2d5848c0448..00000000000
--- a/product_docs/docs/biganimal/release/overview/images/1dg-3dn-1wg.svg
+++ /dev/null
@@ -1,150 +0,0 @@
-
diff --git a/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
index 2e7e3b43e31..59aaca86864 100644
--- a/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
+++ b/product_docs/docs/biganimal/release/overview/images/2dn-1wn-2dn-1wn-1wg.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:2045016bae9699cb8ba41c68ab80631ef46d155edacb73e10bbbe5b5f5933cab
-size 233198
+oid sha256:74d42cebb487b6b1805b444936e3b9081a59c5f6380f525fba195b5c57a98a73
+size 359697
diff --git a/product_docs/docs/biganimal/release/overview/images/image3.png b/product_docs/docs/biganimal/release/overview/images/image3.png
new file mode 100644
index 00000000000..0ac1deb47ff
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/image3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b240ec929c3a1720ca6c0a65bd899efffb59398283e0f7bf2404ca6b9cdf7e06
+size 154571
diff --git a/product_docs/docs/biganimal/release/overview/images/image5.png b/product_docs/docs/biganimal/release/overview/images/image5.png
new file mode 100644
index 00000000000..bc941d77f68
--- /dev/null
+++ b/product_docs/docs/biganimal/release/overview/images/image5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea69e7751e7f7acdc0bb053221d84763bbde6fde028529d6b8fb764ab53382b6
+size 148677
From 67b6cfb30e7131a5a853bec95170e27a1f45acd0 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 19 Oct 2023 12:54:46 -0400
Subject: [PATCH 5/9] edits to new high availability content
---
.../distributed_highavailability.mdx | 37 +++++++++++--------
.../primary_standby_highavailability.mdx | 6 +--
.../overview/02_high_availability_old.mdx | 6 +--
3 files changed, 27 insertions(+), 22 deletions(-)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 5ec2054b7a4..9c39790b61a 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -2,36 +2,42 @@
title: "Distributed high availability"
---
-Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/) using multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters offer the ability to deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide one region with three data nodes, another region with the same configuration, and one group with a witness node in a third region for a true active-active solution.
+Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide:
+
+- One region with three data nodes
+- Another region with the same configuration
+- One group with a witness node in a third region
+
+This configuration provides a true active-active solution.
Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
-Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. One of these data nodes is the leader at any given time, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes, or two data nodes and one witness node. At any given time, one of these data nodes is the leader, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
-The witness node/witness group doesn't host data but exists for management purposes, supporting operations that require a consensus, for example, in case of an availability zone failure.
+The witness node/witness group doesn't host data but exists for management purposes. It supports operations that require a consensus, for example, in case of an availability zone failure.
!!!Note
- Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed switchover](/pgd/latest/cli/command_ref/pgd_switchover/) feature which provides sub-second interruptions during planned lifecycle operations.
+ Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed switchover](/pgd/latest/cli/command_ref/pgd_switchover/) feature, which provides subsecond interruptions during planned lifecycle operations.
## Single data location
-A single data location configuration has one data group and either:
+A configuration with a single data location has one data group and either:
-- two data nodes with one lead and one shadow, and a witness node each in separate availability zones
+- Two data nodes (one lead and one shadow) and a witness node, each in separate availability zones
![region(2 data + 1 witness)](../images/image5.png)
-- three data nodes with one lead and two shadow nodes each in separate availability zones
+- Three data nodes (one lead and two shadows), each in separate availability zones
![region(3 data)](../images/image3.png)
## Multiple data locations and witness node
-A multiple data location configuration has two data groups that contain either:
+A configuration with multiple data locations has two data groups that contain either:
-- Two data nodes (not recommended for production)
+- Two data nodes (not recommended for production):
- A data node, shadow node, and a witness node in one region
- The same configuration in another region
@@ -39,7 +45,7 @@ A multiple data location configuration has two data groups that contain either:
![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
-- Three data nodes
+- Three data nodes:
- A data node and two shadow nodes in one region
- The same configuration in another region
@@ -48,16 +54,15 @@ A multiple data location configuration has two data groups that contain either:
![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
-### Cross-Cloud Service Providers(CSP) witness node
-
-By default, the cloud service provider selected for the data groups is pre-selected for the witness node.
+### Cross-cloud service providers (CSP) witness node
-To further improve availability, siting a witness node on a different cloud service provider can enable the witness to be available to the cluster when there are only two regions available to the cluster on one cloud service provider.
+By default, the cloud service provider selected for the data groups is preselected for the witness node.
-Users can change the cloud service provider of the witness node to be one of the other two cloud service provider.
+To further improve availability, you can site a witness node on a different cloud service provider. This configuration can enable the witness to be available to the cluster when only two regions are available to the cluster on one cloud service provider.
-Cross-cloud service provider witness node are available with AWS, Azure and GCP, using your own cloud account and BigAnimal's cloud account. It is enabled by default and applies to both multi-region configurations available with PGD. The database price is not charged for witness groups, it is charged for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate footer.
+Users can change the cloud service provider of the witness node to be one of the other two cloud service providers.
+Cross-cloud service provider witness nodes are available with AWS, Azure, and GCP using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. The database price isn't charged for witness groups but for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate at the bottom of your BigAnimal portal.
## For more information
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
index f50402f0d46..f644e3998b7 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/primary_standby_highavailability.mdx
@@ -6,7 +6,7 @@ The Primary/Standby High Availability option is provided to minimize downtime in
If read-only workloads are enabled, then standby replicas serve the read-only workloads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.
-In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a datacenter failure.
+In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a data center failure.
In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
@@ -18,9 +18,9 @@ Incoming client connections are always routed to the current primary. In case of
By default, replication is synchronous to one standby replica and asynchronous to the other. That is, one standby replica must confirm that a transaction record was written to disk before the client receives acknowledgment of a successful commit.
-In a cluster with one primary and one replica (a two-node primary/standby high-availability cluster), you run the risk of the cluster being unavailable for writes because it doesn't have the same level of reliability as a three-node cluster. BigAnimal automatically disables synchronous replication during maintenance operations of a two-node cluster to ensure write availability. You can also change from the default synchronous replication for a two-node cluster to asynchronous replication on a per-session/per-transaction basis.
+In a cluster with one primary and one replica (a two-node primary/standby high-availability cluster), you run the risk of the cluster being unavailable for writes because it doesn't have the same level of reliability as a three-node cluster. BigAnimal disables synchronous replication during maintenance operations of a two-node cluster to ensure write availability. You can also change from the default synchronous replication for a two-node cluster to asynchronous replication on a per-session or per-transaction basis.
-In PostgreSQL terms, `synchronous_commit` is set to `on`, and `synchronous_standby_names` is set to `ANY 1 (replica-1, replica-2)`. You can modify this behavior on a per-transaction, per-session, per-user, or per-database basis with appropriate `SET` or `ALTER` commands.
+In PostgreSQL terms, `synchronous_commit` is set to `on`, and `synchronous_standby_names` is set to `ANY 1 (replica-1, replica-2)`. You can modify this behavior on a per-transaction, per-session, per-user, or per-database basis using `SET` or `ALTER` commands.
To ensure write availability, BigAnimal disables synchronous replication during maintenance operations of a two-node cluster.
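The per-transaction, per-session, per-user, and per-database overrides described above can be sketched in SQL. This is a minimal illustration only: the table, role, and database names are hypothetical, while the `synchronous_commit` setting and the `SET`/`ALTER` forms are standard PostgreSQL:

```sql
-- Per-transaction: relax durability for this transaction only.
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK.
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO app_events (payload) VALUES ('low-value event');  -- hypothetical table
COMMIT;

-- Per-session: applies until the connection closes or the setting is RESET.
SET synchronous_commit = off;

-- Per-user and per-database defaults:
ALTER ROLE batch_loader SET synchronous_commit = off;    -- hypothetical role
ALTER DATABASE analytics SET synchronous_commit = off;   -- hypothetical database
```

Transactions committed with `synchronous_commit = off` can be lost on failover, so reserve this override for data you can afford to replay.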
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
index 10ec6e78934..8f19cc4f620 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
@@ -77,11 +77,11 @@ A true active-active solution that protects against regional failures, a two dat
![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/Multi-Region-3Nodes.png)
-### Cross-Cloud Service Providers(CSP) witness groups
+### Cross-cloud service providers (CSP) witness groups
-Cross-CSP witness groups are available with AWS, Azure and GCP, on BYOA and BAH. It is enabled by default and applies to both multi-region configurations available with PGD. The database price is not charged for witness groups, it is charged for the infrastructure it uses. The cost of the cross-CSP witness group is reflected in the pricing estimate footer.
+Cross-CSP witness groups are available with AWS, Azure, and GCP on BYOA and BAH. They're enabled by default and apply to both multi-region configurations available with PGD. The database price isn't charged for witness groups but for the infrastructure it uses. The cost of the cross-CSP witness group is reflected in the pricing estimate at the bottom of the BigAnimal page.
-By default, the CSP selected for the data groups is pre-selected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
+By default, the CSP selected for the data groups is preselected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
## For more information
From b0094b5bf92a1ae92315b476d560c929715c9ae4 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 17 Nov 2023 12:58:57 +0530
Subject: [PATCH 6/9] Edits done as per suggestions from Amrita
---
.../distributed_highavailability.mdx | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 9c39790b61a..86a94c2fa56 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -4,15 +4,14 @@ title: "Distributed high availability"
Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide:
-- One region with three data nodes
-- Another region with the same configuration
-- One group with a witness node in a third region
+- Two data groups with a witness group in a third region
+- Three data groups
-This configuration provides a true active-active solution.
+This configuration provides a true active-active solution as each data group is configured to accept writes.
Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
-Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three or two data nodes and one witness node. At any given time, one of these data nodes is the leader, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+Distributed high-availability clusters contain one, two, or three data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
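The commit scopes mentioned above are defined in SQL on the cluster. The following is a hedged sketch, not a definitive configuration: the scope and node group names are hypothetical, and the exact rule grammar should be verified against the PGD commit scopes documentation for your version:

```sql
-- Illustrative only: 'two_node_scope' and 'dc1_subgroup' are hypothetical names.
-- bdr.add_commit_scope registers a named durability rule for a node group.
SELECT bdr.add_commit_scope(
    commit_scope_name := 'two_node_scope',
    origin_node_group := 'dc1_subgroup',
    rule := 'ANY 1 (dc1_subgroup) GROUP COMMIT'
);
```

Applications can then opt into the scope per transaction (for example, via `SET LOCAL bdr.commit_scope`), trading durability guarantees for availability in the two-data-node configuration.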
@@ -37,7 +36,7 @@ A configuration with single data location has one data group and either:
A configuration with multiple data locations has two data groups that contain either:
-- Two data nodes (not recommended for production):
+- Two data groups (not recommended for production):
- A data node, shadow node, and a witness node in one region
- The same configuration in another region
@@ -45,7 +44,7 @@ A configuration with multiple data locations has two data groups that contain ei
![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
-- Three data nodes:
+- Three data groups:
- A data node and two shadow nodes in one region
- The same configuration in another region
@@ -58,11 +57,11 @@ A configuration with multiple data locations has two data groups that contain ei
By default, the cloud service provider selected for the data groups is preselected for the witness node.
-To further improve availability, you can site a witness node on a different cloud service provider. This configuration can enable the witness to be available to the cluster when only two regions are available to the cluster on one cloud service provider.
+To further improve availability, you can designate a witness node on a different cloud service provider than the data groups. This configuration can enable a three-region configuration even if a single cloud provider only offers two regions in the jurisdiction you are allowed to deploy your cluster in.
-Users can change the cloud service provider of the witness node to be one of the other two cloud service providers.
+You can also change the cloud service provider of the witness node to be one of the other two cloud service providers.
-Cross-cloud service provider witness nodes are available with AWS, Azure, and GCP using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. The database price isn't charged for witness groups but for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate at the bottom of your BigAnimal portal.
+Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. The database price isn't charged for witness nodes but for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate at the bottom of your BigAnimal portal.
## For more information
From bb34b341e723867bbe562bf1373fea1eddaa7ea3 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 27 Nov 2023 15:18:35 +0530
Subject: [PATCH 7/9] Edited content as per feedback from Natalia
---
.../02_high_availability/distributed_highavailability.mdx | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 86a94c2fa56..7c2a40822b5 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -57,11 +57,11 @@ A configuration with multiple data locations has two data groups that contain ei
By default, the cloud service provider selected for the data groups is preselected for the witness node.
-To further improve availability, you can designate a witness node on a different cloud service provider than the data groups. This configuration can enable a three-region configuration even if a single cloud provider only offers two regions in the jurisdiction you are allowed to deploy your cluster in.
+To guard against cloud service provider failures, you can designate a witness node on a different cloud service provider than the data groups. This configuration can enable a three-region configuration even if a single cloud provider only offers two regions in the jurisdiction you are allowed to deploy your cluster in.
You can also change the cloud service provider of the witness node to be one of the other two cloud service providers.
-Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. The database price isn't charged for witness nodes but for the infrastructure it uses. The cost of the cross-CSP witness node is reflected in the pricing estimate at the bottom of your BigAnimal portal.
+Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. For witness nodes, you pay only for the infrastructure they use, which is reflected in the pricing estimate.
## For more information
From 1f8d0904893c6380fd323ee631401f51c6f70723 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 27 Nov 2023 16:02:31 +0530
Subject: [PATCH 8/9] Removed three data groups as per feedback from Natalia
---
.../distributed_highavailability.mdx | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 7c2a40822b5..7a020e3abf0 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -11,7 +11,7 @@ This configuration provides a true active-active solution as each data group is
Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
-Distributed high-availability clusters contain one, two, or three data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
@@ -36,22 +36,21 @@ A configuration with single data location has one data group and either:
A configuration with multiple data locations has two data groups that contain either:
-- Two data groups (not recommended for production):
+- Three data nodes:
- - A data node, shadow node, and a witness node in one region
+ - A data node and two shadow nodes in one region
- The same configuration in another region
- A witness node in a third region
- ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
-- Three data groups:
+- Two data nodes (not recommended for production):
- - A data node and two shadow nodes in one region
+ - A data node, shadow node, and a witness node in one region
- The same configuration in another region
- A witness node in a third region
- ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
-
+ ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
### Cross-cloud service providers (CSP) witness node
From a8837d1ce02529ece4a3ec4a9a07f75db68d578a Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 29 Nov 2023 19:19:22 +0530
Subject: [PATCH 9/9] removed old file and edited content as per feedback from
Amrita
---
.../distributed_highavailability.mdx | 7 +-
.../overview/02_high_availability_old.mdx | 90 -------------------
2 files changed, 1 insertion(+), 96 deletions(-)
delete mode 100644 product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 7a020e3abf0..72fc2320289 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -2,10 +2,7 @@
title: "Distributed high availability"
---
-Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide:
-
-- Two data groups with a witness group in a third region
-- Three data groups
+Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide two data groups with a witness group in a third region.
This configuration provides a true active-active solution as each data group is configured to accept writes.
@@ -58,8 +55,6 @@ By default, the cloud service provider selected for the data groups is preselect
To guard against cloud service provider failures, you can designate a witness node on a different cloud service provider than the data groups. This configuration can enable a three-region configuration even if a single cloud provider only offers two regions in the jurisdiction you are allowed to deploy your cluster in.
-You can also change the cloud service provider of the witness node to be one of the other two cloud service providers.
-
Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. For witness nodes, you pay only for the infrastructure they use, which is reflected in the pricing estimate.
## For more information
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
deleted file mode 100644
index 8f19cc4f620..00000000000
--- a/product_docs/docs/biganimal/release/overview/02_high_availability_old.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: "Supported cluster types"
-deepToC: true
-redirects:
- - 02_high_availibility
----
-
-BigAnimal supports three cluster types:
-- Single node
-- Primary/standby high availability
-- Distributed high availability
-
-You choose the type of cluster you want on the [Create Cluster](https://portal.biganimal.com/create-cluster) page in the [BigAnimal](https://portal.biganimal.com) portal.
-
-
-## Single node
-
-For nonproduction use cases where high availability isn't a primary concern, a cluster deployment with high availability not enabled provides one primary with no standby replicas for failover or read-only workloads.
-
-In case of unrecoverable failure of the primary, a restore from a backup is required.
-
-![BigAnimal Cluster4](images/single-node.png)
-
-## Primary/standby high availability
-
-The Primary/Standby High Availability option is provided to minimize downtime in cases of failures. Primary/standby high-availability clusters—one *primary* and one or two *standby replicas*—are configured automatically, with standby replicas staying up to date through physical streaming replication.
-
-If read-only workloads are enabled, then standby replicas serve the read-only workloads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.
-
-In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a datacenter failure.
-
-In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
-
-![BigAnimal Cluster4](images/high-availability.png)
-
-Incoming client connections are always routed to the current primary. In case of failure of the primary, a standby replica is promoted to primary, and new connections are routed to the new primary. When the old primary recovers, it rejoins the cluster as a standby replica.
-
-### Standby replicas
-
-By default, replication is synchronous to one standby replica and asynchronous to the other. That is, one standby replica must confirm that a transaction record was written to disk before the client receives acknowledgment of a successful commit.
-
-In a cluster with one primary and one replica (a two-node primary/standby high-availability cluster), you run the risk of the cluster being unavailable for writes because it doesn't have the same level of reliability as a three-node cluster. BigAnimal automatically disables synchronous replication during maintenance operations of a two-node cluster to ensure write availability. You can also change from the default synchronous replication for a two-node cluster to asynchronous replication on a per-session/per-transaction basis.
-
-In PostgreSQL terms, `synchronous_commit` is set to `on`, and `synchronous_standby_names` is set to `ANY 1 (replica-1, replica-2)`. You can modify this behavior on a per-transaction, per-session, per-user, or per-database basis with appropriate `SET` or `ALTER` commands.
-
-To ensure write availability, BigAnimal disables synchronous replication during maintenance operations of a two-node cluster.
-
-Since BigAnimal replicates to only one node synchronously, some standby replicas in three-node clusters might experience replication lag. Also, if you override the BigAnimal synchronous replication configuration, then the standby replicas are inconsistent.
-
-## Distributed high availability
-
-Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/) using multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters offer the ability to deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide one region with three data nodes, another region with the same configuration, and one group with a witness node in a third region for a true active-active solution.
-
-Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
-
-Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. One of these data nodes is the leader at any given time, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
-
-[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
-
-The witness node/witness group doesn't host data but exists for management purposes, supporting operations that require a consensus, for example, in case of an availability zone failure.
-
-!!!Note
- Operations against a distributed high-availability cluster leverage the [EDB Postgres Distributed switchover](/pgd/latest/cli/command_ref/pgd_switchover/) feature which provides sub-second interruptions during planned lifecycle operations.
-
-### Single data location
-
-A single data location configuration has three data nodes with one lead and two shadow nodes each in separate availability zones.
-
-### Two data locations and witness
-
-A true active-active solution that protects against regional failures, a two data locations configuration has:
-
-- A data node, shadow node, and a witness node in one region
-- The same configuration in another region
-- A witness node in a third region
-
- ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](images/Multi-Region-3Nodes.png)
-
-
-### Cross-cloud service providers (CSP) witness groups
-
-Cross-CSP witness groups are available with AWS, Azure, and GCP on BYOA and BAH. They're enabled by default and apply to both multi-region configurations available with PGD. The database price isn't charged for witness groups but for the infrastructure it uses. The cost of the cross-CSP witness group is reflected in the pricing estimate at the bottom of the BigAnimal page.
-
-By default, the CSP selected for the data groups is preselected for the witness group. Users can change the CSP of the witness group to be one of the other two CSPs.
-
-## For more information
-
-For instructions on creating a distributed high-availability cluster using the BigAnimal portal, see [Creating a distributed high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster/).
-
-For instructions on creating, retrieving information from, and managing a distributed high-availability cluster using the BigAnimal CLI, see [Using the BigAnimal CLI](/biganimal/latest/reference/cli/managing_clusters/#managing-distributed-high-availability-clusters).