From 0b2d0810a25439d38925e827b74a99180c59fc71 Mon Sep 17 00:00:00 2001
From: gvasquezvargas
Date: Tue, 25 Jun 2024 16:52:50 +0200
Subject: [PATCH] implementing feedback from DJ, general fixes

---
 .../release/getting_started/creating_a_cluster/index.mdx  | 2 +-
 .../02_high_availability/distributed_highavailability.mdx | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
index 86715216a82..db338e7a51d 100644
--- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
@@ -42,7 +42,7 @@ The following options aren't available when creating your cluster:
 
  - [Primary/Standby High Availability](../../overview/02_high_availability/primary_standby_highavailability/) creates a cluster with one primary and one or two standby replicas in different availability zones. You can create primary/standby high-availability clusters running PostgreSQL, EDB Postgres Extended Server or EDB Postgres Advanced Server. If you enable read-only workloads, then you might have to raise the IP address resource limits for the cluster.
 
- - [Distributed High Availability](../../overview/02_high_availability/distributed_highavailability/) creates a cluster, powered by EDB Postgres Distributed, with up to two data groups spread across multiple cloud regions to deliver higher performance and faster recovery. You can create distributed high availability clusters running PostgreSQL, EDB Postgres Extended Server or EDB Postgres Advanced Server. If you enable read-only workloads, then you might have to raise the IP address resource limits for the cluster. See [Creating a distributed high-availability cluster](creating_a_dha_cluster) for instructions.
+ - [Distributed High Availability](../../overview/02_high_availability/distributed_highavailability/) creates a cluster, powered by EDB Postgres Distributed, with up to two data groups spread across multiple cloud regions to deliver higher performance and faster recovery. You can create distributed high availability clusters running PostgreSQL, EDB Postgres Extended Server or EDB Postgres Advanced Server. See [Creating a distributed high-availability cluster](creating_a_dha_cluster) for instructions.
 
 See [Supported cluster types](/biganimal/latest/overview/02_high_availability/) for more information about the different cluster types.
 
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index f499f08d277..e5c32678cd1 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -8,7 +8,7 @@ This configuration provides a true active-active solution as each data group is
 
 Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
 
-Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. &#13;At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes in each group is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
 
 [PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
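
As a quick sketch of how an application typically uses the routing model described in the PGD Proxy paragraph: the application connects to the PGD Proxy endpoint rather than to an individual data node, and the proxy forwards the session to the current write leader. The example below is illustrative only and assumes the `psycopg` driver; the hostname, port, database, credentials, and `orders` table are placeholders, not values taken from these docs.

```python
# Illustrative sketch only. The endpoint, credentials, and table are
# placeholders; in a real deployment, use the connection details that
# BigAnimal provides for the cluster.
import psycopg

# Connect to the PGD Proxy endpoint, not to an individual data node.
# The proxy routes the session to the current write leader and, after a
# failover, routes new connections to the newly elected leader.
DSN = "host=pgd-proxy.example.internal port=6432 dbname=appdb user=app_user"

with psycopg.connect(DSN) as conn:
    with conn.cursor() as cur:
        # An ordinary write; the application doesn't need to track which
        # node currently holds the write-leader role.
        cur.execute(
            "INSERT INTO orders (customer_id, total) VALUES (%s, %s)",
            (42, 199.99),
        )
    # Leaving the connection block commits the transaction (psycopg 3 behavior).
```

Because routing happens at the proxy, a failover changes which node serves this connection string, not the connection string the application uses.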