From 4eda83a1f23f03bf4081c7a6e7f71128c1e069c4 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 27 Nov 2023 16:02:31 +0530
Subject: [PATCH] Removed three data groups as per feedback from Natalia

---
 .../distributed_highavailability.mdx          | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
index 7c2a40822b5..7a020e3abf0 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability/distributed_highavailability.mdx
@@ -11,7 +11,7 @@ This configuration provides a true active-active solution as each data group is
 
 Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.
 
-Distributed high-availability clusters contain one, two, or three data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. At any given time, one of these data nodes is the leader and accepts writes, while the rest are referred to as [shadow nodes](/pgd/latest/terminology/#write-leader). We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
 
 [PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.
 
@@ -36,22 +36,21 @@ A configuration with single data location has one data group and either:
 
 A configuration with multiple data locations has two data groups that contain either:
 
-- Two data groups (not recommended for production):
+- Three data nodes:
 
-  - A data node, shadow node, and a witness node in one region
+  - A data node and two shadow nodes in one region
   - The same configuration in another region
   - A witness node in a third region
 
-  ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
+  ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
 
-- Three data groups:
+- Two data nodes (not recommended for production):
 
-  - A data node and two shadow nodes in one region
+  - A data node, shadow node, and a witness node in one region
   - The same configuration in another region
   - A witness node in a third region
 
-  ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/eha.png)
-
+  ![region(2 data + 1 shadow) + region(2 data + 1 shadow) + region(1 witness)](../images/2dn-1wn-2dn-1wn-1wg.png)
 
 ### Cross-cloud service providers (CSP) witness node
 