From e34837726b21b5c000650bc1b395319c9bb8951a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 21 May 2024 17:52:15 +0100 Subject: [PATCH 1/2] PGD 5 docs standardized on Always-on architectures. Signed-off-by: Dj Walker-Morgan --- .../deploy-config/deploy-biganimal/index.mdx | 2 +- .../deploy-tpa/deploying/index.mdx | 2 +- .../pgd/5/deploy-config/deploy-tpa/index.mdx | 2 +- .../docs/pgd/5/planning/architectures.mdx | 24 +++++++++---------- .../quickstart/further_explore_conflicts.mdx | 2 +- .../docs/pgd/5/quickstart/quick_start_aws.mdx | 4 ++-- product_docs/docs/pgd/5/terminology.mdx | 2 +- 7 files changed, 19 insertions(+), 19 deletions(-) diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx index d9766e5b874..97810ab124e 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx @@ -6,7 +6,7 @@ redirects: - /pgd/latest/admin-biganimal/ #generated for pgd deploy-config-planning reorg --- -EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account or BigAnimal's cloud account, where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always On clusters. +EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account or BigAnimal's cloud account, where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters. 
This section covers how to work with EDB Postgres Distributed when deployed on BigAnimal. diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/index.mdx index 9fe74716323..9a15b43292a 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/index.mdx @@ -22,7 +22,7 @@ This applies to physical and virtual machines, both self-hosted and in the cloud !!! Note Get started with TPA and PGD quickly - If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-On cluster on Docker. + If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx index 4b6f2ff132d..70bbf1bb7b1 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx @@ -11,7 +11,7 @@ both self-hosted and in the cloud (with AWS EC2). !!! Note Get started with TPA and PGD quickly - If you want to experiment with a local deployment as quickly as possible, you can [deploying an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-On cluster on Docker. 
+ If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker. If deploying to the cloud is your aim, you can [deploying an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account. diff --git a/product_docs/docs/pgd/5/planning/architectures.mdx b/product_docs/docs/pgd/5/planning/architectures.mdx index c23f9feeaa4..5a63bb78a5b 100644 --- a/product_docs/docs/pgd/5/planning/architectures.mdx +++ b/product_docs/docs/pgd/5/planning/architectures.mdx @@ -8,7 +8,7 @@ redirects: - /pgd/latest/architectures/ --- -Always On architectures reflect EDB’s Trusted Postgres architectures. They +Always-on architectures reflect EDB’s Trusted Postgres architectures. They encapsulate practices and help you to achieve the highest possible service availability in multiple configurations. These configurations range from single-location architectures to complex distributed systems that protect from @@ -21,16 +21,16 @@ described here. Use-case-specific variations have been successfully deployed in production. However, these variations must undergo rigorous architecture review first. -Always On architectures can be deployed using EDB’s standard deployment tool +Always-on architectures can be deployed using EDB’s standard deployment tool Trusted Postgres Architect (TPA) or configured manually. -## Standard EDB Always On architectures +## Standard EDB Always-on architectures EDB has identified a set of standardized architectures to support single- or multi-location deployments with varying levels of redundancy, depending on your recovery point objective (RPO) and recovery time objective (RTO) requirements. -The Always ON architecture uses three database node groups as a basic building block.
+The Always-on architecture uses three database node groups as a basic building block. You can also use a five-node group for extra redundancy. EDB Postgres Distributed consists of the following major building blocks: @@ -40,7 +40,7 @@ EDB Postgres Distributed consists of the following major building blocks: - PGD Proxy — A connection router that makes sure the application is connected to the right data nodes. -All Always On architectures protect an increasing range of failure situations. +All Always-on architectures protect against an increasing range of failure situations. For example, a single active location with two data nodes protects against local hardware failure but doesn't provide protection from location (data center or availability zone) failure. Extending that architecture with a backup @@ -75,7 +75,7 @@ requires an odd number of nodes to make decisions using a [Raft](https://raft.gi consensus model. Thus, even the simpler architectures always have three nodes, even if not all of them are storing data. -Applications connect to the standard Always On architectures by way of multi-host +Applications connect to the standard Always-on architectures by way of multi-host connection strings, where each PGD Proxy server is a distinct entry in the multi-host connection string. You must always have at least two proxy nodes in each location to ensure high availability. You can colocate the proxy with the @@ -83,11 +83,11 @@ database instance, in which case we recommend putting the proxy on every data node. Other connection mechanisms have been successfully deployed in production. However, -they aren't part of the standard Always On architectures. +they aren't part of the standard Always-on architectures.
-### Always On Single Location +### Always-on Single Location -![Always On 1 Location, 3 Nodes Diagram](images/always_on_1x3_updated.png) +![Always-on 1 Location, 3 Nodes Diagram](images/always_on_1x3_updated.png) * Additional replication between data nodes 1 and 3 isn't shown but occurs as part of the replication mesh * Redundant hardware to quickly restore from local failures @@ -104,9 +104,9 @@ they aren't part of the standard Always On architectures. * Postgres Enterprise Manager (PEM) for monitoring (not depicted) * Can be shared by multiple PGD clusters -### Always On multi-location +### Always-on multi-location -![Always On 2 Locations, 3 Nodes Per Location, Active/Active Diagram](images/always_on_2x3_aa_updated.png) +![Always-on 2 Locations, 3 Nodes Per Location, Active/Active Diagram](images/always_on_2x3_aa_updated.png) * Application can be Active/Active in each location or can be Active/Passive or Active DR with only one location taking writes. * Additional replication between data nodes 1 and 3 isn't shown but occurs as part of the replication mesh. @@ -133,7 +133,7 @@ All architectures provide the following: * Zero downtime upgrades * Support for availability zones in public/private cloud -Use these criteria to help you to select the appropriate Always On architecture. +Use these criteria to help you to select the appropriate Always-on architecture. 
| | Single  Data Location | Two  Data  Locations | Two  Data Locations  + Witness | Three or More Data Locations | |------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------| diff --git a/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx b/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx index 01d0c8fbf64..5c56c99a851 100644 --- a/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx +++ b/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx @@ -9,7 +9,7 @@ In a multi-master architecture like PGD, conflicts happen. PGD is built to handl A conflict can occur when one database node has an update from an application to a row and another node has a different update to the same row. This type of conflict is called a *row-level conflict*. Conflicts aren't errors. Resolving them effectively is core to how Postgres Distributed maintains consistency. -The best way to handle conflicts is not to have them in the first place! Use PGD's Always-On architecture with proxies to ensure that your applications write to the same server in the cluster. +The best way to handle conflicts is not to have them in the first place! Use PGD's Always-on architecture with proxies to ensure that your applications write to the same server in the cluster. When conflicts occur, though, it's useful to know how PGD resolves them, how you can control that resolution, and how you can find out that they're happening. Row insertion and row updates are two actions that can cause conflicts. 
diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx index 524911f1141..079211cebdc 100644 --- a/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx +++ b/product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx @@ -10,7 +10,7 @@ redirects: --- -This quick start sets up EDB Postgres Distributed with an Always On Single Location architecture using Amazon EC2. +This quick start sets up EDB Postgres Distributed with an Always-on Single Location architecture using Amazon EC2. ## Introducing TPA and PGD @@ -147,7 +147,7 @@ tpaexec configure democluster \ --hostnames-unsorted ``` -You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture, +You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always-on architectures](../planning/architectures/). As part of the default architecture, this configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup. Specify that you're using AWS (`--platform aws`) and eu-west-1 as the region (`--region eu-west-1`). diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 0bf2f1881b2..2777b771fbe 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -137,7 +137,7 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd #### Write leader -In an Always-On architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. 
The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. +In an Always-on architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. #### Writer From 3c259f3b83975316be7837c075c70ae8ae473577 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 22 May 2024 09:00:07 +0100 Subject: [PATCH 2/2] Mop up changes Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/overview/index.mdx | 4 ++-- product_docs/docs/pgd/5/planning/deployments.mdx | 2 +- product_docs/docs/pgd/5/quickstart/next_steps.mdx | 2 +- product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx | 4 ++-- product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx | 2 +- product_docs/docs/pgd/5/repsets.mdx | 2 +- 6 files changed, 8 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx index 2d0687c56ea..00715c9bdf7 100644 --- a/product_docs/docs/pgd/5/overview/index.mdx +++ b/product_docs/docs/pgd/5/overview/index.mdx @@ -63,13 +63,13 @@ DDL is replicated across nodes by default. DDL execution can be user controlled ## Architectural options and performance -### Always On architectures +### Always-on architectures A number of different architectures can be configured, each of which has different performance and scalability characteristics. 
The group is the basic building block consisting of 2+ nodes (servers). In a group, each node is in a different availability zone, with dedicated router and backup, giving immediate switchover and high availability. Each group has a dedicated replication set defined on it. If the group loses a node, you can easily repair or replace it by copying an existing node from the group. -The Always On architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR). +The Always-on architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR). Tables are created across both groups, so any change goes to all nodes, not just to nodes in the local group. diff --git a/product_docs/docs/pgd/5/planning/deployments.mdx b/product_docs/docs/pgd/5/planning/deployments.mdx index 236f3267526..17efb93254e 100644 --- a/product_docs/docs/pgd/5/planning/deployments.mdx +++ b/product_docs/docs/pgd/5/planning/deployments.mdx @@ -9,7 +9,7 @@ You can deploy and install EDB Postgres Distributed products using the following -- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information. 
-- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and and multi-region Always On clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information. +- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information. - [EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/) is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift.
diff --git a/product_docs/docs/pgd/5/quickstart/next_steps.mdx b/product_docs/docs/pgd/5/quickstart/next_steps.mdx index 69a98ba17bc..eae96251803 100644 --- a/product_docs/docs/pgd/5/quickstart/next_steps.mdx +++ b/product_docs/docs/pgd/5/quickstart/next_steps.mdx @@ -9,7 +9,7 @@ description: > ### Architecture -In this quick start, we created a single region cluster of high availability Postgres databases. This is the, Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/). +In this quick start, we created a single-region cluster of high-availability Postgres databases. This is the Always-on Single Location architecture, one of a range of available PGD architectures. Other architectures include Always-on Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/). ### Postgres versions diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx index 6aab9a58948..2d3d9f01230 100644 --- a/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx +++ b/product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx @@ -8,7 +8,7 @@ redirects: --- -This quick start uses TPA to set up PGD with an Always On Single Location architecture using local Docker containers. +This quick start uses TPA to set up PGD with an Always-on Single Location architecture using local Docker containers.
## Introducing TPA and PGD @@ -154,7 +154,7 @@ tpaexec configure democluster \ ``` You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which -sets up the configuration for [PGD 5's Always On +sets up the configuration for [PGD 5's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx index dd8b3780f5e..acf40655704 100644 --- a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx +++ b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx @@ -132,7 +132,7 @@ tpaexec configure democluster \ --hostnames-unsorted ``` -You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](/backup/#physical-backup) node for backup. +You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](/backup/#physical-backup) node for backup. For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA will determine the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems. 
diff --git a/product_docs/docs/pgd/5/repsets.mdx b/product_docs/docs/pgd/5/repsets.mdx index 6be0f762fd5..7fec0ed65e9 100644 --- a/product_docs/docs/pgd/5/repsets.mdx +++ b/product_docs/docs/pgd/5/repsets.mdx @@ -263,7 +263,7 @@ This configuration looks like this: ![Multi-Region 3 Nodes Configuration](./images/always-on-2x3-aa-updated.png) -This is the standard Always-On multiregion configuration as discussed in the [Choosing your architecture](planning/architectures) section. +This is the standard Always-on multiregion configuration as discussed in the [Choosing your architecture](planning/architectures) section. ### Application Requirements