
Commit

Merge pull request #5665 from EnterpriseDB/docs/pgd/fix/alwaysonnormalise

DOCS-663 - PGD 5 docs standardized on Always-on architectures.
djw-m authored May 28, 2024
2 parents c55d7ae + 3c259f3 commit 91f597a
Showing 13 changed files with 27 additions and 27 deletions.
@@ -6,7 +6,7 @@ redirects:
- /pgd/latest/admin-biganimal/ #generated for pgd deploy-config-planning reorg
---

-EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account or BigAnimal's cloud account, where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always On clusters.
+EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility. It runs in your cloud account or BigAnimal's cloud account, where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters.

This section covers how to work with EDB Postgres Distributed when deployed on BigAnimal.

@@ -22,7 +22,7 @@ This applies to physical and virtual machines, both self-hosted and in the cloud

!!! Note Get started with TPA and PGD quickly

-If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-On cluster on Docker.
+If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker.

If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account.

product_docs/docs/pgd/5/deploy-config/deploy-tpa/index.mdx (2 changes: 1 addition & 1 deletion)

@@ -11,7 +11,7 @@ both self-hosted and in the cloud (with AWS EC2).

!!! Note Get started with TPA and PGD quickly

-If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-On cluster on Docker.
+If you want to experiment with a local deployment as quickly as possible, you can [deploy an EDB Postgres Distributed example cluster on Docker](/pgd/latest/quickstart/quick_start_docker) to configure, provision, and deploy a PGD 5 Always-on cluster on Docker.

If deploying to the cloud is your aim, you can [deploy an EDB Postgres Distributed example cluster on AWS](/pgd/latest/quickstart/quick_start_aws) to get a PGD 5 cluster on your own Amazon account.

product_docs/docs/pgd/5/overview/index.mdx (4 changes: 2 additions & 2 deletions)

@@ -63,13 +63,13 @@ DDL is replicated across nodes by default. DDL execution can be user controlled

## Architectural options and performance

-### Always On architectures
+### Always-on architectures

A number of different architectures can be configured, each of which has different performance and scalability characteristics.

The group is the basic building block consisting of 2+ nodes (servers). In a group, each node is in a different availability zone, with dedicated router and backup, giving immediate switchover and high availability. Each group has a dedicated replication set defined on it. If the group loses a node, you can easily repair or replace it by copying an existing node from the group.

-The Always On architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR).
+The Always-on architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR).

Tables are created across both groups, so any change goes to all nodes, not just to nodes in the local group.
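To make the group concept concrete, the following is a minimal sketch of how a node and a node group are created and joined using PGD's SQL-level management functions. The node names, hostnames, port, and database name are placeholders, and a TPA-deployed cluster runs these steps for you.

```
# Minimal sketch (illustrative only): create the first node and its group, then
# create a second node and join it to the same group. All names, hosts, the port,
# and the database name are placeholders.
psql -h host-one -d bdrdb -c "SELECT bdr.create_node('node-one', 'host=host-one dbname=bdrdb port=5444');"
psql -h host-one -d bdrdb -c "SELECT bdr.create_node_group('pgdgroup');"

psql -h host-two -d bdrdb -c "SELECT bdr.create_node('node-two', 'host=host-two dbname=bdrdb port=5444');"
psql -h host-two -d bdrdb -c "SELECT bdr.join_node_group('host=host-one dbname=bdrdb port=5444');"
```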

product_docs/docs/pgd/5/planning/architectures.mdx (24 changes: 12 additions & 12 deletions)

@@ -8,7 +8,7 @@ redirects:
- /pgd/latest/architectures/
---

-Always On architectures reflect EDB’s Trusted Postgres architectures. They
+Always-on architectures reflect EDB’s Trusted Postgres architectures. They
encapsulate practices and help you to achieve the highest possible service
availability in multiple configurations. These configurations range from
single-location architectures to complex distributed systems that protect from
@@ -21,16 +21,16 @@ described here. Use-case-specific variations have been successfully deployed in
production. However, these variations must undergo rigorous architecture review
first.

-Always On architectures can be deployed using EDB’s standard deployment tool
+Always-on architectures can be deployed using EDB’s standard deployment tool
Trusted Postgres Architect (TPA) or configured manually.

-## Standard EDB Always On architectures
+## Standard EDB Always-on architectures

EDB has identified a set of standardized architectures to support single- or
multi-location deployments with varying levels of redundancy, depending on your
recovery point objective (RPO) and recovery time objective (RTO) requirements.

-The Always ON architecture uses three database node groups as a basic building block.
+The Always-on architecture uses three database node groups as a basic building block.
You can also use a five-node group for extra redundancy.

EDB Postgres Distributed consists of the following major building blocks:
@@ -40,7 +40,7 @@ EDB Postgres Distributed consists of the following major building blocks:
- PGD Proxy — A connection router that makes sure the application is connected
to the right data nodes.

-All Always On architectures protect an increasing range of failure situations.
+All Always-on architectures protect an increasing range of failure situations.
For example, a single active location with two data nodes protects against local
hardware failure but doesn't provide protection from location (data
center or availability zone) failure. Extending that architecture with a backup
@@ -75,19 +75,19 @@ requires an odd number of nodes to make decisions using a [Raft](https://raft.gi
consensus model. Thus, even the simpler architectures always have three nodes,
even if not all of them are storing data.

-Applications connect to the standard Always On architectures by way of multi-host
+Applications connect to the standard Always-on architectures by way of multi-host
connection strings, where each PGD Proxy server is a distinct entry in the
multi-host connection string. You must always have at least two proxy nodes in
each location to ensure high availability. You can colocate the proxy with the
database instance, in which case we recommend putting the proxy on every data
node.
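As an illustration, a libpq-style multi-host connection string with one entry per PGD Proxy instance might look like the sketch below. The hostnames, port, database, and user are placeholders; use the listen port your proxies are configured with.

```
# Illustrative multi-host connection string with one entry per PGD Proxy instance.
# Hostnames, port, database, and user are placeholders for your own values.
psql "host=proxy-a1,proxy-a2,proxy-a3 port=6432,6432,6432 dbname=bdrdb user=app_user"
```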

Other connection mechanisms have been successfully deployed in production. However,
-they aren't part of the standard Always On architectures.
+they aren't part of the standard Always-on architectures.

-### Always On Single Location
+### Always-on Single Location

-![Always On 1 Location, 3 Nodes Diagram](images/always_on_1x3_updated.png)
+![Always-on 1 Location, 3 Nodes Diagram](images/always_on_1x3_updated.png)

* Additional replication between data nodes 1 and 3 isn't shown but occurs as part of the replication mesh
* Redundant hardware to quickly restore from local failures
@@ -104,9 +104,9 @@ they aren't part of the standard Always On architectures.
* Postgres Enterprise Manager (PEM) for monitoring (not depicted)
* Can be shared by multiple PGD clusters

-### Always On multi-location
+### Always-on multi-location

-![Always On 2 Locations, 3 Nodes Per Location, Active/Active Diagram](images/always_on_2x3_aa_updated.png)
+![Always-on 2 Locations, 3 Nodes Per Location, Active/Active Diagram](images/always_on_2x3_aa_updated.png)

* Application can be Active/Active in each location or can be Active/Passive or Active DR with only one location taking writes.
* Additional replication between data nodes 1 and 3 isn't shown but occurs as part of the replication mesh.
@@ -133,7 +133,7 @@ All architectures provide the following:
* Zero downtime upgrades
* Support for availability zones in public/private cloud

-Use these criteria to help you to select the appropriate Always On architecture.
+Use these criteria to help you to select the appropriate Always-on architecture.

| | Single  Data Location | Two  Data  Locations | Two  Data Locations  + Witness | Three or More Data Locations |
|------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|----------------------------------------------------------|
product_docs/docs/pgd/5/planning/deployments.mdx (2 changes: 1 addition & 1 deletion)

@@ -9,7 +9,7 @@ You can deploy and install EDB Postgres Distributed products using the following

-- [Trusted Postgres Architect](/tpa/latest) (TPA) is an orchestration tool that uses Ansible to build Postgres clusters using a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations apply to quick testbed setups just as they do to production environments. TPA's flexibility allows deployments to virtual machines, AWS cloud instances or Linux host hardware. See [Deploying with TPA](../deploy-config/deploy-tpa/deploying/) for more information.

-- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always On clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information.
+- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility that runs in your cloud account or BigAnimal's cloud account where it's operated by our Postgres experts. EDB BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high-availability support powered by EDB Postgres Distributed (PGD) enables single- and multi-region Always-on clusters. See [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) in the [BigAnimal documentation](/biganimal/latest) for more information.

- [EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/) is a Kubernetes operator designed, developed, and supported by EDB. It covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using PGD replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server, Transparent Data Encryption (TDE) using EDB Postgres Extended or Advanced Server, and additional supported platforms including IBM Power and OpenShift.

@@ -9,7 +9,7 @@ In a multi-master architecture like PGD, conflicts happen. PGD is built to handl

A conflict can occur when one database node has an update from an application to a row and another node has a different update to the same row. This type of conflict is called a *row-level conflict*. Conflicts aren't errors. Resolving them effectively is core to how Postgres Distributed maintains consistency.

-The best way to handle conflicts is not to have them in the first place! Use PGD's Always-On architecture with proxies to ensure that your applications write to the same server in the cluster.
+The best way to handle conflicts is not to have them in the first place! Use PGD's Always-on architecture with proxies to ensure that your applications write to the same server in the cluster.

When conflicts occur, though, it's useful to know how PGD resolves them, how you can control that resolution, and how you can find out that they're happening. Row insertion and row updates are two actions that can cause conflicts.
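If conflict logging to a table is enabled in your cluster, one way to see what's being resolved is to query the bdr.conflict_history catalog, as in the sketch below; the exact columns vary by PGD version.

```
# Sketch: list a few recently logged conflicts. Assumes conflict logging to a table
# is enabled and that the bdr.conflict_history catalog is available in your version.
psql -d bdrdb -c "SELECT * FROM bdr.conflict_history LIMIT 10;"
```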

product_docs/docs/pgd/5/quickstart/next_steps.mdx (2 changes: 1 addition & 1 deletion)

@@ -9,7 +9,7 @@ description: >

### Architecture

-In this quick start, we created a single region cluster of high availability Postgres databases. This is the Always On Single Location architecture, one of a range of available PGD architectures. Other architectures include Always On Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/).
+In this quick start, we created a single region cluster of high availability Postgres databases. This is the Always-on Single Location architecture, one of a range of available PGD architectures. Other architectures include Always-on Multi-Location, with clusters in multiple data centers working together, and variations of both with witness nodes enhancing resilience. Read more in [architectural options](../planning/architectures/).

### Postgres versions

product_docs/docs/pgd/5/quickstart/quick_start_aws.mdx (4 changes: 2 additions & 2 deletions)

@@ -10,7 +10,7 @@ redirects:
---


-This quick start sets up EDB Postgres Distributed with an Always On Single Location architecture using Amazon EC2.
+This quick start sets up EDB Postgres Distributed with an Always-on Single Location architecture using Amazon EC2.

## Introducing TPA and PGD

@@ -147,7 +147,7 @@ tpaexec configure democluster \
--hostnames-unsorted
```

-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture,
+You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always-on architectures](../planning/architectures/). As part of the default architecture,
this configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup) node for backup.

Specify that you're using AWS (`--platform aws`) and eu-west-1 as the region (`--region eu-west-1`).
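Put together, the configure step being described looks something like the following sketch. The architecture, platform, region, and --hostnames-unsorted flags come from this page; the remaining flags are assumptions to adjust for your environment.

```
# Sketch of the complete configure step for AWS. Only --architecture, --platform,
# --region, and --hostnames-unsorted are taken from this page; the other flags are
# illustrative assumptions (Postgres flavor/version and repo handling).
tpaexec configure democluster \
  --architecture PGD-Always-ON \
  --platform aws \
  --region eu-west-1 \
  --edb-postgres-advanced 16 \
  --redwood \
  --no-git \
  --hostnames-unsorted
```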
product_docs/docs/pgd/5/quickstart/quick_start_docker.mdx (4 changes: 2 additions & 2 deletions)

@@ -8,7 +8,7 @@ redirects:
---


-This quick start uses TPA to set up PGD with an Always On Single Location architecture using local Docker containers.
+This quick start uses TPA to set up PGD with an Always-on Single Location architecture using local Docker containers.

## Introducing TPA and PGD

@@ -154,7 +154,7 @@ tpaexec configure democluster \
```

You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which
-sets up the configuration for [PGD 5's Always On
+sets up the configuration for [PGD 5's Always-on
architectures](../planning/architectures/). As part of the default architecture,
it configures your cluster with three data nodes, cohosting three [PGD
Proxy](../routing/proxy/) servers, along with a [Barman](../backup#physical-backup)
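For reference, here is a sketch of the Docker form of that configure step, followed by the provision and deploy steps that build the containers. The architecture and --hostnames-unsorted flags come from this page; the platform and Postgres selection flags are shown as assumptions to match to your environment.

```
# Sketch only: configure, provision, and deploy a local Docker cluster.
# --architecture PGD-Always-ON and --hostnames-unsorted reflect this page;
# the platform and Postgres selection flags are illustrative assumptions.
tpaexec configure democluster \
  --architecture PGD-Always-ON \
  --platform docker \
  --edb-postgres-advanced 16 \
  --redwood \
  --no-git \
  --hostnames-unsorted

tpaexec provision democluster
tpaexec deploy democluster
```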
product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx (2 changes: 1 addition & 1 deletion)

@@ -132,7 +132,7 @@ tpaexec configure democluster \
--hostnames-unsorted
```

-You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always On architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](/backup/#physical-backup) node for backup.
+You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD 5's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](/backup/#physical-backup) node for backup.

For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA will determine the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems.

product_docs/docs/pgd/5/repsets.mdx (2 changes: 1 addition & 1 deletion)

@@ -263,7 +263,7 @@ This configuration looks like this:

![Multi-Region 3 Nodes Configuration](./images/always-on-2x3-aa-updated.png)

-This is the standard Always-On multiregion configuration as discussed in the [Choosing your architecture](planning/architectures) section.
+This is the standard Always-on multiregion configuration as discussed in the [Choosing your architecture](planning/architectures) section.
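As a rough illustration of working with replication sets in this kind of configuration, the commands below list the sets defined in the cluster and add a table to a location-specific set. The set, table, and database names are placeholders, and the catalog and function signatures should be checked against the PGD replication sets reference for your version.

```
# Illustrative only: inspect replication sets and add a table to one of them.
# Names are placeholders; verify catalog and function details for your PGD version.
psql -d bdrdb -c "SELECT set_name FROM bdr.replication_sets;"
psql -d bdrdb -c "SELECT bdr.replication_set_add_table('inventory.stock', 'left_dc_repset');"
```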

### Application Requirements

product_docs/docs/pgd/5/terminology.mdx (2 changes: 1 addition & 1 deletion)

@@ -137,7 +137,7 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd

#### Write leader

-In an Always-On architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*.
+In an Always-on architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader. Once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*.
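To check which node currently holds the write leader role, one option, assuming the PGD CLI is installed and configured for the cluster, is its group overview command; the output columns vary by CLI version.

```
# Sketch: show each group and, for data groups, the current write leader.
# Assumes the PGD CLI (pgd) is installed and can reach the cluster.
pgd show-groups
```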

#### Writer

