
Merge pull request #3764 from EnterpriseDB/docs/biganimal/pgd-cross-region

BigAnimal: PGD Cross-Region
drothery-edb authored Apr 20, 2023
2 parents e5281d4 + 558e9f6 commit ebf3e7b
Showing 10 changed files with 108 additions and 19 deletions.
@@ -0,0 +1,37 @@
---
title: Creating an extreme-high-availability cluster
---

When you create an extreme-high-availability cluster, you need to set up the data group. Extreme-high-availability clusters can contain one or two data groups.

1. After specifying **Extreme High Availability** as your cluster type on the **Cluster Info** tab and your cluster name and password on the **Cluster Settings** tab, select **Next: Data Groups**.

1. On the **Nodes Settings** tab, in the **Nodes** section, select **Two Data Nodes** or **Three Data Nodes**.

For more information on node architecture, see [Extreme high availability (Preview)](/biganimal/latest/overview/02_high_availability/#extreme-high-availability-preview).

1. In the **Database Type** section:

1. Select the type of Postgres you want to use in the **Postgres Type** field:

- **Oracle Compatible** is powered by [EDB Postgres Advanced Server](/epas/latest/). View [a quick demonstration of Oracle compatibility on BigAnimal](../../using_cluster/06_demonstration_oracle_compatibility). EDB Postgres Advanced Server is compatible with all three cluster types.

1. In the **Postgres Version** list, select either 14 or 15 as the version of Postgres that you want to use.

1. Select the settings for your cluster according to [Creating a cluster](/biganimal/latest/getting_started/creating_a_cluster/). Find the instructions for the **Node Settings** tab in [Cluster Settings tab](../creating_a_cluster/#cluster-settings-tab) and [Additional Settings tab](../creating_a_cluster/#additional-settings-tab).

1. In the **Parameters** section on the **DB Configuration** tab, you can update the value of the database configuration parameters for the data group as needed.

To update the parameter values, see [Modifying your database configuration parameters](../../using_cluster/03_modifying_your_cluster/05_db_configuration_parameters).

1. Select **Create: Data Group**. The data group preview appears.

1. To finish creating your cluster, select **Create Cluster**. If you want to create a second data group, select **Add a Data Group**.
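The portal steps above can be sketched programmatically. This is a hypothetical illustration only: BigAnimal's actual API schema isn't shown on this page, so every field name below is an assumption; only the constraints (one or two data groups, two or three data nodes, Postgres 14 or 15) come from the text.

```python
# Hypothetical sketch: field names are illustrative assumptions,
# not the real BigAnimal API schema.

def build_eha_cluster_request(name, password, data_groups):
    """Mirror the portal flow: cluster type, settings, then one or two data groups."""
    if not 1 <= len(data_groups) <= 2:
        raise ValueError("extreme-high-availability clusters contain one or two data groups")
    for group in data_groups:
        if group["nodes"] not in (2, 3):
            raise ValueError("each data group has two or three data nodes")
        if group["postgres_version"] not in (14, 15):
            raise ValueError("EHA clusters support Postgres versions 14 and 15")
    return {
        "clusterType": "extreme_high_availability",  # assumed identifier
        "name": name,
        "password": password,
        "dataGroups": data_groups,
    }

request = build_eha_cluster_request(
    "orders-eha",
    "s3cret-passw0rd",
    [{"nodes": 3, "postgres_version": 15, "region": "us-east-1"},
     {"nodes": 3, "postgres_version": 15, "region": "eu-west-1"}],
)
```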

## Creating a second data group

After creating the first data group, you can create a second data group for your extreme-high-availability cluster by selecting **Add a Data Group** before you create the cluster.

By default, the settings for your first data group populate the second data group's settings. You can change most of them, but be aware that some choices apply to the entire cluster: the database type and cloud provider must be the same for both data groups, and the data groups and the witness group must all be in different regions. You can choose the remaining settings for the second data group as needed.

When choosing the number of data nodes for the second data group, see [Extreme high availability (Preview)](/biganimal/latest/overview/02_high_availability/#extreme-high-availability-preview) for information on node architecture.
@@ -36,7 +36,7 @@ Before creating your cluster, make sure you have enough resources. Without enoug

- [High availability](/biganimal/latest/overview/02_high_availability/#high-availability) creates a cluster with one primary and one or two standby replicas in different availability zones. You can create high-availability clusters running EDB Postgres Advanced Server or PostgreSQL. Only high-availability clusters allow you to enable read-only workloads for users. However, if you enable read-only workloads, then you might have to raise the IP address resource limits for the cluster.

- [Extreme high availability (beta)](/biganimal/latest/overview/02_high_availability/#extreme-high-availability-beta) creates a cluster configured with a leader node, three shadow nodes, and one witness node. This cluster uses EDB Postgres Distributed to deliver higher performance and faster recovery. You can create extreme high-availability clusters with either PostgreSQL or Oracle compatibility.
- [Extreme high availability (Preview)](/biganimal/latest/overview/02_high_availability/#extreme-high-availability-preview) creates a cluster, powered by EDB Postgres Distributed, with up to two data groups spread across multiple cloud regions. This cluster uses EDB Postgres Distributed to deliver higher performance and faster recovery. See [Creating an extreme high-availability cluster](creating_an_eha_cluster) for instructions.

See [Supported cluster types](/biganimal/latest/overview/02_high_availability/) for more information about the different cluster types.

@@ -62,8 +62,6 @@ Before creating your cluster, make sure you have enough resources. Without enoug
- **Oracle Compatible** is powered by [EDB Postgres Advanced Server](/epas/latest/). View [a quick demonstration of Oracle compatibility on BigAnimal](../../using_cluster/06_demonstration_oracle_compatibility). EDB Postgres Advanced Server is compatible with all three cluster types.

- **[PostgreSQL](/supported-open-source/postgresql/)** is an open-source, object-relational database management system. PostgreSQL is compatible with single-node and high-availability cluster types.

- **PostgreSQL Compatible** uses advanced logical replication of data and schema. It's available only if you select extreme high availability as your cluster type.

1. In the **Postgres Version** list, select the version of Postgres that you want to use. See [Database version policy](../../overview/05_database_version_policy) for more information.

@@ -7,7 +7,7 @@ redirects:
BigAnimal supports three cluster types:
- Single node
- Standard high availability
- Extreme high availability (beta)
- Extreme high availability (Preview)

You choose the type of cluster you want on the [Create Cluster](https://portal.biganimal.com/create-cluster) page in the [BigAnimal](https://portal.biganimal.com) portal.

@@ -17,8 +17,7 @@ Postgres distribution and version support varies by cluster type.
| --------------------- | ---------------- | ------------------------------ |
| PostgreSQL | 11–15 | Single node, high availability |
| Oracle Compatible | 11–15 | Single node, high availability |
| Oracle Compatible | 12–14 | Extreme high availability |
| PostgreSQL Compatible | 12-14 | Extreme high availability |
| Oracle Compatible | 14–15 | Extreme high availability |

## Single node

@@ -32,7 +31,7 @@ In case of unrecoverable failure of the primary, a restore from a backup is requ

The high availability option is provided to minimize downtime in cases of failures. High-availability clusters—one *primary* and one or two *standby replicas*—are configured automatically, with standby replicas staying up to date through physical streaming replication.

If read-only workloads are enabled, then standby replicas serve the read-only workoads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.
If read-only workloads are enabled, then standby replicas serve the read-only workloads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.
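The per-connection routing described above can be sketched as follows. The hostnames are hypothetical placeholders, and in practice BigAnimal's read-only endpoint performs this balancing for you; the sketch only illustrates the behavior.

```python
# Sketch of per-connection routing: read-only sessions are balanced
# randomly across standby replicas; writes go to the primary.
# Hostnames are hypothetical placeholders, not real BigAnimal endpoints.
import random

READ_WRITE_HOST = "p-abc123.rw.example-biganimal.io"            # hypothetical
STANDBY_HOSTS = ["standby-1.internal", "standby-2.internal"]    # hypothetical

def pick_host(read_only: bool) -> str:
    if read_only:
        # Random, per-connection choice between the standby replicas,
        # mirroring how the read-only endpoint distributes traffic.
        return random.choice(STANDBY_HOSTS)
    return READ_WRITE_HOST
```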

In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a datacenter failure.

@@ -54,15 +53,31 @@ To ensure write availability, BigAnimal disables synchronous replication during

Since BigAnimal replicates to only one node synchronously, some standby replicas in three-node clusters might experience replication lag. Also, if you override the BigAnimal synchronous replication configuration, then the standby replicas are inconsistent.
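Replication lag of this kind can be quantified from standard PostgreSQL `pg_lsn` values, such as the `sent_lsn` and `replay_lsn` columns of the `pg_stat_replication` view. This is generic Postgres arithmetic, not BigAnimal-specific tooling:

```python
# A pg_lsn value like '0/16B3748' is two hex numbers: the high and low
# 32 bits of a 64-bit WAL byte position.

def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_lsn string to an absolute WAL byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_lag_bytes(primary_lsn: str, standby_replay_lsn: str) -> int:
    """Bytes of WAL the standby still has to replay."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_replay_lsn)

# e.g. replication_lag_bytes("0/3000060", "0/3000000") -> 96
```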

## Extreme high availability (Preview)

## Extreme high availability (beta)
For use cases where high availability across regions is a major concern, an extreme-high-availability cluster can provide two data groups with three data nodes each, plus a witness group, for a true active-active solution. You can deploy extreme-high-availability clusters across multiple regions or within a single region. Extreme-high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/) using multi-master logical replication.

Extreme-high-availability clusters are powered by EDB Postgres Distributed, a logical replication platform that delivers more advanced cluster management compared to a physical replication-based system.
Extreme-high-availability clusters are only Oracle compatible.

You can deploy extreme-high-availability clusters with either PostgreSQL or Oracle compatibility.
Extreme-high-availability clusters are configured according to *data groups*. EDB Postgres Distributed (PGD) clusters create a PGD global group, which contains one or two data groups. Your data groups can be made up of a combination of data nodes and witness nodes. One of these data nodes is the leader at any given time, while the rest are shadow nodes.

Extreme-high-availability clusters deploy four data-hosting BDR nodes across two availability zones (A and B in the diagram). One of these nodes is the leader at any given time (A.1 in the diagram). The rest are typically referred to as *shadow* nodes. At any time, the HARP router can promote any shadow node to leadership. The third availability zone (C) contains one node called the *witness*. This node doesn't host data. It exists only for management purposes, to support operations that require consensus in case of an availability-zone failure.
The witness node or witness group doesn't host data. It exists only for management purposes, supporting operations that require consensus, for example, in case of an availability-zone failure.

The EDB Postgres Distributed router (HARP) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. HARP leverages a distributed consensus model to determine availability of the BDR nodes in the cluster. On failure or unavailability of the leader, HARP elects a new leader and redirects application traffic. Together with the core capabilities of BDR, this mechanism of routing application traffic to the leader node enables fast failover and switchover without risk of data loss.
The following are the possible node configurations for one data group:

![BigAnimal Cluster4](images/Extreme-HA-Diagram-2x.png)
- 2 data nodes + 1 local witness node
![2 data + 1 local witness](images/SingleDataGroup-1.png)
- 3 data nodes
![3 data nodes](images/SingleDataGroup-2.png)

If you're looking for a true active-active solution that protects against regional failures, select a two-data-group configuration. The following are the possible configurations for two data groups:

- 3 data nodes + 3 data nodes, 1 witness group in a different region
![3 data nodes + 3 data nodes, 1 witness group in a different region ](images/2DataGroups-1.png)

- 2 data nodes + 1 witness node, 2 data nodes + 1 witness node, and 1 witness group in a different region
![2 data nodes + 1 witness node, 2 data nodes + 1 witness node, and 1 witness group in a different region](images/2DataGroups-2.png)
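The supported layouts listed above can be encoded as a small validity check. The function and names are illustrative sketches, not part of any BigAnimal API; the allowed combinations come directly from the configurations shown.

```python
# Per-data-group layouts supported, as listed above:
# (data nodes, witness nodes in the group)
SUPPORTED_GROUP_LAYOUTS = {
    (2, 1),  # 2 data nodes + 1 local witness node
    (3, 0),  # 3 data nodes (witness group, if any, lives in another region)
}

def is_supported_data_group(data_nodes: int, witness_nodes: int) -> bool:
    """Check a proposed data-group layout against the documented options."""
    return (data_nodes, witness_nodes) in SUPPORTED_GROUP_LAYOUTS
```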

For instructions on creating an extreme-high-availability cluster, see [Creating an extreme-high-availability cluster](../getting_started/creating_a_cluster/creating_an_eha_cluster).

If you want to restore an extreme-high-availability cluster, you can restore only one data group at a time. If you want to restore the second data group to the same cluster, you need to manually enter its details. For instructions, see [Perform an extreme-high-availability cluster restore](../using_cluster/04_backup_and_restore/#perform-an-extreme-high-availability-cluster-restore).
@@ -9,7 +9,7 @@ We support the major Postgres versions from the date they're made available unti
| Postgres distribution | Versions |
| ---------------------------- | --------------------------------------------------- |
| PostgreSQL | 11–15 |
| EDB Postgres Advanced Server | 11–15, 12-15 for extreme-high-availability clusters |
| EDB Postgres Advanced Server | 11–15, 14-15 for extreme-high-availability clusters |
| EDB Postgres Extended Server | 12-15 |

## End-of-life policy
(Four binary image files changed; their diffs can't be displayed.)
