Merge pull request #2814 from EnterpriseDB/release/2022-06-16
Release: 2022-06-16
product_docs/docs/biganimal/release/overview/02_high_availability.mdx (51 changes: 38 additions & 13 deletions)
---
title: "Supported cluster types"
redirects:
- 02_high_availibility
---
BigAnimal supports three cluster types:

- Single node
- Standard high availability
- Extreme high availability (beta)
You choose which type of cluster you want on the [Create Cluster](https://portal.biganimal.com/create-cluster) page in the [BigAnimal](https://portal.biganimal.com) portal.
Postgres distribution and version support varies by cluster type.

| Postgres distribution | Versions | Cluster type                   |
| --------------------- | -------- | ------------------------------ |
| PostgreSQL            | 11–14    | Single node, high availability |
| Oracle Compatible     | 11–14    | Single node, high availability |
| Oracle Compatible     | 12–14    | Extreme high availability      |
| PostgreSQL Compatible | 12–14    | Extreme high availability      |
## Single node
For nonproduction use cases where high availability is not a primary concern, a cluster deployment with high availability not enabled provides one primary with no standby replicas for failover or read-only workloads.

In case of unrecoverable failure of the primary, a restore from a backup is required.
![*BigAnimal Cluster4*](images/Single-Node-Diagram-2x.png)
## Standard high availability
The high availability option is provided to minimize downtime in cases of failures. High-availability clusters—one *primary* and two *standby replicas*—are configured automatically, with standby replicas staying up to date through physical streaming replication. In cloud regions with availability zones, clusters are provisioned across zones to provide fault tolerance in the face of a datacenter failure.
In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
![*BigAnimal Cluster4*](images/HA-diagram-2x.png)
Incoming client connections are always routed to the current primary. In case of failure of the primary, a standby replica is automatically promoted to primary, and new connections are routed to the new primary. When the old primary recovers, it rejoins the cluster as a standby replica.
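Because traffic is always routed to the current primary, a client that loses its connection during a failover can simply reconnect and land on the newly promoted primary. A minimal sketch of such retry logic follows; the `connect_with_retry` helper and the `fake_connect` stand-in are illustrative only, not part of BigAnimal or any driver API.

```python
import time

def connect_with_retry(connect, attempts=3, delay=0.0):
    """Call connect(), retrying on OSError, for example while a
    standby replica is being promoted to primary during a failover."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Simulated failover: the first attempt fails, the second succeeds.
calls = {"n": 0}

def fake_connect():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("connection refused")  # primary briefly unreachable
    return "connected"

print(connect_with_retry(fake_connect))  # prints "connected"
```

In a real application, `connect` would be your driver's connection call (for example, `psycopg2.connect(...)` with your cluster's connection string), and the delay would typically use exponential backoff.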
By default, replication is synchronous to one standby replica and asynchronous to the other. That is, one standby replica must confirm that a transaction record was written to disk before the client receives acknowledgment of a successful commit. In PostgreSQL terms, `synchronous_commit` is set to `on` and `synchronous_standby_names` is set to `ANY 1 (replica-1, replica-2)`. You can modify this behavior on a per-transaction, per-session, per-user, or per-database basis with appropriate `SET` or `ALTER` commands.
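For example, the per-session, per-database, per-user, and per-transaction forms might look like the following. These are standard PostgreSQL commands; the `app_db` database and `reporting` role names are hypothetical.

```sql
-- Per session: don't wait for the standby replica to confirm the write.
SET synchronous_commit = local;

-- Per database or per role: change the default for future sessions.
ALTER DATABASE app_db SET synchronous_commit = 'local';
ALTER ROLE reporting SET synchronous_commit = 'off';

-- Per transaction: relax durability for this transaction only.
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... work that can tolerate a small window of loss on failover ...
COMMIT;
```

Lowering `synchronous_commit` trades durability for commit latency: with `local` or `off`, a just-acknowledged transaction can be lost if the primary fails before the standby replica receives it.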
## Extreme high availability (beta)
Extreme high availability clusters are powered by EDB Postgres Distributed, a logical replication platform that delivers more advanced cluster management than a system based on physical replication.
Extreme high availability clusters can be deployed with either PostgreSQL or Oracle compatibility.
Extreme high availability clusters deploy four data-hosting BDR nodes across two availability zones (A and B in the diagram below). One of these nodes is the leader at any given time (A.1 in the diagram); the rest are typically referred to as *shadow* nodes. The HARP router can promote any shadow node to leader at any time. The third availability zone (C) contains one node called the *witness*. The witness doesn't host data; it exists only for management purposes, to support operations that require consensus in case of an availability zone failure.
The EDB Postgres Distributed router (HARP) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. HARP leverages a distributed consensus model to determine availability of the BDR nodes in the cluster. On failure or unavailability of the leader, HARP elects a new leader and redirects application traffic. Together with the core capabilities of BDR, this mechanism of routing application traffic to the leader node enables fast failover and switchover without risk of data loss.
![*BigAnimal Cluster4*](images/Extreme-HA-Diagram-2x.png)