Merge pull request #5072 from EnterpriseDB/release/2023-12-06
Release: 2023-12-06
djw-m authored Dec 6, 2023
2 parents 50aff98 + 6f39d91 commit 6191c38
Showing 5 changed files with 10 additions and 13 deletions.
@@ -2,7 +2,7 @@
title: "Distributed high availability"
---

-Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide two data groups with a witness group in a third region
+Distributed high-availability clusters are powered by [EDB Postgres Distributed](/pgd/latest/). They use multi-master logical replication to deliver more advanced cluster management compared to a physical replication-based system. Distributed high-availability clusters let you deploy a cluster across multiple regions or a single region. For use cases where high availability across regions is a major concern, a cluster deployment with distributed high availability enabled can provide two data groups with a witness group in a third region.

This configuration provides a true active-active solution as each data group is configured to accept writes.

@@ -19,13 +19,13 @@ The witness node/witness group doesn't host data but exists for management purposes

## Single data location

-A configuration with single data location has one data group and either:
+A configuration with a single data location has one data group and either:

-- Two data nodes with one lead and one shadow and a witness node each in separate availability zones
+- Two data nodes with one lead, one shadow, and a witness node, each in separate availability zones

![region(2 data + 1 witness)](../images/image5.png)

-- Three data nodes with one lead and two shadow nodes each in separate availability zones
+- Three data nodes with one lead and two shadow nodes, each in separate availability zones

![region(3 data)](../images/image3.png)

@@ -53,9 +53,9 @@ A configuration with multiple data locations has two data groups that contain either:

By default, the cloud service provider selected for the data groups is preselected for the witness node.

-To guard against cloud service provider failures, you can designate a witness node on a different cloud service provider than the data groups. This configuration can enable a three-region configuration even if a single cloud provider only offers two regions in the jurisdiction you are allowed to deploy your cluster in.
+To guard against cloud service provider failures, you can designate a witness node on a cloud service provider different from the one for data groups. This configuration can enable a three-region configuration even if a single cloud provider offers only two regions in the jurisdiction you're allowed to deploy your cluster in.

-Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. For witness nodes you only pay for the used infrastructure, which is reflected in the pricing estimate.
+Cross-cloud service provider witness nodes are available with AWS, Azure, and Google Cloud using your own cloud account and BigAnimal's cloud account. This option is enabled by default and applies to both multi-region configurations available with PGD. For witness nodes you pay only for the infrastructure used, which is reflected in the pricing estimate.

## For more information

@@ -2,7 +2,7 @@
title: "Primary/standby high availability"
---

-The Primary/Standby High Availability option is provided to minimize downtime in cases of failures. Primary/standby high-availability clusters—one *primary* and one or two *standby replicas*—are configured automatically, with standby replicas staying up to date through physical streaming replication.
+The Primary/Standby High Availability option is provided to minimize downtime in cases of failures. Primary/standby high-availability clusters—one *primary* and one or two *standby replicas*—are configured automatically. Standby replicas stay up to date through physical streaming replication.

If read-only workloads are enabled, then standby replicas serve the read-only workloads. In a two-node cluster, the single standby replica serves read-only workloads. In a three-node cluster, both standby replicas serve read-only workloads. The connections are made to the two standby replicas randomly and on a per-connection basis.

2 changes: 1 addition & 1 deletion product_docs/docs/epas/16/index.mdx
@@ -38,7 +38,7 @@ All of these features are available in Postgres mode and [Oracle compatibility m
- [Oracle-compatible custom data types](reference/sql_reference/02_data_types/)
- [Oracle keywords](reference/sql_reference/01_sql_syntax/)
- [Oracle functions](reference/sql_reference/03_functions_and_operators/)
-- [Orace-style catalog views](reference/oracle_compatibility_reference/epas_compat_cat_views/)
+- [Oracle-style catalog views](reference/oracle_compatibility_reference/epas_compat_cat_views/)
- [Additional compatibility with Oracle MERGE](reference/oracle_compatibility_reference/epas_compat_sql/65a_merge.mdx).

EDB also makes available a [full suite of tools and utilities](tools_utilities_and_components) that helps you monitor and manage your EDB Postgres Advanced Server deployment.
4 changes: 2 additions & 2 deletions product_docs/docs/pgd/5/postgres-configuration.mdx
@@ -15,8 +15,8 @@ For PGD's own settings, see the [PGD settings reference](reference/pgd-settings)
To run correctly, PGD requires these Postgres settings:

- `wal_level` — Must be set to `logical`, since PGD relies on logical decoding.
-- `shared_preload_libraries` — Must contain `bdr`, although it can contain
-  other entries before or after, as needed. However, don't include `pglogical`.
+- `shared_preload_libraries` — Must start with `bdr`, before other
+  entries. Don't include `pglogical` in this list
- `track_commit_timestamp` — Must be set to `on` for conflict resolution to
retrieve the timestamp for each conflicting row.
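Taken together, the required settings in the list above amount to a small postgresql.conf fragment. The sketch below assumes only the three parameters named in this hunk; any other libraries in `shared_preload_libraries` are placeholders for whatever a given deployment already loads:

```ini
# PGD relies on logical decoding, so WAL must carry logical change data
wal_level = logical

# 'bdr' must come first; other entries may follow, but never 'pglogical'
shared_preload_libraries = 'bdr'

# Conflict resolution reads the commit timestamp of each conflicting row
track_commit_timestamp = on
```

All three parameters can be set only at server start, so a restart is required for them to take effect.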

3 changes: 0 additions & 3 deletions product_docs/docs/pgd/5/security/pgd-predefined-roles.mdx
@@ -7,9 +7,6 @@ extension is dropped from a database, the roles continue to exist. You need to
drop them manually if dropping is required. This practice allows PGD to be used
in multiple databases on the same PostgreSQL instance without problem.

-The `GRANT ROLE` DDL statement doesn't participate in PGD replication. Thus,
-execute this on each node of a cluster.

### bdr_superuser

- ALL PRIVILEGES ON ALL TABLES IN SCHEMA BDR

2 comments on commit 6191c38

@github-actions

πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://6570a697f78c1e152d514348--edb-docs.netlify.app
