Merge pull request #3702 from EnterpriseDB/release/2023-02-21

Release: 2023-02-21

drothery-edb authored Feb 21, 2023
2 parents 0c61ebd + bd79905 commit 96b447c
Showing 192 changed files with 25,960 additions and 66 deletions.
3 changes: 2 additions & 1 deletion build-sources.json
Original file line number Diff line number Diff line change
@@ -28,5 +28,6 @@
"postgis": true,
"repmgr": true,
"slony": true,
"tde": true
"tde": true,
"tpa": true
}
1 change: 1 addition & 0 deletions gatsby-config.js
@@ -81,6 +81,7 @@ const sourceToPluginConfig = {
repmgr: { name: "repmgr", path: "product_docs/docs/repmgr" },
slony: { name: "slony", path: "product_docs/docs/slony" },
tde: { name: "tde", path: "product_docs/docs/tde" },
tpa: { name: "tpa", path: "product_docs/docs/tpa" },
};

const externalSourcePlugins = () => {
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/appusage.mdx
@@ -174,7 +174,7 @@ a cluster, you can't add a node with a minor version if the cluster
uses a newer protocol version. This returns an error.

Both of these features might be affected by specific restrictions.
See [Release notes](/pgd/latest/rel_notes/) for any known incompatibilities.
See [Release notes](/pgd/4/rel_notes/) for any known incompatibilities.

## Replicating between nodes with differences

6 changes: 3 additions & 3 deletions product_docs/docs/pgd/4/bdr/catalogs.mdx
@@ -218,7 +218,7 @@ A view containing active global locks on this node. The [`bdr.global_locks`](#bd
exposes BDR's shared-memory lock state tracking, giving administrators greater
insight into BDR's global locking activity and progress.

See [Monitoring global locks](/pgd/latest/monitoring#monitoring-global-locks)
See [Monitoring global locks](/pgd/4/monitoring#monitoring-global-locks)
for more information about global locking.

#### `bdr.global_locks` columns
@@ -481,7 +481,7 @@ Every node in the cluster regularly broadcasts its progress every
is 60000 ms, i.e., 1 minute). Expect N \* (N-1) rows in this relation.

You might be more interested in the `bdr.node_slots` view for monitoring
purposes. See also [Monitoring](/pgd/latest/monitoring).
purposes. See also [Monitoring](/pgd/4/monitoring).

#### `bdr.node_peer_progress` columns

@@ -543,7 +543,7 @@ given node.
This view contains information about replication slots used in the current
database by BDR.

See [Monitoring outgoing replication](/pgd/latest/monitoring#monitoring-outgoing-replication)
See [Monitoring outgoing replication](/pgd/4/monitoring#monitoring-outgoing-replication)
for guidance on the use and interpretation of this view's fields.

#### `bdr.node_slots` columns
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/configuration.mdx
@@ -39,7 +39,7 @@ which vary according to the size and scale of the cluster.
- `max_replication_slots` — Same as `max_wal_senders`.
- `wal_sender_timeout` and `wal_receiver_timeout` — Determines how
quickly a node considers its CAMO partner as disconnected or
reconnected. See [CAMO failure scenarios](/pgd/latest/bdr/camo/#failure-scenarios) for
reconnected. See [CAMO failure scenarios](/pgd/4/bdr/camo/#failure-scenarios) for
details.

In normal running for a group with N peer nodes, BDR requires
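The hunk above names these GUCs without showing them in context. As an illustrative sketch — the values below are placeholders to be sized for your own cluster, not settings taken from this commit — they are set in `postgresql.conf`:

```ini
# Placeholder values -- size these for your own cluster of N peer nodes.
max_wal_senders = 16           # WAL senders for BDR peer connections
max_replication_slots = 16     # same sizing guidance as max_wal_senders
wal_sender_timeout = 60s       # with wal_receiver_timeout, determines how
wal_receiver_timeout = 60s     # quickly a CAMO partner counts as disconnected
```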
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/conflicts.mdx
@@ -418,7 +418,7 @@ cost of doing this penalizes the majority of users, so at this time
it simply logs `delete_missing`.

Later releases will automatically resolve `INSERT`/`DELETE` anomalies
via rechecks using [LiveCompare](/latest/livecompare) when `delete_missing` conflicts occur.
via rechecks using [LiveCompare](/livecompare/latest) when `delete_missing` conflicts occur.
These can be performed manually by applications by checking
the `bdr.conflict_history_summary` view.

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/ddl.mdx
@@ -214,7 +214,7 @@ ALTER or DROP of an object created in the current transaction doesn't require a
global DML lock.

Monitoring of global DDL locks and global DML locks is shown in
[Monitoring](/pgd/latest/monitoring).
[Monitoring](/pgd/4/monitoring).

## Minimizing the impact of DDL

4 changes: 2 additions & 2 deletions product_docs/docs/pgd/4/bdr/functions.mdx
@@ -744,7 +744,7 @@ bdr.monitor_local_replslots()
#### Notes

This function returns a record with fields `status` and `message`,
as explained in [Monitoring replication slots](/pgd/latest/monitoring/#monitoring-replication-slots).
as explained in [Monitoring replication slots](/pgd/4/monitoring/#monitoring-replication-slots).

### bdr.wal_sender_stats

@@ -794,7 +794,7 @@ bdr.get_decoding_worker_stat()

#### Notes

For further details, see [Monitoring WAL senders using LCR](/pgd/latest/monitoring/#monitoring-wal-senders-using-lcr).
For further details, see [Monitoring WAL senders using LCR](/pgd/4/monitoring/#monitoring-wal-senders-using-lcr).

### bdr.lag_control

6 changes: 3 additions & 3 deletions product_docs/docs/pgd/4/bdr/index.mdx
@@ -159,7 +159,7 @@ overhead of replication as the cluster grows and minimizing the bandwidth to oth

BDR is compatible with Postgres, EDB Postgres Extended Server, and EDB Postgres
Advanced Server distributions and can be deployed as a
standard Postgres extension. See the [Compatibility matrix](/pgd/latest/#compatibility-matrix)
standard Postgres extension. See the [Compatibility matrix](/pgd/4/#compatibility-matrix)
for details of supported version combinations.

Some key BDR features depend on certain core
@@ -170,7 +170,7 @@ example, if having the BDR feature Commit At Most Once (CAMO) is mission
critical to your use case, don't adopt the community
PostgreSQL distribution because it doesn't have the core capability required to handle
CAMO. See the full feature matrix compatibility in
[Choosing a Postgres distribution](/pgd/latest/choosing_server/).
[Choosing a Postgres distribution](/pgd/4/choosing_server/).

BDR offers close to native Postgres compatibility. However, some access
patterns don't necessarily work as well in multi-node setup as they do on a
@@ -259,4 +259,4 @@ BDR places a limit that at most 10 databases in any one PostgreSQL instance
can be BDR nodes across different BDR node groups. However, BDR works best if
you use only one BDR database per PostgreSQL instance.

The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/latest/architectures/#architecture-details).
The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/nodes.mdx
@@ -587,7 +587,7 @@ On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes
also hold back freezing of data to prevent losing conflict-resolution data
(see [Origin conflict detection](conflicts)).

Administrators must monitor for node outages (see [monitoring](/pgd/latest/monitoring/))
Administrators must monitor for node outages (see [monitoring](/pgd/4/monitoring/))
and make sure nodes have enough free disk space. If the workload is
predictable, you might be able to calculate how much space is used over time,
allowing a prediction of the maximum time a node can be down before critical
8 changes: 4 additions & 4 deletions product_docs/docs/pgd/4/choosing_durability.mdx
@@ -6,9 +6,9 @@ EDB Postgres Distributed allows you to choose from several replication configura

* Asynchronous
* Synchronous (using `synchronous_standby_names`)
* [Commit at Most Once](/pgd/latest/bdr/camo)
* [Eager](/pgd/latest/bdr/eager)
* [Group Commit](/pgd/latest/bdr/group-commit)
* [Commit at Most Once](/pgd/4/bdr/camo)
* [Eager](/pgd/4/bdr/eager)
* [Group Commit](/pgd/4/bdr/group-commit)


For more information, see [Durability](/pgd/latest/bdr/durability).
For more information, see [Durability](/pgd/4/bdr/durability).
6 changes: 3 additions & 3 deletions product_docs/docs/pgd/4/cli/installing_cli.mdx
@@ -4,7 +4,7 @@ navTitle: "Installing PGD CLI"
---


TPAexec installs and configures PGD CLI on each BDR node, by default. If you wish to install PGD CLI on any non-BDR instance in the cluster, you simply attach the pgdcli role to that instance in TPAexec's configuration file before deploying. See [TPAexec](/pgd/latest/deployments/tpaexec) for more information.
TPAexec installs and configures PGD CLI on each BDR node, by default. If you wish to install PGD CLI on any non-BDR instance in the cluster, you simply attach the pgdcli role to that instance in TPAexec's configuration file before deploying. See [TPAexec](/pgd/4/deployments/tpaexec) for more information.

## Installing manually

@@ -20,7 +20,7 @@ When the PGD CLI is configured by TPAexec, it connects automatically, but with a

### Specifying database connection strings

You can either use a configuration file to specify the database connection strings for your cluster (see following section) or pass the connection string directly to a command (see the [sample use case](/pgd/latest/cli/#passing-a-database-connection-string)).
You can either use a configuration file to specify the database connection strings for your cluster (see following section) or pass the connection string directly to a command (see the [sample use case](/pgd/4/cli/#passing-a-database-connection-string)).

#### Using a configuration file

@@ -43,5 +43,5 @@ The `pgd-config.yml` is located in the `/etc/edb` directory, by default. The PG
2. `$HOME/.edb`
3. `.` (working directory)

If you rename the file or move it to another location, specify the new name and location using the optional `-f` or `--config-file` flag when entering a command. See the [sample use case](/pgd/latest/cli/#passing-a-database-connection-string).
If you rename the file or move it to another location, specify the new name and location using the optional `-f` or `--config-file` flag when entering a command. See the [sample use case](/pgd/4/cli/#passing-a-database-connection-string).
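For readers following this section, here is a minimal sketch of such a configuration file. The exact keys and the `bdr-a1`/`bdrdb` values are illustrative assumptions — confirm them against the documentation for your PGD CLI version:

```yaml
# Hypothetical /etc/edb/pgd-config.yml -- endpoint DSNs are placeholders.
cluster:
  name: my-cluster
  endpoints:
    - "host=bdr-a1 port=5432 dbname=bdrdb user=enterprisedb"
    - "host=bdr-b1 port=5432 dbname=bdrdb user=enterprisedb"
```

If the file lives somewhere else, pass it with the flag described above, for example `pgd show-nodes -f ./pgd-config.yml` (`show-nodes` is shown as an illustrative command name).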

@@ -115,7 +115,7 @@ By default, `tpaexec configure` uses the names first, second, and so on for any
Specify `--location-names` to provide more meaningful names for each location.

### Enable Commit At Most Once
Specify `--enable-camo` to set the pair of BDR primary instances in each region to be each other's Commit At Most Once (CAMO) partners. See [Commit At Most Once (CAMO)](/pgd/latest/bdr/camo/) for more information.
Specify `--enable-camo` to set the pair of BDR primary instances in each region to be each other's Commit At Most Once (CAMO) partners. See [Commit At Most Once (CAMO)](/pgd/4/bdr/camo/) for more information.


## Provision
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/03_installation.mdx
@@ -1,6 +1,8 @@
---
navTitle: Installation
title: Installation
redirects:
- /pgd/latest/harp/03_installation/
---

A standard installation of HARP includes two system services:
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/04_configuration.mdx
@@ -1,6 +1,8 @@
---
navTitle: Configuration
title: Configuring HARP for cluster management
redirects:
- /pgd/latest/harp/04_configuration/
---

The HARP configuration file follows a standard YAML-style formatting that was simplified for readability. This file is located in the `/etc/harp`
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/05_bootstrapping.mdx
@@ -1,6 +1,8 @@
---
navTitle: Bootstrapping
title: Cluster bootstrapping
redirects:
- /pgd/latest/harp/05_bootstrapping/
---

To use HARP, a minimum amount of metadata must exist in the DCS. The
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/06_harp_manager.mdx
@@ -1,6 +1,8 @@
---
navTitle: HARP Manager
title: HARP Manager
redirects:
- /pgd/latest/harp/06_harp_manager/
---

HARP Manager is a daemon that interacts with the local PostgreSQL/BDR node
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/07_harp_proxy.mdx
@@ -1,6 +1,8 @@
---
navTitle: HARP Proxy
title: HARP Proxy
redirects:
- /pgd/latest/harp/07_harp_proxy/
---

HARP Proxy is a daemon that acts as an abstraction layer between the client
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/08_harpctl.mdx
@@ -1,6 +1,8 @@
---
navTitle: harpctl
title: harpctl command-line tool
redirects:
- /pgd/latest/harp/08_harpctl/
---

`harpctl` is a command-line tool for directly manipulating the consensus layer
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/09_consensus-layer.mdx
@@ -1,6 +1,8 @@
---
navTitle: Consensus layer
title: Consensus layer considerations
redirects:
- /pgd/latest/harp/09_consensus-layer/
---

HARP is designed so that it can work with different implementations of
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/4/harp/10_security.mdx
@@ -1,6 +1,8 @@
---
navTitle: Security
title: Security and roles
redirects:
- /pgd/latest/harp/10_security/
---

Beyond basic package installation and configuration, HARP requires
1 change: 1 addition & 0 deletions product_docs/docs/pgd/4/harp/index.mdx
@@ -5,6 +5,7 @@ directoryDefaults:
description: "High Availability Routing for Postgres (HARP) is a cluster-management tool for EDB Postgres Distributed clusters."
redirects:
- /pgd/4/harp/02_overview
- /pgd/latest/harp/
---

High Availability Routing for Postgres (HARP) is a new approach for managing high availability for
4 changes: 2 additions & 2 deletions product_docs/docs/pgd/4/index.mdx
@@ -32,8 +32,8 @@ EDB Postgres Distributed provides multi-master replication and data distribution
By default EDB Postgres Distributed uses asynchronous replication, applying changes on
the peer nodes only after the local commit. Additional levels of synchronicity can
be configured between different nodes, groups of nodes or all nodes by configuring
[Group Commit](/pgd/latest/bdr/group-commit), [CAMO](/pgd/latest/bdr/camo), or
[Eager](/pgd/latest/bdr/eager) replication.
[Group Commit](/pgd/4/bdr/group-commit), [CAMO](/pgd/4/bdr/camo), or
[Eager](/pgd/4/bdr/eager) replication.

## Compatibility matrix

12 changes: 6 additions & 6 deletions product_docs/docs/pgd/4/known_issues.mdx
@@ -6,7 +6,7 @@ This section discusses currently known issues in EDB Postgres Distributed 4.

## Data Consistency

Read about [Conflicts](/pgd/latest/bdr/conflicts/) to understand
Read about [Conflicts](/pgd/4/bdr/conflicts/) to understand
the implications of the asynchronous operation mode in terms of data
consistency.

@@ -33,7 +33,7 @@ release.
concurrent updates of the same row are repeatedly applied on two
different nodes, then one of the update statements might hang due
to a deadlock with the BDR writer. As mentioned in the
[Conflicts](/pgd/latest/bdr/conflicts/) chapter, `skip` is not the default
[Conflicts](/pgd/4/bdr/conflicts/) chapter, `skip` is not the default
resolver for the `update_origin_change` conflict, and this
combination isn't intended to be used in production. It discards
one of the two conflicting updates based on the order of arrival
@@ -63,8 +63,8 @@ release.
Adding or removing a pair doesn't need a restart of Postgres or even a
reload of the configuration.

- Group Commit cannot be combined with [CAMO](/pgd/latest/bdr/camo/) or [Eager All Node
replication](/pgd/latest/bdr/eager/). Eager Replication currently only works by using the
- Group Commit cannot be combined with [CAMO](/pgd/4/bdr/camo/) or [Eager All Node
replication](/pgd/4/bdr/eager/). Eager Replication currently only works by using the
"global" BDR commit scope.

- Neither Eager replication nor Group Commit support
@@ -82,9 +82,9 @@ release.
- Parallel apply is not currently supported in combination with Group
Commit, please make sure to disable it when using Group Commit by
either setting `num_writers` to 1 for the node group (using
[`bdr.alter_node_group_config`](/pgd/latest/bdr/nodes#bdralter_node_group_config)) or
[`bdr.alter_node_group_config`](/pgd/4/bdr/nodes#bdralter_node_group_config)) or
via the GUC `bdr.writers_per_subscription` (see
[Configuration of Generic Replication](/pgd/latest/bdr/configuration#generic-replication)).
[Configuration of Generic Replication](/pgd/4/bdr/configuration#generic-replication)).

- There currently is no protection against altering or removing a commit
scope. Running transactions in a commit scope that is concurrently
22 changes: 11 additions & 11 deletions product_docs/docs/pgd/4/monitoring.mdx
@@ -83,7 +83,7 @@ node_seq_id | 3
node_local_dbname | postgres
```

Also, the table [`bdr.node_catchup_info`](/pgd/latest/bdr/catalogs/#bdrnode_catchup_info) will give information
Also, the table [`bdr.node_catchup_info`](/pgd/4/bdr/catalogs/#bdrnode_catchup_info) will give information
on the catch-up state, which can be relevant to joining nodes or parting nodes.

When a node is parted, it could be that some nodes in the cluster did not receive
@@ -103,8 +103,8 @@ The `catchup_state` can be one of the following:

There are two main views used for monitoring of replication activity:

- [`bdr.node_slots`](/pgd/latest/bdr/catalogs/#bdrnode_slots) for monitoring outgoing replication
- [`bdr.subscription_summary`](/pgd/latest/bdr/catalogs/#bdrsubscription_summary) for monitoring incoming replication
- [`bdr.node_slots`](/pgd/4/bdr/catalogs/#bdrnode_slots) for monitoring outgoing replication
- [`bdr.subscription_summary`](/pgd/4/bdr/catalogs/#bdrsubscription_summary) for monitoring incoming replication

Most of the information provided by `bdr.node_slots` can also be obtained by querying
the standard PostgreSQL replication monitoring views
@@ -114,13 +114,13 @@ and

Each node has one BDR group slot which should never have a connection to it
and will very rarely be marked as active. This is normal, and does not imply
something is down or disconnected. See [`Replication Slots created by BDR`](/pgd/latest/bdr/nodes/#replication-slots-created-by-bdr).
something is down or disconnected. See [`Replication Slots created by BDR`](/pgd/4/bdr/nodes/#replication-slots-created-by-bdr).

### Monitoring Outgoing Replication

There is an additional view used for monitoring of outgoing replication activity:

- [`bdr.node_replication_rates`](/pgd/latest/bdr/catalogs/#bdrnode_replication_rates) for monitoring outgoing replication
- [`bdr.node_replication_rates`](/pgd/4/bdr/catalogs/#bdrnode_replication_rates) for monitoring outgoing replication

The `bdr.node_replication_rates` view gives an overall picture of the outgoing
replication activity along with the catchup estimates for peer nodes,
@@ -274,9 +274,9 @@ subscription_status | replicating

### Monitoring WAL senders using LCR

If the [Decoding Worker](/pgd/latest/bdr/nodes#decoding-worker) is enabled, information about the
If the [Decoding Worker](/pgd/4/bdr/nodes#decoding-worker) is enabled, information about the
current LCR (`Logical Change Record`) file for each WAL sender can be monitored
via the function [bdr.wal_sender_stats](/pgd/latest/bdr/functions#bdrwal_sender_stats),
via the function [bdr.wal_sender_stats](/pgd/4/bdr/functions#bdrwal_sender_stats),
e.g.:

```
@@ -291,10 +291,10 @@

If `is_using_lcr` is `FALSE`, `decoder_slot_name`/`lcr_file_name` will be `NULL`.
This will be the case if the Decoding Worker is not enabled, or the WAL sender is
serving a [logical standby](/pgd/latest/bdr/nodes#logical-standby-nodes).
serving a [logical standby](/pgd/4/bdr/nodes#logical-standby-nodes).

Additionally, information about the Decoding Worker can be monitored via the function
[bdr.get_decoding_worker_stat](/pgd/latest/bdr/functions#bdrget_decoding_worker_stat), e.g.:
[bdr.get_decoding_worker_stat](/pgd/4/bdr/functions#bdrget_decoding_worker_stat), e.g.:

```
postgres=# SELECT * FROM bdr.get_decoding_worker_stat();
@@ -364,7 +364,7 @@ Either or both entry types may be created for the same transaction, depending on
the type of DDL operation and the value of the `bdr.ddl_locking` setting.

Global locks held on the local node are visible in [the `bdr.global_locks`
view](/pgd/latest/bdr/catalogs#bdrglobal_locks). This view shows the type of the lock; for
view](/pgd/4/bdr/catalogs#bdrglobal_locks). This view shows the type of the lock; for
relation locks it shows which relation is being locked, the PID holding the
lock (if local), and whether the lock has been globally granted or not. In case
of global advisory locks, `lock_type` column shows `GLOBAL_LOCK_ADVISORY` and
@@ -390,7 +390,7 @@ timing information.

## Monitoring Conflicts

Replication [conflicts](/pgd/latest/bdr/conflicts) can arise when multiple nodes make
Replication [conflicts](/pgd/4/bdr/conflicts) can arise when multiple nodes make
changes that affect the same rows in ways that can interact with each other.
The BDR system should be monitored to ensure that conflicts are identified
and, where possible, application changes are made to eliminate them or make
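As a concrete starting point for the monitoring guidance in this file, the views it names can be queried directly from psql on any BDR node. This is a sketch only — the output shape depends on your cluster and BDR version:

```sql
-- Outgoing replication state, one row per peer slot
SELECT * FROM bdr.node_slots;

-- Summary of recorded replication conflicts
SELECT * FROM bdr.conflict_history_summary;
```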

2 comments on commit 96b447c

@github-actions (Contributor) commented:

🎉 Published on https://edb-docs.netlify.app as production
🚀 Deployed on https://63f51f3da19ef608c8a1c6b3--edb-docs.netlify.app