diff --git a/product_docs/docs/pgd/5/durability/legacy-sync.mdx b/product_docs/docs/pgd/5/durability/legacy-sync.mdx
index bc43cd89a1d..269e4fca2db 100644
--- a/product_docs/docs/pgd/5/durability/legacy-sync.mdx
+++ b/product_docs/docs/pgd/5/durability/legacy-sync.mdx
@@ -19,7 +19,7 @@ replication with `synchronous_commit` and `synchronous_standby_names`.
 Consider using [Group Commit](group-commit) or
 [Synchronous Commit](synchronous_commit) instead.
 
-Unlike PGD replication options, PSR sync will persist first, replicating after
+Unlike PGD replication options, PSR sync persists first, replicating after
 the WAL flush of commit record.
 
 ### Usage
@@ -31,7 +31,7 @@ requirements of non-PGD standby nodes.
 
 Once you've added it, you can configure the level of synchronization per
 transaction using `synchronous_commit`, which defaults to `on`. This setting
-means that adding the application name to to `synchronous_standby_names` already
+means that adding the application name to `synchronous_standby_names` already
 enables synchronous replication. Setting `synchronous_commit` to `local` or
 `off` turns off synchronous replication.
 
@@ -69,7 +69,7 @@ Postgres crashes.*
 
 The following table provides an overview of the configuration settings that you
 must set to a non-default value (req) and those that are optional (opt) but
-affecting a specific variant.
+affect a specific variant.
 
 | Setting (GUC)                        | Group Commit | Lag Control |   PSR   | Legacy Sync |
 |--------------------------------------|:------------:|:-----------:|:-------:|:-----------:|
diff --git a/product_docs/docs/pgd/5/durability/limitations.mdx b/product_docs/docs/pgd/5/durability/limitations.mdx
index 1e705fe3f53..b8d74749399 100644
--- a/product_docs/docs/pgd/5/durability/limitations.mdx
+++ b/product_docs/docs/pgd/5/durability/limitations.mdx
@@ -8,7 +8,7 @@ The following limitations apply to the use of commit scopes and the various dura
 
 - [Legacy synchronous replication](legacy-sync) uses a mechanism for transaction
   confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't
-  compatible, so don't use them together. Whenever you use Group Commit, CAMO
+  compatible, so don't use them together. Whenever you use Group Commit, CAMO,
   or Eager, make sure none of the PGD nodes are configured in
   `synchronous_standby_names`.
 
@@ -31,7 +31,7 @@ nodes in a group. If you use this feature, take the following limitations into a
 
 - `TRUNCATE`
 
-- Explicit two-phase commit is not supported by Group Commit as it already uses two-phase commit.
+- Explicit two-phase commit isn't supported by Group Commit as it already uses two-phase commit.
 
 - Combining different commit decision options in the same transaction or
   combining different conflict resolution options in the same transaction isn't
@@ -43,10 +43,10 @@ nodes in a group. If you use this feature, take the following limitations into a
 
 ## Eager
 
-[Eager](../consistency/eager) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. 
+[Eager](../consistency/eager) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit.
 
-- Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It
-  also don't allow `LISTEN` or `UNLISTEN`.
+Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It
+also doesn't allow `LISTEN` or `UNLISTEN`.
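+
+For example, a transaction like the following sketch fails under an Eager
+commit scope because it issues `NOTIFY` (the table and channel names here are
+hypothetical):
+
+```sql
+BEGIN;
+INSERT INTO orders (id, status) VALUES (42, 'new');
+-- Rejected under Eager; deliver notifications outside the transaction instead
+NOTIFY order_events, 'order 42 created';
+COMMIT;
+```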
 
 ## CAMO
 
@@ -55,7 +55,7 @@ applications committing more than once. If you use this feature, take these
 limitations into account when planning:
 
 - CAMO is designed to query the results of a recently failed COMMIT on the
-origin node. In case of disconnection the application must request the
+origin node. In case of disconnection, the application must request the
 transaction status from the CAMO partner. Ensure that you have as little delay
 as possible after the failure before requesting the status. Applications must
 not rely on CAMO decisions being stored for longer than 15 minutes.
@@ -100,7 +100,7 @@ CAMO. You can configure this option in the PGD node group. See
 
 - `TRUNCATE`
 
-- Explicit two-phase commit is not supported by CAMO as it already uses two-phase commit.
+- Explicit two-phase commit isn't supported by CAMO as it already uses two-phase commit.
 
 - You can combine only CAMO transactions with the `DEGRADE TO` clause for
   switching to asynchronous operation in case of lowered availability.
diff --git a/product_docs/docs/pgd/5/durability/overview.mdx b/product_docs/docs/pgd/5/durability/overview.mdx
index 2a4f6bd7cdd..94e6ffde9c8 100644
--- a/product_docs/docs/pgd/5/durability/overview.mdx
+++ b/product_docs/docs/pgd/5/durability/overview.mdx
@@ -1,5 +1,5 @@
 ---
-title: An overview of durability options
+title: Overview of durability options
 navTitle: Overview
 ---
 
@@ -41,21 +41,19 @@ Commit scopes allow four kinds of controlling durability of the transaction:
 and at what stage of replication it can be considered committed. This option
 also allows you to control the visibility ordering of the transaction.
 
-- [CAMO](camo): This kind of commit scope is a variant of Group Commit in
+- [CAMO](camo): This kind of commit scope is a variant of Group Commit, in
   which the client takes on the responsibility for verifying that a transaction
-  has been committed before retrying.
+  was committed before retrying.
 
 - [Lag Control](lag-control): This kind of commit scope controls how far behind
   nodes can be in terms of replication before allowing commit to proceed.
 
-- [PGD Synchronous Commit](synchronous_commit): This kind of commit scope allows for a behaviour where the origin node awaits a majority of nodes to confirm and behaves more like a native Postgres synchronous commit.
+- [PGD Synchronous Commit](synchronous_commit): This kind of commit scope allows for a behavior where the origin node awaits a majority of nodes to confirm and behaves more like a native Postgres synchronous commit.
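+
+For example, here's a minimal sketch of creating and using a Group Commit
+commit scope. The scope and group names are hypothetical, and the rule shown
+is only one of many possible forms; see [Group Commit](group-commit) for the
+full rule syntax:
+
+```sql
+SELECT bdr.add_commit_scope(
+    commit_scope_name := 'example_scope',
+    origin_node_group := 'example_group',
+    rule := 'ANY 2 (example_group) GROUP COMMIT',
+    wait_for_ready := true
+);
+
+-- Use the scope for transactions in the current session
+SET bdr.commit_scope = 'example_scope';
+```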
 
 !!! Note Legacy synchronization availability
 For backward compatibility, PGD still supports configuring synchronous
 replication with `synchronous_commit` and `synchronous_standby_names`. See
 [Legacy synchronous replication](legacy-sync) for more on this option. We
-recommend users instead use [PGD Synchronous_Commit](synchronous_commit).
+recommend that you use [PGD Synchronous Commit](synchronous_commit) instead.
 !!!
-
-
diff --git a/product_docs/docs/pgd/5/monitoring/index.mdx b/product_docs/docs/pgd/5/monitoring/index.mdx
index cf4d509eb72..b9240d7e0ff 100644
--- a/product_docs/docs/pgd/5/monitoring/index.mdx
+++ b/product_docs/docs/pgd/5/monitoring/index.mdx
@@ -12,7 +12,7 @@ Monitoring replication setups is important to ensure that your system:
 It's important to have automated monitoring in place to ensure that the
 administrator is alerted and can take proactive action when issues occur.
 For example, the administrator can be alerted if
-replication slots start falling badly behind,
+replication slots start falling badly behind.
 
 EDB provides Postgres Enterprise Manager (PEM), which supports PGD starting
 with version 8.1. See [Monitoring EDB Postgres Distributed](/pem/latest/monitoring_BDR_nodes/) for more information.
diff --git a/product_docs/docs/pgd/5/monitoring/otel.mdx b/product_docs/docs/pgd/5/monitoring/otel.mdx
index 62ffedceae1..172090dff80 100644
--- a/product_docs/docs/pgd/5/monitoring/otel.mdx
+++ b/product_docs/docs/pgd/5/monitoring/otel.mdx
@@ -15,7 +15,7 @@ These are attached to all metrics and traces:
 
 ## OTEL and OLTP compatibility
 
-For OTEL connections the integration supports OLTP/HTTP version 1.0.0 only,
+For OTEL connections, the integration supports OLTP/HTTP version 1.0.0 only,
 over HTTP or HTTPS. It doesn't support OLTP/gRPC.
 
 ## Metrics collection
@@ -23,7 +23,7 @@ over HTTP or HTTPS. It doesn't support OLTP/gRPC.
 
 Setting the configuration option `bdr.metrics_otel_http_url` to a non-empty URL
 enables the metric collection.
 
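+For example, this sketch enables metric collection against a hypothetical
+collector endpoint (adjust the URL for your environment):
+
+```sql
+ALTER SYSTEM SET bdr.metrics_otel_http_url =
+    'http://otel-collector.example.com:4318/v1/metrics';
+-- Apply the change; depending on the setting's context,
+-- a server restart might be needed instead
+SELECT pg_reload_conf();
+```
+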
-Different kinds of metrics are collected as shown in the tables that follow.
+Different kinds of metrics are collected, as shown in the tables that follow.
 
 ### Generic metrics
 
@@ -52,7 +52,7 @@ Different kinds of metrics are collected, as shown in the tables that follow.
 
 ### Consensus metric
 
-See also [Monitoring Raft Consensus](sql/#monitoring-raft-consensus)
+See also [Monitoring Raft consensus](sql/#monitoring-raft-consensus)
 
 | Metric name | Type | Labels | Description
 | ----------- | ---- | ------ | -----------
@@ -70,7 +70,7 @@ See also [Monitoring Raft Consensus](sql/#monitoring-raft-consensus)
 
 Tracing collection to OpenTelemetry requires configuring
 `bdr.trace_otel_http_url` and enabling tracing using `bdr.trace_enable`.
 
-The tracing is limited to only some subsystems at the moment, primarily to the
+The tracing is currently limited to only some subsystems, primarily to the
 cluster management functionality. The following spans can be seen in traces.
 
 | Span name | Description |
diff --git a/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
index e23ed4c2116..09d1c20fbb6 100644
--- a/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
+++ b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
@@ -3,7 +3,7 @@
 ---
 
-PGD 4.0 node can join a EDB Postgres Distributed cluster running 3.7.x at a specific
+A PGD 4.0 node can join an EDB Postgres Distributed cluster running 3.7.x at a specific
 minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes.
 
 This procedure is useful when you want to upgrade not just the PGD
 major version but also the underlying PostgreSQL major
diff --git a/product_docs/docs/pgd/5/node_management/index.mdx b/product_docs/docs/pgd/5/node_management/index.mdx
index 6ebec55eb1b..373af0d2120 100644
--- a/product_docs/docs/pgd/5/node_management/index.mdx
+++ b/product_docs/docs/pgd/5/node_management/index.mdx
@@ -34,7 +34,7 @@ with nodes and subgroups.
   clusters.
 
 * [Groups and subgroups](groups_and_subgroups) goes into more detail on how
-  groups and subgroups work in PGD
+  groups and subgroups work in PGD.
 
 * [Creating and joining groups](creating_and_joining) looks at how new PGD
   groups can be created and how to join PGD nodes to them.
@@ -47,10 +47,10 @@ with nodes and subgroups.
   failure.
 
 * [Subscriber-only nodes and groups](subscriber_only) looks at how subscriber-only nodes work and
-  how they are configured.
+  how they're configured.
 
 * [Viewing topology](viewing_topology) details commands and SQL queries that can
-  show the structure of a PGD clusters nodes and groups.
+  show the structure of a PGD cluster's nodes and groups.
 
 * [Removing nodes and groups](removing_nodes_and_groups) shows the process to
   follow to safely remove a node from a group or a group from a cluster.
@@ -75,5 +75,3 @@ with nodes and subgroups.
 * [Maintenance commands through proxies](maintainance_with_proxies) shows how to
   send maintenance commands to nodes that you can't directly access, such as
   those behind a proxy.
-
-
diff --git a/product_docs/docs/pgd/5/node_management/node_recovery.mdx b/product_docs/docs/pgd/5/node_management/node_recovery.mdx
index 382135aa612..5065e6cb345 100644
--- a/product_docs/docs/pgd/5/node_management/node_recovery.mdx
+++ b/product_docs/docs/pgd/5/node_management/node_recovery.mdx
@@ -42,7 +42,7 @@ On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes
 also hold back freezing of data to prevent losing conflict-resolution data
 (see [Origin conflict detection](../consistency/conflicts)).
 
-Administrators must monitor for node outages (see [monitoring](../monitoring/))
+Administrators must monitor for node outages (see [Monitoring](../monitoring/))
 and make sure nodes have enough free disk space. If the workload is
 predictable, you might be able to calculate how much space is used over time,
 allowing a prediction of the maximum time a node can be down before critical
@@ -55,4 +55,4 @@ slot must be parted from the cluster, as described in [Replication slots created
 While a node is offline, the other nodes might not yet have received the same
 set of data from the offline node, so this might appear as a slight divergence
 across nodes. The parting process corrects this imbalance across nodes.
-(Later versions might do this earlier.)
\ No newline at end of file
+(Later versions might do this earlier.)
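+
+For example, this sketch uses only standard Postgres views (thresholds and
+alerting are left to your monitoring stack) to show how much WAL each
+replication slot is retaining, which indicates how far behind an offline node
+has fallen:
+
+```sql
+SELECT slot_name, active,
+       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
+           AS retained_wal
+FROM pg_replication_slots
+ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;
+```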