diff --git a/product_docs/docs/pgd/5/durability/durabilityterminology.mdx b/product_docs/docs/pgd/5/durability/durabilityterminology.mdx
index 085536fbcbd..490bba31074 100644
--- a/product_docs/docs/pgd/5/durability/durabilityterminology.mdx
+++ b/product_docs/docs/pgd/5/durability/durabilityterminology.mdx
@@ -5,7 +5,7 @@ title: Durability terminology
 ## Durability terminology
 
 This page covers terms and definitions directly related to PGD's durability options.
-For other terms, see the main [Terminology](../terminology) section.
+For other terms, see [Terminology](../terminology).
 
 ### Nodes
 
@@ -18,7 +18,7 @@ concurrent transactions.
 first, initiating replication to other PGD nodes and responding back to the
 client with a confirmation or an error.
 
-* The *origin node group* is a PGD group which includes the *origin*.
+* The *origin node group* is a PGD group that includes the origin.
 
 * A *partner* node is a PGD node expected to confirm transactions according to
   Group Commit requirements.
@@ -26,4 +26,3 @@ concurrent transactions.
 * A *commit group* is the group of all PGD nodes involved in the commit, that
   is, the origin and all of its partner nodes, which can be just a few or all
   peer nodes.
-
diff --git a/product_docs/docs/pgd/5/durability/group-commit.mdx b/product_docs/docs/pgd/5/durability/group-commit.mdx
index 88f24e27958..657ceeb0929 100644
--- a/product_docs/docs/pgd/5/durability/group-commit.mdx
+++ b/product_docs/docs/pgd/5/durability/group-commit.mdx
@@ -12,7 +12,7 @@ Commit scope kind: `GROUP COMMIT`
 The goal of Group Commit is to protect against data loss in case of single node
 failures or temporary outages. You achieve this by requiring more than one PGD
 node to successfully confirm a transaction at COMMIT time. Confirmation can be sent
-at a number of points in the transaction processing, but defaults to "visible" when
+at a number of points in the transaction processing but defaults to "visible" when
 the transaction has been flushed to disk and is visible to all other
 transactions.
 
@@ -27,7 +27,7 @@ SELECT bdr.add_commit_scope(
 );
 ```
 
-This example creates a commit scope where all the nodes in the left_dc group and any one of the nodes in the right_dc group must receive and successfuly confirm a committed transaction.
+This example creates a commit scope where all the nodes in the `left_dc` group and any one of the nodes in the `right_dc` group must receive and successfully confirm a committed transaction.
 
 ## Requirements
 
@@ -46,7 +46,7 @@ originating per node.
 
 ## Limitations
 
-See the Group Commit section of the [Limitations](limitations#group-commit) section.
+See the Group Commit section of [Limitations](limitations#group-commit).
 
 ## Configuration
 
@@ -56,7 +56,7 @@ determines the PGD nodes involved in the commit of a transaction.
 
 ## Confirmation
 
-   Confirmation Level       | Group Commit handling
+   Confirmation level       | Group Commit handling
 -------------------------|-------------------------------
   `received`               | A remote PGD node confirms the transaction immediately after receiving it, prior to starting the local application.
   `replicated`             | Confirms after applying changes of the transaction but before flushing them to disk.
@@ -75,18 +75,18 @@
 
 You can configure Group Commit to decide commits in three different ways: `group`,
 `partner`, and `raft`. The `group` decision is the default.
 It specifies that the commit is confirmed
-by the origin node upon it recieving as many confirmations as required by the
+by the origin node upon receiving as many confirmations as required by the
 commit scope group. The difference is that the commit decision is made based on
 PREPARE replication while the durability checks COMMIT (PREPARED) replication.
 
-The `partner` decision is what [Commit At Most Once](camo) uses. This approach
+The `partner` decision is what [Commit At Most Once](camo) (CAMO) uses. This approach
 works only when there are two data nodes in the node group. These two nodes are
 partners of each other, and the replica rather than origin decides whether
 to commit something. This approach requires application changes to use
 the CAMO transaction protocol to work correctly, as the application is in some way part of
 the consensus. For more on this approach, see [CAMO](camo).
 
-The `raft` decision uses PGDs built-in raft consensus for commit decisions. Use of the `raft` decision can reduce performance. It's currently required only when using `GROUP COMMIT`
+The `raft` decision uses PGD's built-in Raft consensus for commit decisions. Use of the `raft` decision can reduce performance. It's currently required only when using `GROUP COMMIT`
 with an ALL commit scope group.
 
 Using an ALL commit scope group requires that the
 commit decision must be set to
@@ -97,7 +97,7 @@
 Conflict resolution can be `async` or `eager`.
 
 Async means that PGD does optimistic conflict resolution during replication
-using the row-level resolution as configured for given node. This happens
+using the row-level resolution as configured for a given node. This happens
 regardless of whether the origin transaction committed or is still in progress.
 See [Conflicts](../consistency/conflicts) for details about how the asynchronous
 conflict resolution works.
@@ -111,7 +111,7 @@
 Using an ALL commit scope group requires that the
 [commit decision](#commit-decisions) must be set to `raft` to avoid reconciliation
 issues.
 
-For the details about how Eager conflict resolution works,
+For details about how Eager conflict resolution works,
 see [Eager conflict resolution](../consistency/eager).
 
 ### Aborts
 
@@ -132,7 +132,7 @@
 into a two-phase commit.
 
 In the first phase (prepare), the transaction is prepared locally and made ready to commit.
-The data is made durable but is uncomitted at this stage, so other transactions
+The data is made durable but is uncommitted at this stage, so other transactions
 can't see the changes made by this transaction. This prepared transaction gets
-copied to all remaining nodes through normal logical replication. 
+copied to all remaining nodes through normal logical replication.
 
 The origin node seeks confirmations from other nodes, as per rules in the Group
 Commit grammar. If it gets confirmations from the minimum required nodes in the
@@ -146,7 +146,7 @@
 replicas may crash. This leaves the prepared transactions in the system. The
 `pg_prepared_xacts` view in Postgres can show prepared transactions on a
 system. The prepared transactions might be holding locks and other resources.
 To release those locks and resources, either abort or commit the transaction.
-to be either aborted or committed. That decision must be made with a consensus
+That decision must be made with a consensus
 of nodes.
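+
+As a sketch of what that inspection can look like (this is plain Postgres,
+nothing PGD-specific), the following query lists any outstanding prepared
+transactions, along with when and by whom they were prepared:
+
+```sql
+SELECT gid, prepared, owner, database
+FROM pg_prepared_xacts;
+```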
-When `commit_decision` is `raft`, then, Raft acts as the reconciliator, and these
+When `commit_decision` is `raft`, then Raft acts as the reconciliator, and these
diff --git a/product_docs/docs/pgd/5/durability/index.mdx b/product_docs/docs/pgd/5/durability/index.mdx
index c5b5cd5fbc0..9769361be1e 100644
--- a/product_docs/docs/pgd/5/durability/index.mdx
+++ b/product_docs/docs/pgd/5/durability/index.mdx
@@ -24,8 +24,8 @@ redirects:
 
 EDB Postgres Distributed (PGD) offers a range of synchronous modes to complement
 its default asynchronous replication. These synchronous modes are configured through
-commit scopes: rules that define how operations are handled and when the system
-should consider a transaction committed.
+commit scopes. Commit scopes are rules that define how operations are handled and when the system
+considers a transaction committed.
 
 ## Introducing
 
@@ -56,23 +56,20 @@ retrying. This ensures that their commits only happen at most once.
 dynamically throttle nodes according to the slowest node and regulates how far
 out of sync nodes may go when a database node goes out of service.
 
-* [Synchronous Commit](synchronous_commit) examines a commit scope mechanism which works
+* [Synchronous Commit](synchronous_commit) examines a commit scope mechanism that works
 in a similar fashion to legacy synchronous replication, but from within the
 commit scope framework.
 
 ## Working with commit scopes
 
-* [Administering](administering) addresses how a PGD cluster with Group Commit
-in use should be managed.
+* [Administering](administering) addresses how to manage a PGD cluster with Group Commit
+in use.
 
 * [Limitations](limitations) lists the various combinations of durability options
-which are not currently supported or not possible. Do refer to this before deciding
+that aren't currently supported or aren't possible. Refer to this before deciding
 on a durability strategy.
 
-* [Legacy Synchronous Replication](legacy-sync) shows how traditional Postgres synchronous operations
+* [Legacy synchronous replication](legacy-sync) shows how traditional Postgres synchronous operations
 can still be accessed under PGD.
 
 * [Internal timing of operations](timing) compares legacy replication with PGD's
-async and synchronous operations, especially the difference in the order by which
+async and synchronous operations, especially the difference in the order in which
 transactions are flushed to disk or made visible.
-
-
-
diff --git a/product_docs/docs/pgd/5/durability/lag-control.mdx b/product_docs/docs/pgd/5/durability/lag-control.mdx
index bf10962550f..0d3886453d8 100644
--- a/product_docs/docs/pgd/5/durability/lag-control.mdx
+++ b/product_docs/docs/pgd/5/durability/lag-control.mdx
@@ -19,33 +19,33 @@ limits.
 
 The data throughput of database applications on a PGD origin node can exceed the
 rate at which committed data can replicate to downstream peer nodes.
-If this imbalance persists, it can put satisfying organizational objectives
-such as RPO, RCO, and GEO at risk.
+If this imbalance persists, it can put satisfying organizational objectives,
+such as RPO, RCO, and GEO, at risk.
 
-- **RPO** (Recovery point objective) specifies the maximum-tolerated amount of data
+- **Recovery point objective (RPO)** specifies the maximum-tolerated amount of data
 that can be lost due to unplanned events, usually expressed as an amount of
 time. In PGD, RPO determines the acceptable amount of committed data that
 hasn't been applied to one or more peer nodes.
 
-- **RCO** (Resource constraint objective) acknowledges that there is finite storage
+- **Resource constraint objective (RCO)** acknowledges that finite storage
 is available.
 In PGD, the demands on these storage resources increase as lag increases.
 
-- **GEO** (Group elasticity objective) ensures that any node isn't originating new
+- **Group elasticity objective (GEO)** ensures that no node originates new
 data at a rate that can't be saved to its peer nodes.
 
 To allow organizations to achieve their objectives, PGD offers Lag Control. This
 feature provides a means to precisely regulate the potential imbalance without
-intruding on applications, by transparently introducing a delay to READ WRITE
-transactions that modify data. This delay, the PGD Commit Delay, starts at 0ms.
+intruding on applications. It does so by transparently introducing a delay to READ WRITE
+transactions that modify data. This delay, the PGD commit delay, starts at 0ms.
 
 Using the LAG CONTROL commit scope kind, you can set a maximum time that commits
-commits can be delayed between nodes in a group, maximum lag time or maximum lag
+can be delayed between nodes in a group, a maximum lag time, or a maximum lag
 size (based on the size of the WAL).
 
 If the nodes can process transactions within the specified maximums on enough
-nodes, the PGD commit delay will stay at 0ms or be reduced towards 0ms. If the
-maximums are exceeded on enough nodes though the PGD commit delay on the
-originating node is increased. It will continue increasing until the lag control
+nodes, the PGD commit delay stays at 0ms or is reduced toward 0ms. If the
+maximums are exceeded on enough nodes, though, the PGD commit delay on the
+originating node is increased. It continues increasing until the Lag Control
 constraints are met on enough nodes again.
 
 The PGD commit delay happens after a transaction has completed and released all
@@ -85,12 +85,12 @@ To get started using Lag Control:
 
 ## Configuration
 
-You specify lag control in a commit scope, which allows consistent and
+You specify Lag Control in a commit scope, which allows consistent and
 coordinated parameter settings across the nodes spanned by the commit scope
-rule. You can include a lag control specification in the default commit scope of
+rule. You can include a Lag Control specification in the default commit scope of
 a top group or as part of an origin group commit scope.
 
-As in example, take a configuration with two datacenters, `left_dc` and `right_dc`, represented as sub-groups:
+As an example, take a configuration with two datacenters, `left_dc` and `right_dc`, represented as subgroups:
 
 ```sql
 SELECT bdr.create_node_group(
    node_group_name := 'left_dc',
    parent_group_name := 'top_group',
    join_node_group := false
);
SELECT bdr.create_node_group(
    node_group_name := 'right_dc',
    parent_group_name := 'top_group',
    join_node_group := false
@@ -105,7 +105,7 @@ SELECT bdr.create_node_group(
 );
 ```
 
-The code below adds Lag Control rules for those two data centers, using individual rules for each subgroup:
+The following code adds Lag Control rules for those two datacenters, using individual rules for each subgroup:
 
 ```sql
 SELECT bdr.add_commit_scope(
@@ -122,14 +122,14 @@ SELECT bdr.add_commit_scope(
 );
 ```
 
-You can add a lag control commit scope rule to existing commit scope rules that
-also include group commit and CAMO rule specifications.
+You can add a Lag Control commit scope rule to existing commit scope rules that
+also include Group Commit and CAMO rule specifications.
 
-The `max_commit_delay` is interval, typically specified in milliseconds (1ms).
+The `max_commit_delay` is an interval, typically specified in milliseconds (1ms).
 Using fractional values for sub-millisecond precision is supported.
 
-The `max_lag_size` is an integer which specifies the maximum allowed lag in
-terms of WAL bytes.
+The `max_lag_size` is an integer that specifies the maximum allowed lag in
+terms of WAL bytes.
 
 The `max_lag_time` is an interval, typically specified in seconds, that
 specifies the maximum allowed lag in terms of time.
 
@@ -144,12 +144,12 @@ continued increase.
 
 ## Confirmation
 
-   Confirmation Level       | Lag Control Handling
+   Confirmation level       | Lag Control handling
 -------------------------|-------------------------------
-  `received`               | Not applicable, only uses the default `VISIBLE`.
-  `replicated`             | Not applicable, only uses the default `VISIBLE`.
-  `durable`                | Not applicable, only uses the default `VISIBLE`.
-  `visible` (default)      | Not applicable, only uses the default `VISIBLE`.
+  `received`               | Not applicable; only uses the default, `VISIBLE`.
+  `replicated`             | Not applicable; only uses the default, `VISIBLE`.
+  `durable`                | Not applicable; only uses the default, `VISIBLE`.
+  `visible` (default)      | Not applicable; only uses the default, `VISIBLE`.
 
 ## Transaction application
 
@@ -199,7 +199,7 @@ setting.
 
 In these cases, it can be useful to use the `SET [SESSION|LOCAL]` command to
 custom configure Lag Control settings for those applications or modify those
 applications. For example, bulk load operations are sometimes split
-into multiple, smaller transactions to limit transaction snapshot duration
+into multiple smaller transactions to limit transaction snapshot duration
 and WAL retention size or establish a restart point if the bulk load fails.
 In deference to Lag Control, those transaction commits can also schedule very
 long PGD commit delays to allow digestion of the lag contributed by the
 prior partial bulk load.
 
 ## Meeting organizational objectives
 
-In the example objectives list earlier:
+In the example objectives listed earlier:
 
 - RPO can be met by setting an appropriate maximum lag time.
 - RCO can be met by setting an appropriate maximum lag size.
@@ -215,11 +215,11 @@ and the PGD runtime lag measures,
 
 As mentioned, when the maximum PGD runtime commit delay is
-pegged at the PGD configured commit-delay limit, and the lag
+pegged at the PGD-configured commit-delay limit, and the lag
 measures consistently exceed their PGD-configured maximum
 levels, this scenario can be a marker for PGD group expansion.
 
-## Lag Control and Extensions
+## Lag Control and extensions
 
 The PGD commit delay is a post-commit delay. It occurs after the transaction has
 committed and after all Postgres resources locked or acquired by the transaction
@@ -229,4 +229,3 @@ The same guarantee can't be made for external resources managed by Postgres
 extensions. Regardless of extension dependencies, the same guarantee can be made
 if the PGD extension is listed before extension-based resource managers in
 postgresql.conf.
-
diff --git a/product_docs/docs/pgd/5/durability/legacy-sync.mdx b/product_docs/docs/pgd/5/durability/legacy-sync.mdx
index bc43cd89a1d..269e4fca2db 100644
--- a/product_docs/docs/pgd/5/durability/legacy-sync.mdx
+++ b/product_docs/docs/pgd/5/durability/legacy-sync.mdx
@@ -19,7 +19,7 @@ replication with `synchronous_commit` and `synchronous_standby_names`. Consider
 using [Group Commit](group-commit) or [Synchronous Commit](synchronous_commit) instead.
 
-Unlike PGD replication options, PSR sync will persist first, replicating after
-the WAL flush of commit record.
+Unlike PGD replication options, PSR sync persists first, replicating after
+the WAL flush of the commit record.
 
 ### Usage
 
@@ -31,7 +31,7 @@ requirements of non-PGD standby nodes.
 Once you've added it, you can configure the level of synchronization per
 transaction using `synchronous_commit`, which defaults to `on`. This setting
-means that adding the application name to to `synchronous_standby_names` already
+means that adding the application name to `synchronous_standby_names` already
 enables synchronous replication. Setting `synchronous_commit` to `local` or
 `off` turns off synchronous replication.
 
@@ -69,7 +69,7 @@ Postgres crashes.*
 
 The following table provides an overview of the configuration settings that you
 must set to a non-default value (req) and those that are optional (opt) but
-affecting a specific variant.
+affect a specific variant.
 
 | Setting (GUC)                         | Group Commit | Lag Control |   PSR   | Legacy Sync |
 |--------------------------------------|:------------:|:-----------:|:-------:|:-----------:|
diff --git a/product_docs/docs/pgd/5/durability/limitations.mdx b/product_docs/docs/pgd/5/durability/limitations.mdx
index 1e705fe3f53..b8d74749399 100644
--- a/product_docs/docs/pgd/5/durability/limitations.mdx
+++ b/product_docs/docs/pgd/5/durability/limitations.mdx
@@ -8,7 +8,7 @@ The following limitations apply to the use of commit scopes and the various dura
 
 - [Legacy synchronous replication](legacy-sync) uses a mechanism for transaction
   confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't
-  compatible, so don't use them together. Whenever you use Group Commit, CAMO
+  compatible, so don't use them together. Whenever you use Group Commit, CAMO,
   or Eager, make sure none of the PGD nodes are configured in
   `synchronous_standby_names`.
 
@@ -31,7 +31,7 @@ nodes in a group. If you use this feature, take the following limitations into a
 
   - `TRUNCATE`
 
-- Explicit two-phase commit is not supported by Group Commit as it already uses two-phase commit.
+- Explicit two-phase commit isn't supported by Group Commit as it already uses two-phase commit.
 
 - Combining different commit decision options in the same transaction or
   combining different conflict resolution options in the same transaction isn't
@@ -43,10 +43,10 @@
 
 ## Eager
 
-[Eager](../consistency/eager) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit. 
+[Eager](../consistency/eager) is available through Group Commit. It avoids conflicts by eagerly aborting transactions that might clash. It's subject to the same limitations as Group Commit.
 
-- Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It
-  also don't allow `LISTEN` or `UNLISTEN`.
+Eager doesn't allow the `NOTIFY` SQL command or the `pg_notify()` function. It
+also doesn't allow `LISTEN` or `UNLISTEN`.
 
 ## CAMO
 
@@ -55,7 +55,7 @@ applications committing more than once. If you use this feature, take these
 limitations into account when planning:
 
 - CAMO is designed to query the results of a recently failed COMMIT on the
-origin node. In case of disconnection the application must request the
+origin node. In case of disconnection, the application must request the
 transaction status from the CAMO partner. Ensure that you have as little delay
 as possible after the failure before requesting the status. Applications must
 not rely on CAMO decisions being stored for longer than 15 minutes.
@@ -100,7 +100,7 @@ CAMO. You can configure this option in the PGD node group.
 See
 
   - `TRUNCATE`
 
-- Explicit two-phase commit is not supported by CAMO as it already uses two-phase commit.
+- Explicit two-phase commit isn't supported by CAMO as it already uses two-phase commit.
 
 - You can combine only CAMO transactions with the `DEGRADE TO` clause for
   switching to asynchronous operation in case of lowered availability.
diff --git a/product_docs/docs/pgd/5/durability/overview.mdx b/product_docs/docs/pgd/5/durability/overview.mdx
index 2a4f6bd7cdd..94e6ffde9c8 100644
--- a/product_docs/docs/pgd/5/durability/overview.mdx
+++ b/product_docs/docs/pgd/5/durability/overview.mdx
@@ -1,5 +1,5 @@
 ---
-title: An overview of durability options
+title: Overview of durability options
 navTitle: Overview
 ---
 
@@ -41,21 +41,19 @@ Commit scopes allow four kinds of controlling durability of the transaction:
 and at what stage of replication it can be considered committed. This option
 also allows you to control the visibility ordering of the transaction.
 
-- [CAMO](camo): This kind of commit scope is a variant of Group Commit in
+- [CAMO](camo): This kind of commit scope is a variant of Group Commit, in
   which the client takes on the responsibility for verifying that a transaction
-  has been committed before retrying.
+  was committed before retrying.
 
 - [Lag Control](lag-control): This kind of commit scope controls how far behind
   nodes can be in terms of replication before allowing commit to proceed.
 
-- [PGD Synchronous Commit](synchronous_commit): This kind of commit scope allows for a behaviour where the origin node awaits a majority of nodes to confirm and behaves more like a native Postgres synchronous commit.
+- [PGD Synchronous Commit](synchronous_commit): This kind of commit scope allows for a behavior where the origin node waits for a majority of nodes to confirm and behaves more like a native Postgres synchronous commit.
 
 !!! Note Legacy synchronization availability
 For backward compatibility, PGD still supports configuring synchronous
 replication with `synchronous_commit` and `synchronous_standby_names`. See
 [Legacy synchronous replication](legacy-sync) for more on this option. We
-recommend users instead use [PGD Synchronous_Commit](synchronous_commit).
+recommend that you use [PGD Synchronous Commit](synchronous_commit) instead.
 !!!
-
-
diff --git a/product_docs/docs/pgd/5/monitoring/index.mdx b/product_docs/docs/pgd/5/monitoring/index.mdx
index cf4d509eb72..b9240d7e0ff 100644
--- a/product_docs/docs/pgd/5/monitoring/index.mdx
+++ b/product_docs/docs/pgd/5/monitoring/index.mdx
@@ -12,7 +12,7 @@ Monitoring replication setups is important to ensure that your system:
 
 It's important to have automated monitoring in place to ensure that the
 administrator is alerted and can take proactive action when issues occur. For
 example, the administrator can be alerted if
-replication slots start falling badly behind,
+replication slots start falling badly behind.
 
 EDB provides Postgres Enterprise Manager (PEM), which supports PGD starting with version 8.1.
 See [Monitoring EDB Postgres Distributed](/pem/latest/monitoring_BDR_nodes/) for more information.
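+
+For a quick manual check of slot lag (a rough sketch using plain Postgres
+catalog views rather than PEM or any PGD-specific interface), a query like
+the following reports how far each replication slot's restart position
+trails the current WAL position:
+
+```sql
+SELECT slot_name,
+       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS lag
+FROM pg_replication_slots;
+```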
diff --git a/product_docs/docs/pgd/5/monitoring/otel.mdx b/product_docs/docs/pgd/5/monitoring/otel.mdx
index 62ffedceae1..172090dff80 100644
--- a/product_docs/docs/pgd/5/monitoring/otel.mdx
+++ b/product_docs/docs/pgd/5/monitoring/otel.mdx
@@ -15,7 +15,7 @@ These are attached to all metrics and traces:
 
-## OTEL and OLTP compatibility
+## OTEL and OTLP compatibility
 
-For OTEL connections the integration supports OLTP/HTTP version 1.0.0 only,
-over HTTP or HTTPS. It doesn't support OLTP/gRPC.
+For OTEL connections, the integration supports OTLP/HTTP version 1.0.0 only,
+over HTTP or HTTPS. It doesn't support OTLP/gRPC.
 
 ## Metrics collection
 
@@ -23,7 +23,7 @@ over HTTP or HTTPS. It doesn't support OLTP/gRPC.
 Setting the configuration option `bdr.metrics_otel_http_url` to a non-empty URL
 enables the metric collection.
 
-Different kinds of metrics are collected as shown in the tables that follow.
+Different kinds of metrics are collected, as shown in the tables that follow.
 
 ### Generic metrics
 
@@ -52,7 +52,7 @@ Different kinds of metrics are collected, as shown in the tables that follow.
 
 ### Consensus metric
 
-See also [Monitoring Raft Consensus](sql/#monitoring-raft-consensus)
+See also [Monitoring Raft consensus](sql/#monitoring-raft-consensus).
 
 | Metric name | Type | Labels | Description
 | ----------- | ---- | ------ | -----------
@@ -70,7 +70,7 @@ Tracing collection to OpenTelemetry requires configuring
 `bdr.trace_otel_http_url` and enabling tracing using `bdr.trace_enable`.
 
-The tracing is limited to only some subsystems at the moment, primarily to the
+The tracing is currently limited to only some subsystems, primarily to the
 cluster management functionality. The following spans can be seen in traces.
 
 | Span name | Description |
diff --git a/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
index e23ed4c2116..09d1c20fbb6 100644
--- a/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
+++ b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx
@@ -3,7 +3,7 @@ title: Joining a heterogeneous cluster
 ---
 
-PGD 4.0 node can join a EDB Postgres Distributed cluster running 3.7.x at a specific
+A PGD 4.0 node can join an EDB Postgres Distributed cluster running 3.7.x at a specific
 minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes.
 This procedure is useful when you want to upgrade not just the PGD
 major version but also the underlying PostgreSQL major
diff --git a/product_docs/docs/pgd/5/node_management/index.mdx b/product_docs/docs/pgd/5/node_management/index.mdx
index 6ebec55eb1b..373af0d2120 100644
--- a/product_docs/docs/pgd/5/node_management/index.mdx
+++ b/product_docs/docs/pgd/5/node_management/index.mdx
@@ -34,7 +34,7 @@ with nodes and subgroups.
   clusters.
 
 * [Groups and subgroups](groups_and_subgroups) goes into more detail on how
-  groups and subgroups work in PGD
+  groups and subgroups work in PGD.
 
 * [Creating and joining groups](creating_and_joining) looks at how new PGD
   groups can be created and how to join PGD nodes to them.
@@ -47,10 +47,10 @@ with nodes and subgroups.
   failure.
 
 * [Subscriber-only nodes and groups](subscriber_only) looks at how subscriber-only nodes work and
-  how they are configured.
+  how they're configured.
 
 * [Viewing topology](viewing_topology) details commands and SQL queries that can
-  show the structure of a PGD clusters nodes and groups.
+  show the structure of a PGD cluster's nodes and groups.
 * [Removing nodes and groups](removing_nodes_and_groups) shows the process to follow
   to safely remove a node from a group or a group from a cluster.
@@ -75,5 +75,3 @@ with nodes and subgroups.
 
 * [Maintenance commands through proxies](maintainance_with_proxies) shows how to
   send maintenance commands to nodes that you can't directly access, such as
   those behind a proxy.
-
-
diff --git a/product_docs/docs/pgd/5/node_management/node_recovery.mdx b/product_docs/docs/pgd/5/node_management/node_recovery.mdx
index 382135aa612..5065e6cb345 100644
--- a/product_docs/docs/pgd/5/node_management/node_recovery.mdx
+++ b/product_docs/docs/pgd/5/node_management/node_recovery.mdx
@@ -42,7 +42,7 @@ On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes
 also hold back freezing of data to prevent losing conflict-resolution data
 (see [Origin conflict detection](../consistency/conflicts)).
 
-Administrators must monitor for node outages (see [monitoring](../monitoring/))
+Administrators must monitor for node outages (see [Monitoring](../monitoring/))
 and make sure nodes have enough free disk space. If the workload is
 predictable, you might be able to calculate how much space is used over time,
 allowing a prediction of the maximum time a node can be down before critical
@@ -55,4 +55,4 @@ slot must be parted from the cluster, as described in [Replication slots created
 While a node is offline, the other nodes might not yet have received the same
 set of data from the offline node, so this might appear as a slight divergence
 across nodes. The parting process corrects this imbalance across nodes.
-(Later versions might do this earlier.)
\ No newline at end of file
+(Later versions might do this earlier.)
diff --git a/product_docs/docs/pgd/5/repsets.mdx b/product_docs/docs/pgd/5/repsets.mdx
index 7fec0ed65e9..bd266860a1d 100644
--- a/product_docs/docs/pgd/5/repsets.mdx
+++ b/product_docs/docs/pgd/5/repsets.mdx
@@ -261,7 +261,7 @@ There's also, as we recommend, a witness node, named `witness` in `region-c`, bu
 
 This configuration looks like this:
 
-![Multi-Region 3 Nodes Configuration](./images/always-on-2x3-aa-updated.png)
+![Multi-Region 3 Nodes Configuration](./planning/images/always-on-2x3-aa-updated.png)
 
 This is the standard Always-on multiregion configuration as discussed in the
 [Choosing your architecture](planning/architectures) section.
diff --git a/src/constants/updates.js b/src/constants/updates.js
index 76ee79562f0..88fa9cd9681 100644
--- a/src/constants/updates.js
+++ b/src/constants/updates.js
@@ -1,6 +1,14 @@
 import IconNames from "../components/icon/iconNames";
 
 export const updates = [
+  {
+    title: "Trusted Postgres Architect 23.33",
+    icon: IconNames.INSTANCES,
+    description:
+      "TPA 23.33 includes platform support for Debian 12, enables PGD 5.5 read-only proxies, and adds the ability to deploy the beacon agent for EDB Postgres AI into your deployed clusters.",
+    url: "/tpa/latest/",
+    moreUrl: "/tpa/latest/rel_notes/tpa_23.33_rel_notes/",
+  },
   {
     title: "EDB Postgres Distributed 5.5",
     icon: IconNames.HIGH_AVAILABILITY,