diff --git a/product_docs/docs/bdr/4/appusage.mdx b/product_docs/docs/bdr/4/appusage.mdx index 65e7625fa3d..f07c9eff0ad 100644 --- a/product_docs/docs/bdr/4/appusage.mdx +++ b/product_docs/docs/bdr/4/appusage.mdx @@ -174,7 +174,7 @@ a cluster, you can't add a node with a minor version if the cluster uses a newer protocol version. This returns an error. Both of these features might be affected by specific restrictions. -See [Release notes](release-notes) for any known incompatibilities. +See [Release notes](release_notes) for any known incompatibilities. ## Replicating between nodes with differences @@ -323,9 +323,9 @@ its different modes. You can test BDR applications using the following programs, in addition to other techniques. -- [TPAexec] -- [pgbench with CAMO/Failover options] -- [isolationtester with multi-node access] +- [TPAexec](#tpaexec) +- [pgbench with CAMO/Failover options](#pgbench-with-camofailover-options) +- [isolationtester with multi-node access](#isolationtester-with-multi-node-access) ### TPAexec diff --git a/product_docs/docs/bdr/4/camo.mdx b/product_docs/docs/bdr/4/camo.mdx index 1144dac6bf9..1f424ee8780 100644 --- a/product_docs/docs/bdr/4/camo.mdx +++ b/product_docs/docs/bdr/4/camo.mdx @@ -110,7 +110,7 @@ Valid values for `bdr.enable_camo` that enable CAMO are: * `remote_commit_async` * `remote_commit_flush` or `on` -See the [Comparison](durability#Comparison) of synchronous replication +See the [Comparison](durability/#comparison) of synchronous replication modes for details about how each mode behaves. Setting `bdr.enable_camo = off` disables this feature, which is the default. @@ -580,4 +580,4 @@ outages. ## CAMO versus group commit CAMO doesn't currently work with -[group commit](group_commit). +[group commit](group-commit). diff --git a/product_docs/docs/bdr/4/catalogs.mdx b/product_docs/docs/bdr/4/catalogs.mdx index 466ceb510a1..03deebb7222 100644 --- a/product_docs/docs/bdr/4/catalogs.mdx +++ b/product_docs/docs/bdr/4/catalogs.mdx @@ -129,7 +129,7 @@ managing global consensus. As for the `bdr.global_consensus_response_journal` catalog, the payload is stored in a binary encoded format, which can be decoded with the `bdr.decode_message_payload()` function. See the -[`bdr.global_consensus_journal_details`] view for more details. +[`bdr.global_consensus_journal_details`](#bdrglobal-consensus-journal-details) view for more details. #### `bdr.global_consensus_journal` columns @@ -176,7 +176,7 @@ that were received while managing global consensus. As for the `bdr.global_consensus_journal` catalog, the payload is stored in a binary-encoded format, which can be decoded with the `bdr.decode_message_payload()` function. See the -[`bdr.global_consensus_journal_details`] view for more details. +[`bdr.global_consensus_journal_details`](#bdrglobal-consensus-journal-details) view for more details. #### `bdr.global_consensus_response_journal` columns @@ -214,11 +214,11 @@ Don't modify the contents of this table. It is an important BDR catalog. ### `bdr.global_locks` -A view containing active global locks on this node. The `bdr.global_locks` view +A view containing active global locks on this node. The [`bdr.global_locks`](#bdrglobal-locks) view exposes BDR's shared-memory lock state tracking, giving administrators greater insight into BDR's global locking activity and progress. -See [Monitoring global locks](monitoring#Monitoring-global-locks) +See [Monitoring global locks](/pgd/latest/monitoring#monitoring-global-locks) for more information about global locking.
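As a quick check of what the view reports, a minimal query is enough (a sketch; the full column list follows below):

```sql
-- List any global (DDL/DML) locks currently tracked on this node;
-- an empty result means no global locking activity is in progress.
SELECT * FROM bdr.global_locks;
```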
#### `bdr.global_locks` columns @@ -310,7 +310,7 @@ This table identifies the local node in the current database of the current Post ### `bdr.local_node_summary` -A view containing the same information as [`bdr.node_summary`] but only for the +A view containing the same information as [`bdr.node_summary`](#bdrnode-summary) but only for the local node. ### `bdr.local_sync_status` @@ -481,7 +481,7 @@ Every node in the cluster regularly broadcasts its progress every is 60000 ms, i.e., 1 minute). Expect N \* (N-1) rows in this relation. You might be more interested in the `bdr.node_slots` view for monitoring -purposes. See also [Monitoring](monitoring). +purposes. See also [Monitoring](/pgd/latest/monitoring). #### `bdr.node_peer_progress` columns @@ -543,7 +543,7 @@ given node. This view contains information about replication slots used in the current database by BDR. -See [Monitoring outgoing replication](monitoring#monitoring-outgoing-replication) +See [Monitoring outgoing replication](/pgd/latest/monitoring#monitoring-outgoing-replication) for guidance on the use and interpretation of this view's fields. #### `bdr.node_slots` columns @@ -559,7 +559,7 @@ for guidance on the use and interpretation of this view's fields. | target_id | oid | The OID of the target node | | local_slot_name | name | Name of the replication slot according to BDR | | slot_name | name | Name of the slot according to Postgres (same as above) | -| is_group_slot | boolean | True if the slot is the node-group crash recovery slot for this node (see ["Group Replication Slot"]\(nodes.md#Group Replication Slot)) | +| is_group_slot | boolean | True if the slot is the node-group crash recovery slot for this node (see ["Group Replication Slot"](nodes#group-replication-slot)) | | is_decoder_slot | boolean | Is this slot used by Decoding Worker | | plugin | name | Logical decoding plugin using this slot (should be pglogical_output or bdr) | | slot_type | text | Type of the slot (should be logical) | diff --git a/product_docs/docs/bdr/4/column-level-conflicts.mdx b/product_docs/docs/bdr/4/column-level-conflicts.mdx index cb22e27afe9..ae21fa8a2e8 100644 --- a/product_docs/docs/bdr/4/column-level-conflicts.mdx +++ b/product_docs/docs/bdr/4/column-level-conflicts.mdx @@ -189,7 +189,7 @@ SELECT bdr.column_timestamps_enable('test_table'::regclass, 'cts', true); You can disable it using `bdr.column_timestamps_disable`. Commit timestamps currently have restrictions that are -explained in [Limitations](#limitations). +explained in [Notes](#notes). ## Inspecting column timestamps diff --git a/product_docs/docs/bdr/4/configuration.mdx b/product_docs/docs/bdr/4/configuration.mdx index 39325ef8bed..d91a3378034 100644 --- a/product_docs/docs/bdr/4/configuration.mdx +++ b/product_docs/docs/bdr/4/configuration.mdx @@ -82,7 +82,7 @@ Unless noted otherwise, you can set the values at any time. ### Global sequence parameters -- `bdr.default_sequence_kind` — Sets the default [sequence kind](sequences.md). +- `bdr.default_sequence_kind` — Sets the default [sequence kind](sequences). The default value is `distributed`, which means `snowflakeid` is used for `int8` sequences (i.e., `bigserial`) and `galloc` sequence for `int4` (i.e., `serial`) and `int2` sequences. @@ -138,7 +138,7 @@ Unless noted otherwise, you can set the values at any time. across all nodes might cause replicated DDL to interrupt replication until the administrator intervenes.
- See [Role manipulation statements](ddl#Role_manipulation_statements) + See [Role manipulation statements](ddl#role-manipulation-statements) for details. - `bdr.ddl_locking` — Configures the operation mode of global locking for DDL. @@ -366,7 +366,7 @@ time of each type of worker. This tracking table isn't persistent. It is cleared by PostgreSQL restarts, including soft restarts during crash recovery after an unclean backend exit. -You can use the view [`bdr.worker_tasks`](monitoring#bdr.worker_tasks) to inspect this state so the administrator can see any backoff +You can use the view [`bdr.worker_tasks`](catalogs#bdrworker_tasks) to inspect this state so the administrator can see any backoff rate limiting currently in effect. For rate limiting purposes, workers are classified by task. This key consists diff --git a/product_docs/docs/bdr/4/conflicts.mdx b/product_docs/docs/bdr/4/conflicts.mdx index 478b01d9b52..63c89db8a4f 100644 --- a/product_docs/docs/bdr/4/conflicts.mdx +++ b/product_docs/docs/bdr/4/conflicts.mdx @@ -928,7 +928,7 @@ The recognized methods for conflict detection are: For more information about the difference between `column_commit_timestamp` and `column_modify_timestamp` conflict detection methods, see -[Current vs Commit Timestamp](column-level-conflicts#current-vs-commit-timestamp]) +[Current vs Commit Timestamp](column-level-conflicts#current-versus-commit-timestamp) section in the CLCD chapter. This function uses the same replication mechanism as `DDL` statements. This diff --git a/product_docs/docs/bdr/4/ddl.mdx b/product_docs/docs/bdr/4/ddl.mdx index 5bf0468aadf..9e55d7299db 100644 --- a/product_docs/docs/bdr/4/ddl.mdx +++ b/product_docs/docs/bdr/4/ddl.mdx @@ -24,7 +24,7 @@ the DDL change to all nodes and ensure that they are consistent. In the default replication set, DDL is replicated to all nodes by default. To replicate DDL, a DDL replication filter has to be added to the -replication set. See [DDL Replication Filtering]. +replication set. See [DDL replication filtering](#ddl-replication-filtering). BDR is significantly different to standalone PostgreSQL when it comes to DDL replication, and treating it as the same is the most @@ -179,7 +179,7 @@ down node must first be removed from the configuration. If a DDL statement is not replicated, no global locks will be acquired. Locking behavior is specified by the `bdr.ddl_locking` parameter, as -explained in [Executing DDL on BDR systems](#Executing-DDL-on-BDR-systems): +explained in [Executing DDL on BDR systems](#executing-ddl-on-bdr-systems): - `ddl_locking = on` takes Global DDL Lock and, if needed, takes Relation DML Lock. - `ddl_locking = dml` skips Global DDL Lock and, if needed, takes Relation DML Lock. @@ -207,7 +207,7 @@ DDL replication is not active on Logical Standby nodes until they are promoted. Note that some BDR management functions act like DDL, meaning that they will attempt to take global locks and their actions will be replicated, if DDL replication is active. The full list of replicated functions is listed in -[BDR Functions that behave like DDL]. +[BDR functions that behave like DDL](#bdr-functions-that-behave-like-ddl). DDL executed on temporary tables never needs global locks. @@ -215,7 +215,7 @@ ALTER or DROP of an object created in the current transaction doesn't require a global DML lock. Monitoring of global DDL locks and global DML locks is shown in the -[Monitoring](monitoring) chapter. +[Monitoring](/pgd/latest/monitoring) chapter.
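To illustrate the `bdr.ddl_locking` modes listed above, a hedged sketch of scoping the setting to a single transaction might look like this (`my_table` and its column are hypothetical):

```sql
BEGIN;
-- Skip the Global DDL Lock for this low-impact change; the Relation DML
-- Lock is still taken if the statement needs one.
SET LOCAL bdr.ddl_locking = 'dml';
ALTER TABLE my_table ALTER COLUMN note SET DEFAULT '';  -- hypothetical table/column
COMMIT;
```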
## Minimizing the Impact of DDL @@ -324,7 +324,7 @@ BDR prevents some DDL statements from running when it is active on a database. This protects the consistency of the system by disallowing statements that cannot be replicated correctly, or for which replication is not yet supported. Statements that are supported with some restrictions -are covered in [DDL Statements With Restrictions]; while commands that are +are covered in [DDL Statements With Restrictions](#ddl-statements-with-restrictions); while commands that are entirely disallowed in BDR are covered in prohibited DDL statements. If a statement is not permitted under BDR, it is often possible to find diff --git a/product_docs/docs/bdr/4/durability.mdx b/product_docs/docs/bdr/4/durability.mdx index f37c672a622..42c8cfdaa3b 100644 --- a/product_docs/docs/bdr/4/durability.mdx +++ b/product_docs/docs/bdr/4/durability.mdx @@ -18,7 +18,7 @@ can all be implemented individually: eventually be applied on all nodes without further conflicts, or get an abort directly informing the client of an error. -BDR provides a [Group Commit](group_commit.md) feature to guarante +BDR provides a [Group Commit](group-commit) feature to guarantee durability and visibility by providing a variant of synchronous replication. This is very similar to Postgres' `synchronous_commit` feature for physical standbys, but providing a lot more flexibility @@ -45,8 +45,8 @@ Postgres itself provides [Physical Streaming Replication](https://www.postgresql (PSR), which is uni-directional, but offers a [synchronous variant](https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION). For backwards compatibility, BDR still supports configuring synchronous replication via `synchronous_commit` and `synchronous_standby_names`, see -[Legacy Synchronous Replication](durability.md#legacy-synchronous-replication), -but the use of [Group Commit](group_commit.md) is recommended instead +[Legacy Synchronous Replication](durability#legacy-synchronous-replication-using-bdr), +but the use of [Group Commit](group-commit) is recommended instead in all cases. This chapter covers the various forms of synchronous or eager @@ -85,7 +85,7 @@ Synchronous Replication with BDR, it refers to the `synchronous_commit` setting. For CAMO, it refers to the `bdr.enable_camo` setting. Lastly, for Group Commit, it refers to the confirmation requirements of the -[commit scope configuration](group_commit#configuration). +[commit scope configuration](group-commit#configuration). | Variant | Mode | Received | Visible | Durable | |--------------|-------------------------------|----------|----------|----------| @@ -214,7 +214,7 @@ required synchronization level and prevents loss of data. !!! Note This approach is not recommended. Please consider using - [Group Commit](group_commit.md) instead. + [Group Commit](group-commit) instead. ### Usage diff --git a/product_docs/docs/bdr/4/eager.mdx b/product_docs/docs/bdr/4/eager.mdx index e90111792f6..695562e4fcd 100644 --- a/product_docs/docs/bdr/4/eager.mdx +++ b/product_docs/docs/bdr/4/eager.mdx @@ -5,7 +5,7 @@ title: Eager Replication To prevent conflicts after a commit, set the `bdr.commit_scope` parameter to `global`. The default setting of `local` disables eager replication, so BDR will apply changes and resolve potential conflicts -post-commit, as described in the [Conflicts chapter](conflicts.md). +post-commit, as described in the [Conflicts chapter](conflicts). 
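As a sketch of the usage just described (the `accounts` table is hypothetical):

```sql
BEGIN;
SET LOCAL bdr.commit_scope = 'global';        -- eager, all-node conflict resolution
UPDATE accounts SET balance = balance - 100   -- hypothetical table
 WHERE id = 1;
COMMIT;  -- returns only after cluster-wide agreement, or aborts on conflict
```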
In this mode, BDR uses two-phase commit (2PC) internally to detect and resolve conflicts prior to the local commit. It turns a normal @@ -33,7 +33,7 @@ Eager All-Node Replication uses prepared transactions internally; therefore all replica nodes need to have a `max_prepared_transactions` configured high enough to be able to handle all incoming transactions (possibly in addition to local two-phase commit and CAMO transactions; -see [Configuration: Max Prepared Transactions](configuration.md#max-prepared-transactions)). +see [Configuration: Max Prepared Transactions](configuration#max-prepared-transactions)). We recommend to configure it the same on all nodes, and high enough to cover the maximum number of concurrent transactions across the cluster for which CAMO or Eager All-Node Replication is used. Other than @@ -93,8 +93,7 @@ Eager transactions). Other than this difference in configuration and invocation of that function, the client needs to adhere to the protocol -described for [CAMO](camo.md). See the [reference client -implementations](camo_clients.md). +described for [CAMO](camo). See the [reference client implementations](camo_clients). ### Limitations diff --git a/product_docs/docs/bdr/4/functions.mdx b/product_docs/docs/bdr/4/functions.mdx index 54ddbcd9588..dcc52636268 100644 --- a/product_docs/docs/bdr/4/functions.mdx +++ b/product_docs/docs/bdr/4/functions.mdx @@ -689,7 +689,7 @@ bdr.monitor_group_versions() #### Notes This function returns a record with fields `status` and `message`, -as explained in [Monitoring]. +as explained in [Monitoring](#monitoring). This function calls `bdr.run_on_all_nodes()`. @@ -708,7 +708,7 @@ bdr.monitor_group_raft() #### Notes This function returns a record with fields `status` and `message`, -as explained in [Monitoring]. +as explained in [Monitoring](#monitoring). This function calls `bdr.run_on_all_nodes()`. @@ -728,11 +728,11 @@ bdr.monitor_local_replslots() #### Notes This function returns a record with fields `status` and `message`, -as explained in [Monitoring Replication Slots](monitoring.md#monitoring-replication-slots). +as explained in [Monitoring Replication Slots](/pgd/latest/monitoring/#monitoring-replication-slots). ### bdr.wal_sender_stats -If the [Decoding Worker](nodes.md#decoding-worker) is enabled, this +If the [Decoding Worker](nodes#decoding-worker) is enabled, this function shows information about the decoder slot and current LCR (`Logical Change Record`) segment file being read by each WAL sender. @@ -755,7 +755,7 @@ bdr.wal_sender_stats() → setof record (pid integer, is_using_lcr boolean, ### bdr.get_decoding_worker_stat -If the [Decoding Worker](nodes.md#decoding-worker) is enabled, this function +If the [Decoding Worker](nodes#decoding-worker) is enabled, this function shows information about the state of the Decoding Worker associated with the current database. This also provides more granular information about Decoding Worker progress than is available via `pg_replication_slots`. @@ -778,11 +778,11 @@ bdr.get_decoding_worker_stat() → setof record (pid integer, decoded_upto_l #### Notes -For further details see [Monitoring WAL senders using LCR](monitoring.md#monitoring-wal-senders-using-lcr). +For further details see [Monitoring WAL senders using LCR](/pgd/latest/monitoring/#monitoring-wal-senders-using-lcr). 
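Taken together, the monitoring functions described above allow a quick health sweep of the local node, e.g.:

```sql
SELECT * FROM bdr.monitor_group_versions();   -- are all nodes on compatible versions?
SELECT * FROM bdr.monitor_group_raft();       -- is Raft consensus working?
SELECT * FROM bdr.monitor_local_replslots();  -- are local replication slots healthy?
```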
### bdr.lag_control -If [Lag Control](lag-control.mdx#configuration) is enabled, this function +If [Lag Control](lag-control#configuration) is enabled, this function shows information about the commit delay and number of nodes conforming to their configured lag measure for the local node and current database. @@ -899,7 +899,7 @@ consensus mechanism is working. Internal implementation of sequence increments. This function will be used instead of standard `nextval` in queries which -interact with [BDR Global Sequences]. +interact with [BDR Global Sequences](#bdr-global-sequences). #### Notes diff --git a/product_docs/docs/bdr/4/group-commit.mdx b/product_docs/docs/bdr/4/group-commit.mdx index 800bc22dcc3..a81d8183407 100644 --- a/product_docs/docs/bdr/4/group-commit.mdx +++ b/product_docs/docs/bdr/4/group-commit.mdx @@ -94,7 +94,7 @@ SELECT bdr.add_commit_scope( ### Confirmation Levels BDR nodes can send confirmations for a transaction at different points -in time, similar to [Commit At Most Once](camo.md). In increasing +in time, similar to [Commit At Most Once](camo). In increasing levels of protection, from the perspective of the confirming node, these are: diff --git a/product_docs/docs/bdr/4/index.mdx b/product_docs/docs/bdr/4/index.mdx index 92121684d9a..cc31cff2bc7 100644 --- a/product_docs/docs/bdr/4/index.mdx +++ b/product_docs/docs/bdr/4/index.mdx @@ -51,7 +51,7 @@ Detailed overview about how BDR works is described in the BDR is compatible with PostgreSQL, EDB Postgres Extended and EDB Postgres Advanced flavors of PostgreSQL database servers and can be deployed as a -standard PG extension. See [Comptibility Matrix](pgd/latest/compatibility_matrix/) +standard PG extension. See [Compatibility Matrix](/pgd/latest/compatibility_matrix/) for details of supported version combinations. It is important to note that some key BDR features depend on certain core diff --git a/product_docs/docs/bdr/4/lag-control.mdx b/product_docs/docs/bdr/4/lag-control.mdx index 1b34a7ae0d2..6284ad8f972 100644 --- a/product_docs/docs/bdr/4/lag-control.mdx +++ b/product_docs/docs/bdr/4/lag-control.mdx @@ -71,7 +71,7 @@ of milliseconds with a fractional part including a sub-millisecond setting if appropriate. By default, `bdr.lag_control_min_conforming_nodes` is set to one (1). -For a complete list, see [Lag Control](configuration.md) +For a complete list, see [Lag Control](configuration). ## Overview diff --git a/product_docs/docs/bdr/4/nodes.mdx b/product_docs/docs/bdr/4/nodes.mdx index 15809b6f8a3..8cfafa4da14 100644 --- a/product_docs/docs/bdr/4/nodes.mdx +++ b/product_docs/docs/bdr/4/nodes.mdx @@ -290,7 +290,7 @@ advised to take care to quiesce the database before promotion. You may make DDL changes to logical standby nodes but they will not be replicated, nor will they attempt to take global DDL locks. BDR functions -which act similarly to DDL will also not be replicated. See [DDL Replication]. +which act similarly to DDL will also not be replicated. See [DDL Replication](#ddl-replication). If you have made incompatible DDL changes to a logical standby, then the database is said to be a divergent node. Promotion of a divergent node will currently result in replication failing. @@ -363,7 +363,7 @@ the Primary: 1. Assume control of the same IP address or hostname as the Primary. 2. Inform the BDR cluster of the change in address by executing the - [bdr.alter_node_interface] function on all other BDR nodes. + [bdr.alter_node_interface](#bdralter-node-interface) function on all other BDR nodes.
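For step 2, a hedged sketch of the call, assuming the two-argument form (node name, then the new connection string; verify the signature in your installed version):

```sql
-- Run on every other BDR node so peers reconnect to the promoted standby
SELECT bdr.alter_node_interface(
    'node-a',                                  -- hypothetical node name
    'host=10.0.0.5 port=5432 dbname=bdrdb'     -- hypothetical new DSN
);
```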
Once this is done, the other BDR nodes will re-establish communication with the newly promoted Standby -> Primary node. Since replication @@ -585,7 +585,7 @@ On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes also hold back freezing of data to prevent losing conflict resolution data (see: [Origin Conflict Detection](conflicts)). -Administrators should monitor for node outages (see: [monitoring](monitoring)) +Administrators should monitor for node outages (see: [monitoring](/pgd/latest/monitoring/)) and make sure nodes have sufficient free disk space. If the workload is predictable, it may be possible to calculate how much space is used over time, allowing a prediction of the maximum time a node can be down before critical @@ -932,7 +932,7 @@ bdr.create_node_group(node_group_name text, 'read coordinator' and 'write coordinator'. 'subscriber-only' type is used to create a group of nodes that only receive changes from the fully joined nodes in the cluster, but they never send replication - changes to other nodes. See [Subscriber-Only Nodes] for more details. + changes to other nodes. See [Subscriber-Only Nodes](#subscriber-only-nodes) for more details. Datanode implies that the group represents a shard, whereas the other values imply that the group represents respective coordinators. Except 'subscriber-only', the rest three values are reserved for future use. @@ -1013,7 +1013,7 @@ bdr.alter_node_group_config(node_group_name text, initially the 'local' commit scope. This is only applicable to the top-level node group. Individual rules can be used for different origin groups of the same commit scope. See the section about - [Origin Groups](group_commit.md) for Group Commit for more details. + [Origin Groups](group-commit) for Group Commit for more details. #### Notes diff --git a/product_docs/docs/bdr/4/overview.mdx b/product_docs/docs/bdr/4/overview.mdx index 07908f8d97f..31c8b02e519 100644 --- a/product_docs/docs/bdr/4/overview.mdx +++ b/product_docs/docs/bdr/4/overview.mdx @@ -14,7 +14,7 @@ other servers that are part of the same BDR group. By default BDR uses asynchronous replication, applying changes on the peer nodes only after the local commit. An optional -[eager all node replication](eager) feature allows for commiting +[eager all node replication](eager) feature allows for committing on all nodes using consensus. ## Basic Architecture diff --git a/product_docs/docs/bdr/4/release_notes/bdr4.0.1_rel_notes.mdx b/product_docs/docs/bdr/4/release_notes/bdr4.0.1_rel_notes.mdx index afc1c4bb573..af1726b726a 100644 --- a/product_docs/docs/bdr/4/release_notes/bdr4.0.1_rel_notes.mdx +++ b/product_docs/docs/bdr/4/release_notes/bdr4.0.1_rel_notes.mdx @@ -32,4 +32,4 @@ This release supports upgrading from the following versions of BDR: - 4.0.0 and higher Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](upgrades) before upgrading. +in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/bdr/4/release_notes/bdr4.0.2_rel_notes.mdx b/product_docs/docs/bdr/4/release_notes/bdr4.0.2_rel_notes.mdx index 20cbf7fb0c9..d39bfb08be5 100644 --- a/product_docs/docs/bdr/4/release_notes/bdr4.0.2_rel_notes.mdx +++ b/product_docs/docs/bdr/4/release_notes/bdr4.0.2_rel_notes.mdx @@ -34,4 +34,4 @@ The upgrade path from BDR 3.7 is not currently stable and needs to be considered beta. Tests should be performed with at least BDR 3.7.15. 
Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](upgrades) before upgrading. +in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/bdr/4/release_notes/bdr4.1.0_rel_notes.mdx b/product_docs/docs/bdr/4/release_notes/bdr4.1.0_rel_notes.mdx index a2fab9c4a28..859bb4f68e4 100644 --- a/product_docs/docs/bdr/4/release_notes/bdr4.1.0_rel_notes.mdx +++ b/product_docs/docs/bdr/4/release_notes/bdr4.1.0_rel_notes.mdx @@ -65,4 +65,4 @@ This release supports upgrading from the following versions of BDR: - 3.7.16 Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](upgrades) before upgrading. +in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/bdr/4/release_notes/bdr4_rel_notes.mdx b/product_docs/docs/bdr/4/release_notes/bdr4_rel_notes.mdx index 7b622d30fc5..468ec891233 100644 --- a/product_docs/docs/bdr/4/release_notes/bdr4_rel_notes.mdx +++ b/product_docs/docs/bdr/4/release_notes/bdr4_rel_notes.mdx @@ -15,7 +15,7 @@ versions are 3.7 and 3.6. | Feature | User Experience | There is no pglogical 4.0 extension that corresponds to the BDR 4.0 extension. BDR no longer has a requirement for pglogical.

This also means that only the BDR extension and schema exist and that any configuration parameters were renamed from `pglogical.` to `bdr.`.

| Feature | Initial experience | Some configuration options have changed defaults for better post-install experience:
- Parallel apply is now enabled by default (with 2 writers). Allows for better performance, especially with streaming enabled.
- `COPY` and `CREATE INDEX CONCURRENTLY` are now streamed directly to writer in parallel (on Postgres versions where streaming is supported) to all available nodes by default, eliminating or at least reducing replication lag spikes after these operations.
- The timeout for global locks has been increased to 10 minutes
- The `bdr.min_worker_backoff_delay` now defaults to 1s so that subscriptions retry connection only once per second on error | Feature | Reliability and operability | Greatly reduced the chance of false positives in conflict detection during node join for tables that use origin-based conflict detection -| Feature | Reliability and operability | Move configuration of CAMO pairs to SQL catalogs

To reduce chances of misconfiguration and make CAMO pairs within the BDR cluster known globally, move the CAMO configuration from the individual node's postgresql.conf to BDR system catalogs managed by Raft. This for example can prevent against inadvertently dropping a node that's still configured to be a CAMO partner for another active node.

Please see the [Upgrades chapter](upgrades#upgrading-a-camo-enable-cluster) for details on the upgrade process.

This deprecates GUCs `bdr.camo_partner_of` and `bdr.camo_origin_for` and replaces the functions `bdr.get_configured_camo_origin_for()` and `get_configured_camo_partner_of` with `bdr.get_configured_camo_partner`.

+| Feature | Reliability and operability | Move configuration of CAMO pairs to SQL catalogs

To reduce chances of misconfiguration and make CAMO pairs within the BDR cluster known globally, move the CAMO configuration from the individual node's postgresql.conf to BDR system catalogs managed by Raft. This can, for example, prevent inadvertently dropping a node that's still configured to be a CAMO partner for another active node.

Please see the [Upgrades chapter](/pgd/latest/upgrades/#upgrading-a-camo-enabled-cluster) for details on the upgrade process.

This deprecates GUCs `bdr.camo_partner_of` and `bdr.camo_origin_for` and replaces the functions `bdr.get_configured_camo_origin_for()` and `bdr.get_configured_camo_partner_of()` with `bdr.get_configured_camo_partner()`.
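As an illustrative sketch only (the node name is hypothetical and the exact signature can vary by version), the replacement function can be queried to confirm the pairing now held in the Raft-managed catalogs:

```sql
-- Show the configured CAMO partner for a given origin node
SELECT bdr.get_configured_camo_partner('node-a');  -- 'node-a' is hypothetical
```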

## Upgrades @@ -24,4 +24,4 @@ This release supports upgrading from the following version of BDR: - 3.7.13.1 Please make sure you read and understand the process and limitations described -in the [Upgrade Guide](upgrades) before upgrading. +in the [Upgrade Guide](/pgd/latest/upgrades/) before upgrading. diff --git a/product_docs/docs/bdr/4/repsets.mdx b/product_docs/docs/bdr/4/repsets.mdx index eefb8abad29..d608d80b39a 100644 --- a/product_docs/docs/bdr/4/repsets.mdx +++ b/product_docs/docs/bdr/4/repsets.mdx @@ -138,7 +138,7 @@ Management of replication sets. Note that, with the exception of `bdr.alter_node_replication_sets`, the following functions are considered to be `DDL` so DDL replication and global locking -applies to them, if that is currently active. See [DDL Replication]. +applies to them, if that is currently active. See [DDL Replication](ddl). ### bdr.create_replication_set @@ -332,7 +332,7 @@ the node where this function is executed. Tables can be added and removed to one or multiple replication sets. This only affects replication of changes (DML) in those tables, schema changes (DDL) are -handled by DDL replication set filters (see [DDL Replication Filtering] below). +handled by DDL replication set filters (see [DDL Replication Filtering](#ddl-replication-filtering)). The replication uses the table membership in replication sets in combination with the node replication sets configuration to determine which actions should be @@ -453,7 +453,7 @@ FROM bdr.tables WHERE set_name = 'myrepset'; ``` -In the section [Behavior with Foreign Keys] above, we report a +In the section [Behavior with Foreign Keys](#behavior-with-foreign-keys), we report a query that lists all the foreign keys whose referenced table is not included in the same replication set as the referencing table. diff --git a/product_docs/docs/bdr/4/security.mdx b/product_docs/docs/bdr/4/security.mdx index a0bb8d722f3..d412c66d630 100644 --- a/product_docs/docs/bdr/4/security.mdx +++ b/product_docs/docs/bdr/4/security.mdx @@ -17,7 +17,7 @@ similarly to the PostgreSQL default/predefined roles: - *bdr_read_all_conflicts* - can view *all* conflicts in `bdr.conflict_history`. These BDR roles are created when the BDR extension is -installed. See [BDR Default Roles] below for more details. +installed. See [BDR Default Roles](#bdr-default-roles) for more details. Managing BDR does not require that administrators have access to user data. @@ -46,7 +46,7 @@ to exclude the `public` schema from the search_path without problems. Administrators should not grant explicit privileges on catalog objects such as tables, views and functions; manage access to those objects by granting one of the roles documented in [BDR -Default Roles]. +Default Roles](#bdr-default-roles). This requirement is a consequence of the flexibility that allows joining a node group even if the nodes on either side of the join do diff --git a/product_docs/docs/bdr/4/sequences.mdx b/product_docs/docs/bdr/4/sequences.mdx index caca78639fb..a309731534c 100644 --- a/product_docs/docs/bdr/4/sequences.mdx +++ b/product_docs/docs/bdr/4/sequences.mdx @@ -212,7 +212,7 @@ to or more than the above ranges assigned for each sequence datatype. should not be used. A few limitations apply to galloc sequences. BDR tracks galloc sequences in a -special BDR catalog [bdr.sequence_alloc](catalogs.md#bdrsequence_alloc). This +special BDR catalog [bdr.sequence_alloc](catalogs#bdrsequence_alloc). This catalog is required to track the currently allocated chunks for the galloc sequences. 
The sequence name and namespace are stored in this catalog. Since the sequence chunk allocation is managed via Raft whereas any changes to the @@ -505,7 +505,7 @@ Once set, `seqkind` is only visible via the `bdr.sequences` view; in all other ways the sequence will appear as a normal sequence. BDR treats this function as `DDL`, so DDL replication and global locking applies, -if that is currently active. See [DDL Replication]. +if that is currently active. See [DDL Replication](ddl). #### Synopsis ```sql diff --git a/product_docs/docs/bdr/4/striggers.mdx b/product_docs/docs/bdr/4/striggers.mdx index bbb9a6ec5fb..840de32c83b 100644 --- a/product_docs/docs/bdr/4/striggers.mdx +++ b/product_docs/docs/bdr/4/striggers.mdx @@ -392,7 +392,7 @@ bdr.trigger_get_type() This function returns the current conflict type if called inside a conflict trigger, or `NULL` otherwise. -See [Conflict Types]\(conflicts.md#List of Conflict Types) +See [Conflict Types](conflicts#list-of-conflict-types) for possible return values of this function. #### Synopsis diff --git a/product_docs/docs/bdr/4/transaction-streaming.mdx b/product_docs/docs/bdr/4/transaction-streaming.mdx index 09778d299d2..810064f8a4e 100644 --- a/product_docs/docs/bdr/4/transaction-streaming.mdx +++ b/product_docs/docs/bdr/4/transaction-streaming.mdx @@ -117,7 +117,7 @@ either a writer or a file. The decision is based on several factors: (writer 0 is always reserved for non-streamed transactions) - if parallel apply is on, but all writers are already busy handling streamed transactions, then the new transaction will be streamed to a file. See - [bdr.writers]\(monitoring.md#Monitoring BDR Writers) to check BDR + [bdr.writers](monitoring#monitoring-bdr-writers) to check BDR writer status. If streaming to a writer is possible (i.e. a free writer is available), then the diff --git a/product_docs/docs/harp/2/08_harpctl.mdx b/product_docs/docs/harp/2/08_harpctl.mdx index 3eb2aec25d2..bc39a94a9b1 100644 --- a/product_docs/docs/harp/2/08_harpctl.mdx +++ b/product_docs/docs/harp/2/08_harpctl.mdx @@ -366,7 +366,7 @@ harpctl set cluster event_sync_interval=200 ### `harpctl set node` Sets node-related attributes for the named node. Any options mentioned in -[Node directives](04_configuration#node_directives) are valid here. +[Node directives](04_configuration#node-directives) are valid here. Example: @@ -376,7 +376,7 @@ harpctl set node mynode priority=500 ### `harpctl set proxy` -Sets proxy-related attributes for the named proxy. Any options mentioned in the [Proxy directives](04_configuration#proxy_directives) +Sets proxy-related attributes for the named proxy. Any options mentioned in the [Proxy directives](04_configuration#proxy-directives) are valid here. Properties set this way require a restart of the proxy before the new value takes effect. diff --git a/product_docs/docs/harp/2/09_consensus-layer.mdx b/product_docs/docs/harp/2/09_consensus-layer.mdx index 953b754519a..2ac5b7a4a34 100644 --- a/product_docs/docs/harp/2/09_consensus-layer.mdx +++ b/product_docs/docs/harp/2/09_consensus-layer.mdx @@ -17,8 +17,8 @@ supported DCS implementations. ## BDR driver compatibility The `bdr` native consensus layer is available from BDR versions -[3.6.21](/bdr/latest/release-notes/#bdr-3621) -and [3.7.3](/bdr/latest/release-notes/#bdr-373). +[3.6.21](/bdr/3.7/release-notes/#bdr-3621) +and [3.7.3](/bdr/3.7/release-notes/#bdr-373). For the purpose of maintaining a voting quorum, BDR Logical Standby nodes don't participate in consensus communications in a BDR cluster.
Don't count these in the total node list to fulfill DCS quorum requirements. diff --git a/product_docs/docs/pgd/4/architectures/bronze.mdx b/product_docs/docs/pgd/4/architectures/bronze.mdx index 7f47bfc48bb..66487140652 100644 --- a/product_docs/docs/pgd/4/architectures/bronze.mdx +++ b/product_docs/docs/pgd/4/architectures/bronze.mdx @@ -1,9 +1,9 @@ --- -title: "AlwaysOn Bronze (single active location - cloud region or on prem data center)" +title: "Always On Bronze (single active location - cloud region or on prem data center)" navTitle: Bronze --- -The AlwaysOn Bronze architecture includes the following: +The Always On Bronze architecture includes the following: - Two BDR data nodes - One BDR witness node that doesn't hold data but is used for consensus diff --git a/product_docs/docs/pgd/4/architectures/gold.mdx b/product_docs/docs/pgd/4/architectures/gold.mdx index 0b969057cd0..fb1e1b73fe3 100644 --- a/product_docs/docs/pgd/4/architectures/gold.mdx +++ b/product_docs/docs/pgd/4/architectures/gold.mdx @@ -1,5 +1,5 @@ --- -title: "AlwaysOn Gold (two active locations - cloud regions or on prem data centers)" +title: "Always On Gold (two active locations - cloud regions or on prem data centers)" navTitle: "Gold" --- @@ -8,9 +8,9 @@ This architecture favors local resiliency/redundancy first and remote locations This architecture enables geo-distributed writes where no/low conflict handling is expected -The AlwaysOn Gold architecture requires intervention to move between locations but all data will be replicated and available to the application when failover is initiated with full capacity. +The Always On Gold architecture requires intervention to move between locations but all data will be replicated and available to the application when failover is initiated with full capacity. -The AlwaysOn Gold architecture includes the following: +The Always On Gold architecture includes the following: - Four BDR data nodes (two in location A, two in location B) - One BDR witness node in location C (optional but recommended) diff --git a/product_docs/docs/pgd/4/architectures/index.mdx b/product_docs/docs/pgd/4/architectures/index.mdx index 4375343a001..5940a6e18e1 100644 --- a/product_docs/docs/pgd/4/architectures/index.mdx +++ b/product_docs/docs/pgd/4/architectures/index.mdx @@ -7,20 +7,20 @@ navigation: - platinum --- -AlwaysOn architectures reflect EDB’s recommended practices and help you to achieve the highest possible service availability in multiple configurations. These configurations range from single-location architectures to complex distributed systems that protect from hardware failures and data center failures. The architectures leverage EDB Postgres Distributed’s multi-master capability and its ability to achieve 99.999% availability, even during maintenance operations. +Always On architectures reflect EDB’s recommended practices and help you to achieve the highest possible service availability in multiple configurations. These configurations range from single-location architectures to complex distributed systems that protect from hardware failures and data center failures. The architectures leverage EDB Postgres Distributed’s multi-master capability and its ability to achieve 99.999% availability, even during maintenance operations. -You can use EDB Postgres Distributed for architectures beyond the examples described here. Use-case-specific variations have been successfully deployed in production. However, these variations must undergo rigorous architecture review first. 
Also, EDB’s standard deployment tool for AlwaysOn architectures, TPAExec, must be enabled to support the variations before they can be supported in production environments. +You can use EDB Postgres Distributed for architectures beyond the examples described here. Use-case-specific variations have been successfully deployed in production. However, these variations must undergo rigorous architecture review first. Also, EDB’s standard deployment tool for Always On architectures, TPAExec, must be enabled to support the variations before they can be supported in production environments. -## Standard EDB AlwaysOn architectures +## Standard EDB Always On architectures -EDB has identified four standard AlwaysOn architectures: +EDB has identified four standard Always On architectures: - [Bronze](bronze): Single active location (data center or availability zone \[AZ\]) - [Silver](silver): Single active location with redundant hardware to quickly restore failover capability and a backup in a disaster recovery (DR) location - [Gold](gold): Two active locations - [Platinum](platinum): Two active locations with additional redundant hardware in a hot standby mode -All AlwaysOn architectures protect a progressively robust range of failure situations. For example, AlwaysOn Bronze protects against local hardware failure but doesn't provide protection from location (data center or AZ) failure. AlwaysOn Silver makes sure that a backup is kept at a different location, thus providing some protection in case of the catastrophic loss of a location. However, the database still must be restored from backup first, which might violate recovery time objective (RTO) requirements. AlwaysOn Gold provides two active locations connected in a multi-master mesh network, making sure that service remains available even in case a location goes offline. Finally, AlwaysOn Platinum adds redundant hot standby hardware in both locations to maintain local high availability in case of a hardware failure. +All Always On architectures protect a progressively robust range of failure situations. For example, Always On Bronze protects against local hardware failure but doesn't provide protection from location (data center or AZ) failure. Always On Silver makes sure that a backup is kept at a different location, thus providing some protection in case of the catastrophic loss of a location. However, the database still must be restored from backup first, which might violate recovery time objective (RTO) requirements. Always On Gold provides two active locations connected in a multi-master mesh network, making sure that service remains available even in case a location goes offline. Finally, Always On Platinum adds redundant hot standby hardware in both locations to maintain local high availability in case of a hardware failure. Each architecture can provide zero recovery point objective (RPO), as data can be streamed synchronously to at least one local master, thus guaranteeing zero data loss in case of local hardware failure. @@ -28,21 +28,23 @@ Increasing the availability guarantee drives additional cost for hardware and li ## Architecture details -EDB Postgres Distributed uses a [Raft](https://raft.github.io)-based consensus architecture. While regular database operations (insert, select, delete) don’t require cluster-wide consensus, EDB Postgres Distributed benefits from an odd number of BDR nodes to make decisions that require consensus, such as generating new global sequences, or distributed DDL operations. 
Even the simpler architectures always have three BDR nodes, even if not all of them are storing data. AlwaysOn Gold and Platinum, which use two active locations, introduce a fifth BDR node as a witness node to support the RAFT requirements. +EDB Postgres Distributed uses a [Raft](https://raft.github.io)-based consensus architecture. While regular database operations (insert, select, delete) don’t require cluster-wide consensus, EDB Postgres Distributed benefits from an odd number of BDR nodes to make decisions that require consensus, such as generating new global sequences, or distributed DDL operations. Even the simpler architectures always have three BDR nodes, even if not all of them are storing data. Always On Gold and Platinum, which use two active locations, introduce a fifth BDR node as a witness node to support the RAFT requirements. -Applications connect to the standard AlwaysOn architectures by way of multi-host connection strings, where each pgBouncer/HAProxy server is a distinct entry in the multi-host connection string. Other connection mechanisms have been successfully deployed in production, but they're not part of the standard AlwaysOn architectures. +Applications connect to the standard Always On architectures by way of multi-host connection strings, where each HA-proxy server is a distinct entry in the multi-host connection string. Other connection mechanisms have been successfully deployed in production, but they're not part of the standard Always On architectures. ## Choosing your architecture -Use these criteria to help you to select the appropriate AlwaysOn architecture. +All architectures provide the following: +* Hardware failure protection +* Zero downtime upgrades +* Support for availability zones in public/private cloud + +Use these criteria to help you to select the appropriate Always On architecture. | | Bronze | Silver | Gold | Platinum | |-----------------------------|------------------|------------------|----------------|----------------------| -| Hardware failure protection | Yes | Yes | Yes | Yes | | Location failure protection | No (unless Barman is moved offsite)| Yes - Recovery from backup | Yes - instant failover to fully functional site | Yes - instant failover to fully functional site | | Failover to DR or full DC | DR (if Barman is located offsite); NA otherwise | DR (if Barman is located offsite) | Full DC | Full DC | -| Zero downtime upgrade | Yes | Yes | Yes | Yes | -| Support of availability zones in public/ private cloud | Yes | Yes | Yes | Yes | | Fast local restoration of high availability after device failure | No; time to restore HA: (1) VM prov + (2) approx 60 min/500GB | Yes; three local data nodes allow to maintain HA after device failure | No; time to restore HA: (1) VM prov + (2) approx 60 min/500GB | Yes; logical standbys can quickly be promoted to master data nodes | | Cross data center network traffic | Backup traffic only (if Barman is located offsite); none otherwise | Backup traffic only (if Barman is located offsite); none otherwise | Full replication traffic | Full replication traffic | | License cost | 2 data nodes | 3 data nodes | 4 data nodes | 4 data nodes
2 logical standbys | diff --git a/product_docs/docs/pgd/4/architectures/platinum.mdx b/product_docs/docs/pgd/4/architectures/platinum.mdx index 6dd946c2180..929f72431ea 100644 --- a/product_docs/docs/pgd/4/architectures/platinum.mdx +++ b/product_docs/docs/pgd/4/architectures/platinum.mdx @@ -1,9 +1,9 @@ --- -title: "AlwaysOn Platinum (two locations; fast HA restoration)" +title: "Always On Platinum (two locations; fast HA restoration)" navTitle: "Platinum" --- -The AlwaysOn Platinum architecture includes the following: +The Always On Platinum architecture includes the following: - Four BDR data nodes (two in location A, two in location B) - Two logical standby nodes (one in location A, one in location B) diff --git a/product_docs/docs/pgd/4/architectures/silver.mdx b/product_docs/docs/pgd/4/architectures/silver.mdx index 3fc9f5868c7..7fec464ffde 100644 --- a/product_docs/docs/pgd/4/architectures/silver.mdx +++ b/product_docs/docs/pgd/4/architectures/silver.mdx @@ -1,9 +1,9 @@ --- -title: "AlwaysOn Silver (single active location - cloud region or on prem data center)" +title: "Always On Silver (single active location - cloud region or on prem data center)" navTitle: Silver --- -The AlwaysOn Silver architecture includes the following: +The Always On Silver architecture includes the following: - Three BDR data nodes - Two HARP-Proxy nodes diff --git a/product_docs/docs/pgd/4/choosing_durability.mdx b/product_docs/docs/pgd/4/choosing_durability.mdx index a6343e41827..0e5a67b9617 100644 --- a/product_docs/docs/pgd/4/choosing_durability.mdx +++ b/product_docs/docs/pgd/4/choosing_durability.mdx @@ -8,7 +8,7 @@ EDB Postgres Distributed allows you to choose from several replication configura * Synchronous (using `synchronous_standby_names`) * [Commit at Most Once](/bdr/latest/camo) * [Eager](/bdr/latest/eager) -* [Group Commit](/bdr/latest/group_commit) +* [Group Commit](/bdr/latest/group-commit) For more information, see [Durability](/bdr/latest/durability). diff --git a/product_docs/docs/pgd/4/deployments/index.mdx b/product_docs/docs/pgd/4/deployments/index.mdx index 474a1c899ac..e2c122e85a9 100644 --- a/product_docs/docs/pgd/4/deployments/index.mdx +++ b/product_docs/docs/pgd/4/deployments/index.mdx @@ -10,7 +10,7 @@ You can deploy and install EDB Postgres Distributed products using the following Coming soon: -- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of extreme high availability support through EDB Postres Distributed allows single-region AlwaysOn Gold clusters: two BDR groups in different availability zones in a single cloud region, with a witness node in a third availability zone. +- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of extreme high availability support through EDB Postgres Distributed allows single-region Always On Gold clusters: two BDR groups in different availability zones in a single cloud region, with a witness node in a third availability zone. -- EDB Postgres for Kubernetes is an operator is designed, developed, and supported by EDB that covers the full lifecycle of a highly available Postgres database clusters with a primary/standby architecture, using native streaming replication.
It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift. +- EDB Postgres Distributed for Kubernetes will be a Kubernetes operator designed, developed, and supported by EDB that covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift. diff --git a/product_docs/docs/pgd/4/deployments/tpaexec/quick_start.mdx b/product_docs/docs/pgd/4/deployments/tpaexec/quick_start.mdx index 8b8ef1a84d2..1bca2bbec65 100644 --- a/product_docs/docs/pgd/4/deployments/tpaexec/quick_start.mdx +++ b/product_docs/docs/pgd/4/deployments/tpaexec/quick_start.mdx @@ -4,7 +4,7 @@ navTitle: "Quick start" --- -The following steps setup EDB Postgres Distributed with the AlwaysOn Silver +The following steps set up EDB Postgres Distributed with the Always On Silver architecture using Amazon EC2. 1. Generate a configuration file: diff --git a/product_docs/docs/pgd/4/monitoring.mdx b/product_docs/docs/pgd/4/monitoring.mdx index f3668174ea9..ce2e545c628 100644 --- a/product_docs/docs/pgd/4/monitoring.mdx +++ b/product_docs/docs/pgd/4/monitoring.mdx @@ -83,7 +83,7 @@ node_seq_id | 3 node_local_dbname | postgres ``` -Also, the table [`bdr.node_catchup_info`](catalogs) will give information +Also, the table [`bdr.node_catchup_info`](/bdr/latest/catalogs) will give information on the catch-up state, which can be relevant to joining nodes or parting nodes. When a node is parted, it could be that some nodes in the cluster did not receive @@ -103,8 +103,8 @@ The `catchup_state` can be one of the following: There are two main views used for monitoring of replication activity: -- [`bdr.node_slots`](catalogs) for monitoring outgoing replication -- [`bdr.subscription_summary`](catalogs) for monitoring incoming replication +- [`bdr.node_slots`](/bdr/latest/catalogs) for monitoring outgoing replication +- [`bdr.subscription_summary`](/bdr/latest/catalogs) for monitoring incoming replication Most of the information provided by `bdr.node_slots` can be also obtained by querying the standard PostgreSQL replication monitoring views and Each node has one BDR group slot which should never have a connection to it and will very rarely be marked as active. This is normal, and does not imply -something is down or disconnected. See [`Replication Slots created by BDR`](nodes). +something is down or disconnected. See [`Replication Slots created by BDR`](/bdr/latest/nodes).
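A minimal sketch of querying the two views side by side:

```sql
-- Outgoing: one row per replication slot this node maintains for its peers
SELECT * FROM bdr.node_slots;

-- Incoming: one row per subscription this node consumes
SELECT * FROM bdr.subscription_summary;
```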
### Monitoring Outgoing Replication There is an additional view used for monitoring of outgoing replication activity: -- [`bdr.node_replication_rates`](catalogs) for monitoring outgoing replication +- [`bdr.node_replication_rates`](/bdr/latest/catalogs) for monitoring outgoing replication The `bdr.node_replication_rates` view gives an overall picture of the outgoing replication activity along with the catchup estimates for peer nodes, @@ -274,9 +274,9 @@ subscription_status | replicating ### Monitoring WAL senders using LCR -If the [Decoding Worker](nodes#decoding-worker) is enabled, information about the +If the [Decoding Worker](/bdr/latest/nodes#decoding-worker) is enabled, information about the current LCR (`Logical Change Record`) file for each WAL sender can be monitored -via the function [bdr.wal_sender_stats](functions#bdrwal_sender_stats), +via the function [bdr.wal_sender_stats](/bdr/latest/functions#bdrwal_sender_stats), e.g.: ``` @@ -291,10 +291,10 @@ postgres=# SELECT * FROM bdr.wal_sender_stats(); If `is_using_lcr` is `FALSE`, `decoder_slot_name`/`lcr_file_name` will be `NULL`. This will be the case if the Decoding Worker is not enabled, or the WAL sender is -serving a [logical standby]\(nodes.md#Logical Standby Nodes). +serving a [logical standby](/bdr/latest/nodes#logical-standby-nodes). Additionally, information about the Decoding Worker can be monitored via the function -[bdr.get_decoding_worker_stat](functions#bdr_get_decoding_worker_stat), e.g.: +[bdr.get_decoding_worker_stat](/bdr/latest/functions#bdr_get_decoding_worker_stat), e.g.: ``` postgres=# SELECT * FROM bdr.get_decoding_worker_stat(); @@ -363,7 +363,7 @@ Either or both entry types may be created for the same transaction, depending on the type of DDL operation and the value of the `bdr.ddl_locking` setting. Global locks held on the local node are visible in [the `bdr.global_locks` -view](catalogs#bdrglobal_locks). This view shows the type of the lock; for +view](/bdr/latest/catalogs#bdrglobal_locks). This view shows the type of the lock; for relation locks it shows which relation is being locked, the PID holding the lock (if local), and whether the lock has been globally granted or not. In case of global advisory locks, `lock_type` column shows `GLOBAL_LOCK_ADVISORY` and @@ -389,7 +389,7 @@ timing information. ## Monitoring Conflicts -Replication [conflicts](conflicts) can arise when multiple nodes make +Replication [conflicts](/bdr/latest/conflicts) can arise when multiple nodes make changes that affect the same rows in ways that can interact with each other. The BDR system should be monitored to ensure that conflicts are identified and, where possible, application changes are made to eliminate them or make diff --git a/product_docs/docs/pgd/4/upgrades/index.mdx b/product_docs/docs/pgd/4/upgrades/index.mdx index 21ff8083568..c3ca8c5c9a1 100644 --- a/product_docs/docs/pgd/4/upgrades/index.mdx +++ b/product_docs/docs/pgd/4/upgrades/index.mdx @@ -187,7 +187,7 @@ version of BDR4 binary like this: SELECT bdr.bdr_version(); ``` -Always check the [monitoring](monitoring) after upgrade of a node to confirm +Always check the [monitoring](../monitoring) after upgrade of a node to confirm that the upgraded node is working as expected.
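For example, a post-upgrade check on each node might combine the version function with the group monitoring functions (a sketch using functions documented in the BDR function reference):

```sql
SELECT bdr.bdr_version();                    -- confirm the new binary/extension version
SELECT * FROM bdr.monitor_group_versions();  -- all nodes on compatible versions?
SELECT * FROM bdr.monitor_group_raft();      -- consensus healthy after the restart?
```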
## Application Schema Upgrades diff --git a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx index 433923f963a..22b81c8255e 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/quickstart.mdx @@ -98,6 +98,5 @@ spec: ``` !!! Note "There's more" - For more detailed information about the available options, refer - to [API Reference(api_reference.md). - + For more detailed information about the available options, please refer + to the ["API Reference" section](api_reference.md).