diff --git a/product_docs/docs/pgd/5/choosing_server.mdx b/product_docs/docs/pgd/5/choosing_server.mdx index ae7cdc72734..83629330505 100644 --- a/product_docs/docs/pgd/5/choosing_server.mdx +++ b/product_docs/docs/pgd/5/choosing_server.mdx @@ -14,7 +14,7 @@ The following table lists features of EDB Postgres Distributed that are dependen | [Granular DDL Locking](ddl/#ddl-locking-details) | Y | Y | Y | | [Streaming of large transactions](transaction-streaming/) | v14+ | v13+ | v14+ | | [Distributed sequences](sequences/#pgd-global-sequences) | Y | Y | Y | -| [Subscribe-only nodes](nodes/#physical-standby-nodes) | Y | Y | Y | +| [Subscribe-only nodes](node_management/subscriber_only/) | Y | Y | Y | | [Monitoring](monitoring/) | Y | Y | Y | | [OpenTelemetry support](monitoring/otel/) | Y | Y | Y | | [Parallel apply](parallelapply) | Y | Y | Y | @@ -28,7 +28,7 @@ The following table lists features of EDB Postgres Distributed that are dependen | [Commit At Most Once (CAMO)](durability/camo/) | N | Y | 14+ | | [Eager Conflict Resolution](consistency/eager/) | N | Y | 14+ | | [Lag Control](durability/lag-control/) | N | Y | 14+ | -| [Decoding Worker](nodes/#decoding-worker) | N | 13+ | 14+ | +| [Decoding Worker](node_management/decoding_worker) | N | 13+ | 14+ | | [Lag tracker](monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | | Missing partition conflict | N | Y | 14+ | | No need for UPDATE Trigger on tables with TOAST | N | Y | 14+ | diff --git a/product_docs/docs/pgd/5/consistency/conflicts.mdx b/product_docs/docs/pgd/5/consistency/conflicts.mdx index 08d64f1e125..36eb100deba 100644 --- a/product_docs/docs/pgd/5/consistency/conflicts.mdx +++ b/product_docs/docs/pgd/5/consistency/conflicts.mdx @@ -475,7 +475,7 @@ Origin info is available only up to the point where a row is frozen. Updates arr A node that was offline that reconnects and begins sending data changes can cause divergent errors if the newly arrived updates are older than the frozen rows that they update. Inserts and deletes aren't affected by this situation. -We suggest that you don't leave down nodes for extended outages, as discussed in [Node restart and down node recovery](../nodes). +We suggest that you don't leave down nodes for extended outages, as discussed in [Node restart and down node recovery](../node_management). On EDB Postgres Extended Server and EDB Postgres Advanced Server, PGD holds back the freezing of rows while a node is down. This mechanism handles this situation gracefully so you don't need to change parameter settings. diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx index ff42aeb253c..da57bf12902 100644 --- a/product_docs/docs/pgd/5/index.mdx +++ b/product_docs/docs/pgd/5/index.mdx @@ -22,8 +22,8 @@ navigation: - upgrades - "#Using" - appusage + - node_management - postgres-configuration - - nodes - ddl - security - sequences diff --git a/product_docs/docs/pgd/5/monitoring/sql.mdx b/product_docs/docs/pgd/5/monitoring/sql.mdx index 36ac35396c8..82f849b2a4a 100644 --- a/product_docs/docs/pgd/5/monitoring/sql.mdx +++ b/product_docs/docs/pgd/5/monitoring/sql.mdx @@ -105,7 +105,7 @@ and Each node has one PGD group slot that must never have a connection to it and is very rarely be marked as active. This is normal and doesn't imply -something is down or disconnected. See [Replication slots created by PGD`](../nodes/#replication-slots-created-by-pgd). +something is down or disconnected. See [Replication slots](../node_management/replication_slots) in Node Management. 
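+
+For example, to confirm that the group slot exists and to check that it isn't active, you might run a query along these lines (the group slot's name can be retrieved with `bdr.local_group_slot_name()`):
+
+```sql
+-- Inspect the PGD group slot on the local node
+SELECT slot_name, active, confirmed_flush_lsn
+FROM pg_catalog.pg_replication_slots
+WHERE slot_name = bdr.local_group_slot_name();
+```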
### Monitoring outgoing replication @@ -271,7 +271,7 @@ subscription_status | replicating ### Monitoring WAL senders using LCR -If the [decoding worker](../nodes#decoding-worker) is enabled, you can monitor information about the +If the [decoding worker](../node_management/decoding_worker/) is enabled, you can monitor information about the current logical change record (LCR) file for each WAL sender using the function [`bdr.wal_sender_stats()`](/pgd/latest/reference/functions/#bdrwal_sender_stats). For example: @@ -287,7 +287,7 @@ postgres=# SELECT * FROM bdr.wal_sender_stats(); If `is_using_lcr` is `FALSE`, `decoder_slot_name`/`lcr_file_name` is `NULL`. This is the case if the decoding worker isn't enabled or the WAL sender is -serving a [logical standby](../nodes#logical-standby-nodes). +serving a [logical standby](../node_management/logical_standby_nodes/). Also, you can monitor information about the decoding worker using the function [`bdr.get_decoding_worker_stat()`](/pgd/latest/reference/functions/#bdrget_decoding_worker_stat). For example: diff --git a/product_docs/docs/pgd/5/node_management/connections_dsns_and_ssl.mdx b/product_docs/docs/pgd/5/node_management/connections_dsns_and_ssl.mdx new file mode 100644 index 00000000000..e2227d3eabb --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/connections_dsns_and_ssl.mdx @@ -0,0 +1,55 @@ +--- +title: Connection DSNs and SSL (TLS) +--- + +Because nodes connect +using `libpq`, the DSN of a node is a [`libpq`]( https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNECT-SSLMODE) connection string. As such, the connection string can contain any permitted `libpq` connection +parameter, including those for SSL. The DSN must work as the +connection string from the client connecting to the node in which it's +specified. An example of such a set of parameters using a client certificate is: + +```ini +sslmode=verify-full sslcert=bdr_client.crt sslkey=bdr_client.key +sslrootcert=root.crt +``` + +With this setup, the files `bdr_client.crt`, `bdr_client.key`, and +`root.crt` must be present in the data directory on each node, with the +appropriate permissions. +For `verify-full` mode, the server's SSL certificate is checked to +ensure that it's directly or indirectly signed with the `root.crt` certificate +authority and that the host name or address used in the connection matches the +contents of the certificate. In the case of a name, this can match a subject's +alternative name or, if there are no such names in the certificate, the +subject's common name (CN) field. +Postgres doesn't currently support subject alternative names for IP +addresses, so if the connection is made by address rather than name, it must +match the CN field. + +The CN of the client certificate must be the name of the user making the +PGD connection, +which is usually the user postgres. Each node requires matching +lines permitting the connection in the `pg_hba.conf` file. For example: + +```ini +hostssl all postgres 10.1.2.3/24 cert +hostssl replication postgres 10.1.2.3/24 cert +``` + +Another setup might be to use `SCRAM-SHA-256` passwords instead of client +certificates and not verify the server identity as long as +the certificate is properly signed. 
Here the DSN parameters might be: + +```ini +sslmode=verify-ca sslrootcert=root.crt +``` + +The corresponding `pg_hba.conf` lines are: + +```ini +hostssl all postgres 10.1.2.3/24 scram-sha-256 +hostssl replication postgres 10.1.2.3/24 scram-sha-256 +``` + +In such a scenario, the postgres user needs a [`.pgpass`](https://www.postgresql.org/docs/current/libpq-pgpass.html) file +containing the correct password. \ No newline at end of file diff --git a/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx b/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx new file mode 100644 index 00000000000..1a1c20634ec --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/creating_and_joining.mdx @@ -0,0 +1,95 @@ +--- +title: Creating and joining PGD groups +navTitle: Creating and joining PGD groups +--- + +## Creating and joining PGD groups + +For PGD, every node must connect to every other node. To make +configuration easy, when a new node joins, it configures all +existing nodes to connect to it. For this reason, every node, including +the first PGD node created, must know the [PostgreSQL connection string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING) that other nodes +can use to connect to it. This connection string is +sometimes referred to as a data source name (DSN). + +Both formats of connection string are supported. +So you can use either key-value format, like `host=myhost port=5432 dbname=mydb`, +or URI format, like `postgresql://myhost:5432/mydb`. + +The SQL function [`bdr.create_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) creates the PGD group +from the local node. Doing so activates PGD on that node and allows other +nodes to join the PGD group, which consists of only one node at that point. +At the time of creation, you must specify the connection string for other +nodes to use to connect to this node. + +Once the node group is created, every further node can join the PGD +group using the [`bdr.join_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function. + +Alternatively, use the command line utility [bdr_init_physical](/pgd/latest/reference/nodes/#bdr_init_physical) to +create a new node, using `pg_basebackup` or a physical standby of an existing +node. If using `pg_basebackup`, the bdr_init_physical utility can optionally +specify the base backup of only the target database. The earlier +behavior was to back up the entire database cluster. With this utility, the activity +completes faster and also uses less space because it excludes +unwanted databases. If you specify only the target database, then the excluded +databases get cleaned up and removed on the new node. + +When a new PGD node is joined to an existing PGD group or a node subscribes +to an upstream peer, before replication can begin the system must copy the +existing data from the peer nodes to the local node. This copy must be +carefully coordinated so that the local and remote data starts out +identical. It's not enough to use pg_dump yourself. The BDR +extension provides built-in facilities for making this initial copy. + +During the join process, the BDR extension synchronizes existing data +using the provided source node as the basis and creates all metadata +information needed for establishing itself in the mesh topology in the PGD +group. If the connection between the source and the new node disconnects during +this initial copy, restart the join process from the +beginning. 
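+
+For illustration only, bringing up a two-node group might look like the following sketch. The node names, database name, and DSNs are placeholders, and the sketch assumes each instance is first registered with `bdr.create_node()`:
+
+```sql
+-- On the first node: register the node with its own DSN, then create the PGD group
+SELECT bdr.create_node('node-one', 'host=node-one port=5432 dbname=pgddb');
+SELECT bdr.create_node_group('pgdgroup');
+
+-- On the second node: register the node, then join through the first node's DSN
+SELECT bdr.create_node('node-two', 'host=node-two port=5432 dbname=pgddb');
+SELECT bdr.join_node_group('host=node-one port=5432 dbname=pgddb');
+```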
+ +The node that's joining the cluster must not contain any schema or data +that already exists on databases in the PGD group. We recommend that the +newly joining database be empty except for the BDR extension. However, +it's important that all required database users and roles are created. + +Optionally, you can skip the schema synchronization using the `synchronize_structure` +parameter of the [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function. In this case, the schema must +already exist on the newly joining node. + +We recommend that you select the source node that has the best connection (the +closest) as the source node for joining. Doing so lowers the time +needed for the join to finish. + +Coordinate the join procedure using the Raft consensus algorithm, which +requires most existing nodes to be online and reachable. + +The logical join procedure (which uses the [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function) +performs data sync doing `COPY` operations and uses multiple writers +(parallel apply) if those are enabled. + +Node join can execute concurrently with other node joins for the majority of +the time taken to join. However, only one regular node at a time can be in +either of the states PROMOTE or PROMOTING. These states are typically fairly short if +all other nodes are up and running. Otherwise the join is serialized at +this stage. The subscriber-only nodes are an exception to this rule, and they +can be concurrently in PROMOTE and PROMOTING states as well, so their join +process is fully concurrent. + +The join process uses only one node as the source, so it can be +executed when nodes are down if a majority of nodes are available. +This approach can cause a complexity when running logical join. +During logical join, the commit timestamp of rows copied from the source +node is set to the latest commit timestamp on the source node. +Committed changes on nodes that have a commit timestamp earlier than this +(because nodes are down or have significant lag) can conflict with changes +from other nodes. In this case, the newly joined node can be resolved +differently to other nodes, causing a divergence. As a result, we recommend +not running a node join when significant replication lag exists between nodes. +If this is necessary, run LiveCompare on the newly joined node to +correct any data divergence once all nodes are available and caught up. + +`pg_dump` can fail when there's concurrent DDL activity on the source node +because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) uses pg_dump +internally, it might fail if there's concurrent DDL activity on the source node. +Retrying the join works in that case. diff --git a/product_docs/docs/pgd/5/node_management/decoding_worker.mdx b/product_docs/docs/pgd/5/node_management/decoding_worker.mdx new file mode 100644 index 00000000000..59c8364e0b9 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/decoding_worker.mdx @@ -0,0 +1,79 @@ +--- +title: Decoding worker +--- + +PGD provides an option to enable a decoding worker process that performs +decoding once, no matter how many nodes are sent data. This option introduces a +new process, the WAL decoder, on each PGD node. One WAL sender process still +exists for each connection, but these processes now just perform the task of +sending and receiving data. 
Taken together, these changes reduce the CPU
+overhead of larger PGD groups and also allow higher replication throughput,
+since the WAL sender process now spends more time on communication.
+
+## Enabling
+
+`enable_wal_decoder` is an option for each PGD group and is currently
+disabled by default. You can use [`bdr.alter_node_group_config()`](../reference/nodes-management-interfaces/#bdralter_node_group_config) to enable or
+disable the decoding worker for a PGD group.
+
+When the decoding worker is enabled, PGD stores logical change record (LCR)
+files to allow buffering of changes between decoding and the point when all
+subscribing nodes have received the data. LCR files are stored under the
+`pg_logical` directory in each local node's data directory. The number and
+size of the LCR files vary as replication lag increases, so this process also
+needs monitoring. The LCRs that aren't required by any of the PGD nodes are cleaned
+up periodically. The interval between two consecutive cleanups is controlled by
+[`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes. The cleanup is
+disabled when [`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval) is 0.
+
+## Disabling
+
+When the decoding worker is disabled, logical decoding is performed by the WAL sender process for each
+node subscribing to each node. In this case, no LCR files are written.
+
+Even when the decoding worker is enabled for a PGD group, the following
+GUCs control the production and use of LCRs on each node. By default
+they're `false`. To produce and use LCRs, enable the
+decoding worker for the PGD group and set these GUCs to `true` on each of the nodes in the PGD group.
+
+- [`bdr.enable_wal_decoder`](/pgd/latest/reference/pgd-settings#bdrenable_wal_decoder) — When `false`, all WAL
+  senders using LCRs restart to use WAL directly. When `true`,
+  together with the PGD group configuration, a decoding worker process is
+  started to produce LCRs, and WAL senders use the LCRs.
+- [`bdr.receive_lcr`](/pgd/latest/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests the WAL
+  sender on the publisher node to use LCRs if they're available.
+
+
+!!! Note Notes
+As of now, a decoding worker decodes changes corresponding to the node where it's
+running. A logical standby is sent changes from all the nodes in the PGD group
+through a single source. Hence a WAL sender serving a logical standby currently can't
+use LCRs.
+
+A subscriber-only node receives changes from the respective nodes directly. Hence
+a WAL sender serving a subscriber-only node can use LCRs.
+
+Even though LCRs are produced, the corresponding WAL is still retained, similar
+to the case when the decoding worker isn't enabled. In the future, it might be possible
+to remove the WAL corresponding to the LCRs if it isn't otherwise required.
+!!!
+
+## LCR file names
+
+For reference, the first 24 characters of an LCR file name are similar to those
+in a WAL file name. The first 8 characters of the name are currently all '0'.
+In the future, they're expected to represent the TimeLineId, similar to the first 8
+characters of a WAL segment file name. The following sequence of 16 characters
+of the name is similar to the WAL segment number, which is used to track LCR
+changes against the WAL stream.
+
+However, logical changes are
+reordered according to the commit order of the transactions they belong to.
+Hence their placement in the LCR segments doesn't match the placement of +corresponding WAL in the WAL segments. + +The set of the last 16 characters represents the +subsegment number in an LCR segment. Each LCR file corresponds to a +subsegment. LCR files are binary and variable sized. The maximum size of an +LCR file can be controlled by `bdr.max_lcr_segment_file_size`, which +defaults to 1 GB. diff --git a/product_docs/docs/pgd/5/node_management/groups_and_subgroups.mdx b/product_docs/docs/pgd/5/node_management/groups_and_subgroups.mdx new file mode 100644 index 00000000000..dfa466525a0 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/groups_and_subgroups.mdx @@ -0,0 +1,22 @@ +--- +title: Groups and subgroups +--- + +## Groups + +A PGD cluster's nodes are gathered in groups. A "top level" group always exists and is the group +to which all data nodes belong to automatically. The "top level" group can also be +the direct parent of sub-groups. + +## Sub-groups + +A group can also contain zero or more subgroups. Subgroups can be used to +represent data centers or locations allowing commit scopes to refer to nodes +in a particular region as a whole. PGD Proxy can also make use of subgroups +to delineate nodes available to be write leader. + +The `node_group_type` value specifies the type when the subgroup is created. +Some sub-group types change the behavior of the nodes within the group. For +example, a [subscriber-only](subscriber_only) sub-group will make all the nodes +within the group into subscriber-only nodes. + diff --git a/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx new file mode 100644 index 00000000000..e23ed4c2116 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/heterogeneous_clusters.mdx @@ -0,0 +1,32 @@ +--- +title: Joining a heterogeneous cluster +--- + + +PGD 4.0 node can join a EDB Postgres Distributed cluster running 3.7.x at a specific +minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes. +This procedure is useful when you want to upgrade not just the PGD +major version but also the underlying PostgreSQL major +version. You can achieve this by joining a 3.7 node running on +PostgreSQL 12 or 13 to a EDB Postgres Distributed cluster running 3.6.x on +PostgreSQL 11. The new node can also +run on the same PostgreSQL major release as all of the nodes in the +existing cluster. + +PGD ensures that the replication works correctly in all directions +even when some nodes are running 3.6 on one PostgreSQL major release and +other nodes are running 3.7 on another PostgreSQL major release. However, +we recommend that you quickly bring the cluster into a +homogenous state by parting the older nodes once enough new nodes +join the cluster. Don't run any DDLs that might +not be available on the older versions and vice versa. + +A node joining with a different major PostgreSQL release can't use +physical backup taken with [`bdr_init_physical`](/pgd/latest/reference/nodes#bdr_init_physical), and the node must join +using the logical join method. Using this method is necessary because the major +PostgreSQL releases aren't on-disk compatible with each other. + +When a 3.7 node joins the cluster using a 3.6 node as a +source, certain configurations, such as conflict resolution, +aren't copied from the source node. The node must be configured +after it joins the cluster. 
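+
+For example, if a nondefault conflict resolver was in use on the source node, you might need to set it again on the newly joined node. A hypothetical reconfiguration, with the node name, conflict type, and resolver shown only as placeholders, could look like this:
+
+```sql
+SELECT bdr.alter_node_set_conflict_resolver('node-new',
+    'update_origin_change', 'update_if_newer');
+```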
\ No newline at end of file diff --git a/product_docs/docs/pgd/5/node_management/index.mdx b/product_docs/docs/pgd/5/node_management/index.mdx new file mode 100644 index 00000000000..1c43188ff04 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/index.mdx @@ -0,0 +1,76 @@ +--- +title: Node management +navTitle: Node management +navigation: + - node_types + - groups_and_subgroups + - creating_and_joining + - witness_nodes + - logical_standby_nodes + - physical_standby_nodes + - subscriber_only + - viewing_topology + - removing_nodes_and_groups + - heterogeneous_clusters + - connections_dsns_and_ssl + - decoding_worker + - replication_slots + - node_recovery +redirects: + - /pgd/latest/nodes/ +--- + + +All data nodes in a PGD cluster are members of one or more groups. By default, +all data nodes are members of the top-level group, which spans all data nodes +in the PGD cluster. Nodes can also belong to subgroups that can be configured to +reflect logical or geographical organization of the PGD cluster. + +You can manage nodes and groups using the various options available +with nodes and subgroups. + +* [Node types](node_types) covers the kinds of node that can exist in PGD + clusters. + +* [Groups and subgroups](groups_and_subgroups) goes into more detail on how + groups and subgroups work in PGD + +* [Creating and joining groups](creating_and_joining) looks at how new PGD + groups can be created and how to join PGD nodes to them. + +* [Witness nodes](witness_nodes) looks at a special class of PGD node, dedicated + to establishing consensus in a group. + +* [Logical standby nodes](logical_standby_nodes) shows how to efficiently keep a + node on standby synchronized and ready to step in as a primary in the case of + failure. + +* [Physical standby nodes](physical_standby_nodes) covers an alternative + technique of providing a standby node. + +* [Subscriber-only nodes and groups](subscriber_only) looks at how subscriber-only nodes work and + how they are configured. + +* [Viewing topology](viewing_topology) details commands and SQL queries that can + show the structure of a PGD clusters nodes and groups. + +* [Removing nodes and groups](removing_nodes_and_groups) shows the process to + follow to safely remove a node from a group or a group from a cluster. + +* [Heterogeneous clusters](heterogeneous_clusters) looks at how your PGD cluster + can interoperate with PGD nodes from earlier editions of PGD. + +* [Connection DSNs](connections_dsns_and_ssl) introduces the DSNs or connection + strings needed to connect directly to a node in a PGD cluster. It also covers + how to use SSL/TLS certificates to provide authentication and encryption + between servers and between clients. + +* [Decoding worker](decoding_worker) covers a feature of PGD that allows groups + of nodes to reduce CPU overhead due to replication. + +* [Replication slots](replication_slots) examines how the Postgres replication + slots are consumed when PGD is operating. + + * [Node recovery](node_recovery) details the steps needed to bring a node back + into service after a failure or scheduled downtime and the impact it has on + the cluster as it returns. 
diff --git a/product_docs/docs/pgd/5/node_management/logical_standby_nodes.mdx b/product_docs/docs/pgd/5/node_management/logical_standby_nodes.mdx new file mode 100644 index 00000000000..cf999fa354b --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/logical_standby_nodes.mdx @@ -0,0 +1,106 @@ +--- +title: Logical standby nodes +--- + +PGD allows you to create a *logical standby node*, also known as an offload +node, a read-only node, receive-only node, or logical-read replicas. +A master node can have zero, one, or more logical standby nodes. + +!!! Note + Logical standby nodes can be used in environments where network traffic + between data centers is a concern; otherwise having more data nodes per + location is always preferred. + +With a physical standby node, the node never comes up fully, forcing it to +stay in continual recovery mode. +PGD allows something similar. [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` +option to make the node stay in half-way-joined as a logical standby node. +Logical standby nodes receive changes but don't send changes made locally +to other nodes. + +Later, if you want, use [`bdr.promote_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpromote_node) to move the logical standby into a +full, normal send/receive node. + +A logical standby is sent data by one source node, defined by the DSN in +[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group). Changes from all other nodes are received from this one +source node, minimizing bandwidth between multiple sites. + +There are multiple options for high availability: + +- If the source node dies, one physical standby can be promoted to a master. + In this case, the new master can continue to feed any or all logical standby nodes. + +- If the source node + dies, one logical standby can be promoted to a full node and replace the source + in a failover operation similar to single-master operation. If there + are multiple logical standby nodes, the other nodes can't follow the new master, + so the effectiveness of this technique is limited to one logical + standby. + +In case a new standby is created from an existing PGD node, +the needed replication slots for operation aren't synced to the +new standby until at least 16 MB of LSN has elapsed since the group +slot was last advanced. In extreme cases, this might require a full +16 MB before slots are synced or created on the streaming replica. If +a failover or switchover occurs during this interval, the +streaming standby can't be promoted to replace its PGD node, as the +group slot and other dependent slots don't exist yet. + +The slot sync-up process on the standby solves this by invoking a function +on the upstream. This function moves the group slot in the +entire EDB Postgres Distributed cluster by performing WAL switches and requesting all PGD +peer nodes to replay their progress updates. This behavior causes the +group slot to move ahead in a short time span. This reduces the time +required by the standby for the initial slot's sync-up, allowing for +faster failover to it, if required. + +On PostgreSQL, it's important to ensure that the slot's sync-up completes on +the standby before promoting it. You can run the following query on the +standby in the target database to monitor and ensure that the slots +synced up with the upstream. The promotion can go ahead when this query +returns `true`. 
+ +```sql +SELECT true FROM pg_catalog.pg_replication_slots WHERE + slot_type = 'logical' AND confirmed_flush_lsn IS NOT NULL; +``` + +You can also nudge the slot sync-up process in the entire PGD +cluster by manually performing WAL switches and by requesting all PGD +peer nodes to replay their progress updates. This activity causes +the group slot to move ahead in a short time and also hastens the +slot sync-up activity on the standby. You can run the following queries +on any PGD peer node in the target database for this: + +```sql +SELECT bdr.run_on_all_nodes('SELECT pg_catalog.pg_switch_wal()'); +SELECT bdr.run_on_all_nodes('SELECT bdr.request_replay_progress_update()'); +``` + +Use the monitoring query on the standby to check that these +queries do help in faster slot sync-up on that standby. + +Logical standby nodes can be protected using physical standby nodes, +if desired, so Master->LogicalStandby->PhysicalStandby. You can't +cascade from LogicalStandby to LogicalStandby. + +A logical standby does allow write transactions, so the restrictions +of a physical standby don't apply. You can use this to great benefit, since +it allows the logical standby to have additional indexes, longer retention +periods for data, intermediate work tables, LISTEN/NOTIFY, temp tables, +materialized views, and other differences. + +Any changes made locally to logical standbys that commit before the promotion +aren't sent to other nodes. All transactions that commit after promotion +are sent onwards. If you perform writes to a logical standby, +take care to quiesce the database before promotion. + +You might make DDL changes to logical standby nodes, but they aren't +replicated and they don't attempt to take global DDL locks. PGD functions +that act similarly to DDL also aren't replicated. See [DDL replication](../ddl/). +If you made incompatible DDL changes to a logical standby, +then the database is a *divergent node*. Promotion of a divergent +node currently results in replication failing. +As a result, plan to either ensure that a logical standby node +is kept free of divergent changes if you intend to use it as a standby, or +ensure that divergent nodes are never promoted. diff --git a/product_docs/docs/pgd/5/node_management/node_recovery.mdx b/product_docs/docs/pgd/5/node_management/node_recovery.mdx new file mode 100644 index 00000000000..382135aa612 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/node_recovery.mdx @@ -0,0 +1,58 @@ +--- +title: Node restart and down node recovery +navTitle: Node restart and recovery +--- + +PGD is designed to recover from node restart or node disconnection. +The disconnected node rejoins the group by reconnecting +to each peer node and then replicating any missing data from that node. + +When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) with +`bdr.node_slots.state = catchup` and begins replicating missing data. +Catching up continues for a period of time that depends on the +amount of missing data from each peer node and will likely increase +over time, depending on the server workload. + +If the amount of write activity on each node isn't uniform, the catchup period +from nodes with more data can take significantly longer than other nodes. +Eventually, the slot state changes to `bdr.node_slots.state = streaming`. + +Nodes that are offline for longer periods, such as hours or days, +can begin to cause resource issues for various reasons. 
Don't plan
+on extended outages without understanding the following issues.
+
+Each node retains change information (using one
+[replication slot](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html)
+for each peer node) so it can later replay changes to a temporarily unreachable node.
+If a peer node remains offline indefinitely, this accumulated change information
+eventually causes the node to run out of storage space for PostgreSQL
+transaction logs (*WAL* in `pg_wal`), and likely causes the database server
+to shut down with an error similar to this:
+
+```
+PANIC: could not write to file "pg_wal/xlogtemp.559": No space left on device
+```
+
+Or, it might report other out-of-disk related symptoms.
+
+In addition, slots for offline nodes also hold back the catalog xmin, preventing
+vacuuming of catalog tables.
+
+On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes
+also hold back freezing of data to prevent losing conflict-resolution data
+(see [Origin conflict detection](../consistency/conflicts)).
+
+Administrators must monitor for node outages (see [monitoring](../monitoring/))
+and make sure nodes have enough free disk space. If the workload is
+predictable, you might be able to calculate how much space is used over time,
+allowing a prediction of the maximum time a node can be down before critical
+issues arise.
+
+Don't manually remove replication slots created by PGD. If you do, the cluster
+becomes damaged and the node that was using the
+slot must be parted from the cluster, as described in [Replication slots created by PGD](replication_slots/).
+
+While a node is offline, the other nodes might not yet have received
+the same set of data from the offline node, so this might appear as a slight
+divergence across nodes. The parting process corrects this imbalance across nodes.
+(Later versions might do this earlier.)
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/node_management/node_types.mdx b/product_docs/docs/pgd/5/node_management/node_types.mdx
new file mode 100644
index 00000000000..e5616d129e9
--- /dev/null
+++ b/product_docs/docs/pgd/5/node_management/node_types.mdx
@@ -0,0 +1,26 @@
+---
+title: Node types
+---
+
+## Data nodes
+
+A data node in PGD is a node that runs a Postgres instance. It replicates data to all other data nodes. It also participates in the cluster-wide Raft decision making around locking and leadership. It's a member of one or more groups and, by default, belongs to the "top level" group, which spans all data nodes in the cluster.
+
+## Witness nodes
+
+A witness node behaves like a data node in that it participates in the cluster-wide Raft decision making around locking and leadership, but it doesn't replicate or store data. The purpose of a witness node is to be available to ensure that a majority can be achieved when the cluster seeks a consensus. The [Witness nodes](witness_nodes) section has more details.
+
+## Logical standby nodes
+
+Logical standby nodes are nodes that receive logical data changes from another node and replicate them locally. With some caveats, they can be used to replace the node they're replicating if that node becomes unavailable. See [Logical standby nodes](logical_standby_nodes) for more details.
+
+## Physical standby nodes
+
+Physical standby nodes are nodes that receive physical data changes from another node and write them locally. They can be used to replace the node they're receiving the changes from.
See [Physical standby nodes](physical_standby_nodes) for further details.
+
+## Subscriber-only nodes
+
+A subscriber-only node is a data node that, as the name suggests, only subscribes to changes in the cluster but doesn't replicate changes to other nodes. Subscriber-only nodes can be used as read-only nodes for applications. Subscriber-only nodes are created by taking a data node and adding it to a subscriber-only group. See [Subscriber-only nodes and groups](subscriber_only) for more details.
+
+
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/node_management/physical_standby_nodes.mdx b/product_docs/docs/pgd/5/node_management/physical_standby_nodes.mdx
new file mode 100644
index 00000000000..148dfb147b6
--- /dev/null
+++ b/product_docs/docs/pgd/5/node_management/physical_standby_nodes.mdx
@@ -0,0 +1,107 @@
+---
+title: Physical standby nodes
+---
+
+PGD enables you to create traditional physical standby failover
+nodes. These are commonly intended to directly replace a PGD
+node in the cluster after a short promotion procedure. As with
+any standard Postgres cluster, a node can have any number of these
+physical replicas.
+
+There are, however, some minimal prerequisites for this to work properly
+due to the use of replication slots and other functional requirements in
+PGD:
+
+- The connection between the PGD primary and the standby uses streaming
+  replication through a physical replication slot.
+- The standby has:
+  - `recovery.conf` (for PostgreSQL <12; for PostgreSQL 12+, these settings are in `postgresql.conf`):
+    - `primary_conninfo` pointing to the primary
+    - `primary_slot_name` naming a physical replication slot on the primary to be used only by this standby
+  - `postgresql.conf`:
+    - `shared_preload_libraries = 'bdr'` (other plugins can be in the list as well, but don't include pglogical)
+    - `hot_standby = on`
+    - `hot_standby_feedback = on`
+- The primary has:
+  - `postgresql.conf`:
+    - [`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names) specifies the physical
+      replication slot used for the standby's `primary_slot_name`.
+
+While this is enough to produce a working physical standby of a PGD
+node, you need to address some additional concerns.
+
+Once established, the standby requires enough time and WAL traffic
+to trigger an initial copy of the primary's other PGD-related
+replication slots, including the PGD group slot. At minimum, slots on a
+standby are live and can survive a failover only if they report
+a nonzero `confirmed_flush_lsn` as reported by `pg_replication_slots`.
+
+As a consequence, check physical standby nodes in newly initialized PGD
+clusters with low amounts of write activity before
+assuming a failover will work normally. Failing to take this precaution
+can result in the standby having an incomplete subset of required
+replication slots needed to function as a PGD node, and thus an
+aborted failover.
+
+The protection mechanism that ensures physical standby nodes are up to date
+and can be promoted (as configured by [`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names)) affects the
+overall replication latency of the PGD group. This is because the group replication
+happens only when the physical standby nodes are up to date.
+
+For these reasons, we generally recommend using either logical standby nodes
+or a subscriber-only group instead of physical standby nodes. They both
+have better operational characteristics in comparison.
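+
+For reference, pulling the prerequisites above together, a minimal illustrative configuration might look like this. The hostname, user, and slot name are placeholders only:
+
+```ini
+# On the physical standby (postgresql.conf, or recovery.conf before PostgreSQL 12)
+primary_conninfo = 'host=bdr-a1 port=5432 user=postgres'
+primary_slot_name = 'bdr_standby_a1'
+shared_preload_libraries = 'bdr'
+hot_standby = on
+hot_standby_feedback = on
+
+# On the PGD node being replicated (postgresql.conf)
+bdr.standby_slot_names = 'bdr_standby_a1'
+```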
+ +You can manually ensure the group slot is advanced on all nodes +(as much as possible), which helps hasten the creation of PGD-related +replication slots on a physical standby using the following SQL syntax: + +```sql +SELECT bdr.move_group_slot_all_nodes(); +``` + +Upon failover, the standby must perform one of two actions to replace +the primary: + +- Assume control of the same IP address or hostname as the primary. +- Inform the EDB Postgres Distributed cluster of the change in address by executing the + [bdr.alter_node_interface](../reference/nodes-management-interfaces/#bdralter_node_interface) function on all other PGD nodes. + +Once this is done, the other PGD nodes reestablish communication +with the newly promoted standby -> primary node. Since replication +slots are synchronized only periodically, this new primary might reflect +a lower LSN than expected by the existing PGD nodes. If this is the +case, PGD fast forwards each lagging slot to the last location +used by each PGD node. + +Take special note of the [`bdr.alter_node_interface`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_interface)) parameter as +well. It's important to set it in a EDB Postgres Distributed cluster where there is a +primary -> physical standby relationship or when using subscriber-only groups. + +PGD maintains a group slot that always reflects the state of the +cluster node showing the most lag for any outbound replication. +With the addition of a physical +replica, PGD must be informed that there's a nonparticipating node +member that, regardless, affects the state of the group slot. + +Since the standby doesn't directly communicate with the other PGD +nodes, the `standby_slot_names` parameter informs PGD to consider named +slots as needed constraints on the group slot as well. When set, the +group slot is held if the standby shows lag, even if the group +slot is normally advanced. + +As with any physical replica, this type of standby can also be +configured as a synchronous replica. As a reminder, this requires: + +- On the standby: + - Specifying a unique `application_name` in `primary_conninfo` +- On the primary: + - Enabling `synchronous_commit` + - Including the standby `application_name` in `synchronous_standby_names` + +It's possible to mix physical standby and other PGD nodes in +`synchronous_standby_names`. CAMO and Eager All-Node Replication use +different synchronization mechanisms and don't work with synchronous +replication. Make sure `synchronous_standby_names` doesn't +include any PGD node if either CAMO or Eager All-Node Replication is used. +Instead use only non-PGD nodes, for example, a physical standby. diff --git a/product_docs/docs/pgd/5/node_management/removing_nodes_and_groups.mdx b/product_docs/docs/pgd/5/node_management/removing_nodes_and_groups.mdx new file mode 100644 index 00000000000..5a5eed70765 --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/removing_nodes_and_groups.mdx @@ -0,0 +1,37 @@ +--- +title: Removing nodes and groups +--- + +## Removing a node from a PGD group + +Since PGD is designed to recover from extended node outages, you +must explicitly tell the system if you're removing a node +permanently. If you permanently shut down a node and don't tell +the other nodes, then performance suffers and eventually +the whole system stops working. + +Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) +function. 
You must specify the node name (as passed during node creation) +to remove a node. You can call the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) function from any active +node in the PGD group, including the node that you're removing. + +Just like the join procedure, parting is done using Raft consensus and requires a +majority of nodes to be online to work. + +The parting process affects all nodes. The Raft leader manages a vote +between nodes to see which node has the most recent data from the parting node. +Then all remaining nodes make a secondary, temporary connection to the +most recent node to allow them to catch up any missing data. + +A parted node still is known to PGD but doesn't consume resources. A +node might be added again under the same name as a parted node. +In rare cases, you might want to clear all metadata of a parted +node by using the function [`bdr.drop_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrdrop_node). + +## Removing a whole PGD group + +PGD groups usually map to locations. When a location is no longer being deployed, +it's likely that the PGD group for the location also needs to be removed. + +The PGD group that's being removed must be empty. Before you can remove the +group, you must part all the nodes in the group. diff --git a/product_docs/docs/pgd/5/node_management/replication_slots.mdx b/product_docs/docs/pgd/5/node_management/replication_slots.mdx new file mode 100644 index 00000000000..8fbe149f7ff --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/replication_slots.mdx @@ -0,0 +1,86 @@ +--- +title: Replication slots created by PGD +--- + +On a PGD master node, the following replication slots are +created by PGD: + +- One *group slot*, named `bdr__` +- N-1 *node slots*, named `bdr___`, where N is the total number of PGD nodes in the cluster, + including direct logical standbys, if any + +!!! Warning + Don't drop those slots. PGD creates and manages them and drops them when or if necessary. + +On the other hand, you can create or drop replication slots required by software like Barman +or logical replication using the appropriate commands +for the software without any effect on PGD. +Don't start slot names used by other software with the +prefix `bdr_`. + +For example, in a cluster composed of the three nodes `alpha`, `beta`, and +`gamma`, where PGD is used to replicate the `mydb` database and the +PGD group is called `mygroup`: + +- Node `alpha` has three slots: + - One group slot named `bdr_mydb_mygroup` + - Two node slots named `bdr_mydb_mygroup_beta` and + `bdr_mydb_mygroup_gamma` +- Node `beta` has three slots: + - One group slot named `bdr_mydb_mygroup` + - Two node slots named `bdr_mydb_mygroup_alpha` and + `bdr_mydb_mygroup_gamma` +- Node `gamma` has three slots: + - One group slot named `bdr_mydb_mygroup` + - Two node slots named `bdr_mydb_mygroup_alpha` and + `bdr_mydb_mygroup_beta` + +## Group replication slot + +The group slot is an internal slot used by PGD primarily to track the +oldest safe position that any node in the PGD group (including all logical +standbys) has caught up to, for any outbound replication from this node. + +The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). + +The group slot can: + +- Join new nodes to the PGD group without having all existing nodes + up and running (although the majority of nodes should be up). 
This process doesn't
+  incur data loss in case the node that was down during the join starts
+  replicating again.
+- Part nodes from the cluster consistently, even if some nodes haven't
+  caught up fully with the parted node.
+- Hold back the freeze point to avoid missing some conflicts.
+- Keep the historical snapshot for timestamp-based snapshots.
+
+The group slot is usually inactive and is fast-forwarded only periodically
+in response to Raft progress messages from other nodes.
+
+!!! Warning
+    Don't drop the group slot. Although usually inactive, it's
+    still vital to the proper operation of the EDB Postgres Distributed cluster. If you drop it,
+    then some or all of the features can stop working or have
+    incorrect outcomes.
+
+## Hashing long identifiers
+
+The name of a replication slot, like any other PostgreSQL
+identifier, can't be longer than 63 bytes. PGD handles this by
+shortening the database name, the PGD group name, and the name of the
+node in case the resulting slot name is too long for that limit.
+Shortening an identifier is carried out by replacing the final section
+of the string with a hash of the string itself.
+
+For example, consider a cluster that replicates a database
+named `db20xxxxxxxxxxxxxxxx` (20 bytes long) using a PGD group named
+`group20xxxxxxxxxxxxx` (20 bytes long). The logical replication slot
+associated with the node `a30xxxxxxxxxxxxxxxxxxxxxxxxxxx` (30 bytes long)
+is called:
+
+```
+bdr_db20xxxx3597186_group20xbe9cbd0_a30xxxxxxxxxxxxx7f304a2
+```
+
+Here, `3597186`, `be9cbd0`, and `7f304a2` are respectively the hashes
+of `db20xxxxxxxxxxxxxxxx`, `group20xxxxxxxxxxxxx`, and
+`a30xxxxxxxxxxxxxxxxxxxxxxxxxxx`.
diff --git a/product_docs/docs/pgd/5/node_management/subscriber_only.mdx b/product_docs/docs/pgd/5/node_management/subscriber_only.mdx
new file mode 100644
index 00000000000..8c6c01fd7ff
--- /dev/null
+++ b/product_docs/docs/pgd/5/node_management/subscriber_only.mdx
@@ -0,0 +1,62 @@
+---
+title: Subscriber-only nodes and groups
+---
+
+## Subscriber-only nodes
+
+As the name suggests, a subscriber-only node subscribes only to replication
+changes from other nodes in the cluster. However, no other nodes receive
+replication changes from `subscriber-only` nodes. This is somewhat similar to
+logical standby nodes. But in contrast to logical standby nodes, `subscriber-only`
+nodes are fully joined to the cluster. They can receive replication changes from
+all other nodes in the cluster and hence aren't affected by the unavailability or
+parting of any one node in the cluster.
+
+A `subscriber-only` node is a fully joined PGD node, and hence it receives all
+replicated DDLs and acts on those. It also uses Raft to consistently report its
+status to all nodes in the cluster. The `subscriber-only` node doesn't have Raft
+voting rights and hence can't become a Raft leader or participate in the leader
+election. Also, while it receives replicated DDLs, it doesn't participate in DDL
+or DML lock acquisition. In other words, a currently down `subscriber-only` node
+doesn't stop a DML lock from being acquired.
+
+The `subscriber-only` node forms the building block for the PGD Tree topology. In
+this topology, a small number of fully active nodes are replicating changes in
+all directions. A large number of `subscriber-only` nodes receive only changes
+but never send any changes to any other node in the cluster. This topology
+avoids connection explosion due to a large number of nodes, yet provides an
+extremely large number of `leaf` nodes that you can use to consume the data.
+ +## Subscriber-only groups + +To make use of `subscriber-only` nodes, first create a PGD group of type +`subscriber-only`. Make it a subgroup of the group from which the member nodes +receive the replication changes. Once you create the subgroup, all nodes that +intend to become `subscriber-only` nodes must join the subgroup. You can create +more than one subgroup of `subscriber-only` type, and they can have different +parent groups. + +Once a node successfully joins the `subscriber-only` subgroup, it becomes a +`subscriber-only` node and starts receiving replication changes for the parent +group. Any changes made directly on the `subscriber-only` node aren't +replicated. + +See +[`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) +to know how to create a subgroup of a specific type and belonging to a specific +parent group. + +!!! Note + +Since a `subscriber-only` node doesn't replicate changes to any node in the +cluster, it can't act as a source for syncing replication changes when a node is +parted from the cluster. But if the `subscriber-only` node already received and +applied replication changes from the parted node that no other node in the +cluster currently has, then that causes inconsistency between the nodes. + +For now, you can solve this by setting +[`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names) +and +[`bdr.standby_slots_min_confirmed`](/pgd/latest/reference/pgd-settings#bdrstandby_slots_min_confirmed) +so that there's always a fully active PGD node that's ahead of the +`subscriber-only` nodes. !!! diff --git a/product_docs/docs/pgd/5/node_management/viewing_topology.mdx b/product_docs/docs/pgd/5/node_management/viewing_topology.mdx new file mode 100644 index 00000000000..e9732a872ff --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/viewing_topology.mdx @@ -0,0 +1,102 @@ +--- +title: Viewing PGD topology +--- + + +## Listing PGD groups + +### Using [pgd-cli](../cli) + +Use the [pgd-cli](../cli) `show-groups` command to list all groups in the PGD cluster: + +```shell +pgd show-groups +``` +```console + Group Group ID Type Parent Group Location Raft Routing Write Leader + ----- -------- ---- ------------ -------- ---- ------- ------------ + bdrgroup 1360502012 global true false + group_a 3618712053 data bdrgroup a true true bdr-a1 + group_b 402614658 data bdrgroup b true true bdr-b1 + group_c 2808307099 data bdrgroup c false false + group_so 2123208041 subscriber-only bdrgroup c false false +``` + +### Using SQL + +The following simple query lists all the PGD node groups of which the current +node is a member. It currently returns only one row from +[`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary). 
+ +```sql +SELECT node_group_name +FROM bdr.local_node_summary; +``` + +You can display the configuration of each node group using a more +complex query: + +```sql +SELECT g.node_group_name +, ns.pub_repsets +, ns.sub_repsets +, g.node_group_default_repset AS default_repset +, node_group_check_constraints AS check_constraints +FROM bdr.local_node_summary ns +JOIN bdr.node_group g USING (node_group_name); +``` + +## Listing nodes in a PGD group + +### Using [pgd-cli](../cli) + +Use the `show-nodes` command to list all nodes in the PGD cluster: + +```shell +pgd show-nodes +``` +```console + Node Node ID Group Type Current State Target State Status Seq ID + ---- ------- ----- ---- ------------- ------------ ------ ------ + bdr-a1 3136956818 group_a data ACTIVE ACTIVE Up 6 + bdr-a2 2133699692 group_a data ACTIVE ACTIVE Up 3 + logical-standby-a1 1140256918 group_a standby STANDBY STANDBY Up 9 + witness-a 3889635963 group_a witness ACTIVE ACTIVE Up 7 + bdr-b1 2380210996 group_b data ACTIVE ACTIVE Up 1 + bdr-b2 2244996162 group_b data ACTIVE ACTIVE Up 2 + logical-standby-b1 3541792022 group_b standby STANDBY STANDBY Up 10 + witness-b 661050297 group_b witness ACTIVE ACTIVE Up 5 + witness-c 1954444188 group_c witness ACTIVE ACTIVE Up 4 + subscriber-only-c1 2448841809 group_so subscriber-only ACTIVE ACTIVE Up 8 +``` + +Use `grep` with the group name to filter the list to a specific group: + +```shell +pgd show-nodes | grep group_b +``` +```console + bdr-b1 2380210996 group_b data ACTIVE ACTIVE Up 1 + bdr-b2 2244996162 group_b data ACTIVE ACTIVE Up 2 + logical-standby-b1 3541792022 group_b standby STANDBY STANDBY Up 10 + witness-b 661050297 group_b witness ACTIVE ACTIVE Up 5 +``` + +### Using SQL + +You can extract the list of all nodes in a given node group (such as `mygroup`) +from the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary)` view. For example: + +```sql +SELECT node_name AS name +, node_seq_id AS ord +, peer_state_name AS current_state +, peer_target_state_name AS target_state +, interface_connstr AS dsn +FROM bdr.node_summary +WHERE node_group_name = 'mygroup'; +``` + +The read-only state of a node, as shown in the +`current_state` or in the `target_state` query columns, is indicated +as `STANDBY`. diff --git a/product_docs/docs/pgd/5/node_management/witness_nodes.mdx b/product_docs/docs/pgd/5/node_management/witness_nodes.mdx new file mode 100644 index 00000000000..68083a9513d --- /dev/null +++ b/product_docs/docs/pgd/5/node_management/witness_nodes.mdx @@ -0,0 +1,35 @@ +--- +title: Witness nodes +--- + +A witness node is a lightweight node which functions as a data node but which +does not store or replicate data. It is used to allow a PGD cluster which uses +Raft consensus to have an odd number of voting nodes and therefore be able to +achieve a majority when making decisions. + +## Witness Nodes within PGD groups or regions + +One typical use of witness nodes is when a PGD group has two data nodes but +resources are not available for the recommended three data nodes. In this case, +a witness node can be added to the PGD group to provide a third voting node to +local Raft decision making. These decisions are primarily about who will be +electing a write leader for the proxies to use. With only two nodes, there could +be no consensus over which data node could be write leader. 
With two data nodes +and a witness, there would be two candidates (the data nodes) and three voters +(the data nodes and the witness) normally, and when a data node was down, there +would still be two voters able to select a write leader. + +## Witness Node outside regions + +At a higher level, witness nodes can be used when there are multiple PGD groups +mapped to different regions. For example with three data nodes per-region in two +regions, while running normally allows all six data nodes to participate in Raft +decisions and obtaining DDL and DML global locks. Even when a data +node is down, there are sufficient data nodes to obtain a consensus. But if a +network partition occurs and connectivity with the other region is lost, then +there are now only three nodes out of six available, which is not enough for a +consensus. To avoid this scenario, a witness node in a third region can be +deployed as part of the PGD cluster. This witness node will allow a consensus to +be achieved for most operational requirements of the PGD cluster while a region +is unavailable. + diff --git a/product_docs/docs/pgd/5/nodes.mdx b/product_docs/docs/pgd/5/nodes.mdx deleted file mode 100644 index eaff9f9dd1e..00000000000 --- a/product_docs/docs/pgd/5/nodes.mdx +++ /dev/null @@ -1,802 +0,0 @@ ---- -title: Node management -redirects: - - ../bdr/nodes ---- - -Each database that's member of a PGD group must be represented by its own -node. A node is a unique identifier of a database in a PGD group. - -At present, each node can be a member of just one node group. (This might be -extended in later releases.) Each node can subscribe to one or more -replication sets to give fine-grained control over replication. - -A PGD group might also contain zero or more subgroups, allowing you to create a variety -of different architectures. - -## Creating and joining a PGD group - -For PGD, every node must connect to every other node. To make -configuration easy, when a new node joins, it configures all -existing nodes to connect to it. For this reason, every node, including -the first PGD node created, must know the [PostgreSQL connection string](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING), -sometimes referred to as a data source name (DSN), that other nodes -can use to connect to it. Both formats of connection string are supported. -So you can use either key-value format, like `host=myhost port=5432 dbname=mydb`, -or URI format, like `postgresql://myhost:5432/mydb`. - -The SQL function [`bdr.create_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) creates the PGD group -from the local node. Doing so activates PGD on that node and allows other -nodes to join the PGD group, which consists of only one node at that point. -At the time of creation, you must specify the connection string for other -nodes to use to connect to this node. - -Once the node group is created, every further node can join the PGD -group using the [`bdr.join_node_group()`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function. - -Alternatively, use the command line utility [bdr_init_physical](/pgd/latest/reference/nodes/#bdr_init_physical) to -create a new node, using `pg_basebackup` (or a physical standby) of an existing -node. If using `pg_basebackup`, the bdr_init_physical utility can optionally -specify the base backup of only the target database. The earlier -behavior was to back up the entire database cluster. 
With this utility, the activity -completes faster and also uses less space because it excludes -unwanted databases. If you specify only the target database, then the excluded -databases get cleaned up and removed on the new node. - -When a new PGD node is joined to an existing PGD group or a node subscribes -to an upstream peer, before replication can begin the system must copy the -existing data from the peer nodes to the local node. This copy must be -carefully coordinated so that the local and remote data starts out -identical. It's not enough to use pg_dump yourself. The BDR -extension provides built-in facilities for making this initial copy. - -During the join process, the BDR extension synchronizes existing data -using the provided source node as the basis and creates all metadata -information needed for establishing itself in the mesh topology in the PGD -group. If the connection between the source and the new node disconnects during -this initial copy, restart the join process from the -beginning. - -The node that's joining the cluster must not contain any schema or data -that already exists on databases in the PGD group. We recommend that the -newly joining database be empty except for the BDR extension. However, -it's important that all required database users and roles are created. - -Optionally, you can skip the schema synchronization using the `synchronize_structure` -parameter of the [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function. In this case, the schema must -already exist on the newly joining node. - -We recommend that you select the source node that has the best connection (the -closest) as the source node for joining. Doing so lowers the time -needed for the join to finish. - -Coordinate the join procedure using the Raft consensus algorithm, which -requires most existing nodes to be online and reachable. - -The logical join procedure (which uses the [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) function) -performs data sync doing `COPY` operations and uses multiple writers -(parallel apply) if those are enabled. - -Node join can execute concurrently with other node joins for the majority of -the time taken to join. However, only one regular node at a time can be in -either of the states PROMOTE or PROMOTING, which are typically fairly short if -all other nodes are up and running. Otherwise the join is serialized at -this stage. The subscriber-only nodes are an exception to this rule, and they -can be concurrently in PROMOTE and PROMOTING states as well, so their join -process is fully concurrent. - -The join process uses only one node as the source, so it can be -executed when nodes are down if a majority of nodes are available. -This can cause a complexity when running logical join. -During logical join, the commit timestamp of rows copied from the source -node is set to the latest commit timestamp on the source node. -Committed changes on nodes that have a commit timestamp earlier than this -(because nodes are down or have significant lag) can conflict with changes -from other nodes. In this case, the newly joined node can be resolved -differently to other nodes, causing a divergence. As a result, we recommend -not running a node join when significant replication lag exists between nodes. -If this is necessary, run LiveCompare on the newly joined node to -correct any data divergence once all nodes are available and caught up. 
- -`pg_dump` can fail when there is concurrent DDL activity on the source node -because of cache-lookup failures. Since [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) uses `pg_dump` -internally, it might fail if there's concurrent DDL activity on the source node. -Retrying the join works in that case. - -### Joining a heterogeneous cluster - -PGD 4.0 node can join a EDB Postgres Distributed cluster running 3.7.x at a specific -minimum maintenance release (such as 3.7.6) or a mix of 3.7 and 4.0 nodes. -This procedure is useful when you want to upgrade not just the PGD -major version but also the underlying PostgreSQL major -version. You can achieve this by joining a 3.7 node running on -PostgreSQL 12 or 13 to a EDB Postgres Distributed cluster running 3.6.x on -PostgreSQL 11. The new node can also -run on the same PostgreSQL major release as all of the nodes in the -existing cluster. - -PGD ensures that the replication works correctly in all directions -even when some nodes are running 3.6 on one PostgreSQL major release and -other nodes are running 3.7 on another PostgreSQL major release. But -we recommend that you quickly bring the cluster into a -homogenous state by parting the older nodes once enough new nodes -join the cluster. Don't run any DDLs that might -not be available on the older versions and vice versa. - -A node joining with a different major PostgreSQL release can't use -physical backup taken with [`bdr_init_physical`](/pgd/latest/reference/nodes#bdr_init_physical), and the node must join -using the logical join method. This is necessary because the major -PostgreSQL releases aren't on-disk compatible with each other. - -When a 3.7 node joins the cluster using a 3.6 node as a -source, certain configurations, such as conflict resolution, -aren't copied from the source node. The node must be configured -after it joins the cluster. - -### Connection DSNs and SSL (TLS) - -The DSN of a node is simply a `libpq` connection string, since nodes connect -using `libpq`. As such, it can contain any permitted `libpq` connection -parameter, including those for SSL. The DSN must work as the -connection string from the client connecting to the node in which it's -specified. An example of such a set of parameters using a client certificate is: - -```ini -sslmode=verify-full sslcert=bdr_client.crt sslkey=bdr_client.key -sslrootcert=root.crt -``` - -With this setup, the files `bdr_client.crt`, `bdr_client.key`, and -`root.crt` must be present in the data directory on each node, with the -appropriate permissions. -For `verify-full` mode, the server's SSL certificate is checked to -ensure that it's directly or indirectly signed with the `root.crt` certificate -authority and that the host name or address used in the connection matches the -contents of the certificate. In the case of a name, this can match a Subject -Alternative Name or, if there are no such names in the certificate, the -Subject's Common Name (CN) field. -Postgres doesn't currently support subject alternative names for IP -addresses, so if the connection is made by address rather than name, it must -match the CN field. - -The CN of the client certificate must be the name of the user making the -PGD connection. -This is usually the user postgres. Each node requires matching -lines permitting the connection in the `pg_hba.conf` file. 
For example: - -```ini -hostssl all postgres 10.1.2.3/24 cert -hostssl replication postgres 10.1.2.3/24 cert -``` - -Another setup might be to use `SCRAM-SHA-256` passwords instead of client -certificates and not verify the server identity as long as -the certificate is properly signed. Here the DSN parameters might be: - -```ini -sslmode=verify-ca sslrootcert=root.crt -``` - -The corresponding `pg_hba.conf` lines are: - -```ini -hostssl all postgres 10.1.2.3/24 scram-sha-256 -hostssl replication postgres 10.1.2.3/24 scram-sha-256 -``` - -In such a scenario, the postgres user needs a `.pgpass` file -containing the correct password. - -## Witness nodes - -If the cluster has an even number of nodes, it might be useful to create -an extra node to help break ties in the event of a network split (or -network partition, as it is sometimes called). - -Rather than create an additional full-size node, you can create a micro node, -sometimes called a witness node. This is a normal PGD node that -is deliberately set up not to replicate any tables or data to it. - -## Logical standby nodes - -PGD allows you to create a *logical standby node*, also known as an offload -node, a read-only node, receive-only node, or logical-read replicas. -A master node can have zero, one, or more logical standby nodes. - -!!! Note - Logical standby nodes can be used in environments where network traffic - between data centers is a concern; otherwise having more data nodes per - location is always preferred. - -With a physical standby node, the node never comes up fully, forcing it to -stay in continual recovery mode. -PGD allows something similar. [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) has the `pause_in_standby` -option to make the node stay in half-way-joined as a logical standby node. -Logical standby nodes receive changes but don't send changes made locally -to other nodes. - -Later, if you want, use [`bdr.promote_node`](/pgd/latest/reference/nodes-management-interfaces#bdrpromote_node) to move the logical standby into a -full, normal send/receive node. - -A logical standby is sent data by one source node, defined by the DSN in -[`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group). Changes from all other nodes are received from this one -source node, minimizing bandwidth between multiple sites. - -There are multiple options for high availability: - -- If the source node dies, one physical standby can be promoted to a master. - In this case, the new master can continue to feed any or all logical standby nodes. - -- If the source node - dies, one logical standby can be promoted to a full node and replace the source - in a failover operation similar to single-master operation. If there - are multiple logical standby nodes, the other nodes can't follow the new master, - so the effectiveness of this technique is limited to one logical - standby. - -In case a new standby is created from an existing PGD node, -the needed replication slots for operation aren't synced to the -new standby until at least 16 MB of LSN has elapsed since the group -slot was last advanced. In extreme cases, this might require a full -16 MB before slots are synced or created on the streaming replica. If -a failover or switchover occurs during this interval, the -streaming standby can't be promoted to replace its PGD node, as the -group slot and other dependent slots don't exist yet. 
- -The slot sync-up process on the standby solves this by invoking a function -on the upstream. This function moves the group slot in the -entire EDB Postgres Distributed cluster by performing WAL switches and requesting all PGD -peer nodes to replay their progress updates. This causes the -group slot to move ahead in a short time span. This reduces the time -required by the standby for the initial slot's sync-up, allowing for -faster failover to it, if required. - -On PostgreSQL, it's important to ensure that the slot's sync-up completes on -the standby before promoting it. You can run the following query on the -standby in the target database to monitor and ensure that the slots -synced up with the upstream. The promotion can go ahead when this query -returns `true`. - -```sql -SELECT true FROM pg_catalog.pg_replication_slots WHERE - slot_type = 'logical' AND confirmed_flush_lsn IS NOT NULL; -``` - -You can also nudge the slot sync-up process in the entire PGD -cluster by manually performing WAL switches and by requesting all PGD -peer nodes to replay their progress updates. This activity causes -the group slot to move ahead in a short time and also hastens the -slot sync-up activity on the standby. You can run the following queries -on any PGD peer node in the target database for this: - -```sql -SELECT bdr.run_on_all_nodes('SELECT pg_catalog.pg_switch_wal()'); -SELECT bdr.run_on_all_nodes('SELECT bdr.request_replay_progress_update()'); -``` - -Use the monitoring query on the standby to check that these -queries do help in faster slot sync-up on that standby. - -Logical standby nodes can be protected using physical standby nodes, -if desired, so Master->LogicalStandby->PhysicalStandby. You can't -cascade from LogicalStandby to LogicalStandby. - -A logical standby does allow write transactions, so the restrictions -of a physical standby don't apply. You can use this to great benefit, since -it allows the logical standby to have additional indexes, longer retention -periods for data, intermediate work tables, LISTEN/NOTIFY, temp tables, -materialized views, and other differences. - -Any changes made locally to logical standbys that commit before the promotion -aren't sent to other nodes. All transactions that commit after promotion -are sent onwards. If you perform writes to a logical standby, -take care to quiesce the database before promotion. - -You might make DDL changes to logical standby nodes but they aren't -replicated and they don't attempt to take global DDL locks. PGD functions -that act similarly to DDL also aren't replicated. See [DDL replication](ddl). -If you made incompatible DDL changes to a logical standby, -then the database is a *divergent node*. Promotion of a divergent -node currently results in replication failing. -As a result, plan to either ensure that a logical standby node -is kept free of divergent changes if you intend to use it as a standby, or -ensure that divergent nodes are never promoted. - -## Physical standby nodes - -PGD also enables you to create traditional physical standby failover -nodes. These are commonly intended to directly replace a PGD -node in the cluster after a short promotion procedure. As with -any standard Postgres cluster, a node can have any number of these -physical replicas. 
- -There are, however, some minimal prerequisites for this to work properly -due to the use of replication slots and other functional requirements in -PGD: - -- The connection between PGD primary and standby uses streaming - replication through a physical replication slot. -- The standby has: - - `recovery.conf` (for PostgreSQL <12, for PostgreSQL 12+ these settings are in `postgres.conf`): - - `primary_conninfo` pointing to the primary - - `primary_slot_name` naming a physical replication slot on the primary to be used only by this standby - - `postgresql.conf`: - - `shared_preload_libraries = 'bdr'`, there can be other plugins in the list as well, but don't include pglogical - - `hot_standby = on` - - `hot_standby_feedback = on` -- The primary has: - - `postgresql.conf`: - - [`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names) specifies the physical - replication slot used for the standby's `primary_slot_name`. - -While this is enough to produce a working physical standby of a PGD -node, you need to address some additional concerns. - -Once established, the standby requires enough time and WAL traffic -to trigger an initial copy of the primary's other PGD-related -replication slots, including the PGD group slot. At minimum, slots on a -standby are live and can survive a failover only if they report -a nonzero `confirmed_flush_lsn` as reported by `pg_replication_slots`. - -As a consequence, check physical standby nodes in newly initialized PGD -clusters with low amounts of write activity before -assuming a failover will work normally. Failing to take this precaution -can result in the standby having an incomplete subset of required -replication slots needed to function as a PGD node, and thus an -aborted failover. - -The protection mechanism that ensures physical standby nodes are up to date -and can be promoted (as configured by [`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names)) affects the -overall replication latency of the PGD group. This is because the group replication -happens only when the physical standby nodes are up to date. - -For these reasons, we generally recommend to use either logical standby nodes -or a subscribe-only group instead of physical standby nodes. They both -have better operational characteristics in comparison. - -You can can manually ensure the group slot is advanced on all nodes -(as much as possible), which helps hasten the creation of PGD-related -replication slots on a physical standby using the following SQL syntax: - -```sql -SELECT bdr.move_group_slot_all_nodes(); -``` - -Upon failover, the standby must perform one of two actions to replace -the primary: - -- Assume control of the same IP address or hostname as the primary. -- Inform the EDB Postgres Distributed cluster of the change in address by executing the - [bdr.alter_node_interface](reference/nodes-management-interfaces/#bdralter_node_interface) function on all other PGD nodes. - -Once this is done, the other PGD nodes reestablish communication -with the newly promoted standby -> primary node. Since replication -slots are synchronized only periodically, this new primary might reflect -a lower LSN than expected by the existing PGD nodes. If this is the -case, PGD fast forwards each lagging slot to the last location -used by each PGD node. - -Take special note of the [`bdr.alter_node_interface`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_interface)) parameter as -well. 
It's important to set it in a EDB Postgres Distributed cluster where there is a -primary -> physical standby relationship or when using subscriber-only groups. - -PGD maintains a group slot that always reflects the state of the -cluster node showing the most lag for any outbound replication. -With the addition of a physical -replica, PGD must be informed that there is a nonparticipating node -member that, regardless, affects the state of the group slot. - -Since the standby doesn't directly communicate with the other PGD -nodes, the `standby_slot_names` parameter informs PGD to consider named -slots as needed constraints on the group slot as well. When set, the -group slot is held if the standby shows lag, even if the group -slot is normally advanced. - -As with any physical replica, this type of standby can also be -configured as a synchronous replica. As a reminder, this requires: - -- On the standby: - - Specifying a unique `application_name` in `primary_conninfo` -- On the primary: - - Enabling `synchronous_commit` - - Including the standby `application_name` in `synchronous_standby_names` - -It's possible to mix physical standby and other PGD nodes in -`synchronous_standby_names`. CAMO and Eager All-Node Replication use -different synchronization mechanisms and don't work with synchronous -replication. Make sure `synchronous_standby_names` doesn't -include any PGD node if either CAMO or Eager All-Node Replication is used. -Instead use only non-PGD nodes, for example, a physical standby. - -## Subgroups - -A group can also contain zero or more subgroups. Each subgroup can be -allocated to a specific purpose in the top-level parent group. The -node_group_type specifies the type when the subgroup is created. - -### Subscriber-only groups - -As the name suggests, this type of node -subscribes only to replication changes from other nodes in the cluster. However, -no other nodes receive replication changes from `subscriber-only` -nodes. This is somewhat similar to logical standby nodes. But in contrast -to logical standby, the `subscriber-only` nodes are fully joined to -the cluster. They can receive replication changes from all other nodes -in the cluster and hence aren't affected by unavailability or -parting of any one node in the cluster. - -A `subscriber-only` node is a fully joined -PGD node and hence it receives all replicated DDLs and acts on those. It -also uses Raft to consistently report its status to all nodes in the -cluster. The `subscriber-only` node doesn't have Raft voting rights and -hence can't become a Raft leader or participate in the leader -election. Also, while it receives replicated DDLs, it doesn't -participate in DDL or DML lock acquisition. In other words, a currently -down `subscriber-only` node doesn't stop a DML lock from being acquired. - -The `subscriber-only` node forms the building block for PGD Tree -topology. In this topology, a small number of fully active -nodes are replicating changes in all directions. A large -number of `subscriber-only` nodes receive only changes but never -send any changes to any other node in the cluster. This topology avoids -connection explosion due to a large number of nodes, yet provides -an extremely large number of `leaf` nodes that you can use to consume the -data. - -To make use of `subscriber-only` nodes, first -create a PGD group of type `subscriber-only`. Make it a subgroup of -the group from which the member nodes receive the replication -changes. 
Once you create the subgroup, all nodes that intend to become -`subscriber-only` nodes must join the subgroup. You can create more than one -subgroup of `subscriber-only` type, and they can have -different parent groups. - -Once a node successfully joins the `subscriber-only` subgroup, it -becomes a `subscriber-only` node and starts receiving replication changes -for the parent group. Any changes made directly on the `subscriber-only` -node aren't replicated. - -See [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) to know how to create a subgroup of a -specific type and belonging to a specific parent group. - -#### Notes - -Since a `subscriber-only` node doesn't replicate changes to any node in -the cluster, it can't act as a source for syncing replication changes -when a node is parted from the cluster. But if the `subscriber-only` -node already received and applied replication changes from the -parted node that no other node in the cluster currently has, then that -causes inconsistency between the nodes. - -For now, you can solve this by setting [`bdr.standby_slot_names`](/pgd/latest/reference/pgd-settings#bdrstandby_slot_names) -and [`bdr.standby_slots_min_confirmed`](/pgd/latest/reference/pgd-settings#bdrstandby_slots_min_confirmed) so that there's always a fully active PGD node that's ahead of the `subscriber-only` -nodes. - -## Decoding worker - -PGD4 provides an option to enable a decoding worker process that performs -decoding once, no matter how many nodes are sent data. This introduces a -new process, the WAL decoder, on each PGD node. One WAL sender process still -exists for each connection, but these processes now just perform the task of -sending and receiving data. Taken together, these changes reduce the CPU -overhead of larger PGD groups and also allow higher replication throughput -since the WAL sender process now spends more time on communication. - -`enable_wal_decoder` is an option for each PGD group, which is currently -disabled by default. You can use [`bdr.alter_node_group_config()`](reference/nodes-management-interfaces/#bdralter_node_group_config) to enable or -disable the decoding worker for a PGD group. - -When the decoding worker is enabled, PGD stores logical change record (LCR) -files to allow buffering of changes between decoding and when all -subscribing nodes received data. LCR files are stored under the -`pg_logical` directory in each local node's data directory. The number and -size of the LCR files varies as replication lag increases, so this also -needs monitoring. The LCRs that aren't required by any of the PGD nodes are cleaned -periodically. The interval between two consecutive cleanups is controlled by -[`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval), which defaults to 3 minutes. The cleanup is -disabled when [`bdr.lcr_cleanup_interval`](/pgd/latest/reference/pgd-settings#bdrlcr_cleanup_interval) is zero. - -When disabled, logical decoding is performed by the WAL sender process for each -node subscribing to each node. In this case, no LCR files are written. - -Even though the decoding worker is enabled for a PGD group, following -GUCs control the production and use of LCR per node. By default -these are `false`. For production and use of LCRs, enable the -decoding worker for the PGD group and set these GUCs to `true` on each of the nodes in the PGD group. 
- -- [`bdr.enable_wal_decoder`](/pgd/latest/reference/pgd-settings#bdrenable_wal_decoder) — When turned `false`, all WAL - senders using LCRs restart to use WAL directly. When `true` - along with the PGD group config, a decoding worker process is - started to produce LCR and WAL Senders use LCR. -- [`bdr.receive_lcr`](/pgd/latest/reference/pgd-settings#bdrreceive_lcr) — When `true` on the subscribing node, it requests WAL - sender on the publisher node to use LCRs if available. - -### Notes - -As of now, a decoding worker decodes changes corresponding to the node where it's -running. A logical standby is sent changes from all the nodes in the PGD group -through a single source. Hence a WAL sender serving a logical standby can't -use LCRs right now. - -A subscriber-only node receives changes from respective nodes directly. Hence -a WAL sender serving a subscriber-only node can use LCRs. - -Even though LCRs are produced, the corresponding WALs are still retained similar -to the case when a decoding worker isn't enabled. In the future, it might be possible -to remove WAL corresponding the LCRs, if they aren't otherwise required. - -For reference, the first 24 characters of an LCR file name are similar to those -in a WAL file name. The first 8 characters of the name are all '0' right now. -In the future, they are expected to represent the TimeLineId similar to the first 8 -characters of a WAL segment file name. The following sequence of 16 characters -of the name is similar to the WAL segment number, which is used to track LCR -changes against the WAL stream. - -However, logical changes are -reordered according to the commit order of the transactions they belong to. -Hence their placement in the LCR segments doesn't match the placement of -corresponding WAL in the WAL segments. - -The set of last 16 characters represents the -subsegment number in an LCR segment. Each LCR file corresponds to a -subsegment. LCR files are binary and variable sized. The maximum size of an -LCR file can be controlled by `bdr.max_lcr_segment_file_size`, which -defaults to 1 GB. - -## Node restart and down node recovery - -PGD is designed to recover from node restart or node disconnection. -The disconnected node rejoins the group by reconnecting -to each peer node and then replicating any missing data from that node. - -When a node starts up, each connection begins showing up in [`bdr.node_slots`](/pgd/latest/reference/catalogs-visible#bdrnode_slots) with -`bdr.node_slots.state = catchup` and begins replicating missing data. -Catching up continues for a period of time that depends on the -amount of missing data from each peer node and will likely increase -over time, depending on the server workload. - -If the amount of write activity on each node isn't uniform, the catchup period -from nodes with more data can take significantly longer than other nodes. -Eventually, the slot state changes to `bdr.node_slots.state = streaming`. - -Nodes that are offline for longer periods, such as hours or days, -can begin to cause resource issues for various reasons. Don't plan -on extended outages without understanding the following issues. - -Each node retains change information (using one -[replication slot](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html) -for each peer node) so it can later replay changes to a temporarily unreachable node. 
-If a peer node remains offline indefinitely, this accumulated change information -eventually causes the node to run out of storage space for PostgreSQL -transaction logs (*WAL* in `pg_wal`), and likely causes the database server -to shut down with an error similar to this: - -``` -PANIC: could not write to file "pg_wal/xlogtemp.559": No space left on device -``` - -Or, it might report other out-of-disk related symptoms. - -In addition, slots for offline nodes also hold back the catalog xmin, preventing -vacuuming of catalog tables. - -On EDB Postgres Extended Server and EDB Postgres Advanced Server, offline nodes -also hold back freezing of data to prevent losing conflict-resolution data -(see [Origin conflict detection](consistency/conflicts)). - -Administrators must monitor for node outages (see [monitoring](monitoring/)) -and make sure nodes have enough free disk space. If the workload is -predictable, you might be able to calculate how much space is used over time, -allowing a prediction of the maximum time a node can be down before critical -issues arise. - -Don't manually remove replication slots created by PGD. If you do, the cluster -becomes damaged and the node that was using the -slot must be parted from the cluster, as described in [Replication slots created by PGD](#replication-slots-created-by-pgd). - -While a node is offline, the other nodes might not yet have received -the same set of data from the offline node, so this might appear as a slight -divergence across nodes. The parting process corrects this imbalance across nodes. -(Later versions might do this earlier.) - -### Replication slots created by PGD - -On a PGD master node, the following replication slots are -created by PGD: - -- One *group slot*, named `bdr__` -- N-1 *node slots*, named `bdr___`, where N is the total number of PGD nodes in the cluster, - including direct logical standbys, if any - -!!! Warning - Don't drop those slots. PGD creates and manages them and drops them when or if necessary. - -On the other hand, you can create or drop replication slots required by software like Barman -or logical replication using the appropriate commands -for the software without any effect on PGD. -Don't start slot names used by other software with the -prefix `bdr_`. - -For example, in a cluster composed of the three nodes `alpha`, `beta`, and -`gamma`, where PGD is used to replicate the `mydb` database and the -PGD group is called `mygroup`: - -- Node `alpha` has three slots: - - One group slot named `bdr_mydb_mygroup` - - Two node slots named `bdr_mydb_mygroup_beta` and - `bdr_mydb_mygroup_gamma` -- Node `beta` has three slots: - - One group slot named `bdr_mydb_mygroup` - - Two node slots named `bdr_mydb_mygroup_alpha` and - `bdr_mydb_mygroup_gamma` -- Node `gamma` has three slots: - - One group slot named `bdr_mydb_mygroup` - - Two node slots named `bdr_mydb_mygroup_alpha` and - `bdr_mydb_mygroup_beta` - -#### Group replication slot - -The group slot is an internal slot used by PGD primarily to track the -oldest safe position that any node in the PGD group (including all logical -standbys) has caught up to, for any outbound replication from this node. - -The group slot name is given by the function [`bdr.local_group_slot_name()`](/pgd/latest/reference/functions#bdrlocal_group_slot_name). 
- -The group slot can: - -- Join new nodes to the PGD group without having all existing nodes - up and running (although the majority of nodes should be up), without - incurring data loss in case the node that was down during join starts - replicating again. -- Part nodes from the cluster consistently, even if some nodes haven't - caught up fully with the parted node. -- Hold back the freeze point to avoid missing some conflicts. -- Keep the historical snapshot for timestamp-based snapshots. - -The group slot is usually inactive and is fast forwarded only periodically -in response to Raft progress messages from other nodes. - -!!! Warning - Don't drop the group slot. Although usually inactive, it's - still vital to the proper operation of the EDB Postgres Distributed cluster. If you drop it, - then some or all of the features can stop working or have - incorrect outcomes. - -### Hashing long identifiers - -The name of a replication slot—like any other PostgreSQL -identifier—can't be longer than 63 bytes. PGD handles this by -shortening the database name, the PGD group name, and the name of the -node in case the resulting slot name is too long for that limit. -Shortening an identifier is carried out by replacing the final section -of the string with a hash of the string itself. - -For example, consider a cluster that replicates a database -named `db20xxxxxxxxxxxxxxxx` (20 bytes long) using a PGD group named -`group20xxxxxxxxxxxxx` (20 bytes long). The logical replication slot -associated to node `a30xxxxxxxxxxxxxxxxxxxxxxxxxxx` (30 bytes long) -is called since `3597186`, `be9cbd0`, and `7f304a2` are respectively the hashes -of `db20xxxxxxxxxxxxxxxx`, `group20xxxxxxxxxxxxx`, and -`a30xxxxxxxxxxxxxxxxxxxxxxxxxx`. - -``` -bdr_db20xxxx3597186_group20xbe9cbd0_a30xxxxxxxxxxxxx7f304a2 -``` - -## Removing a node from a PGD group - -Since PGD is designed to recover from extended node outages, you -must explicitly tell the system if you're removing a node -permanently. If you permanently shut down a node and don't tell -the other nodes, then performance suffers and eventually -the whole system stops working. - -Node removal, also called *parting*, is done using the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) -function. You must specify the node name (as passed during node creation) -to remove a node. You can call the [`bdr.part_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrpart_node) function from any active -node in the PGD group, including the node that you're removing. - -Just like the join procedure, parting is done using Raft consensus and requires a -majority of nodes to be online to work. - -The parting process affects all nodes. The Raft leader manages a vote -between nodes to see which node has the most recent data from the parting node. -Then all remaining nodes make a secondary, temporary connection to the -most-recent node to allow them to catch up any missing data. - -A parted node still is known to PGD but doesn't consume resources. A -node might be added again under the same name as a parted node. -In rare cases, you might want to clear all metadata of a parted -node by using the function [`bdr.drop_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrdrop_node). - -## Removing a whole PGD group - -PGD groups usually map to locations. When a location is no longer being deployed, -it's likely that the PGD group for the location also needs to be removed. - -The PGD group that's being removed must be empty. 
Before you can remove the -group, you must part all the nodes in the group. - -### Uninstalling PGD - -Dropping the PGD extension removes all the PGD objects in a node, -including metadata tables. You can do this with the following -command: - -```sql -DROP EXTENSION bdr; -``` - -If the database depends on some PGD-specific objects, then you can't drop the PGD -extension. Examples include: - -- Tables using PGD-specific sequences such as `SnowflakeId` or `galloc` -- Column using CRDT data types -- Views that depend on some PGD catalog tables - -Remove those dependencies before dropping the BDR extension. -For example, drop the dependent objects, alter the column -type to a non-PGD equivalent, or change the sequence type back to -`local`. - -!!! Warning - You can drop the BDR extension only if the node was - successfully parted from its PGD node group or if it's the last - node in the group. Dropping PGD metadata breaks replication to and from the other nodes. - -!!! Warning - When dropping a local PGD node or the BDR extension in the local - database, any preexisting session might still try to execute a PGD-specific workflow - and therefore fail. You can solve the problem - by disconnecting the session and then reconnecting the client or - by restarting the instance. - -There's also a [`bdr.drop_node()`](/pgd/latest/reference/nodes-management-interfaces#bdrdrop_node) function. Use this function only in -emergencies, such as if there's a problem with parting. - -## Listing PGD topology - -### Listing PGD groups - -The following simple query lists all the PGD node groups of which the current -node is a member. It currently returns only one row from -[`bdr.local_node_summary`](/pgd/latest/reference/catalogs-visible#bdrlocal_node_summary). - -```sql -SELECT node_group_name -FROM bdr.local_node_summary; -``` - -You can display the configuration of each node group using a more -complex query: - -```sql -SELECT g.node_group_name -, ns.pub_repsets -, ns.sub_repsets -, g.node_group_default_repset AS default_repset -, node_group_check_constraints AS check_constraints -FROM bdr.local_node_summary ns -JOIN bdr.node_group g USING (node_group_name); -``` - -### Listing nodes in a PGD group - -You can extract the list of all nodes in a given node group (such as `mygroup`) -from the [`bdr.node_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_summary)` view as shown in the following -example: - -```sql -SELECT node_name AS name -, node_seq_id AS ord -, peer_state_name AS current_state -, peer_target_state_name AS target_state -, interface_connstr AS dsn -FROM bdr.node_summary -WHERE node_group_name = 'mygroup'; -``` - -The read-only state of a node, as shown in the -`current_state` or in the `target_state` query columns, is indicated -as `STANDBY`. diff --git a/product_docs/docs/pgd/5/postgres-configuration.mdx b/product_docs/docs/pgd/5/postgres-configuration.mdx index 301ae669df6..b275b9c96e2 100644 --- a/product_docs/docs/pgd/5/postgres-configuration.mdx +++ b/product_docs/docs/pgd/5/postgres-configuration.mdx @@ -53,7 +53,7 @@ N slots from the formula \* writers. This is because the `max_replication_slots` also sets the maximum number of replication origins, and some of the functionality of parallel apply uses extra origin per writer. -When the [decoding worker](nodes#decoding-worker) is enabled, this +When the [decoding worker](node_management/decoding_worker/) is enabled, this process requires one extra replication slot per PGD group. 
Changing the `max_worker_processes`, `max_wal_senders`, and `max_replication_slots` parameters requires restarting the local node. diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx index 38905db703f..82d23c4cb08 100644 --- a/product_docs/docs/pgd/5/reference/functions.mdx +++ b/product_docs/docs/pgd/5/reference/functions.mdx @@ -888,7 +888,7 @@ as explained in [Monitoring replication slots](../monitoring/sql/#monitoring-rep ### `bdr.wal_sender_stats` -If the [decoding worker](../nodes#decoding-worker) is enabled, this +If the [decoding worker](../node_management/decoding_worker/) is enabled, this function shows information about the decoder slot and current LCR (logical change record) segment file being read by each WAL sender. @@ -911,7 +911,7 @@ bdr.wal_sender_stats() ### `bdr.get_decoding_worker_stat` -If the [decoding worker](../nodes#decoding-worker) is enabled, this function +If the [decoding worker](../node_management/decoding_worker/) is enabled, this function shows information about the state of the decoding worker associated with the current database. This also provides more granular information about decoding worker progress than is available via `pg_replication_slots`. diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx index c5ab3814f4f..403a515212c 100644 --- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx @@ -77,7 +77,7 @@ including information about remote nodes. Or it can be the remote node, in which case only metadata for that specific node is removed. !!! Note - PGD4 can have a maximum of 1024 node records (both ACTIVE and PARTED) + PGD can have a maximum of 1024 node records (both ACTIVE and PARTED) at one time because each node has a unique sequence number assigned to it, for use by snowflakeid and timeshard sequences. PARTED nodes aren't automatically cleaned up. If this @@ -110,10 +110,10 @@ bdr.create_node_group(node_group_name text, `read coordinator`, and `write coordinator`. `subscriber-only` type is used to create a group of nodes that receive changes only from the fully joined nodes in the cluster, but they never send replication - changes to other nodes. See [Subscriber-only groups](../nodes#subscriber-only-groups) for more details. + changes to other nodes. See [Subscriber-only groups](../node_management/subscriber_only/) for more details. `Datanode` implies that the group represents a shard, whereas the other values imply that the group represents respective coordinators. - Except `subscriber-only`, the other values are reserved for future use. + Except for `subscriber-only`, the other values are reserved for future use. `NULL` implies a normal general-purpose node group is created. ### Notes diff --git a/product_docs/docs/pgd/5/reference/pgd-settings.mdx b/product_docs/docs/pgd/5/reference/pgd-settings.mdx index 4fce1e40821..d663c4121f4 100644 --- a/product_docs/docs/pgd/5/reference/pgd-settings.mdx +++ b/product_docs/docs/pgd/5/reference/pgd-settings.mdx @@ -483,11 +483,7 @@ Track lock timing when tracking statistics for relations. ### `bdr.enable_wal_decoder` -Enables logical change record (LCR) sending on a single node with a [decoding -worker](../nodes#decoding-worker). By default, this setting is false. When set -to true, a decoding worker process starts, and WAL senders send the LCRs it -produces. 
If set back to false, any WAL senders using LCR are restarted and use -the WAL directly. +Enables logical change record (LCR) sending on a single node with a [decoding worker](../node_management/decoding_worker/). By default, this setting is false. When set to true, a decoding worker process starts, and WAL senders send the LCRs it produces. If set back to false, any WAL senders using LCR are restarted and use the WAL directly. !!! Note You also need to enable this setting on all nodes in the PGD group and @@ -509,11 +505,7 @@ set the `enable_wal_decoder` option to true on the group. ### `bdr.lcr_cleanup_interval` -Logical change record (LCR) file cleanup interval. When the [decoding -worker](../nodes#decoding-worker) is enabled, the decoding worker stores LCR -files as a buffer. These files are periodically cleaned, and this setting -controls the interval between any two consecutive cleanups. The default is 3 -minutes. Setting it to zero disables cleanup. +Logical change record (LCR) file cleanup interval. When the [decoding worker](../node_management/decoding_worker/) is enabled, the decoding worker stores LCR files as a buffer. These files are periodically cleaned, and this setting controls the interval between any two consecutive cleanups. The default is 3 minutes. Setting it to zero disables cleanup. ## Connectivity settings