diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 6d8dbbe63c8..83f12f9744f 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -4,17 +4,17 @@ title: Notifications With BigAnimal, you can opt to get specific types of notifications and receive both in-app and email notifications. -Different types of events are sent as notifications. These notifications are set at different levels and users with different roles can configure this notifications. This table provides the list of events sent as notifications grouped by different levels at which they can be set: +Different types of events are sent as notifications. These notifications are set at different levels, and users with different roles can configure these notifications. This table provides the list of events sent as notifications grouped by the levels at which they can be set. | Level | Event | Role | Subscription type | |--------------|--------------------------------------------------------------------------------------------------|----------------------------------|--------------------- | | Organization | Payment method added | Organization owner/admin | Digital self-service | | Organization | Personal access key is expiring | Account owner | All | | Organization | Machine user access key is expiring | Organization owner | All | -| Project | Upcoming maintenance upgrade on a cluster (24hr) | Project owner/editor | All | +| Project | Upcoming maintenance upgrade on a cluster in 24 hours | Project owner/editor | All | | Project | Successful maintenance upgrade on a cluster | Project owner/editor | All | | Project | Failed maintenance upgrade on a cluster | Project owner/editor | All | -| Project | Paused cluster will automatically reactivated in 24 hours | Project owner/editor | All | +| Project | Paused cluster will automatically reactivate in 24 hours | Project owner/editor | All | | Project | Paused cluster was automatically reactivated | Project owner/editor | All | | Project | You must set up the encryption key permission for your CMK-enabled cluster | Project owner/editor | All | | Project | Key error with CMK-enabled cluster | Project owner and project editor | All | @@ -24,16 +24,16 @@ Different types of events are sent as notifications. These notifications are set | Project | Failed connection to third-party monitoring integration (and future non-monitoring integrations) | Project owner/editor | All | !!!note -All subscription type means Digital self-service, Direct purchase, and Azure Marketplace. For more information, see [subscription types](/biganimal/latest/pricing_and_billing/#payments-and-billing). +Under "Subscription type," "All" means digital self-service, direct purchase, and Azure Marketplace. For more information, see [subscription types](/biganimal/latest/pricing_and_billing/#payments-and-billing). !!! ## Configuring notifications -The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox, email or both. They can also configure email notifications for their teams within their organization. +The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. 
They can choose whether to receive notifications in the in-app inbox, by email, or both. They can also configure email notifications for their teams in their organization. -Project level notifications are configured within the project. +Project-level notifications are configured in the project. -Notification settings made by a user are applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with the user's login. +Notification settings made by a user apply only to that user. If an email notification is enabled, the email is sent to the email address associated with the user's login. ## Viewing notifications @@ -42,19 +42,18 @@ Users in the following roles can view the notifications: - Project owners/editors can view the project-level notifications. - Account owners can view their own account-level notifications. -Each notification indicates the level and/or project it belongs to for the user having multiple roles within BigAnimal. +For users who have multiple roles in BigAnimal, each notification indicates the level and project it belongs to. -Select the bell icon on the top of your BigAnimal portal to view the in-app notifications. On the bell icon, you can read the notification, mark it as unread and also archive the notification. +Select the bell at the top of your BigAnimal portal to view the in-app notifications. By selecting the bell, you can read the notification, mark it as unread, and archive it. -Check the inbox of your configured email addresses, to view the email notifications. +To view the email notifications, check the inbox of your configured email addresses. ## Manage notifications To manage the notifications: 1. Log in to the BigAnimal portal. -1. From the menu under your name in the top right panel, select **My Account**. -1. Select the **Notifications tab**. Notifications are grouped by organizations and projects available to you. +1. From the menu under your name in the top-right panel, select **My Account**. +1. Select the **Notifications** tab. Notifications are grouped by organizations and projects available to you. 1. Select any specific organization/project to manage the notifications. - - Enable/disable the notification for a particular event using the toggle button. - - Select the **Email** and **Inbox** next to an event to enable/disable the email and in-app notifications for the event. - + - Enable/disable the notification for a particular event using the toggle. + - Select **Email** and **Inbox** next to an event to enable/disable the email and in-app notifications for the event. diff --git a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx index d11bb0331c2..7ca273f8ee2 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx @@ -48,9 +48,11 @@ Before creating a project: To create a new project: -1. Select **Projects > New Project**. -1. Enter a unique name for the project. -1. Select **Create Project**. +1. Select **Projects > Create New Project**. +1. In the **Project Name** field, enter a unique name. +1. Optionally, under **Tags**, select **+**. +1. To assign an existing tag, in the search bar under **Tags**, enter a tag name. To add a new tag, instead select **+ Add Tag**. +1. Select **Create New Project**. 1. Select **Project > See All Projects**. 1. Select your new project. 1. 
Set up the cloud provider for the project. See [Connecting your cloud](/biganimal/latest/getting_started/02_connecting_to_your_cloud/). diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 491eb1ec308..259863984bc 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -67,6 +67,10 @@ The following options aren't available when creating your cluster: 1. In the **Password** field, enter a password for your cluster. This is the password for the user edb_admin. +1. Under **Tags**, select **+**. + +1. To assign an existing tag, in the search bar under **Tags**, enter a tag name. To add a new tag, instead select **+ Add Tag**. + 1. In the **Database Type** section: 1. In the **Postgres Type** field, select the type of Postgres you want to use: diff --git a/product_docs/docs/biganimal/release/known_issues/index.mdx b/product_docs/docs/biganimal/release/known_issues/index.mdx index b97c9b2da00..cd8de30faf9 100644 --- a/product_docs/docs/biganimal/release/known_issues/index.mdx +++ b/product_docs/docs/biganimal/release/known_issues/index.mdx @@ -5,4 +5,4 @@ navTitle: Known issues These known issues and/or limitations are in the current release of BigAnimal and the Postgres deployments it supports: -* [Known issues with distributed high availability](known_issues_dha) \ No newline at end of file +* [Known issues with distributed high availability](known_issues_pgd) \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index 255f6d9a62a..d472725761b 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -9,6 +9,8 @@ redirects: These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. +For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/limitations/) in the PGD documentation. + ## Management/administration ### Deleting a PGD data group may not fully reconcile @@ -39,21 +41,15 @@ This cannot be changed, either at initialization or after the cluster is created ## Replication -### A PGD replication slot may fail to transition cleanly from disconnect to catch up -As part of fault injection testing with PGD on BigAnimal, you may decide to delete VMs. -Your cluster will recover if you do so, as expected. -However, if you're testing in a bring-your-own-account (BYOA) deployment, in some cases, as the cluster is recovering, a replication slot may remain disconnected. -This will persist for a few hours until the replication slot recovers automatically. - ### Replication speed is slow during a large data migration During a large data migration, when migrating to a PGD cluster, you may experience a replication rate of 20 MBps. ### PGD leadership change on healthy cluster PGD clusters that are in a healthy state may experience a change in PGD node leadership, potentially resulting in failover. -No intervention is needed as a new leader will be appointed. 
+Client applications need to reconnect when a leadership change occurs. ### Extensions which require alternate roles are not supported -Where an extension requires a role other than the default role (`streaming_replica`) used for replication, it will fail when attempting to replicate. +Where an extension requires a role other than the default role (`bdr_application`) used for replication, it will fail when attempting to replicate. This is because PGD runs replication writer operations as a `SECURITY_RESTRICTED_OPERATION` to mitigate the risk of privilege escalation. Attempts to install such extensions may cause the cluster to fail to operate. diff --git a/product_docs/docs/biganimal/release/using_cluster/tagging/create_and_manage_tags.mdx b/product_docs/docs/biganimal/release/using_cluster/tagging/create_and_manage_tags.mdx new file mode 100644 index 00000000000..4b0a89e2189 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/tagging/create_and_manage_tags.mdx @@ -0,0 +1,89 @@ +--- +title: Creating and managing tags +--- + +BigAnimal supports tagging of the following resources: + +- Project +- Cluster + +## Create and assign a tag + +You can create a tag using either of these methods: + +- [Create a tag and assign it to a resource](#create-a-tag-and-assign-it-to-a-resource) +- [Create a tag while creating a resource](#create-a-tag-while-creating-a-resource) + +### Create a tag and assign it to a resource + +To create a tag: + +1. Log in to the BigAnimal portal. + +1. From the menu under your name in the top right of the panel, select **Tags**. + +1. From the Create Tag page, select **Create Tag**. + +1. Enter the tag name. + +1. Select the color for the tag. + +1. View the tag in **Preview**. + +1. Select **Save**. + +The new tag is now available to assign to resources. + +To assign a tag to an existing cluster: + +1. Go to the cluster's home page. + +1. In the clusters list, select the edit icon next to the cluster. + +1. On the Edit Cluster page, go to the **Cluster Settings** tab. + +1. Under **Tags**, select **+**. + +1. In the search bar, enter the name of the tag and select the tag. + +1. To assign the tag, select **Save**. + +To assign a tag to an existing project: + +1. Go to the project's home page. + +1. In the projects list, select the edit icon next to the project. + +1. On the Edit Project page, under **Tags**, select **+**. + +1. In the search bar, enter the tag name and select the tag. + +1. To assign the tag, select **Save**. + +### Create a tag while creating a resource + +Create and assign a tag while [creating a project](../../administering_cluster/projects.mdx/#creating-a-project) or [creating a cluster](../../getting_started/creating_a_cluster/index.mdx/#cluster-settings-tab). + +## Edit a tag + +1. Log in to the BigAnimal portal. + +1. From the menu under your name in the top right of the panel, select **Tags**. + +1. On the Tags page, select the edit button next to the tag name. + +1. Edit the tag name and color. + +1. Select **Save**. + +## Delete a tag + +1. Log in to the BigAnimal portal. + +1. From the menu under your name in the top right of the panel, select **Tags**. + +1. On the Tags page, select the delete icon next to the tag name. + +   You're prompted to type **delete tag** in the field. + +1. To delete the tag, enter the text as instructed, and select **Yes, Delete tag**. 
diff --git a/product_docs/docs/biganimal/release/using_cluster/tagging/index.mdx b/product_docs/docs/biganimal/release/using_cluster/tagging/index.mdx new file mode 100644 index 00000000000..0a1eb95967d --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/tagging/index.mdx @@ -0,0 +1,18 @@ +--- +title: Tagging BigAnimal resources +--- + +BigAnimal provides a shared tags system that allows you to assign and manage tags for resources across different resource types. + +You can assign tags to the following resource types: + +- Project +- Cluster + +Key features of the shared tags system are: + +- **Shared tags** — You can [create shared tags](./create_and_manage_tags.mdx) that are accessible and applicable to all the supported resource types. Shared tags provide a standard way to categorize and organize resources. Tags are scoped to the organization and shared among all users in the organization. Access to tags is controlled by the existing permission system. + +- **Tag assignment** — You can assign one or more shared tags to individual resource items. This capability enables you to categorize resources effectively, according to your needs. + +- **Tag permissions** — Tag management includes permission settings to control who can create, edit, or delete shared tags. This ensures security and control over the tagging system. \ No newline at end of file diff --git a/product_docs/docs/pgd/5/admin-tpa/index.mdx b/product_docs/docs/pgd/5/admin-tpa/index.mdx index 6c651feec79..a520306ce5a 100644 --- a/product_docs/docs/pgd/5/admin-tpa/index.mdx +++ b/product_docs/docs/pgd/5/admin-tpa/index.mdx @@ -1,5 +1,5 @@ --- -title: Automated Installation and Administration with TPA +title: Automated installation and administration with TPA navTitle: With TPA --- @@ -22,3 +22,5 @@ This section of the manual covers how to use TPA to deploy and administer EDB Po * Deploying the configuration with TPA The installing section provides an example cluster which will be used in future examples. + +You can also [perform a rolling major version upgrade](upgrading_major_rolling.mdx) with PGD administered by TPA. diff --git a/product_docs/docs/pgd/5/admin-tpa/upgrading_major_rolling.mdx b/product_docs/docs/pgd/5/admin-tpa/upgrading_major_rolling.mdx new file mode 100644 index 00000000000..3ab6a7deb23 --- /dev/null +++ b/product_docs/docs/pgd/5/admin-tpa/upgrading_major_rolling.mdx @@ -0,0 +1,602 @@ +--- +title: Performing a Postgres major version rolling upgrade on a PGD cluster built with TPA +navTitle: Upgrading Postgres major versions +deepToC: true +--- + +## Upgrading Postgres major versions + +Upgrading a Postgres database's major version to access improved features, performance enhancements, and security updates is a common administration task. For an EDB Postgres Distributed (PGD) cluster deployed with Trusted Postgres Architect (TPA), the process is essentially the same, but it's performed as a rolling upgrade. + +The rolling upgrade process allows updating individual cluster nodes to a new major Postgres version while maintaining cluster availability and operational continuity. This approach minimizes downtime and ensures data integrity by allowing the rest of the cluster to remain operational as each node is upgraded sequentially. + +The following overview of the general instructions, together with the [worked example](#worked-example), helps you perform a smooth and controlled upgrade. 
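+
+At a glance, upgrading a single node follows the sequence sketched below. The paths, package names, and version numbers shown here are the ones used in the [worked example](#worked-example) later on this page, so treat the sketch as an orientation aid rather than a copy-and-paste script, and substitute the values for your own cluster.
+
+```bash
+# Condensed per-node outline. Values are taken from the worked example below;
+# adjust paths, package names, and versions for your own environment.
+sudo -u postgres pgd show-version       # confirm the current Postgres version
+sudo -u postgres pgd show-groups        # check whether this node is the write leader
+# If it is the write leader, switch over to another node in the subgroup first, for example:
+#   sudo -u postgres pgd switchover --group-name dc1_subgroup --node-name kaftan
+sudo systemctl stop postgres            # stop Postgres on this node only
+sudo apt install edb-bdr5-pg16 edb-bdr-utilities
+sudo mkdir /opt/postgres/datanew
+sudo chown -R postgres:postgres /opt/postgres/datanew
+sudo -u postgres /usr/lib/postgresql/16/bin/initdb -D /opt/postgres/datanew --data-checksums
+# Copy postgresql.conf, postgresql.auto.conf, pg_hba.conf, and conf.d/ into the new data directory,
+# swap the old and new data directories, then run bdr_pg_upgrade (first with --check, then for real).
+sudo sed -i -e 's/15/16/g' /etc/systemd/system/postgres.service
+sudo systemctl daemon-reload
+sudo systemctl start postgres
+sudo -u postgres pgd show-version       # confirm the node now runs the new major version
+```
+
+The sections that follow describe each of these steps, and the worked example shows them end to end on a three-node cluster.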
+ +### Prepare the upgrade + +To prepare for the upgrade, identify the subgroups and nodes you're trying to upgrade and note an initial upgrade order. + +To do this, connect to one of the nodes using SSH and run the `pgd show-nodes` command: + +```bash +sudo -u postgres pgd show-nodes +``` + +The `pgd show-nodes` command shows you all the nodes in your PGD cluster and the subgroup to which each node belongs. Next, find out which node is the write leader in each subgroup by running: + +```bash +sudo -u postgres pgd show-groups +``` + +This command outputs a list of the different groups/subgroups running in your cluster and the write leader of each group. To maintain operational continuity, you need to switch write leaders over to another node in their subgroup before you can upgrade them. To keep the number of planned switchovers to a minimum, when upgrading a subgroup of nodes, upgrade the write leaders last. + +Even though you verified which node is the current write leader for planning purposes, the write leader of a subgroup could change to another node at any moment for operational reasons before you upgrade that node. Therefore, you still need to verify that a node isn't the write leader just before upgrading that node. + +You now have enough information to determine your upgrade order, one subgroup at a time, aiming to upgrade the identified write leader node last in each subgroup. + +### Perform the upgrade on each node + +!!! Note +To help prevent data loss, ensure that your databases and configuration files are backed up before starting the upgrade process. +!!! + +Using the [preliminary order](#prepare-the-upgrade), perform the following steps on each node while connected via SSH: + +* **Confirm the current Postgres version** + * View versions from PGD by running `sudo -u postgres pgd show-version`. + * Ensure that the expected major version is running. + + +* **Verify that the target node isn't the write leader** + * Check whether the target node is the write leader for the group you're upgrading using `sudo -u postgres pgd show-groups`. + * If the target node is the current write leader for the group/subgroup you're upgrading, perform a [planned switchover](#perform-a-planned-switchover) to another node. + * `sudo -u postgres pgd switchover --group-name <group_name> --node-name <node_name>` + + +* **Stop Postgres on the target node** + * Stop the Postgres service on the current node by running `sudo systemctl stop postgres`. + * The target node is no longer actively participating as a node in the cluster. + + +* **Install PGD and utilities** + * Install PGD and its utilities compatible with the Postgres version you're upgrading to. + * `sudo apt install edb-bdr5-pg<new_postgres_version> edb-bdr-utilities` + + +* **Initialize the new Postgres instance** + * Create a directory that will house the database files for the new version of PostgreSQL: + * `sudo mkdir -p /opt/postgres/datanew` + * Ensure that the postgres user has ownership of the directory (using chown). + * Initialize a new PostgreSQL database cluster in the directory you just created. + * This step involves using the `initdb` command provided by the newly installed version of PostgreSQL. + * Replace `<path_to_bin_dir>` with the path to the bin directory of the newly installed PostgreSQL version: `sudo -u postgres <path_to_bin_dir>/initdb -D /opt/postgres/datanew`. + * You may need to run this command as the postgres user or another user with appropriate permissions. + * Make sure to include the `--data-checksums` flag to ensure the cluster uses data checksums. 
+ + +* **Migrate configuration to the new Postgres version** + * Locate the following configuration files in your current PostgreSQL data directory: + * `postgresql.conf` + * The main configuration file containing settings related to the database system. + * `postgresql.auto.conf` + * Contains settings set by PostgreSQL, such as those modified by the `ALTER SYSTEM` command. + * `pg_hba.conf` + * Manages client authentication, specifying which users can connect to which databases from which hosts. + * The entire `conf.d` directory (if present) + * Allows for organizing configuration settings into separate files for better manageability. + * Copy these files and the `conf.d` directory to the new data directory you created for the upgraded version of PostgreSQL. + + +* **Verify the Postgres service is inactive** + * Before proceeding, it's important to ensure that no PostgreSQL processes are active for either the old or the new data directory. This verification step prevents any data corruption or conflicts during the upgrade process. + * Use the `sudo systemctl status postgres` command to verify that Postgres was stopped. + * If it isn't stopped, run `sudo systemctl stop postgres` and verify again that it was stopped. + + +* **Swap PGDATA directories for version upgrade** + * Rename `/opt/postgres/data` to `/opt/postgres/dataold` and `/opt/postgres/datanew` to `/opt/postgres/data`. + * This step readies your system for the next crucial phase: running pg_upgrade to finalize the PostgreSQL version transition. + + +* **Verify upgrade feasibility** + * The `bdr_pg_upgrade` tool offers a `--check` option designed to perform a preliminary scan of your current setup, identifying any potential issues that could hinder the upgrade process. + * You need to run this check from an upgrade directory with ownership given to user `postgres`, such as `/home/upgrade/`, so that the upgrade log files created by `bdr_pg_upgrade` can be stored. + * To initiate the safety check, append the `--check` option to your `bdr_pg_upgrade` command. + * This operation simulates the upgrade process without making any changes, providing insights into any compatibility issues, deprecated features, or configuration adjustments required for a successful upgrade. + * Address any warnings or errors indicated by this check to ensure an uneventful transition to the new version. + + +* **Execute the Postgres major version upgrade** + * Execute the upgrade process by running the `bdr_pg_upgrade` command without the `--check` option. + * It's essential to monitor the command output for any errors or warnings that require attention. + * The time the upgrade process takes depends on the size of your database and the complexity of your setup. + + +* **Update the Postgres service configuration** + * Update the service configuration to reflect the new PostgreSQL version by updating the version number in the `postgres.service` file: + * `sudo sed -i -e 's/<old_version>/<new_version>/g' /etc/systemd/system/postgres.service` + * Refresh the system's service manager to apply these changes: + * `sudo systemctl daemon-reload` + + +* **Restart Postgres** + * Proceed to restart the PostgreSQL service. + * `sudo systemctl start postgres` + + +* **Validate the new Postgres version** + * Verify that your PostgreSQL instance is now upgraded by again running `sudo -u postgres pgd show-version`. + + +* **Clean up post-upgrade** + * Run `vacuumdb` with the `ANALYZE` option immediately after the upgrade but before introducing a heavy production load. 
* Running this command minimizes the immediate performance impact, preparing the database for more accurate testing. + * Remove the old version's data directory, `/opt/postgres/dataold`. + +### Reconcile the upgrade with TPA + +TPA needs to continue to manage the deployment effectively after all the nodes have been upgraded. Therefore, it's necessary to reconcile the upgraded nodes with TPA. + +Follow these steps to update the configuration and redeploy the PGD cluster through TPA. + +* **Update the `config.yml`** + * In the `config.yml` of the TPA-managed cluster, set the Postgres version to the new version. + * `cluster_vars: postgres_version: '<new_postgres_version>'` + + +* **Use `tpaexec` to redeploy the PGD cluster with the updated `config.yml`** + * `tpaexec deploy <cluster_directory>` + +The worked example that follows shows in detail how to upgrade the Postgres major version from 15 to 16 on a PGD 5 cluster deployed with TPA. + +## Worked example + +This worked example starts with a TPA-managed PGD cluster deployed using the [AWS Quickstart](https://www.enterprisedb.com/docs/pgd/latest/quickstart/quick_start_aws/). The cluster has three nodes: kaboom, kaolin, and kaftan, all running Postgres 15. + +This example starts with kaboom. + +!!! Note +Some steps of this process involve running commands as the Postgres owner. We refer to this user as postgres throughout, when appropriate. If you're running EDB Postgres Advanced Server, substitute the postgres user with enterprisedb in all relevant commands. +!!! + +### Confirm the current Postgres version + +SSH into kaboom and confirm the major version of Postgres is what you expect by running: + +```bash +sudo -u postgres pgd show-version +``` + +The output will be similar to this for your cluster: + +``` +Node BDR Version Postgres Version +---- ----------- ---------------- +kaboom 5.4.0 15.6 (Debian 15.6-2EDB.buster) +kaftan 5.4.0 15.6 (Debian 15.6-2EDB.buster) +kaolin 5.4.0 15.6 (Debian 15.6-2EDB.buster) + +``` + +Confirm that the Postgres version is the expected version. + +### Verify that the target node isn't the write leader + +The cluster must be available throughout the process (that is, a *rolling* upgrade). There must always be an available write leader to maintain continuous cluster availability. So, if the target node is the current write leader, you must [perform a planned switchover](#perform-a-planned-switchover) of the [write leader](../terminology/#write-leader) node before upgrading it so that a write leader is always available. + +While connected via SSH to kaboom, see which node is the current write leader of the group you're upgrading using the `pgd show-groups` command: + +```bash +sudo -u postgres pgd show-groups +``` + +In this case, you can see that kaboom is the current write leader of the sole subgroup `dc1_subgroup`: + +``` +Group Group ID Type Parent Group Location Raft Routing Write Leader +----- -------- ---- ------------ -------- ---- ------- ------------ +democluster 1935823863 global true false +dc1_subgroup 1302278103 data democluster dc1 true true kaboom +``` + +So you must perform a planned switchover of the write leader of `dc1_subgroup` to another node in the cluster. + +#### Perform a planned switchover + +Change the write leader to kaftan so kaboom's Postgres instance can be stopped: + +```bash +sudo -u postgres pgd switchover --group-name dc1_subgroup --node-name kaftan +``` + +After the switchover is successful, it's safe to stop Postgres on the target node. 
Of course, if kaftan is still the write leader when you come to upgrading it, you'll need to perform another planned switchover at that time. + +### Stop Postgres on the target node + +While connected via SSH to the target node (in this case, kaboom), stop Postgres on the node by running: + +```bash +sudo systemctl stop postgres +``` + +This command halts the server on kaboom. Your cluster continues running using the other two nodes. + +### Install PGD and utilities + +Next, install the new version of Postgres (PG16) and the upgrade tool: + +```bash +sudo apt install edb-bdr5-pg16 edb-bdr-utilities +``` + +### Initialize the new Postgres instance + +Make a new data directory for the upgraded Postgres, and give the postgres user ownership of the directory: + +```bash +sudo mkdir /opt/postgres/datanew +sudo chown -R postgres:postgres /opt/postgres/datanew +``` + +Then, initialize Postgres 16 in the new directory: + +```bash +sudo -u postgres /usr/lib/postgresql/16/bin/initdb \ + -D /opt/postgres/datanew \ + -E UTF8 \ + --lc-collate=en_US.UTF-8 \ + --lc-ctype=en_US.UTF-8 \ + --data-checksums +``` + +This command creates a PG16 data directory for configuration, `/opt/postgres/datanew`. + +### Migrate configuration to the new Postgres version + +The next step copies the configuration files from the old Postgres version (PG15) to the new Postgres version's (PG16). Configuration files reside in each version's data directory. + +Copy over the `postgresql.conf`, `postgresql.auto.conf`, and `pg_hba.conf` files and the whole `conf.d` directory: + +```bash +sudo -u postgres cp /opt/postgres/data/postgresql.conf /opt/postgres/datanew/ +sudo -u postgres cp /opt/postgres/data/postgresql.auto.conf /opt/postgres/datanew/ +sudo -u postgres cp /opt/postgres/data/pg_hba.conf /opt/postgres/datanew/ +sudo -u postgres cp -r /opt/postgres/data/conf.d/ /opt/postgres/datanew/ +``` + +### Verify the Postgres service is inactive + +Although you [previously stopped the Postgres service on the target node](#stop-postgres-on-the-target-node), kaboom, to verify it's stopped, run the `systemctl status postgres` command: + +```bash +sudo systemctl status postgres +``` + +The output of the `status` command shows that the Postgres service has stopped running: + +``` +● postgres.service - Postgres 15 (TPA) + Loaded: loaded (/etc/systemd/system/postgres.service; enabled; vendor preset: enabled) + Active: inactive (dead) since Wed 2024-03-20 15:32:18 UTC; 4min 9s ago + Main PID: 24396 (code=exited, status=0/SUCCESS) + +Mar 20 15:32:18 kaboom postgres[25032]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.89(20108)/[unknown]/bdrdb:25032]: [1] FA +Mar 20 15:32:18 kaboom postgres[25033]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.89(20124)/[unknown]/bdrdb:25033]: [1] FA +Mar 20 15:32:18 kaboom postgres[25034]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.88(43534)/[unknown]/bdrdb:25034]: [1] FA +Mar 20 15:32:18 kaboom postgres[25035]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.88(43538)/[unknown]/bdrdb:25035]: [1] FA +Mar 20 15:32:18 kaboom postgres[25036]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.87(37292)/[unknown]/bdrdb:25036]: [1] FA +Mar 20 15:32:18 kaboom postgres[25037]: [22-1] 2024-03-20 15:32:18 UTC [pgdproxy@10.33.125.87(37308)/[unknown]/bdrdb:25037]: [1] FA +Mar 20 15:32:18 kaboom postgres[24398]: [24-1] 2024-03-20 15:32:18 UTC [@//:24398]: [15] LOG: checkpoint complete: wrote 394 buffe +Mar 20 15:32:18 kaboom postgres[24396]: [22-1] 2024-03-20 15:32:18 UTC [@//:24396]: [23] LOG: 
database system is shut down +Mar 20 15:32:18 kaboom systemd[1]: postgres.service: Succeeded. +Mar 20 15:32:18 kaboom systemd[1]: Stopped Postgres 15 (TPA). +``` + +### Swap PGDATA directories for version upgrade + +Next, swap the PG15 and PG16 data directories: + +```bash +sudo mv /opt/postgres/data /opt/postgres/dataold +sudo mv /opt/postgres/datanew /opt/postgres/data +``` + +!!! Important +If something goes wrong at some point during the procedure, you may want to rollback/revert a node to the older major version. To do this, rename directories again so that the current data directory, `/opt/postgres/data`, becomes `/opt/postgres/datafailed` and the old data directory, `/opt/postgres/dataold`, becomes the current data directory: + +```bash +sudo mv /opt/postgres/data /opt/postgres/datafailed +sudo mv /opt/postgres/dataold /opt/postgres/data +``` + +This rolls back/reverts the node back to the previous major version of Postgres. +!!! + +### Verify upgrade feasibility + +The `bdr_pg_upgrade` tool has a `--check` option, which performs a dry run of some of the upgrade process. You can use this option to ensure the upgrade goes smoothly. + +However, first, you need a directory for the files created by `bdr_pg_upgrade`. For this example, create an `/upgrade` directory in the `/home` directory. Then give ownership of the directory to the user postgres. + +```bash +sudo mkdir /home/upgrade +sudo chown postgres:postgres /home/upgrade +``` + +Next, navigate to `/home/upgrade` and run: + +```bash +sudo -u postgres /usr/bin/bdr_pg_upgrade \ + --old-bindir /usr/lib/postgresql/15/bin/ \ + --new-bindir /usr/lib/postgresql/16/bin/ \ + --old-datadir /opt/postgres/dataold/ \ + --new-datadir /opt/postgres/data/ \ + --database bdrdb \ + --check +``` + +The following is the output: + +``` +Performing BDR Postgres Checks +------------------------------ +Collecting pre-upgrade new cluster control data ok +Checking new cluster state is shutdown ok +Checking BDR versions ok + +Passed all bdr_pg_upgrade checks, now calling pg_upgrade + +Performing Consistency Checks +----------------------------- +Checking cluster versions ok +Checking database user is the install user ok +Checking database connection settings ok +Checking for prepared transactions ok +Checking for system-defined composite types in user tables ok +Checking for reg* data types in user tables ok +Checking for contrib/isn with bigint-passing mismatch ok +Checking for presence of required libraries ok +Checking database user is the install user ok +Checking for prepared transactions ok +Checking for new cluster tablespace directories ok + +*Clusters are compatible +``` + +!!! Note +If you didn't initialize Postgres 16 with checksums using the `--data-checksums` option, but did initialize checksums with your Postgres 15 instance, an error tells you about the incompatibility: + +```bash +old cluster uses data checksums but the new one does not +``` +!!! + +### Execute the Postgres major version upgrade + +You're ready to run the upgrade. 
On the target node, run: + +```bash +sudo -u postgres /usr/bin/bdr_pg_upgrade \ + --old-bindir /usr/lib/postgresql/15/bin/ \ + --new-bindir /usr/lib/postgresql/16/bin/ \ + --old-datadir /opt/postgres/dataold/ \ + --new-datadir /opt/postgres/data/ \ + --database bdrdb +``` + +The following is the expected output: + +``` +Performing BDR Postgres Checks +------------------------------ +Collecting pre-upgrade new cluster control data ok +Checking new cluster state is shutdown ok +Checking BDR versions ok +Starting old cluster (if shutdown) ok +Connecting to old cluster ok +Checking if bdr schema exists ok +Turning DDL replication off ok +Terminating connections to database ok +Disabling connections to database ok +Waiting for all slots to be flushed ok +Disconnecting from old cluster ok +Stopping old cluster ok +Starting old cluster with BDR disabled ok +Connecting to old cluster ok +Collecting replication origins ok +Collecting replication slots ok +Disconnecting from old cluster ok +Stopping old cluster ok + +Passed all bdr_pg_upgrade checks, now calling pg_upgrade + +Performing Consistency Checks +----------------------------- +Checking cluster versions ok +Checking database user is the install user ok +Checking database connection settings ok +Checking for prepared transactions ok +Checking for system-defined composite types in user tables ok +Checking for reg* data types in user tables ok +Checking for contrib/isn with bigint-passing mismatch ok +Creating dump of global objects ok +Creating dump of database schemas ok +Checking for presence of required libraries ok +Checking database user is the install user ok +Checking for prepared transactions ok +Checking for new cluster tablespace directories ok + +If pg_upgrade fails after this point, you must re-initdb the +new cluster before continuing. + +Performing Upgrade +------------------ +Analyzing all rows in the new cluster ok +Freezing all rows in the new cluster ok +Deleting files from new pg_xact ok +Copying old pg_xact to new server ok +Setting oldest XID for new cluster ok +Setting next transaction ID and epoch for new cluster ok +Deleting files from new pg_multixact/offsets ok +Copying old pg_multixact/offsets to new server ok +Deleting files from new pg_multixact/members ok +Copying old pg_multixact/members to new server ok +Setting next multixact ID and offset for new cluster ok +Resetting WAL archives ok +Setting frozenxid and minmxid counters in new cluster ok +Restoring global objects in the new cluster ok +Restoring database schemas in the new cluster ok +Copying user relation files ok +Setting next OID for new cluster ok +Sync data directory to disk ok +Creating script to delete old cluster ok +Checking for extension updates notice + +Your installation contains extensions that should be updated +with the ALTER EXTENSION command. The file + update_extensions.sql +when executed by psql by the database superuser will update +these extensions. + + +Upgrade Complete +---------------- +Optimizer statistics are not transferred by pg_upgrade. 
+Once you start the new server, consider running: + /usr/pgsql-15/bin/vacuumdb --all --analyze-in-stages + +Running this script will delete the old cluster's data files: + ./delete_old_cluster.sh + +pg_upgrade complete, performing BDR post-upgrade steps +------------------------------------------------------ +Collecting old cluster control data ok +Collecting new cluster control data ok +Checking LSN of new cluster ok +Starting new cluster with BDR disabled ok +Connecting to new cluster ok +Creating replication origin (bdr_bdrdb_rb69_bdr2) ok +Advancing replication origin (bdr_bdrdb_rb69_bdr2, 0/1F4... ok +Creating replication origin (bdr_bdrdb_rb69_bdr1) ok +Advancing replication origin (bdr_bdrdb_rb69_bdr1, 0/1E8... ok +Creating replication slot (bdr_bdrdb_rb69_bdr1) ok +Creating replication slot (bdr_bdrdb_rb69) ok +Creating replication slot (bdr_bdrdb_rb69_bdr2) ok +Stopping new cluster +``` + +### Update the Postgres service configuration + +The Postgres service on the system is configured to start the old version of Postgres (PG15). You need to modify the `postgres.service` file to start the new version (PG16). + +You can do this using `sed` to replace the old version number `15` with `16` throughout the file. + +```bash +sudo sed -i -e 's/15/16/g' /etc/systemd/system/postgres.service +``` + +After you've changed the version number, you can tell the systemd daemon to reload the configuration. On the target node, run: + +```bash +sudo systemctl daemon-reload +``` + +### Restart Postgres + +Start the modified Postgres service by running: + +```bash +sudo systemctl start postgres +``` + +### Validate the new Postgres version + +Repeating the first step, check the version of Postgres to confirm that you upgraded kaboom correctly. While still on kaboom, run: + +```bash +sudo -u postgres pgd show-version +``` + +Use the output to confirm that kaboom is running the upgraded Postgres version: + + ``` +Node BDR Version Postgres Version +---- ----------- ---------------- +kaboom 5.4.0 16.2 (Debian 16.2-2EDB.buster) +kaftan 5.4.0 15.6 (Debian 15.6-2EDB.buster) +kaolin 5.4.0 15.6 (Debian 15.6-2EDB.buster) + +``` + +Here kaboom has been upgraded to major version 16. + +### Clean up post-upgrade + +As a best practice, run a vacuum over the database at this point. When the upgrade ran, you may have noticed the post-upgrade report included: + +``` +Once you start the new server, consider running: + /usr/lib/postgresql/16/bin/vacuumdb --all --analyze-in-stages +``` + +You can run the vacuum now. On the target node, run: + +```bash +sudo -u postgres /usr/lib/postgresql/16/bin/vacuumdb --all --analyze-in-stages +``` + +If you're sure you don't need to revert this node, you can also clean up the old data directory folder `dataold`: + +```bash +sudo rm -r /opt/postgres/dataold +``` + +Upgrading the target node is now complete. + +### Next steps + +After completing the upgrade on kaboom, run the same steps on kaolin and kaftan. + +If you followed along with this example and kaftan is the write leader, to ensure availability, you must [perform a planned switchover](#perform-a-planned-switchover) to another, already upgraded node before running the upgrade steps on kaftan. 
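+
+For example, if kaftan is still the write leader when you reach it, you might first move the write leader role to a node you've already upgraded, such as kaboom, reusing the switchover command and group name from earlier in this example (substitute the node and group names used in your own cluster):
+
+```bash
+sudo -u postgres pgd switchover --group-name dc1_subgroup --node-name kaboom
+```
+
+After the switchover completes, repeat the same upgrade steps on kaftan that you used for the other nodes.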
+ +#### Check Postgres versions across the cluster + +After completing the upgrade on all nodes, while connected to one of the nodes, you can once again check your versions: + +```bash +sudo -u postgres pgd show-version +``` + +The output will be similar to the following: + + ``` +Node BDR Version Postgres Version +---- ----------- ---------------- +kaboom 5.4.0 16.2 (Debian 16.2-2EDB.buster) +kaftan 5.4.0 16.2 (Debian 16.2-2EDB.buster) +kaolin 5.4.0 16.2 (Debian 16.2-2EDB.buster) + +``` + +This output shows that all the nodes are successfully upgraded to the new Postgres version 16. + +#### Reconcile with TPA + +After all the nodes are upgraded, you still need to [reconcile](https://www.enterprisedb.com/docs/tpa/latest/reference/reconciling-local-changes/) the upgraded version of Postgres with TPA so you can continue to use TPA to manage the cluster in the future. + +To do this, return to the command line where your TPA cluster directory resides. In this worked example, the TPA cluster directory is `/home/ubuntu/democluster` on the instance where you originally deployed the cluster using TPA. + +After navigating to your cluster directory, use a code editor to edit `config.yml` and change `cluster_vars:` from `postgres_version: '15'` to `postgres_version: '16'`. + +Unless they were already added to your `.bashrc` or `.bash_profile`, ensure the TPA tools are accessible in your command line session by adding TPA's binary directory to your PATH: + +```bash +export PATH=$PATH:/opt/EDB/TPA/bin +``` +Finally, redeploy the cluster: + +```bash +tpaexec deploy democluster +``` + +This command applies the configuration changes to the cluster managed by TPA. If the deployment is successful, the reconciliation of the new version of Postgres with TPA and the upgrade procedure as a whole is complete. diff --git a/product_docs/docs/pgd/5/upgrades/manual_overview.mdx b/product_docs/docs/pgd/5/upgrades/manual_overview.mdx deleted file mode 100644 index b00fed0c1e7..00000000000 --- a/product_docs/docs/pgd/5/upgrades/manual_overview.mdx +++ /dev/null @@ -1,229 +0,0 @@ ---- -title: "Upgrading PGD clusters manually" ---- - -Because EDB Postgres Distributed consists of multiple software components, -the upgrade strategy depends partially on the components that are being upgraded. - -In general, you can upgrade the cluster with almost zero downtime by -using an approach called rolling upgrade. Using this approach, nodes are upgraded one by one, and -the application connections are switched over to already upgraded nodes. - -You can also stop all nodes, perform the upgrade on all nodes, and -only then restart the entire cluster. This approach is the same as with a standard PostgreSQL setup. -This strategy of upgrading all nodes at the same time avoids running with -mixed versions of software and therefore is the simplest. However, it incurs -downtime and we don't recommend it unless you can't perform the rolling upgrade -for some reason. - -To upgrade an EDB Postgres Distributed cluster: - -1. Plan the upgrade. -2. Prepare for the upgrade. -3. Upgrade the server software. -4. Check and validate the upgrade. - -## Upgrade planning - -There are broadly two ways to upgrade each node: - -* Upgrade nodes in place to the newer software version. See [Rolling server - software upgrades](#rolling-server-software-upgrades). -* Replace nodes with ones that have the newer version installed. See [Rolling - upgrade using node join](#rolling-upgrade-using-node-join). - -You can use both of these approaches in a rolling manner. 
- -### Rolling upgrade considerations - -While the cluster is going through a rolling upgrade, mixed versions of software -are running in the cluster. For example, suppose nodeA has PGD 3.7.16, while -nodeB and nodeC has 4.1.0. In this state, the replication and group -management uses the protocol and features from the oldest version (3.7.16 -in this example), so any new features provided by the newer version -that require changes in the protocol are disabled. Once all nodes are -upgraded to the same version, the new features are enabled. - -Similarly, when a cluster with WAL-decoder-enabled nodes is going through a -rolling upgrade, WAL decoder on a higher version of PGD node produces LCRs -([logical change records](../node_management/decoding_worker/#enabling)) with a -higher pglogical version. WAL decoder on a lower version of PGD node produces -LCRs with lower pglogical version. As a result, WAL senders on a higher version -of PGD nodes are not expected to use LCRs due to a mismatch in protocol -versions. On a lower version of PGD nodes, WAL senders may continue to use LCRs. -Once all the PGD nodes are on the same PGD version, WAL senders use LCRs. - -A rolling upgrade starts with a cluster with all nodes at a prior release. It -then proceeds by upgrading one node at a time to the newer release, until all -nodes are at the newer release. There must be no more than two versions of the -software running at the same time. An upgrade must be completed, with all nodes -fully upgraded, before starting another upgrade. - -An upgrade process can take more time when -caution is required to reduce business risk. However, we don't recommend -running mixed versions of the software indefinitely. - -While you can use a rolling upgrade for upgrading a major version of the software, -we don't support mixing PostgreSQL, EDB Postgres Extended, and -EDB Postgres Advanced Server in one cluster. So you can't use this approach -to change the Postgres variant. - -!!! Warning - Downgrades of the EDB Postgres Distributed aren't supported. They require - that you manually rebuild the cluster. - -### Rolling server software upgrades - -A rolling upgrade is where the [server software -upgrade](#server-software-upgrade) is upgraded sequentially on each node in a -cluster without stopping the the cluster. Each node is temporarily stopped from -partiticpating in the cluster and its server software upgraded. Once updated, it -is returned to the cluster and it then catches up with the cluster's activity -during its absence. - -The actual procedure depends on whether the Postgres component is being -upgraded to a new major version. - -During the upgrade process, you can switch the application over to a node -that's currently not being upgraded to provide continuous availability of -the database for applications. - -### Rolling upgrade using node join - -The other method to upgrade the server software is to join a new node -to the cluster and later drop one of the existing nodes running -the older version of the software. - -For this approach, the procedure is always the same. However, because it -includes node join, a potentially large data transfer is required. - -Take care not to use features that are available only in -the newer Postgres version until all nodes are upgraded to the -newer and same release of Postgres. This is especially true for any -new DDL syntax that was added to a newer release of Postgres. - -!!! 
Note - `bdr_init_physical` makes a byte-by-byte copy of the source node - so you can't use it while upgrading from one major Postgres version - to another. In fact, currently `bdr_init_physical` requires that even the - PGD version of the source and the joining node be exactly the same. - You can't use it for rolling upgrades by way of joining a new node method. Instead, use a logical join. - -### Upgrading a CAMO-enabled cluster - -Upgrading a CAMO-enabled cluster requires upgrading CAMO groups one by one while -disabling the CAMO protection for the group being upgraded and reconfiguring it -using the new [commit scope](../durability/commit-scopes)-based settings. - -We recommended the following approach for upgrading two BDR nodes that -constitute a CAMO pair to PGD 5.0: - -- Ensure `bdr.enable_camo` remains `off` for transactions on any of - the two nodes, or redirect clients away from the two nodes. Removing - the CAMO pairing while attempting to use CAMO leads to errors - and prevents further transactions. -- Uncouple the pair by deconfiguring CAMO either by resetting - `bdr.camo_origin_for` and `bdr.camo_parter_of` (when upgrading from - BDR 3.7.x) or by using `bdr.remove_camo_pair` (on BDR 4.x). -- Upgrade the two nodes to PGD 5.0. -- Create a dedicated node group for the two nodes and move them into - that node group. -- Create a [commit scope](../durability/commit-scopes) for this node - group and thus the pair of nodes to use CAMO. -- Reactivate CAMO protection again by either setting a - `default_commit_scope` or by changing the clients to explicitly set - `bdr.commit_scope` instead of `bdr.enable_camo` for their sessions - or transactions. -- If necessary, allow clients to connect to the CAMO protected nodes - again. - -## Upgrade preparation - -Each major release of the software contains several changes that might affect -compatibility with previous releases. These might affect the Postgres -configuration, deployment scripts, as well as applications using PGD. We -recommend considering these changes and making any needed adjustments in advance of the upgrade. - -See individual changes mentioned in the [release notes](../rel_notes/) and any version-specific upgrade notes. - -## Server software upgrade - -Upgrading EDB Postgres Distributed on individual nodes happens in place. -You don't need to back up and restore when upgrading the BDR extension. - -### BDR extension upgrade - -The BDR extension upgrade process consists of a few steps. - -#### Stop Postgres - -During the upgrade of binary packages, it's usually best to stop the running -Postgres server first. Doing so ensures that mixed versions don't get loaded in case -of an unexpected restart during the upgrade. - -#### Upgrade packages - -The first step in the upgrade is to install the new version of the BDR packages. This installation -installs both the new binary and the extension SQL script. This step is specific to the operating system. - -#### Start Postgres - -Once packages are upgraded, you can start the Postgres instance. The BDR -extension is upgraded upon start when the new binaries -detect the older version of the extension. - -### Postgres upgrade - -The process of in-place upgrade of Postgres depends on whether you're -upgrading to new minor version of Postgres or to a new major version of Postgres. - -#### Minor version Postgres upgrade - -Upgrading to a new minor version of Postgres is similar to [upgrading -the BDR extension](#bdr-extension-upgrade). 
Stopping Postgres, upgrading packages, -and starting Postgres again is typically all that's needed. - -However, sometimes more steps, like reindexing, might be recommended for -specific minor version upgrades. Refer to the release notes of the -version of Postgres you're upgrading to. - -#### Major version Postgres upgrade - -Upgrading to a new major version of Postgres is more complicated than upgrading to a minor version. - -EDB Postgres Distributed provides a `bdr_pg_upgrade` command line utility, -which you can use to do [in-place Postgres major version upgrades](bdr_pg_upgrade). - -!!! Note - When upgrading to a new major version of any software, including Postgres, the - BDR extension, and others, it's always important to ensure the compatibility - of your application with the target version of the software you're upgrading. - -## Upgrade check and validation - -After you upgrade your PGD node, you can verify the current -version of the binary: - -```sql -SELECT bdr.bdr_version(); -``` - -Always check your [monitoring](../monitoring) after upgrading a node to confirm -that the upgraded node is working as expected. - -## Moving from HARP to PGD Proxy - -HARP can temporarily coexist with the new -[connection management](../routing) configuration. This means you can: - -- Upgrade a whole pre-5 cluster to a PGD 5 cluster. -- Set up the connection routing. -- Replace the HARP Proxy with PGD Proxy. -- Move application connections to PGD Proxy instances. -- Remove the HARP Manager from all servers. - -We strongly recommend doing this as soon as possible after upgrading nodes to -PGD 5. HARP isn't certified for long-term use with PGD 5. - -TPA provides some useful tools for this and will eventually provide a single-command -upgrade path between PGD 4 and PGD 5.