From 5249454d4423a2008a2418a88e127355a7c0b264 Mon Sep 17 00:00:00 2001 From: amrita-suresh <33535573+amrita-suresh@users.noreply.github.com> Date: Tue, 19 Mar 2024 16:26:09 -0400 Subject: [PATCH 01/13] Update index.mdx adding PGE to BA --- .../release/getting_started/creating_a_cluster/index.mdx | 2 ++ 1 file changed, 2 insertions(+) diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index c9582d9e44d..2d3c8c6ba19 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -73,6 +73,8 @@ The following options aren't available when creating your cluster: - **[EDB Postgres Advanced Server](/epas/latest/)** is EDB's Oracle-compatible database offering. View [a quick demonstration of Oracle compatibility on BigAnimal](../../using_cluster/06_demonstration_oracle_compatibility). EDB Postgres Advanced Server is compatible with all three cluster types. + - **[EDB Postgres Extended Server](/pge/latest/)** is EDB's is EDB's advanced logical replication, PostgreSQL-compatible database offering. + - **[PostgreSQL](/supported-open-source/postgresql/)** is the open-source, object-relational database management system. PostgreSQL is compatible with single-node and primary/standby high-availability cluster types. 1. In the **Postgres Version** list, select the version of Postgres that you want to use. See [Database version policy](../../overview/05_database_version_policy) for more information. From e10efd0980b0a5d4376489d988ee7d721980df41 Mon Sep 17 00:00:00 2001 From: Jagdish Kewat Date: Wed, 20 Mar 2024 18:24:47 +0530 Subject: [PATCH 02/13] Create a rel notes for harp 2.4.0. for PGD 3.7 --- .../3.7/harp/01_release_notes/harp2.4.0_rel_notes.mdx | 9 +++++++++ .../docs/pgd/3.7/harp/01_release_notes/index.mdx | 2 ++ 2 files changed, 11 insertions(+) create mode 100644 product_docs/docs/pgd/3.7/harp/01_release_notes/harp2.4.0_rel_notes.mdx diff --git a/product_docs/docs/pgd/3.7/harp/01_release_notes/harp2.4.0_rel_notes.mdx b/product_docs/docs/pgd/3.7/harp/01_release_notes/harp2.4.0_rel_notes.mdx new file mode 100644 index 00000000000..e241b5191c0 --- /dev/null +++ b/product_docs/docs/pgd/3.7/harp/01_release_notes/harp2.4.0_rel_notes.mdx @@ -0,0 +1,9 @@ +--- +title: "Version 2.4.0" +--- + +This is a minor release of HARP 2 that includes internal maintenance fixes. 
+ +| Type | Description | +| ---- |------------ | +| Change | Routine security library upgrades and refreshed build toolchain | diff --git a/product_docs/docs/pgd/3.7/harp/01_release_notes/index.mdx b/product_docs/docs/pgd/3.7/harp/01_release_notes/index.mdx index a0355088c93..8467591676c 100644 --- a/product_docs/docs/pgd/3.7/harp/01_release_notes/index.mdx +++ b/product_docs/docs/pgd/3.7/harp/01_release_notes/index.mdx @@ -1,6 +1,7 @@ --- title: Release Notes navigation: +- harp2.4.0_rel_notes - harp2.3.2_rel_notes - harp2.3.1_rel_notes - harp2.3.0_rel_notes @@ -26,6 +27,7 @@ The release notes in this section provide information on what was new in each re | Version | Release Date | | ----------------------- | ------------ | +| [2.4.0](harp2.4.0_rel_notes) | 05 Mar 2024 | | [2.3.2](harp2.3.2_rel_notes) | 17 Oct 2023 | | [2.3.1](harp2.3.1_rel_notes) | 27 Jul 2023 | | [2.3.0](harp2.3.0_rel_notes) | 12 Jul 2023 | From c7943daf356a25db78145f5c117640b9e036fddd Mon Sep 17 00:00:00 2001 From: piano35-edb <160748516+piano35-edb@users.noreply.github.com> Date: Wed, 20 Mar 2024 13:04:40 -0500 Subject: [PATCH 03/13] fault injection testing add doc and images --- .../fault_injection_testing.mdx | 102 ++++++++++++++++++ .../images/biganimal_faultinjectiontest_1.png | 3 + .../images/biganimal_faultinjectiontest_2.png | 3 + .../images/biganimal_faultinjectiontest_3.png | 3 + .../04_fault_injection_testing/index.mdx | 9 ++ 5 files changed, 120 insertions(+) create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx new file mode 100644 index 00000000000..f54bf54b0c2 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx @@ -0,0 +1,102 @@ +--- +title: "Fault injection testing" +--- + +You can test the fault tolerance of your cluster by identify and deleting a VM in order to inject a fault. Once a VM is deleted, you can monitor +the availability and recovery of the cluster. + +## Requirements + +Ensure you meet the following requirements before using fault injection testing: + ++ You have connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. ++ You should have permissions in your Azure subscription to view and delete VMs. ++ You have **pgd cli** installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. ++ You have created a **pgd-cli-config.yml** file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. + +## Fault injection testing steps + +Fault injection testing consists of the following steps: + +1. 
Verifying Cluster Health +2. Determining the write leader node for your cluster +3. Deleting a write leader node from your cluster +4. Monitoring cluster health +  +  +#### Verifying Cluster Health + +Use the following commands to monitor your cluster health, node info, raft, replication lag, and write leads. + +```sql +pgd check-health -f pgd-cli-config.yml +pgd verify cluster -f pgd-cli-config.yml +pgd show-nodes -f pgd-cli-config.yml +pgd show-raft -f pgd-cli-config.yml +pgd show-replslots –verbose -f pgd-cli-config.yml +pgd show-subscriptions -f pgd-cli-config.yml +pgd show-groups -f pgd-cli-config.yml +``` + +You can use **pgd help** for more information on these commands. + +To list the supported commands, enter: + +```sh +pgd help +``` + +For help with a specific command and its parameters, enter `pgd help `. For example: + +```sh +pgd help show-nodes +``` + +  +#### Determining the write leader node for your cluster + + +```sql +pgd show-groups -f pgd-cli-config.yml + +Group Group ID Type Write Leader +-------- ------------------ —--- ------------ +world 3239291720 global p-x67kjp3fsq-d-1 +p-x67kjp3fsq-a 2456382099 data world p-x67kjp3fsq-a-1 +p-x67kjp3fsq-c 4147262499 data world +p-x67kjp3fsq-d 3176957154 data world p-x67kjp3fsq-d-1 +``` + + + +## Deleting a write leader node from your cluster + +To delete a write lead node from the cluster: +1. Log into BigAnimal. +2. In a separate browser window, log into your Microsoft Azure subscription. +3. In the left navigation of BigAnimal portal, choose **Clusters**. +4. Choose the cluster to test fault injection with and copy the string value from the URL. The string value is located after the underscore. + +![Delete a write lead](images/biganimal_faultinjectiontest_1.png) +  + +5. In your Azure subscription, paste the string into the search and prefix it with **dp-** to search for the data plane. +From the results, choose the Kubernetes service from the Azure Region that your cluster is deployed in. + +![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png) +  + +6. Identify the VMSS + +!!! Note Avoiding stale data +Don't delete the VMSS here or Sub resources directly +!!! + +7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster. Choose one of the cluster nodes to delete. +![Delete a write lead 3](images/biganimal_faultinjectiontest_3.png) + +  +### Monitoring cluster health + +After deleting a cluster node, you can monitor the health of the cluster using the same **pgd** commands that you used to verify cluster health. 
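The requirements in the new page above assume a `pgd-cli-config.yml` in your home directory but don't show one. As a minimal sketch, reusing the `endpoints` format that the DHA bulk-migration patch later in this series documents, with placeholder host, database, port, and user values:

```shell
# Sketch of a pgd-cli-config.yml for the fault injection commands.
# ab-proxy, bdrdb, and edb_admin are placeholders; substitute your
# cluster's proxy endpoint, database name, and admin user.
cat > "$HOME/pgd-cli-config.yml" <<'EOF'
cluster:
  name: ab-cluster
  endpoints:
    - host=ab-proxy dbname=bdrdb port=5432 user=edb_admin
EOF

# Smoke test that the CLI can reach the cluster before injecting faults.
pgd check-health -f "$HOME/pgd-cli-config.yml"
```

The password is expected to come from a `.pgpass` entry, as shown for the bastion setup later in this series.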
+ diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png new file mode 100644 index 00000000000..e3486f1ceb3 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:632fbda88375371eef7ab5286a7d4897497144882fed581f66731ed6f73bc5c0 +size 92464 diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png new file mode 100644 index 00000000000..1d03d83c7d7 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fd934539bf4f4cd529a3ead374f818ada6e1af77b0a2408f374fbd417f518392 +size 174239 diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png new file mode 100644 index 00000000000..72eb2b0ebdf --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4818ed3898e263b94df36761c559d10f0cbf5fbb32dec8ce87168511fe115a28 +size 218906 diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx new file mode 100644 index 00000000000..3caad8562f5 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx @@ -0,0 +1,9 @@ +--- +title: "Testing availability and recovery for your cluster" + +navigation: + - Fault injection testing + +--- + +With BigAnimal, you can test the availability and recovery for your cluster. 
\ No newline at end of file From 73691e99d3157e2f618a1d9374b2debdcac40823 Mon Sep 17 00:00:00 2001 From: piano35-edb <160748516+piano35-edb@users.noreply.github.com> Date: Wed, 20 Mar 2024 13:37:59 -0500 Subject: [PATCH 04/13] fault injection testing - add images, edits --- .../fault_injection_testing.mdx | 17 ++++++++++------- .../images/biganimal_faultinjectiontest_1.png | 0 .../images/biganimal_faultinjectiontest_2.png | 0 .../images/biganimal_faultinjectiontest_3.png | 0 .../images/biganimal_faultinjectiontest_4.png | 3 +++ .../index.mdx | 0 6 files changed, 13 insertions(+), 7 deletions(-) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing => 04_fault_injection_testing_your_cluster}/fault_injection_testing.mdx (87%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing => 04_fault_injection_testing_your_cluster}/images/biganimal_faultinjectiontest_1.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing => 04_fault_injection_testing_your_cluster}/images/biganimal_faultinjectiontest_2.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing => 04_fault_injection_testing_your_cluster}/images/biganimal_faultinjectiontest_3.png (100%) create mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing => 04_fault_injection_testing_your_cluster}/index.mdx (100%) diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx similarity index 87% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx rename to product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx index f54bf54b0c2..4fc785800f7 100644 --- a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/fault_injection_testing.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx @@ -2,10 +2,10 @@ title: "Fault injection testing" --- -You can test the fault tolerance of your cluster by identify and deleting a VM in order to inject a fault. Once a VM is deleted, you can monitor +You can test the fault tolerance of your cluster by deleting a VM in order to inject a fault. Once a VM is deleted, you can monitor the availability and recovery of the cluster. -## Requirements +## Requirement Ensure you meet the following requirements before using fault injection testing: @@ -66,7 +66,7 @@ p-x67kjp3fsq-a 2456382099 data world p-x67kjp3fsq-a-1 p-x67kjp3fsq-c 4147262499 data world p-x67kjp3fsq-d 3176957154 data world p-x67kjp3fsq-d-1 ``` - +In this example, the write lead node is **p-x67kjp3fsq-a-1**. ## Deleting a write leader node from your cluster @@ -86,13 +86,16 @@ From the results, choose the Kubernetes service from the Azure Region that your ![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png)   -6. Identify the VMSS +6. Identify the Kubernetes service for your cluster. + +![Delete a write lead](images/biganimal_faultinjectiontest_4.png) +  -!!! 
Note Avoiding stale data -Don't delete the VMSS here or Sub resources directly +!!!Note +Don't delete the VMSS here or sub resources directly. !!! -7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster. Choose one of the cluster nodes to delete. +7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster to delete a chosen node. ![Delete a write lead 3](images/biganimal_faultinjectiontest_3.png)   diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_1.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_1.png rename to product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_1.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_2.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_2.png rename to product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_2.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_3.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/images/biganimal_faultinjectiontest_3.png rename to product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_3.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png new file mode 100644 index 00000000000..33c01ab324d --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:45cd675b2ea5da8201efed20bbf2b4b3b5b89875f990506f8f9fc69a0e360d7b +size 142820 diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing/index.mdx rename to product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx From d151924480aa289af9fcd52209228f4470ab445e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Wed, 20 Mar 2024 14:56:15 -0400 Subject: [PATCH 05/13] Edits to BigAnimal PR5407 --- .../release/migration/dha_bulk_migration.mdx | 141 +++++++++--------- .../biganimal/release/migration/index.mdx | 5 +- 2 files 
changed, 71 insertions(+), 75 deletions(-) diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index 08cde98a5bf..858b4139aef 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -1,21 +1,21 @@ --- title: Bulk loading data into DHA/PGD clusters navTITLE: Bulk loading into DHA/PGD clusters -description: This guide is specifically for environments where there is no direct access to the PGD Nodes, only PGD Proxy endpoints, such as BigAnimal’s Distributed High Availability deployments of PGD. +description: This content is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD. deepToC: true --- -## Bulk loading data into PGD clusters. +## Bulk loading data into PGD clusters -**This guide is specifically for environments where there is no direct access to the PGD Nodes, only PGD Proxy endpoints, such as BigAnimal’s Distributed High Availability deployments of PGD.** +**This content is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD.** -Bulk loading data into a PGD cluster can, without care, cause a lot of replication load on a cluster. With that in mind, this document lays out a process to mitigate that replication load. +Without using care, bulk loading data into a PGD cluster can cause a lot of replication load on a cluster. With that in mind, this content describes a process to mitigate that replication load. -## Provision or Prepare a PGD Cluster +## Provision or prepare a PGD cluster -You will need to have provisioned a PGD Cluster, either manually, via TPA or on BigAnimal. This will be the target database for +You must provision a PGD cluster, either manually, using TPA, or on BigAnimal. -It is recommended that when provisioning, or if needed, after provisioning, you set the following Postgres GUC variables: +We recommend that, when provisioning or, if needed, after provisioning, you set the following Postgres GUC variables. | GUC variable | Setting | @@ -23,19 +23,18 @@ It is recommended that when provisioning, or if needed, after provisioning, you | maintenance_work_mem | 1GB | | wal_sender_timeout | 60min | | wal_receiver_timeout | 60min | - | max_wal_size | Should be either
<br/> • a multiple (2 or 3) of your largest table <br/> or <br/> • more than one third of the capacity of your dedicated WAL disk (if configured) | + | max_wal_size | Set to either:<br/> • A multiple (2 or 3) of your largest table <br/> or <br/>
• More than one third of the capacity of your dedicated WAL disk (if configured) | +Make note of the target's proxy hostname and port. You also need a user and password for the target cluster. -You will need to make note of the target’s proxy hostname and port. Also you will need a user and password for the target cluster. Your tar - -In the following instructions, we give examples for a cluster named `ab-cluster`, with an `ab-group` sub-group and three nodes `ab-node-1`, `ab-node-2` and `ab-node3`. The cluster is accessed through a host named `ab-proxy`. On BigAnimal, a cluster is configured, by default, with an `edb_admin` user which can be used for the bulk upload. +The following instructions give examples for a cluster named `ab-cluster` with an `ab-group` subgroup and three nodes: `ab-node-1`, `ab-node-2`, and `ab-node3`. The cluster is accessed through a host named `ab-proxy`. On BigAnimal, a cluster is configured, by default, with an edb_admin user that can be used for the bulk upload. ## Identify your data source -You will need to source hostname, port, database name, user and password for your source database. +You need the source hostname, port, database name, user, and password for your source database. -Also, you will currently need a list of tables in the database that you wish to migrate to the target database. +Also, you currently need a list of tables in the database that you want to migrate to the target database. ## Prepare a bastion server @@ -45,11 +44,11 @@ Create a virtual machine with your preferred operating system in the cloud to or * Use your EDB account. * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. * Set environment variables. - * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token: + * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token. * Configure the repositories. * Run the automated installer to install the repositories. * Install the required software. - * You will need to install and configure: + * Install and configure: * psql * PGD CLI * Migration Toolkit @@ -57,7 +56,7 @@ Create a virtual machine with your preferred operating system in the cloud to or ### Configure repositories -The required software is available from the EDB repositories. You will need to install the EDB repositories on your bastion server. +The required software is available from the EDB repositories. Install the EDB repositories on your bastion server. * Red Hat ``` @@ -73,11 +72,11 @@ curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterpris ### Install the required software -Once the repositories have been configured, you can install the required software. +Once the repositories are configured, you can install the required software. #### psql and pg_dump/pg_restore -The psql command is the interactive terminal for working with PostgreSQL. It is a client application and can be installed on any operating system. Packaged with PSQL are pg_dump and pg_restore, command line utilities for dumping and restoring PostgreSQL databases. +The psql command is the interactive terminal for working with PostgreSQL. It's a client application and can be installed on any operating system. Packaged with psql are pg_dump and pg_restore, command-line utilities for dumping and restoring PostgreSQL databases. * Ubuntu ``` @@ -104,7 +103,7 @@ chmod 0600 $HOME/.pgpass #### PGD CLI -PGD CLI is a command line interface for managing and monitoring PGD clusters. 
It is a Go application and can be installed on any operating system. +PGD CLI is a command-line interface for managing and monitoring PGD clusters. It's a Go application and can be installed on any operating system. * Ubuntu ``` @@ -115,7 +114,7 @@ sudo apt-get install edb-pgd5-cli sudo dnf install edb-pgd5-cli ``` -Create a configuration file for the pgd cli. +Create a configuration file for the PGD CLI: ``` cluster: @@ -124,7 +123,7 @@ cluster: - host=target-cluster-hostname dbname=target-cluster-dbname port=target-cluster-port user=target-cluster-user-name ``` -For our example ab-cluster: +For the example `ab-cluster`: ``` cluster: @@ -135,12 +134,12 @@ cluster: Save it as `pgd-cli-config.yml`. -See also [https://www.enterprisedb.com/docs/pgd/latest/cli/installing_cli/](https://www.enterprisedb.com/docs/pgd/latest/cli/installing_cli/) +See also [Installing PGD CLI](/pgd/latest/cli/installing_cli/). #### Migration Toolkit -EDB's Migration Toolkit (MTK) is a command-line tool that can be used to migrate data from a source database to a target database. It is a Java application and requires a Java runtime environment to be installed. +EDB's Migration Toolkit (MTK) is a command-line tool that can be used to migrate data from a source database to a target database. It's a Java application and requires a Java runtime environment to be installed. * Ubuntu ``` @@ -153,35 +152,35 @@ sudo apt-get -y install edb-migrationtoolkit sudo wget https://jdbc.postgresql.org/download/postgresql-42.7.2.jar -P /usr/edb/migrationtoolkit/lib ``` -See also [https://www.enterprisedb.com/docs/migration_toolkit/latest/installing/](https://www.enterprisedb.com/docs/migration_toolkit/latest/installing/) +See also [Installing Migration Toolkit](/migration_toolkit/latest/installing/) -## Setup and tune the target cluster +## Set up and tune the target cluster -On the target cluster and within the regional group required, select one node that will be the destination for the data. +On the target cluster and within the regional group required, select one node to be the destination for the data. -If we have a group `ab-group` with `ab-node-1`, `ab-node-2` and `ab-node-3`, we may select `ab-node-1` as our destination node. +If you have a group `ab-group` with `ab-node-1`, `ab-node-2`, and `ab-node-3`, you can select `ab-node-1` as the destination node. ### Set up a fence -Fence off all other nodes apart from the destination node. +Fence off all other nodes except for the destination node. -Connect to any node on the destination group using the `psql` command. +Connect to any node on the destination group using the psql command. Use `bdr.alter_node_option` and turn the `route_fence` option to `true` -for each node in the group apart from the destination node. +for each node in the group apart from the destination node: ```sql select bdr.alter_node_option('ab-node-2','route_fence','t'); select bdr.alter_node_option('ab-node-3','route_fence','t'); ``` -The next time you connect with `psql`, you will be directed to the write leader which should be the destination node. To ensure that it is, we need to send two more commands. +The next time you connect with psql, you're directed to the write leader, which should be the destination node. To ensure that it is, you need to send two more commands. ### Make the destination node both write and raft leader -To minimize the possibility of disconnections, we move the raft and write leader roles to our destination node. 
+To minimize the possibility of disconnections, move the raft and write leader roles to the destination node. Make the destination node the raft leader using `bdr.raft_leadership_transfer`: @@ -189,18 +188,18 @@ Make the destination node the raft leader using `bdr.raft_leadership_transfer`: bdr.raft_leadership_transfer('ab-node-1',true); ``` -This will trigger a write leader election which will elect the `ab-node-1` as write leader because you have fenced off the other nodes in the group. +Because you fenced off the other nodes in the group, this command triggers a write leader election that elects the `ab-node-1` as write leader. ### Record then clear default commit scopes -We need to make a record of the default commit scopes in the cluster. The next step will overwrite the settings and at the end of this process we will need to restore them. Run: +You need to make a record of the default commit scopes in the cluster. The next step overwrites the settings. (At the end of this process, you need to restore them.) Run: ```sql select node_group_name,default_commit_scope from bdr.node_group_summary ; ``` -This will produce an output similar to:: +This command produces an output similar to:: ``` node_group_name | default_commit_scope @@ -209,7 +208,7 @@ This will produce an output similar to:: ab-group | ba001_ab-group-a ``` -Record these values. We can now overwrite the settings: +Record these values. You can now overwrite the settings: ```sql select bdr.alter_node_group_option('ab-group','default_commit_scope', 'local'); @@ -217,9 +216,9 @@ select bdr.alter_node_group_option('ab-group','default_commit_scope', 'local'); ## Prepare to monitor the data migration -Check the target cluster is healthy +Check that the target cluster is healthy. -* Run` pgd -f pgd-cli-config.yml check-health` to check the overall health of the cluster: +* To check the overall health of the cluster, run` pgd -f pgd-cli-config.yml check-health` : ``` Check Status Message ----- ------ ------- @@ -229,9 +228,9 @@ Raft Ok Raft Consensus is working correctly Replslots Ok All BDR replication slots are working correctly Version Ok All nodes are running same BDR versions ``` -(All checks should pass) +(When the cluster is healthy, all checks pass.) -* Run `pgd -f pgd-cli-config.yml verify-cluster` to verify the configuration of the cluster: +* To verify the configuration of the cluster, run `pgd -f pgd-cli-config.yml verify-cluster`: ``` Check Status Groups ----- ------ ------ @@ -242,9 +241,9 @@ Witness-only group does not have any child groups There is at max 1 witness-only group iff there is even number of local Data Groups Ok There are at least 2 proxies configured per Data Group if routing is enabled Ok ``` -(All checks should pass) +(When the cluster is verified, all checks.) -* Run `pgd -f pgd-cli-config.yml show-nodes` to check the status of the nodes: +* To check the status of the nodes, run `pgd -f pgd-cli-config.yml show-nodes`: ``` Node Node ID Group Type Current State Target State Status Seq ID ---- ------- ----- ---- ------------- ------------ ------ ------ @@ -254,48 +253,46 @@ ab-node-3 199017004 ab-group data ACTIVE ACTIVE Up 3 ``` -* Run `pgd -f pgd-cli-config.yml show-raft` to confirm the raft leader: +* To confirm the raft leader, run `pgd -f pgd-cli-config.yml show-raft`. -* Run `pgd -f pgd-cli-config.yml show-replslots` to confirm the replication slots: +* To confirm the replication slots, run `pgd -f pgd-cli-config.yml show-replslots`. 
-* Run `pgd -f pgd-cli-config.yml show-subscriptions` to confirm the subscriptions: +* To confirm the subscriptions, run `pgd -f pgd-cli-config.yml show-subscriptions`. -* Run `pgd -f pgd-cli-config.yml show-groups` to confirm the groups: +* To confirm the groups, run `pgd -f pgd-cli-config.yml show-groups`. -These commands will provide a snapshot of the state of the cluster before the migration begins. +These commands provide a snapshot of the state of the cluster before the migration begins. ## Migrating the data -This currently has to be performed in three phases. +Currently, you must migrate the data in three phases: -1. Transferring the “pre-data” using pg_dump and pg_restore. This exports and imports all the data definitions. -1. Using MTK (Migration Toolkit) with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. -1. Transferring the “post-data” using pg_dump and pg_restore. This completes the data transfer. +1. Transferring the “pre-data” using pg_dump and pg_restore, which exports and imports all the data definitions. +1. Using MTK with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. +1. Transferring the “post-data” using pg_dump and pg_restore, which completes the data transfer. ### Transferring the pre-data -Use the `pg_dump` utility against the source database to dump the pre-data section in directory format. +Use the `pg_dump` utility against the source database to dump the pre-data section in directory format: ``` pg_dump -Fd -f predata --section=pre-data -h -p -U ``` -Once the pre-data has been dumped into the predata directory, it can be loaded, using `pg_restore` into the target cluster. +Once the pre-data is dumped into the predata directory, you can load it into the target cluster using `pg_restore`: ``` pg_restore -Fd --section=pre-data -d "host=ab-node-1-host dbname= user= options='-cbdr.ddl_locking=off -cbdr.commit_scope=local'" predata ``` -The `options=` section in the connection string to the server is important. The options disable DDL locking and sets the commit scope to `local` overriding any default commit scopes. Using `--section=pre-data` limits the restore to the configuration that precedes the data in the dump: +The `options=` section in the connection string to the server is important. The options disable DDL locking and set the commit scope to `local`, overriding any default commit scopes. Using `--section=pre-data` limits the restore to the configuration that precedes the data in the dump. ### Transferring the data -In this step, the Migration Toolkit will be used to transfer the table data between the source and target. - -Edit `/usr/edb/migrationtoolkit/etc/toolkit.properties`. +In this step, Migration Toolkit is used to transfer the table data between the source and target. -You will need to use sudo to raise your privilege to do this - ie. `sudo vi /usr/edb/migrationtoolkit/etc/toolkit.properties`. +Edit `/usr/edb/migrationtoolkit/etc/toolkit.properties`. You need to use sudo to raise your privilege to do this, that is, `sudo vi /usr/edb/migrationtoolkit/etc/toolkit.properties`. ``` SRC_DB_URL=jdbc:postgresql://:/ @@ -307,19 +304,19 @@ TARGET_DB_USER= TARGET_DB_PASSWORD= ``` -Edit the relevant values into the settings: +Edit the relevant values in the settings. Ensure that the configuration file is owned by the user you intend to run the data transfer as and read-write only for its owner. 
-Now, select sets of tables in the source database that should be be transferred together, ideally grouping them for redundancy in case of failure. +Now, select sets of tables in the source database that must be transferred together, ideally grouping them for redundancy in case of failure: ``` nohup /usr/edb/migrationtoolkit/bin/runMTK.sh -sourcedbtype postgres -targetdbtype postgres -loaderCount 1 -tableLoaderLimit 1 -fetchSize 4000 -parallelLoadRowLimit 1000 -truncLoad -dataOnly -tables ,,... > mtk.log ``` -This command uses the `-truncLoad` option and will drop indexes and constraints before the data is loaded, then recreate them after the loading has completed. +This command uses the `-truncLoad` option and drops indexes and constraints before the data is loaded. It then recreates them after the loading has completed. -You can run multiple instances of this command in parallel; add an `&` to the end of the command. Ensure that you write the output from each to different files (e.g mtk_1.log, mtk_2.log). +You can run multiple instances of this command in parallel. To do so, add an `&` to the end of the command. Ensure that you write the output from each to different files (for example, `mtk_1.log`, `mtk_2.log`). For example: @@ -335,13 +332,13 @@ nohup /usr/edb/migrationtoolkit/bin/runMTK.sh -sourcedbtype postgres -targetdbty This sets up four processes, each transferring a particular table or sets of tables as a background process. -While this is running, monitor the lag. Log into the destination node with psql and monitor lag with: +While this is running, monitor the lag. Log into the destination node with psql, and monitor lag with: ```sql SELECT NOW(); SELECT pg_size_pretty( pg_database_size('bdrdb') ); SELECT * FROM bdr.node_replication_rates; ``` -Once the lag has been consumed, return to the shell. You can now use `tail` to monitor the progress of the data transfer by following the log files of each process: +Once the lag is consumed, return to the shell. You can now use `tail` to monitor the progress of the data transfer by following the log files of each process: ``` tail -f mtk_1.log mtk_2.log mtk_3.log mtk_4.log @@ -349,7 +346,7 @@ tail -f mtk_1.log mtk_2.log mtk_3.log mtk_4.log ### Transferring the post-data -Make sure there is no replication lag across the entire cluster before proceeding with post-data. +Make sure there's no replication lag across the entire cluster before proceeding with post-data. Now dump the post-data section of the source database: @@ -357,33 +354,33 @@ Now dump the post-data section of the source database: pg_dump -Fd -f postdata --section=post-data -h -p -U ``` -And then load the post-data section into the target database: +Then load the post-data section into the target database: ``` pg_restore -Fd -d “host=ab-node-1-host dbname= user= options='-cbdr.ddl_locking=off -cbdr.commit_scope=local'” --section=post-data postdata ``` -If this step fails due to a disconnection, return to monitoring lag (as above) then, when no synchronization lag is present, repeat the restore. +If this step fails due to a disconnection, return to monitoring lag (as described previously). Then, when no synchronization lag is present, repeat the restore. ## Resume the cluster -### Remove the routing fences you set up earlier on the other nodes. +### Remove the routing fences you set up earlier on the other nodes -Connect directly to the destination node via psql. 
Use `bdr.alter_node_option` and turn off the `route_fence` option for each node in the group apart from the destination node, which is already off. +Connect directly to the destination node using psql. Use `bdr.alter_node_option` and turn off the `route_fence` option for each node in the group except for the destination node, which is already off: ```sql select bdr.alter_node_option('ab-node-2','route_fence','f'); select bdr.alter_node_option('ab-node-3','route_fence','f'); ``` -Proxies will now be able to route to all the nodes in the group. +Proxies can now route to all the nodes in the group. ### Reset commit scopes -You can now restore the default commit scopes to the cluster to allow PGD to manage the replication load. Set the `default_commit_scope` for the groups to the value for [the groups that you recorded in an earlier step](#record-then-clear-default-commit-scopes). +You can now restore the default commit scopes to the cluster to allow PGD to manage the replication load. Set `default_commit_scope` for the groups to the value for [the groups that you recorded in an earlier step](#record-then-clear-default-commit-scopes). ```sql select bdr.alter_node_group_option('ab-group','default_commit_scope', 'ba001_ab-group-a'); ``` -The cluster is now loaded and ready for production. For more assurance, you can run the `pgd -f pgd-cli-config.yml check-health` command to check the overall health of the cluster (and the other pgd commands from when you checked the cluster earlier). +The cluster is now loaded and ready for production. For more assurance, you can run the `pgd -f pgd-cli-config.yml check-health` command to check the overall health of the cluster and the other PGD commands from when you checked the cluster earlier. diff --git a/product_docs/docs/biganimal/release/migration/index.mdx b/product_docs/docs/biganimal/release/migration/index.mdx index 1209ac859a4..f06bac8622a 100644 --- a/product_docs/docs/biganimal/release/migration/index.mdx +++ b/product_docs/docs/biganimal/release/migration/index.mdx @@ -30,7 +30,6 @@ See the following BigAnimal knowlege base articles for step-by-step instructions Several options are available for migrating EDB Postgres Advanced Server and PostgreSQL databases to BigAnimal. One option is to use the Migration Toolkit. Another simple option for many use cases is to import an existing PostgreSQL or EDB Postgres Advanced Server database to BigAnimal. See [Importing an existing Postgres database](cold_migration). -## Migrating to Distributed High Availability clusters - -When migrating to a PGD powered Distributed High Availability (DHA) cluster, we recommend that you use the [DHA/PGD Bulk Migration](dha_bulk_migration) guide. This guide provides a step-by-step process for migrating your data to a DHA cluster while minimizing the impact of subsequent replication on the process. +## Migrating to distributed high availability clusters +When migrating to a PGD-powered distributed high availability (DHA) cluster, we recommend that you follow the instructions in [DHA/PGD bulk migration](dha_bulk_migration). This content provides a step-by-step process for migrating your data to a DHA cluster while minimizing the impact of subsequent replication on the process. 
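The lag-monitoring step in this patch runs its three queries interactively from psql. A scripted sketch of the same check, assuming the guide's example `ab-proxy` endpoint, `bdrdb` database, and `edb_admin` user, with an arbitrary 60-second interval:

```shell
# Repeat the guide's lag check while MTK loads data; the three queries
# are taken verbatim from the "Transferring the data" section.
while true; do
  psql "host=ab-proxy dbname=bdrdb user=edb_admin" \
    -c "SELECT NOW();" \
    -c "SELECT pg_size_pretty(pg_database_size('bdrdb'));" \
    -c "SELECT * FROM bdr.node_replication_rates;"
  sleep 60
done
```

Stop the loop with Ctrl-C once `bdr.node_replication_rates` shows the lag consumed, then return to tailing the MTK logs.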
From 6fd67002770a9ecabd345cc74cbb6b8e663802bd Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Wed, 20 Mar 2024 15:07:50 -0400 Subject: [PATCH 06/13] Update product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx --- .../docs/biganimal/release/migration/dha_bulk_migration.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index 858b4139aef..72694508f53 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -56,7 +56,7 @@ Create a virtual machine with your preferred operating system in the cloud to or ### Configure repositories -The required software is available from the EDB repositories. Install the EDB repositories on your bastion server. +The required software is available from the EDB repositories. You need to install the EDB repositories on your bastion server. * Red Hat ``` From 78c24aad916ca9b98af3687e671cc948e8fb8fcb Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Wed, 20 Mar 2024 15:23:21 -0400 Subject: [PATCH 07/13] Apply suggestions from code review Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- .../docs/biganimal/release/migration/dha_bulk_migration.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index 72694508f53..9917b0dd0df 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -1,19 +1,19 @@ --- title: Bulk loading data into DHA/PGD clusters navTITLE: Bulk loading into DHA/PGD clusters -description: This content is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD. +description: This guidance is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD. deepToC: true --- ## Bulk loading data into PGD clusters -**This content is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD.** +**This guidance is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD.** Without using care, bulk loading data into a PGD cluster can cause a lot of replication load on a cluster. With that in mind, this content describes a process to mitigate that replication load. ## Provision or prepare a PGD cluster -You must provision a PGD cluster, either manually, using TPA, or on BigAnimal. +You must provision a PGD cluster, either manually, using TPA, or on BigAnimal. This will be the target database for the migration. Ensure that you provision it with sufficient storage capacity to hold the migrated data. We recommend that, when provisioning or, if needed, after provisioning, you set the following Postgres GUC variables. 
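Where you administer the Postgres nodes directly (a TPA-provisioned cluster, for instance), the GUC table this hunk leads into maps onto `ALTER SYSTEM` calls. A sketch, with a hypothetical 16GB `max_wal_size` since the right value depends on your largest table or WAL disk capacity; on BigAnimal these parameters are instead managed through the cluster's configuration settings:

```shell
# Apply the recommended GUCs on a directly managed node; the connection
# string reuses the guide's ab-node-1-host example, and 16GB is only a
# placeholder for max_wal_size.
psql "host=ab-node-1-host dbname=bdrdb user=edb_admin" <<'EOF'
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET wal_sender_timeout = '60min';
ALTER SYSTEM SET wal_receiver_timeout = '60min';
ALTER SYSTEM SET max_wal_size = '16GB';
SELECT pg_reload_conf();
EOF
```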
From 6cd5b2830fa7ee71b4f6d97c08fd3bf6f743120a Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Wed, 20 Mar 2024 15:28:48 -0400 Subject: [PATCH 08/13] Revert "Update index.mdx" --- .../release/getting_started/creating_a_cluster/index.mdx | 2 -- 1 file changed, 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 89e807a9be2..74915736312 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -73,8 +73,6 @@ The following options aren't available when creating your cluster: - **[EDB Postgres Advanced Server](/epas/latest/)** is EDB's Oracle-compatible database offering. View [a quick demonstration of Oracle compatibility on BigAnimal](../../using_cluster/06_demonstration_oracle_compatibility). EDB Postgres Advanced Server is compatible with all three cluster types. - - **[EDB Postgres Extended Server](/pge/latest/)** is EDB's is EDB's advanced logical replication, PostgreSQL-compatible database offering. - - **[PostgreSQL](/supported-open-source/postgresql/)** is the open-source, object-relational database management system. PostgreSQL is compatible with single-node and primary/standby high-availability cluster types. 1. In the **Postgres Version** list, select the version of Postgres that you want to use. See [Database version policy](../../overview/05_database_version_policy) for more information. From 6264ead1b540a118bb5a5501cdba78d0a3a5ed04 Mon Sep 17 00:00:00 2001 From: piano35-edb <160748516+piano35-edb@users.noreply.github.com> Date: Wed, 20 Mar 2024 14:36:34 -0500 Subject: [PATCH 09/13] incorporated requested changes --- .../index.mdx | 9 ----- .../images/biganimal_faultinjectiontest_1.png | 0 .../images/biganimal_faultinjectiontest_2.png | 0 .../images/biganimal_faultinjectiontest_3.png | 0 .../images/biganimal_faultinjectiontest_4.png | 0 .../index.mdx} | 34 ++++++++++--------- .../biganimal/release/using_cluster/index.mdx | 1 + 7 files changed, 19 insertions(+), 25 deletions(-) delete mode 100644 product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing_your_cluster => fault_injection_testing}/images/biganimal_faultinjectiontest_1.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing_your_cluster => fault_injection_testing}/images/biganimal_faultinjectiontest_2.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing_your_cluster => fault_injection_testing}/images/biganimal_faultinjectiontest_3.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing_your_cluster => fault_injection_testing}/images/biganimal_faultinjectiontest_4.png (100%) rename product_docs/docs/biganimal/release/using_cluster/{04_fault_injection_testing_your_cluster/fault_injection_testing.mdx => fault_injection_testing/index.mdx} (77%) diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx deleted file mode 100644 index 3caad8562f5..00000000000 --- 
a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/index.mdx +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: "Testing availability and recovery for your cluster" - -navigation: - - Fault injection testing - ---- - -With BigAnimal, you can test the availability and recovery for your cluster. \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_1.png b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_1.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_1.png rename to product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_1.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_2.png b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_2.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_2.png rename to product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_2.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_3.png b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_3.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_3.png rename to product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_3.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_4.png similarity index 100% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/images/biganimal_faultinjectiontest_4.png rename to product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/images/biganimal_faultinjectiontest_4.png diff --git a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx similarity index 77% rename from product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx rename to product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 4fc785800f7..c705fd39960 100644 --- a/product_docs/docs/biganimal/release/using_cluster/04_fault_injection_testing_your_cluster/fault_injection_testing.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -1,30 +1,32 @@ --- title: "Fault injection testing" + +navigation: + - Fault injection testing --- You can test the fault tolerance of your cluster by deleting a VM in order to inject a fault. 
Once a VM is deleted, you can monitor the availability and recovery of the cluster. -## Requirement +## Requirements Ensure you meet the following requirements before using fault injection testing: + You have connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. + You should have permissions in your Azure subscription to view and delete VMs. -+ You have **pgd cli** installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. -+ You have created a **pgd-cli-config.yml** file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ++ You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. ++ You have created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ## Fault injection testing steps Fault injection testing consists of the following steps: -1. Verifying Cluster Health +1. Verifying cluster health 2. Determining the write leader node for your cluster 3. Deleting a write leader node from your cluster 4. Monitoring cluster health -  -  -#### Verifying Cluster Health + +### Verifying Cluster Health Use the following commands to monitor your cluster health, node info, raft, replication lag, and write leads. @@ -38,7 +40,7 @@ pgd show-subscriptions -f pgd-cli-config.yml pgd show-groups -f pgd-cli-config.yml ``` -You can use **pgd help** for more information on these commands. +You can use `pgd help` for more information on these commands. To list the supported commands, enter: @@ -77,19 +79,19 @@ To delete a write lead node from the cluster: 3. In the left navigation of BigAnimal portal, choose **Clusters**. 4. Choose the cluster to test fault injection with and copy the string value from the URL. The string value is located after the underscore. -![Delete a write lead](images/biganimal_faultinjectiontest_1.png) -  + ![Delete a write lead](images/biganimal_faultinjectiontest_1.png) + 5. In your Azure subscription, paste the string into the search and prefix it with **dp-** to search for the data plane. -From the results, choose the Kubernetes service from the Azure Region that your cluster is deployed in. + * From the results, choose the Kubernetes service from the Azure Region that your cluster is deployed in. -![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png) + ![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png)   6. Identify the Kubernetes service for your cluster. -![Delete a write lead](images/biganimal_faultinjectiontest_4.png) -  + ![Delete a write lead](images/biganimal_faultinjectiontest_4.png) + !!!Note Don't delete the VMSS here or sub resources directly. @@ -98,8 +100,8 @@ Don't delete the VMSS here or sub resources directly. 7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster to delete a chosen node. ![Delete a write lead 3](images/biganimal_faultinjectiontest_3.png) -  + ### Monitoring cluster health -After deleting a cluster node, you can monitor the health of the cluster using the same **pgd** commands that you used to verify cluster health. +After deleting a cluster node, you can monitor the health of the cluster using the same **PGD CLI** commands that you used to verify cluster health. 
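After the leader's VM is deleted, the page says to rerun the earlier health commands. A small sketch of polling until the write leader election settles, with an arbitrary 10-second interval (`watch` is assumed to be available on your workstation or bastion):

```shell
# Poll group and health state until a new write leader is shown for the
# group whose leader's VM was deleted.
watch -n 10 "pgd show-groups -f pgd-cli-config.yml && pgd check-health -f pgd-cli-config.yml"
```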
diff --git a/product_docs/docs/biganimal/release/using_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/index.mdx index eecab5c5643..a7398ff4ced 100644 --- a/product_docs/docs/biganimal/release/using_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/index.mdx @@ -9,6 +9,7 @@ navigation: - 03_modifying_your_cluster - 04_backup_and_restore - 05_monitoring_and_logging +- fault_injection_testing - 05a_deleting_your_cluster - 06_analyze_with_superset - 06_demonstration_oracle_compatibility From f2f7b99d7dfd8c50cef57b3d516b2d0213841c88 Mon Sep 17 00:00:00 2001 From: piano35-edb <160748516+piano35-edb@users.noreply.github.com> Date: Wed, 20 Mar 2024 15:10:49 -0500 Subject: [PATCH 10/13] removed nbsp, h4 --- .../release/using_cluster/fault_injection_testing/index.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index c705fd39960..18551d04a6e 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -54,8 +54,8 @@ For help with a specific command and its parameters, enter `pgd help Date: Wed, 20 Mar 2024 20:34:07 +0000 Subject: [PATCH 11/13] Update product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx --- .../release/using_cluster/fault_injection_testing/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 18551d04a6e..7f3e6d08368 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -32,7 +32,7 @@ Use the following commands to monitor your cluster health, node info, raft, repl ```sql pgd check-health -f pgd-cli-config.yml -pgd verify cluster -f pgd-cli-config.yml +pgd verify-cluster -f pgd-cli-config.yml pgd show-nodes -f pgd-cli-config.yml pgd show-raft -f pgd-cli-config.yml pgd show-replslots –verbose -f pgd-cli-config.yml From 6a4573e8caaa2ad8c42059e4b87a347d947ea3bf Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Wed, 20 Mar 2024 20:40:51 +0000 Subject: [PATCH 12/13] Fix code colors --- .../release/using_cluster/fault_injection_testing/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 7f3e6d08368..5cff0d110a4 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -30,7 +30,7 @@ Fault injection testing consists of the following steps: Use the following commands to monitor your cluster health, node info, raft, replication lag, and write leads. 
-```sql +```shell pgd check-health -f pgd-cli-config.yml pgd verify-cluster -f pgd-cli-config.yml pgd show-nodes -f pgd-cli-config.yml From 395b21db3e05e458a1a96a66ad1074cb6fa72ccc Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Wed, 20 Mar 2024 20:45:41 +0000 Subject: [PATCH 13/13] Misc fixes to text --- .../fault_injection_testing/index.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 5cff0d110a4..37eea1c4d3c 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -44,13 +44,13 @@ You can use `pgd help` for more information on these commands. To list the supported commands, enter: -```sh +```shell pgd help ``` For help with a specific command and its parameters, enter `pgd help `. For example: -```sh +```shell pgd help show-nodes ``` @@ -58,9 +58,9 @@ pgd help show-nodes ### Determining the write leader node for your cluster -```sql +```shell pgd show-groups -f pgd-cli-config.yml - +__OUTPUT__ Group Group ID Type Write Leader -------- ------------------ —--- ------------ world 3239291720 global p-x67kjp3fsq-d-1 @@ -68,7 +68,7 @@ p-x67kjp3fsq-a 2456382099 data world p-x67kjp3fsq-a-1 p-x67kjp3fsq-c 4147262499 data world p-x67kjp3fsq-d 3176957154 data world p-x67kjp3fsq-d-1 ``` -In this example, the write lead node is **p-x67kjp3fsq-a-1**. +In this example, the write leader node is **p-x67kjp3fsq-a-1**. ## Deleting a write leader node from your cluster @@ -94,7 +94,7 @@ To delete a write lead node from the cluster: !!!Note -Don't delete the VMSS here or sub resources directly. +Don't delete the Azure Kubernetes VMSS here or sub resources directly. !!! 7. Browse to the Data Plane, choose Workloads, and locate the Kubernetes resources for your cluster to delete a chosen node. @@ -103,5 +103,5 @@ Don't delete the VMSS here or sub resources directly. ### Monitoring cluster health -After deleting a cluster node, you can monitor the health of the cluster using the same **PGD CLI** commands that you used to verify cluster health. +After deleting a cluster node, you can monitor the health of the cluster using the same PGD CLI commands that you used to verify cluster health.
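As a closing sketch of the recovery check this final patch describes: poll `show-nodes` until every node reports ACTIVE/Up again, then run the overall health check. The awk field positions are an assumption based on the `show-nodes` output shown earlier in this series, so adjust them if your CLI version formats columns differently:

```shell
# Wait until no node reports anything other than ACTIVE/Up, then run
# the overall health check. tail skips the two header lines; fields 5
# and 7 are Current State and Status in the sample output above.
until pgd show-nodes -f pgd-cli-config.yml | tail -n +3 | \
      awk '$5 != "ACTIVE" || $7 != "Up" { bad=1 } END { exit bad }'; do
  sleep 10
done
pgd check-health -f pgd-cli-config.yml
```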