Merge pull request #5425 from EnterpriseDB/release-2024-03-20c
Release 2024-03-20 (c)
djw-m authored Mar 20, 2024
2 parents 1ba91f3 + dde8de1 commit 5fa5938
Showing 10 changed files with 202 additions and 75 deletions.
141 changes: 69 additions & 72 deletions product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx

Large diffs are not rendered by default.

5 changes: 2 additions & 3 deletions product_docs/docs/biganimal/release/migration/index.mdx
@@ -30,7 +30,6 @@ See the following BigAnimal knowledge base articles for step-by-step instructions

Several options are available for migrating EDB Postgres Advanced Server and PostgreSQL databases to BigAnimal. One option is to use the Migration Toolkit. Another simple option for many use cases is to import an existing PostgreSQL or EDB Postgres Advanced Server database to BigAnimal. See [Importing an existing Postgres database](cold_migration).

-## Migrating to Distributed High Availability clusters
-
-When migrating to a PGD powered Distributed High Availability (DHA) cluster, we recommend that you use the [DHA/PGD Bulk Migration](dha_bulk_migration) guide. This guide provides a step-by-step process for migrating your data to a DHA cluster while minimizing the impact of subsequent replication on the process.
+## Migrating to distributed high availability clusters
+
+When migrating to a PGD-powered distributed high availability (DHA) cluster, we recommend that you follow the instructions in [DHA/PGD bulk migration](dha_bulk_migration). This content provides a step-by-step process for migrating your data to a DHA cluster while minimizing the impact of subsequent replication on the process.
@@ -0,0 +1,107 @@
---
title: "Fault injection testing"

navigation:
- Fault injection testing
---

You can test the fault tolerance of your cluster by deleting a VM to inject a fault. After the VM is deleted, you can monitor the availability and recovery of the cluster.

## Requirements

Ensure you meet the following requirements before using fault injection testing:

+ You have connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information.
+ You have permissions in your Azure subscription to view and delete VMs.
+ You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/) for more information.
+ You have created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information.
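The CLI configuration file referenced above is a short YAML file that names the cluster and lists connection strings for its nodes. A minimal sketch, assuming a two-node cluster; the hostnames, database name, and user here are placeholders, not your cluster's actual values:

```yaml
cluster:
  name: cluster-name                # placeholder: your cluster's name
  endpoints:                        # libpq-style connection strings, one per node
    - "host=host-1.example.com port=5432 dbname=bdrdb user=edb_admin"
    - "host=host-2.example.com port=5432 dbname=bdrdb user=edb_admin"
```

See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for the authoritative format and options.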

## Fault injection testing steps

Fault injection testing consists of the following steps:

1. Verifying cluster health
2. Determining the write leader node for your cluster
3. Deleting a write leader node from your cluster
4. Monitoring cluster health

### Verifying cluster health

Use the following commands to check your cluster's health, node information, Raft status, replication lag, and write leaders.

```shell
pgd check-health -f pgd-cli-config.yml
pgd verify-cluster -f pgd-cli-config.yml
pgd show-nodes -f pgd-cli-config.yml
pgd show-raft -f pgd-cli-config.yml
pgd show-replslots --verbose -f pgd-cli-config.yml
pgd show-subscriptions -f pgd-cli-config.yml
pgd show-groups -f pgd-cli-config.yml
```

You can use `pgd help` for more information on these commands.

To list the supported commands, enter:

```shell
pgd help
```

For help with a specific command and its parameters, enter `pgd help <command_name>`. For example:

```shell
pgd help show-nodes
```

### Determining the write leader node for your cluster

To identify the write leader, run `pgd show-groups` and check the Write Leader column:
```shell
pgd show-groups -f pgd-cli-config.yml
__OUTPUT__
Group          Group ID   Type   Parent Write Leader
-----          --------   ----   ------ ------------
world          3239291720 global        p-x67kjp3fsq-d-1
p-x67kjp3fsq-a 2456382099 data   world  p-x67kjp3fsq-a-1
p-x67kjp3fsq-c 4147262499 data   world
p-x67kjp3fsq-d 3176957154 data   world  p-x67kjp3fsq-d-1
```
In this example, the write leader node for the **p-x67kjp3fsq-a** group is **p-x67kjp3fsq-a-1**.
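When scripting fault injection runs, you can extract the write leader from the CLI output instead of reading it by eye. A minimal sketch: the sample output is embedded in a heredoc here, and the group name `p-x67kjp3fsq-a` comes from the example above. Note that matching `$NF` is only reliable for rows that actually list a write leader.

```shell
# Against a live cluster, pipe the real command in instead of the heredoc:
#   pgd show-groups -f pgd-cli-config.yml | awk '$1 == "p-x67kjp3fsq-a" { print $NF }'
write_leader=$(awk '$1 == "p-x67kjp3fsq-a" { print $NF }' <<'EOF'
Group          Group ID   Type   Parent Write Leader
world          3239291720 global        p-x67kjp3fsq-d-1
p-x67kjp3fsq-a 2456382099 data   world  p-x67kjp3fsq-a-1
p-x67kjp3fsq-c 4147262499 data   world
p-x67kjp3fsq-d 3176957154 data   world  p-x67kjp3fsq-d-1
EOF
)
echo "$write_leader"    # prints: p-x67kjp3fsq-a-1
```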


### Deleting a write leader node from your cluster

To delete a write leader node from the cluster:

1. Log in to BigAnimal.
2. In a separate browser window, log in to your Microsoft Azure subscription.
3. In the left navigation of the BigAnimal portal, choose **Clusters**.
4. Choose the cluster to test fault injection with, and copy the string value from the URL. The string value is located after the underscore.

   ![Delete a write lead](images/biganimal_faultinjectiontest_1.png)

5. In your Azure subscription, paste the string into the search box and prefix it with **dp-** to search for the data plane.

   * From the results, choose the Kubernetes service from the Azure region where your cluster is deployed.

   ![Delete a write lead 2](images/biganimal_faultinjectiontest_2.png)

6. Identify the Kubernetes service for your cluster.

   ![Delete a write lead](images/biganimal_faultinjectiontest_4.png)

   !!!Note
   Don't delete the Azure Kubernetes VMSS or its subresources directly.
   !!!

7. Browse to the data plane, choose **Workloads**, and locate the Kubernetes resources for your cluster. Delete the node you chose.

   ![Delete a write lead 3](images/biganimal_faultinjectiontest_3.png)


### Monitoring cluster health

After deleting a cluster node, you can monitor the health of the cluster using the same PGD CLI commands that you used to verify cluster health.
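To watch recovery without retyping commands, you can loop a health check until it succeeds. A minimal POSIX shell sketch: the `health_cmd` function below is a stub standing in for the real `pgd check-health -f pgd-cli-config.yml` call, so the loop logic is runnable anywhere.

```shell
# Sketch: poll cluster health until it recovers, with a retry cap.
attempt=0
health_cmd() {
  # Stub for: pgd check-health -f pgd-cli-config.yml
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]   # pretend the cluster is healthy on the third check
}

tries=0
until health_cmd; do
  tries=$((tries + 1))
  if [ "$tries" -ge 10 ]; then
    echo "cluster did not recover"
    exit 1
  fi
  sleep 0   # use a real interval, e.g. `sleep 10`, against a live cluster
done
echo "cluster healthy after $attempt checks"   # prints: cluster healthy after 3 checks
```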

Expand Up @@ -9,6 +9,7 @@ navigation:
- 03_modifying_your_cluster
- 04_backup_and_restore
- 05_monitoring_and_logging
- fault_injection_testing
- 05a_deleting_your_cluster
- 06_analyze_with_superset
- 06_demonstration_oracle_compatibility
@@ -0,0 +1,9 @@
---
title: "Version 2.4.0"
---

This is a minor release of HARP 2 that includes internal maintenance fixes.

| Type | Description |
| ---- |------------ |
| Change | Routine security library upgrades and refreshed build toolchain |
2 changes: 2 additions & 0 deletions product_docs/docs/pgd/3.7/harp/01_release_notes/index.mdx
@@ -1,6 +1,7 @@
---
title: Release Notes
navigation:
- harp2.4.0_rel_notes
- harp2.3.2_rel_notes
- harp2.3.1_rel_notes
- harp2.3.0_rel_notes
@@ -26,6 +27,7 @@ The release notes in this section provide information on what was new in each re

| Version | Release Date |
| ----------------------- | ------------ |
| [2.4.0](harp2.4.0_rel_notes) | 05 Mar 2024 |
| [2.3.2](harp2.3.2_rel_notes) | 17 Oct 2023 |
| [2.3.1](harp2.3.1_rel_notes) | 27 Jul 2023 |
| [2.3.0](harp2.3.0_rel_notes) | 12 Jul 2023 |

2 comments on commit 5fa5938

@github-actions (Contributor)

πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://65fb5901f29d692e42a9040f--edb-docs.netlify.app