From 05a91c3e65a06283f91d959bbbb1adc68bae4f01 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Wed, 19 Jun 2024 13:42:56 -0400 Subject: [PATCH 1/6] Re-edit of PGD - third set of edits --- .../deploy-config/deploy-biganimal/index.mdx | 8 +-- .../deploy-config/deploy-kubernetes/index.mdx | 2 +- .../deploy-tpa/deploying/01-configuring.mdx | 9 ++- .../deploy-tpa/deploying/02-deploying.mdx | 4 +- .../deploy-tpa/deploying/index.mdx | 2 +- .../pgd/5/deploy-config/deploy-tpa/index.mdx | 10 +-- .../docs/pgd/5/deploy-config/index.mdx | 8 +-- .../docs/pgd/5/durability/administering.mdx | 5 +- product_docs/docs/pgd/5/durability/camo.mdx | 68 ++++++++++--------- 9 files changed, 60 insertions(+), 56 deletions(-) diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx index 97810ab124e..97e4f2cc02a 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-biganimal/index.mdx @@ -11,8 +11,8 @@ EDB BigAnimal is a fully managed database-as-a-service with built-in Oracle comp This section covers how to work with EDB Postgres Distributed when deployed on BigAnimal. * [Creating a distributed high-availability cluster](/biganimal/latest/getting_started/creating_a_cluster/creating_a_dha_cluster/) in the BigAnimal documentation works through the steps needed to: - * Prepare your cloud environment for a distributed high-availability cluster - * Sign in to BigAnimal + * Prepare your cloud environment for a distributed high-availability cluster. + * Sign in to BigAnimal. * Create a distributed high-availability cluster, including: - * Creating and configuring a data group - * Optionally creating and configuring a second data group in a different region + * Creating and configuring a data group. + * Optionally creating and configuring a second data group in a different region. 
diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-kubernetes/index.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-kubernetes/index.mdx index 6827efd747b..4cdb49317db 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-kubernetes/index.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-kubernetes/index.mdx @@ -10,7 +10,7 @@ EDB Postgres Distributed for Kubernetes is a Kubernetes operator designed, devel This section covers how to deploy and configure EDB Postgres Distributed using the Kubernetes operator. -* A [Quickstart](/postgres_distributed_for_kubernetes/latest/quickstart) in the PGD for Kubernetes documentation works through the steps needed to: +* [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart) in the PGD for Kubernetes documentation works through the steps needed to: * Create a Kind/Minikube cluster. * Install Helm and the Helm chart for PGD for Kubernetes. * Create a simple configuration file for a PGD cluster. diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx index 688e2ad8662..bedd73aec82 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/01-configuring.mdx @@ -23,9 +23,9 @@ tpaexec configure --architecture [options] | `--redwood` or `--no-redwood` | Required when `--edb-postgres-advanced` flag is present. Specifies whether Oracle database compatibility features are desired. | | `--location-names l1 l2 l3` | Required. Specifies the names of the locations to deploy PGD to. | | `--data-nodes-per-location N` | Specifies the number of data nodes per location. Default is 3. | -| `--add-witness-node-per-location` | For an even number of data nodes per location, adds witness nodes to allow for local consensus. Enabled by default for 2 data node locations. 
| -| `--add-proxy-nodes-per-location` | Whether to separate PGD proxies from data nodes and how many to configure. By default one proxy is configured and cohosted for each data node. | -| `--pgd-proxy-routing global\|local` | Whether PGD Proxy routing is handled on a global or local (per-location) basis. | +| `--add-witness-node-per-location` | For an even number of data nodes per location, adds witness nodes to allow for local consensus. Enabled by default for 2-data-node locations. | +| `--add-proxy-nodes-per-location` | Specifies whether to separate PGD proxies from data nodes and how many to configure. By default, one proxy is configured and cohosted for each data node. | +| `--pgd-proxy-routing global\|local` | Specifies whether PGD Proxy routing is handled on a global or local (per-location) basis. | | `--add-witness-only-location loc` | Designates one of the cluster locations as witness-only (no data nodes are present in that location). | | `--enable-camo` | Sets up a CAMO pair in each location. Works only with 2 data nodes per location. | @@ -48,7 +48,7 @@ The first argument must be the cluster directory, for example, `speedy` or `~/cl The command creates a directory named `~/clusters/speedy` and generates a configuration file named `config.yml` that follows the layout of the PGD-Always-ON architecture. You can use the `tpaexec configure --architecture PGD-Always-ON --help` command to see the values that are supported for the configuration options in this architecture. -In the example, the options select: +In the example, the options select: - An AWS deployment (`--platform aws`) - EDB Postgres Advanced Server, version 16 and Oracle compatibility (`--edb-postgres-advanced 16` and `--redwood`) @@ -135,4 +135,3 @@ Specify `--hostnames-pattern` to restrict hostnames to those matching the egrep By default, `tpaexec configure` uses the names first, second, and so on for any locations used by the selected architecture. 
Specify `--location-names` to provide more meaningful names for each location. - diff --git a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/02-deploying.mdx b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/02-deploying.mdx index 1d727ba8051..b9a6e03def2 100644 --- a/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/02-deploying.mdx +++ b/product_docs/docs/pgd/5/deploy-config/deploy-tpa/deploying/02-deploying.mdx @@ -16,7 +16,7 @@ The `tpaexec provision` command creates instances and other resources required b For example, given AWS access with the necessary privileges, TPA provisions EC2 instances, VPCs, subnets, routing tables, internet gateways, security groups, EBS volumes, elastic IPs, and so on. -You can also provision existing servers by selecting the `bare` platform and providing connection details. Whether these are bare metal servers or those provisioned separately on a cloud platform, they can be used as if they were created by TPA. +You can also provision existing servers by selecting the `bare` platform and providing connection details. Whether these are bare metal servers or those provisioned separately on a cloud platform, you can use them as if they were created by TPA. You aren't restricted to a single platform. You can spread your cluster out across some AWS instances in multiple regions and some on-premises servers or servers in other data centres, as needed. @@ -25,6 +25,8 @@ At the end of the provisioning stage, you will have the required number of insta ## Deploy The `tpaexec deploy` command installs and configures Postgres and other software on the provisioned servers. TPA can create the servers, but it doesn't matter who created them so long as SSH and sudo access are available. This includes setting up replication, backups, and so on. +
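Taken together, the pages patched above describe a three-stage TPA workflow: `tpaexec configure` to generate `config.yml`, `tpaexec provision` to create instances and resources, and `tpaexec deploy` to install and configure Postgres. As a minimal sketch of that sequence (the cluster directory, location names, and option values are illustrative only, adapted from the `speedy` example in the configuring page; the function prints the commands rather than running them, since `tpaexec` must be installed and credentialed for a real run):

```shell
# Print the three-stage TPA workflow for a PGD-Always-ON cluster.
# All option values below are illustrative; see `tpaexec configure --help`.
pgd_workflow_plan() {
  cluster="$1"
  # Stage 1: generate a config.yml following the PGD-Always-ON layout
  echo "tpaexec configure $cluster --architecture PGD-Always-ON" \
       "--platform aws --edb-postgres-advanced 16 --redwood" \
       "--location-names dc1 dc2 dc3 --pgd-proxy-routing global"
  # Stage 2: create instances, VPCs, volumes, and other resources
  echo "tpaexec provision $cluster"
  # Stage 3: install and configure Postgres, replication, and backups
  echo "tpaexec deploy $cluster"
}

pgd_workflow_plan "$HOME/clusters/speedy"
```

Each stage reads the same cluster directory, so the commands can be rerun after editing `config.yml` to adjust the cluster.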