From 8e2849ab829633f2f286b45421ecd1d7760b32ed Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 26 Sep 2023 17:08:49 -0400 Subject: [PATCH 01/22] Edits to Kasten by Veeam guide --- .../KastenbyVeeam/02-PartnerInformation.mdx | 12 ++--- .../KastenbyVeeam/03-SolutionSummary.mdx | 6 +-- .../04-ConfiguringVeeamKasten.mdx | 48 +++++++++---------- .../KastenbyVeeam/05-UsingVeeamKasten.mdx | 41 ++++++++-------- .../06-CertificationEnvironment.mdx | 4 +- .../KastenbyVeeam/07-SupportandLogging.mdx | 20 ++++---- .../partner_docs/KastenbyVeeam/index.mdx | 7 ++- 7 files changed, 70 insertions(+), 68 deletions(-) diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/02-PartnerInformation.mdx index 509aaaba12c..2da4e7d0e36 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' -description: 'Details of the Partner' +title: 'Partner information' +description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Kasten by Veeam | -| **Web Site** | https://www.kasten.io/ | -| **Partner Product** | Kasten K10 | +| **Partner name** | Kasten by Veeam | +| **Website** | https://www.kasten.io/ | +| **Partner product** | Kasten K10 | | **Version** | Kasten 6.0 | -| **Product Description** | Kasten K10 is a Cloud Native data management platform for Day 2 operations. Purpose built for Kubernetes, Kasten backups and restores your applications, handles disaster recovery and manages application migration. Kasten can be implemented with EDB Postgres for Kubernetes to create fast backups and restores. | +| **Product description** | Kasten K10 is a cloud-native data management platform for Day 2 operations. Built for Kubernetes, Kasten backs up and restores your applications, handles disaster recovery, and manages application migration. Kasten can be implemented with EDB Postgres for Kubernetes to create fast backups and restores. | diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/03-SolutionSummary.mdx index 9e2d3a5a7de..5775170c63e 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/03-SolutionSummary.mdx @@ -1,10 +1,10 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Kasten by Veeam is a data management platform built for Kubernetes that can provide enterprise operations teams with an easy-to-use and secure system for backup and restore of Kubernetes applications. Kasten can be used in conjunction with EDB Postgres for Kubernetes and the EDB external backup adapter to successfully backup and restore data. +Kasten by Veeam is a data management platform built for Kubernetes that can provide enterprise operations teams with an easy-to-use and secure system for backup and restore of Kubernetes applications. Kasten can be used with EDB Postgres for Kubernetes and the EDB external backup adapter to successfully back up and restore data. 
-The EDB Postgres for Kubernetes [external backup adapter](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/addons/#external-backup-adapter) allows for a third party tool, such as Kasten by Veeam, to discover an API that is needed in order to create a successful backup.
+The EDB Postgres for Kubernetes [external backup adapter](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/addons/#external-backup-adapter) allows for a third-party tool, such as Kasten by Veeam, to discover an API that's needed to create a successful backup.
 
 ![Kasten K10 Architecture](Images/KastenSolutionSummaryImagenew.png)
 
diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx
index fc6a1fbfcef..73fe4f893c4 100644
--- a/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx
+++ b/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx
@@ -1,6 +1,6 @@
 ---
 title: 'Configuration'
-description: 'Walkthrough on configuring the integration'
+description: 'Walkthrough of configuring the integration'
 ---
 
 Implementing EDB Postgres for Kubernetes with Kasten by Veeam requires the following components:
@@ -16,32 +16,32 @@ Implementing EDB Postgres for Kubernetes with Kasten by Veeam requires the follo
 - Kasten K10 installed on your system
 
 !!! Note
-    For this integration, use the **example.yaml** files provided in each section for the appropriate Kasten configuration pieces, and change any environment variables per your specific needs.
+    For this integration, use the `example.yaml` files provided for the appropriate Kasten configuration pieces, and change any environment variables per your specific needs.
 
-    The **Add the Backup Decorator Annotations to the Cluster** section is the important section for the Kasten addon integration.
+    [Add the backup decorator annotations to the cluster](#add-the-backup-decorator-annotations-to-the-cluster) is important for the Kasten add-on integration.
 
-    Refer to the [EDB Postgres for Kubernetes external backup adapter](https://www.enterprisedb.com/docs/postgres_for_kubernetes/latest/addons/#external-backup-adapter) docs to view more detailed information on the EDB Postgres for Kubernetes backup adaptor addon functionality and additional details on its configuraton parameters.
+    Refer to the [EDB Postgres for Kubernetes external backup adapter](/postgres_for_kubernetes/latest/addons/#external-backup-adapter) documentation for more detailed information on the EDB Postgres for Kubernetes backup adapter add-on functionality and additional details on its configuration parameters.
 
-## Install the Operator
+## Install the operator
 
-1. Install the EDB Postgres for Kubernetes operator.
+Install the EDB Postgres for Kubernetes operator.
 
 ```bash
 kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.20.2.yaml
 ```
 
-Running this command will create the operator namespace where the controller will be running.
+Running this command creates the operator namespace where the controller runs.
 
-## Create an EDB Cluster, Client and Add Data
+## Create an EDB cluster and client and add data
 
-1. Initiate the below lines of code in your Kubernetes environment to create a specific namespace and apply your `.yaml` file.
+1. 
In your Kubernetes environment, create a specific namespace and apply your `.yaml` file: ```bash kubctl create ns edb kubectl apply -f cluster-example.yaml -n edb ``` -### Example **cluster-example.yaml** file: +Example `cluster-example.yaml` file: ```bash # Example of PostgreSQL cluster @@ -94,7 +94,9 @@ kubectl cnp certificate cluster-app \ ```bash kubectl create -f client.yaml -n edb ``` -### Example **client.yaml** file: + +Example `client.yaml` file: + ```bash apiVersion: apps/v1 kind: Deployment @@ -148,7 +150,7 @@ spec: defaultMode: 0600 ``` -6. Add some data into the cluster to test the backup and restore, the following is sample data that was used for this example. +6. Add some data into the cluster to test the backup and restore. The following is sample data that was used for this example: ```bash kubectl exec -it deploy/cert-test -- bash @@ -168,11 +170,11 @@ select * from links; exit ``` -## Add the Backup Decorator Annotations to the Cluster +## Add the backup decorator annotations to the cluster -If you create the cluster from the previous section the **cluster-example.yaml** already includes the Kasten addon therefore you can skip this part. If you are working with your own cluster you will need to add the Kasten addon. +If you created the cluster from the previous process, `cluster-example.yaml` already includes the Kasten add-on, and you can skip this part. If you're working with your own cluster, you need to add the Kasten add-on. -1. Add the following annotations to your cluster, in the above **cluster-example.yaml** there is an example of where to add the annotation. +1. Add the following annotations to your cluster. The previous `cluster-example.yaml` file shows an example of where to add the annotation. ```bash "k8s.enterprisedb.io/addons": '["kasten"]' @@ -180,13 +182,13 @@ If you create the cluster from the previous section the **cluster-example.yaml** ## Install the EDB blueprint -1. Enter the following command in your environment: +1. In your environment, enter: ```bash kubectl create -f edb-hooks.yaml ``` -### Example **edb-hooks.yaml** file: +Example `edb-hooks.yaml` file: ```bash apiVersion: cr.kanister.io/v1alpha1 @@ -260,14 +262,12 @@ actions: done exit 0 ``` -## Create a Backup Policy with the EDB hooks - -1. Launch your Kasten K10 interface. +## Create a backup policy with the EDB hooks -2. Create a policy for the EDB namespace, you will need to set up a location profile for the export and kanister actions. - -Add the hooks example: - ![Kasten Backup Policy with EDB Hooks](Images/KastenBackupPolicywithHooks.png) +1. Launch the Kasten K10 interface. +2. Create a policy for the EDB namespace. You need to set up a location profile for the export and kanister actions. +3. Add the hooks example: + ![Kasten Backup Policy with EDB Hooks](Images/KastenBackupPolicywithHooks.png) diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/05-UsingVeeamKasten.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/05-UsingVeeamKasten.mdx index 0f9ea225747..5d77e8f48cb 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/05-UsingVeeamKasten.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/05-UsingVeeamKasten.mdx @@ -3,43 +3,42 @@ title: 'Using' description: 'Walkthrough of example usage scenarios' --- -When you have configured your Kubernetes environment per the `Configuring` section you will then be able to start taking backups and completing restores. +After you configure your Kubernetes environment, you can start taking backups and completing restores. 
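Before launching the first backup, it can help to confirm that the cluster is healthy and that the Kasten add-on annotation is in place. The following is an illustrative check only, not part of the certified procedure; it assumes the `edb` namespace, a cluster named `cluster-example`, and the sample `links` table from the configuration walkthrough:

```bash
# Confirm the operator has reconciled the cluster and all instances are ready
kubectl get clusters -n edb

# Check that the Kasten add-on annotation was applied to the cluster
# (expect to see k8s.enterprisedb.io/addons set to ["kasten"])
kubectl get cluster cluster-example -n edb -o jsonpath='{.metadata.annotations}'

# Spot-check the sample data on the primary instance
# (assumes the links table was created in the default app database)
kubectl exec -it cluster-example-1 -n edb -- \
  psql -U postgres -d app -c 'select count(*) from links;'
```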
-## Launch a Backup +## Launch a backup 1. Launch your Kasten K10 interface. -2. Use Kasten K10 to launch a backup that creates two restore points, a local and a remote. +2. Use Kasten K10 to launch a backup that creates two restore points: a local and a remote. -3. You now have a backup we can use to validate a restore in the next section. +3. You now have a backup to use to validate a restore. ![Launch a Backup](Images/LaunchaBackup.png) -!!! Note - The Kasten by Veeam backup process is explained below: - 1. EDB elects a replica for the backup. - 2. Kasten will discover the replica. - 3. Kasten calls the EDB pre-backup command on the discovered replica. - 4. The replica becomes ready for the backup. - 5. Kasten takes the backup. - 6. Kasten calls the EDB post backup command on the replica. - 7. The replica leaves the backup mode. - 8. The backup is then over and is consistent for a restore. +## Backup process summary + +The Kasten by Veeam backup process is: +1. EDB elects a replica for the backup. +2. Kasten discovers the replica. +3. Kasten calls the EDB pre-backup command on the discovered replica. +4. The replica becomes ready for the backup. +5. Kasten takes the backup. +6. Kasten calls the EDB post-backup command on the replica. +7. The replica leaves the backup mode. +8. The backup is over and is consistent for a restore. -## Restore Database +## Restore database -1. To get ready for Kasten K10 to complete a restore, we will remove the EDB namespace in this example. +1. To get ready for Kasten K10 to complete a restore, remove the EDB namespace: ```bash kubectl delete ns edb ``` -2. In the Kasten K10 interface go to your remote restore point. +2. In the Kasten K10 interface, go to your remote restore point. -3. On the remote restore point select `restore`. +3. On the remote restore point, select **restore**. -4. After the restore is complete, all of your data will be present. +4. After the restore is complete, all of your data is present. ![Kasten Data Restore Point](Images/KastenRestorePoint.png) - - diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/06-CertificationEnvironment.mdx index 2f1d83e235b..83826f4956f 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/06-CertificationEnvironment.mdx @@ -1,11 +1,11 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | August 28, 2023 | +| **Certification test date** | August 28, 2023 | | **EDB Postgres for Kubernetes** | 1.20.2 | | **EDB Postgres for Kubernetes External Backup Adapter** | | **Kasten by Veeam Kasten K10** | 6.0 | diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/07-SupportandLogging.mdx index 630405dd64c..f4f329b68b9 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/07-SupportandLogging.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/07-SupportandLogging.mdx @@ -1,21 +1,25 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Veeam. A proper support contract is required to be in place at both EDB and Veeam. 
A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. +Technical support for the use of these products is provided by both EDB and Veeam. A support contract must be in place at both EDB and Veeam. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs** +The following log files are available. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs: `/var/lib/edb/as15/data/log`. +### EDB Postgres Advanced Server logs -**PostgreSQL Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file, which you can use to customize logging options or enable `edb_audit` logs. -The default log directories for PostgreSQL logs vary depending on the operating system: +An example of the full path to view EDB Postgres Advanced Server logs is `/var/lib/edb/as15/data/log`. + +### PostgreSQL Server logs + +The default log directories for PostgreSQL logs depend on the operating system: - Debian-based system: `/var/log/postgresql/postgresql-x.x.main.log. X.x.` @@ -23,7 +27,7 @@ The default log directories for PostgreSQL logs vary depending on the operating - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log` -**Kasten by Veeam Logs** +### Kasten by Veeam logs -On the Kasten K10 UI navigate to `Settings` then `Support` then click `Download Logs`. +On the Kasten K10 interface, select **Settings > Support**, and then select **Download Logs**. ![Veeam Kasten Logs](Images/VeeamKastenLogging.png) diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/index.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/index.mdx index c5b88448b4d..054ba07e381 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/index.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/index.mdx @@ -5,10 +5,9 @@ directoryDefaults: iconName: handshake --- -

-[Partner Program logo]
+![Partner Program Logo](Images/PartnerProgram.jpg.png)
 
 EDB GlobalConnect Technology Partner Implementation Guide
 
 Kasten by Veeam for Kasten K10
 
-This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.
\ No newline at end of file
+This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file From be6894e1aa877cdd777f2c2e575623439587cc20 Mon Sep 17 00:00:00 2001 From: Chris Estes <106166814+ccestes@users.noreply.github.com> Date: Wed, 27 Sep 2023 10:49:28 -0400 Subject: [PATCH 02/22] BAH content in Connecting from Azure --- .../01_connecting_from_azure/index.mdx | 33 ++++++++++++++----- 1 file changed, 24 insertions(+), 9 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx index 936aa93b8c3..df19f823b59 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -1,6 +1,7 @@ --- title: Connecting from Azure navTitle: From Azure +deepToC: true redirects: - /biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure - /biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/01_private_endpoint @@ -10,8 +11,6 @@ Three different methods enable you to connect to your cluster from your applicat ## Azure private endpoint (recommended) -While other methods for connecting your cluster from your application's virtual network in Azure are available, we strongly recommend using the Azure private endpoint method. - Azure private endpoint is a network interface that securely connects a private IP address from your Azure virtual network (VNet) to an external service. You grant access only to a single cluster instead of the entire BigAnimal resource virtual network, thus ensuring maximum network isolation. Other advantages include: - You need to configure the Private Link only once. Then you can use multiple private endpoints to connect applications from many different VNets. @@ -23,8 +22,24 @@ Private endpoints are the same mechanism used by first-party Azure services such If you set up a private endpoint and want to change to a public network, you must remove the private endpoint resources before making the change. !!! +### Using BigAnimal's cloud account + +When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. + +1. When creating your cluster, on the Cluster Settings tab, in the Network section: + 1. Select **Private**. + + 1. Enter your application's Azure subscription ID. + +1. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a private endpoint. + +1. Create a private endpoint in the client's VPC. The steps for creating a private endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your own. See [Step 1: Create an Azure private endpoint](#step-1-create-an-azure-private-endpoint) and [Step 2: Create an Azure Private DNS Zone for the private endpoint](#step-2-create-an-azure-private-dns-zone-for-the-private-endpoint). + +1. In your application's Azure account, select **Private Link Center**, and then select **Private endpoints**. 
Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. + +### Using your Azure account -## Private endpoint example +#### Example This example shows how to connect your cluster using Azure private endpoint. @@ -46,7 +61,7 @@ Assume that your cluster is on a subscription called `development` and is being - Virtual network subnet: `snet-client` -### Prerequisites +#### Prerequisites To walk through an example in your own environment, you need: @@ -67,11 +82,11 @@ To walk through an example in your own environment, you need: In this example, you create an Azure private endpoint in your client VM's virtual network. After you create the private endpoint, you can use its private IP address to access the Postgres cluster. You must perform this procedure for every virtual network you want to connect from. -### Step 1: Create an Azure private endpoint +#### Step 1: Create an Azure private endpoint Create an Azure private endpoint in each client virtual network that needs to connect to your BigAnimal cluster. You can create the private endpoint using either the [Azure portal](#using-the-azure-portal) or the [Azure CLI](#using-the-azure-cli). -#### Using the Azure portal +##### Using the Azure portal 1. If you prefer to create the private endpoint using the Azure portal, on the upper-left side of the screen, select **Create a resource > Networking > Private Link**. Alternatively. in the search box enter `Private Link`. @@ -129,7 +144,7 @@ you created by entering the following details: 10. Proceed to [Accessing the cluster](#accessing-the-cluster). -#### Using the Azure CLI +##### Using the Azure CLI If you prefer to create the private endpoint using the Azure CLI, either use your local terminal with an Azure CLI profile already configured or open a new Azure Cloud Shell using the Azure portal. @@ -160,7 +175,7 @@ az network private-endpoint create \ - `subscription` is the Azure subscription in which to create the private endpoint. -### Accessing the cluster +#### Accessing the cluster You have successfully built a tunnel between your client VM's virtual network and the cluster. You can now access the cluster from the private endpoint in your client VM. The private endpoint's private IP address is associated with an independent virtual network NIC. Get the private endpoint's private IP address using the following commands: ```shell @@ -185,7 +200,7 @@ edb_admin=> ``` -### Step 2: Create an Azure Private DNS Zone for the private endpoint +#### Step 2: Create an Azure Private DNS Zone for the private endpoint EDB strongly recommends using a [private Azure DNS zone](https://docs.microsoft.com/en-us/azure/dns/private-dns-privatednszone) with the private endpoint to establish a connection with a cluster. You can't validate TLS certificates using `verify-full` when connecting to an IP address. 
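The zone, virtual network link, and record can also be created from the Azure CLI. The following is an illustrative sketch that reuses the client-side names from this example (`rg-client`, `vnet-client`); the zone name, record name, and IP address are placeholders, so substitute the host from your own cluster's connection string and the private endpoint IP address obtained earlier:

```shell
# Zone name is a placeholder; use the domain from your cluster's hostname
az network private-dns zone create \
    --resource-group rg-client \
    --name <your-cluster-domain>

# Link the zone to the client virtual network; auto-registration isn't needed
az network private-dns link vnet create \
    --resource-group rg-client \
    --zone-name <your-cluster-domain> \
    --name link-vnet-client \
    --virtual-network vnet-client \
    --registration-enabled false

# Point the cluster's hostname at the private endpoint's private IP address
az network private-dns record-set a add-record \
    --resource-group rg-client \
    --zone-name <your-cluster-domain> \
    --record-set-name <your-cluster-record> \
    --ipv4-address <private-endpoint-IP>

# Connecting by name instead of IP lets psql verify the server certificate
psql "host=<your-cluster-record>.<your-cluster-domain> user=edb_admin dbname=edb_admin sslmode=verify-full"
```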
From 65777ee5c9986c8267db5c5575c61dc19ba9ebf4 Mon Sep 17 00:00:00 2001 From: Chris Estes <106166814+ccestes@users.noreply.github.com> Date: Thu, 28 Sep 2023 09:00:40 -0400 Subject: [PATCH 03/22] valerio connecting from azure suggestion Co-authored-by: Valerio Del Sarto --- .../01_connecting_from_azure/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx index df19f823b59..3017ef0cfd5 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -33,7 +33,7 @@ When using BigAnimal's cloud account, when creating a cluster, you provide BigAn 1. After the cluster is created, go to the cluster details to see the corresponding endpoint service name. You need the service name while creating a private endpoint. -1. Create a private endpoint in the client's VPC. The steps for creating a private endpoint in the client's VPC are the same whether you're using BigAnimal's cloud or your own. See [Step 1: Create an Azure private endpoint](#step-1-create-an-azure-private-endpoint) and [Step 2: Create an Azure Private DNS Zone for the private endpoint](#step-2-create-an-azure-private-dns-zone-for-the-private-endpoint). +1. Create a private endpoint in the client's VNet. The steps for creating a private endpoint in the client's VNet are the same whether you're using BigAnimal's cloud or your own. See [Step 1: Create an Azure private endpoint](#step-1-create-an-azure-private-endpoint) and [Step 2: Create an Azure Private DNS Zone for the private endpoint](#step-2-create-an-azure-private-dns-zone-for-the-private-endpoint). 1. In your application's Azure account, select **Private Link Center**, and then select **Private endpoints**. Select the endpoint you created previously, and use the service name provided in the details section in BigAnimal to access your cluster. From f9022feb4331e15475a3c12cebeb2b7537fd206c Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 28 Sep 2023 10:20:45 -0400 Subject: [PATCH 04/22] Update 04-ConfiguringVeeamKasten.mdx --- .../partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx b/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx index 73fe4f893c4..c82440b2814 100644 --- a/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx +++ b/advocacy_docs/partner_docs/KastenbyVeeam/04-ConfiguringVeeamKasten.mdx @@ -18,7 +18,7 @@ Implementing EDB Postgres for Kubernetes with Kasten by Veeam requires the follo !!! Note For this integration, use the `example.yaml` files provided for the appropriate Kasten configuration pieces, and change any environment variables per your specific needs. - [Add the backup decorator annotations to the cluster](#add-the-backup-decorator-annotations-to-the-cluster) is important for the Kasten add-on integration. + See [Add the backup decorator annotations to the cluster](#add-the-backup-decorator-annotations-to-the-cluster), which is important for the Kasten add-on integration. 
    Refer to the [EDB Postgres for Kubernetes external backup adapter](/postgres_for_kubernetes/latest/addons/#external-backup-adapter) documentation for more detailed information on the EDB Postgres for Kubernetes backup adapter add-on functionality and additional details on its configuration parameters.
 
From ebafcc17d1c97b7afd806dc6c42b4e355f6cd1e3 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 28 Sep 2023 14:47:41 -0400
Subject: [PATCH 05/22] Added regions per Jira ticket

---
 .../overview/03a_region_support/index.mdx | 24 +++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx b/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx
index cb1b41582ef..d13c7e96802 100644
--- a/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx
+++ b/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx
@@ -152,6 +152,30 @@ When using Google Cloud, you can create clusters in the following regions.
 
 ## BigAnimal's cloud account
 
+### Azure regions
+
+When using Azure and BigAnimal's cloud account, you can create clusters in the following regions.
+
+#### North America (NA)
+
+| Cloud region             | Short name     |
+| ------------------------ | -------------- |
+| US East (Virginia)       | eastus2        |
+| Canada (Central)         | canadacentral  |
+
+
+#### Asia and Pacific (APAC)
+
+| Cloud region             | Short name     |
+| ------------------------ | -------------- |
+| Asia Pacific (Mumbai)    | india-west     |
+
+#### Europe, Middle East, and Africa (EMEA)
+
+| Cloud region       | Short name   |
+| ------------------ | ------------ |
+| Europe (London)    | uksouth      |
+
 ### AWS regions
 
 When using AWS and BigAnimal's cloud account, you can create clusters in the following regions.

From 9ba5ce0a27b43ad6e5c28ad854b361682a113e86 Mon Sep 17 00:00:00 2001
From: Chris Estes <106166814+ccestes@users.noreply.github.com>
Date: Fri, 29 Sep 2023 10:09:01 -0400
Subject: [PATCH 06/22] accessing Azure-BAH logs content changed section title changes to example

---
 .../monitoring_from_azure/index.mdx | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx
index d47e765da01..4a1cf9800b0 100644
--- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx
@@ -53,6 +53,49 @@ PostgresAuditLogs_CL
 | project record_log_time_s, record_error_severity_s, record_message_s
 | sort by record_log_time_s desc
 ```
+
+### Using BigAnimal's cloud account
+
+To access your Postgres cluster logs, when using BigAnimal's cloud account, generate a SAS token from BigAnimal and use it to download the logs.
+
+1. In the BigAnimal portal, select **Clusters**, select your cluster, and select the **Monitoring & Logging** tab.
+
+1. Select **Generate Token** and copy the SAS token. The SAS token is a sensitive value and shouldn't be made publicly available. The following is a sample SAS token:
+
+   ```
+   https://blobsamples.blob.core.windows.net/?sv=2022-11-02&ss=b&srt=sco&sp=rwlc&se=2023-05-24T09:51:36Z&st=2023-05-24T01:51:36Z&spr=https&sig=
+   ```
+
+1. 
Enter the `azcopy` command to download the Postgres logs from BigAnimal. For example: + + ``` + azcopy copy '$TOKEN' . --recursive + INFO: Scanning... + INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support + + Job aa4b74a0-bc92-be4e-551c-47aec1c1cfc3 has started + Log file is located at: /Users/sample_user/.azcopy/aa4b74a0-bc92-be4e-551c-47aec1c1cfc3.log + + 100.0 %, 5 Done, 0 Failed, 0 Pending, 0 Skipped, 5 Total, 2-sec Throughput (Mb/s): 0.5375 + + + Job aa4b74a0-bc92-be4e-551c-47aec1c1cfc3 summary + Elapsed Time (Minutes): 0.0333 + Number of File Transfers: 5 + Number of Folder Property Transfers: 0 + Number of Symlink Transfers: 0 + Total Number of Transfers: 5 + Number of File Transfers Completed: 5 + Number of Folder Transfers Completed: 0 + Number of File Transfers Failed: 0 + Number of Folder Transfers Failed: 0 + Number of File Transfers Skipped: 0 + Number of Folder Transfers Skipped: 0 + TotalBytesTransferred: 134416 + Final Job Status: Completed + $ tail p-a1b2c3d4d5/kubernetes-logs/p-a1b2c3d4d5/2023/09/26/13/19/azure_customer_postgresql_cluster.var.log.containers.p-a1b2c3d4d5-1_p-a1b2c3d4d5_postgres-c798aa19ea0481c8d9575f025405b3ad9212816ca7e928f997473055499a692c.log + {"@timestamp":"2023-09-26T13:19:19.572442Z","level":"info","ts":"2023-09-26T13:19:19Z","logger":"wal-archive","msg":"Archived WAL file","logging_pod":"p-a1b2c3d4d5-1","walName":"pg_wal/000000010000000000000006","startTime":"2023-09-26T13:19:18Z","endTime":"2023-09-26T13:19:19Z","elapsedWalTime":1.060413255,"stream":"stdout","logtag":"F","message":"{\"level\":\"info\",\"ts\":\"2023-09-26T13:19:19Z\",\"logger\":\"wal-archive\",\"msg\":\"Archived WAL + ``` ## Metrics From f35f4cb95ed6e89ae5caa248e6dda8aa0b2a8bc3 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 2 Oct 2023 09:57:27 -0400 Subject: [PATCH 07/22] Update product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx Co-authored-by: Valerio Del Sarto --- .../biganimal/release/overview/03a_region_support/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx b/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx index d13c7e96802..4c5d1c4ac03 100644 --- a/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx +++ b/product_docs/docs/biganimal/release/overview/03a_region_support/index.mdx @@ -168,7 +168,7 @@ When using Azure and BigAnimal's cloud account, you can create clusters in the f | Cloud region | Short name | | ------------------------ | -------------- | -| Asia Pacific (Mumbai) | india-west | +| Asia Pacific (Pune) | centralindia | #### Europe, Middle East, and Africa (EMEA) From 9682e3c5ed1428afc3a3b20bdcde971b73541fb2 Mon Sep 17 00:00:00 2001 From: Chris Estes <106166814+ccestes@users.noreply.github.com> Date: Tue, 3 Oct 2023 10:12:21 -0400 Subject: [PATCH 08/22] Update product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx Co-authored-by: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> --- .../01_connecting_from_azure/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx 
index 3017ef0cfd5..17a6d570f16 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -26,7 +26,7 @@ If you set up a private endpoint and want to change to a public network, you mus When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately. -1. When creating your cluster, on the Cluster Settings tab, in the Network section: +1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section: 1. Select **Private**. 1. Enter your application's Azure subscription ID. From 7366292b09f59e8d2dac241f3f4ed9c36338ac33 Mon Sep 17 00:00:00 2001 From: Chris Estes <106166814+ccestes@users.noreply.github.com> Date: Tue, 3 Oct 2023 10:12:28 -0400 Subject: [PATCH 09/22] Update product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx Co-authored-by: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> --- .../05_monitoring_and_logging/monitoring_from_azure/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx index 4a1cf9800b0..8b56a5e0165 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/monitoring_from_azure/index.mdx @@ -60,7 +60,7 @@ To access your Postgres cluster logs, when using BigAnimal's cloud account, gene 1. In the BigAnimal portal, select **Clusters**, select your cluster, and select the **Monitoring & Logging** tab. -1. Select **Generate Token** and copy the SAS token. The SAS token is a sensitive value and shouldn't be made publicly available. The following is a sample SAS token: +1. Select **Generate Token** and copy the SAS token. The SAS token is a sensitive value, so don't make it publicly available. The following is a sample SAS token: ``` https://blobsamples.blob.core.windows.net/?sv=2022-11-02&ss=b&srt=sco&sp=rwlc&se=2023-05-24T09:51:36Z&st=2023-05-24T01:51:36Z&spr=https&sig= From b0a09c2e70993a95a165b40e4023b43ebb5e7419 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Wed, 4 Oct 2023 16:03:04 +0100 Subject: [PATCH 10/22] Fix small 3.7 typo --- product_docs/docs/pgd/3.7/bdr/nodes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/3.7/bdr/nodes.mdx b/product_docs/docs/pgd/3.7/bdr/nodes.mdx index c9f4ce74a73..0c6edc540b1 100644 --- a/product_docs/docs/pgd/3.7/bdr/nodes.mdx +++ b/product_docs/docs/pgd/3.7/bdr/nodes.mdx @@ -720,7 +720,7 @@ Then all remaining nodes will make a secondary, temporary, connection to the most-recent node to allow them to catch up any missing data. A parted node still is known to BDR, but won't consume resources. A -node my well be re-added under the very same name as a parted node. 
+node may well be re-added under the very same name as a parted node. In rare cases, it may be advisable to clear all metadata of a parted node with the function `bdr.drop_node()`. From d403f2c7e7d0034ea05a0c78cbea4b5a670db0bc Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Thu, 5 Oct 2023 13:58:54 +0530 Subject: [PATCH 11/22] EPAS 15 - Updated modifying the data directory location as per DB-2501 --- .../modifying_the_data_directory_location.mdx | 24 ++++--------------- 1 file changed, 5 insertions(+), 19 deletions(-) diff --git a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx index e890265315f..e56198cd4cc 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx @@ -14,31 +14,17 @@ By default, data files reside under the `/var/lib/edb/as15/data` directory. To u cp /usr/lib/systemd/system/edb-as-15.service /etc/systemd/system/ ``` -- After copying the unit file to the new location, create the service file `/etc/systemd/system/edb-as-15.service`. - -- In the `/lib/systemd/system/edb-as-15.service` file, update the following values with the new location of the data directory: - - ```text - Environment=PGDATA=/var/lib/edb/as15/data - PIDFile=/var/lib/edb/as15/data/postmaster.pid - ``` - -- Delete the content of the `/etc/systemd/system/edb-as-15.service` file except the following line: +- In the `/etc/systemd/system/edb-as-15.service` file, update the following values with the new location of the data directory: ```text - .include /lib/systemd/system/edb-as-15.service + Environment=PGDATA=/tmp/as15/data + PIDFile=/tmp/as15/data/postmaster.pid ``` -- Initialize the cluster at the new location: +- Go to bin directory and initialize the cluster with the new location: ```text - PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as15/bin/edb-as-15-setup initdb - ``` - -- Reload systemd, updating the modified service scripts: - - ```text - systemctl daemon-reload + ./edb-as-15-setup initdb ``` - Start the EDB Postgres Advanced Server service: From 5cb231f4a1f8f9083deb984eb3b03b7d03e1f132 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 28 Sep 2023 16:09:02 +0100 Subject: [PATCH 12/22] Replace/Refresh Terminology Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/terminology.mdx | 108 ++++++++++++++++-------- 1 file changed, 73 insertions(+), 35 deletions(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 43add8dd3fa..1d20759d19f 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -2,83 +2,120 @@ title: Terminology --- -The terminology that follows is important for understanding EDB Postgres Distributed functionality and the requirements that it addresses in the realms of high availability, replication, and clustering. +There are many terms you will come across in EDB Postgres Distributed that you may be unfamiliar with. This page is a list of many of those terms with quick definitions. 
-#### Asynchronous replication -Copies data to cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than synchronous replication. However, it introduces the potential for conflicts because of multiple concurrent changes. You must manage any conflicts that arise. +#### Asynchronous replication -#### Availability +Copies data to cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than synchronous replication. However, asynchronous replication can see a lag in how long changes take to appear in the various cluster members. While the cluster will be [eventually consistent](#eventual-consistency), there will be potential for nodes to be apparently out of sync with each other. -The probability that a system will operate satisfactorily at a given time when used in a stated environment. For many people, this is the overall amount of uptime versus downtime for an application. (See also [Nines](#nines)) +#### Commit scopes + +Rules for managing how transactions are committed between the nodes and groups of a PGD cluster. Used to configure [synchronous replication](#synchronous-replication), [Group Commit](#group-commit), [CAMO](#camo-or-commit-at-most-once), [Eager](#eager), lag control and other PGD features. #### CAMO or commit-at-most-once -Wraps Eager Replication with additional transaction management at the application level to guard against a transaction being executed more than once. This transaction management is critical for high-value transactions found in payments solutions. It's roughly equivalent to the Oracle feature Transaction Guard. +High value transactions in some applications require that the application is able to not only confirm that the transaction has been committed but that the transaction is only committed once or not at all. To ensure this happens in PGD, CAMO can be enabled allowing the application to actively participate in the transaction. + +#### Conflicts + +As data is replicated across the nodes of a PGD cluster, there may be occasions when changes from one source clash with changes from another source. This is a conflict and can be handled with conflict resolution (rules which decide which source is correct or preferred), or avoided with conflict-free data types. + +#### Consensus + +How [Raft](#raft) makes group-wide decisions. Given a number of nodes in a group, Raft looks for a consensus of the number of nodes/2+1 voting for a decision. For example, when a write leader is being selected, a Raft consensus is sought over which node in the group will be the write leader. Consensus can only be reached if there is a quorum of voting members. + +#### Cluster + +Generically, a cluster is a group of multiple redundant systems arranged to appear to end users as one system. See also [PGD clusters](#pgd-clusters), [Kubernetes clusters](#kubernetes-clusters) and [Postgres clusters](#postgres-cluster). -#### Clustering +#### DDL -An approach for high availability in which multiple redundant systems are managed to avoid single points of failure. It appears to the end user as one system. +Data Definition Language - The subset of SQL commands that deal with the defining and managing the structure of a database. DDL statements can create, modify and delete objects - schemas, tables and indexes - within the database. Common DDL commands are CREATE, ALTER and DROP. 
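As a quick illustration, reusing the `links` table from the Kasten example earlier in this series; DML, defined below, is shown for contrast:

```sql
-- DDL: creates and alters structure
CREATE TABLE links (
    id SERIAL PRIMARY KEY,
    url VARCHAR(255) NOT NULL
);
ALTER TABLE links ADD COLUMN name VARCHAR(255);

-- DML: manipulates the rows inside that structure
INSERT INTO links (url, name) VALUES ('https://www.enterprisedb.com', 'EDB');
UPDATE links SET name = 'EnterpriseDB' WHERE id = 1;
DELETE FROM links WHERE id = 1;

-- DDL: removes the object
DROP TABLE links;
```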
-#### Data sharding +#### DML -Enables scaling out a database by breaking up data into chunks called *shards* and distributing them across separate nodes. +Data Manipulation Language - The subset of SQL commands that deal with manipulating the data held within a database. DML statements can create, modify and delete rows within tables in the database. Common DML commands are INSERT, UPDATE and DELETE. SELECT is also often referred to as DML, although it is actually part of the -#### Eager Replication for PGD -Conflict-free replication with all cluster members. Technically, this is synchronous logical replication using two-phase commit (2PC). +#### Eager + +Eager is a synchronous commit mode which avoids conflicts by detecting incoming potentially conflicting transactions and “eagerly” one of aborting them to maintain consistency. #### Eventual consistency -A distributed computing consistency model stating changes to the same item in different cluster members will converge to the same value. With PGD, this is achieved through asynchronous logical replication with conflict resolution and conflict-free replicated data types. +A distributed computing consistency model stating changes to the same item in different cluster members will eventually converge to the same value. Asynchronous logical replication with conflict resolution and conflict-free replicated data types exhibit eventual consistency in PGD. #### Failover -The automated process that recognizes a failure in a highly available database cluster and takes action to connect the application to another active database. The goal is to minimize downtime and data loss. +The automated process that recognizes a failure in a highly available database cluster and takes action to maintain consistency and availability. The goal is to minimize downtime and data loss. -#### Horizontal scaling or scale out +#### Group commit -A modern distributed computing approach that manages workloads across multiple nodes, such as scaling out a web server to handle increased traffic. +A synchronous commit mode which requires more than one PGD node to successfully receive and confirm a transaction at commit time. -#### Logical replication +#### Immediate consistency + +A distributed computing model where all replicas are updated synchronously and simultaneously. This ensures that all reads after a write has completed will see the same value on all nodes. The downside of this approach is its negative impact on performance. -Provides more flexibility than physical replication in terms of selecting the data replicated between databases in a cluster. Also important is that cluster members can be on different versions of the database software. +#### Kubernetes clusters -#### Nines +A Kubernetes cluster is a group of machines that work together to run containerized applications. A [PGD Cluster](pgd-cluster) can be configured to run as containerized components on a Kubernetes Cluster. -A measure of availability expressed as a percentage of uptime in a given year. Three nines (99.9%) allows for 43.83 minutes of downtime per month. Four nines (99.99%) allows for 4.38 minutes of downtime per month. Five nines (99.999%) allows for 26.3 seconds of downtime per month. +#### Logical replication + +A more efficient method of replicating changes in the database. Rather than duplicate the originating database’s disk blocks, logical replication instead sees the DML commands - inserts, deletes and updates,- published to all systems that have subscribed to see the changes. 
Each subscriber then applies the changes locally. Logical replication is not able to support most DDL #### Node -One database server in a cluster. A term *node* differs from the term *database server* because there's more than one node in a cluster. A node includes the database server, the OS, and the physical hardware, which is always separate from other nodes in a high-availability context. +A general term for an element of a distributed system. A node can play host to any service. In PGD, there are [PGD Nodes](#pgd-node) which run a Postgres database and the BDR extension and optionally a PGD Proxy service. + +Typically, for high availability, each node runs on separate physical hardware, but not necessarily. For example, in modern cloud platforms such as Kubermetes the hardware may be shared with the cloud. + +#### Node groups + +PGD Nodes in PGD clusters can be organized into groups to reflect the logical operation of the cluster. For example, the data nodes in a particular physical location may be part of a dedicated node group for the location. + + +#### PGD cluster + +A group of multiple redundant database systems and proxies arranged to avoid single points of failure while appearing to end users as one system. PGD clusters may be run on Docker instances, [Kubernetes clusters](kubernetes-clusters), cloud instances or “bare” Linux hosts, or a combination of those platforms. A PGD cluster may also include backup and proxy nodes. The data nodes in a cluster are grouped together in a top level group and into various local [node groups](#node-groups). + +### PGD node + +In a PGD cluster, there are nodes which run databases and participate in the PG Cluster. A typical PGD node will run a Postgres database and the BDR extension and optionally a PGD Proxy service. PGD Nodes may also be referred to as data nodes which suggests they store data, though some PGD Nodes, specifically [witness nodes](#witness-nodes) do not do that. #### Physical replication -Copies all changes from a database to one or more standby cluster members by copying an exact copy of database disk blocks. While fast, this method has downsides. For example, only one master node can run write transactions. Also, you can use this method only where all cluster members are on the same major version of the database software, in addition to several other more complex restrictions. +By making an exact copy of database disk blocks as they are modified to one or more standby cluster members, physical replication provides an easily implemented method to replicate servers. But there are restrictions on how it can be used. For example, only one master node can run write transactions. Also, the method requires that all cluster members are on the same major version of the database software, in addition to several other more complex restrictions. -#### Read scalability +#### Postgres cluster -Can be achieved by introducing one or more read replica nodes to a cluster and have the application direct writes to the primary node and reads to the replica nodes. As the read workload grows, you can increase the number of read replica nodes to maintain performance. +Traditionally, in Postgresql, a number of databases running on a single server is referred to as a cluster (of databases). This kind of Postgres cluster is not highly available. To get high availability and redundancy, you need a [PGD Cluster](#pgd-cluster). 
-#### Recovery point objective (RPO) +#### Quorum -The maximum targeted period in which data might be lost due to a disruption in delivery of an application. A very low or minimal RPO is a driver for very high availability. +When a [Raft](#Raft) [consensus](#consensus) is needed by a PGD cluster, there needs to be a minimum number of voting nodes participating in the vote. This number is called a quorum. For example, with a 5 node cluster, the quorum would be 3 nodes in the cluster voting. A consensus would be 5/2+1 nodes, 3 nodes voting the same way. If there were only 2 voting nodes, then a consensus would never be established. -#### Recovery time objective (RTO) +#### Raft -The targeted length of time for restoring the disrupted application. A very low or minimal RTO is a driver for very high availability. +Replicated, Available, Fault Tolerance. A consensus algorithm which uses votes from a quorum of machines in a distributed cluster to establish a consensus. PGD uses RAFT within groups (top level or local) to establish which node is the write leader. -#### Single point of failure (SPOF) +#### Read scalability -The identification of a component in a deployed architecture that has no redundancy and therefore prevents you from achieving higher levels of availability. +The ability of a system to handle increasing read workloads. For example, PGD is able to introduce one or more read replica nodes to a cluster and have the application direct writes to the primary node and reads to the replica nodes. As the read workload grows, you can increase the number of read replica nodes to maintain performance. #### Switchover -A planned change in connection between the application and the active database node in a cluster, typically done for maintenance. +A planned change in connection between the application or proxies and the active database node in a cluster, typically done for maintenance. #### Synchronous replication -When changes are updated at all participating nodes at the same time, typically leveraging two-phase commit. While this approach delivers immediate consistency and avoids conflicts, a performance cost in latency occurs due to the coordination required across nodes. +When changes are updated at all participating nodes at the same time, typically leveraging a two-phase commit. While this approach replicates changes and resolves conflicts before committing, a performance cost in latency occurs due to the coordination required across nodes. + +#### Subscriber-Only nodes + +A PGD cluster is based around bidirectional replication, but in some use cases such as needing a read-only server, bidirectional replication is not needed. A Subscriber-Only Node is used in this case; it only subscribes to changes in the database to keep itself up to date and provide correct results to any run directly on the node. This can be used to enable horizontal read scalability in a PGD cluster. #### Two-phase commit (2PC) @@ -88,11 +125,12 @@ A multi-step process for achieving consistency across multiple database nodes. A traditional computing approach of increasing a resource (CPU, memory, storage, network) to support a given workload until the physical limits of that architecture are reached, e.g., Oracle Exadata. -#### Write scalability +#### Witness nodes + +To resolve clusters or groups of nodes coming to a consensus, there needs to be an odd number of data nodes. 
Where resources are limited, a witness node can be used to create an odd number of data nodes, participating in cluster decisions but not replicating the data. Not holding the data means it is not able to operate as a standby server. -Occurs when replicating the writes from the original node to other cluster members becomes less expensive. In vertical-scaled architectures, write scalability is possible due to shared resources. However, in horizontal scaled (or nothing-shared) architectures, this is possible only in very limited scenarios. #### Write leader -In always-on architectures, a node is selected as the correct connection endpoint for applications. This node is called the write leader. By selecting a write leader for applications to use, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of proxy nodes. If the write leader becomes unavailable, the proxy nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. +In [always-on architectures](#always_on_architecture), a node is selected as the correct connection endpoint for applications This node is called the write leader and once selected proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. From fa80482ac8cd9b1bcd1cc5f282feb9085e6066c0 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Fri, 29 Sep 2023 11:14:06 +0100 Subject: [PATCH 13/22] Apply suggestions from review comments --- product_docs/docs/pgd/5/terminology.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 1d20759d19f..43ea6fb09f8 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -7,7 +7,7 @@ There are many terms you will come across in EDB Postgres Distributed that you m #### Asynchronous replication -Copies data to cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than synchronous replication. However, asynchronous replication can see a lag in how long changes take to appear in the various cluster members. While the cluster will be [eventually consistent](#eventual-consistency), there will be potential for nodes to be apparently out of sync with each other. +A type of replication that copies data to other PGD cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than [synchronous replication](#synchrous-replication). However, asynchronous replication can see a lag in how long changes take to appear in the various cluster members. While the cluster will be [eventually consistent](#eventual-consistency), there will be potential for nodes to be apparently out of sync with each other. 
#### Commit scopes

@@ -15,7 +15,7 @@ Rules for managing how transactions are committed between the nodes and groups o

#### CAMO or commit-at-most-once

-High value transactions in some applications require that the application is able to not only confirm that the transaction has been committed but that the transaction is only committed once or not at all. To ensure this happens in PGD, CAMO can be enabled allowing the application to actively participate in the transaction.
+High value transactions in some applications require that the application successfully commited exactly once, and in the event of failover and retrying, only one. To ensure this happens in PGD, CAMO can be enabled allowing the application to actively participate in the transaction.

#### Conflicts

@@ -35,7 +35,7 @@ Data Definition Language - The subset of SQL commands that deal with the definin

#### DML

-Data Manipulation Language - The subset of SQL commands that deal with manipulating the data held within a database. DML statements can create, modify and delete rows within tables in the database. Common DML commands are INSERT, UPDATE and DELETE. SELECT is also often referred to as DML, although it is actually part of the
+Data Manipulation Language - The subset of SQL commands that deal with manipulating the data held within a database. DML statements can create, modify and delete rows within tables in the database. Common DML commands are INSERT, UPDATE and DELETE.

#### Eager

@@ -64,7 +64,7 @@ A Kubernetes cluster is a group of machines that work together to run containeri

#### Logical replication

-A more efficient method of replicating changes in the database. Rather than duplicate the originating database’s disk blocks, logical replication instead sees the DML commands - inserts, deletes and updates,- published to all systems that have subscribed to see the changes. Each subscriber then applies the changes locally. Logical replication is not able to support most DDL
+A more efficient method of replicating changes in the database. While physical streaming replication duplicate the originating database’s disk blocks, logical replication instead sees the DML commands - inserts, deletes and updates,- published to all systems that have subscribed to see the changes. Each subscriber then applies the changes locally. Logical replication is not able to support most DDL

#### Node

@@ -83,7 +83,7 @@ A group of multiple redundant database systems and proxies arranged to avoid sin

### PGD node

-In a PGD cluster, there are nodes which run databases and participate in the PG Cluster. A typical PGD node will run a Postgres database and the BDR extension and optionally a PGD Proxy service. PGD Nodes may also be referred to as data nodes which suggests they store data, though some PGD Nodes, specifically [witness nodes](#witness-nodes) do not do that.
+In a PGD cluster, there are nodes that run databases and participate in the PGD cluster. A typical PGD node runs a Postgres database and the BDR extension, and optionally a PGD Proxy service. PGD nodes may also be referred to as data nodes, which suggests they store data, though some PGD nodes, specifically [witness nodes](#witness-nodes), do not.

#### Physical replication

@@ -127,7 +127,7 @@ A traditional computing approach of increasing a resource (CPU, memory, storage,

#### Witness nodes

-To resolve clusters or groups of nodes coming to a consensus, there needs to be an odd number of data nodes.
Where resources are limited, a witness node can be used to create an odd number of data nodes, participating in cluster decisions but not replicating the data. Not holding the data means it is not able to operate as a standby server. +Witness nodes primarily serve to help the cluster establish a consensus. An odd number of data nodes are needed to establish a consensus and, where resources are limited, a witness node can be used to participate in cluster decisions but not replicate the data. Not holding the data means it cannot operate as a standby server or provide majorities in synchronous commits. #### Write leader From 2b0782abf9eb88818cad97f5fd18fa9adda862aa Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Fri, 29 Sep 2023 14:10:54 +0100 Subject: [PATCH 14/22] Remove Kubernetes References --- product_docs/docs/pgd/5/terminology.mdx | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 43ea6fb09f8..0691efaffd1 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -27,7 +27,7 @@ How [Raft](#raft) makes group-wide decisions. Given a number of nodes in a group #### Cluster -Generically, a cluster is a group of multiple redundant systems arranged to appear to end users as one system. See also [PGD clusters](#pgd-clusters), [Kubernetes clusters](#kubernetes-clusters) and [Postgres clusters](#postgres-cluster). +Generically, a cluster is a group of multiple redundant systems arranged to appear to end users as one system. See also [PGD clusters](#pgd-clusters) and [Postgres clusters](#postgres-cluster). #### DDL @@ -58,10 +58,6 @@ A synchronous commit mode which requires more than one PGD node to successfully A distributed computing model where all replicas are updated synchronously and simultaneously. This ensures that all reads after a write has completed will see the same value on all nodes. The downside of this approach is its negative impact on performance. -#### Kubernetes clusters - -A Kubernetes cluster is a group of machines that work together to run containerized applications. A [PGD Cluster](pgd-cluster) can be configured to run as containerized components on a Kubernetes Cluster. - #### Logical replication A more efficient method of replicating changes in the database. While physical streaming replication duplicate the originating database’s disk blocks, logical replication instead sees the DML commands - inserts, deletes and updates,- published to all systems that have subscribed to see the changes. Each subscriber then applies the changes locally. Logical replication is not able to support most DDL @@ -70,7 +66,7 @@ A more efficient method of replicating changes in the database. While physical s A general term for an element of a distributed system. A node can play host to any service. In PGD, there are [PGD Nodes](#pgd-node) which run a Postgres database and the BDR extension and optionally a PGD Proxy service. -Typically, for high availability, each node runs on separate physical hardware, but not necessarily. For example, in modern cloud platforms such as Kubermetes the hardware may be shared with the cloud. +Typically, for high availability, each node runs on separate physical hardware, but not necessarily. For example, in modern cloud platforms the hardware may be shared with the cloud. 
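The revised witness nodes wording above can be made concrete the same way. As a hedged sketch under the same shell-only assumptions as the earlier example, what matters is how many voters can fail while a majority survives:

```bash
# Voters that can fail while a Raft majority remains: n - (floor(n/2) + 1).
# Illustrative arithmetic only; not a PGD command.
tolerable_failures() { echo $(( $1 - ($1 / 2 + 1) )); }

tolerable_failures 2   # prints 0: two data nodes alone can't lose either one
tolerable_failures 3   # prints 1: adding a witness, which votes but holds
                       # no data, lets the group survive one failure
```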
#### Node groups

PGD Nodes in PGD clusters can be organized into groups to reflect the logical op

#### PGD cluster

-A group of multiple redundant database systems and proxies arranged to avoid single points of failure while appearing to end users as one system. PGD clusters may be run on Docker instances, [Kubernetes clusters](kubernetes-clusters), cloud instances or “bare” Linux hosts, or a combination of those platforms. A PGD cluster may also include backup and proxy nodes. The data nodes in a cluster are grouped together in a top level group and into various local [node groups](#node-groups).
+A group of multiple redundant database systems and proxies arranged to avoid single points of failure while appearing to end users as one system. PGD clusters may be run on Docker instances, cloud instances or “bare” Linux hosts, or a combination of those platforms. A PGD cluster may also include backup and proxy nodes. The data nodes in a cluster are grouped together in a top level group and into various local [node groups](#node-groups).

### PGD node

From 4bbfb81749a80cc3656163cd67974cc45c454be4 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Mon, 2 Oct 2023 11:55:52 +0100
Subject: [PATCH 15/22] Apply suggestions from review

More review comments fixed up.

---
 product_docs/docs/pgd/5/terminology.mdx | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx
index 0691efaffd1..0d6dea9150d 100644
--- a/product_docs/docs/pgd/5/terminology.mdx
+++ b/product_docs/docs/pgd/5/terminology.mdx
@@ -15,7 +15,7 @@ Rules for managing how transactions are committed between the nodes and groups o

 #### CAMO or commit-at-most-once

-High value transactions in some applications require that the application successfully commited exactly once, and in the event of failover and retrying, only one. To ensure this happens in PGD, CAMO can be enabled allowing the application to actively participate in the transaction.
+High value transactions in some applications require that they are successfully committed exactly once and, in the event of failover and retrying, only once. To ensure this happens in PGD, CAMO can be enabled, allowing the application to actively participate in the transaction.

 #### Conflicts

@@ -23,7 +23,7 @@ As data is replicated across the nodes of a PGD cluster, there may be occasions

 #### Consensus

-How [Raft](#raft) makes group-wide decisions. Given a number of nodes in a group, Raft looks for a consensus of the number of nodes/2+1 voting for a decision. For example, when a write leader is being selected, a Raft consensus is sought over which node in the group will be the write leader. Consensus can only be reached if there is a quorum of voting members.
+How [Raft](#raft) makes group-wide decisions. Given a number of nodes in a group, Raft looks for a consensus of the majority (the number of nodes divided by 2, plus 1) voting for a decision. For example, when a write leader is being selected, a Raft consensus is sought over which node in the group will be the write leader. Consensus can only be reached if there is a quorum of voting members.

 #### Cluster

@@ -60,13 +60,13 @@ A distributed computing model where all replicas are updated synchronously and s

 #### Logical replication

-A more efficient method of replicating changes in the database.
While physical streaming replication duplicate the originating database’s disk blocks, logical replication instead sees the DML commands - inserts, deletes and updates,- published to all systems that have subscribed to see the changes. Each subscriber then applies the changes locally. Logical replication is not able to support most DDL +A more efficient method of replicating changes in the database. While physical streaming replication duplicate the originating database’s disk blocks, logical replication instead takes the changes made, independent of the underlying physical storage format, and publishes them to all systems that have subscribed to see the changes. Each subscriber then applies the changes locally. Logical replication is not able to support most DDL #### Node A general term for an element of a distributed system. A node can play host to any service. In PGD, there are [PGD Nodes](#pgd-node) which run a Postgres database and the BDR extension and optionally a PGD Proxy service. -Typically, for high availability, each node runs on separate physical hardware, but not necessarily. For example, in modern cloud platforms the hardware may be shared with the cloud. +Typically, for high availability, each node runs on separate physical hardware, but not necessarily. For example, a proxy might share a hardware node with a database. #### Node groups @@ -83,11 +83,11 @@ In a PGD cluster, there are nodes which run databases and participate in the PGD #### Physical replication -By making an exact copy of database disk blocks as they are modified to one or more standby cluster members, physical replication provides an easily implemented method to replicate servers. But there are restrictions on how it can be used. For example, only one master node can run write transactions. Also, the method requires that all cluster members are on the same major version of the database software, in addition to several other more complex restrictions. +By making an exact copy of database disk blocks as they are modified to one or more standby cluster members, physical replication provides an easily implemented method to replicate servers. But there are restrictions on how it can be used. For example, only one master node can run write transactions. Also, the method requires that all cluster members are on the same major version of the database software with the same operating system and CPU architecture. #### Postgres cluster -Traditionally, in Postgresql, a number of databases running on a single server is referred to as a cluster (of databases). This kind of Postgres cluster is not highly available. To get high availability and redundancy, you need a [PGD Cluster](#pgd-cluster). +Traditionally, in PostgreSQL, a number of databases running on a single server is referred to as a cluster (of databases). This kind of Postgres cluster is not highly available. To get high availability and redundancy, you need a [PGD Cluster](#pgd-cluster). #### Quorum @@ -128,5 +128,5 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd #### Write leader -In [always-on architectures](#always_on_architecture), a node is selected as the correct connection endpoint for applications This node is called the write leader and once selected proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. 
If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. +In an [Always-On Architectures](#always_on_architecture), a node is selected as the correct connection endpoint for applications. This node is called the write leader, and once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. From 1282430899ed168a74367517026b437dd2172c8b Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Thu, 5 Oct 2023 08:53:54 +0100 Subject: [PATCH 16/22] Update product_docs/docs/pgd/5/terminology.mdx Co-authored-by: Lenz Grimmer --- product_docs/docs/pgd/5/terminology.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 0d6dea9150d..b0f513f690a 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -128,5 +128,5 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd #### Write leader -In an [Always-On Architectures](#always_on_architecture), a node is selected as the correct connection endpoint for applications. This node is called the write leader, and once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. +In an [Always-On Architecture](#always_on_architecture), a node is selected as the correct connection endpoint for applications. This node is called the write leader, and once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. From 2390a7b0432fa1fe2f8929b8b1f0d1067a772a9e Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Thu, 5 Oct 2023 09:06:09 +0100 Subject: [PATCH 17/22] Update product_docs/docs/pgd/5/terminology.mdx Co-authored-by: Lenz Grimmer --- product_docs/docs/pgd/5/terminology.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index b0f513f690a..07affc83028 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -7,7 +7,7 @@ There are many terms you will come across in EDB Postgres Distributed that you m #### Asynchronous replication -A type of replication that copies data to other PGD cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than [synchronous replication](#synchrous-replication). 
However, asynchronous replication can see a lag in how long changes take to appear in the various cluster members. While the cluster will be [eventually consistent](#eventual-consistency), there will be potential for nodes to be apparently out of sync with each other.
+A type of replication that copies data to other PGD cluster members after the transaction completes on the origin node. Asynchronous replication can provide higher performance and lower latency than [synchronous replication](#synchronous-replication). However, asynchronous replication can see a lag in how long changes take to appear in the various cluster members. While the cluster will be [eventually consistent](#eventual-consistency), there will be potential for nodes to be apparently out of sync with each other.

#### Commit scopes

From 3a7386b0f99b6f4db17784caf76c67b55bda357d Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Thu, 5 Oct 2023 09:06:41 +0100
Subject: [PATCH 18/22] Update product_docs/docs/pgd/5/terminology.mdx

Co-authored-by: Lenz Grimmer

---
 product_docs/docs/pgd/5/terminology.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx
index 07affc83028..b3e2ac252a2 100644
--- a/product_docs/docs/pgd/5/terminology.mdx
+++ b/product_docs/docs/pgd/5/terminology.mdx
@@ -27,7 +27,7 @@ How [Raft](#raft) makes group-wide decisions. Given a number of nodes in a group

 #### Cluster

-Generically, a cluster is a group of multiple redundant systems arranged to appear to end users as one system. See also [PGD clusters](#pgd-clusters) and [Postgres clusters](#postgres-cluster).
+Generically, a cluster is a group of multiple redundant systems arranged to appear to end users as one system. See also [PGD cluster](#pgd-cluster) and [Postgres cluster](#postgres-cluster).

 #### DDL

From 389a8953029ef640dcb7ff8f3cb7c0a9231f859d Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Thu, 5 Oct 2023 09:07:03 +0100
Subject: [PATCH 19/22] Update product_docs/docs/pgd/5/terminology.mdx

Co-authored-by: Lenz Grimmer

---
 product_docs/docs/pgd/5/terminology.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx
index b3e2ac252a2..4f0180db7de 100644
--- a/product_docs/docs/pgd/5/terminology.mdx
+++ b/product_docs/docs/pgd/5/terminology.mdx
@@ -40,7 +40,7 @@ Data Manipulation Language - The subset of SQL commands that deal with manipulat

 #### Eager

-Eager is a synchronous commit mode which avoids conflicts by detecting incoming potentially conflicting transactions and “eagerly” one of aborting them to maintain consistency.
+Eager is a synchronous commit mode that avoids conflicts by detecting incoming potentially conflicting transactions and “eagerly” aborting one of them to maintain consistency.
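The asynchronous and synchronous replication entries describe a latency-versus-consistency trade-off, and it can help to see where that trade-off lives in plain PostgreSQL configuration. This is a PostgreSQL-level sketch, not PGD configuration, and the standby name is an illustrative assumption:

```bash
# Append commented examples to postgresql.conf; assumes $PGDATA is set.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
# Empty synchronous_standby_names = asynchronous streaming replication:
# commits return immediately and standbys may briefly lag (eventual consistency).
#synchronous_standby_names = ''
# Naming a standby makes each commit wait for it: the latency cost of
# synchronous replication. 'standby1' is an example name only.
#synchronous_standby_names = 'standby1'
EOF
```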
#### Eventual consistency From edc021d59db89d38011943d605ce7a9f8231cb1e Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Thu, 5 Oct 2023 09:13:23 +0100 Subject: [PATCH 20/22] Remove Always-On Architecture link --- product_docs/docs/pgd/5/terminology.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 4f0180db7de..71d193981ab 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -128,5 +128,5 @@ Witness nodes primarily serve to help the cluster establish a consensus. An odd #### Write leader -In an [Always-On Architecture](#always_on_architecture), a node is selected as the correct connection endpoint for applications. This node is called the write leader, and once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. +In an Always-On Architecture, a node is selected as the correct connection endpoint for applications. This node is called the write leader, and once selected, proxy nodes route queries and updates to it. With only one node receiving writes, unintended multi-node writes can be avoided. The write leader is selected by consensus of a quorum of data nodes. If the write leader becomes unavailable, the data nodes select another node to become write leader. Nodes that aren't the write leader are referred to as *shadow nodes*. From 4f200e65619811d6599132e2c75ed4c623666609 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Thu, 5 Oct 2023 16:45:42 +0530 Subject: [PATCH 21/22] reverted the directory paths --- .../modifying_the_data_directory_location.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx index e56198cd4cc..1406e794994 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation/modifying_the_data_directory_location.mdx @@ -17,8 +17,8 @@ By default, data files reside under the `/var/lib/edb/as15/data` directory. 
To u
- In the `/etc/systemd/system/edb-as-15.service` file, update the following values with the new location of the data directory:

```text
- Environment=PGDATA=/tmp/as15/data
- PIDFile=/tmp/as15/data/postmaster.pid
+ Environment=PGDATA=/var/lib/edb/as15/data
+ PIDFile=/var/lib/edb/as15/data/postmaster.pid
```

- Go to the bin directory and initialize the cluster with the new location:

From 4c7ebe56df0525648ec079bcb3d217240b2ad466 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Thu, 5 Oct 2023 11:59:09 +0100
Subject: [PATCH 22/22] Update pgd_5.2.0_rel_notes.mdx

Missing a .0 in the highlights

---
 product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
index 46a162b3572..be6583a7bb5 100644
--- a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
+++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx
@@ -7,7 +7,7 @@ Released: 04 Aug 2023

 EDB Postgres Distributed version 5.2.0 is a minor version of EDB Postgres Distributed.

-## Highlights of EDB Postgres Distributed 5.2
+## Highlights of EDB Postgres Distributed 5.2.0

 * Parallel Apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope, improving replication performance.
 * Parallel Apply for native Postgres asynchronous and synchronous replication has been improved for workloads where the same key is being modified concurrently by multiple transactions, to maintain commit sequence and avoid deadlocks.
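A closing practical note on PATCH 21: after the `Environment=PGDATA` and `PIDFile` values in `edb-as-15.service` are corrected, systemd still holds the old unit definition in memory. A minimal sketch, assuming a systemd-managed EDB Postgres Advanced Server 15 host; these are standard systemd commands, and the service name comes from the unit file path shown above:

```bash
# Reload unit files so the edited PGDATA and PIDFile values take effect,
# then restart the service and confirm it came back up.
sudo systemctl daemon-reload
sudo systemctl restart edb-as-15
sudo systemctl status edb-as-15
```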