From 95e79a07ba020f0536724f443645fb75bf3eaeeb Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 31 May 2024 17:40:21 +0100 Subject: [PATCH 01/23] Fixes from PM review Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx | 4 ++-- product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx index 0eda5e3fdce..45e49e64f6f 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx @@ -14,9 +14,9 @@ We recommend that all users of PGD 5 upgrade to PGD 5.4.1. See [PGD/TPA upgrades ## Bug fixes -| Component | Version | Description | Addresses | +| Component | Version | Description | Tickets | |-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.4.1 | Fixed WAL retention logic: prevent a PGD node from running out of disk space due to a bug in 5.4.0 in combination with Postgres 16, PGE 16, and EPAS 16.
We now make sure WAL gets properly cleaned even after 4 GB of WAL produced on a node. A change in 5.4.0 caused WAL to be retained forever after that point. This issue affects only release PGD 5.4.0 in combination with Postgres 16, PGE 16, and EPAS 16. | | +| BDR | 5.4.1 |
Fixed WAL retention logic to prevent disk space issues on a PGD node with version 5.4.0 and Postgres 16, PGE 16, and EPAS 16.
A fix has been implemented to ensure proper cleanup of Write-Ahead Logs (WAL) even after reaching a size of 4 GB on a node. A change in version 5.4.0 resulted in WAL being retained indefinitely after reaching this threshold. This issue is specific to PGD 5.4.0 in conjunction with Postgres 16, PGE 16, and EPAS 16.
| | diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx index b8940b60046..eee123638ff 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx @@ -16,7 +16,7 @@ We recommend that all users of PGD 5 upgrade to PGD 5.5.1. See [PGD/TPA upgrades | Component | Version | Description | Ticket | |-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------| -| BDR | 5.5.1 |
Fix a data inconsistency issue with mixed version usage during upgradesUpgrading from any previous PGD version to PGD 5.5.0 may result in inconsistencies when replicating from the newer PGD 5.5.0 node to an older version PGD node. This release fixes mixed version operation to allow for smooth rolling upgrades.
| | -| BDR | 5.5.1 |
Disabled auto-triggering of node sync by defaultAutomatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)).
| 11510 | +| BDR | 5.5.1 |
Fix potential data inconsistency issue with mixed version usage during a rolling upgrade
Backward incompatible change in PGD 5.5.0 may lead to inconsistencies when replicating from a newer PGD 5.5.0 node to an older version of the PGD node, specifically during the mixed mode rolling upgrade.
This release addresses a backward compatibility issue in mixed version operation, enabling seamless rolling upgrades.
| | +| BDR | 5.5.1 |
Disabled auto-triggering of node sync by default
Automatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)).
| 11510 | From f41f6bed632dc3c82af342dea66aaa3e85ae76d7 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Fri, 31 May 2024 14:12:17 -0400 Subject: [PATCH 02/23] Edits to Patroni PR5656 --- .../patroni/installing_patroni.mdx | 13 ++++++------- .../patroni/installing_with_TPA.mdx | 2 +- 2 files changed, 7 insertions(+), 8 deletions(-) diff --git a/advocacy_docs/supported-open-source/patroni/installing_patroni.mdx b/advocacy_docs/supported-open-source/patroni/installing_patroni.mdx index c926e03e927..5cb1c43944f 100644 --- a/advocacy_docs/supported-open-source/patroni/installing_patroni.mdx +++ b/advocacy_docs/supported-open-source/patroni/installing_patroni.mdx @@ -7,16 +7,16 @@ tags: - Patroni --- -EDB provides Patroni to customers via the `edb-patroni` package. This package provides Patroni itself and all its dependencies, so there is no need to install additional packages on the Postgres nodes. +EDB provides Patroni to customers by way of the `edb-patroni` package. This package provides Patroni and all its dependencies, so you don't need to install additional packages on the Postgres nodes. -As the dependencies no longer have to be installed separately, there is no need to install the python3-cdiff, python3-psutil, python3-psycopg2, python3-ydiff, python3-click, python3-click, python3-six, python3-dateutil, python3-prettytable, python3-pyyaml, python3-urllib3, python3-etcd, python3-dns or python3-certifi packages. +Since the dependencies don't have to be installed separately, you don't need to install the python3-cdiff, python3-psutil, python3-psycopg2, python3-ydiff, python3-click, python3-click, python3-six, python3-dateutil, python3-prettytable, python3-pyyaml, python3-urllib3, python3-etcd, python3-dns, or python3-certifi packages. Packages are available for all subscribed customers with a valid EDB account under any entitlement (Community360, Standard, and Enterprise). !!! 
Note - The `edb-patroni` does not provide packages for the etcd server needed for the DCS cluster + The `edb-patroni` package doesn't provide packages for the etcd server needed for the DCS cluster. -Once you have the EDB repository configured on all the nodes of the cluster, run the following commands depending on the Linux distribution you are using. +Once you have the EDB repository configured on all the nodes of the cluster, run the following commands based on the Linux distribution you're using. ### Debian/Ubuntu @@ -25,7 +25,7 @@ sudo apt-get install -y edb-patroni ``` !!! Note - On Debian and Ubuntu installations, if you've previously installed the Patroni package named `patroni` using the EDB repositories, `apt upgrade` will not replace this package with the `edb-patroni` package. Executing `apt install edb-patroni` will install `edb-patroni` as a replacement of `patroni`. + On Debian and Ubuntu installations, if you previously installed the Patroni package named `patroni` using the EDB repositories, `apt upgrade` will not replace this package with the `edb-patroni` package. Executing `apt install edb-patroni` will install `edb-patroni` as a replacement of `patroni`. See [Quick start on Debian 11](debian11_quick_start/#4-patroni) for a more detailed configuration example. @@ -39,5 +39,4 @@ See [Quick start on RHEL8](rhel8_quick_start/#4-patroni) for a more detailed con ### Installing community packages -We also support community packages provided through PGDG repositories. Follow the [PGDG deb download instructions](https://www.postgresql.org/download/linux/debian/) to set up the `apt` repository, or the [PGDG rpm download instructions](https://yum.postgresql.org/) for the `yum` repository. Keep in mind that for PGDG rpm repositories you will need to configure Extra Packages for Enterprise Linux ([EPEL](https://fedoraproject.org/wiki/EPEL)). - +We also support community packages provided through PGDG repositories. 
Follow the [PGDG deb download instructions](https://www.postgresql.org/download/linux/debian/) to set up the `apt` repository or the [PGDG rpm download instructions](https://yum.postgresql.org/) for the `yum` repository. Keep in mind that you need to configure Extra Packages for Enterprise Linux ([EPEL](https://fedoraproject.org/wiki/EPEL)) for PGDG rpm repositories. diff --git a/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx b/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx index a1a67f4d5c0..c2de41b5e8f 100644 --- a/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx +++ b/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx @@ -10,4 +10,4 @@ tags: ### Deploying a Patroni cluster with TPA -The recommended way for deploying Patroni clusters is by doing so with TPA. We recommend going over the [TPA documentation](https://www.enterprisedb.com/docs/tpa/latest/) for further information on deploying M1 architectures with Patroni as the failover manager. +We recommend deploying Patroni clusters with TPA. See the [TPA documentation](https://www.enterprisedb.com/docs/tpa/latest/) for more information on deploying M1 architectures with Patroni as the failover manager. 
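Since the `edb-patroni` package doesn't provide the etcd server for the DCS, Patroni must be pointed at an etcd cluster you install separately. As a minimal sketch only — hostnames, ports, and paths below are hypothetical, and the quick starts linked above show complete configurations — the relevant parts of a `patroni.yml` might look like:

```yaml
scope: cluster1          # cluster name shared by all Patroni nodes
name: node1              # unique name of this node
restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008
etcd3:
  # The externally installed etcd cluster acting as the DCS
  hosts: 10.0.0.10:2379,10.0.0.11:2379,10.0.0.12:2379
postgresql:
  data_dir: /var/lib/pgsql/16/data
  bin_dir: /usr/pgsql-16/bin
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
```

The `etcd3` section is the piece the note above is about: it's where Patroni learns how to reach the DCS that `edb-patroni` doesn't ship.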
From bf3705d11b7d0dbb127259c3125db793ca3ab585 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Fri, 31 May 2024 14:15:23 -0400 Subject: [PATCH 03/23] Update installing_with_TPA.mdx --- .../supported-open-source/patroni/installing_with_TPA.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx b/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx index c2de41b5e8f..ba2ee18f972 100644 --- a/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx +++ b/advocacy_docs/supported-open-source/patroni/installing_with_TPA.mdx @@ -10,4 +10,4 @@ tags: ### Deploying a Patroni cluster with TPA -We recommend deploying Patroni clusters with TPA. See the [TPA documentation](https://www.enterprisedb.com/docs/tpa/latest/) for more information on deploying M1 architectures with Patroni as the failover manager. +We recommend deploying Patroni clusters using TPA. See the [TPA documentation](https://www.enterprisedb.com/docs/tpa/latest/) for more information on deploying M1 architectures with Patroni as the failover manager. 
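As a sketch of what TPA generates for such a deployment — key names are assumptions from TPA's configuration conventions, and `tpaexec configure` produces the complete file for you — a minimal M1 `config.yml` with Patroni as the failover manager might contain:

```yaml
architecture: M1
cluster_name: patroni-m1    # placeholder name
cluster_vars:
  postgres_version: '16'
  failover_manager: patroni
```

Typically you'd let `tpaexec configure <cluster-dir> --architecture M1 --failover-manager patroni` generate this rather than writing it by hand.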
From 1d196db936b09bdbc23a013ab602028ae0c9cbc7 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Fri, 31 May 2024 14:34:55 -0400 Subject: [PATCH 04/23] Edits to PGD4K PR5697 --- .../1/identify_images/assemble_command.mdx | 12 +++++------- .../1/identify_images/identify_image_name.mdx | 4 ++-- .../1/identify_images/index.mdx | 4 ++-- 3 files changed, 9 insertions(+), 11 deletions(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx index 87bb7c79507..12f40ae1570 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx @@ -2,9 +2,9 @@ title: 'Assembling a deployment command' --- -For a quick installation with the aim of testing the product, see the [Quick start](../quickstart/). +For a quick installation with the aim of testing the product, see [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart/). -For more targeted testing or production purposes, this overview describes how to assemble a command to deploy +For more targeted testing or production purposes, you can assemble a command to deploy EDB Postgres Distributed for Kubernetes with the operand and proxy image versions of your choice. ## Prerequisites @@ -36,11 +36,9 @@ helm upgrade --dependency-update \ ``` After assembling the command with the required images, -see [Installation](../installation_upgrade) for instructions on how to add the repository, +see [Installation](/postgres_distributed_for_kubernetes/latest/installation_upgrade) for instructions on how to add the repository, deploy the images, and create a certificate issuer. 
-For more information about how to assemble your command, see the examples: - ### Examples These example commands: @@ -66,7 +64,7 @@ Set the environment variables for the deployment command: export EDB_SUBSCRIPTION_TOKEN= ``` -1. Set the environment variable to use EBD Postgres Advanced Server 15.6.2 as the Postgres option, +1. Set the environment variable to use EDB Postgres Advanced Server 15.6.2 as the Postgres option and PGD 5.4.1 as the Postgres Distributed version: ``` @@ -104,7 +102,7 @@ Set the environment variables for the deployment command: export EDB_SUBSCRIPTION_TOKEN= ``` -1. Insert the repository, image, and proxy names into the command: +1. Insert the repository, image, and proxy names into the command: ``` helm upgrade --dependency-update \ diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx index ea314d7a152..83dd40ec1d9 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx @@ -2,9 +2,9 @@ title: 'Identifying image names' --- -For a quick installation with the aim of testing the product, see the [Quick start](../quickstart/). +For a quick installation with the aim of testing the product, see [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart/). For more targeted testing or production purposes, -this overview describes how to select a specific operand and proxy image version that's appropriate for your Postgres distribution. +you can select a specific operand and proxy image version that's appropriate for your Postgres distribution. 
## Operand image name diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx index 7ec92b1eadf..7512d02758d 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx @@ -6,8 +6,8 @@ navigation: - assemble_command --- -If you plan on deploying a specific version of Postgres Distributed or a specific Postgres distribution or version, you will need to select the appropriate images and image versions. -Before [installing EDB Postgres Distributed for Kubernetes](../installation_upgrade): +If you plan on deploying a specific version of Postgres Distributed or a specific Postgres distribution or version, you need to select the appropriate images and image versions. +Before [installing EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/installation_upgrade): 1. Identify your repository name and retrieve your user token as explained in [EDB private image registries](private_registries). 1. [Identify the image names for your environment](identify_image_name). 
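The deployment examples in this patch set environment variables before assembling the helm command. A sketch of that pattern — every value here is a placeholder for illustration, not a real repository name, tag, or credential:

```shell
# Placeholder values only — substitute your own EDB repository name,
# subscription token, and the image tags you identified.
export EDB_SUBSCRIPTION_TOKEN="replace-with-your-token"
export PGD_REPO="k8s_enterprise_pgd"          # hypothetical repository name
export PGD_IMAGE_NAME="edb-postgres-advanced" # operand image
export PGD_IMAGE_TAG="15.6.2-5.4.1-1"         # hypothetical operand tag
export PGD_PROXY_IMAGE_TAG="5.4.1-1"          # hypothetical proxy tag

# prints: operand: edb-postgres-advanced:15.6.2-5.4.1-1 (repo: k8s_enterprise_pgd)
echo "operand: ${PGD_IMAGE_NAME}:${PGD_IMAGE_TAG} (repo: ${PGD_REPO})"
```

These variables then slot into the `helm upgrade --dependency-update` invocation shown in the examples.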
From 340ba4e6dae7611cc2d821dc50ea4c790fc9bab1 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Fri, 31 May 2024 15:02:20 -0400 Subject: [PATCH 05/23] Edits to PGD PR5715 --- product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx | 4 ++-- product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx | 8 +++----- 2 files changed, 5 insertions(+), 7 deletions(-) diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx index 45e49e64f6f..9db05402b11 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.4.1_rel_notes.mdx @@ -5,7 +5,7 @@ navTitle: "Version 5.4.1" Released: 03 Apr 2024 -EDB Postgres Distributed version 5.4.1 is a patch release containing bug-fixes for EDB Postgres Distributed. +EDB Postgres Distributed version 5.4.1 is a patch release containing bug fixes for EDB Postgres Distributed. !!! Important Recommended upgrade We recommend that all users of PGD 5 upgrade to PGD 5.4.1. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. @@ -16,7 +16,7 @@ We recommend that all users of PGD 5 upgrade to PGD 5.4.1. See [PGD/TPA upgrades | Component | Version | Description | Tickets | |-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------| -| BDR | 5.4.1 |
Fixed WAL retention logic to prevent disk space issues on a PGD node with version 5.4.0 and Postgres 16, PGE 16, and EPAS 16.
A fix has been implemented to ensure proper cleanup of Write-Ahead Logs (WAL) even after reaching a size of 4 GB on a node. A change in version 5.4.0 resulted in WAL being retained indefinitely after reaching this threshold. This issue is specific to PGD 5.4.0 in conjunction with Postgres 16, PGE 16, and EPAS 16.
| | +| BDR | 5.4.1 |
Fixed WAL retention logic to prevent disk space issues on a PGD 5.4.0 node running Postgres 16, PGE 16, or EDB Postgres Advanced Server 16.
A fix was implemented to ensure proper cleanup of write-ahead logs (WAL) even after a node has produced 4 GB of WAL. A change in version 5.4.0 resulted in WAL being retained indefinitely after reaching this threshold. This issue is specific to PGD 5.4.0 in conjunction with Postgres 16, PGE 16, and EDB Postgres Advanced Server 16.
| | diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx index eee123638ff..19d9d422f4b 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx @@ -5,7 +5,7 @@ navTitle: "Version 5.5.1" Released: 31 Mar 2024 -EDB Postgres Distributed version 5.5.1 is a patch release containing bug-fixes for EDB Postgres Distributed. +EDB Postgres Distributed version 5.5.1 is a patch release containing bug fixes for EDB Postgres Distributed. !!! Important Recommended upgrade We recommend that all users of PGD 5 upgrade to PGD 5.5.1. See [PGD/TPA upgrades](../upgrades/tpa_overview) for details. @@ -16,7 +16,5 @@ We recommend that all users of PGD 5 upgrade to PGD 5.5.1. See [PGD/TPA upgrades | Component | Version | Description | Ticket | |-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------| -| BDR | 5.5.1 |
Fix potential data inconsistency issue with mixed version usage during a rolling upgrade
Backward incompatible change in PGD 5.5.0 may lead to inconsistencies when replicating from a newer PGD 5.5.0 node to an older version of the PGD node, specifically during the mixed mode rolling upgrade.
This release addresses a backward compatibility issue in mixed version operation, enabling seamless rolling upgrades.
| | -| BDR | 5.5.1 |
Disabled auto-triggering of node sync by default
Automatically triggered synchronization of data from a down node caused issues by failing to resume once it came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)).
| 11510 | - - +| BDR | 5.5.1 |
Fixed a potential data inconsistency issue with mixed-version usage during a rolling upgrade.
A backward-incompatible change in PGD 5.5.0 may lead to inconsistencies when replicating from a newer PGD 5.5.0 node to a node running an older PGD version, specifically during a mixed-mode rolling upgrade.
This release addresses a backward-compatibility issue in mixed-version operation, enabling seamless rolling upgrades.
| | +| BDR | 5.5.1 |
Disabled auto-triggering of node sync by default.
Automatically triggered synchronization of data from a down node could fail to resume once the node came back up. As a precautionary measure, the feature is now disabled by default (PGD setting [`bdr.enable_auto_sync_reconcile`](/pgd/latest/reference/pgd-settings#bdrenable_auto_sync_reconcile)).
| 11510 | From adccc05000db671d53ce34cf778f4a89e9a57176 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 3 Jun 2024 08:31:15 +0100 Subject: [PATCH 06/23] Update product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx --- product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx index 19d9d422f4b..4379675a399 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.1_rel_notes.mdx @@ -3,7 +3,7 @@ title: "EDB Postgres Distributed 5.5.1 release notes" navTitle: "Version 5.5.1" --- -Released: 31 Mar 2024 +Released: 31 May 2024 EDB Postgres Distributed version 5.5.1 is a patch release containing bug fixes for EDB Postgres Distributed. From 0f761c0cdaca55a66a62657b00129421c8f23d4b Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 29 May 2024 15:54:16 +0100 Subject: [PATCH 07/23] Updated image, some small text changes Signed-off-by: Dj Walker-Morgan --- .../ai-ml/images/pgai-overview.png | 4 ++-- .../ai-ml/install-tech-preview.mdx | 18 ++++++++++-------- .../edb-postgres-ai/ai-ml/overview.mdx | 8 +++++--- 3 files changed, 17 insertions(+), 13 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png index 1d82930782f..d219ec2ffa1 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png +++ b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:6202298c01b91caa7a8ea8b81fe3265cf4a9bc13bf6715ebc6008a92613211fd -size 67416 +oid sha256:aa0ca7bad0dfc1b494df9353390adebf324f4b7ccd26630955d378b2f949f5ea +size 67791 diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx 
b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index 89535cad785..e6c97ab137d 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -10,9 +10,11 @@ The preview release of pgai is distributed as a self-contained Docker container ## Configuring and running the container image If you haven’t already, sign up for an EDB account and log in to the EDB container registry. + + Login to docker with your the username tech-preview and your EDB Repo 2.0 Subscription Token as your password: -``` +```shell docker login docker.enterprisedb.com -u tech-preview -p __OUTPUT__ Login Succeeded @@ -20,7 +22,7 @@ Login Succeeded Download the pgai container image: -``` +```shell docker pull docker.enterprisedb.com/tech-preview/pgai __OUTPUT__ ... @@ -30,26 +32,26 @@ docker.enterprisedb.com/tech-preview/pgai:latest Specify a password to use for Postgres in the environment variable PGPASSWORD. The tech preview container will set up Postgres with this password and use it to connect to it. In bash or zsh set it as follows: -``` +```shell export PGPASSWORD= ``` You can use the pgai extension with encoder LLMs in Open AI or with open encoder LLMs from HuggingFace. If you want to use Open AI you also must provide your API key for that in the OPENAI_API_KEY environment variable: -``` +```shell export OPENAI_API_KEY= ``` You can use the pgai extension with AI data stored in Postgres tables or on S3 compatible object storage. To work with object storage you need to specify the ACCESS_KEY and SECRET_KEY environment variables:. -``` +```shell export ACCESS_KEY= export SECRET_KEY= ``` Start the pgai tech preview container with the following command. 
It makes the tech preview PostgreSQL database available on local port 15432: -``` +```shell docker run -d --name pgai \ -e ACCESS_KEY=$ACCESS_KEY \ -e SECRET_KEY=$SECRET_KEY \ @@ -65,13 +67,13 @@ docker run -d --name pgai \ If you haven’t yet, install the Postgres command-line tools. If you’re on a Mac, using Homebrew, you can install it as follows: -``` +```shell brew install libpq ``` Connect to the tech preview PostgreSQL running in the container. Note that this relies on $PGPASSWORD being set - if you’re using a different terminal for this part, make sure you re-export the password: -``` +```shell $ psql -h localhost -p 15432 -U postgres postgres __OUTPUT__ psql (16.1, server 16.3 (Debian 16.3-1.pgdg120+1)) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx index becb50a0f74..91df905c519 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx @@ -11,9 +11,11 @@ The pgai extension is currently available as a tech preview. It will be continuo ![PGAI Overview](images/pgai-overview.png) -pgai introduced the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. +pgai introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. -A retriever encapsulates all processing that is needed to make the AI data in the provided source location searchable and retrievable via similarity. 
The application just needs to create a retriever via the pgai.create_retriever() function. When auto_embedding=TRUE is specified the pgai extension will automatically generate embeddings for all the data in the source location. Otherwise it will be up to the application to request a bulk generation of embeddings via pgai.refresh_retriever(). +A retriever encapsulates all processing that is needed to make the AI data in the provided source location searchable and retrievable through similarity. The application just needs to create a retriever via the `pgai.create_retriever()` function. When `auto_embedding=TRUE` is specified the pgai extension will automatically generate embeddings for all the data in the source location. + +Otherwise it will be up to the application to request a bulk generation of embeddings using `pgai.refresh_retriever()`. Auto embedding is currently supported for AI data stored in Postgres tables and it automates the embedding updates using Postgres triggers. You can also combine the two options by using pgai.refresh_retriever() to embed all previously existing data and also setting `auto_embedding=TRUE` to generate embeddings for all new and changed data from now on. @@ -21,7 +23,7 @@ All embedding generation, storage, indexing and management is handled by the pga Once a retriever is created and all embeddings are up to date, the application can just use pgai.retrieve() to run a similarity search and retrieval by providing a query input. When the retriever is created for text data, the query input is also a text term. For image retrievers the query input is an image. The pgai retriever makes sure to use the same encoder LLM for the query input, conducts a similarity search and finally returns the ranked list of similar data from the source location. -pgai currently supports a broad list of open encoder LLMs from HuggingFace as well as a set of OpenAI encoders. Just consult the list of supported encoder LLMs in the pgai.encoders meta table. 
HuggingFace LLMs are running locally on the Postgres node, while OpenAI encoders involve a call out to the OpenAI cloud service. +pgai currently supports a broad list of open encoder LLMs from HuggingFace as well as a set of OpenAI encoders. Consult the list of supported encoder LLMs in the pgai.encoders meta table. HuggingFace LLMs are running locally on the Postgres node, while OpenAI encoders involve a call out to the OpenAI cloud service. From a6605b076445b463bfc549eccf8e192fa6b6fd45 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 29 May 2024 16:48:20 +0100 Subject: [PATCH 08/23] model corrections, full sync with final docs Signed-off-by: Dj Walker-Morgan --- .../ai-ml/install-tech-preview.mdx | 7 ++-- ...functions.mdx => additional_functions.mdx} | 37 +++++++++++++++++-- .../working-with-ai-data-in-S3.mdx | 35 +----------------- .../working-with-ai-data-in-postgres.mdx | 12 +++--- 4 files changed, 45 insertions(+), 46 deletions(-) rename advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/{standalone-embedding-functions.mdx => additional_functions.mdx} (52%) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index e6c97ab137d..527fa43c002 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -74,7 +74,7 @@ brew install libpq Connect to the tech preview PostgreSQL running in the container. Note that this relies on $PGPASSWORD being set - if you’re using a different terminal for this part, make sure you re-export the password: ```shell -$ psql -h localhost -p 15432 -U postgres postgres +psql -h localhost -p 15432 -U postgres postgres __OUTPUT__ psql (16.1, server 16.3 (Debian 16.3-1.pgdg120+1)) Type "help" for help. 
@@ -86,15 +86,16 @@ postgres=# Install the pgai extension: ```sql -postgres=# create extension pgai cascade; +create extension pgai cascade; __OUTPUT__ NOTICE: installing required extension "plpython3u" NOTICE: installing required extension "vector" CREATE EXTENSION +postgres=# ``` ```sql -postgres=# \dx +\dx __OUTPUT__ List of installed extensions Name | Version | Schema | Description diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/standalone-embedding-functions.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx similarity index 52% rename from advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/standalone-embedding-functions.mdx rename to advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx index 809273d4102..893657651e0 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/standalone-embedding-functions.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx @@ -1,10 +1,12 @@ --- -title: Stand-alone Embedding Functions in pgai -navTitle: Stand-alone Embedding Functions -description: Use the pgai extension to generate embeddings for images and text. +title: Additional functions and stand-alone embedding in pgai +navTitle: Additional functions +description: Other pgai extension functions and how to generate embeddings for images and text. --- -Use generate_single_image_embedding function to get embeddings for the given image. Currently, model_provider can only be openai or huggingface. You can check the list of valid embedding models and model providers from the Encoders Supported PGAI section. +## Standalone embedding + +Use the `generate_single_image_embedding` function to get embeddings for the given image. Currently, `model_provider` can only be `openai` or `huggingface`. You can check the list of valid embedding models and model providers from the Encoders Supported PGAI section. 
```sql SELECT pgai.generate_single_image_embedding( @@ -37,6 +39,7 @@ __OUTPUT__ (1 row) ``` +## Supported encoders You can check the list of valid embedding models and model providers from pgai.encoders table @@ -50,4 +53,30 @@ __OUTPUT__ (2 rows) ``` +## Available functions + +You can find the complete list of currently available functions of the pgai extension by selecting from `information_schema.routines` any `routine_name` belonging to the pgai routine schema: + + +``` +SELECT routine_name from information_schema.routines WHERE routine_schema='pgai'; +__OUTPUT__ + routine_name +--------------------------------- + init + create_pg_retriever + create_s3_retriever + _embed_table_update + refresh_retriever + retrieve + retrieve_via_s3 + register_prompt_template + render_prompt + generate + ag + rag + generate_text_embedding + generate_single_image_embedding +(14 rows) +``` diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx index 905359f81c8..fab9c220596 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx @@ -14,12 +14,11 @@ First let’s create a retriever for images stored on s3-compatible object stora SELECT pgai.create_s3_retriever( 'image_embeddings', -- Name of the similarity retrieval setup 'public', -- Schema of the source table - 'img_id', -- Primary key 'clip-vit-base-patch32', -- Embeddings encoder model for similarity data 'img', -- data type, could be either img or text 'torsten', -- S3 bucket name - 'https://s3.us-south.cloud-object-storage.appdomain.cloud', -- s3 endpoint address - '' -- prefix + '', -- prefix + 'https://s3.us-south.cloud-object-storage.appdomain.cloud' -- s3 endpoint address ); __OUTPUT__ create_s3_retriever @@ -57,33 +56,3 @@ __OUTPUT__ (1 row) ``` -If you set 
the `ACCESS_KEY` and `SECRET_KEY` you can use the following queries to run without an `s3_endpoint` setting using a command like this: - -```sql -SELECT pgai.create_s3_retriever( - 'img_file_embeddings', -- Name of the similarity retrieval setup - 'demo', -- Schema of the source table - 'img_id', -- Primary key - 'clip-vit-base-patch32', -- Embeddings encoder model for similarity data - 'img', -- data type - 'bilge-ince-test' -- S3 bucket name -); -__OUTPUT__ - create_s3_retriever ---------------------- - -(1 row) -``` - -```sql -SELECT pgai.refresh_retriever('img_file_embeddings'); -``` - -```sql -SELECT data from pgai.retrieve_via_s3('img_file_embeddings', - 1, - 'bilge-ince-test', - 'kirpis_small.jpg', - 'http://s3.eu-central-1.amazonaws.com' - ); -``` diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx index f0af183386a..aec055f14c4 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx @@ -22,14 +22,14 @@ CREATE TABLE ``` -Now let’s create a retriever with the just created products table as the source. We specify product_id as the unique key column to and we define the product_name and description columns to use for the similarity search by the retriever. We use the text-embeddings-3-small open encoder model from HuggingFace. We set auto_embedding to True so that any future insert, update or delete to the source table will automatically generate, update or delete also the corresponding embedding. We provide a name for the retriever so that we can identify and reference it subsequent operations: +Now let’s create a retriever with the just created products table as the source. 
We specify product_id as the unique key column and we define the product_name and description columns to use for the similarity search by the retriever. We use the `all-MiniLM-L6-v2` open encoder model from HuggingFace. We set `auto_embedding` to True so that any future insert, update or delete to the source table will automatically generate, update or delete also the corresponding embedding. We provide a name for the retriever so that we can identify and reference it in subsequent operations: ```sql SELECT pgai.create_pg_retriever( 'product_embeddings_auto', -- Retriever name 'public', -- Schema 'product_id', -- Primary key - 'text-embedding-3-small', -- embedding model + 'all-MiniLM-L6-v2', -- embedding model 'text', -- data type 'products', -- Source table ARRAY['product_name', 'description'], -- Columns to vectorize @@ -87,7 +87,7 @@ SELECT pgai.create_pg_retriever( 'product_embeddings_bulk', -- Retriever name 'public', -- Schema 'product_id', -- Primary key - 'text-embedding-3-small', -- embedding model + 'all-MiniLM-L6-v2', -- embedding model 'text', -- data type 'products', -- Source table ARRAY['product_name', 'description'], -- Columns to vectorize @@ -103,7 +103,7 @@ __OUTPUT__ We created this second retriever on the products table after we have inserted the AI records there. If we run a retrieve operation now we would not get back any results: -``` +```sql SELECT data FROM pgai.retrieve( 'I like it', -- The query text to retrieve the top similar data 5, -- top K @@ -117,7 +117,7 @@ __OUTPUT__ That’s why we first need to run a bulk generation of embeddings. This is achieved via the `refresh_retriever()` function: -``` +```sql SELECT pgai.refresh_retriever( 'product_embeddings_bulk' -- name of the retriever ); @@ -229,4 +229,4 @@ __OUTPUT__ We used the two different retrievers for the same source data just to demonstrate the workings of auto embedding compared to explicit `refresh_retriever()`.
In practice you may want to combine auto embedding and refresh_retriever() in a single retriever to conduct an initial embedding of data that existed before you created the retriever and then rely on auto embedding for any future data that is ingested, updated or deleted. -You should consider relying on refresh_retriever() only, without auto embedding, if you typically ingest a lot of AI data at once in batch manner. +You should consider relying on `refresh_retriever()` only, without auto embedding, if you typically ingest a lot of AI data at once in a batched manner. From e94c95017b4a47ab6209cc697d49f6f966940609 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 30 May 2024 17:26:02 +0100 Subject: [PATCH 09/23] Full image replacement Signed-off-by: Dj Walker-Morgan --- .../edb_postgres_ai_platform_artificial_intelligence-v4.png | 3 +++ advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png | 3 --- advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) create mode 100644 advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png delete mode 100644 advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png new file mode 100644 index 00000000000..bab6416ed98 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:acc112c5b73f7e848148a0668f09055047ee7ee846e97654153dfe60aea231f8 +size 242510 diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png deleted file mode 100644 index d219ec2ffa1..00000000000 --- 
a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:aa0ca7bad0dfc1b494df9353390adebf324f4b7ccd26630955d378b2f949f5ea -size 67791 diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx index 91df905c519..fc13572e447 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx @@ -9,7 +9,7 @@ At the heart of EDB Postgres® AI is the EDB Postgres AI database (pgai). This b The pgai extension is currently available as a tech preview. It will be continuously extended with new functions. Here comes a description of the major functionality available to date. -![PGAI Overview](images/pgai-overview.png) +![PGAI Overview](images/edb_postgres_ai_platform_artificial_intelligence-v4.png) pgai introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. From 14b6329371dce7a0eb925e8c0d7790f4ad070530 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 31 May 2024 08:27:19 +0100 Subject: [PATCH 10/23] Revert "Full image replacement" This reverts commit 0d3887a8ffc861083eedb31604e418bd45a36b0a. 
--- .../edb_postgres_ai_platform_artificial_intelligence-v4.png | 3 --- advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png | 3 +++ advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) delete mode 100644 advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png create mode 100644 advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png deleted file mode 100644 index bab6416ed98..00000000000 --- a/advocacy_docs/edb-postgres-ai/ai-ml/images/edb_postgres_ai_platform_artificial_intelligence-v4.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:acc112c5b73f7e848148a0668f09055047ee7ee846e97654153dfe60aea231f8 -size 242510 diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png new file mode 100644 index 00000000000..d219ec2ffa1 --- /dev/null +++ b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa0ca7bad0dfc1b494df9353390adebf324f4b7ccd26630955d378b2f949f5ea +size 67791 diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx index fc13572e447..91df905c519 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx @@ -9,7 +9,7 @@ At the heart of EDB Postgres® AI is the EDB Postgres AI database (pgai). This b The pgai extension is currently available as a tech preview. It will be continuously extended with new functions. Here comes a description of the major functionality available to date. 
-![PGAI Overview](images/edb_postgres_ai_platform_artificial_intelligence-v4.png) +![PGAI Overview](images/pgai-overview.png) pgai introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. From 1f0de8e01b371168d668cfde6b6427997ec9ad21 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 31 May 2024 11:15:50 +0100 Subject: [PATCH 11/23] Update image with background to show white text Signed-off-by: Dj Walker-Morgan --- advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png index d219ec2ffa1..afe4e5b5c43 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png +++ b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:aa0ca7bad0dfc1b494df9353390adebf324f4b7ccd26630955d378b2f949f5ea -size 67791 +oid sha256:1c96c8a593ede10be25dfd55acddf812ec28358166d8772769a54f0c316cb239 +size 62873 From 6a3b18fd40164f0a2c19dadfa368cb25d935bc4b Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 31 May 2024 14:58:25 +0100 Subject: [PATCH 12/23] Moved filenames around to avoid stacked LFS commit issue Signed-off-by: Dj Walker-Morgan --- .../{pgai-overview.png => pgai-overview-withbackground.png} | 0 advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx | 2 +- 2 files changed, 1 insertion(+), 1 deletion(-) rename advocacy_docs/edb-postgres-ai/ai-ml/images/{pgai-overview.png => pgai-overview-withbackground.png} (100%) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png 
b/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview-withbackground.png similarity index 100% rename from advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview.png rename to advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview-withbackground.png diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx index 91df905c519..8b118c057dc 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx @@ -9,7 +9,7 @@ At the heart of EDB Postgres® AI is the EDB Postgres AI database (pgai). This b The pgai extension is currently available as a tech preview. It will be continuously extended with new functions. Here comes a description of the major functionality available to date. -![PGAI Overview](images/pgai-overview.png) +![PGAI Overview](images/pgai-overview-withbackground.png) pgai introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. From c0054b8d8ba5b06d9c25581a81ec121138090654 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Thu, 30 May 2024 16:24:46 +0530 Subject: [PATCH 13/23] PEM - minor fixes Removed duplicate release note. Added release note as per PEM-5127. 
Added content to pem agent configuration parameter as per PEM-5095 --- .../modifying_agent_configuration.mdx | 70 +++++++++---------- .../pem/9/pem_rel_notes/960_rel_notes.mdx | 2 +- 2 files changed, 36 insertions(+), 36 deletions(-) diff --git a/product_docs/docs/pem/9/managing_pem_agent/modifying_agent_configuration.mdx b/product_docs/docs/pem/9/managing_pem_agent/modifying_agent_configuration.mdx index b41c8a541e9..128b0043594 100644 --- a/product_docs/docs/pem/9/managing_pem_agent/modifying_agent_configuration.mdx +++ b/product_docs/docs/pem/9/managing_pem_agent/modifying_agent_configuration.mdx @@ -14,41 +14,41 @@ Most agent configuration is managed automatically. We recommend against manually On Linux systems, PEM configuration options are stored in the `agent.cfg` file, located in `/usr/edb/pem/agent/etc`. The `agent.cfg` file contains the entries shown in the following table. -| Parameter name | Description | Default value | -| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------- | -| pem_host | The IP address or hostname of the PEM server. | 127.0.0.1. | -| pem_port | The database server port to which the agent connects to communicate with the PEM server. | Port 5432. | -| pem_agent | A unique identifier assigned to the PEM agent. | The first agent is '1', the second agent is '2', and so on. | -| agent_ssl_key | The complete path to the PEM agent's key file. | /root/.pem/agent.key | -| agent_ssl_crt | The complete path to the PEM agent's certificate file. | /root/.pem/agent.crt | -| agent_flag_dir | Used for HA support. 
Specifies the directory path checked for requests to take over monitoring another server. Requests are made in the form of a file in the specified flag directory. | Not set by default. | -| log_level | Specifies the type of event to write to the PEM log files, one of `debug2`, `debug`, `info`, `warning`, `error`. These are in descending order of logging verbosity; `debug2` logs everything possible, and `error` only logs errors. | warning | -| log_location | Specifies the location of the PEM worker log file. | 127.0.0.1. | -| agent_log_location | Specifies the location of the PEM agent log file. | /var/log/pem/agent.log | -| long_wait | The maximum length of time (in seconds) for the PEM agent to wait before attempting to connect to the PEM server if an initial connection attempt fails. | 30 seconds | -| short_wait | The minimum length of time (in seconds) for the PEM agent to wait before checking which probes are next in the queue waiting to run. | 10 seconds | -| alert_threads | The number of alert threads to be spawned by the agent. For more information, see [About alert threads](#about-alert-threads). | Set to 1 for the agent that resides on the host of the PEM server, 0 for all other agents. | -| enable_smtp | When set to true, this agent will attempt to send email notifications as configured in the PEM web application. | true for PEM server host, false for all others. | -| enable_snmp | When set to true, this agent will attempt to send SNMP notifications as configured in the PEM web application. | true for PEM server host, false for all others. | -| enable_nagios | When set to true, Nagios alerting is enabled. | true for PEM server host, false for all others. | -| enable_webhook | When set to true, Webhook alerting is enabled. | true for PEM server host, false for all others. | -| max_webhook_retries | Used to set the maximum number of times pemAgent retries to call webhooks on failure. | Default 3. 
| -| connect_timeout | The maximum time in seconds (a decimal integer string) for the agent to wait for a connection. | Not set by default. Set to 0 to indicate for the agent to wait indefinitely. | -| allow_server_restart | If set to TRUE, the agent can restart the database server that it monitors. Some PEM features might be enabled/disabled, depending on the value of this parameter. | False | -| max_connections | The maximum number of probe connections used by the connection throttler. | 0 (an unlimited number) | -| connection_lifetime | Used ConnectionLifetime (or connection_lifetime) to specify the minimum number of seconds an open but idle connection is retained. This parameter is ignored if the value specified in MaxConnections is reached and a new connection to a different database is required to satisfy a waiting request. | By default, set to 0 (a connection is dropped when the connection is idle after the agent's processing loop). | -| allow_batch_probes | If set to TRUE, the user can create batch probes using the custom probes feature. | false | -| heartbeat_connection | When set to TRUE, a dedicated connection is used for sending the heartbeats. | false | -| batch_script_dir | Provides the path where script file (for alerting) is stored. | /tmp | -| connection_custom_setup | Used to provide SQL code to invoke when a new connection with a monitored server is made. | Not set by default. | -| ca_file | The path to a CA certificate to use instead of the platform default for verifying webhook server certificates. You can override this value with the `--webhook_ssl_ca_crt` option when defining webhooks. | Not set by default. | -| batch_script_user | The name of the user to use for executing the batch/shell scripts. | None | -| **Webhook parameters** | You can specify the following options multiple times. Each time, precede the option with a header of the form `[WEBHOOK/]`, where `` is the name of a previously created webhook. 
These settings are automatically added when webhooks are created. We don't recommend adding them manually.|| -| webhook_ssl_key | The complete path to the webhook's SSL client key file. | | -| webhook_ssl_crt | The complete path to the webhook's SSL client certificate file. | | -| webhook_ssl_crl | The complete path of the CRL file to validate webhook server certificate. | | -| webhook_ssl_ca_crt | The complete path to the webhook's SSL ca certificate file. | | -| allow_insecure_webhooks | When set to true, allow webhooks to call with insecure flag. | false | +| Parameter name | Description | Default value | +|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------| +| pem_host | The IP address or hostname of the PEM server. | 127.0.0.1. | +| pem_port | The database server port to which the agent connects to communicate with the PEM server. | Port 5432. | +| pem_agent | A unique identifier assigned to the PEM agent. | The first agent is '1', the second agent is '2', and so on. | +| agent_ssl_key | The complete path to the PEM agent's key file. | /root/.pem/agent.key | +| agent_ssl_crt | The complete path to the PEM agent's certificate file. | /root/.pem/agent.crt | +| agent_flag_dir | Used for HA support. Specifies the directory path checked for requests to take over monitoring another server. Requests are made in the form of a file in the specified flag directory. | Not set by default. | +| log_level | Specifies the type of event to write to the PEM log files, one of `debug2`, `debug`, `info`, `warning`, `error`. 
These are in descending order of logging verbosity; `debug2` logs everything possible, and `error` only logs errors. | warning | +| log_location | Specifies the location of the PEM worker log file. | 127.0.0.1. | +| agent_log_location | Specifies the location of the PEM agent log file. | /var/log/pem/agent.log | +| long_wait | The maximum length of time (in seconds) for the PEM agent to wait before attempting to connect to the PEM server if an initial connection attempt fails. | 30 seconds | +| short_wait | The minimum length of time (in seconds) for the PEM agent to wait before checking which probes are next in the queue waiting to run. | 10 seconds | +| alert_threads | The number of alert threads to be spawned by the agent. For more information, see [About alert threads](#about-alert-threads). | Set to 1 for the agent that resides on the host of the PEM server, 0 for all other agents. | +| enable_smtp | When set to true, this agent will attempt to send email notifications as configured in the PEM web application. | true for PEM server host, false for all others. | +| enable_snmp | When set to true, this agent will attempt to send SNMP notifications as configured in the PEM web application. | true for PEM server host, false for all others. | +| enable_nagios | When set to true, Nagios alerting is enabled. | true for PEM server host, false for all others. | +| enable_webhook | When set to true, Webhook alerting is enabled. | true for PEM server host, false for all others. | +| max_webhook_retries | Used to set the maximum number of times pemAgent retries to call webhooks on failure. | Default 3. | +| connect_timeout | The maximum time in seconds (a decimal integer string) for the agent to wait for a connection. | Not set by default. Set to 0 to indicate for the agent to wait indefinitely. | +| allow_server_restart | If set to TRUE, the agent can restart the database server that it monitors. 
Some PEM features might be enabled/disabled, depending on the value of this parameter. | False | +| max_connections | The maximum number of probe connections used by the connection throttler. | 0 (an unlimited number) | +| connection_lifetime | Used ConnectionLifetime (or connection_lifetime) to specify the minimum number of seconds an open but idle connection is retained. This parameter is ignored if the value specified in MaxConnections is reached and a new connection to a different database is required to satisfy a waiting request. | By default, set to 0 (a connection is dropped when the connection is idle after the agent's processing loop). | +| allow_batch_probes | If set to TRUE, the user can create batch probes using the custom probes feature. | false | +| heartbeat_connection | When set to TRUE, a dedicated connection is used for sending the heartbeats. If set to TRUE, the `max_connections` parameter must be set to greater than 1. | false | +| batch_script_dir | Provides the path where script file (for alerting) is stored. | /tmp | +| connection_custom_setup | Used to provide SQL code to invoke when a new connection with a monitored server is made. | Not set by default. | +| ca_file | The path to a CA certificate to use instead of the platform default for verifying webhook server certificates. You can override this value with the `--webhook_ssl_ca_crt` option when defining webhooks. | Not set by default. | +| batch_script_user | The name of the user to use for executing the batch/shell scripts. | None | +| **Webhook parameters** | You can specify the following options multiple times. Each time, precede the option with a header of the form `[WEBHOOK/]`, where `` is the name of a previously created webhook. These settings are automatically added when webhooks are created. We don't recommend adding them manually. | | +| webhook_ssl_key | The complete path to the webhook's SSL client key file. 
| | +| webhook_ssl_crt | The complete path to the webhook's SSL client certificate file. | | +| webhook_ssl_crl | The complete path of the CRL file to validate webhook server certificate. | | +| webhook_ssl_ca_crt | The complete path to the webhook's SSL ca certificate file. | | +| allow_insecure_webhooks | When set to true, allow webhooks to call with insecure flag. | false | ## Contents of the registry diff --git a/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx b/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx index b130c7bd57f..df4ec1b6dd4 100644 --- a/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx +++ b/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx @@ -17,7 +17,7 @@ New features, enhancements, bug fixes, and other changes in PEM 9.6.0 include: | Enhancement | Added an option to download the Capacity Manager report in JSON format. | | Enhancement | You can now specify a CA Certificate path when registering a Barman server using pemworker without having to specify a client certificate and key. | | Bug fix | Fixed an issue where audit manager was setting `edb_audit_statement` parameter to '' instead of 'none' when log statement parameter was left empty in the GUI resulting in failure of server restart. | -| Bug fix | Fixed an issue where audit manager was setting `edb_audit_statement` parameter to '' instead of 'none' when the log statement parameter was left empty in the GUI. This issue resulted in the failure of server restart. | +| Bug fix | Fixed an issue where `os_info` probe was throwing error whenever timezone/timestamp were changed. | | Bug fix | Fixed an issue whereby an error "`NoneType` has no `len()`" occurred when alerts were listed on the alert dashboard. | | Bug fix | Fixed an issue where team role wasn't working with PEM agents as expected. | | Bug fix | Fixed an issue with the SQL query for the PEM agent registration. 
Added the public schema name in the table prefix to execute the SQL query successfully even if the schema name isn't in the search path. | From 88b0824a141dd97fddea9a7214219d94f7495255 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 31 May 2024 14:15:04 +0530 Subject: [PATCH 14/23] fixed the release notes as per comment from Shubam --- product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx b/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx index df4ec1b6dd4..d2764f1d669 100644 --- a/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx +++ b/product_docs/docs/pem/9/pem_rel_notes/960_rel_notes.mdx @@ -17,7 +17,7 @@ New features, enhancements, bug fixes, and other changes in PEM 9.6.0 include: | Enhancement | Added an option to download the Capacity Manager report in JSON format. | | Enhancement | You can now specify a CA Certificate path when registering a Barman server using pemworker without having to specify a client certificate and key. | | Bug fix | Fixed an issue where audit manager was setting `edb_audit_statement` parameter to '' instead of 'none' when log statement parameter was left empty in the GUI resulting in failure of server restart. | -| Bug fix | Fixed an issue where `os_info` probe was throwing error whenever timezone/timestamp were changed. | +| Bug fix | Fixed an issue where `os_info` probe was throwing error whenever timezone/timestamp were changed to some specific values. | | Bug fix | Fixed an issue whereby an error "`NoneType` has no `len()`" occurred when alerts were listed on the alert dashboard. | | Bug fix | Fixed an issue where team role wasn't working with PEM agents as expected. | | Bug fix | Fixed an issue with the SQL query for the PEM agent registration. 
Added the public schema name in the table prefix to execute the SQL query successfully even if the schema name isn't in the search path. | From 4fcd62cd6c16560277582eba6ce12a6dcc454b98 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 3 Jun 2024 09:34:31 +0100 Subject: [PATCH 15/23] Update advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx Co-authored-by: gvasquezvargas --- advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index 527fa43c002..23c4d5a7843 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -12,7 +12,7 @@ The preview release of pgai is distributed as a self-contained Docker container If you haven’t already, sign up for an EDB account and log in to the EDB container registry. 
-Login to docker with your the username tech-preview and your EDB Repo 2.0 Subscription Token as your password: +Log in to Docker with the username tech-preview and your EDB Repo 2.0 Subscription Token as your password: ```shell docker login docker.enterprisedb.com -u tech-preview -p From 12725626e111c81e996a31133cca67438ae08137 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 30 May 2024 10:57:02 +0100 Subject: [PATCH 16/23] Fix the text around delete_recently_updated Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/3.7/bdr/conflicts.mdx | 20 +++++++++---------- product_docs/docs/pgd/4/bdr/conflicts.mdx | 18 ++++++++--------- .../docs/pgd/5/consistency/conflicts.mdx | 2 +- 3 files changed, 20 insertions(+), 20 deletions(-) diff --git a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx index 336d203b2d4..3724047bf77 100644 --- a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx +++ b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx @@ -326,15 +326,15 @@ case BDR cannot differentiate between `UPDATE`/`DELETE` conflicts and [INSERT/UPDATE Conflicts] and will simply generate the `update_missing` conflict. -Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` operation -that comes after the row was `UPDATEd` locally. In this situation, the -outcome depends upon the type of conflict detection used. When using the -default, [Origin Conflict Detection], no conflict is detected at all, -leading to the `DELETE` being applied and the row removed. If you enable -[Row Version Conflict Detection], a `delete_recently_updated` conflict is -generated. The default resolution for this conflict type is to to apply the -`DELETE` and remove the row, but this can be configured or handled via -a conflict trigger. +Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` operation that +comes after the row was `UPDATEd` locally. In this situation, the outcome +depends upon the type of conflict detection used.
When using the default, +[Origin Conflict Detection], no conflict is detected at all, leading to the +`DELETE` being applied and the row removed. If you enable [Row Version Conflict +Detection], a `delete_recently_updated` conflict is generated. The default +resolution for a `delete_recently_updated` conflict is to `skip` the deletion. +However, you can configure the resolution or a conflict trigger can be +configured to handle it. #### INSERT/UPDATE Conflicts @@ -350,7 +350,7 @@ from the `UPDATE` when possible (when the whole row was received). For the reconstruction of the row to be possible, the table either needs to have `REPLICA IDENTITY FULL` or the row must not contain any TOASTed data. -See [TOAST Support Details] for more info about TOASTed data. +See [TOAST Support Details](#toast-support-details) for more info about TOASTed data. #### INSERT/DELETE Conflicts diff --git a/product_docs/docs/pgd/4/bdr/conflicts.mdx b/product_docs/docs/pgd/4/bdr/conflicts.mdx index c48fa48065e..4984beed7d3 100644 --- a/product_docs/docs/pgd/4/bdr/conflicts.mdx +++ b/product_docs/docs/pgd/4/bdr/conflicts.mdx @@ -323,15 +323,15 @@ case, BDR can't differentiate between `UPDATE`/`DELETE` conflicts and [INSERT/UPDATE conflicts](#insertupdate-conflicts) and generates the `update_missing` conflict. -Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` -that comes after the row was updated locally. In this situation, the -outcome depends on the type of conflict detection used. When using the -default, [origin conflict detection](#origin-conflict-detection), no conflict is detected at all, -leading to the `DELETE` being applied and the row removed. If you enable -[row version conflict detection](#row-version-conflict-detection), a `delete_recently_updated` conflict is -generated. The default resolution for this conflict type is to apply the -`DELETE` and remove the row, but you can configure this or this can be handled by -a conflict trigger. 
+Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` that comes after +the row was updated locally. In this situation, the outcome depends on the type +of conflict detection used. When using the default, [origin conflict +detection](#origin-conflict-detection), no conflict is detected at all, leading +to the `DELETE` being applied and the row removed. If you enable [row version +conflict detection](#row-version-conflict-detection), a +`delete_recently_updated` conflict is generated. The default resolution for a +`delete_recently_updated` conflict is to `skip` the deletion. However, you can +configure the resolution or a conflict trigger can be configured to handle it. #### INSERT/UPDATE conflicts diff --git a/product_docs/docs/pgd/5/consistency/conflicts.mdx b/product_docs/docs/pgd/5/consistency/conflicts.mdx index 0c95109560d..36c915ac514 100644 --- a/product_docs/docs/pgd/5/consistency/conflicts.mdx +++ b/product_docs/docs/pgd/5/consistency/conflicts.mdx @@ -211,7 +211,7 @@ If the deleted row is still detectable (the deleted row wasn't removed by `VACUU The database can clean up the deleted row by the time the `UPDATE` is received in case the local node is lagging behind in replication. In this case, PGD can't differentiate between `UPDATE`/`DELETE` conflicts and [INSERT/UPDATE conflicts](#insertupdate-conflicts). It generates the `update_missing` conflict. Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` that comes after the row was updated locally. In this situation, the outcome depends on the type of conflict detection used. When using the -default, [origin conflict detection](#origin-conflict-detection), no conflict is detected, leading to the `DELETE` being applied and the row removed. If you enable [row version conflict detection](#row-version-conflict-detection), a `delete_recently_updated` conflict is generated. The default resolution for this conflict type is to apply the `DELETE` and remove the row. 
However, you can configure this or a conflict trigger can handled it. +default, [origin conflict detection](#origin-conflict-detection), no conflict is detected, leading to the `DELETE` being applied and the row removed. If you enable [row version conflict detection](#row-version-conflict-detection), a `delete_recently_updated` conflict is generated. The default resolution for a `delete_recently_updated` conflict is to `skip` the deletion. However, you can configure the resolution or a conflict trigger can be configured to handle it. #### INSERT/UPDATE conflicts From 13e4000445ff79d1f9a9a31cd1a841ee3064ccc3 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 30 May 2024 11:22:55 +0100 Subject: [PATCH 17/23] Fix 3.7 links too Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/3.7/bdr/conflicts.mdx | 43 ++++++++++----------- 1 file changed, 21 insertions(+), 22 deletions(-) diff --git a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx index 3724047bf77..c44f932ea73 100644 --- a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx +++ b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx @@ -114,7 +114,7 @@ user-defined conflict handler. This conflict will generate the `insert_exists` conflict type, which is by default resolved by choosing the newer (based on commit time) row and keeping only that one (`update_if_newer` resolver). Other resolvers can be configured - -see [Conflict Resolution] for details. +see [Conflict Resolution](#conflict-resolution) for details. To resolve this conflict type, you can also use column-level conflict resolution and user-defined conflict triggers. @@ -167,11 +167,11 @@ BDR cannot currently perform conflict resolution where the `PRIMARY KEY` is changed by an `UPDATE` operation. It is permissible to update the primary key, but you must ensure that no conflict with existing values is possible. 
-Conflicts on the update of the primary key are [Divergent Conflicts] and -require manual operator intervention. +Conflicts on the update of the primary key are [Divergent +Conflicts](#divergent-conflicts) and require manual operator intervention. -Updating a PK is possible in PostgreSQL, but there are -issues in both PostgreSQL and BDR. +Updating a PK is possible in PostgreSQL, but there are issues in both PostgreSQL +and BDR. Let's create a very simple example schema to explain: @@ -318,12 +318,12 @@ It is possible for one node to `UPDATE` a row that another node simultaneously If the `DELETE`d row is still detectable (the deleted row wasn't removed by `VACUUM`), the `update_recently_deleted` conflict will be generated. By default the `UPDATE` will just be skipped, but the resolution for this can be configured; -see [Conflict Resolution] for details. +see [Conflict Resolution](#conflict-resolution) for details. The deleted row can be cleaned up from the database by the time the `UPDATE` is received in case the local node is lagging behind in replication. In this case BDR cannot differentiate between `UPDATE`/`DELETE` -conflicts and [INSERT/UPDATE Conflicts] and will simply generate the +conflicts and [INSERT/UPDATE Conflicts](#insertupdate-conflicts) and will simply generate the `update_missing` conflict. Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` operation that @@ -340,7 +340,7 @@ configured to handle it. When using the default asynchronous mode of operation, a node may receive an `UPDATE` of a row before the original `INSERT` was received. This can only -happen with 3 or more nodes being active (see [Conflicts with 3 or more nodes] below). +happen with 3 or more nodes being active (see [Conflicts with 3 or more nodes](#conflicts-with-3-or-more-nodes)). When this happens, the `update_missing` conflict is generated. 
The default conflict resolver is `insert_or_skip`, though `insert_or_error` or `skip` @@ -357,7 +357,7 @@ See [TOAST Support Details](#toast-support-details) for more info about TOASTed Similarly to the `INSERT`/`UPDATE` conflict, the node may also receive a `DELETE` operation on a row for which it didn't receive an `INSERT` yet. This is again only possible with 3 or more nodes set up (see [Conflicts with 3 or -more nodes] below). +more nodes](#conflicts-with-3-or-more-nodes)). BDR cannot currently detect this conflict type: the `INSERT` operation will not generate any conflict type and the `INSERT` will be applied. @@ -409,7 +409,7 @@ these conflicts. Note however that enabling this option opens the door for If these are problems, it's recommended to tune freezing settings for a table or database so that they are correctly detected as `update_recently_deleted`. -Another alternative is to use [Eager Replication] to prevent these conflicts. +Another alternative is to use [Eager Replication](eager) to prevent these conflicts. INSERT/DELETE conflicts can also occur with 3 or more nodes. Such a conflict is identical to `INSERT`/`UPDATE`, except with the @@ -776,8 +776,8 @@ mechanisms to cope with the conflict. BDR provides these mechanisms for conflict detection: -- [Origin Conflict Detection] \(default) -- [Row Version Conflict Detection] +- [Origin Conflict Detection](#origin-conflict-detection) (default) +- [Row Version Conflict Detection](#row-version-conflict-detection) - [Column-Level Conflict Detection](column-level-conflicts) ### Origin Conflict Detection @@ -865,7 +865,7 @@ Alternatively, BDR provides the option to use row versioning and make conflict detection independent of the nodes' system clock. Row version conflict detection requires 3 things to be enabled. If any of these -steps are not performed correctly then [Origin Conflict Detection] will be used. 
+steps are not performed correctly then [Origin Conflict Detection](#origin-conflict-detection) will be used. 1. `check_full_tuple` must be enabled for the BDR node group. @@ -883,12 +883,11 @@ Although the counter is incremented only on UPDATE, this technique allows This approach resembles Lamport timestamps and fully prevents the ABA problem for conflict detection. -!!! Note - The row-level conflict resolution is still handled based on the - [Conflict Resolution] configuration even with row versioning. The way - the row version is generated is only useful for detection of conflicts - and should not be relied to as authoritative information about which - version of row is newer. +!!! Note The row-level conflict resolution is still handled based on the + [Conflict Resolution](#conflict-resolution) configuration even with row + versioning. The way the row version is generated is only useful for + detection of conflicts and should not be relied to as authoritative + information about which version of row is newer. To determine the current conflict resolution strategy used for a specific table, refer to the column `conflict_detection` of the view `bdr.tables`. @@ -916,9 +915,9 @@ bdr.alter_table_conflict_detection(relation regclass, The recognized methods for conflict detection are: -- `row_origin` - origin of the previous change made on the tuple (see - [Origin Conflict Detection] above). This is the only method supported which - does not require an extra column in the table. +- `row_origin` - origin of the previous change made on the tuple (see [Origin + Conflict Detection](#origin-conflict-detection)). This is the only method + supported which does not require an extra column in the table. - `row_version` - row version column (see [Row Version Conflict Detection] above). 
- `column_commit_timestamp` - per-column commit timestamps (described in the From ad2425f6fd9fe356030e89547fdc9b981f48897d Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 3 Jun 2024 10:51:28 +0200 Subject: [PATCH 18/23] roll back link alterations --- .../1/identify_images/assemble_command.mdx | 4 ++-- .../1/identify_images/identify_image_name.mdx | 2 +- .../1/identify_images/index.mdx | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx index 12f40ae1570..eee22c3228e 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/assemble_command.mdx @@ -2,7 +2,7 @@ title: 'Assembling a deployment command' --- -For a quick installation with the aim of testing the product, see [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart/). +For a quick installation with the aim of testing the product, see [Quick start](../quickstart/). For more targeted testing or production purposes, you can assemble a command to deploy EDB Postgres Distributed for Kubernetes with the operand and proxy image versions of your choice. @@ -36,7 +36,7 @@ helm upgrade --dependency-update \ ``` After assembling the command with the required images, -see [Installation](/postgres_distributed_for_kubernetes/latest/installation_upgrade) for instructions on how to add the repository, +see [Installation](../installation_upgrade) for instructions on how to add the repository, deploy the images, and create a certificate issuer. 
### Examples diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx index 83dd40ec1d9..061419ea6bd 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/identify_image_name.mdx @@ -2,7 +2,7 @@ title: 'Identifying image names' --- -For a quick installation with the aim of testing the product, see [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart/). +For a quick installation with the aim of testing the product, see [Quick start](../quickstart/). For more targeted testing or production purposes, you can select a specific operand and proxy image version that's appropriate for your Postgres distribution. diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx index 7512d02758d..369e26ddf3c 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/index.mdx @@ -7,7 +7,7 @@ navigation: --- If you plan on deploying a specific version of Postgres Distributed or a specific Postgres distribution or version, you need to select the appropriate images and image versions. -Before [installing EDB Postgres Distributed for Kubernetes](/postgres_distributed_for_kubernetes/latest/installation_upgrade): +Before [installing EDB Postgres Distributed for Kubernetes](../installation_upgrade): 1. Identify your repository name and retrieve your user token as explained in [EDB private image registries](private_registries). 1. [Identify the image names for your environment](identify_image_name). 
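An editorial aside on patch 18 above: the rollback mechanically replaces absolute product links with relative ones. A hedged sketch of that rewrite as a script, where the prefix and targets come from the hunks themselves, but the hardcoded `../` depth and the function name are illustrative assumptions (the real edits were made by hand):

```python
# Illustrative sketch of the link rollback in patch 18: rewrite absolute
# /postgres_distributed_for_kubernetes/latest/... link targets as relative
# ones. Assumes the file lives one level below the product root, as the
# identify_images/*.mdx files do, so the prefix maps to "../".
PREFIX = "/postgres_distributed_for_kubernetes/latest/"

def relativize(markdown: str) -> str:
    # Only touch the prefix where it opens a Markdown link target "(".
    return markdown.replace("(" + PREFIX, "(../")

print(relativize("see [Quick start](/postgres_distributed_for_kubernetes/latest/quickstart/)."))
# see [Quick start](../quickstart/).
```

Relative links keep the pages working when the docs are served under a version other than `latest`, which is the motivation behind the rollback.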
From 36dcf00bd2ea6c0fadbb31d0b09b73bcd3c1aba4 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 3 Jun 2024 10:12:51 +0100 Subject: [PATCH 19/23] Update product_docs/docs/pgd/3.7/bdr/conflicts.mdx Co-authored-by: gvasquezvargas --- product_docs/docs/pgd/3.7/bdr/conflicts.mdx | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx index c44f932ea73..c4fec9384c1 100644 --- a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx +++ b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx @@ -883,7 +883,8 @@ Although the counter is incremented only on UPDATE, this technique allows This approach resembles Lamport timestamps and fully prevents the ABA problem for conflict detection. -!!! Note The row-level conflict resolution is still handled based on the +!!! Note + The row-level conflict resolution is still handled based on the [Conflict Resolution](#conflict-resolution) configuration even with row versioning. 
The way the row version is generated is only useful for detection of conflicts and should not be relied to as authoritative From 24e716d95a3902305558c570c5ba21f3f58f7070 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 3 Jun 2024 10:13:03 +0100 Subject: [PATCH 20/23] Update product_docs/docs/pgd/3.7/bdr/conflicts.mdx Co-authored-by: gvasquezvargas --- product_docs/docs/pgd/3.7/bdr/conflicts.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx index c4fec9384c1..2ca74c59e81 100644 --- a/product_docs/docs/pgd/3.7/bdr/conflicts.mdx +++ b/product_docs/docs/pgd/3.7/bdr/conflicts.mdx @@ -329,7 +329,7 @@ conflicts and [INSERT/UPDATE Conflicts](#insertupdate-conflicts) and will simply Another type of conflicting `DELETE` and `UPDATE` is a `DELETE` operation that comes after the row was `UPDATEd` locally. In this situation, the outcome depends upon the type of conflict detection used. When using the default, -[Origin Conflict Detection], no conflict is detected at all, leading to the +[Origin Conflict Detection](#origin-conflict-detection), no conflict is detected at all, leading to the `DELETE` being applied and the row removed. If you enable [Row Version Conflict Detection], a `delete_recently_updated` conflict is generated. The default resolution for a `delete_recently_updated` conflict is to `skip` the deletion. 
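The `delete_recently_updated` corrections in patches 16, 17, and 20 all converge on the same documented behavior: with row version conflict detection enabled, a `DELETE` that arrives after a local `UPDATE` is skipped by default, and only configuration or a conflict trigger makes the deletion win. A toy model of that decision, purely for illustration (the function and row shape are invented, not PGD code):

```python
# Toy model of the delete_recently_updated resolution documented above.
# Invented names and row shape; not PGD's implementation.
def resolve_delete_recently_updated(local_row, resolution="skip"):
    """Return the surviving row (or None) when a DELETE arrives for a
    row that was more recently updated locally."""
    if resolution == "skip":
        # Documented default: the deletion is skipped, the updated row wins.
        return local_row
    if resolution == "delete":
        # A configured resolver or a conflict trigger may still apply the DELETE.
        return None
    raise ValueError(f"unknown resolution: {resolution}")

row = {"id": 1, "value": "updated locally"}
assert resolve_delete_recently_updated(row) is row
assert resolve_delete_recently_updated(row, "delete") is None
```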
From 15e2c22bb3103511cebeea08be53b55a71c3bd2b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 21 May 2024 12:52:00 -0400 Subject: [PATCH 21/23] Edits to PGD PR5625 --- .../pgd/5/rel_notes/pgd_5.5.0_rel_notes.mdx | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.0_rel_notes.mdx index 29806a56fdf..7b47fd026e0 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.5.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.5.0_rel_notes.mdx @@ -35,7 +35,7 @@ Postgres Distributed. | Component | Version | Description | Ticket | |-----------|---------|------------------------------------------------------------------------------------------------|--------| | BDR | 5.5.0 | Added support for read-only proxy routing. | | -| BDR | 5.5.0 | Improve stability of routing leader selection by using Raft heartbeat for connectivity check. | | +| BDR | 5.5.0 | Improved stability of routing leader selection by using Raft heartbeat for connectivity check. | | | CLI | 5.5.0 | Added PGD CLI binaries for macOS. | | | Proxy | 5.5.0 | Added support for read-only proxy routing. | | @@ -45,21 +45,21 @@ Postgres Distributed. | Component | Version | Description | Ticket | |-----------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------|----------------| | BDR | 5.5.0 | Improved bulk INSERT/UPDATE/DELETE performance by sending multiple messages together in a group rather than individually. | | -| BDR | 5.5.0 | Changes received by the writer as now not saved to a temporary file. | | +| BDR | 5.5.0 | Changes received by the writer now aren't saved to a temporary file. | | | BDR | 5.5.0 | BDR now logs completion of an extension upgrade. | | | BDR | 5.5.0 | Added restrictions for group commit options. 
| | | BDR | 5.5.0 | Each autopartition task is now executed in its own transaction. | RT101407/35476 | | BDR | 5.5.0 | DETACH CONCURRENTLY is now used to drop partitions. | RT101407/35476 | | BDR | 5.5.0 | Node group creation on a node bad state is now disallowed. | | | BDR | 5.5.0 | Granted additional object permissions to role `bdr_read_all_stats`. | | -| BDR | 5.5.0 | Improve stability of manager worker and Raft consensus by not throwing error on non-fatal dynamic shared memory read failures. | | +| BDR | 5.5.0 | Improved stability of manager worker and Raft consensus by not throwing error on non-fatal dynamic shared memory read failures. | | | BDR | 5.5.0 | Improved stability of Raft consensus and workers by handling dynamic shared memory errors in the right place. | | | BDR | 5.5.0 | The number of changes processed by writer in a large transaction is now exposed in [`bdr.writers`](/pgd/latest/reference/catalogs-visible#bdrwriters). | | | BDR | 5.5.0 | `bdr_init_physical` now stops the initial replication connection and starts it only when needed. | RT102828/35305 | | BDR | 5.5.0 | `bdr_superuser` is now granted use of `pg_file_settings` and `pg_show_all_file_settings()`. | | | CLI | 5.5.0 | Added new read scalability related options to JSON output of `show-proxies ` and `show-groups` commands. | | | CLI | 5.5.0 | Added new option called `proxy-mode` to `create-proxy` command for read scalability support. | | -| CLI | 5.5.0 | Added raft leader in tabular output of `show-groups` command. | | +| CLI | 5.5.0 | Added Raft leader in tabular output of `show-groups` command. | | ## Bug fixes @@ -68,17 +68,15 @@ Postgres Distributed. |-----------|---------|------------------------------------------------------------------------------------------------------------------------------|----------------| | BDR | 5.5.0 | Improved handling of node group configuration parameter "check_constraints". 
| RT99956/31896 | | BDR | 5.5.0 | Fixed incorrect parsing of pre-commit message that caused nodes to diverge on commit decision for group commit transactions. | | -| BDR | 5.5.0 | Prevent potential segfault in `bdr.monitor_group_versions()` | RT102290/34051 | +| BDR | 5.5.0 | Fixed an issue to prevent potential segfault in `bdr.monitor_group_versions()` | RT102290/34051 | | BDR | 5.5.0 | BDR now correctly elects a new leader when the current leader gets route_writes turned off. | | | BDR | 5.5.0 | `bdr.remove_commit_scope()` now handles non-existent commit scope. | | | BDR | 5.5.0 | An improved queue flush process now prevents unexpected writer terminations. | RT98966/35447 | | BDR | 5.5.0 | Fixed multi-row conflict accidentally deleting the wrong tuple multiple times . | | | BDR | 5.5.0 | Fixed receiver to send status update when writer is blocked, avoiding slot disconnect. | | -| BDR | 5.5.0 | Fix minor memory leak during `bdr_join_node_group_sql`. | | +| BDR | 5.5.0 | Fixed minor memory leak during `bdr_join_node_group_sql`. | | | BDR | 5.5.0 | Node joining with witness and standby nodes as source nodes is now disallowed. | | -| BDR | 5.5.0 | Use `bdr.default_sequence_kind` when updating sequence kind of existing sequences upon node creation. | | +| BDR | 5.5.0 | Now use `bdr.default_sequence_kind` when updating sequence kind of existing sequences upon node creation. | | | BDR | 5.5.0 | Fixed a bug preventing some trusted extension management commands (CREATE/ALTER) from being replicated. | | | BDR | 5.5.0 | Fixed a non-critical segfault which could occur in upgrades from BDR 3.7. | | -| BDR | 5.5.0 | Manage rights elevation for trusted extensions | | - - +| BDR | 5.5.0 | Fixed an issue to manage rights elevation for trusted extensions. 
| | From bd3de0ac8b7cf6deaeeb900fb1a8aab8a752c8bf Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Thu, 23 May 2024 15:50:04 -0400 Subject: [PATCH 22/23] Edits to Migration Toolkit PR5658 --- product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx index 3d31145e9a3..01a579f42d3 100644 --- a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx +++ b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx @@ -259,19 +259,18 @@ To correct the problem, specify `-fetchSize 1` as a command-line argument when y ### Incorrect timestamps and Daylight Saving Time -When migrating from SQL Server to PostgreSQL using the MSSQL JDBC driver there may be no errors observed. If the source database contains rows with timestamp values which are within a range of time when Daylight Savings Time was in effect, they will be migrated to the target database with the wrong timestamps. +When migrating from SQL Server to PostgreSQL using the MSSQL JDBC driver, the migration may error without reporting any errors. However, if the source database contains rows with timestamp values that are within a range of time when Daylight Savings Time was in effect, those rows will be migrated to the target database with the wrong timestamps. -To resolve this issue you can update the `runMTK.sh` file and provide the option `-Duser.timezone=GMT`. This will be in operation then when running the toolkit, +To resolve this issue, update the `runMTK.sh` file, and provide the option `-Duser.timezone=GMT`. This option will then be in effect when you run the toolkit. 
-For example, given the original line: +For example, suppose this is the original line: ```text runJREApplication $JAVA_HEAP_SIZE -Dprop=$base/etc/toolkit.properties -cp $base/bin/edb-migrationtoolkit.jar:$base/lib/* com.edb.MigrationToolkit "$@" ``` -This should be updated with the `-Duser.timezone=GMT` inserted before the `-cp` option: +Update this line by inserting `-Duser.timezone=GMT` before the `-cp` option: ```text runJREApplication $JAVA_HEAP_SIZE -Dprop=$base/etc/toolkit.properties -Duser.timezone=GMT -cp $base/bin/edb-migrationtoolkit.jar:$base/lib/* com.edb.MigrationToolkit "$@" ``` - From 38f996939d38c3c9812aecdacc1f7bfdd4ecbf8b Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 3 Jun 2024 12:14:53 +0200 Subject: [PATCH 23/23] timestamp error wording Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx index 01a579f42d3..5067097113e 100644 --- a/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx +++ b/product_docs/docs/migration_toolkit/55/09_mtk_errors.mdx @@ -259,7 +259,7 @@ To correct the problem, specify `-fetchSize 1` as a command-line argument when y ### Incorrect timestamps and Daylight Saving Time -When migrating from SQL Server to PostgreSQL using the MSSQL JDBC driver, the migration may error without reporting any errors. However, if the source database contains rows with timestamp values that are within a range of time when Daylight Savings Time was in effect, those rows will be migrated to the target database with the wrong timestamps. +When migrating from SQL Server to PostgreSQL using the MSSQL JDBC driver, an error in the migration of timestamps may occur that is not reported. 
Specifically, if the source database contains rows with timestamp values that are within a range of time when Daylight Saving Time was in effect, those rows will be migrated to the target database with the wrong timestamps.

To resolve this issue, update the `runMTK.sh` file, and provide the option `-Duser.timezone=GMT`. This option will then be in effect when you run the toolkit.
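A closing note on why the timestamp fix in the last two patches works: GMT has a single fixed UTC offset, while a DST-observing default zone changes its offset by an hour across the year, so wall-clock values from the DST window are reinterpreted when the JVM's default zone is applied. A small sketch of that offset shift (Python here purely for illustration; Migration Toolkit itself is Java, and `-Duser.timezone=GMT` is the setting the document recommends):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# A DST-observing zone has two different UTC offsets across the year.
ny = ZoneInfo("America/New_York")
summer = datetime(2023, 7, 1, 12, 0, tzinfo=ny)   # EDT, UTC-4
winter = datetime(2023, 1, 1, 12, 0, tzinfo=ny)   # EST, UTC-5
assert summer.utcoffset() == timedelta(hours=-4)
assert winter.utcoffset() == timedelta(hours=-5)

# GMT has one fixed offset, so the same wall-clock value always maps to
# the same instant, which is the property -Duser.timezone=GMT relies on.
gmt = ZoneInfo("Etc/GMT")
assert datetime(2023, 7, 1, 12, 0, tzinfo=gmt).utcoffset() == timedelta(0)
```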