diff --git a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx
index 0fd7f1dcfd1..fb1c932c7b3 100644
--- a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx
+++ b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx
@@ -25,5 +25,4 @@ yum update edb*
!!! Note
The `yum update` command performs an update only between minor releases. To update between major releases, use `pg_upgrade`.
-For more information about using yum commands and options, enter `yum --help` at the command line.
-
+For more information about using `yum` commands and options, enter `yum --help` at the command line.
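For reference, the minor-version flow this hunk documents comes down to two commands; a sketch only (root privileges via `sudo` are assumed, and `yum check-update` is an optional preview step):

```shell
# Preview which installed EDB packages have minor updates available.
sudo yum check-update edb*
# Apply minor-version updates to all installed EDB packages.
sudo yum update edb*
```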
diff --git a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx
index 12eca3c3d70..c64b0af55ec 100644
--- a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx
+++ b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx
@@ -4,45 +4,44 @@ redirects:
- /epas/latest/epas_upgrade_guide/06_using_stackbuilder_plus_to_perform_a_minor_version_update/
---
-StackBuilder Plus is supported only on Windows systems.
+!!! Note
+ StackBuilder Plus is supported only on Windows systems.
The StackBuilder Plus utility provides a graphical interface that simplifies the process of updating, downloading, and installing modules that complement your EDB Postgres Advanced Server installation. When you install a module with StackBuilder Plus, StackBuilder Plus resolves any software dependencies.
-You can invoke StackBuilder Plus at any time after the installation has completed by selecting the **Apps >StackBuilder Plus**. Enter your system password if prompted, and the StackBuilder Plus welcome window opens.
+1. To invoke StackBuilder Plus at any time after the installation has completed, select **Apps > StackBuilder Plus**. If prompted, enter your system password. The StackBuilder Plus welcome window opens.
-![The StackBuilder Plus welcome window](images/the_stackBuilder_plus_welcome.png)
+ ![The StackBuilder Plus welcome window](images/the_stackBuilder_plus_welcome.png)
-Select your EDB Postgres Advanced Server installation.
+1. Select your EDB Postgres Advanced Server installation.
-StackBuilder Plus requires internet access. If your installation of EDB Postgres Advanced Server resides behind a firewall (with restricted internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy. Currently, all updates are transferred by an HTTP proxy, and the FTP proxy information isn't used.
+ StackBuilder Plus requires internet access. If your installation of EDB Postgres Advanced Server is behind a firewall (with restricted internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy. Currently, all updates are transferred by an HTTP proxy, and the FTP proxy information isn't used.
-If the selected EDB Postgres Advanced Server installation has restricted Internet access, on the Welcome screen, select **Proxy Servers** ti open the Proxy servers dialog box:
+1. If the selected EDB Postgres Advanced Server installation has restricted internet access, on the Welcome screen, select **Proxy Servers** to open the Proxy servers dialog box:
-![The Proxy servers dialog](images/the_proxy_servers_dialog.png)
+ ![The Proxy servers dialog](images/the_proxy_servers_dialog.png)
-On the dialog box, nter the IP address and port number of the proxy server in the **HTTP proxy** box. Currently, all StackBuilder Plus modules are distributed by HTTP proxy (FTP proxy information is ignored).
+1. On the dialog box, enter the IP address and port number of the proxy server in the **HTTP proxy** box. Currently, all StackBuilder Plus modules are distributed by HTTP proxy (FTP proxy information is ignored). Select **OK**.
-Select **OK**.
+ ![The StackBuilder Plus module selection window](images/the_stackBuilder_plus_module_selection_window.png)
-![The StackBuilder Plus module selection window](images/the_stackBuilder_plus_module_selection_window.png)
+ The tree control on the StackBuilder Plus module selection window displays a node for each module category.
-The tree control on the StackBuilder Plus module selection window displays a node for each module category.
+1. To add a component to the selected EDB Postgres Advanced Server installation or to upgrade a component, select the box to the left of the module name, and select **Next**.
-To add a component to the selected EDB Postgres Advanced Server installation or to upgrade a component, select the box to the left of the module name and select **Next**.
+1. If prompted, enter your email address and password on the StackBuilder Plus registration window.
-If prompted, enter your email address and password on the StackBuilder Plus registration window.
+ ![A summary window displays a list of selected packages](images/selected_packages_summary_window.png)
-![A summary window displays a list of selected packages](images/selected_packages_summary_window.png)
+ StackBuilder Plus confirms the packages selected. The Selected packages dialog box displays the name and version of the installer. Select **Next**.
-StackBuilder Plus confirms the packages selected. The Selected packages dialog box displays the name and version of the installer. Select **Next**.
+ When the download completes, a window opens that confirms the installation files were downloaded and are ready for installation.
-When the download completes, a window opens that confirms the installation files were downloaded and are ready for installation.
+ ![Confirmation that the download process is complete](images/download_complete_confirmation.png)
-![Confirmation that the download process is complete](images/download_complete_confirmation.png)
+1. Leave the **Skip Installation** check box cleared and select **Next** to start the installation process. (Select the check box and select **Next** to exit StackBuilder Plus without installing the downloaded files.)
-Leave the **Skip Installation** check box cleared and select **Next** to start the installation process. (Select the check box and select **Next** to exit StackBuilder Plus without installing the downloaded files.)
-
-![StackBuilder Plus confirms the completed installation](images/stackBuilder_plus_confirms_the_completed_installation.png)
+ ![StackBuilder Plus confirms the completed installation](images/stackBuilder_plus_confirms_the_completed_installation.png)
When the upgrade is complete, StackBuilder Plus alerts you to the success or failure of the installation of the requested package. If you were prompted by an installer to restart your computer, restart now.
diff --git a/product_docs/docs/epas/15/upgrading/index.mdx b/product_docs/docs/epas/15/upgrading/index.mdx
index 94de0ce35d1..cea32194dc6 100644
--- a/product_docs/docs/epas/15/upgrading/index.mdx
+++ b/product_docs/docs/epas/15/upgrading/index.mdx
@@ -5,8 +5,8 @@ redirects:
- /epas/latest/epas_upgrade_guide/
---
-Upgrading EDB Postgres Advanced Server involves the following:
+Upgrading EDB Postgres Advanced Server involves:
-- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15.
+- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to the latest version.
- `yum` to perform a minor version upgrade on a Linux host.
- `StackBuilder Plus` to perform a minor version upgrade on a Windows host.
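For orientation, the major-version path in the list above maps to a `pg_upgrade` invocation like the following; a sketch only, with illustrative binary and data directory paths that are not taken from this diff:

```shell
# Verify cluster compatibility first; rerun without --check to upgrade.
/usr/edb/as15/bin/pg_upgrade \
  -b /usr/edb/as14/bin -B /usr/edb/as15/bin \
  -d /var/lib/edb/as14/data -D /var/lib/edb/as15/data \
  -U enterprisedb --check
```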
From 99aa392653eb2ea4159898b3c0e4a2ab33e9c260 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Tue, 28 Feb 2023 13:16:09 -0500
Subject: [PATCH 09/50] Marked places to update with v14
---
.../03_built-in_packages/18_dbms_utility.mdx | 2 ++
...namic_runtime_instrumentation_tools_architecture_DRITA.mdx | 2 ++
.../02_configuring_sql_protect.mdx | 4 ++++
.../04_backing_up_restoring_sql_protect.mdx | 2 +-
.../reference_command_line_options.mdx | 4 +++-
.../managing_an_advanced_server_installation/index.mdx | 2 +-
.../03_upgrading_to_advanced_server.mdx | 2 ++
7 files changed, 15 insertions(+), 3 deletions(-)
diff --git a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
index 0b17b861698..48dcf7ddb19 100644
--- a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
+++ b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
@@ -322,6 +322,8 @@ DB_VERSION(
<version> OUT VARCHAR2, <compatibility> OUT VARCHAR2)
The following anonymous block displays the database version information.
+
+
```sql
DECLARE
v_version VARCHAR2(150);
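For context, the anonymous block this hunk touches continues roughly as follows; a sketch reconstructed from the `DB_VERSION(<version> OUT VARCHAR2, <compatibility> OUT VARCHAR2)` signature shown in the hunk header, with illustrative variable names:

```sql
DECLARE
    v_version       VARCHAR2(150);
    v_compatibility VARCHAR2(150);
BEGIN
    -- Returns the server version string and the compatibility mode.
    DBMS_UTILITY.DB_VERSION(v_version, v_compatibility);
    DBMS_OUTPUT.PUT_LINE('Version: ' || v_version);
    DBMS_OUTPUT.PUT_LINE('Compatibility: ' || v_compatibility);
END;
```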
diff --git a/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx b/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
index 68b37c2aae4..d111236ac67 100644
--- a/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
+++ b/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
@@ -578,6 +578,8 @@ edbreport(<beginning_id>, <ending_id>)
The call to the `edbreport()` function returns a composite report that contains system information and the reports returned by the other statspack functions:
+
+
```sql
SELECT * FROM edbreport(9, 10);
__OUTPUT__
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
index 806ee3c3fcf..fc85dd74422 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
@@ -68,6 +68,8 @@ edb_sql_protect.max_queries_to_save = 5000
This example shows the process to set up protection for a database named `edb`:
+
+
```sql
$ /usr/edb/as14/bin/psql -d edb -U enterprisedb
Password for user enterprisedb:
@@ -115,6 +117,8 @@ For each database that you want to protect, you must determine the roles you wan
1. Connect as a superuser to a database that you want to protect with either `psql` or the Postgres Enterprise Manager client:
+
+
```sql
$ /usr/edb/as14/bin/psql -d edb -U enterprisedb
Password for user enterprisedb:
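The setup process referenced in these hunks continues by registering roles with SQL/Protect; a minimal sketch, assuming the `edb` database shown above and an illustrative role named `appuser`:

```sql
-- Register the role with SQL/Protect for the current database.
SELECT sqlprotect.protect_role('edb', 'appuser');
```

The `edb_sql_protect.level` parameter (learn, passive, or active) then determines how statements from protected roles are handled.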
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
index c2ba6ce3ba4..6943392212f 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
@@ -78,7 +78,7 @@ CREATE SCHEMA
2. Connect to the new database as a superuser, and delete all rows from the `edb_sql_protect_rel` table.
This deletion removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored.
-
+
```sql
$ /usr/edb/as14/bin/psql -d newdb -U enterprisedb
Password for user enterprisedb:
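Step 2 in this hunk translates to a single statement once connected; a sketch, assuming the `sqlprotect` schema into which SQL/Protect installs its tables:

```sql
-- Remove stale rows carried over from the original database's backup.
DELETE FROM sqlprotect.edb_sql_protect_rel;
```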
diff --git a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
index b145be32d5f..a2db1adaded 100644
--- a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
+++ b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
@@ -35,7 +35,7 @@ Use the `--disable-components` parameter to specify a list of EDB Postgres Advan
`dbserver`
-EDB Postgres Advanced Server 14.
+EDB Postgres Advanced Server.
`pgadmin4`
@@ -163,6 +163,8 @@ Include `--unattendedmodeui minimalWithDialogs` to specify that the installer sh
Include the `--version` parameter to retrieve version information about the installer:
+
+
`EDB Postgres Advanced Server 14.0.3-1 --- Built on 2020-10-23 00:12:44 IB: 20.6.0-202008110127`
`--workload_profile {oltp | mixed | reporting}`
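For reference, the `--version` parameter described above is passed directly to the installer executable; a sketch from a Windows command prompt, with an illustrative installer filename:

```shell
# Prints the installer's version string and exits.
edb-as15-server-15.x.x-windows-x64.exe --version
```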
diff --git a/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx b/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx
index c590ca7fed4..952a630c2d0 100644
--- a/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx
+++ b/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx
@@ -24,7 +24,7 @@ The following table lists the names of the services that control EDB Postgres Ad
| EDB Postgres Advanced Server component name | Windows service name |
| ------------------------------ | ------------------------------------------------ |
| EDB Postgres Advanced Server | edb-as-14 |
-| pgAgent | EDB Postgres Advanced Server 14 Scheduling Agent |
+| pgAgent | EDB Postgres Advanced Server Scheduling Agent |
| PgBouncer | edb-pgbouncer-1.14 |
| Slony | edb-slony-replication-14 |
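The service names in this table are what you pass to the Windows service control commands; a sketch using the database server's service name from the table:

```shell
# From an elevated PowerShell prompt: stop, then restart, the server.
net stop edb-as-14
net start edb-as-14
```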
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx
index 36c62586ccb..9c94347eaaa 100644
--- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx
+++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx
@@ -62,6 +62,7 @@ Before you invoke `pg_upgrade`, you must stop any services that belong to the or
The services in the table are most likely to be running in your installation.
+
| Service | On Linux | On Windows |
| ---------------------------------------------- | -------------------------------------- | ---------------------------------------------------------- |
| EnterprisEDB Postgres Advanced Server 9.6 | edb-as-9.6 | edb-as-9.6 |
@@ -70,6 +71,7 @@ The services in the table are most likely to be running in your installation.
| EnterprisEDB Postgres Advanced Server 12 | edb-as-12 | edb-as-12 |
| EnterprisEDB Postgres Advanced Server 13 | edb-as-13 | edb-as-13 |
| EnterprisEDB Postgres Advanced Server 14 | edb-as-14 | edb-as-14 |
+| EnterprisEDB Postgres Advanced Server 15 | edb-as-15 | edb-as-15 |
| EDB Postgres Advanced Server 9.6 Scheduling Agent (pgAgent) | edb-pgagent-9.6 | EnterprisEDB Postgres Advanced Server 9.6 Scheduling Agent |
| Infinite Cache 9.6 | edb-icache | N/A |
| Infinite Cache 10 | edb-icache | N/A |
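On Linux, stopping the services in this table before running `pg_upgrade` is one command per service; a sketch, assuming systemd unit names that follow the table (the pgAgent unit name is illustrative):

```shell
# Stop the old server (and any agents) before invoking pg_upgrade.
sudo systemctl stop edb-as-14
sudo systemctl stop edb-pgagent-14   # if pgAgent is installed (illustrative)
```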
From c8733d36fc8b0def3c73f3c32c9a87819db0a087 Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Tue, 28 Feb 2023 14:55:04 -0500
Subject: [PATCH 10/50] replaced placeholder content with the Limitations
section from the Known issues topics
---
product_docs/docs/pgd/4/known_issues.mdx | 28 +------------
product_docs/docs/pgd/4/limitations.mdx | 50 ++++++++++++------------
product_docs/docs/pgd/5/known_issues.mdx | 26 ------------
product_docs/docs/pgd/5/limitations.mdx | 49 ++++++++++++-----------
4 files changed, 50 insertions(+), 103 deletions(-)
diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx
index a60faa9de26..a3d0a95a5e4 100644
--- a/product_docs/docs/pgd/4/known_issues.mdx
+++ b/product_docs/docs/pgd/4/known_issues.mdx
@@ -93,30 +93,4 @@ release.
attempting to apply the transaction. Ensure that any transactions
using a specific commit scope have finished before altering or removing it.
-## List of limitations
-
-This is a (non-comprehensive) list of limitations that are
-expected and are by design. They are not expected to be resolved in the
-future.
-
-- Replacing a node with its physical standby doesn't work for nodes that
- use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
- general isn't recommended, even if otherwise possible.
-
-- A `galloc` sequence might skip some chunks if the
- sequence is created in a rolled back transaction and then created
- again with the same name. This can also occur if it is created and dropped when DDL
- replication isn't active and then it is created again when DDL
- replication is active.
- The impact of the problem is mild, because the sequence
- guarantees aren't violated. The sequence skips only some
- initial chunks. Also, as a workaround you can specify the
- starting value for the sequence as an argument to the
- `bdr.alter_sequence_set_kind()` function.
-
-- Legacy BDR synchronous replication uses a mechanism for transaction
- confirmation different from the one used by CAMO, Eager, and Group Commit.
- The two are not compatible and must not be used together. Therefore, nodes
- that appear in `synchronous_standby_names` must not be part of CAMO, Eager,
- or Group Commit configuration. Using synchronous replication to other nodes,
- including both logical and physical standby is possible.
+
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 2665f4a5b00..27535f6e356 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -2,29 +2,29 @@
title: "Limitations"
---
-## Using Postgres Distributed for multiple databases on the same instance
-
-The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation.
-
-While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used.
-
-Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
-
-Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance:
-
-1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors.
-
-1. Each database needs to be monitored separately, adding overhead.
-
-1. TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases.
-
-1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
-
-1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases.
-
-1. When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take.
-
-1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits.
-
-1. CLI and OTEL integration (new with v5) assumes one database.
+This is a (non-comprehensive) list of limitations that are
+expected and are by design. They are not expected to be resolved in the
+future.
+
+- Replacing a node with its physical standby doesn't work for nodes that
+ use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
+ general isn't recommended, even if otherwise possible.
+
+- A `galloc` sequence might skip some chunks if the
+ sequence is created in a rolled back transaction and then created
+ again with the same name. This can also occur if it is created and dropped when DDL
+ replication isn't active and then it is created again when DDL
+ replication is active.
+ The impact of the problem is mild, because the sequence
+ guarantees aren't violated. The sequence skips only some
+ initial chunks. Also, as a workaround you can specify the
+ starting value for the sequence as an argument to the
+ `bdr.alter_sequence_set_kind()` function.
+
+- Legacy BDR synchronous replication uses a mechanism for transaction
+ confirmation different from the one used by CAMO, Eager, and Group Commit.
+ The two are not compatible and must not be used together. Therefore, nodes
+ that appear in `synchronous_standby_names` must not be part of CAMO, Eager,
+ or Group Commit configuration. Using synchronous replication to other nodes,
+ including both logical and physical standby is possible.
\ No newline at end of file
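The `galloc` sequence workaround described in the limitations text above amounts to a single call; a sketch with an illustrative sequence name and starting value, relying on the start-value argument the text refers to:

```sql
-- Recreate the sequence as galloc with an explicit starting value.
SELECT bdr.alter_sequence_set_kind('public.my_seq'::regclass, 'galloc', 10000);
```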
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index bf801110808..30ffd8ac323 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -73,29 +73,3 @@ release.
attempting to apply the transaction. Ensure that any transactions
using a specific commit scope have finished before altering or removing it.
-## List of limitations
-
-This is a (non-comprehensive) list of limitations that are
-expected and are by design. They are not expected to be resolved in the
-future.
-
-- Replacing a node with its physical standby doesn't work for nodes that
- use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
- general isn't recommended, even if otherwise possible.
-
-- A `galloc` sequence might skip some chunks if the
- sequence is created in a rolled back transaction and then created
- again with the same name. This can also occur if it is created and dropped when DDL
- replication isn't active and then it is created again when DDL
- replication is active.
- The impact of the problem is mild, because the sequence
- guarantees aren't violated. The sequence skips only some
- initial chunks. Also, as a workaround you can specify the
- starting value for the sequence as an argument to the
- `bdr.alter_sequence_set_kind()` function.
-
-- Legacy BDR synchronous replication uses a mechanism for transaction
- confirmation different from the one used by CAMO, Eager, and Group Commit.
- The two are not compatible and must not be used together. Using synchronous
- replication to other non-BDR nodes, including both logical and physical
- standby is possible.
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 2665f4a5b00..1cae37d1257 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -2,29 +2,28 @@
title: "Limitations"
---
-## Using Postgres Distributed for multiple databases on the same instance
-
-The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation.
-
-While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used.
-
-Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
-
-Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance:
-
-1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors.
-
-1. Each database needs to be monitored separately, adding overhead.
-
-1. TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases.
-
-1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
-
-1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases.
-
-1. When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take.
-
-1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits.
-
-1. CLI and OTEL integration (new with v5) assumes one database.
+This is a (non-comprehensive) list of limitations that are
+expected and are by design. They are not expected to be resolved in the
+future.
+
+- Replacing a node with its physical standby doesn't work for nodes that
+ use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
+ general isn't recommended, even if otherwise possible.
+
+- A `galloc` sequence might skip some chunks if the
+ sequence is created in a rolled back transaction and then created
+ again with the same name. This can also occur if it is created and dropped when DDL
+ replication isn't active and then it is created again when DDL
+ replication is active.
+ The impact of the problem is mild, because the sequence
+ guarantees aren't violated. The sequence skips only some
+ initial chunks. Also, as a workaround you can specify the
+ starting value for the sequence as an argument to the
+ `bdr.alter_sequence_set_kind()` function.
+
+- Legacy BDR synchronous replication uses a mechanism for transaction
+ confirmation different from the one used by CAMO, Eager, and Group Commit.
+ The two are not compatible and must not be used together. Using synchronous
+ replication to other non-BDR nodes, including both logical and physical
+ standby is possible.
From 40b344c1c9859a6341d6094533754d41ffa35f55 Mon Sep 17 00:00:00 2001
From: kelpoole <44814688+kelpoole@users.noreply.github.com>
Date: Tue, 28 Feb 2023 14:51:25 -0700
Subject: [PATCH 11/50] Update limitations.mdx
---
product_docs/docs/pgd/5/limitations.mdx | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 1cae37d1257..fce135d814b 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -5,11 +5,11 @@ title: "Limitations"
This is a (non-comprehensive) list of limitations that are
expected and are by design. They are not expected to be resolved in the
-future.
+future and should be taken into consideration when planning your deployment.
- Replacing a node with its physical standby doesn't work for nodes that
- use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
- general isn't recommended, even if otherwise possible.
+ use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres
+ Distributed isn't recommended, even if possible.
- A `galloc` sequence might skip some chunks if the
sequence is created in a rolled back transaction and then created
@@ -22,8 +22,8 @@ future.
starting value for the sequence as an argument to the
`bdr.alter_sequence_set_kind()` function.
-- Legacy BDR synchronous replication uses a mechanism for transaction
+- Legacy synchronous replication uses a mechanism for transaction
confirmation different from the one used by CAMO, Eager, and Group Commit.
The two are not compatible and must not be used together. Using synchronous
- replication to other non-BDR nodes, including both logical and physical
+ replication to other non-PGD nodes, including both logical and physical
standby is possible.
From 7227a631198d0a6bb9cd2f48585ebb910271a42b Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Wed, 1 Mar 2023 11:54:55 +0530
Subject: [PATCH 12/50] Correct documentation for edb_wait_states in
Performance Diagnostic
Performance Diagnostic, Prerequisites content change in both doc and OLH
---
.../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +-
.../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
index 6a89925ef07..1a78f111397 100644
--- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
@@ -20,7 +20,7 @@ You can analyze the Wait States data on multiple levels by narrowing down your s
Prerequisite:
-- For PostgreSQL, you need to install `edb_wait_states_<X>` package from `edb.repo` where `<X>` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you need to install `edb-as<xx>-server-edb-modules`, Where `<xx>` is the version of Advanced Server.
+- For PostgreSQL, you need to install `edb_wait_states_<X>` package from `edb.repo` where `<X>` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below:
diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 170a6881e96..3ecd0010f3d 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -21,7 +21,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
## Prerequisites
-- For PostgreSQL, you need to install the `edb_wait_states_<X>` package from `edb.repo`, where `<X>` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you need to install the `edb-as<xx>-server-edb-modules`, where `<xx>` is the version of EDB Postgres Advanced Server.
+- For PostgreSQL, you need to install the `edb_wait_states_<X>` package from `edb.repo`, where `<X>` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- After you install the EDB Wait States module of EDB Postgres Advanced Server:
1. Configure the list of libraries in the `postgresql.conf` file as shown:
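The `postgresql.conf` change that step 1 refers to is the preload of the wait states library; a sketch (append to any existing `shared_preload_libraries` value, then restart the server):

```ini
shared_preload_libraries = '$libdir/edb_wait_states'
```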
From 18cd061ec1ba8f1bf0eaf5114822e548a44f7cd9 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 1 Mar 2023 09:26:10 +0000
Subject: [PATCH 13/50] Correct version number in "This section"
---
product_docs/docs/pgd/5/known_issues.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index 30ffd8ac323..01e8ff48cfb 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -2,7 +2,7 @@
title: 'Known issues'
---
-This section discusses currently known issues in EDB Postgres Distributed 4.
+This section discusses currently known issues in EDB Postgres Distributed 5.
## Data Consistency
From 458cc73095ddec15ad126e91ab78b3fc6c2da2c7 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 1 Mar 2023 14:29:30 +0000
Subject: [PATCH 14/50] Limitations updated, known issues linked, PGD
---
product_docs/docs/pgd/4/bdr/index.mdx | 19 ------------
product_docs/docs/pgd/4/index.mdx | 2 +-
product_docs/docs/pgd/4/known_issues.mdx | 1 +
product_docs/docs/pgd/4/limitations.mdx | 29 +++++++++++++++++-
product_docs/docs/pgd/5/index.mdx | 4 +--
product_docs/docs/pgd/5/known_issues.mdx | 1 +
product_docs/docs/pgd/5/limitations.mdx | 34 +++++++++++++++++++++-
product_docs/docs/pgd/5/overview/index.mdx | 27 -----------------
src/constants/products.js | 2 +-
src/pages/index.js | 2 +-
10 files changed, 68 insertions(+), 53 deletions(-)
diff --git a/product_docs/docs/pgd/4/bdr/index.mdx b/product_docs/docs/pgd/4/bdr/index.mdx
index 278829faa71..cf6376d7d21 100644
--- a/product_docs/docs/pgd/4/bdr/index.mdx
+++ b/product_docs/docs/pgd/4/bdr/index.mdx
@@ -241,22 +241,3 @@ BDR provides controls to report and manage any skew that exists. BDR also
provides row-version conflict detection, as described in [Conflict detection](conflicts).
-## Limits
-
-BDR can run hundreds of nodes on good-enough hardware and network. However,
-for mesh-based deployments, we generally don't recommend running more than
-32 nodes in one cluster.
-Each master node can be protected by multiple physical or logical standby nodes.
-There's no specific limit on the number of standby nodes,
-but typical usage is to have 2–3 standbys per master. Standby nodes don't
-add connections to the mesh network, so they aren't included in the
-32-node recommendation.
-
-BDR currently has a hard limit of no more than 1000 active nodes, as this is the
-current maximum Raft connections allowed.
-
-BDR places a limit that at most 10 databases in any one PostgreSQL instance
-can be BDR nodes across different BDR node groups. However, BDR works best if
-you use only one BDR database per PostgreSQL instance.
-
-The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
diff --git a/product_docs/docs/pgd/4/index.mdx b/product_docs/docs/pgd/4/index.mdx
index 9fb52ede765..bf565641779 100644
--- a/product_docs/docs/pgd/4/index.mdx
+++ b/product_docs/docs/pgd/4/index.mdx
@@ -1,5 +1,5 @@
---
-title: "EDB Postgres Distributed"
+title: "EDB Postgres Distributed (PGD)"
indexCards: none
redirects:
- /pgd/4/compatibility_matrix
diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx
index a3d0a95a5e4..f203f3f069d 100644
--- a/product_docs/docs/pgd/4/known_issues.mdx
+++ b/product_docs/docs/pgd/4/known_issues.mdx
@@ -94,3 +94,4 @@ release.
using a specific commit scope have finished before altering or removing it.
+Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 27535f6e356..0d30fd60e0a 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -3,6 +3,31 @@ title: "Limitations"
---
+This section covers design limitations of BDR that you should take into account
+when planning your deployment.
+
+## Limits
+
+- BDR can run hundreds of nodes on good-enough hardware and network. However,
+for mesh-based deployments, we generally don't recommend running more than
+32 nodes in one cluster.
+Each master node can be protected by multiple physical or logical standby nodes.
+There's no specific limit on the number of standby nodes,
+but typical usage is to have 2–3 standbys per master. Standby nodes don't
+add connections to the mesh network, so they aren't included in the
+32-node recommendation.
+
+- BDR currently has a hard limit of no more than 1000 active nodes, as this is the
+current maximum Raft connections allowed.
+
+- BDR places a limit that at most 10 databases in any one PostgreSQL instance
+can be BDR nodes across different BDR node groups. However, BDR works best if
+you use only one BDR database per PostgreSQL instance.
+
+- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
+
+## Other limitations
+
This is a (non-comprehensive) list of limitations that are
expected and are by design. They are not expected to be resolved in the
future.
@@ -27,4 +52,6 @@ future.
The two are not compatible and must not be used together. Therefore, nodes
that appear in `synchronous_standby_names` must not be part of CAMO, Eager,
or Group Commit configuration. Using synchronous replication to other nodes,
- including both logical and physical standby is possible.
\ No newline at end of file
+ including both logical and physical standby is possible.
+
+
diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx
index 659663bd225..adc2c7c3f43 100644
--- a/product_docs/docs/pgd/5/index.mdx
+++ b/product_docs/docs/pgd/5/index.mdx
@@ -1,5 +1,5 @@
---
-title: "EDB Postgres Distributed"
+title: "EDB Postgres Distributed (PGD)"
indexCards: none
redirects:
- /pgd/5/compatibility_matrix
@@ -44,7 +44,7 @@ navigation:
---
-EDB Postgres Distributed provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed PostgreSQL clusters with high availability up to five 9s.
+EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed PostgreSQL clusters with high availability up to five 9s.
By default EDB Postgres Distributed uses asynchronous replication, applying changes on
the peer nodes only after the local commit. Additional levels of synchronicity can
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index 01e8ff48cfb..665fb4beafb 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -73,3 +73,4 @@ release.
attempting to apply the transaction. Ensure that any transactions
using a specific commit scope have finished before altering or removing it.
+Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index fce135d814b..109a9c2044c 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -2,8 +2,40 @@
title: "Limitations"
---
+This section covers design limitations of PGD that you should take into account
+when planning your deployment.
-This is a (non-comprehensive) list of limitations that are
+## Limits
+
+- PGD can run hundreds of nodes on good-enough hardware and network. However,
+for mesh-based deployments, we generally don't recommend running more than
+32 nodes in one cluster.
+Each master node can be protected by multiple physical or logical standby nodes.
+There's no specific limit on the number of standby nodes,
+but typical usage is to have 2–3 standbys per master. Standby nodes don't
+add connections to the mesh network, so they aren't included in the
+32-node recommendation.
+
+- PGD currently has a hard limit of no more than 1000 active nodes, as this is the
+current maximum Raft connections allowed.
+
+- Support for using EDB Postgres Distributed for multiple databases on the same
+Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and
+will no longer be supported with EDB Postgres Distributed 6. As we extend the
+capabilities of the product, the additional complexity introduced operationally
+and functionally is no longer viable in a multi-database design.
+
+- The minimum recommended number of nodes in a group is three to provide fault
+tolerance for PGD's consensus mechanism. With just two nodes, consensus would
+fail if one of the nodes was unresponsive. Consensus is required for some PGD
+operations such as distributed sequence generation. For more information about
+the consensus mechanism used by EDB Postgres Distributed, see
+[Architectural details](../architectures/#architecture-details).
+
+
+## Other limitations
+
+This is a (non-comprehensive) list of other limitations that are
expected and are by design. They are not expected to be resolved in the
future and should be taken into consideration when planning your deployment.
diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx
index a949553eb82..19ecc0c52a5 100644
--- a/product_docs/docs/pgd/5/overview/index.mdx
+++ b/product_docs/docs/pgd/5/overview/index.mdx
@@ -221,32 +221,5 @@ PGD provides controls to report and manage any skew that exists. PGD also
provides row-version conflict detection, as described in [Conflict detection](../consistency/conflicts).
-## Limits
-
-PGD can run hundreds of nodes on good-enough hardware and network. However,
-for mesh-based deployments, we generally don't recommend running more than
-32 nodes in one cluster.
-Each master node can be protected by multiple physical or logical standby nodes.
-There's no specific limit on the number of standby nodes,
-but typical usage is to have 2–3 standbys per master. Standby nodes don't
-add connections to the mesh network, so they aren't included in the
-32-node recommendation.
-
-PGD currently has a hard limit of no more than 1000 active nodes, as this is the
-current maximum Raft connections allowed.
-
-Support for using EDB Postgres Distributed for multiple databases on the same
-Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and
-will no longer be supported with EDB Postgres Distributed 6. As we extend the
-capabilities of the product, the additional complexity introduced operationally
-and functionally is no longer viable in a multi-database design.
-
-The minimum recommended number of nodes in a group is three to provide fault
-tolerance for PGD's consensus mechanism. With just two nodes, consensus would
-fail if one of the nodes was unresponsive. Consensus is required for some PGD
-operations such as distributed sequence generation. For more information about
-the consensus mechanism used by EDB Postgres Distributed, see
-[Architectural details](../architectures/#architecture-details).
-
diff --git a/src/constants/products.js b/src/constants/products.js
index 242ad8c70ab..e8a9ea8574b 100644
--- a/src/constants/products.js
+++ b/src/constants/products.js
@@ -46,7 +46,7 @@ export const products = {
pem: { name: "Postgres Enterprise Manager", iconName: IconNames.EDB_PEM },
pgBackRest: { name: "pgBackRest" },
pgbouncer: { name: "PgBouncer", iconName: IconNames.POSTGRESQL },
- pgd: { name: "EDB Postgres Distributed" },
+ pgd: { name: "EDB Postgres Distributed (PGD)" },
pge: { name: "EDB Postgres Extended Server" },
pgpool: { name: "PgPool-II", iconName: IconNames.POSTGRESQL },
pglogical: { name: "pglogical" },
diff --git a/src/pages/index.js b/src/pages/index.js
index 13bb1c87e5f..8c739df920f 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -229,7 +229,7 @@ const Page = () => (
headingText="High Availability"
>
- EDB Postgres Distributed
+ EDB Postgres Distributed (PGD)
Failover Manager
From 3451f25fa9712b0336dcc8f41362505c575e36b2 Mon Sep 17 00:00:00 2001
From: David Wicinas <93669463+dwicinas@users.noreply.github.com>
Date: Wed, 1 Mar 2023 11:09:58 -0500
Subject: [PATCH 15/50] possible first draft
---
.../docs/edb_plus/41/installing/windows.mdx | 37 ++++++-------------
1 file changed, 11 insertions(+), 26 deletions(-)
diff --git a/product_docs/docs/edb_plus/41/installing/windows.mdx b/product_docs/docs/edb_plus/41/installing/windows.mdx
index 0df5d6b172b..e687a7819cb 100644
--- a/product_docs/docs/edb_plus/41/installing/windows.mdx
+++ b/product_docs/docs/edb_plus/41/installing/windows.mdx
@@ -6,7 +6,7 @@ redirects:
---
-EDB provides a graphical interactive installer for Windows. You can access it using StackBuilder Plus, which is installed as part of EDB Postgres Advanced Server. With StackBuilder Plus, you can download an installer package for EDB*Plus and invoke the graphical installer. See [Using StackBuilder Plus](/edb_plus/latest/installing/windows/#using-stackbuilder-plus).
+EDB provides a graphical interactive installer for Windows. You access it using StackBuilder Plus, which is installed as part of EDB Postgres Advanced Server.
## Prerequisites
@@ -16,39 +16,24 @@ Before installing EDB\*Plus, you must first install Java (version 1.8 or later).
## Using StackBuilder Plus
-If you have installed EDB Postgres Advanced Server, you can use StackBuilder Plus to invoke the graphical installer for EDB*Plus. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/).
+After installing EDB Postgres Advanced Server, you can use StackBuilder Plus to invoke the graphical installer for EDB*Plus. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/).
-1. In StackBuilder Plus, follow the prompts until you get to the module selection page.
+1. Using the Windows start menu, open StackBuilder Plus and follow the prompts until you get to the module selection page.
-1. Expand the **EnterpriseDB Tools** node and select **Replication Server**.
+1. Expand the **Add-ons, tools, and utilities** node and select **EDB*Plus**.
-1. Proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic.
+1. Select **Next** and proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic.
-. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/).
+## Using the graphical installer
-1. In StackBuilder Plus, follow the prompts until you get to the module selection page.
+1. Select the installation language and select **OK**.
-1. Expand the **EnterpriseDB Tools** node and select **Replication Server**.
+1. On the Setup EDB*Plus page, select **Next**.
-1. Proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic.
+1. Browse to a directory where you want EDB*Plus to be installed, or allow the installer to install it in the default location. Select **Next**.
+1. On the Ready to Install page, select **Next**.
-
-
-Windows installers for EDB\*Plus are available via StackBuilder Plus; you can access StackBuilder Plus through the Windows start menu. After opening StackBuilder Plus and selecting the installation for which you want to install EDB\*Plus, expand the component selection screen tree control to select and download the EDB\*Plus installer.
-
-![The EDBPlus Welcome window](../images/edb_plus_welcome.png)
-
-1. The EDB\*Plus installer welcomes you to the setup wizard, as shown in the figure below.
-
-![The Installation Directory window](../images/installation_directory_new.png)
-
-1. Use the `Installation Directory` field to specify the directory in which you wish to install the EDB\*Plus software. Then, click `Next` to continue.
-
-![The Ready to Install window](../images/ready_to_install.png)
-
-1. The `Ready to Install` window notifies you when the installer has all of the information needed to install EDB\*Plus on your system. Click `Next` to install EDB\*Plus.
-
-![The installation is complete](../images/installation_complete.png)
+ An information box shows installation progress. This might take a few minutes.
1. When the installation has completed, select **Finish**.
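Once the installer finishes, the install can be verified from a command prompt; a sketch with illustrative connection details, assuming the `edbplus.bat` launcher the Windows package provides:

```shell
# Connect as enterprisedb to the edb database on the default EPAS port.
edbplus.bat enterprisedb/password@localhost:5444/edb
```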
From 71cdd157ab582cbe24bf769830ef812ccff5cf66 Mon Sep 17 00:00:00 2001
From: David Wicinas <93669463+dwicinas@users.noreply.github.com>
Date: Wed, 1 Mar 2023 11:51:24 -0500
Subject: [PATCH 16/50] minor edits to one step
---
product_docs/docs/eprs/7/installing/windows.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/eprs/7/installing/windows.mdx b/product_docs/docs/eprs/7/installing/windows.mdx
index d779136b376..a300e3ff69b 100644
--- a/product_docs/docs/eprs/7/installing/windows.mdx
+++ b/product_docs/docs/eprs/7/installing/windows.mdx
@@ -62,7 +62,7 @@ If you are using EDB Postgres Advanced Server, you can invoke the graphical inst
1. If you do not want a particular Replication Server component installed, uncheck the box next to the component name. Select **Next**.
-1. On the Account Registration page, select the option that applies to you. Select **Next**.
+1. On the Account Registration page, provide user account information and then select **Next**.
- If you do not have an EnterpriseDB user account, you are directed to the registration page of the EnterpriseDB website.
From 560dec3512fafcf19ec2a2ac523f7448d446536e Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Thu, 2 Mar 2023 16:39:51 +0530
Subject: [PATCH 17/50] Corrected one link
Performance Diagnostic topic
---
.../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +-
.../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
index 1a78f111397..9f4a93bbc0b 100644
--- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
@@ -20,7 +20,7 @@ You can analyze the Wait States data on multiple levels by narrowing down your s
Prerequisite:
-- For PostgreSQL, you need to install `edb_wait_states_<X>` package from `edb.repo` where `<X>` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+- For PostgreSQL, you need to install the `edb_wait_states_<X>` package from `edb.repo`, where `<X>` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below:
diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 3ecd0010f3d..2de298c6180 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -21,7 +21,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
## Prerequisites
-- For PostgreSQL, you need to install the `edb_wait_states_<X>` package from `edb.repo`, where `<X>` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+- For PostgreSQL, you need to install the `edb_wait_states_<X>` package from `edb.repo`, where `<X>` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- After you install the EDB Wait States module of EDB Postgres Advanced Server:
1. Configure the list of libraries in the `postgresql.conf` file as shown:
From dde44682d7fa10c7a3addf0a8ca0a3563094d881 Mon Sep 17 00:00:00 2001
From: David Wicinas <93669463+dwicinas@users.noreply.github.com>
Date: Thu, 2 Mar 2023 09:39:07 -0500
Subject: [PATCH 18/50] Removed step for account registration
---
product_docs/docs/eprs/7/installing/windows.mdx | 6 ------
1 file changed, 6 deletions(-)
diff --git a/product_docs/docs/eprs/7/installing/windows.mdx b/product_docs/docs/eprs/7/installing/windows.mdx
index a300e3ff69b..accdf3071b5 100644
--- a/product_docs/docs/eprs/7/installing/windows.mdx
+++ b/product_docs/docs/eprs/7/installing/windows.mdx
@@ -62,12 +62,6 @@ If you are using EDB Postgres Advanced Server, you can invoke the graphical inst
1. If you do not want a particular Replication Server component installed, uncheck the box next to the component name. Select **Next**.
-1. On the Account Registration page provide user account information and then select **Next**.
-
- - If you do not have an EnterpriseDB user account, you are directed to the registration page of the EnterpriseDB website.
-
- - If you already have an EnterpriseDB user account, enter the email address and password for your EnterpriseDB user account. Select **Next**.
-
1. Enter information for the Replication Server administrator.
!!! Note
From 150f38c824b4129bec7cd9f78842f25605d62c1a Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Fri, 3 Mar 2023 09:28:55 +0000
Subject: [PATCH 19/50] Adding in Multiple Databases/Single Limitations
---
product_docs/docs/pgd/4/limitations.mdx | 54 ++++++++++++++++++++-
product_docs/docs/pgd/5/limitations.mdx | 63 ++++++++++++++++++++-----
2 files changed, 105 insertions(+), 12 deletions(-)
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 0d30fd60e0a..03ce2c60af1 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -24,7 +24,59 @@ current maximum Raft connections allowed.
can be BDR nodes across different BDR node groups. However, BDR works best if
you use only one BDR database per PostgreSQL instance.
-- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
+- The minimum recommended number of nodes in a group is three to provide fault
+tolerance for BDR's consensus mechanism. With just two nodes, consensus would
+fail if one of the nodes was unresponsive. Consensus is required for some BDR
+operations such as distributed sequence generation. For more information about
+the consensus mechanism used by EDB Postgres Distributed, see
+[Architectural details](/pgd/4/architectures/#architecture-details).
+
+- Support for using BDR for multiple databases on the same
+Postgres instance is deprecated beginning with PGD 5 and
+will no longer be supported with PGD 6. As we extend the
+capabilities of the product, the additional complexity introduced operationally
+and functionally is no longer viable in a multi-database design.
+
+## Limitations of Multiple Databases on a Single Instance
+
+It is best practice and recommended that only one database per PGD instance be
+configured. The deployment automation with TPA and the tooling such as the CLI
+and proxy already codify that recommendation. Also, as noted above, support for
+multiple databases on the same PGD instance is being deprecated in PGD 5 and
+will no longer be supported in PGD 6.
+
+While it is still possible to host up to ten databases in a single instance,
+this incurs many immediate risks and current limitations:
+
+- Administrative commands need to be executed for each database if PGD
+ configuration changes are needed, which increases risk for potential
+ inconsistencies and errors.
+
+- Each database needs to be monitored separately, adding overhead.
+
+- TPAexec assumes one database; additional coding is needed by customers
+ or PS in a post-deploy hook to set up replication for additional databases.
+
+- HARP works at the Postgres instance level, not at the database level,
+ meaning the leader node will be the same for all databases.
+
+- Each additional database increases the resource requirements on the server.
+ Each one needs its own set of worker processes maintaining replication
+ (e.g. logical workers, WAL senders, and WAL receivers). Each one also
+ needs its own set of connections to other instances in the replication
+ cluster. This might severely impact performance of all databases.
+
+- When rebuilding or adding a node, the physical initialization method
+ (“bdr_init_physical”) for one database can only be used for one node,
+ all other databases will have to be initialized by logical replication,
+ which can be problematic for large databases because of the time it might take.
+
+- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as
+ expected. Since the Postgres WAL is shared between the databases, a synchronous
+ commit confirmation may come from any database, not necessarily in the right
+ order of commits.
+
+- CLI and OTEL integration (new with v5) assumes one database.
## Other Limitations
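To make the per-database monitoring and worker-process overhead described in this list concrete, one way to observe it is to group backends by database and type. A sketch, assuming a `psql` connection to the instance; the database name `bdrdb` is illustrative:

```shell
# Illustrative sketch only: show how worker and replication backends
# multiply when several databases run on one Postgres instance.
psql -d bdrdb -Atc "SELECT datname, backend_type, count(*)
                    FROM pg_stat_activity
                    WHERE datname IS NOT NULL
                    GROUP BY datname, backend_type
                    ORDER BY datname;"
```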
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 109a9c2044c..de5df5005c4 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -2,8 +2,8 @@
title: "Limitations"
---
-This section covers design limitations of PGD, that should be taken into account
-when planning your deployment.
+This section covers design limitations of EDB Postgres Distributed (PGD) that
+should be taken into account when planning your deployment.
## Limits
@@ -19,12 +19,6 @@ add connections to the mesh network, so they aren't included in the
- PGD currently has a hard limit of no more than 1000 active nodes, as this is the
current maximum Raft connections allowed.
-- Support for using EDB Postgres Distributed for multiple databases on the same
-Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and
-will no longer be supported with EDB Postgres Distributed 6. As we extend the
-capabilities of the product, the additional complexity introduced operationally
-and functionally is no longer viable in a multi-database design.
-
- The minimum recommended number of nodes in a group is three to provide fault
tolerance for PGD's consensus mechanism. With just two nodes, consensus would
fail if one of the nodes was unresponsive. Consensus is required for some PGD
@@ -32,12 +26,59 @@ operations such as distributed sequence generation. For more information about
the consensus mechanism used by EDB Postgres Distributed, see
[Architectural details](../architectures/#architecture-details).
+- Support for using PGD for multiple databases on the same
+Postgres instance is deprecated beginning with PGD 5 and
+will no longer be supported with PGD 6. As we extend the
+capabilities of the product, the additional complexity introduced operationally
+and functionally is no longer viable in a multi-database design.
+
+## Limitations of Multiple Databases on a Single Instance
+
+It is best practice and recommended that only one database per PGD instance be
+configured. The deployment automation with TPA and the tooling such as the CLI
+and proxy already codify that recommendation. Also, as noted above, support for
+multiple databases on the same PGD instance is being deprecated in PGD 5 and
+will no longer be supported in PGD 6.
+
+While it is still possible to host up to ten databases in a single instance,
+this incurs many immediate risks and current limitations:
+
+- Administrative commands need to be executed for each database if PGD
+ configuration changes are needed, which increases risk for potential
+ inconsistencies and errors.
+
+- Each database needs to be monitored separately, adding overhead.
+
+- TPAexec assumes one database; additional coding is needed by customers
+ or PS in a post-deploy hook to set up replication for additional databases.
+
+- HARP works at the Postgres instance level, not at the database level,
+ meaning the leader node will be the same for all databases.
+
+- Each additional database increases the resource requirements on the server.
+ Each one needs its own set of worker processes maintaining replication
+ (e.g. logical workers, WAL senders, and WAL receivers). Each one also
+ needs its own set of connections to other instances in the replication
+ cluster. This might severely impact performance of all databases.
+
+- When rebuilding or adding a node, the physical initialization method
+ (“bdr_init_physical”) for one database can only be used for one node,
+ all other databases will have to be initialized by logical replication,
+ which can be problematic for large databases because of the time it might take.
+
+- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as
+ expected. Since the Postgres WAL is shared between the databases, a synchronous
+ commit confirmation may come from any database, not necessarily in the right
+ order of commits.
+
+- CLI and OTEL integration (new with v5) assumes one database.
+
## Other Limitations
-This is a (non-comprehensive) list of other limitations that are
-expected and are by design. They are not expected to be resolved in the
-future and should be taken under consideration when planning your deployment.
+This is a (non-comprehensive) list of other limitations that are expected and
+are by design. They are not expected to be resolved in the future and should be
+taken into consideration when planning your deployment.
- Replacing a node with its physical standby doesn't work for nodes that
use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres
From fd62d55b5c73c3c826d05559e26a1b08f40b5ff4 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Fri, 3 Mar 2023 16:49:43 +0530
Subject: [PATCH 20/50] Changes based on review comments
---
.../04_toc_pem_features/21_performance_diagnostic.mdx | 6 +++++-
.../pem/9/tuning_performance/performance_diagnostic.mdx | 6 +++++-
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
index 9f4a93bbc0b..b9729420142 100644
--- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
@@ -20,7 +20,11 @@ You can analyze the Wait States data on multiple levels by narrowing down your s
Prerequisite:
-- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+- Install the EDB wait states package:
+
+ - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/).
+
+ - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below:
diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 2de298c6180..650b136fa90 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -21,7 +21,11 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
## Prerequisites
-- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+- Install the EDB wait states package:
+
+ - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/).
+
+ - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- After you install the EDB Wait States module of EDB Postgres Advanced Server:
1. Configure the list of libraries in the `postgresql.conf` file as shown:
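Once the libraries are configured as these steps describe, it may be worth confirming that the setting took effect after the server restart. A small sketch; the database name is illustrative:

```shell
# Illustrative sketch only: verify the preload list after restarting.
psql -d postgres -Atc "SHOW shared_preload_libraries;"
```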
From 259ac63f5dc637c88bf6d269884a9cb08d8d50f7 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Fri, 3 Mar 2023 12:23:49 +0000
Subject: [PATCH 21/50] Fix heading cases
---
product_docs/docs/pgd/4/limitations.mdx | 4 ++--
product_docs/docs/pgd/5/limitations.mdx | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 03ce2c60af1..4fabf0fe124 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -37,7 +37,7 @@ will no longer be supported with PGD 6. As we extend the
capabilities of the product, the additional complexity introduced operationally
and functionally is no longer viable in a multi-database design.
-## Limitations of Multiple Databases on a Single Instance
+## Limitations of multiple databases on a single instance
It is best practice and recommended that only one database per PGD instance be
configured. The deployment automation with TPA and the tooling such as the CLI
@@ -78,7 +78,7 @@ this incurs many immediate risks and current limitations:
- CLI and OTEL integration (new with v5) assumes one database.
-## Other Limitations
+## Other limitations
This is a (non-comprehensive) list of limitations that are
expected and are by design. They are not expected to be resolved in the
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index de5df5005c4..af7580268db 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -32,7 +32,7 @@ will no longer be supported with PGD 6. As we extend the
capabilities of the product, the additional complexity introduced operationally
and functionally is no longer viable in a multi-database design.
-## Limitations of Multiple Databases on a Single Instance
+## Limitations of multiple databases on a single instance
It is best practice and recommended that only one database per PGD instance be
configured. The deployment automation with TPA and the tooling such as the CLI
@@ -74,7 +74,7 @@ this incurs many immediate risks and current limitations:
- CLI and OTEL integration (new with v5) assumes one database.
-## Other Limitations
+## Other limitations
This is a (non-comprehensive) list of other limitations that are expected and
are by design. They are not expected to be resolved in the future and should be taken
From f48bf67a5f057010ae5504d0957c1f0148512ee5 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Mon, 20 Feb 2023 18:02:12 +0530
Subject: [PATCH 22/50] PEM: Adding db server matrix as per PEM-4741
---
.../pem_server_inst_linux/prerequisites.mdx | 4 ++++
product_docs/docs/pem/8/supported_platforms.mdx | 6 ++----
product_docs/docs/pem/9/installing/prerequisites.mdx | 10 +++++++---
product_docs/docs/pem/9/supported_platforms.mdx | 6 ++----
4 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx b/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx
index 9f8018488e0..6b51e83dc07 100644
--- a/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx
+++ b/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx
@@ -115,3 +115,7 @@ Make sure the components Postgres Enterprise Manager depends on, such as python3
```shell
zypper update
```
+
+## Supported locales
+
+Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors.
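Because an unsupported locale or separator character can surface as hard-to-diagnose errors, it can help to check the host and backend locale settings before installing. A sketch, assuming a local `psql` connection; the database name is an assumption:

```shell
# Illustrative sketch only: inspect OS and backend locale settings.
locale | grep -E 'LANG|LC_NUMERIC'
psql -d postgres -Atc "SHOW lc_numeric;"
```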
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index c677ea357aa..943c46177e1 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Supported platforms and locales"
+title: "Platform compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
@@ -8,6 +8,4 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-## Supported locales
-
-Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors.
+## Database compatibility
\ No newline at end of file
diff --git a/product_docs/docs/pem/9/installing/prerequisites.mdx b/product_docs/docs/pem/9/installing/prerequisites.mdx
index a1a24edcac8..26b1ec90678 100644
--- a/product_docs/docs/pem/9/installing/prerequisites.mdx
+++ b/product_docs/docs/pem/9/installing/prerequisites.mdx
@@ -133,6 +133,10 @@ To install a Postgres Enterprise Manager server on Linux, you may need to perfor
For SLES:
- ```shell
- zypper update
- ```
+ ```shell
+ zypper update
+ ```
+
+## Supported locales
+
+Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors.
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index c677ea357aa..9c91759489c 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Supported platforms and locales"
+title: "Platform compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
@@ -8,6 +8,4 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-## Supported locales
-
-Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors.
+## Database compatibility
From 7e09b61e3e684b233e13d8f330277a0901189a21 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Mon, 20 Feb 2023 18:14:57 +0530
Subject: [PATCH 23/50] Updated title
---
product_docs/docs/pem/8/supported_platforms.mdx | 2 +-
product_docs/docs/pem/9/supported_platforms.mdx | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index 943c46177e1..6361603d256 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Platform compatibility"
+title: "Product compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 9c91759489c..1b02c515179 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Platform compatibility"
+title: "Product compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
From c214bfd58e40d258bd2f4bc846986bde9c275540 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Mon, 20 Feb 2023 23:48:32 +0530
Subject: [PATCH 24/50] Added compatibility matrix table and text
Added compatibility matrix to the Product compatibility topic in v8 and 9
---
product_docs/docs/pem/8/supported_platforms.mdx | 12 +++++++++++-
product_docs/docs/pem/9/supported_platforms.mdx | 10 ++++++++++
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index 6361603d256..d540a62bb14 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -8,4 +8,14 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-## Database compatibility
\ No newline at end of file
+## Database compatibility
+
+The following table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x.
+
+| |**PEM 8.x** | |
+|:----------|:-----------------------------|:-------------------|
+| |**As a monitored instance** |**As a backend** |
+|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGD** |3, 4 |3, 4 |
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 1b02c515179..258898121bc 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -9,3 +9,13 @@ For information about the platforms and versions supported by PEM, see [Platform
Postgres Enterprise Manager 8.3 and later is supported on SLES.
## Database compatibility
+
+The following table provides information about the PEM versions and their supported corresponding versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
+
+| |**PEM 8.x** | |**PEM 9.x** | |
+|:----------|:-----------------------------|:-------------------|:------------------------------|:------------------------|
+| |**As a monitored instance** |**As a backend** |**As a monitored instance** |**As a backend** |
+|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGD** |3, 4 |3, 4 |3, 4, 5 |3, 4, 5 |
From 813055c43f9c8130145d9bf691344a67bdcb2b8b Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Tue, 21 Feb 2023 11:26:42 +0530
Subject: [PATCH 25/50] Changes based on review comments
Changes to both v8 and 9 product compatibility topics
---
product_docs/docs/pem/8/supported_platforms.mdx | 16 ++++++++--------
product_docs/docs/pem/9/supported_platforms.mdx | 16 ++++++++--------
2 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index d540a62bb14..dcb40ed9a28 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -10,12 +10,12 @@ For information about the platforms and versions supported by PEM, see [Platform
## Database compatibility
-The following table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x.
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
-| |**PEM 8.x** | |
-|:----------|:-----------------------------|:-------------------|
-| |**As a monitored instance** |**As a backend** |
-|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGD** |3, 4 |3, 4 |
+| |**PEM 8.x** | |
+|:----------|:---------------------------|:----------------|
+| |**As a monitored instance** |**As a backend** |
+|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGD** |3, 4 |3, 4 |
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 258898121bc..29ee9db5667 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -10,12 +10,12 @@ For information about the platforms and versions supported by PEM, see [Platform
## Database compatibility
-The following table provides information about the PEM versions and their supported corresponding versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
-| |**PEM 8.x** | |**PEM 9.x** | |
-|:----------|:-----------------------------|:-------------------|:------------------------------|:------------------------|
-| |**As a monitored instance** |**As a backend** |**As a monitored instance** |**As a backend** |
-|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGD** |3, 4 |3, 4 |3, 4, 5 |3, 4, 5 |
+| |**PEM 9.x** | |
+|:----------|:---------------------------|:-------------------|
+| |**As a monitored instance** |**As a backend** |
+|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGD** |3, 4, 5 |3, 4, 5 |
From c56f6bdc3957682bc32e06676d9ff6a86948391a Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Tue, 21 Feb 2023 14:07:23 +0530
Subject: [PATCH 26/50] Update 1 based on review comments
Both v8 and 9 topics changed
---
product_docs/docs/pem/8/supported_platforms.mdx | 15 +++++++--------
product_docs/docs/pem/9/supported_platforms.mdx | 5 ++---
2 files changed, 9 insertions(+), 11 deletions(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index dcb40ed9a28..ac02813ddcd 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -10,12 +10,11 @@ For information about the platforms and versions supported by PEM, see [Platform
## Database compatibility
-This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x.
-| |**PEM 8.x** | |
-|:----------|:---------------------------|:----------------|
-| |**As a monitored instance** |**As a backend** |
-|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGD** |3, 4 |3, 4 |
+| |**As a monitored instance** |**As a backend** |
+|:----------|:---------------------------|:------------------|
+|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGD** |3, 4 |3, 4 |
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 29ee9db5667..6c4073bd54a 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -10,11 +10,10 @@ For information about the platforms and versions supported by PEM, see [Platform
## Database compatibility
-This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD).
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x.
-| |**PEM 9.x** | |
-|:----------|:---------------------------|:-------------------|
| |**As a monitored instance** |**As a backend** |
+|:----------|:---------------------------|:-------------------|
|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
From 350c261f4bf1133ee1a6766abb37bc460cf313bf Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Tue, 21 Feb 2023 15:00:48 +0530
Subject: [PATCH 27/50] Changed table header
---
product_docs/docs/pem/8/supported_platforms.mdx | 12 ++++++------
product_docs/docs/pem/9/supported_platforms.mdx | 12 ++++++------
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index ac02813ddcd..4961d6c19a5 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -12,9 +12,9 @@ For information about the platforms and versions supported by PEM, see [Platform
This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x.
-| |**As a monitored instance** |**As a backend** |
-|:----------|:---------------------------|:------------------|
-|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGD** |3, 4 |3, 4 |
+| |**Monitored Instance** |**Backend Instance** |
+|:----------|:----------------------|:----------------------|
+|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PGD** |3, 4 |3, 4 |
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 6c4073bd54a..86fe37c3527 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -12,9 +12,9 @@ For information about the platforms and versions supported by PEM, see [Platform
This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x.
-| |**As a monitored instance** |**As a backend** |
-|:----------|:---------------------------|:-------------------|
-|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGD** |3, 4, 5 |3, 4, 5 |
+| |**Monitored Instance** |**Backend Instance** |
+|:----------|:---------------------------|:-----------------------|
+|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGD** |3, 4, 5 |3, 4, 5 |
From d829db3c1045e45c05c228300ed13a68423b596d Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Tue, 21 Feb 2023 19:44:07 +0530
Subject: [PATCH 28/50] Updated supported_platforms file for v8 based on review
comments
---
product_docs/docs/pem/8/supported_platforms.mdx | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index 4961d6c19a5..66b0efffd8a 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -8,13 +8,13 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-## Database compatibility
+## Postgres compatibility
-This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x.
+The table lists the compatibility matrix information for PEM 8.x.
-| |**Monitored Instance** |**Backend Instance** |
-|:----------|:----------------------|:----------------------|
-|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PGD** |3, 4 |3, 4 |
+| |**Monitored Instance** |**Backend Instance** |
+|:-----------------------------------------|:----------------------|:----------------------|
+|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EDB Postgres Distributed (PGD)** |3, 4 | |
From 9f58d25d8425b6160860a835582190742784260e Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Thu, 23 Feb 2023 10:08:58 +0530
Subject: [PATCH 29/50] Added footnote for PGD5 supported from PEM9.1 and later
---
product_docs/docs/pem/9/supported_platforms.mdx | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 86fe37c3527..6586045672f 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -17,4 +17,7 @@ This table provides information about the supported versions of PostgreSQL (PG),
|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGD** |3, 4, 5 |3, 4, 5 |
+|**PGD** |3, 4, 5[^1] | |
+
+[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported.
+
From 79d6d029803116eae53f860ec1a4af179601ba19 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 23 Feb 2023 12:59:03 +0530
Subject: [PATCH 30/50] Moved Postgres compatibility to landing page for
version 8 and separate Postgres compatibility page added for PEM 9
---
product_docs/docs/pem/8/index.mdx | 14 +++++++++++++-
product_docs/docs/pem/8/supported_platforms.mdx | 13 +------------
product_docs/docs/pem/9/index.mdx | 1 +
.../docs/pem/9/supported_database_versions.mdx | 14 ++++++++++++++
product_docs/docs/pem/9/supported_platforms.mdx | 16 +---------------
5 files changed, 30 insertions(+), 28 deletions(-)
create mode 100644 product_docs/docs/pem/9/supported_database_versions.mdx
diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx
index 7942425f9cc..4418144c2c7 100644
--- a/product_docs/docs/pem/8/index.mdx
+++ b/product_docs/docs/pem/8/index.mdx
@@ -58,4 +58,16 @@ redirects:
Welcome to Postgres Enterprise Manager (PEM). PEM consists of components that provide the management and analytical functionality for your EDB Postgres Advanced Server or PostgreSQL database. PEM is based on the Open Source pgAdmin 4 project.
-PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users alike, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects.
+PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users alike, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects and the monitoring of multiple Postgres servers.
+
+## Postgres compatibility
+
+The table lists the compatibility matrix information for PEM 8.x.
+
+| |**Monitored Instance** |**Backend Instance** |
+|:-----------------------------------------|:----------------------|:----------------------|
+|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EDB Postgres Distributed (PGD)** |3, 4 | |
+
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx
index 66b0efffd8a..dd9453fb949 100644
--- a/product_docs/docs/pem/8/supported_platforms.mdx
+++ b/product_docs/docs/pem/8/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Product compatibility"
+title: "Platform compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
@@ -7,14 +7,3 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-
-## Postgres compatibility
-
-The table lists the compatibility matrix information for PEM 8.x.
-
-| |**Monitored Instance** |**Backend Instance** |
-|:-----------------------------------------|:----------------------|:----------------------|
-|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**EDB Postgres Distributed (PGD)** |3, 4 | |
diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx
index 85c491df05b..884f03c2e41 100644
--- a/product_docs/docs/pem/9/index.mdx
+++ b/product_docs/docs/pem/9/index.mdx
@@ -5,6 +5,7 @@ directoryDefaults:
navigation:
- pem_rel_notes
- supported_platforms
+ - supported_database_versions
- prerequisites_for_installing_pem_server
- "#Planning"
- pem_architecture
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx
new file mode 100644
index 00000000000..45658066f2f
--- /dev/null
+++ b/product_docs/docs/pem/9/supported_database_versions.mdx
@@ -0,0 +1,14 @@
+---
+title: "Postgres compatibility"
+---
+
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x.
+
+| |**Monitored Instance** |**Backend Instance** |
+|:----------|:---------------------------|:-----------------------|
+|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PGD** |3, 4, 5[^1] | |
+
+[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported.
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx
index 6586045672f..dd9453fb949 100644
--- a/product_docs/docs/pem/9/supported_platforms.mdx
+++ b/product_docs/docs/pem/9/supported_platforms.mdx
@@ -1,5 +1,5 @@
---
-title: "Product compatibility"
+title: "Platform compatibility"
# This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file.
---
@@ -7,17 +7,3 @@ For information about the platforms and versions supported by PEM, see [Platform
!!! Note
Postgres Enterprise Manager 8.3 and later is supported on SLES.
-
-## Database compatibility
-
-This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x.
-
-| |**Monitored Instance** |**Backend Instance** |
-|:----------|:---------------------------|:-----------------------|
-|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGD** |3, 4, 5[^1] | |
-
-[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported.
-
From feba22a83f303b8868bbd1a56494b347b487abf9 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 24 Feb 2023 16:33:36 +0530
Subject: [PATCH 31/50] Update
product_docs/docs/pem/9/supported_database_versions.mdx
Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
---
product_docs/docs/pem/9/supported_database_versions.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx
index 45658066f2f..e40170631d2 100644
--- a/product_docs/docs/pem/9/supported_database_versions.mdx
+++ b/product_docs/docs/pem/9/supported_database_versions.mdx
@@ -11,4 +11,4 @@ This table provides information about the supported versions of PostgreSQL (PG),
|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
|**PGD** |3, 4, 5[^1] | |
-[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported.
+[^1]: PEM version 9.1 and later supports EDB Postgres Distributed (PGD) 5.
From 3328038812560f962d57f5eac4164db3bbc34c2f Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 24 Feb 2023 16:34:52 +0530
Subject: [PATCH 32/50] Update product_docs/docs/pem/8/index.mdx
Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
---
product_docs/docs/pem/8/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx
index 4418144c2c7..f4d100f9801 100644
--- a/product_docs/docs/pem/8/index.mdx
+++ b/product_docs/docs/pem/8/index.mdx
@@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to
## Postgres compatibility
-The table lists the compatibility matrix information for PEM 8.x.
+The table provides information about the supported versions of Postgres for PEM 8.x.
| |**Monitored Instance** |**Backend Instance** |
|:-----------------------------------------|:----------------------|:----------------------|
From f99b747442e1d751fc3813b58eb83b24a0bc3ba0 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Fri, 3 Mar 2023 13:58:46 +0530
Subject: [PATCH 33/50] Updated matrix table, PGD row removed
These updates are based on new information provided by Simon
---
product_docs/docs/pem/8/index.mdx | 12 ++++++------
.../docs/pem/9/supported_database_versions.mdx | 16 +++++++---------
2 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx
index f4d100f9801..e08dd30d766 100644
--- a/product_docs/docs/pem/8/index.mdx
+++ b/product_docs/docs/pem/8/index.mdx
@@ -64,10 +64,10 @@ PEM is a comprehensive database design and management system. PEM is designed to
The table provides information about the supported versions of Postgres for PEM 8.x.
-| |**Monitored Instance** |**Backend Instance** |
-|:-----------------------------------------|:----------------------|:----------------------|
-|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 |
-|**EDB Postgres Distributed (PGD)** |3, 4 | |
+| |**Monitored Instance** |**Backend Instance** |
+|:-----------------------------------------|:----------------------|:--------------------|
+|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 |
+|**EDB Postgres Extended Server (PGE)**    |11, 12, 13, 14         |Note[^1]             |
+
+Note[^1]: PEM will support PGE as a backend when `sslutils` is available for this server distribution. It is expected to be available in the second quarter of 2023.
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx
index e40170631d2..9fc8b529e77 100644
--- a/product_docs/docs/pem/9/supported_database_versions.mdx
+++ b/product_docs/docs/pem/9/supported_database_versions.mdx
@@ -1,14 +1,12 @@
---
title: "Postgres compatibility"
---
+The table provides information about the supported versions of Postgres for PEM 9.x.
-This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x.
+| |**Monitored Instance** |**Backend Instance** |
+|:-----------------------------------------|:---------------------------|:---------------------|
+|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] |
-| |**Monitored Instance** |**Backend Instance** |
-|:----------|:---------------------------|:-----------------------|
-|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PGD** |3, 4, 5[^1] | |
-
-[^1]: PEM version 9.1 and later supports EDB Postgres Distributed (PGD) 5.
+Note[^1]: PEM will support PGE as a backend when `sslutils` is available for this server distribution. It is expected to be available in the second quarter of 2023.
From f916a475a76d189defee4a66be267487e4e74317 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Mon, 6 Mar 2023 11:33:23 +0530
Subject: [PATCH 34/50] Shifted matrix to index files similar to v8
Based on Simon comments
---
product_docs/docs/pem/9/index.mdx | 12 ++++++++++++
.../docs/pem/9/supported_database_versions.mdx | 12 ------------
2 files changed, 12 insertions(+), 12 deletions(-)
delete mode 100644 product_docs/docs/pem/9/supported_database_versions.mdx
diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx
index 884f03c2e41..3be9ae244b5 100644
--- a/product_docs/docs/pem/9/index.mdx
+++ b/product_docs/docs/pem/9/index.mdx
@@ -59,3 +59,15 @@ redirects:
Welcome to Postgres Enterprise Manager (PEM). PEM consists of components that provide the management and analytical functionality for your EDB Postgres Advanced Server or PostgreSQL database. PEM is based on the Open Source pgAdmin 4 project.
PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users alike, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects.
+
+## Postgres compatibility
+
+The table provides information about the supported versions of Postgres for PEM 9.x.
+
+| |**Monitored Instance** |**Backend Instance** |
+|:-----------------------------------------|:---------------------------|:---------------------|
+|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
+|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] |
+
+Note[^1]: PEM will support PGE as a backend when `sslutils` is available for this server distribution. It is expected to be available in the second quarter of 2023.
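As a practical aside to this note, whether `sslutils` is packaged for a given server distribution can be checked from the server itself. A sketch; the database name is illustrative:

```shell
# Illustrative sketch only: check whether sslutils is available
# as an extension on this server.
psql -d postgres -Atc "SELECT name, default_version
                       FROM pg_available_extensions
                       WHERE name = 'sslutils';"
```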
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx
deleted file mode 100644
index 9fc8b529e77..00000000000
--- a/product_docs/docs/pem/9/supported_database_versions.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: "Postgres compatibility"
----
-The table provides information about the supported versions of Postgres for PEM 9.x.
-
-| |**Monitored Instance** |**Backend Instance** |
-|:-----------------------------------------|:---------------------------|:---------------------|
-|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 |
-|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] |
-
-Note[^1]: PEM will support PGE as a backend when `sslutils` is available for this server distribution. It is expected to be available in the second quarter of 2023.
From 86564650b850a9e176ed51e24a38580432f5f613 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Mon, 6 Mar 2023 10:52:26 +0000
Subject: [PATCH 35/50] Tidied formatting, removed "above" reference
---
product_docs/docs/pgd/4/limitations.mdx | 94 +++++-----------------
product_docs/docs/pgd/5/limitations.mdx | 100 ++++++------------------
2 files changed, 42 insertions(+), 152 deletions(-)
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 4fabf0fe124..fdfdcbbde29 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -8,102 +8,46 @@ when planning your deployment.
## Limits
-- BDR can run hundreds of nodes on good-enough hardware and network. However,
-for mesh-based deployments, we generally don't recommend running more than
-32 nodes in one cluster.
-Each master node can be protected by multiple physical or logical standby nodes.
-There's no specific limit on the number of standby nodes,
-but typical usage is to have 2–3 standbys per master. Standby nodes don't
-add connections to the mesh network, so they aren't included in the
+- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the
32-node recommendation.
-- BDR currently has a hard limit of no more than 1000 active nodes, as this is the
-current maximum Raft connections allowed.
+- BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
-- BDR places a limit that at most 10 databases in any one PostgreSQL instance
-can be BDR nodes across different BDR node groups. However, BDR works best if
-you use only one BDR database per PostgreSQL instance.
+- BDR places a limit that at most 10 databases in any one PostgreSQL instance can be BDR nodes across different BDR node groups. However, BDR works best if you use only one BDR database per PostgreSQL instance.
-- The minimum recommended number of nodes in a group is three to provide fault
-tolerance for BDR's consensus mechanism. With just two nodes, consensus would
-fail if one of the nodes was unresponsive. Consensus is required for some BDR
-operations such as distributed sequence generation. For more information about
-the consensus mechanism used by EDB Postgres Distributed, see
-[Architectural details](/pgd/4/architectures/#architecture-details).
+- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
-- Support for using BDR for multiple databases on the same
-Postgres instance is deprecated beginning with PGD 5 and
-will no longer be supported with PGD 6. As we extend the
-capabilities of the product, the additional complexity introduced operationally
-and functionally is no longer viable in a multi-database design.
+- Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
## Limitations of multiple databases on a single instance
-It is best practice and recommended that only one database per PGD instance be
-configured. The deployment automation with TPA and the tooling such as the CLI
-and proxy already codify that recommendation. Also, as noted above, support for
-multiple databases on the same PGD instance is being deprecated in PGD 5 and
-will no longer be supported in PGD 6.
+It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted [in the Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6.
-While it is still possible to host up to ten databases in a single instance,
-this incurs many immediate risks and current limitations:
+While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations:
-- Administrative commands need to be executed for each database if PGD
- configuration changes are needed, which increases risk for potential
- inconsistencies and errors.
+- Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors.
- Each database needs to be monitored separately, adding overhead.
-- TPAexec assumes one database; additional coding is needed by customers
- or PS in a post-deploy hook to set up replication for additional databases.
+- TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases.
-- HARP works at the Postgres instance level, not at the database level,
- meaning the leader node will be the same for all databases.
+- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
-- Each additional database increases the resource requirements on the server.
- Each one needs its own set of worker processes maintaining replication
- (e.g. logical workers, WAL senders, and WAL receivers). Each one also
- needs its own set of connections to other instances in the replication
- cluster. This might severely impact performance of all databases.
+- Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases.
-- When rebuilding or adding a node, the physical initialization method
- (“bdr_init_physical”) for one database can only be used for one node,
- all other databases will have to be initialized by logical replication,
- which can be problematic for large databases because of the time it might take.
+- When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take.
-- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as
- expected. Since the Postgres WAL is shared between the databases, a synchronous
- commit confirmation may come from any database, not necessarily in the right
- order of commits.
+- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits.
- CLI and OTEL integration (new with v5) assumes one database.
## Other limitations
-This is a (non-comprehensive) list of limitations that are
-expected and are by design. They are not expected to be resolved in the
-future.
-
-- Replacing a node with its physical standby doesn't work for nodes that
- use CAMO/Eager/Group Commit. Combining physical standbys and BDR in
- general isn't recommended, even if otherwise possible.
-
-- A `galloc` sequence might skip some chunks if the
- sequence is created in a rolled back transaction and then created
- again with the same name. This can also occur if it is created and dropped when DDL
- replication isn't active and then it is created again when DDL
- replication is active.
- The impact of the problem is mild, because the sequence
- guarantees aren't violated. The sequence skips only some
- initial chunks. Also, as a workaround you can specify the
- starting value for the sequence as an argument to the
- `bdr.alter_sequence_set_kind()` function.
-
-- Legacy BDR synchronous replication uses a mechanism for transaction
- confirmation different from the one used by CAMO, Eager, and Group Commit.
- The two are not compatible and must not be used together. Therefore, nodes
- that appear in `synchronous_standby_names` must not be part of CAMO, Eager,
- or Group Commit configuration. Using synchronous replication to other nodes,
- including both logical and physical standby is possible.
+This is a non-comprehensive list of limitations that are expected and by design. They aren't expected to be resolved in the future.
+- Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and BDR in general isn't recommended, even if otherwise possible.
+
+- A `galloc` sequence might skip some chunks if the sequence is created in a rolled-back transaction and then created again with the same name. This can also occur if it's created and dropped when DDL replication isn't active and then created again when DDL replication is active. The impact is mild because the sequence guarantees aren't violated; the sequence skips only some initial chunks. As a workaround, you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function (see the sketch after this list).
+
+- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of a CAMO, Eager, or Group Commit configuration. Using synchronous replication to other nodes, including both logical and physical standbys, is possible.
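  As a concrete form of the `galloc` workaround mentioned in this list, here's a minimal sketch. The sequence name is hypothetical, and the exact argument list of `bdr.alter_sequence_set_kind()` can vary between releases, so check the function reference for your version:

  ```sql
  -- Hypothetical sequence; assumes the BDR extension is already installed.
  CREATE SEQUENCE app_id_seq;

  -- Convert it to a galloc sequence, passing an explicit starting value so
  -- any skipped initial chunks don't matter.
  SELECT bdr.alter_sequence_set_kind('app_id_seq', 'galloc', 1000);
  ```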
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index af7580268db..49a668064d7 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -2,101 +2,47 @@
title: "Limitations"
---
-This section covers design limitations of EDB Postgres Distributed (PGD), that
-should be taken into account when planning your deployment.
+This section covers design limitations of EDB Postgres Distributed (PGD) that should be taken into account when planning your deployment.
## Limits
-- PGD can run hundreds of nodes on good-enough hardware and network. However,
-for mesh-based deployments, we generally don't recommend running more than
-32 nodes in one cluster.
-Each master node can be protected by multiple physical or logical standby nodes.
-There's no specific limit on the number of standby nodes,
-but typical usage is to have 2–3 standbys per master. Standby nodes don't
-add connections to the mesh network, so they aren't included in the
-32-node recommendation.
-
-- PGD currently has a hard limit of no more than 1000 active nodes, as this is the
-current maximum Raft connections allowed.
-
-- The minimum recommended number of nodes in a group is three to provide fault
-tolerance for PGD's consensus mechanism. With just two nodes, consensus would
-fail if one of the nodes was unresponsive. Consensus is required for some PGD
-operations such as distributed sequence generation. For more information about
-the consensus mechanism used by EDB Postgres Distributed, see
-[Architectural details](../architectures/#architecture-details).
-
-- Support for using PGD for multiple databases on the same
-Postgres instance is deprecated beginning with PGD 5 and
-will no longer be supported with PGD 6. As we extend the
-capabilities of the product, the additional complexity introduced operationally
-and functionally is no longer viable in a multi-database design.
+- PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
+
+- PGD currently has a hard limit of no more than 1000 active nodes, as this is the current maximum number of Raft connections allowed.
+
+- The minimum recommended number of nodes in a group is three to provide fault tolerance for PGD's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some PGD operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](../architectures/#architecture-details).
+
+- Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
## Limitations of multiple databases on a single instance
-It is best practice and recommended that only one database per PGD instance be
-configured. The deployment automation with TPA and the tooling such as the CLI
-and proxy already codify that recommendation. Also, as noted above, support for
-multiple databases on the same PGD instance is being deprecated in PGD 5 and
-will no longer be supported in PGD 6.
+It's best practice to configure only one database per PGD instance. The deployment automation with TPA and tooling such as the CLI and proxy already codify that recommendation. Also, as noted in the [Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6.
-While it is still possible to host up to ten databases in a single instance,
-this incurs many immediate risks and current limitations:
+While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations:
-- Administrative commands need to be executed for each database if PGD
- configuration changes are needed, which increases risk for potential
- inconsistencies and errors.
+- Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases the risk of inconsistencies and errors (see the sketch after this list).
- Each database needs to be monitored separately, adding overhead.
-- TPAexec assumes one database; additional coding is needed by customers
- or PS in a post-deploy hook to set up replication for additional databases.
+- TPAexec assumes one database; additional coding by customers or EDB Professional Services is needed in a post-deploy hook to set up replication for additional databases.
-- HARP works at the Postgres instance level, not at the database level,
- meaning the leader node will be the same for all databases.
+- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
-- Each additional database increases the resource requirements on the server.
- Each one needs its own set of worker processes maintaining replication
- (e.g. logical workers, WAL senders, and WAL receivers). Each one also
- needs its own set of connections to other instances in the replication
- cluster. This might severely impact performance of all databases.
+- Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (for example, logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact the performance of all databases.
-- When rebuilding or adding a node, the physical initialization method
- (“bdr_init_physical”) for one database can only be used for one node,
- all other databases will have to be initialized by logical replication,
- which can be problematic for large databases because of the time it might take.
+- When rebuilding or adding a node, the physical initialization method (`bdr_init_physical`) can be used for only one database. All other databases have to be initialized by logical replication, which can be problematic for large databases because of the time it might take.
-- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as
- expected. Since the Postgres WAL is shared between the databases, a synchronous
- commit confirmation may come from any database, not necessarily in the right
- order of commits.
+- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits.
- CLI and OTEL integration (new with v5) assumes one database.
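To illustrate the administrative repetition noted at the top of this list, the same PGD configuration change has to be applied in every database that runs BDR. A minimal sketch in psql, where the database names `appdb1` and `appdb2`, the group name `mygroup`, and the specific option changed are all hypothetical:

```sql
-- The same node-group change must be repeated in each database separately.
\c appdb1
SELECT bdr.alter_node_group_config('mygroup', num_writers := 1);

\c appdb2
SELECT bdr.alter_node_group_config('mygroup', num_writers := 1);
```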
## Other limitations
-This is a (non-comprehensive) list of other limitations that are expected and
-are by design. They are not expected to be resolved in the future and should be taken
-under consideration when planning your deployment.
-
-- Replacing a node with its physical standby doesn't work for nodes that
- use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres
- Distributed isn't recommended, even if possible.
-
-- A `galloc` sequence might skip some chunks if the
- sequence is created in a rolled back transaction and then created
- again with the same name. This can also occur if it is created and dropped when DDL
- replication isn't active and then it is created again when DDL
- replication is active.
- The impact of the problem is mild, because the sequence
- guarantees aren't violated. The sequence skips only some
- initial chunks. Also, as a workaround you can specify the
- starting value for the sequence as an argument to the
- `bdr.alter_sequence_set_kind()` function.
-
-- Legacy synchronous replication uses a mechanism for transaction
- confirmation different from the one used by CAMO, Eager, and Group Commit.
- The two are not compatible and must not be used together. Using synchronous
- replication to other non-PGD nodes, including both logical and physical
- standby is possible.
+This is a non-comprehensive list of other limitations that are expected and by design. They aren't expected to be resolved in the future and should be taken into consideration when planning your deployment.
+
+- Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres Distributed isn't recommended, even if possible.
+
+- A `galloc` sequence might skip some chunks if the sequence is created in a rolled-back transaction and then created again with the same name. This can also occur if it's created and dropped when DDL replication isn't active and then created again when DDL replication is active. The impact is mild because the sequence guarantees aren't violated; the sequence skips only some initial chunks. As a workaround, you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
+
+- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two aren't compatible and must not be used together. Using synchronous replication to other non-PGD nodes, including both logical and physical standbys, is possible.
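  For the last point, this is a minimal sketch of directing legacy synchronous replication at a node that's outside any CAMO, Eager, or Group Commit configuration. The standby name `dr_standby` is hypothetical:

  ```sql
  -- Point synchronous replication at a standby that isn't part of the
  -- PGD synchronous commit configuration, then reload.
  ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (dr_standby)';
  SELECT pg_reload_conf();
  ```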
From 0f070a691387656b3cfb296d6ba79dc72410dd86 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Mon, 6 Mar 2023 17:59:36 +0000
Subject: [PATCH 36/50] Revised layout
---
product_docs/docs/pgd/5/known_issues.mdx | 69 +++----------------
product_docs/docs/pgd/5/limitations.mdx | 8 +--
.../docs/pgd/5/other_considerations.mdx | 58 +++++++---------
3 files changed, 40 insertions(+), 95 deletions(-)
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index 665fb4beafb..c57c6be7df0 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -2,75 +2,26 @@
title: 'Known issues'
---
-This section discusses currently known issues in EDB Postgres Distributed 5.
+This section discusses currently known issues in EDB Postgres Distributed 5. These issues are tracked in EDB's ticketing system and are expected to be resolved in a future release.
-## Data Consistency
+- If the resolver for the `update_origin_change` conflict is set to `skip`, `synchronous_commit=remote_apply` is used, and concurrent updates of the same row are repeatedly applied on two different nodes, then one of the update statements might hang due to a deadlock with the BDR writer. As mentioned in the [Conflicts](consistency/conflicts/) chapter, `skip` is not the default resolver for the `update_origin_change` conflict, and this combination isn't intended to be used in production. It discards one of the two conflicting updates based on the order of arrival on that node, which is likely to cause a divergent cluster. In the rare situation that you do choose to use the `skip` conflict resolver, note the issue with the use of the `remote_apply` mode.
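  For recognition purposes only, this is roughly the combination the issue describes; as noted, it isn't intended for production. The node name `node_a` is hypothetical, and the resolver-setting call (`bdr.alter_node_set_conflict_resolver`) should be verified against the function reference for your release:

  ```sql
  -- Not recommended: the combination that can trigger the deadlock above.
  SELECT bdr.alter_node_set_conflict_resolver('node_a',
         'update_origin_change', 'skip');
  ALTER SYSTEM SET synchronous_commit = 'remote_apply';
  SELECT pg_reload_conf();
  ```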
-Read about [Conflicts](consistency/conflicts/) to understand
-the implications of the asynchronous operation mode in terms of data
-consistency.
+- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled.
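  A hedged sketch of keeping the Decoding Worker disabled at the node-group level. The group name `mygroup` is hypothetical, and passing `enable_wal_decoder` through `bdr.alter_node_group_config` is an assumption; confirm the option name against your PGD release:

  ```sql
  -- Keep the Decoding Worker off while CAMO/Eager/Group Commit is in use.
  SELECT bdr.alter_node_group_config('mygroup', enable_wal_decoder := false);
  ```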
-## List of issues
+- Lag control doesn't adjust commit delay in any way on a fully isolated node, that is, when all other nodes are unreachable or not operational. As soon as at least one node is connected, replication lag control picks up its work and adjusts the BDR commit delay again.
-These known issues are tracked in BDR's
-ticketing system and are expected to be resolved in a future
-release.
+- For time-based lag control, BDR currently uses the lag time (measured by commit timestamps) rather than the estimated catchup time that's based on historic apply rate.
-- If the resolver for the `update_origin_change` conflict
- is set to `skip`, `synchronous_commit=remote_apply` is used, and
- concurrent updates of the same row are repeatedly applied on two
- different nodes, then one of the update statements might hang due
- to a deadlock with the BDR writer. As mentioned in the
- [Conflicts](consistency/conflicts/) chapter, `skip` is not the default
- resolver for the `update_origin_change` conflict, and this
- combination isn't intended to be used in production. It discards
- one of the two conflicting updates based on the order of arrival
- on that node, which is likely to cause a divergent cluster.
- In the rare situation that you do choose to use the `skip`
- conflict resolver, note the issue with the use of the
- `remote_apply` mode.
+- Changing the CAMO partners in a CAMO pair isn't currently possible. It's possible only to add or remove a pair. Adding or removing a pair doesn't need a restart of Postgres or even a reload of the configuration.
-- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit.
- Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder`
- disabled.
+- Group Commit can't be combined with [CAMO](durability/camo/) or [Eager All Node replication](consistency/eager/). Eager Replication currently works only by using the "global" BDR commit scope.
-- Lag control doesn't adjust commit delay in any way on a fully
- isolated node, that is, in case all other nodes are unreachable or not
- operational. As soon as at least one node is connected, replication
- lag control picks up its work and adjusts the BDR commit delay
- again.
-
-- For time-based lag control, BDR currently uses the lag time (measured
- by commit timestamps) rather than the estimated catchup time that's
- based on historic apply rate.
-
-- Changing the CAMO partners in a CAMO pair isn't currently possible.
- It's possible only to add or remove a pair.
- Adding or removing a pair doesn't need a restart of Postgres or even a
- reload of the configuration.
-
-- Group Commit cannot be combined with [CAMO](durability/camo/) or [Eager All Node
- replication](consistency/eager/). Eager Replication currently only works by using the
- "global" BDR commit scope.
-
-- Transactions using Eager Replication can't yet execute DDL,
- nor do they support explicit two-phase commit.
- The TRUNCATE command is allowed.
+- Transactions using Eager Replication can't yet execute DDL, nor do they support explicit two-phase commit. The TRUNCATE command is allowed.
- Not all DDL can be run when either CAMO or Group Commit is used.
-- Parallel apply is not currently supported in combination with Group
- Commit, please make sure to disable it when using Group Commit by
- either setting `num_writers` to 1 for the node group (using
- [`bdr.alter_node_group_config`](nodes#bdralter_node_group_config)) or
- via the GUC `bdr.writers_per_subscription` (see
- [Configuration of Generic Replication](configuration#generic-replication)).
+- Parallel apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either setting `num_writers` to 1 for the node group (using [`bdr.alter_node_group_config`](nodes#bdralter_node_group_config)) or via the GUC `bdr.writers_per_subscription` (see [Configuration of Generic Replication](configuration#generic-replication)).
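  Both routes from the previous item, as a minimal sketch; `mygroup` is a hypothetical node-group name:

  ```sql
  -- Option 1: limit the node group to a single writer.
  SELECT bdr.alter_node_group_config('mygroup', num_writers := 1);

  -- Option 2: set the GUC instead, then reload the configuration.
  ALTER SYSTEM SET bdr.writers_per_subscription = 1;
  SELECT pg_reload_conf();
  ```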
-- There currently is no protection against altering or removing a commit
- scope. Running transactions in a commit scope that is concurrently
- being altered or removed can lead to the transaction blocking or
- replication stalling completely due to an error on the downstream node
- attempting to apply the transaction. Ensure that any transactions
- using a specific commit scope have finished before altering or removing it.
+- There's currently no protection against altering or removing a commit scope. Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or to replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it.
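  Because there's no built-in protection, one coarse safeguard is to check for in-flight transactions before altering or removing a commit scope. This sketch uses only standard catalog views and can't tell which commit scope a session is using, so treat it as a rough check rather than a guarantee:

  ```sql
  -- List client sessions that still have an open transaction.
  SELECT pid, usename, datname, state, xact_start
  FROM pg_stat_activity
  WHERE backend_type = 'client backend'
    AND state <> 'idle'
  ORDER BY xact_start;
  ```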
Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 49a668064d7..79abe37150f 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -4,7 +4,7 @@ title: "Limitations"
This section covers design limitations of EDB Postgres Distributed (PGD) that should be taken into account when planning your deployment.
-## Limits
+## Limits on nodes
- PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
@@ -12,11 +12,11 @@ This section covers design limitations of EDB Postgres Distributed (PGD), that s
- The minimum recommended number of nodes in a group is three to provide fault tolerance for PGD's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some PGD operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](../architectures/#architecture-details).
-- Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
-
## Limitations of multiple databases on a single instance
-It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted in the [Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6.
+Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
+
+It's best practice to configure only one database per PGD instance. The deployment automation with TPA and tooling such as the CLI and proxy already codify that recommendation.
While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations:
diff --git a/product_docs/docs/pgd/5/other_considerations.mdx b/product_docs/docs/pgd/5/other_considerations.mdx
index e3ac2c5c7a3..286a801f9a3 100644
--- a/product_docs/docs/pgd/5/other_considerations.mdx
+++ b/product_docs/docs/pgd/5/other_considerations.mdx
@@ -4,43 +4,37 @@ title: "Other considerations"
Review these other considerations when planning your deployment.
-## Deployment and sizing considerations
-
-For production deployments, EDB recommends a minimum of 4 cores for each
-Postgres data node. Witness nodes don't participate in the data replication
-operation and don't have to meet this requirement. Always size logical standbys
-exactly like the data nodes to avoid performance degradations in case of a node
-promotion. In production deployments, PGD proxy nodes require minimum of 1 core,
-and should increase incrementally in correlation with an increase in the number
-of database cores in approximately a 1:10 ratio. EDB recommends detailed
-benchmarking of your specific performance requirements to determine appropriate
-sizing based on your workload. The EDB Professional Services team is available
-to assist if needed.
-
-For development purposes, don't assign Postgres data nodes fewer than two cores.
-The sizing of Barman nodes depends on the database size and the data change
-rate.
-
-You can deploy Postgres data nodes, Barman nodes, and PGD proxy nodes on virtual
-machines or in a bare metal deployment mode. However, don't deploy multiple data
-nodes on VMs that are on the same physical hardware, as that reduces resiliency.
-Also don't deploy multiple PGD proxy nodes on VMs on the same physical hardware,
-as that, too, reduces resiliency.
+## Data Consistency
+
+Read about [Conflicts](consistency/conflicts/) to understand
+the implications of the asynchronous operation mode in terms of data
+consistency.
+
+## Deployment
+
+EDB PGD is intended to be deployed in one of a small number of known-good configurations,
+using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach
+and deployment architecture approved by Technical Support.
+
+Manual deployment isn't recommended and might not be supported.
+
+Log messages and documentation are currently available only in English.
+
+## Sizing considerations
+
+For production deployments, EDB recommends a minimum of 4 cores for each Postgres data node. Witness nodes don't participate in the data replication operation and don't have to meet this requirement. Always size logical standbys exactly like the data nodes to avoid performance degradation in case of a node promotion. In production deployments, PGD proxy nodes require a minimum of 1 core and should scale up with the number of database cores in approximately a 1:10 ratio; for example, 40 database cores across the data nodes suggest roughly 4 cores for the proxies. EDB recommends detailed benchmarking of your specific performance requirements to determine appropriate sizing based on your workload. The EDB Professional Services team is available to assist if needed.
+
+For development purposes, don't assign Postgres data nodes fewer than two cores. The sizing of Barman nodes depends on the database size and the data change rate.
+
+You can deploy Postgres data nodes, Barman nodes, and PGD proxy nodes on virtual machines or in a bare metal deployment mode. However, don't deploy multiple data nodes on VMs that are on the same physical hardware, as that reduces resiliency. Also don't deploy multiple PGD proxy nodes on VMs on the same physical hardware, as that, too, reduces resiliency.
Single PGD Proxy nodes can be co-located with single PGD data nodes.
## Clocks and timezones
-EDB Postgres Distributed has been designed to operate with nodes in multiple
-timezones, allowing a truly worldwide database cluster. Individual servers do
-not need to be configured with matching timezones, though we do recommend using
-log_timezone = UTC to ensure the human readable server log is more accessible
-and comparable.
+EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers don't need to be configured with matching timezones, though we do recommend using `log_timezone = UTC` to ensure the human-readable server log is more accessible and comparable.
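A minimal sketch of applying that recommendation on each node. `ALTER SYSTEM` persists the setting to `postgresql.auto.conf`, and a reload is sufficient because `log_timezone` doesn't require a restart:

```sql
ALTER SYSTEM SET log_timezone = 'UTC';
SELECT pg_reload_conf();
```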
Server clocks should be synchronized using NTP or other solutions.
-Clock synchronization is not critical to performance, as is the case with some
-other solutions. Clock skew can impact Origin Conflict Detection, though EDB
-Postgres Distributed provides controls to report and manage any skew that
-exists. EDB Postgres Distributed also provides Row Version Conflict Detection,
-as described in [Conflict Detection](consistency/conflicts).
+Clock synchronization isn't critical to performance, as it is with some other solutions. Clock skew can impact Origin Conflict Detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides Row Version Conflict Detection, as described in [Conflict Detection](consistency/conflicts).
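Where clock skew makes origin (timestamp-based) conflict detection less reliable, row-version detection can be enabled per table. A hedged sketch: the table name is hypothetical, and the function (`bdr.alter_table_conflict_detection`) and its arguments should be verified against the reference for your release:

```sql
-- Switch a table from origin-based to row-version conflict detection.
SELECT bdr.alter_table_conflict_detection('public.mytable', 'row_version');
```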
+
From 604a2983930e74e94820febeebfd664a0bfc5e2e Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Mon, 6 Mar 2023 18:17:08 +0000
Subject: [PATCH 37/50] Updated overview to move clocks etc out
---
product_docs/docs/pgd/5/overview/index.mdx | 197 ++++-----------------
1 file changed, 38 insertions(+), 159 deletions(-)
diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx
index 19ecc0c52a5..d648a51cfb8 100644
--- a/product_docs/docs/pgd/5/overview/index.mdx
+++ b/product_docs/docs/pgd/5/overview/index.mdx
@@ -3,223 +3,102 @@ title: "Overview"
redirect: bdr
---
-EDB Postgres Distributed (PGD) provides multi-master replication and data
-distribution with advanced conflict management, data-loss protection, and
-throughput up to 5X faster than native logical replication, and enables
-distributed Postgres clusters with high availability up to five 9s.
+EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed Postgres clusters with high availability up to five 9s.
-PGD provides loosely coupled, multi-master logical replication
-using a mesh topology. This means that you can write to any server and the
-changes are sent directly, row-by-row, to all the
-other servers that are part of the same PGD group.
+PGD provides loosely coupled, multi-master logical replication using a mesh topology. This means that you can write to any server and the changes are sent directly, row-by-row, to all the other servers that are part of the same PGD group.
-By default, PGD uses asynchronous replication, applying changes on
-the peer nodes only after the local commit. Multiple synchronous replication
-options are also available.
+By default, PGD uses asynchronous replication, applying changes on the peer nodes only after the local commit. Multiple synchronous replication options are also available.
## Basic architecture
### Multiple groups
-A PGD node is a member of at least one *node group*, and in the most
-basic architecture there is a single node group for the whole PGD
-cluster.
+A PGD node is a member of at least one *node group*, and in the most basic architecture there is a single node group for the whole PGD cluster.
### Multiple masters
-Each node (database) participating in a PGD group both receives
-changes from other members and can be written to directly by the user.
+Each node (database) participating in a PGD group both receives changes from other members and can be written to directly by the user.
-This is distinct from hot or warm standby, where only one master
-server accepts writes, and all the other nodes are standbys that
-replicate either from the master or from another standby.
+This is distinct from hot or warm standby, where only one master server accepts writes, and all the other nodes are standbys that replicate either from the master or from another standby.
-You don't have to write to all the masters all of the time.
-A frequent configuration directs writes mostly to just one master.
+You don't have to write to all the masters all of the time. A frequent configuration directs writes mostly to just one master.
### Asynchronous, by default
-Changes made on one PGD node aren't replicated to other nodes until
-they're committed locally. As a result, the data isn't exactly the
-same on all nodes at any given time. Some nodes have data that
-hasn't yet arrived at other nodes. PostgreSQL's block-based replication
-solutions default to asynchronous replication as well. In PGD,
-because there are multiple masters and, as a result, multiple data streams,
-data on different nodes might differ even when
-`synchronous_commit` and `synchronous_standby_names` are used.
+Changes made on one PGD node aren't replicated to other nodes until they're committed locally. As a result, the data isn't exactly the same on all nodes at any given time. Some nodes have data that hasn't yet arrived at other nodes. PostgreSQL's block-based replication solutions default to asynchronous replication as well. In PGD, because there are multiple masters and, as a result, multiple data streams, data on different nodes might differ even when `synchronous_commit` and `synchronous_standby_names` are used.
### Mesh topology
-PGD is structured around a mesh network where every node connects to every
-other node and all nodes exchange data directly with each other. There's no
-forwarding of data in PGD except in special circumstances such as adding and removing nodes.
-Data can arrive from outside the EDB Postgres Distributed cluster or
-be sent onwards using native PostgreSQL logical replication.
+PGD is structured around a mesh network where every node connects to every other node and all nodes exchange data directly with each other. There's no forwarding of data in PGD except in special circumstances such as adding and removing nodes. Data can arrive from outside the EDB Postgres Distributed cluster or be sent onwards using native PostgreSQL logical replication.
### Logical replication
-Logical replication is a method of replicating data rows and their changes
-based on their replication identity (usually a primary key).
-We use the term *logical* in contrast to *physical* replication, which uses
-exact block addresses and byte-by-byte replication. Index changes aren't
-replicated, thereby avoiding write amplification and reducing bandwidth.
+Logical replication is a method of replicating data rows and their changes based on their replication identity (usually a primary key). We use the term *logical* in contrast to *physical* replication, which uses exact block addresses and byte-by-byte replication. Index changes aren't replicated, thereby avoiding write amplification and reducing bandwidth.
-Logical replication starts by copying a snapshot of the data from the
-source node. Once that is done, later commits are sent to other nodes as
-they occur in real time. Changes are replicated without re-executing SQL,
-so the exact data written is replicated quickly and accurately.
+Logical replication starts by copying a snapshot of the data from the source node. Once that is done, later commits are sent to other nodes as they occur in real time. Changes are replicated without re-executing SQL, so the exact data written is replicated quickly and accurately.
-Nodes apply data in the order in which commits were made on the source node,
-ensuring transactional consistency is guaranteed for the changes from
-any single node. Changes from different nodes are applied independently of
-other nodes to ensure the rapid replication of changes.
+Nodes apply data in the order in which commits were made on the source node, guaranteeing transactional consistency for the changes from any single node. Changes from different nodes are applied independently of other nodes to ensure the rapid replication of changes.
Replicated data is sent in binary form, when it's safe to do so.
### Connection management
-[Connection management](../routing) leverages consensus-driven quorum to determine
-the correct connection end-point in a semi-exclusive manner to prevent unintended
-multi-node writes from an application. This reduces the potential for data conflicts.
+[Connection management](../routing) leverages consensus-driven quorum to determine the correct connection end-point in a semi-exclusive manner to prevent unintended multi-node writes from an application. This reduces the potential for data conflicts.
-[PGD Proxy](../routing/proxy) is the tool for application connection management
-provided as part of EDB Postgres Distributed.
+[PGD Proxy](../routing/proxy) is the tool for application connection management provided as part of EDB Postgres Distributed.
### High availability
-Each master node can be protected by one or more standby nodes, so any node
-that goes down can be quickly replaced and continue. Each standby node can
-be either a logical or a physical standby node.
+Each master node can be protected by one or more standby nodes, so any node that goes down can be quickly replaced and continue. Each standby node can be either a logical or a physical standby node.
-Replication continues between currently connected nodes even if one or more
-nodes are currently unavailable. When the node recovers, replication
-can restart from where it left off without missing any changes.
+Replication continues between currently connected nodes even if one or more nodes are currently unavailable. When the node recovers, replication can restart from where it left off without missing any changes.
-Nodes can run different release levels, negotiating the required protocols
-to communicate. As a result, EDB Postgres Distributed clusters can use rolling upgrades, even
-for major versions of database software.
+Nodes can run different release levels, negotiating the required protocols to communicate. As a result, EDB Postgres Distributed clusters can use rolling upgrades, even for major versions of database software.
-DDL is replicated across nodes by default. DDL execution can
-be user controlled to allow rolling application upgrades, if desired.
+DDL is replicated across nodes by default. DDL execution can be user controlled to allow rolling application upgrades, if desired.
## Architectural options and performance
### Always On architectures
-A number of different architectures can be configured, each of which has
-different performance and scalability characteristics.
+A number of different architectures can be configured, each of which has different performance and scalability characteristics.
-The group is the basic building block consisting of 2+ nodes
-(servers). In a group, each node is in a different availability zone, with dedicated router
-and backup, giving immediate switchover and high availability. Each group has a
-dedicated replication set defined on it. If the group loses a node, you can easily
-repair or replace it by copying an existing node from the group.
+The group is the basic building block consisting of 2+ nodes (servers). In a group, each node is in a different availability zone, with dedicated router and backup, giving immediate switchover and high availability. Each group has a dedicated replication set defined on it. If the group loses a node, you can easily repair or replace it by copying an existing node from the group.
-The Always On architectures are built from either one group in a single location
-or two groups in two separate locations. Each group provides high availability. When two
-groups are leveraged in remote locations, they together also provide disaster recovery (DR).
+The Always On architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR).
-Tables are created across both groups, so any change goes to all nodes, not just to
-nodes in the local group.
+Tables are created across both groups, so any change goes to all nodes, not just to nodes in the local group.
-One node in each group is the target for the main application. All other nodes are described as
-shadow nodes (or "read-write replica"), waiting to take over when needed. If a node
-loses contact, we switch immediately to a shadow node to continue processing. If a
-group fails, we can switch to the other group. Scalability isn't the goal of this
-architecture.
+One node in each group is the target for the main application. All other nodes are described as shadow nodes (or "read-write replica"), waiting to take over when needed. If a node loses contact, we switch immediately to a shadow node to continue processing. If a group fails, we can switch to the other group. Scalability isn't the goal of this architecture.
-Since we write mainly to only one node, the possibility of contention between is
-reduced to almost zero. As a result, performance impact is much reduced.
+Since we write mainly to only one node, the possibility of contention between nodes is reduced to almost zero. As a result, the performance impact is much reduced.
-Secondary applications might execute against the shadow nodes, although these are
-reduced or interrupted if the main application begins using that node.
+Secondary applications might execute against the shadow nodes, although these are reduced or interrupted if the main application begins using that node.
-In the future, one node will be elected as the main replicator to other groups, limiting CPU
-overhead of replication as the cluster grows and minimizing the bandwidth to other groups.
+In the future, one node will be elected as the main replicator to other groups, limiting CPU overhead of replication as the cluster grows and minimizing the bandwidth to other groups.
### Supported Postgres database servers
-PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) and [EDB Postgres Advanced Server](/epas/latest)
-and is deployed as a standard Postgres extension named BDR. See the [Compatibility matrix](../#compatibility-matrix)
-for details of supported version combinations.
-
-Some key PGD features depend on certain core
-capabilities being available in the targeted Postgres database server.
-Therefore, PGD users must also adopt the Postgres
-database server distribution that's best suited to their business needs. For
-example, if having the PGD feature Commit At Most Once (CAMO) is mission
-critical to your use case, don't adopt the community
-PostgreSQL distribution because it doesn't have the core capability required to handle
-CAMO. See the full feature matrix compatibility in
-[Choosing a Postgres distribution](../choosing_server/).
-
-PGD offers close to native Postgres compatibility. However, some access
-patterns don't necessarily work as well in multi-node setup as they do on a
-single instance. There are also some limitations in what can be safely
-replicated in multi-node setting. [Application usage](../appusage)
-goes into detail on how PGD behaves from an application development perspective.
+PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) and [EDB Postgres Advanced Server](/epas/latest) and is deployed as a standard Postgres extension named BDR. See the [Compatibility matrix](../#compatibility-matrix) for details of supported version combinations.
-### Characteristics affecting performance
-
-By default, PGD keeps one copy of each table on each node in the group, and any
-changes propagate to all nodes in the group.
-
-Since copies of data are everywhere, SELECTs need only ever access the local node.
-On a read-only cluster, performance on any one node isn't affected by the
-number of nodes and is immune to replication conflicts on other nodes caused by
-long-running SELECT queries. Thus, adding nodes increases linearly the total possible SELECT
-throughput.
-
-If an INSERT, UPDATE, and DELETE (DML) is performed locally, then the changes
-propagate to all nodes in the group. The overhead of DML apply is less than the
-original execution, so if you run a pure write workload on multiple nodes
-concurrently, a multi-node cluster can handle more TPS than a single node.
-
-Conflict handling has a cost that acts to reduce the throughput. The throughput
-then depends on how much contention the application displays in practice.
-Applications with very low contention perform better than a single node.
-Applications with high contention can perform worse than a single node.
-These results are consistent with any multi-master technology. They aren't particular to PGD.
-
-Synchronous replilcation options can send changes concurrently to multiple nodes
-so that the replication lag is minimized. Adding more nodes means using more CPU for
-replication, so peak TPS reduces slightly as each node is added.
-
-If the workload tries to use all CPU resources, then this resource constrains
-replication, which can then affect the replication lag.
+Some key PGD features depend on certain core capabilities being available in the targeted Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution because it doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../choosing_server/).
-In summary, adding more master nodes to a PGD group doesn't result in significant write
-throughput increase when most tables are replicated because all the writes will
-be replayed on all nodes. Because PGD writes are in general more effective
-than writes coming from Postgres clients by way of SQL, some performance increase
-can be achieved. Read throughput generally scales linearly with the number of
-nodes.
-
-## Deployment
-
-PGD is intended to be deployed in one of a small number of known-good configurations,
-using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach
-and deployment architecture approved by Technical Support.
-
-Manual deployment isn't recommended and might not be supported.
+PGD offers close to native Postgres compatibility. However, some access patterns don't necessarily work as well in a multi-node setup as they do on a single instance. There are also some limitations in what can safely be replicated in a multi-node setting. [Application usage](../appusage) goes into detail on how PGD behaves from an application development perspective.
-Log messages and documentation are currently available only in English.
-
-## Clocks and timezones
+### Characteristics affecting performance
-PGD is designed to operate with nodes in multiple timezones, allowing a
-truly worldwide database cluster. Individual servers don't need to be configured
-with matching timezones, although we do recommend using `log_timezone = UTC` to
-ensure the human-readable server log is more accessible and comparable.
+By default, PGD keeps one copy of each table on each node in the group, and any changes propagate to all nodes in the group.
-Synchronize server clocks using NTP or other solutions.
+Since copies of data are everywhere, SELECTs need only ever access the local node. On a read-only cluster, performance on any one node isn't affected by the number of nodes and is immune to replication conflicts on other nodes caused by long-running SELECT queries. Thus, adding nodes linearly increases the total possible SELECT throughput.
-Clock synchronization isn't critical to performance, as it is with some
-other solutions. Clock skew can impact origin conflict detection, although
-PGD provides controls to report and manage any skew that exists. PGD also
-provides row-version conflict detection, as described in [Conflict detection](../consistency/conflicts).
+If INSERT, UPDATE, or DELETE (DML) operations are performed locally, then the changes propagate to all nodes in the group. The overhead of DML apply is less than the original execution, so if you run a pure write workload on multiple nodes concurrently, a multi-node cluster can handle more TPS than a single node.
+Conflict handling has a cost that acts to reduce the throughput. The throughput then depends on how much contention the application displays in practice. Applications with very low contention perform better than a single node. Applications with high contention can perform worse than a single node. These results are consistent with any multi-master technology. They aren't particular to PGD.
+Synchronous replication options can send changes concurrently to multiple nodes so that the replication lag is minimized. Adding more nodes means using more CPU for replication, so peak TPS reduces slightly as each node is added.
+If the workload tries to use all CPU resources, then this resource constrains replication, which can then affect the replication lag.
+In summary, adding more master nodes to a PGD group doesn't result in significant write throughput increase when most tables are replicated because all the writes will be replayed on all nodes. Because PGD writes are in general more effective than writes coming from Postgres clients by way of SQL, some performance increase can be achieved. Read throughput generally scales linearly with the number of nodes.
From faf333c72bff407aeada9e84eb31dd917dd10c32 Mon Sep 17 00:00:00 2001
From: Arup Roy
Date: Wed, 8 Mar 2023 12:17:42 +0530
Subject: [PATCH 38/50] Changes based on today's review
---
.../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +-
.../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
index b9729420142..56691461802 100644
--- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
@@ -22,7 +22,7 @@ Prerequisite:
- Install the EDB wait states package:
- - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/).
+ - For PostgreSQL, see [EDB Repository](https://repos.enterprisedb.com/).
- For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 650b136fa90..4364f8b2514 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -23,7 +23,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
- Install the EDB wait states package:
- - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/).
+ - For PostgreSQL, see [EDB Repository](https://repos.enterprisedb.com/).
- For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
From 641aef85106451b330cf4df8a53c884b0a981cac Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 13:54:54 +0530
Subject: [PATCH 39/50] Update 04_backing_up_restoring_sql_protect.mdx
---
.../04_backing_up_restoring_sql_protect.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
index 6943392212f..6ff0e7ad6a7 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
@@ -8,7 +8,7 @@ legacyRedirectsGenerated:
-Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are due mainly to the use of object identification numbers (OIDs) in the SQL/Protect tables.
+Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are mainly due to the use of object identification numbers (OIDs) in the SQL/Protect tables.
!!! Note
This information applies if your backup and restore procedures result in re-creating database objects in the new database with new OIDs, such as when using the `pg_dump` backup program.
From 576373ed5bbd56d39183c8c4a9ff7bec3fd7d436 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 15:20:24 +0530
Subject: [PATCH 40/50] Update
product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
---
.../docs/epas/15/epas_security_guide/05_data_redaction.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
index 743ce29bbdd..a9fc118bfe2 100644
--- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
@@ -329,7 +329,7 @@ To use `ALTER REDACTION POLICY`, you must own the table that the data redaction
`scope_value`
- The scope identifies the query part to apply redaction to for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details.
+ The scope identifies the query part to apply redaction for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details.
`exception_value`
From 9868652d4ca5cd762f3ca589323b934edd998f58 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 15:21:54 +0530
Subject: [PATCH 41/50] Update
product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
---
.../docs/epas/15/epas_security_guide/05_data_redaction.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
index a9fc118bfe2..74b5fd7d442 100644
--- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx
@@ -388,7 +388,7 @@ To use `DROP REDACTION POLICY`, you must own the table that the redaction policy
`table_name`
- The optionally sechem-qualified name of the table that the data redaction policy is on.
+ The optionally schema-qualified name of the table that the data redaction policy is on.
`CASCADE`
From 9ac19b0c988af9fec4c64e0d0024f2171f5b4017 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 16:30:48 +0530
Subject: [PATCH 42/50] Updated the example
---
.../03_built-in_packages/18_dbms_utility.mdx | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
index 48dcf7ddb19..2a92d3bbfce 100644
--- a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
+++ b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx
@@ -322,7 +322,6 @@ DB_VERSION( OUT VARCHAR2, OUT VARCHAR2)
The following anonymous block displays the database version information.
-
```sql
DECLARE
@@ -334,10 +333,10 @@ BEGIN
DBMS_OUTPUT.PUT_LINE('Compatibility: ' || v_compat);
END;
-Version: EnterpriseDB 14.0.0 on i686-pc-linux-gnu, compiled by GCC gcc
-(GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 32-bit
-Compatibility: EnterpriseDB 14.0.0 on i686-pc-linux-gnu, compiled by GCC
-gcc (GCC) 4.1.220080704 (Red Hat 4.1.2-48), 32-bit
+Version: PostgreSQL 15.2 (EnterpriseDB Advanced Server 15.2.0 (Debian 15.2.0-1.bullseye)) on x86_64-pc-linux-gnu,
+compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
+Compatibility: PostgreSQL 15.2 (EnterpriseDB Advanced Server 15.2.0 (Debian 15.2.0-1.bullseye)) on x86_64-pc-linux-gnu,
+compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
```
## EXEC_DDL_STATEMENT
From 562c25aa421f7c19114884f4516724118a4d8c04 Mon Sep 17 00:00:00 2001
From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
Date: Wed, 8 Mar 2023 08:34:28 -0500
Subject: [PATCH 43/50] Apply suggestions from code review
Shortening stem sentence in 8
---
product_docs/docs/pem/8/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx
index e08dd30d766..788ced7a919 100644
--- a/product_docs/docs/pem/8/index.mdx
+++ b/product_docs/docs/pem/8/index.mdx
@@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to
## Postgres compatibility
-The table provides information about the supported versions of Postgres for PEM 8.x.
+Supported versions of Postgres for PEM 8.x:
| |**Monitored Instance** |**Backend Instance** |
|:-----------------------------------------|:----------------------|:--------------------|
From 53302068928d491b16a36ae3b0b9967f13ede205 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 19:04:31 +0530
Subject: [PATCH 44/50] minor edit done
---
.../reference_command_line_options.mdx | 4 ----
1 file changed, 4 deletions(-)
diff --git a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
index a2db1adaded..2a847ccdae7 100644
--- a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
+++ b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx
@@ -163,10 +163,6 @@ Include `--unattendedmodeui minimalWithDialogs` to specify that the installer sh
Include the `--version` parameter to retrieve version information about the installer:
-
-
-`EDB Postgres Advanced Server 14.0.3-1 --- Built on 2020-10-23 00:12:44 IB: 20.6.0-202008110127`
-
`--workload_profile {oltp | mixed | reporting}`
Use the `--workload_profile` parameter to specify an initial value for the `edb_dynatune_profile` configuration parameter. `edb_dynatune_profile` controls aspects of performance-tuning based on the type of work that the server performs.
From 340c752125c96fadb0020ae4247c1acb39b378ce Mon Sep 17 00:00:00 2001
From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
Date: Wed, 8 Mar 2023 08:34:46 -0500
Subject: [PATCH 45/50] Shortening stem sentence in 9
---
product_docs/docs/pem/9/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx
index 3be9ae244b5..6196514ff6b 100644
--- a/product_docs/docs/pem/9/index.mdx
+++ b/product_docs/docs/pem/9/index.mdx
@@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to
## Postgres compatibility
-The table provides information about the supported versions of Postgres for PEM 9.x.
+Supported versions of Postgres for PEM 9.x:
| |**Monitored Instance** |**Backend Instance** |
|:-----------------------------------------|:---------------------------|:---------------------|
From 7ae813e1e2f42acf1a0b5b4406c3009e60c81a42 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 8 Mar 2023 19:09:38 +0530
Subject: [PATCH 46/50] formatting edits done
---
.../04_toc_pem_features/21_performance_diagnostic.mdx | 9 ++++-----
.../pem/9/tuning_performance/performance_diagnostic.mdx | 8 ++++----
2 files changed, 8 insertions(+), 9 deletions(-)
diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
index 56691461802..3a9aa58ba16 100644
--- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx
@@ -7,16 +7,15 @@ legacyRedirectsGenerated:
-You can use the Performance Diagnostic dashboard to analyze the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by EDB Wait States module.
+The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB wait states module. For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
-Peformance Diagnostic feature is supported for Advanced Server databases from PEM 7.6 version onwards and for PostgreSQL databases it is supported from PEM 8.0 onwards.
+To analyze the wait states data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level.
!!! Note
- For PostgreSQL databases, Performance Diagnostics is supported only for versions 10, 11, 12, and 13 installed on supported platforms.
+ - For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported CentOS or RHEL platforms.
-For more information on EDB Wait States, see [EDB wait states docs](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+ - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported CentOS or RHEL platforms.
-You can analyze the Wait States data on multiple levels by narrowing down your selection of data. Each level of the graph is populated on the basis of your selection of data at the higher level.
Prerequisite:
diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 4364f8b2514..bb53ee3d807 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -8,16 +8,15 @@ redirects:
- /pem/latest/pem_ent_feat/15_performance_diagnostic/
---
-The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB Wait States module.
+The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB wait states module. For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
+
+To analyze the wait states data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level.
!!! Note
- For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported CentOS or RHEL platforms.
- For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported CentOS or RHEL platforms.
-For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
-
-To analyze the Wait States data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level.
## Prerequisites
@@ -28,6 +27,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
- For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- After you install the EDB Wait States module of EDB Postgres Advanced Server:
+
1. Configure the list of libraries in the `postgresql.conf` file as shown:
```ini
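# The hunk is truncated here; a likely completion, based on the EDB wait
# states prerequisite this step describes (treat this line as an assumption):
shared_preload_libraries = '$libdir/edb_wait_states'
```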
From f579e452d8830bebab7bab8611ee91dd7e2993fa Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 8 Mar 2023 15:09:39 +0000
Subject: [PATCH 47/50] Sync changes back to v4
---
product_docs/docs/pgd/4/known_issues.mdx | 96 ++++---------------
product_docs/docs/pgd/4/limitations.mdx | 16 ++--
.../docs/pgd/4/other_considerations.mdx | 10 +-
product_docs/docs/pgd/5/known_issues.mdx | 3 +-
product_docs/docs/pgd/5/limitations.mdx | 2 +-
.../docs/pgd/5/other_considerations.mdx | 8 +-
6 files changed, 37 insertions(+), 98 deletions(-)
diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx
index f203f3f069d..bd46f91504b 100644
--- a/product_docs/docs/pgd/4/known_issues.mdx
+++ b/product_docs/docs/pgd/4/known_issues.mdx
@@ -2,96 +2,36 @@
title: 'Known issues'
---
-This section discusses currently known issues in EDB Postgres Distributed 4.
-
-## Data Consistency
-
-Read about [Conflicts](/pgd/4/bdr/conflicts/) to understand
-the implications of the asynchronous operation mode in terms of data
-consistency.
-
-## List of issues
-
-These known issues are tracked in BDR's
-ticketing system and are expected to be resolved in a future
-release.
-
-- Performance of HARP in terms of failover and switchover time depends
- non-linearly on the latencies between DCS nodes. Which is why
- we currently recommend using etcd cluster per region for HARP in case
- of EDB Postgres Distributed deployment over multiple regions (typically
- the Gold and Platinum layouts). TPAexec already sets up the etcd do run
- per region cluster for these when `harp_consensus_protocol` option
- is set to `etcd` in the `config.yml`.
-
- It's recommended to increase the `leader_lease_duration` HARP option
- (`harp_leader_lease_duration` in TPAexec) for DCS deployments across higher
- latency network.
-
-- If the resolver for the `update_origin_change` conflict
- is set to `skip`, `synchronous_commit=remote_apply` is used, and
- concurrent updates of the same row are repeatedly applied on two
- different nodes, then one of the update statements might hang due
- to a deadlock with the BDR writer. As mentioned in the
- [Conflicts](/pgd/4/bdr/conflicts/) chapter, `skip` is not the default
- resolver for the `update_origin_change` conflict, and this
- combination isn't intended to be used in production. It discards
- one of the two conflicting updates based on the order of arrival
- on that node, which is likely to cause a divergent cluster.
- In the rare situation that you do choose to use the `skip`
- conflict resolver, note the issue with the use of the
- `remote_apply` mode.
-
-- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit.
- Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder`
- disabled.
+This section discusses currently known issues in EDB Postgres Distributed 4. These issues are tracked in BDR's ticketing system and are expected to be resolved in a future release.
+
+- Performance of HARP in terms of failover and switchover time depends non-linearly on the latencies between DCS nodes, which is why we currently recommend using an etcd cluster per region for HARP when deploying EDB Postgres Distributed over multiple regions (typically the Gold and Platinum layouts). TPAexec already sets up etcd to run as a per-region cluster for these layouts when the `harp_consensus_protocol` option is set to `etcd` in `config.yml` (see the configuration sketch after this list). It's recommended to increase the `leader_lease_duration` HARP option (`harp_leader_lease_duration` in TPAexec) for DCS deployments across higher-latency networks.
+
+- If the resolver for the `update_origin_change` conflict is set to `skip`, `synchronous_commit=remote_apply` is used, and concurrent updates of the same row are repeatedly applied on two different nodes, then one of the update statements might hang due to a deadlock with the BDR writer. As mentioned in the [Conflicts](/pgd/4/bdr/conflicts/) chapter, `skip` is not the default resolver for the `update_origin_change` conflict, and this combination isn't intended to be used in production. It discards one of the two conflicting updates based on the order of arrival on that node, which is likely to cause a divergent cluster. In the rare situation that you do choose to use the `skip` conflict resolver, note the issue with the use of the `remote_apply` mode.
+
+- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled.
- Decoding Worker works only with the default replication sets.
-- Lag control doesn't adjust commit delay in any way on a fully
- isolated node, that is, in case all other nodes are unreachable or not
- operational. As soon as at least one node is connected, replication
- lag control picks up its work and adjusts the BDR commit delay
- again.
+- Lag control doesn't adjust commit delay in any way on a fully isolated node, that is, in case all other nodes are unreachable or not operational. As soon as at least one node is connected, replication lag control picks up its work and adjusts the BDR commit delay again.
-- For time-based lag control, BDR currently uses the lag time (measured
- by commit timestamps) rather than the estimated catchup time that's
- based on historic apply rate.
+- For time-based lag control, BDR currently uses the lag time (measured by commit timestamps) rather than the estimated catchup time that's based on historic apply rate.
-- Changing the CAMO partners in a CAMO pair isn't currently possible.
- It's possible only to add or remove a pair.
- Adding or removing a pair doesn't need a restart of Postgres or even a
- reload of the configuration.
+- Changing the CAMO partners in a CAMO pair isn't currently possible. It's possible only to add or remove a pair. Adding or removing a pair doesn't need a restart of Postgres or even a reload of the configuration.
-- Group Commit cannot be combined with [CAMO](/pgd/4/bdr/camo/) or [Eager All Node
- replication](/pgd/4/bdr/eager/). Eager Replication currently only works by using the
- "global" BDR commit scope.
+- Group Commit cannot be combined with [CAMO](/pgd/4/bdr/camo/) or [Eager All Node replication](/pgd/4/bdr/eager/). Eager Replication currently only works by using the "global" BDR commit scope.
-- Neither Eager replication nor Group Commit support
- `synchronous_replication_availability = 'async'`.
+- Neither Eager replication nor Group Commit support `synchronous_replication_availability = 'async'`.
-- Group Commit doesn't support a timeout of the
- commit after `bdr.global_commit_timeout`.
+- Group Commit doesn't support a timeout of the commit after `bdr.global_commit_timeout`.
-- Transactions using Eager Replication can't yet execute DDL,
- nor do they support explicit two-phase commit.
- The TRUNCATE command is allowed.
+- Transactions using Eager Replication can't yet execute DDL, nor do they support explicit two-phase commit. The TRUNCATE command is allowed.
- Not all DDL can be run when either CAMO or Group Commit is used.
-- Parallel apply is not currently supported in combination with Group
- Commit, please make sure to disable it when using Group Commit by
- either setting `num_writers` to 1 for the node group (using
- [`bdr.alter_node_group_config`](/pgd/4/bdr/nodes#bdralter_node_group_config)) or
- via the GUC `bdr.writers_per_subscription` (see
- [Configuration of Generic Replication](/pgd/4/bdr/configuration#generic-replication)).
-
-- There currently is no protection against altering or removing a commit
- scope. Running transactions in a commit scope that is concurrently
- being altered or removed can lead to the transaction blocking or
- replication stalling completely due to an error on the downstream node
- attempting to apply the transaction. Ensure that any transactions
- using a specific commit scope have finished before altering or removing it.
+- Parallel apply is not currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either setting `num_writers` to 1 for the node group (using [`bdr.alter_node_group_config`](/pgd/4/bdr/nodes#bdralter_node_group_config)) or via the GUC `bdr.writers_per_subscription` (see [Configuration of Generic Replication](/pgd/4/bdr/configuration#generic-replication)). A sketch of the first approach appears after this list.
+
+- There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that is concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it.
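As referenced in the HARP item above, here's a minimal sketch of the relevant TPAexec `config.yml` settings, assuming the standard `cluster_vars` section; the lease value is illustrative, not a recommendation:

```yaml
# Sketch only: per-region etcd consensus for HARP plus a longer leader
# lease for higher-latency DCS deployments (30 is an illustrative value).
cluster_vars:
  harp_consensus_protocol: etcd
  harp_leader_lease_duration: 30
```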
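And, as referenced in the parallel-apply item, a sketch of setting a single writer for a node group; `mygroup` is a placeholder name:

```sql
-- Sketch only: disable parallel apply for the group used with Group Commit
-- by limiting the group to one writer ('mygroup' is hypothetical).
SELECT bdr.alter_node_group_config('mygroup', num_writers := 1);
```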
Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index fdfdcbbde29..b30707b72e8 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -3,13 +3,11 @@ title: "Limitations"
---
-This section covers design limitations of BDR, that should be taken into account
-when planning your deployment.
+This section covers design limitations of BDR that should be taken into account when planning your deployment.
-## Limits
+## Limits on nodes
-- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the
-32-node recommendation.
+- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
- BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
@@ -17,11 +15,12 @@ when planning your deployment.
- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
-- Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
## Limitations of multiple databases on a single instance
-It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted [in the Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6.
+Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design.
+
+As a best practice, we recommend configuring only one database per PGD instance. The deployment automation with TPA and tooling such as the CLI and proxy already codify that recommendation.
While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations:
@@ -49,5 +48,4 @@ This is a (non-comprehensive) list of limitations that are expected and are by d
- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
-- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. Using synchronous replication to other nodes, including both logical and physical standby is possible.
-
+- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/4/other_considerations.mdx b/product_docs/docs/pgd/4/other_considerations.mdx
index 9fd8f4de3f5..f96982165bf 100644
--- a/product_docs/docs/pgd/4/other_considerations.mdx
+++ b/product_docs/docs/pgd/4/other_considerations.mdx
@@ -4,6 +4,11 @@ title: "Other considerations"
Review these other considerations when planning your deployment.
+## Data Consistency
+
+Read about [Conflicts](/pgd/4/bdr/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency.
+
## Deployment and sizing considerations
For production deployments, EDB recommends a minimum of four cores for each Postgres data node and each logical standby. Witness nodes don't participate in the data replication operation and don't have to meet this requirement. Always size logical standbys exactly like the data nodes to avoid performance degradations in case of a node promotion. In production deployments, HARP proxy nodes require a minimum of two cores each. EDB recommends detailed benchmarking based on your performance requirements to determine the correct sizing for your environment. EDB’s Professional Services team is available to assist, if needed.
@@ -16,9 +21,8 @@ You can deploy single HARP proxy nodes with single data nodes on the same physic
## Clocks and timezones
-EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers do not need to be configured with matching timezones, though we do recommend using log_timezone = UTC to ensure the human readable server log is more accessible and comparable.
+EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers do not need to be configured with matching timezones, though we do recommend using `log_timezone = UTC` to ensure the human-readable server log is more accessible and comparable.
Server clocks should be synchronized using NTP or other solutions.
-Clock synchronization is not critical to performance, as is the case with some other solutions. Clock skew can impact Origin Conflict Detection, though
-EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides Row Version Conflict Detection, as described in [Conflict Detection](/pgd/4/bdr/conflicts).
+Unlike some other solutions, clock synchronization isn't critical to performance. Clock skew can impact Origin Conflict Detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides Row Version Conflict Detection, as described in [Conflict Detection](/pgd/4/bdr/conflicts).
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index c57c6be7df0..9ae33626d07 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -24,4 +24,5 @@ This section discusses currently known issues in EDB Postgres Distributed 5. The
- There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that is concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it.
-Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
+Details of other design or implementation [limitations](limitations) are also available.
+
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 79abe37150f..f2b4b37e0bf 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -45,4 +45,4 @@ This is a (non-comprehensive) list of other limitations that are expected and ar
- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
-- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Using synchronous replication to other non-PGD nodes, including both logical and physical standby is possible.
+- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/other_considerations.mdx b/product_docs/docs/pgd/5/other_considerations.mdx
index 286a801f9a3..3a7996607aa 100644
--- a/product_docs/docs/pgd/5/other_considerations.mdx
+++ b/product_docs/docs/pgd/5/other_considerations.mdx
@@ -6,15 +6,11 @@ Review these other considerations when planning your deployment.
## Data Consistency
-Read about [Conflicts](consistency/conflicts/) to understand
-the implications of the asynchronous operation mode in terms of data
-consistency.
+Read about [Conflicts](consistency/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency.
## Deployment
-EDB PGD is intended to be deployed in one of a small number of known-good configurations,
-using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach
-and deployment architecture approved by Technical Support.
+EDB PGD is intended to be deployed in one of a small number of known-good configurations, using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach and deployment architecture approved by Technical Support.
Manual deployment isn't recommended and might not be supported.
From cf8c380fc6a3c0b7bbd931c06249ca0600eb4670 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 8 Mar 2023 17:00:16 +0000
Subject: [PATCH 48/50] Updated nodes limitations
---
product_docs/docs/pgd/5/limitations.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index f2b4b37e0bf..fc58d560248 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -6,7 +6,7 @@ This section covers design limitations of EDB Postgres Distributed (PGD), that s
## Limits on nodes
-- PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
+- PGD can run hundreds of nodes assuming adequate hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 48 nodes in one cluster. If extra read scalability is needed beyond the 48-node limit, subscriber-only nodes can be added without adding connections to the mesh network.
- PGD currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
From cf3aaebb78601ba18666c9c1c988c102b1d04561 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 8 Mar 2023 17:04:14 +0000
Subject: [PATCH 49/50] Updated to new text
---
product_docs/docs/pgd/4/limitations.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index b30707b72e8..321ba978ad3 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -7,7 +7,7 @@ This section covers design limitations of BDR, that should be taken into account
## Limits on nodes
-- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
+- BDR can run hundreds of nodes assuming adequate hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 48 nodes in one cluster. If extra read scalability is needed beyond the 48-node limit, subscriber-only nodes can be added without adding connections to the mesh network.
- BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
From 6c1e091d0e5ddc97a6dcacaae32f783b0320a4b8 Mon Sep 17 00:00:00 2001
From: kelpoole <44814688+kelpoole@users.noreply.github.com>
Date: Wed, 8 Mar 2023 10:32:51 -0700
Subject: [PATCH 50/50] Update limitations.mdx
---
product_docs/docs/pgd/5/limitations.mdx | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index fc58d560248..2b40ec1b509 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -26,7 +26,7 @@ While it is still possible to host up to ten databases in a single instance, thi
- TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases.
-- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
+- PGD-Proxy works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
- Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases.
@@ -45,4 +45,4 @@ This is a (non-comprehensive) list of other limitations that are expected and ar
- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
-- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.
\ No newline at end of file
+- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.