From 8b0b999488be5975e0d988bc09d8c9cc421e63d3 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 21 Feb 2023 17:26:46 -0500 Subject: [PATCH 01/50] Edits to security section and started on install --- .../15/epas_guide/01_introduction/index.mdx | 2 +- .../docs/epas/15/epas_limitations/index.mdx | 2 +- .../epas/15/epas_platform_support/index.mdx | 2 +- .../docs/epas/15/epas_requirements/index.mdx | 11 +- .../01_sql_protect_overview.mdx | 50 +++---- .../02_configuring_sql_protect.mdx | 138 +++++++++--------- .../03_common_maintenance_operations.mdx | 56 ++++--- .../04_backing_up_restoring_sql_protect.mdx | 72 +++++---- .../index.mdx | 2 +- .../03_virtual_private_database.mdx | 25 ++-- .../15/epas_security_guide/04_sslutils.mdx | 14 +- .../epas_security_guide/05_data_redaction.mdx | 86 +++++------ .../epas/15/epas_security_guide/index.mdx | 2 +- product_docs/docs/epas/15/index.mdx | 2 +- .../component_locations.mdx | 8 +- .../linux_install_details/index.mdx | 4 +- ...installing_epas_using_local_repository.mdx | 6 +- .../01_performing_an_upgrade/index.mdx | 2 +- .../01_command_line_options_reference.mdx | 2 +- .../02_invoking_pg_upgrade/index.mdx | 14 +- .../03_upgrading_to_advanced_server.mdx | 16 +- .../04_upgrading_a_pgAgent_installation.mdx | 2 +- .../05_pg_upgrade_troubleshooting.mdx | 2 +- .../index.mdx | 4 +- product_docs/docs/epas/15/upgrading/index.mdx | 2 +- 25 files changed, 255 insertions(+), 271 deletions(-) diff --git a/product_docs/docs/epas/15/epas_guide/01_introduction/index.mdx b/product_docs/docs/epas/15/epas_guide/01_introduction/index.mdx index 182d57f2a54..a43535027b7 100644 --- a/product_docs/docs/epas/15/epas_guide/01_introduction/index.mdx +++ b/product_docs/docs/epas/15/epas_guide/01_introduction/index.mdx @@ -11,7 +11,7 @@ legacyRedirectsGenerated: - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.003.html" --- -See the [release 
notes](../../epas_rel_notes) for the features added in EDB Postgres Advanced Server 14. +See the [release notes](../../epas_rel_notes) for the features added in EDB Postgres Advanced Server 15. ## Hard limits diff --git a/product_docs/docs/epas/15/epas_limitations/index.mdx b/product_docs/docs/epas/15/epas_limitations/index.mdx index e9bed63f207..dbf814fb646 100644 --- a/product_docs/docs/epas/15/epas_limitations/index.mdx +++ b/product_docs/docs/epas/15/epas_limitations/index.mdx @@ -5,4 +5,4 @@ title: "Limitations" The following limitations apply: - EDB recommends you don't store the `data` directory of a production database on an NFS file system. If you plan to go against this recommendation, see the [19.2.2.1. NFS](https://www.postgresql.org/docs/14/creating-cluster.html#CREATING-CLUSTER-FILESYSTEM) section in the PostgreSQL documentation for guidance about configuration. -- The LLVM JIT package is supported on RHEL or CentOS x86 only. +- The LLVM JIT package is supported only on RHEL or CentOS x86. diff --git a/product_docs/docs/epas/15/epas_platform_support/index.mdx b/product_docs/docs/epas/15/epas_platform_support/index.mdx index b2b0e3cf633..fbe75333ee1 100644 --- a/product_docs/docs/epas/15/epas_platform_support/index.mdx +++ b/product_docs/docs/epas/15/epas_platform_support/index.mdx @@ -4,7 +4,7 @@ redirects: - ../epas_inst_linux/02_supported_platforms --- -EDB Postgres Advanced Server v14 supports installations on Linux and Windows platforms. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) for details. +EDB Postgres Advanced Server supports installations on Linux and Windows platforms. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) for details. 
diff --git a/product_docs/docs/epas/15/epas_requirements/index.mdx b/product_docs/docs/epas/15/epas_requirements/index.mdx index aa499c2b1ef..24b91d96500 100644 --- a/product_docs/docs/epas/15/epas_requirements/index.mdx +++ b/product_docs/docs/epas/15/epas_requirements/index.mdx @@ -5,7 +5,7 @@ title: Requirements ## Hardware requirements -The following installation requirements assume you select the default options during the installation process. The minimum hardware requirements to install and run EDB Postgres Advanced Server are: +The following installation requirements assume that you selected the default options during the installation process. The minimum hardware requirements to install and run EDB Postgres Advanced Server are: - 1 GHz processor - 2 GB of RAM @@ -17,13 +17,10 @@ Additional disk space is required for data or supporting components. ### User privileges -To perform an EDB Postgres Advanced Server installation on a Linux system you must have superuser or administrator or sudo privileges. +To perform an EDB Postgres Advanced Server installation on a Linux system you need superuser, administrator, or sudo privileges. -To perform an EDB Postgres Advanced Server installation on a Windows system you must have administrator privilege. If you are installing EDB Postgres Advanced Server on a Windows system that is configured with `User Account Control` enabled, you can assume sufficient privileges to invoke the graphical installer by right clicking on the name of the installer and selecting `Run as administrator` from the context menu. +To perform an EDB Postgres Advanced Server installation on a Windows system, you need administrator privileges. If you're installing EDB Postgres Advanced Server on a Windows system that's configured with **User Account Control** enabled, you can assume the privileges required to invoke the graphical installer. Right-click the name of the installer, and select **Run as administrator** from the context menu. 
### Windows-specific software requirements -Apply the Windows operating system updates before invoking the installer. If the installer encounters errors during the installation process, exit the installation and ensure that your Windows version is up-to-date before restarting the installer. - - - +Apply the Windows operating system updates before invoking the installer. If the installer encounters errors during the installation process, exit the installation, and ensure that your Windows version is up to date. Then restart the installer. diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx index 9d2156040c2..b43d65b54be 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx @@ -8,25 +8,25 @@ legacyRedirectsGenerated: -This section contains an introduction to the different types of SQL injection attacks and describes how SQL/Protect guards against them. +SQL/Protect guards against different types of SQL injection attacks. ## Types of SQL injection attacks -There are a number of different techniques used to perpetrate SQL injection attacks. Each technique is characterized by a certain *signature*. SQL/Protect examines queries for the following signatures: +A number of different techniques are used to perpetrate SQL injection attacks. Each technique is characterized by a certain *signature*. SQL/Protect examines queries for the following signatures. ### Unauthorized relations -While EDB Postgres Advanced Server allows administrators to restrict access to relations (tables, views, etc.), many administrators don't perform this tedious task. 
SQL/Protect provides a *learn* mode that tracks the relations a user accesses. +While EDB Postgres Advanced Server allows administrators to restrict access to relations (such as tables and views), many administrators don't perform this tedious task. SQL/Protect provides a *learn* mode that tracks the relations a user accesses. -This allows administrators to examine the workload of an application, and for SQL/Protect to learn which relations an application should be allowed to access for a given user or group of users in a role. +This mode allows administrators to examine the workload of an application and for SQL/Protect to learn the relations an application is allowed to access for a given user or group of users in a role. -When SQL/Protect is switched to either *passive* or *active* mode, the incoming queries are checked against the list of learned relations. +When SQL/Protect is switched to *passive* or *active* mode, the incoming queries are checked against the list of learned relations. ### Utility commands -A common technique used in SQL injection attacks is to run utility commands, which are typically SQL Data Definition Language (DDL) statements. An example is creating a user-defined function that has the ability to access other system resources. +A common technique used in SQL injection attacks is to run utility commands, which are typically SQL Data Definition Language (DDL) statements. An example is creating a user-defined function that can access other system resources. -SQL/Protect can block the running of all utility commands, which are not normally needed during standard application processing. +SQL/Protect can block the running of all utility commands that aren't normally needed during standard application processing. ### SQL tautology @@ -40,42 +40,42 @@ Attackers usually start identifying security weaknesses using this technique. 
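The tautology signature discussed above can be illustrated with a short sketch. The table name and input here are hypothetical, purely to show the shape of the attack:

```sql
-- An application builds a login check by concatenating user input:
--   SELECT * FROM users WHERE name = '<input>';
-- Injected input:  x' OR 'a' = 'a
-- The resulting statement contains a tautological WHERE clause,
-- which is always true, so every row in the table is returned:
SELECT * FROM users WHERE name = 'x' OR 'a' = 'a';
```

The signature targets the always-true condition itself, not any particular input string.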
SQ

### Unbounded DML statements

-A dangerous action taken during SQL injection attacks is the running of unbounded DML statements. These are `UPDATE` and `DELETE` statements with no `WHERE` clause. For example, an attacker may update all users’ passwords to a known value or initiate a denial of service attack by deleting all of the data in a key table. +A dangerous action taken during SQL injection attacks is running unbounded DML statements. These are `UPDATE` and `DELETE` statements with no `WHERE` clause. For example, an attacker might update all users’ passwords to a known value or initiate a denial of service attack by deleting all of the data in a key table.

## Monitoring SQL injection attacks

-This section describes how SQL/Protect monitors and reports on SQL injection attacks. +SQL/Protect can monitor and report on SQL injection attacks.

### Protected roles

-Monitoring for SQL injection attacks involves analyzing SQL statements originating in database sessions where the current user of the session is a protected role. A *protected role* is an EDB Postgres Advanced Server user or group that the database administrator has chosen to monitor using SQL/Protect. (In EDB Postgres Advanced Server, users and groups are collectively referred to as *roles*.) +Monitoring for SQL injection attacks involves analyzing SQL statements originating in database sessions where the current user of the session is a *protected role*. A protected role is an EDB Postgres Advanced Server user or group that the database administrator chooses to monitor using SQL/Protect. (In EDB Postgres Advanced Server, users and groups are collectively referred to as *roles*.)

-Each protected role can be customized for the types of SQL injection attacks for which it is to be monitored, thus providing different levels of protection by role and significantly reducing the user maintenance load for DBAs.
+You can customize each protected role for the types of SQL injection attacks it's being monitored for. This approach provides different levels of protection by role and significantly reduces the user-maintenance load for DBAs.

-A role with the superuser privilege cannot be made a protected role. If a protected non-superuser role is subsequently altered to become a superuser, certain behaviors are exhibited whenever an attempt is made by that superuser to issue any command: +You can't make a role with the superuser privilege a protected role. If a protected non-superuser role is later altered to become a superuser, certain behaviors are exhibited whenever that superuser tries to issue any command:

-- A warning message is issued by SQL/Protect on every command issued by the protected superuser. -- The statistic in column superusers of `edb_sql_protect_stats` is incremented with every command issued by the protected superuser. See *Attack Attempt Statistics* for information on the `edb_sql_protect_stats` view. +- SQL/Protect issues a warning message for every command issued by the protected superuser. +- The statistic in column superusers of `edb_sql_protect_stats` is incremented with every command issued by the protected superuser. See [Attack attempt statistics](#attack-attempt-statistics) for information on the `edb_sql_protect_stats` view. - When SQL/Protect is in active mode, all commands issued by the protected superuser are prevented from running.

-A protected role that has the superuser privilege should either be altered so that it is no longer a superuser, or it should be reverted back to an unprotected role. +Either alter a protected role that has the superuser privilege so that it's no longer a superuser, or revert it to an unprotected role.

### Attack attempt statistics

-Each usage of a command by a protected role that is considered an attack by SQL/Protect is recorded.
Statistics are collected by type of SQL injection attack as discussed in *Types of SQL Injection Attacks*. +SQL/Protect records each use of a command by a protected role that's considered an attack. It collects statistics by type of SQL injection attack, as discussed in [Types of SQL injection attacks](#types-of-sql-injection-attacks).

-These statistics are accessible from view `edb_sql_protect_stats` that can be easily monitored to identify the start of a potential attack. +You can access these statistics from view `edb_sql_protect_stats`. You can easily monitor this view to identify the start of a potential attack.

The columns in `edb_sql_protect_stats` monitor the following:

- **username.** Name of the protected role. -- **superusers.** Number of SQL statements issued when the protected role is a superuser. In effect, any SQL statement issued by a protected superuser increases this statistic. See *Protected Roles* for information on protected superusers. -- **relations.** Number of SQL statements issued referencing relations that were not learned by a protected role. (That is, relations that are not in a role’s protected relations list.) +- **superusers.** Number of SQL statements issued when the protected role is a superuser. In effect, any SQL statement issued by a protected superuser increases this statistic. See [Protected roles](#protected-roles) for information on protected superusers. +- **relations.** Number of SQL statements issued referencing relations that weren't learned by a protected role. (These relations aren't in a role’s protected relations list.) - **commands.** Number of DDL statements issued by a protected role. - **tautology.** Number of SQL statements issued by a protected role that contained a tautological condition. -- **dml.** Number of `UPDATE` and `DELETE` statements issued by a protected role that did not contain a `WHERE` clause.
+- **dml.** Number of `UPDATE` and `DELETE` statements issued by a protected role that didn't contain a `WHERE` clause. -This gives database administrators the opportunity to react proactively in preventing theft of valuable data or other malicious actions. +These statistics give database administrators the chance to react proactively in preventing theft of valuable data or other malicious actions. If a role is protected in more than one database, the role’s statistics for attacks in each database are maintained separately and are viewable only when connected to the respective database. @@ -84,17 +84,17 @@ If a role is protected in more than one database, the role’s statistics for at ### Attack attempt queries -Each usage of a command by a protected role that is considered an attack by SQL/Protect is recorded in the `edb_sql_protect_queries` view. +Each use of a command by a protected role that's considered an attack by SQL/Protect is recorded in the `edb_sql_protect_queries` view. The `edb_sql_protect_queries` view contains the following columns: - **username.** Database user name of the attacker used to log into the database server. - **ip_address.** IP address of the machine from which the attack was initiated. - **port.** Port number from which the attack originated. -- **machine_name.** Name of the machine, if known, from which the attack originated. -- **date_time.** Date and time at which the query was received by the database server. The time is stored to the precision of a minute. +- **machine_name.** Name of the machine from which the attack originated, if known. +- **date_time.** Date and time when the database server received the query. The time is stored to the precision of a minute. - **query.** The query string sent by the attacker. The maximum number of offending queries that are saved in `edb_sql_protect_queries` is controlled by the `edb_sql_protect.max_queries_to_save` configuration parameter. 
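Putting the two views in this section together, a quick monitoring check might look like the following sketch. It assumes you're connected to the protected database as a superuser and that the SQL/Protect objects live in the `sqlprotect` schema, as set up elsewhere in this guide:

```sql
-- Attack-attempt counters per protected role, by signature type
SELECT username, superusers, relations, commands, tautology, dml
FROM sqlprotect.edb_sql_protect_stats;

-- Most recent offending queries, with their origin
SELECT username, ip_address, machine_name, date_time, query
FROM sqlprotect.edb_sql_protect_queries
ORDER BY date_time DESC;
```

All column names come from the view descriptions above; the output depends on the attacks recorded in the current database.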
-If a role is protected in more than one database, the role’s queries for attacks in each database are maintained separately and are viewable only when connected to the respective database. +If a role is protected in more than one database, the role’s queries for attacks in each database are maintained separately. They are viewable only when connected to the respective database. diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx index 3a03b3a401c..da06afea927 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx @@ -8,38 +8,36 @@ legacyRedirectsGenerated: -Ensure the following prerequisites are met before configuring SQL/Protect: +Make sure the following prerequisites are met before configuring SQL/Protect: -- The library file (`sqlprotect.so` on Linux, `sqlprotect.dll` on Windows) necessary to run `SQL/Protect` should be installed in the `lib` subdirectory of your EDB Postgres Advanced Server home directory. For Windows, this should be done by the EDB Postgres Advanced Server installer. For Linux, install the `edb-asxx-server-sqlprotect` RPM package where `xx` is the EDB Postgres Advanced Server version number. +- The library file (`sqlprotect.so` on Linux, `sqlprotect.dll` on Windows) needed to run `SQL/Protect` is installed in the `lib` subdirectory of your EDB Postgres Advanced Server home directory. For Windows, the EDB Postgres Advanced Server installer does this. For Linux, install the `edb-asxx-server-sqlprotect` RPM package, where `xx` is the EDB Postgres Advanced Server version number. 
-- You also need the SQL script file `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. +- You need the SQL script file `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. - You must configure the database server to use `SQL/Protect`, and you must configure each database that you want `SQL/Protect` to monitor: - - The database server configuration file, `postgresql.conf`, must be modified by adding and enabling configuration parameters used by `SQL/Protect`. - - Database objects used by `SQL/Protect` must be installed in each database that you want `SQL/Protect` to monitor. + - You must modify the database server configuration file, `postgresql.conf`, by adding and enabling configuration parameters used by `SQL/Protect`. + - Install database objects used by `SQL/Protect` in each database that you want `SQL/Protect` to monitor. -**Step 1:** Edit the following configuration parameters in the `postgresql.conf` file located in the `data` subdirectory of your EDB Postgres Advanced Server home directory. +1. Edit the following configuration parameters in the `postgresql.conf` file located in the `data` subdirectory of your EDB Postgres Advanced Server home directory: -- **shared_preload_libraries.** Add `$libdir/sqlprotect` to the list of libraries. + - **shared_preload_libraries.** Add `$libdir/sqlprotect` to the list of libraries. -- **edb_sql_protect.enabled.** Controls whether or not `SQL/Protect` is actively monitoring protected roles by analyzing SQL statements issued by those roles and reacting according to the setting of `edb_sql_protect.level`. When you are ready to begin monitoring with `SQL/Protect` set this parameter to `on`. If this parameter is omitted, the default is `off`. 
+ - **edb_sql_protect.enabled.** Controls whether `SQL/Protect` is actively monitoring protected roles by analyzing SQL statements issued by those roles and reacting according to the setting of `edb_sql_protect.level`. When you're ready to begin monitoring with `SQL/Protect`, set this parameter to `on`. The default is `off`.

-- **edb_sql_protect.level.** Sets the action taken by `SQL/Protect` when a SQL statement is issued by a protected role. If this parameter is omitted, the default behavior is `passive`. Initially, set this parameter to `learn`. + - **edb_sql_protect.level.** Sets the action taken by `SQL/Protect` when a SQL statement is issued by a protected role. The default behavior is `passive`. Initially, set this parameter to `learn`. See [Setting the protection level](#setting-the-protection-level) for more information.

- See [Setting the Protection Level](#setting-the-protection-level) for more information.

- **edb_sql_protect.max_protected_roles.** Sets the maximum number of roles to protect. The default is `64`.

-- **edb_sql_protect.max_protected_roles.** Sets the maximum number of roles that can be protected. If this parameter is omitted, the default setting is `64`.

-- **edb_sql_protect.max_protected_relations.** Sets the maximum number of relations that can be protected per role. If this parameter is omitted, the default setting is `1024`. + - **edb_sql_protect.max_protected_relations.** Sets the maximum number of relations to protect per role. The default is `1024`.

The total number of protected relations for the server is the number of protected relations times the number of protected roles. Every protected relation consumes space in shared memory. The space for the maximum possible protected relations is reserved during database server startup.

-- **edb_sql_protect.max_queries_to_save.** Sets the maximum number of offending queries to save in the `edb_sql_protect_queries` view. If this parameter is omitted, the default setting is `5000`.
If the number of offending queries reaches the limit, additional queries are not saved in the view, but are accessible in the database server log file. + - **edb_sql_protect.max_queries_to_save.** Sets the maximum number of offending queries to save in the `edb_sql_protect_queries` view. The default is `5000`. If the number of offending queries reaches the limit, additional queries aren't saved in the view but are accessible in the database server log file. - The minimum valid value for this parameter is `100`. If a value less than `100` is specified, the database server starts using the default setting of `5000`. A warning message is recorded in the database server log file. + The minimum valid value for this parameter is `100`. If you specify a value less than `100`, the database server starts using the default setting of `5000`. A warning message is recorded in the database server log file. -The following example shows the settings of these parameters in the `postgresql.conf` file: + The following example shows the settings of these parameters in the `postgresql.conf` file: ```ini shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/sqlprotect' @@ -54,21 +52,21 @@ edb_sql_protect.max_protected_relations = 1024 edb_sql_protect.max_queries_to_save = 5000 ``` -**Step 2:** Restart the database server after you have modified the `postgresql.conf` file. +2. After you modify the `postgresql.conf` file, restart the database server. -**On Linux:** Invoke the EDB Postgres Advanced Server service script with the `restart` option. + - **On Linux:** Invoke the EDB Postgres Advanced Server service script with the `restart` option. -On a Redhat or CentOS 7.x installation, use the command: + On a Redhat or CentOS 7.x installation, use the command: -```shell -systemctl restart edb-as-14 -``` + ```shell + systemctl restart edb-as-14 + ``` -**On Windows:** Use the Windows Services applet to restart the service named `edb-as-14`. 
+ **On Windows:** Use the Windows Services applet to restart the service named `edb-as-14`. -**Step 3:** For each database that you want to protect from SQL injection attacks, connect to the database as a superuser (either `enterprisedb` or `postgres`, depending upon your installation options) and run the script `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. The script creates the SQL/Protect database objects in a schema named `sqlprotect`. +3. For each database that you want to protect from SQL injection attacks, connect to the database as a superuser (either `enterprisedb` or `postgres`, depending upon your installation options). Then run the script `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. The script creates the SQL/Protect database objects in a schema named `sqlprotect`. -The following example shows this process to set up protection for a database named `edb`: +This example shows the process to set up protection for a database named `edb`: ```sql $ /usr/edb/as14/bin/psql -d edb -U enterprisedb @@ -109,13 +107,13 @@ SET ## Selecting roles to protect -After the SQL/Protect database objects have been created in a database, you can select the roles for which SQL queries are to be monitored for protection, and the level of protection that is assigned to each role. +After you create the SQL/Protect database objects in a database, you can select the roles for which to monitor SQL queries for protection and the level of protection to assign to each role. ### Setting the protected roles list For each database that you want to protect, you must determine the roles you want to monitor and then add those roles to the *protected roles list* of that database. -**Step 1:** Connect as a superuser to a database that you wish to protect with either `psql` or Postgres Enterprise Manager Client: +1. 
Connect as a superuser to a database that you want to protect with either `psql` or Postgres Enterprise Manager Client: ```sql $ /usr/edb/as14/bin/psql -d edb -U enterprisedb @@ -126,16 +124,16 @@ Type "help" for help. edb=# ``` -**Step 2:** Since the SQL/Protect tables, functions, and views are built under the `sqlprotect` schema, use the `SET search_path` command to include the `sqlprotect` schema in your search path. This eliminates the need to schema-qualify any operation or query involving SQL/Protect database objects: +2. Since the SQL/Protect tables, functions, and views are built under the `sqlprotect` schema, use the `SET search_path` command to include the `sqlprotect` schema in your search path. Doing so eliminates the need to schema-qualify any operation or query involving SQL/Protect database objects: ```sql edb=# SET search_path TO sqlprotect; SET ``` -**Step 3:** Each role that you wish to protect must be added to the protected roles list. This list is maintained in the table `edb_sql_protect`. +3. You must add each role that you want to protect to the protected roles list. This list is maintained in the table `edb_sql_protect`. -To add a role, use the function `protect_role('rolename')`. The following example protects a role named `appuser`: + To add a role, use the function `protect_role('rolename')`. 
This example protects a role named `appuser`: ```sql edb=# SELECT protect_role('appuser'); @@ -146,7 +144,7 @@ __OUTPUT__ (1 row) ``` -You can list the roles that have been added to the protected roles list by issuing the following query: +You can list the roles that were added to the protected roles list with the following query: ```sql edb=# SELECT * FROM edb_sql_protect; @@ -159,7 +157,7 @@ __OUTPUT__ (1 row) ``` -A view is also provided that gives the same information using the object names instead of the Object Identification numbers (OIDs): +A view is also provided that gives the same information using the object names instead of the object identification numbers (OIDs): ```sql edb=# \x @@ -181,42 +179,42 @@ allow_empty_dml | f The `edb_sql_protect.level` configuration parameter sets the protection level, which defines the behavior of SQL/Protect when a protected role issues a SQL statement. The defined behavior applies to all roles in the protected roles lists of all databases configured with SQL/Protect in the database server. -The `edb_sql_protect.level` configuration parameter (in the `postgresql.conf` file) can be set to one of the following values to use either `learn` mode, `passive` mode, or `active` mode: +You can set the `edb_sql_protect.level` configuration parameter in the `postgresql.conf` file to one of the following values to specify learn, passive, or active mode: -- **learn.** Tracks the activities of protected roles and records the relations used by the roles. This is used when initially configuring SQL/Protect so the expected behaviors of the protected applications are learned. -- **passive.** Issues warnings if protected roles are breaking the defined rules, but does not stop any SQL statements from executing. This is the next step after SQL/Protect has learned the expected behavior of the protected roles. This essentially behaves in intrusion detection mode and can be run in production when properly monitored. 
-- **active.** Stops all invalid statements for a protected role. This behaves as a SQL firewall preventing dangerous queries from running. This is particularly effective against early penetration testing when the attacker is trying to determine the vulnerability point and the type of database behind the application. Not only does SQL/Protect close those vulnerability points, but it tracks the blocked queries allowing administrators to be alerted before the attacker finds an alternate method of penetrating the system. +- `learn`. Tracks the activities of protected roles and records the relations used by the roles. Use this mode when first configuring SQL/Protect so the expected behaviors of the protected applications are learned. +- `passive`. Issues warnings if protected roles are breaking the defined rules but doesn't stop any SQL statements from executing. This mode is the next step after SQL/Protect learns the expected behavior of the protected roles. It essentially behaves in intrusion detection mode and you can run this mode in production when properly monitored. +- `active`. Stops all invalid statements for a protected role. This mode behaves as a SQL firewall, preventing dangerous queries from running. This approach is particularly effective against early penetration testing when the attacker is trying to find the vulnerability point and the type of database behind the application. Not only does SQL/Protect close those vulnerability points, it tracks the blocked queries. This tracking allows administrators to be alerted before the attacker finds another way to penetrate the system. -If the `edb_sql_protect.level` parameter is not set or is omitted from the configuration file, the default behavior of `SQL/Protect` is `passive`. +The default mode is `passive`. -If you are using `SQL/Protect` for the first time, set `edb_sql_protect.level` to `learn`. +If you're using `SQL/Protect` for the first time, set `edb_sql_protect.level` to `learn`. 
## Monitoring protected roles

-Once you have configured SQL/Protect in a database, added roles to the protected roles list, and set the desired protection level, you can then activate SQL/Protect in either `learn` mode, `passive` mode, or `active` mode. You can then start running your applications.
+After you configure SQL/Protect in a database, add roles to the protected roles list, and set the desired protection level, you can activate SQL/Protect in `learn`, `passive`, or `active` mode. You can then start running your applications.

-With a new SQL/Protect installation, the first step is to determine the relations that protected roles should be permitted to access during normal operation. Learn mode allows a role to run applications during which time SQL/Protect is recording the relations that are accessed. These are added to the role’s *protected relations list* stored in table `edb_sql_protect_rel`.
+With a new SQL/Protect installation, the first step is to determine the relations that protected roles are allowed to access during normal operation. Learn mode allows a role to run applications while SQL/Protect records the relations that are accessed. These are added to the role’s *protected relations list* stored in table `edb_sql_protect_rel`.

-Monitoring for protection against attack begins when SQL/Protect is run in passive or active mode. In passive and active modes, the role is permitted to access the relations in its protected relations list as these were determined to be the relations the role should be able to access during typical usage.
+Monitoring for protection against attack begins when you run SQL/Protect in passive or active mode. In passive and active modes, the role is permitted to access the relations in its protected relations list. These are the relations the role is expected to access during typical usage.
-However, if a role attempts to access a relation that is not in its protected relations list, a `WARNING` or `ERROR` severity level message is returned by SQL/Protect. The role’s attempted action on the relation may or may not be carried out depending upon whether the mode is passive or active.
+However, if a role attempts to access a relation that isn't in its protected relations list, SQL/Protect returns a `WARNING` or `ERROR` severity-level message. Whether the role’s attempted action on the relation is carried out depends on whether the mode is passive or active.

### Learn mode

-**Step 1:** To activate SQL/Protect in learn mode, set the parameters in the `postgresql.conf` file as shown below:
+To activate SQL/Protect in learn mode:
+
+1. Set the parameters in the `postgresql.conf` file:

```ini
edb_sql_protect.enabled = on
edb_sql_protect.level = learn
```

-**Step 2:** Reload the `postgresql.conf` file.
+2. Reload the `postgresql.conf` file. From the EDB Postgres Advanced Server application menu, select **Expert Configuration > Reload Configuration**.

-Choose `Expert Configuration`, then `Reload Configuration` from the EDB Postgres Advanced Server application menu.
-
-For an alternative method of reloading the configuration file, use the `pg_reload_conf` function. Be sure you are connected to a database as a superuser and execute `function pg_reload_conf` as shown by the following example:
+ For an alternative method of reloading the configuration file, use the `pg_reload_conf` function. Be sure you're connected to a database as a superuser, and execute the `pg_reload_conf` function:

```sql
edb=# SELECT pg_reload_conf();
@@ -227,9 +225,9 @@ __OUTPUT__
 (1 row)
```

-**Step 3:** Allow the protected roles to run their applications.
+3. Allow the protected roles to run their applications.
-As an example the following queries are issued in the `psql` application by protected role `appuser`: + For example, the following queries are issued in the `psql` application by protected role `appuser`: ```sql edb=> SELECT * FROM dept; @@ -255,9 +253,9 @@ NOTICE: SQLPROTECT: Learned relation: 16391 (3 rows) ``` -SQL/Protect generates a `NOTICE` severity level message indicating the relation has been added to the role’s protected relations list. + SQL/Protect generates a `NOTICE` severity-level message, indicating the relation was added to the role’s protected relations list. -In SQL/Protect learn mode, SQL statements that are cause for suspicion are not prevented from executing, but a message is issued to alert the user to potentially dangerous statements as shown by the following example: + In SQL/Protect learn mode, SQL statements that are cause for suspicion aren't prevented from executing. However, a message is issued to alert the user to potentially dangerous statements: ```sql edb=> CREATE TABLE appuser_tab (f1 INTEGER); @@ -269,16 +267,14 @@ NOTICE: SQLPROTECT: Illegal Query: empty DML DELETE 0 ``` -**Step 4:** As a protected role runs applications, the SQL/Protect tables can be queried to observe the addition of relations to the role’s protected relations list. - -Connect as a superuser to the database you are monitoring and set the search path to include the `sqlprotect` schema: +4. As a protected role runs applications, you can query the SQL/Protect tables to see that relations were added to the role’s protected relations list. 
Connect as a superuser to the database you're monitoring, and set the search path to include the `sqlprotect` schema: ```sql edb=# SET search_path TO sqlprotect; SET ``` -Query the `edb_sql_protect_rel` table to see the relations added to the protected relations list: + Query the `edb_sql_protect_rel` table to see the relations added to the protected relations list: ```sql edb=# SELECT * FROM edb_sql_protect_rel; @@ -291,7 +287,7 @@ __OUTPUT__ (3 rows) ``` -The `list_protected_rels` view provides more comprehensive information along with the object names instead of the OIDs: + The `list_protected_rels` view provides more comprehensive information along with the object names instead of the OIDs: ```sql edb=# SELECT * FROM list_protected_rels; @@ -306,20 +302,20 @@ __OUTPUT__ ### Passive mode -Once you have determined that a role’s applications have accessed all relations they need, you can now change the protection level so that SQL/Protect can actively monitor the incoming SQL queries and protect against SQL injection attacks. +After a role’s applications have accessed all relations they need, you can change the protection level so that SQL/Protect can actively monitor the incoming SQL queries and protect against SQL injection attacks. -Passive mode is the less restrictive of the two protection modes, passive and active. +Passive mode is a less restrictive protection mode than active. -**Step 1:** To activate `SQL/Protect` in passive mode, set the following parameters in the `postgresql.conf` file as shown below: +1. To activate `SQL/Protect` in passive mode, set the following parameters in the `postgresql.conf` file: ```ini edb_sql_protect.enabled = on edb_sql_protect.level = passive ``` -**Step 2:** Reload the configuration file as shown in Step 2 of the [Learn Mode](#learn-mode) section. +2. Reload the configuration file as shown in Step 2 of [Learn mode](#learn-mode). -Now SQL/Protect is in passive mode. 
For relations that have been learned such as the `dept` and `emp` tables of the prior examples, SQL statements are permitted with no special notification to the client by `SQL/Protect` as shown by the following queries run by user `appuser`:
+ Now SQL/Protect is in passive mode. For relations that were learned, such as the `dept` and `emp` tables of the prior examples, SQL statements are permitted, and `SQL/Protect` gives no special notification to the client, as shown by the following queries run by user `appuser`:

```sql
edb=> SELECT * FROM dept;
@@ -343,7 +339,7 @@ __OUTPUT__
 (3 rows)
```

-SQL/Protect does not prevent any SQL statement from executing, but issues a message of `WARNING` severity level for SQL statements executed against relations that were not learned, or for SQL statements that contain a prohibited signature as shown in the following example:
+ SQL/Protect doesn't prevent any SQL statement from executing. However, it issues a message of `WARNING` severity level for SQL statements executed against relations that weren't learned. It also issues a warning for SQL statements that contain a prohibited signature:

```sql
edb=> CREATE TABLE appuser_tab_2 (f1 INTEGER);
@@ -366,11 +362,11 @@
f1
(2 rows)
```

-**Step 3:** Monitor the statistics for suspicious activity.
+3. Monitor the statistics for suspicious activity.

-By querying the view `edb_sql_protect_stats`, you can see the number of times SQL statements were executed that referenced relations that were not in a role’s protected relations list, or contained SQL injection attack signatures. See *Attack Attempt Statistics* for more information on view `edb_sql_protect_stats`.
+ By querying the view `edb_sql_protect_stats`, you can see the number of times SQL statements were executed that referenced relations that weren't in a role’s protected relations list or contained SQL injection attack signatures.
-The following is a query on `edb_sql_protect_stats`: + The following is a query on `edb_sql_protect_stats`: ```sql edb=# SET search_path TO sqlprotect; @@ -383,11 +379,11 @@ __OUTPUT__ (1 row) ``` -**Step 4:** View information on specific attacks. +4. View information on specific attacks. -By querying the `edb_sql_protect_queries` view, you can see the SQL statements that were executed that referenced relations that were not in a role’s protected relations list, or contained SQL injection attack signatures. See *Attack Attempt Queries* for more information on view `edb_sql_protect_queries`. + By querying the `edb_sql_protect_queries` view, you can see the SQL statements that were executed that referenced relations that weren't in a role’s protected relations list or that contained SQL injection attack signatures. -The following code sample shows a query on `edb_sql_protect_queries`: + The following code sample shows a query on `edb_sql_protect_queries`: ```sql edb=# SELECT * FROM edb_sql_protect_queries; @@ -429,16 +425,16 @@ __OUTPUT__ In active mode, disallowed SQL statements are prevented from executing. Also, the message issued by SQL/Protect has a higher severity level of `ERROR` instead of `WARNING`. -**Step 1:** To activate `SQL/Protect` in active mode, set the following parameters in the `postgresql.conf` file as shown below: +1. To activate `SQL/Protect` in active mode, set the following parameters in the `postgresql.conf` file: ```ini edb_sql_protect.enabled = on edb_sql_protect.level = active ``` -**Step 2:** Reload the configuration file as shown in Step 2 of the [Learn Mode](#learn-mode) section. +2. Reload the configuration file as shown in Step 2 of [Learn mode](#learn-mode). 
-The following example illustrates SQL statements similar to those given in the examples of Step 2 in `Passive Mode`, but executed by user `appuser` when `edb_sql_protect.level` is set to `active`:
+This example shows SQL statements similar to those given in the examples of Step 2 in [Passive mode](#passive-mode). These statements are executed by user `appuser` when `edb_sql_protect.level` is set to `active`:

```sql
edb=> CREATE TABLE appuser_tab_3 (f1 INTEGER);
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx
index 31cd4947d08..8a84f44da92 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx
@@ -8,13 +8,11 @@ legacyRedirectsGenerated:

-The following describes how to perform other common operations.
-
-You must be connected as a superuser to perform these operations and have included the `sqlprotect` schema in your search path.
+You must be connected as a superuser to perform these operations and include the `sqlprotect` schema in your search path.

## Adding a role to the protected roles list

-To add a role to the protected roles list run `protect_role('rolename')` as shown in the following example:
+To add a role to the protected roles list, run `protect_role('rolename')`, as shown in this example:

```sql
edb=# SELECT protect_role('newuser');
@@ -34,15 +32,15 @@ unprotect_role('rolename')

unprotect_role(roleoid)
```

-The variation of the function using the `OID` is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before removing the role from the protected roles list.
If a query on a SQL/Protect relation returns a value such as `unknown (OID=16458)` for the user name, use the `unprotect_role(roleoid)` form of the function to remove the entry for the deleted role from the protected roles list. +The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before removing the role from the protected roles list. If a query on a SQL/Protect relation returns a value such as `unknown (OID=16458)` for the user name, use the `unprotect_role(roleoid)` form of the function to remove the entry for the deleted role from the protected roles list. Removing a role using these functions also removes the role’s protected relations list. -The statistics for a role that has been removed are not deleted until you use the [drop_stats function](#drop_stats). +To delete the statistics for a role that was removed, use the [drop_stats function](#drop_stats). -The offending queries for a role that has been removed are not deleted until you use the [drop_queries function](#drop_queries). +To delete the queries for a role that was removed, use the [drop_queries function](#drop_queries). -The following is an example of the `unprotect_role` function: +This example shows the `unprotect_role` function: ```sql edb=# SELECT unprotect_role('newuser'); @@ -52,7 +50,7 @@ __OUTPUT__ (1 row) ``` -Alternatively, the role could be removed by giving its OID of `16693`: +Alternatively, you can remove the role by giving its OID of `16693`: ```sql edb=# SELECT unprotect_role(16693); @@ -64,23 +62,23 @@ __OUTPUT__ ## Setting the types of protection for a role -You can change whether or not a role is protected from a certain type of SQL injection attack. +You can change whether a role is protected from a certain type of SQL injection attack. 
-Change the Boolean value for the column in `edb_sql_protect` corresponding to the type of SQL injection attack for which protection of a role is to be disabled or enabled. +Change the Boolean value for the column in `edb_sql_protect` corresponding to the type of SQL injection attack for which you want to enable or disable protection of a role. Be sure to qualify the following columns in your `WHERE` clause of the statement that updates `edb_sql_protect`: -- **dbid.** OID of the database for which you are making the change -- **roleid.** OID of the role for which you are changing the Boolean settings +- **dbid.** OID of the database for which you're making the change. +- **roleid.** OID of the role for which you're changing the Boolean settings -For example, to allow a given role to issue utility commands, update the `allow_utility_cmds` column as follows: +For example, to allow a given role to issue utility commands, update the `allow_utility_cmds` column: ```sql UPDATE edb_sql_protect SET allow_utility_cmds = TRUE WHERE dbid = 13917 AND roleid = 16671; ``` -You can verify the change was made by querying `edb_sql_protect` or `list_protected_users`. In the following query note that column `allow_utility_cmds` now contains `t`: +You can verify the change was made by querying `edb_sql_protect` or `list_protected_users`. In the following query, note that column `allow_utility_cmds` now contains `t`: ```sql edb=# SELECT dbid, roleid, allow_utility_cmds FROM edb_sql_protect; @@ -95,7 +93,7 @@ The updated rules take effect on new sessions started by the role since the chan ## Removing a relation from the protected relations list -If SQL/Protect has learned that a given relation is accessible for a given role, you can subsequently remove that relation from the role’s protected relations list. +If SQL/Protect learns that a given relation is accessible for a given role, you can later remove that relation from the role’s protected relations list. 
Delete its entry from the `edb_sql_protect_rel` table using any of the following functions: @@ -105,11 +103,11 @@ unprotect_rel('rolename', 'schema', 'relname') unprotect_rel(roleoid, reloid) ``` -If the relation given by `relname` is not in your current search path, specify the relation’s schema using the second function format. +If the relation given by `relname` isn't in your current search path, specify the relation’s schema using the second function format. The third function format allows you to specify the OIDs of the role and relation, respectively, instead of their text names. -The following example illustrates the removal of the `public.emp` relation from the protected relations list of the role `appuser`: +This example removes the `public.emp` relation from the protected relations list of the role `appuser`: ```sql edb=# SELECT unprotect_rel('appuser', 'public', 'emp'); @@ -119,7 +117,7 @@ __OUTPUT__ (1 row) ``` -The following query shows there is no longer an entry for the `emp` relation: +This query shows there's no longer an entry for the `emp` relation: ```sql edb=# SELECT * FROM list_protected_rels; @@ -131,13 +129,13 @@ __OUTPUT__ (2 rows) ``` -SQL/Protect now issues a warning or completely blocks access (depending upon the setting of `edb_sql_protect.level`) whenever the role attempts to utilize that relation. +SQL/Protect now issues a warning or completely blocks access (depending on the setting of `edb_sql_protect.level`) when the role attempts to utilize that relation. 
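+If you want to use the OID form of `unprotect_rel`, you can look up the OIDs in the system catalogs first. For example, assuming the role and relation names from the earlier examples:
+
+```sql
+edb=# SELECT oid FROM pg_roles WHERE rolname = 'appuser';
+edb=# SELECT oid FROM pg_class WHERE relname = 'emp' AND relnamespace = 'public'::regnamespace;
+```
+
+Pass the two returned OIDs to `unprotect_rel(roleoid, reloid)`.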
## Deleting statistics -You can delete statistics from view `edb_sql_protect_stats` using either of the two following functions: +You can delete statistics from view `edb_sql_protect_stats` using either of the following functions: ```sql drop_stats('rolename') @@ -147,7 +145,7 @@ drop_stats(roleoid) The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s statistics using `drop_stats('rolename')`. If a query on `edb_sql_protect_stats` returns a value such as `unknown (OID=16458)` for the user name, use the `drop_stats(roleoid)` form of the function to remove the deleted role’s statistics from `edb_sql_protect_stats`. -The following is an example of the `drop_stats` function: +This example shows the `drop_stats` function: ```sql edb=# SELECT drop_stats('appuser'); @@ -164,7 +162,7 @@ __OUTPUT__ (0 rows) ``` -The following is an example of using the `drop_stats(roleoid)` form of the function when a role is dropped before deleting its statistics: +This example uses the `drop_stats(roleoid)` form of the function when a role is dropped before deleting its statistics: ```sql edb=# SELECT * FROM edb_sql_protect_stats; @@ -196,7 +194,7 @@ __OUTPUT__ ## Deleting offending queries -You can delete offending queries from view `edb_sql_protect_queries` using either of the two following functions: +You can delete offending queries from view `edb_sql_protect_queries` using either of the following functions: ```sql drop_queries('rolename') @@ -204,9 +202,9 @@ drop_queries('rolename') drop_queries(roleoid) ``` -The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s offending queries using `drop_queries('rolename')`. 
If a query on `edb_sql_protect_queries` returns a value such as `unknown (OID=16454)` for the user name, use the `drop_queries(roleoid)` form of the function to remove the deleted role’s offending queries from `edb_sql_protect_queries`. +The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s offending queries using `drop_queries('rolename')`. If a query on `edb_sql_protect_queries` returns a value such as `unknown (OID=16454)` for the user name, use the `drop_queries(roleoid)` form of the function to remove the deleted role’s queries from `edb_sql_protect_queries`. -The following is an example of the `drop_queries` function: +This example shows the `drop_queries` function: ```sql edb=# SELECT drop_queries('appuser'); @@ -224,7 +222,7 @@ __OUTPUT__ (0 rows) ``` -The following is an example of using the `drop_queries(roleoid)` form of the function when a role is dropped before deleting its queries: +This example uses the `drop_queries(roleoid)` form of the function when a role is dropped before deleting its queries: ```sql edb=# SELECT username, query FROM edb_sql_protect_queries; @@ -256,6 +254,6 @@ __OUTPUT__ ## Disabling and enabling monitoring -If you wish to turn off SQL/Protect monitoring, modify the `postgresql.conf` file, setting the `edb_sql_protect.enabled` parameter to `off`. After saving the file, reload the server configuration to apply the settings. +If you want to turn off SQL/Protect monitoring, modify the `postgresql.conf` file, setting the `edb_sql_protect.enabled` parameter to `off`. After saving the file, reload the server configuration to apply the settings. -If you wish to turn on SQL/Protect monitoring, modify the `postgresql.conf` file, setting the `edb_sql_protect.enabled` parameter to `on`. After saving the file, reload the server configuration to apply the settings. 
+If you want to turn on SQL/Protect monitoring, modify the `postgresql.conf` file, setting the `edb_sql_protect.enabled` parameter to `on`. Save the file, and then reload the server configuration to apply the settings.
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
index 18b597f08d3..33beea7ecaa 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx
@@ -8,30 +8,30 @@

-Backing up a database that is configured with SQL/Protect, and then restoring the backup file to a new database requires additional considerations to what is normally associated with backup and restore procedures. This is primarily due to the use of Object Identification numbers (OIDs) in the SQL/Protect tables as explained in this section.
+Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are due mainly to the use of object identification numbers (OIDs) in the SQL/Protect tables.

!!! Note
-    This section applies if your backup and restore procedures result in the re-creation of database objects in the new database with new OIDs such as is the case when using the `pg_dump` backup program.
+    This information applies if your backup and restore procedures result in re-creating database objects in the new database with new OIDs, such as when you use the `pg_dump` backup program.
-If you are backing up your EDB Postgres Advanced Server database server by simply using the operating system’s copy utility to create a binary image of the EDB Postgres Advanced Server data files (file system backup method), then this section does not apply. + If you're backing up your EDB Postgres Advanced Server database server by using the operating system’s copy utility to create a binary image of the EDB Postgres Advanced Server data files (file system backup method), then this information doesn't apply. ## Object identification numbers in SQL/Protect tables -SQL/Protect uses two tables (`edb_sql_protect` and `edb_sql_protect_rel`) to store information on database objects such as databases, roles, and relations. References to these database objects in these tables are done using the objects’ OIDs, and not the objects’ text names. The OID is a numeric data type used by EDB Postgres Advanced Server to uniquely identify each database object. +SQL/Protect uses two tables, `edb_sql_protect` and `edb_sql_protect_rel`, to store information on database objects such as databases, roles, and relations. References to these database objects in these tables are done using the objects’ OIDs, not the objects’ text names. The OID is a numeric data type used by EDB Postgres Advanced Server to uniquely identify each database object. -When a database object is created, EDB Postgres Advanced Server assigns an OID to the object, which is then used whenever a reference is needed to the object in the database catalogs. If you create the same database object in two databases, such as a table with the same `CREATE TABLE` statement, each table is assigned a different OID in each database. +When a database object is created, EDB Postgres Advanced Server assigns an OID to the object, which is then used when a reference to the object is needed in the database catalogs. 
If you create the same database object in two databases, such as a table with the same `CREATE TABLE` statement, each table is assigned a different OID in each database. -In a backup and restore operation that results in the re-creation of the backed up database objects, the restored objects end up with different OIDs in the new database than what they were assigned in the original database. As a result, the OIDs referencing databases, roles, and relations stored in the `edb_sql_protect` and `edb_sql_protect_rel` tables are no longer valid when these tables are simply dumped to a backup file and then restored to a new database. +In a backup and restore operation that results in re-creating the backed-up database objects, the restored objects end up with different OIDs in the new database from what they were assigned in the original database. As a result, the OIDs referencing databases, roles, and relations stored in the `edb_sql_protect` and `edb_sql_protect_rel` tables are no longer valid when these tables are dumped to a backup file and then restored to a new database. -The following sections describe two functions, `export_sqlprotect` and `import_sqlprotect`, that are used specifically for backing up and restoring SQL/Protect tables in order to ensure the OIDs in the SQL/Protect tables reference the correct database objects after the tables are restored. +Two functions, `export_sqlprotect` and `import_sqlprotect`, are used specifically for backing up and restoring SQL/Protect tables to ensure the OIDs in the SQL/Protect tables reference the correct database objects after the tables are restored. ## Backing up the database -The following steps back up a database that has been configured with SQL/Protect. +Back up a database that was configured with SQL/Protect. -**Step 1:** Create a backup file using `pg_dump`. +1. Create a backup file using `pg_dump`. 
-This example shows a plain-text backup file named `/tmp/edb.dmp` created from database `edb` using the `pg_dump` utility program: + This example shows a plain-text backup file named `/tmp/edb.dmp` created from database `edb` using the `pg_dump` utility program: ```shell $ cd /usr/edb/as14/bin @@ -40,9 +40,9 @@ Password: $ ``` -**Step 2:** Connect to the database as a superuser and export the SQL/Protect data using the `export_sqlprotect('sqlprotect_file')` function (where `sqlprotect_file` is the fully qualified path to a file where the `SQL/Protect` data is to be saved). +2. Connect to the database as a superuser, and export the SQL/Protect data using the `export_sqlprotect('sqlprotect_file')` function. `sqlprotect_file` is the fully qualified path to a file where the SQL/Protect data is saved. -The `enterprisedb` operating system account (`postgres` if you installed EDB Postgres Advanced Server in PostgreSQL compatibility mode) must have read and write access to the directory specified in `sqlprotect_file`. + The `enterprisedb` operating system account (`postgres` if you installed EDB Postgres Advanced Server in PostgreSQL compatibility mode) must have read and write access to the directory specified in `sqlprotect_file`. ```sql edb=# SELECT sqlprotect.export_sqlprotect('/tmp/sqlprotect.dmp'); @@ -56,9 +56,9 @@ The files `/tmp/edb.dmp` and `/tmp/sqlprotect.dmp` comprise your total database ## Restoring From the Backup Files -**Step 1:** Restore the backup file to the new database. +1. Restore the backup file to the new database. -The following example uses the `psql` utility program to restore the plain-text backup file `/tmp/edb.dmp` to a newly created database named `newdb`: + This example uses the `psql` utility program to restore the plain-text backup file `/tmp/edb.dmp` to a newly created database named `newdb`: ```sql $ /usr/edb/as14/bin/psql -d newdb -U enterprisedb -f /tmp/edb.dmp @@ -75,9 +75,9 @@ CREATE SCHEMA . 
``` -**Step 2:** Connect to the new database as a superuser and delete all rows from the `edb_sql_protect_rel` table. +2. Connect to the new database as a superuser, and delete all rows from the `edb_sql_protect_rel` table. -This step removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file has been restored: + This deletion removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored: ```sql $ /usr/edb/as14/bin/psql -d newdb -U enterprisedb @@ -89,18 +89,18 @@ newdb=# DELETE FROM sqlprotect.edb_sql_protect_rel; DELETE 2 ``` -**Step 3:** Delete all rows from the `edb_sql_protect` table. +3. Delete all rows from the `edb_sql_protect` table. -This step removes any existing rows in the `edb_sql_protect` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file has been restored: + This deletion removes any existing rows in the `edb_sql_protect` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored: ```sql newdb=# DELETE FROM sqlprotect.edb_sql_protect; DELETE 1 ``` -**Step 4:** Delete any statistics that may exist for the database. +4. Delete any statistics that exist for the database. -This step removes any existing statistics that may exist for the database to which you are restoring the backup. The following query displays any existing statistics: + This deletion removes any existing statistics that exist for the database to which you're restoring the backup. 
The following query displays any existing statistics:

```sql
newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;
@@ -110,9 +110,9 @@ __OUTPUT__
(0 rows)
```

-For each row that appears in the preceding query, use the `drop_stats` function specifying the role name of the entry.
+ For each row that appears in the preceding query, use the `drop_stats` function, specifying the role name of the entry.

-For example, if a row appeared with `appuser` in the `username` column, issue the following command to remove it:
+ For example, if a row appeared with `appuser` in the `username` column, issue the following command to remove it:

```sql
newdb=# SELECT sqlprotect.drop_stats('appuser');
@@ -122,9 +122,9 @@ __OUTPUT__
(1 row)
```

-**Step 5:** Delete any offending queries that may exist for the database.
+5. Delete any offending queries that exist for the database.

-This step removes any existing queries that may exist for the database to which you are restoring the backup. The following query displays any existing queries:
+ This deletion removes any offending queries that exist for the database to which you're restoring the backup. This query displays any existing queries:

```sql
edb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;
@@ -134,9 +134,7 @@ __OUTPUT__
(0 rows)
```

-For each row that appears in the preceding query, use the `drop_queries` function specifying the role name of the entry.
-
-For example, if a row appeared with `appuser` in the `username` column, issue the following command to remove it:
+ For each row that appears in the preceding query, use the `drop_queries` function, specifying the role name of the entry.
For example, if a row appeared with `appuser` in the `username` column, issue the following command to remove it: ```sql edb=# SELECT sqlprotect.drop_queries('appuser'); @@ -146,11 +144,11 @@ __OUTPUT__ (1 row) ``` -**Step 6:** Make sure the role names that were protected by SQL/Protect in the original database exist in the database server where the new database resides. +6. Make sure the role names that were protected by SQL/Protect in the original database exist in the database server where the new database resides. -If the original and new databases reside in the same database server, then nothing needs to be done assuming you have not deleted any of these roles from the database server. + If the original and new databases reside in the same database server, then you don't need to do anything, provided you didn't delete any of these roles from the database server. -**Step 7:** Run the function `import_sqlprotect('sqlprotect_file')` where `sqlprotect_file` is the fully qualified path to the file you created in Step 2 of *Backing Up the Database*. +7. Run the function `import_sqlprotect('sqlprotect_file')`, where `sqlprotect_file` is the fully qualified path to the file you created in Step 2 of [Backing up the database](#backing-up-the-database). ```sql newdb=# SELECT sqlprotect.import_sqlprotect('/tmp/sqlprotect.dmp'); @@ -160,9 +158,9 @@ __OUTPUT__ (1 row) ``` -Tables `edb_sql_protect` and `edb_sql_protect_rel` are now populated with entries containing the OIDs of the database objects as assigned in the new database. The statistics view `edb_sql_protect_stats` also now displays the statistics imported from the original database. + Tables `edb_sql_protect` and `edb_sql_protect_rel` are populated with entries containing the OIDs of the database objects as assigned in the new database. The statistics view `edb_sql_protect_stats` also displays the statistics imported from the original database. 
-The SQL/Protect tables and statistics are now properly restored for this database. This is verified by the following queries on the EDB Postgres Advanced Server system catalogs: + The SQL/Protect tables and statistics are properly restored for this database. Use the following queries on the EDB Postgres Advanced Server system catalogs to verify: ```sql newdb=# SELECT datname, oid FROM pg_database; @@ -265,13 +263,13 @@ __OUTPUT__ query | SELECT * FROM appuser_tab_2 WHERE 'x' = 'x'; ``` -Note the following about the columns in tables `edb_sql_protect` and `edb_sql_protect_rel`: + Note the following about the columns in tables `edb_sql_protect` and `edb_sql_protect_rel`: -- **dbid.** Matches the value in the `oid` column from `pg_database` for `newdb` -- **roleid.** Matches the value in the `oid` column from `pg_roles` for `appuser` + - **dbid.** Matches the value in the `oid` column from `pg_database` for `newdb`. + - **roleid.** Matches the value in the `oid` column from `pg_roles` for `appuser`. -Also note that in table `edb_sql_protect_rel`, the values in the `relid` column match the values in the `oid` column of `pg_class` for relations `dept` and `appuser_tab`. + Also, in table `edb_sql_protect_rel`, the values in the `relid` column match the values in the `oid` column of `pg_class` for relations `dept` and `appuser_tab`. -**Step 8:** Verify that the SQL/Protect configuration parameters are set as desired in the `postgresql.conf` file for the database server running the new database. Restart the database server or reload the configuration file as appropriate. +8. Verify that the SQL/Protect configuration parameters are set as desired in the `postgresql.conf` file for the database server running the new database. Restart the database server or reload the configuration file as appropriate. You can now monitor the database using SQL/Protect. 
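As a sketch of how step 8 might look from `psql` (an illustrative example, not part of this patch — it assumes the SQL/Protect parameter names `edb_sql_protect.enabled` and `edb_sql_protect.level` described in the configuration section; verify them against your `postgresql.conf`):

```sql
-- Check the SQL/Protect settings the running server is using
SHOW edb_sql_protect.enabled;
SHOW edb_sql_protect.level;

-- After editing postgresql.conf, reload the configuration without a
-- full restart (run as a superuser)
SELECT pg_reload_conf();
```

`pg_reload_conf()` is the standard PostgreSQL function for reloading the configuration file; a service restart is needed only for parameters that can't be changed with a reload.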
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/index.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/index.mdx index b78ec13cf41..5364c484748 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/index.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/index.mdx @@ -8,7 +8,7 @@ legacyRedirectsGenerated: -EDB Postgres Advanced Server provides protection against SQL injection attacks. A *SQL injection attack* is an attempt to compromise a database by running SQL statements whose results provide clues to the attacker as to the content, structure, or security of that database. +EDB Postgres Advanced Server provides protection against *SQL injection attacks*. A SQL injection attack is an attempt to compromise a database by running SQL statements whose results provide clues to the attacker as to the content, structure, or security of that database. Preventing a SQL injection attack is normally the responsibility of the application developer. The database administrator typically has little or no control over the potential threat. The difficulty for database administrators is that the application must have access to the data to function properly. diff --git a/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx b/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx index 7578b577b28..f4207eef16a 100644 --- a/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx @@ -1,5 +1,5 @@ --- -title: "Virtual private database" +title: "Virtual Private Database" legacyRedirectsGenerated: # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. 
- "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.32.html" @@ -8,25 +8,24 @@ legacyRedirectsGenerated: -*Virtual Private Database* is a type of fine-grained access control using security policies. *Fine-grained access control* means that access to data can be controlled down to specific rows as defined by the security policy. +Virtual Private Database is a type of *fine-grained access control* using security policies. Fine-grained access control means that you can control access to data down to specific rows as defined by the security policy. -The rules that encode a security policy are defined in a *policy function*, which is an SPL function with certain input parameters and return value. The *security policy* is the named association of the policy function to a particular database object, typically a table. +The rules that encode a *security policy* are defined in a *policy function*, which is an SPL function with certain input parameters and return value. The security policy is the named association of the policy function to a particular database object, typically a table. -In EDB Postgres Advanced Server, the policy function can be written in any language supported by EDB Postgres Advanced Server such as SQL and PL/pgSQL in addition to SPL. +In EDB Postgres Advanced Server, you can write the policy function in any language supported by EDB Postgres Advanced Server, such as SQL and PL/pgSQL, in addition to SPL. !!! Note - The database objects currently supported by EDB Postgres Advanced Server Virtual Private Database are tables. Policies cannot be applied to views or synonyms. + The database objects currently supported by EDB Postgres Advanced Server Virtual Private Database are tables. You can't apply policies to views or synonyms. 
-The advantages of using Virtual Private Database are the following: +The following are advantages of using Virtual Private Database: -- Provides a fine-grained level of security. Database object level privileges given by the `GRANT` command determine access privileges to the entire instance of a database object, while Virtual Private Database provides access control for the individual rows of a database object instance. -- A different security policy can be applied depending upon the type of SQL command (`INSERT, UPDATE, DELETE, or SELECT`). -- The security policy can vary dynamically for each applicable SQL command affecting the database object depending upon factors such as the session user of the application accessing the database object. -- Invocation of the security policy is transparent to all applications that access the database object and thus, individual applications don't have to be modified to apply the security policy. -- Once a security policy is enabled, it is not possible for any application (including new applications) to circumvent the security policy except by the system privilege noted by the following. -- Even superusers cannot circumvent the security policy except by the system privilege noted by the following. +- Provides a fine-grained level of security. Database-object-level privileges given by the `GRANT` command determine access privileges to the entire instance of a database object. Virtual Private Database provides access control for the individual rows of a database object instance. +- You can apply a different security policy depending on the type of SQL command (`INSERT`, `UPDATE`, `DELETE`, or `SELECT`). +- The security policy can vary dynamically for each applicable SQL command affecting the database object depending on factors such as the session user of the application accessing the database object. +- Invoking the security policy is transparent to all applications that access the database object. 
You don't have to modify individual applications to apply the security policy. +- After you enable a security policy, no application (including new applications) can circumvent the security policy except by the system privilege described in the note that follows. Even superusers can't circumvent the security policy except by the noted system privilege. !!! Note - The only way security policies can be circumvented is if the `EXEMPT ACCESS POLICY` system privilege has been granted to a user. The `EXEMPT ACCESS POLICY` privilege should be granted with extreme care as a user with this privilege is exempted from all policies in the database. + The only way you can circumvent security policies is if the `EXEMPT ACCESS POLICY` system privilege is granted to a user. Use extreme care when granting the `EXEMPT ACCESS POLICY` privilege. A user with this privilege is exempted from all policies in the database. The `DBMS_RLS` package provides procedures to create policies, remove policies, enable policies, and disable policies. diff --git a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx index a98db08d029..8aac6f82243 100644 --- a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx @@ -4,11 +4,9 @@ -`sslutils` is a Postgres extension that provides SSL certificate generation functions to EDB Postgres Advanced Server for use by the EDB Postgres Enterprise Manager server. `sslutils` is installed by using the `edb-asxx-server-sslutils` RPM package where `xx` is the EDB Postgres Advanced Server version number. +`sslutils` is a Postgres extension that provides SSL certificate generation functions to EDB Postgres Advanced Server for use by the EDB Postgres Enterprise Manager server. Install `sslutils` by using the `edb-asxx-server-sslutils` RPM package, where `xx` is the EDB Postgres Advanced Server version number. 
-The `sslutils` package provides the functions shown in the following sections. - -In these sections, each parameter in the function’s parameter list is described by `parameter n` under the **Parameters** subsection where `n` refers to the `nth` ordinal position (for example, first, second, third, etc.) within the function’s parameter list. +Each parameter in the function’s parameter list is described by `parameter n`, where `n` refers to the `nth` ordinal position (for example, first, second, third, etc.) in the function’s parameter list. ## openssl_rsa_generate_key @@ -18,7 +16,7 @@ The `openssl_rsa_generate_key` function generates an RSA private key. The functi openssl_rsa_generate_key() RETURNS ``` -When invoking the function, pass the number of bits as an integer value; the function returns the generated key. +When invoking the function, pass the number of bits as an integer value. The function returns the generated key. ## openssl_rsa_key_to_csr @@ -43,15 +41,15 @@ The function generates and returns the certificate signing request. `parameter 3` - The name of the country in which the server resides. + The name of the country where the server resides. `parameter 4` - The name of the state in which the server resides. + The name of the state where the server resides. `parameter 5` - The location (city) within the state in which the server resides. + The location (city) in the state where the server resides. `parameter 6` diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx index ec8515ae28e..71c4e78b858 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx @@ -4,33 +4,35 @@ title: "Data redaction" -*Data redaction* limits sensitive data exposure by dynamically changing data as it is displayed for certain users. 
+*Data redaction* limits sensitive data exposure by dynamically changing data as it's displayed for certain users. -For example, a social security number (SSN) is stored as `021-23-9567`. Privileged users can see the full SSN, while other users only see the last four digits `xxx-xx-9567`. +For example, a social security number (SSN) is stored as `021-23-9567`. Privileged users can see the full SSN, while other users see only the last four digits: `xxx-xx-9567`. -Data redaction is implemented by defining a function for each field to which redaction is to be applied. The function returns the value that should be displayed to the users subject to the data redaction. +You implement data redaction by defining a function for each field to which redaction applies. The function returns the value to display to the users subject to the data redaction. -So for example, for the SSN field, the redaction function would return `xxx-xx-9567` for an input SSN of `021-23-9567`. +For example, for the SSN field, the redaction function returns `xxx-xx-9567` for an input SSN of `021-23-9567`. -For a salary field, a redaction function would always return `$0.00` regardless of the input salary value. +For a salary field, a redaction function always returns `$0.00`, regardless of the input salary value. -These functions are then incorporated into a redaction policy by using the `CREATE REDACTION POLICY` command. This command specifies the table on which the policy applies, the table columns to be affected by the specified redaction functions, expressions to determine which session users are to be affected, and other options. +These functions are then incorporated into a redaction policy by using the `CREATE REDACTION POLICY` command. In addition to other options, this command specifies: -The `edb_data_redaction` parameter in the `postgresql.conf` file then determines whether or not data redaction is to be applied. 
+- The table on which the policy applies +- The table columns affected by the specified redaction functions +- Expressions to determine the affected session users +The `edb_data_redaction` parameter in the `postgresql.conf` file then determines whether to apply data redaction. + +By default, the parameter is enabled, so the redaction policy is in effect. The following occurs: - Superusers and the table owner bypass data redaction and see the original data. -- All other users get the redaction policy applied and see the reformatted data. +- All other users have the redaction policy applied and see the reformatted data. If the parameter is disabled by having it set to `FALSE` during the session, then the following occurs: - Superusers and the table owner bypass data redaction and see the original data. - All other users get an error. -A redaction policy can be changed by using the `ALTER REDACTION POLICY` command, or it can be eliminated using the `DROP REDACTION POLICY` command. - -The redaction policy commands are described in more detail in the subsequent sections. +You can change a redaction policy by using the `ALTER REDACTION POLICY` command. Or, you can eliminate it using the `DROP REDACTION POLICY` command. ## CREATE REDACTION POLICY @@ -48,7 +50,7 @@ CREATE REDACTION POLICY ON ] [, ...] ``` -where `redaction_option` is: +Where `redaction_option` is: ```sql { SCOPE | @@ -57,7 +59,7 @@ where `redaction_option` is: ### Description -The `CREATE REDACTION POLICY` command defines a new column-level security policy for a table by redacting column data using redaction function. A newly created data redaction policy is enabled by default. The policy can be disabled using `ALTER REDACTION POLICY ... DISABLE`. +The `CREATE REDACTION POLICY` command defines a new column-level security policy for a table by redacting column data using a redaction function. 
A newly created data redaction policy is enabled by default. You can disable the policy using `ALTER REDACTION POLICY ... DISABLE`. `FOR ( expression )` @@ -65,13 +67,13 @@ The `CREATE REDACTION POLICY` command defines a new column-level security policy `ADD [ COLUMN ]` - This optional form adds a column of the table to the data redaction policy. The `USING` specifies a redaction function expression. Multiple `ADD [ COLUMN ]` form can be used, if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope and/or an exception to the data redaction policy to be applied. If the scope and/or exception are not specified, the default values for scope and exception are `query` and `none` respectively. + This optional form adds a column of the table to the data redaction policy. The `USING` specifies a redaction function expression. You can use multiple `ADD [ COLUMN ]` forms if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope or an exception to the data redaction policy to apply. If you don't specify the scope or exception, the default values for scope and exception are `query` and `none`, respectively. ### Parameters `name` - The name of the data redaction policy to be created. This must be distinct from the name of any other existing data redaction policy for the table. + The name of the data redaction policy to create. This must be distinct from the name of any other existing data redaction policy for the table. `table_name` @@ -83,19 +85,19 @@ The `CREATE REDACTION POLICY` command defines a new column-level security policy `column_name` - Name of the existing column of the table on which the data redaction policy being created. + Name of the existing column of the table on which the data redaction policy is being created. 
`funcname_clause` - The data redaction function which decides how to compute the redacted column value. Return type of the redaction function should be same as the column type on which data redaction policy being added. + The data redaction function that decides how to compute the redacted column value. Return type of the redaction function must be the same as the column type on which the data redaction policy is being added. `scope_value` - The scope identified the query part where redaction to be applied for the column. Scope value could be `query, top_tlist` or `top_tlist_or_error`. If the scope is `query` then, the redaction applied on the column irrespective of where it appears in the query. If the scope is `top_tlist` then, the redaction applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is same as the `top_tlist` but throws an errors when the column appears anywhere else in the query. + The scope identifies the query part to apply redaction for the column. Scope value can be `query`, `top_tlist`, or `top_tlist_or_error`. If the scope is `query`, then the redaction is applied on the column regardless of where it appears in the query. If the scope is `top_tlist`, then the redaction is applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is the same as `top_tlist`, but it throws an error when the column appears anywhere else in the query. `exception_value` - The exception identified the query part where redaction to be exempted. Exception value could be `none, equal` or `leakproof`. If exception is `none` then there is no exemption. If exception is `equal`, then the column is not redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. + The exception identifies the query part where redaction is exempted. 
Exception value can be `none`, `equal`, or `leakproof`. If exception is `none`, then there is no exemption. If exception is `equal`, then the column isn't redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. ### Notes @@ -105,7 +107,7 @@ The superuser and the table owner are exempt from the data redaction policy. ### Examples -Below is an example of how this feature can be used in production environments. Create the components for a data redaction policy on the `employees` table: +This example shows how you can use this feature in production environments. Create the components for a data redaction policy on the `employees` table: ```sql CREATE TABLE employees ( @@ -146,7 +148,7 @@ CREATE OR REPLACE FUNCTION redact_salary () RETURN money IS BEGIN return END; ``` -Now create a data redaction policy on `employees` to redact column `ssn` which should be accessible in equality condition and `salary` with default scope and exception. The redaction policy is exempt for the `hr` user. +Create a data redaction policy on `employees` to redact column `ssn`, which must remain accessible in equality conditions, and `salary` with default scope and exception. The redaction policy is exempt for the `hr` user. ```sql CREATE REDACTION POLICY redact_policy_personal_info ON employees FOR (session_user != 'hr') @@ -194,7 +196,7 @@ email (3 rows) ``` -But `ssn` data is accessible when it used for equality check due to `exception_value` setting. +But `ssn` data is accessible when used for an equality check due to the `exception_value` setting: ```sql -- Get ssn number starting from 123 @@ -213,13 +215,11 @@ __OUTPUT__ ### Caveats -1. The data redaction policy created on inheritance hierarchies aren't cascaded. For example, if the data redaction policy is created for a parent, it isn't be applied to the child table, which inherits it and vice versa. 
Someone who has access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see *Inheritance* in the PostgreSQL Core Documentation available at: - - +- The data redaction policies created on inheritance hierarchies aren't cascaded. For example, if the data redaction policy is created for a parent, it isn't applied to the child table that inherits it, and vice versa. Someone who has access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/ddl-inherit.html). -2. If the superuser or the table owner has created any materialized view on the table and has provided the access rights `GRANT SELECT` on the table and the materialized view to any non-superuser, then the non-superuser can access the non-redacted data through the materialized view. +- If the superuser or the table owner created any materialized view on the table and provided the access rights `GRANT SELECT` on the table and the materialized view to any non-superuser, then the non-superuser can access the non-redacted data through the materialized view. -3. The objects accessed in the redaction function body should be schema qualified otherwise `pg_dump` might fail. +- The objects accessed in the redaction function body must be schema qualified. Otherwise `pg_dump` might fail. ### Compatibility @@ -262,7 +262,7 @@ ALTER REDACTION POLICY ON DROP [ COLUMN ] ``` -where `redaction_option` is: +Where `redaction_option` is: ```sql { SCOPE | @@ -289,11 +289,11 @@ To use `ALTER REDACTION POLICY`, you must own the table that the data redaction `ADD [ COLUMN ]` - This form adds a column of the table to the existing redaction policy. See `CREATE REDACTION POLICY` for the details. + This form adds a column of the table to the existing redaction policy. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. 
`MODIFY [ COLUMN ]` - This form modifies the data redaction policy on the column of the table. You can update the redaction function clause and/or the redaction options for the column. The `USING` clause specifies the redaction function expression to be updated and the `WITH OPTIONS ( ... )` clause specifies the scope and/or the exception. For more details on the redaction function clause, the redaction scope and the redaction exception, see `CREATE REDACTION POLICY`. + This form modifies the data redaction policy on the column of the table. You can update the redaction function clause or the redaction options for the column. The `USING` clause specifies the redaction function expression to update. The `WITH OPTIONS ( ... )` clause specifies the scope or the exception. For more details on the redaction function clause, the redaction scope, and the redaction exception, see [`CREATE REDACTION POLICY`](#create-redaction-policy). `DROP [ COLUMN ]` @@ -319,30 +319,30 @@ To use `ALTER REDACTION POLICY`, you must own the table that the data redaction `column_name` - Name of existing column of the table on which the data redaction policy being altered or dropped. + Name of the existing column of the table on which the data redaction policy is being altered or dropped. `funcname_clause` - The data redaction function expression for the column. See `CREATE REDACTION POLICY` for details. + The data redaction function expression for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. `scope_value` - The scope identified the query part where redaction to be applied for the column. See `CREATE REDACTION POLICY` for the details. + The scope identifies the query part where redaction applies for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. `exception_value` - The exception identified the query part where redaction to be exempted. See `CREATE REDACTION POLICY` for the details. 
+ The exception identifies the query part where redaction is exempted. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. ### Examples -Update data redaction policy called `redact_policy_personal_info` on the table named `employees`: +Update the data redaction policy called `redact_policy_personal_info` on the table named `employees`: ```sql ALTER REDACTION POLICY redact_policy_personal_info ON employees FOR (session_user != 'hr' AND session_user != 'manager'); ``` -And to update data redaction function for the column `ssn` in the same policy: +To update the data redaction function for the column `ssn` in the same policy: ```sql ALTER REDACTION POLICY redact_policy_personal_info ON employees @@ -378,7 +378,7 @@ To use `DROP REDACTION POLICY`, you must own the table that the redaction policy `IF EXISTS` - Don't throw an error if the data redaction policy does not exist. A notice is issued in this case. + Don't throw an error if the data redaction policy doesn't exist. A notice is issued in this case. `name` @@ -392,7 +392,7 @@ To use `DROP REDACTION POLICY`, you must own the table that the redaction policy `RESTRICT` - These keywords don't have any effect, since there are no dependencies on the data redaction policies. + These keywords don't have any effect, as there are no dependencies on the data redaction policies. ### Examples @@ -410,9 +410,9 @@ DROP REDACTION POLICY redact_policy_personal_info ON employees; `CREATE REDACTION POLICY, ALTER REDACTION POLICY` -## System Catalogs +## System catalogs -This section describes the system catalogs that store the redaction policy information. +System catalogs store the redaction policy information. 
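As a sketch of how these catalogs can be queried (an illustrative example using only the columns documented in the tables that follow; verify the join against your server before relying on it), the following lists each redacted column with the table it belongs to and its scope:

```sql
-- For each redacted column: owning table, column name, and redaction scope
SELECT rc.rdrelid::regclass AS table_name,
       att.attname          AS column_name,
       rc.rdscope           AS scope
FROM edb_redaction_column rc
     JOIN pg_attribute att
       ON att.attrelid = rc.rdrelid
      AND att.attnum = rc.rdattnum;
```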
### edb_redaction_column @@ -421,7 +421,7 @@ The `edb_redaction_column` system catalog stores information about the data reda | Column | Type | References | Description | | ------------- | -------------- | -------------------------- | --------------------------------------------------------------------------- | | `oid` | `oid` | | Row identifier (hidden attribute, must be explicitly selected) | -| `rdpolicyid` | `oid` | `edb_redaction_policy.oid` | The data redaction policy applies to the described column | +| `rdpolicyid` | `oid` | `edb_redaction_policy.oid` | The data redaction policy that applies to the described column | | `rdrelid` | `oid` | `pg_class.oid` | The table that the described column belongs to | | `rdattnum` | `int2` | `pg_attribute.attnum` | The number of the described column | | `rdscope` | `int2` | | The redaction scope: `1` = query, `2` = top_tlist, `4` = top_tlist_or_error | @@ -444,4 +444,4 @@ The catalog `edb_redaction_policy` stores information about the redaction polici | `rdexpr` | `pg_node_tree` | | The data redaction policy expression | !!! Note - The data redaction policy applies for the table if it is enabled and the expression ever evaluated true. + The data redaction policy applies for the table if it's enabled and the expression ever evaluated true. diff --git a/product_docs/docs/epas/15/epas_security_guide/index.mdx b/product_docs/docs/epas/15/epas_security_guide/index.mdx index c5ba80ec9f6..2a1f3d44a21 100644 --- a/product_docs/docs/epas/15/epas_security_guide/index.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/index.mdx @@ -12,4 +12,4 @@ EDB Postgres Advanced Server security features include: - [Data redaction](05_data_redaction/#data_redaction) functionality allows you to dynamically mask portions of data. -For information about Postgres authentication and security features, consult the [PostgreSQL core documentation](https://www.postgresql.org/docs/). 
\ No newline at end of file +For information about Postgres authentication and security features, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/). \ No newline at end of file diff --git a/product_docs/docs/epas/15/index.mdx b/product_docs/docs/epas/15/index.mdx index 0887af9b847..ab9d5f086a8 100644 --- a/product_docs/docs/epas/15/index.mdx +++ b/product_docs/docs/epas/15/index.mdx @@ -1,7 +1,7 @@ --- title: EDB Postgres Advanced Server directoryDefaults: - description: "EDB Postgres Advanced Server Version 14 documentation and release notes. Oracle database compatibility with higher security and data redaction for enterprises." + description: "EDB Postgres Advanced Server Version 15 documentation and release notes. Oracle database compatibility with higher security and data redaction for enterprises." navigation: - epas_rel_notes - epas_platform_support diff --git a/product_docs/docs/epas/15/installing/linux_install_details/component_locations.mdx b/product_docs/docs/epas/15/installing/linux_install_details/component_locations.mdx index 84860ed0702..56f9e4666c8 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/component_locations.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/component_locations.mdx @@ -6,12 +6,12 @@ redirects: The package managers for the various Linux variations install EDB Postgres Advanced Server components in different locations. 
If you need to access the components after installation, see: -- [RHEL/OL/Rocky Linux/AlmaLinux/CentOS/SLES Locations](#rhelolrocky-linuxalmalinuxcentossles-locations) -- [Debian/Ubuntu Locations](#debianubuntu-locations) +- [RHEL/OL/Rocky Linux/AlmaLinux/CentOS/SLES locations](#rhelolrocky-linuxalmalinuxcentossles-locations) +- [Debian/Ubuntu locations](#debianubuntu-locations) ## RHEL/OL/Rocky Linux/AlmaLinux/CentOS/SLES Locations -The RPM installers place EDB Postgres Advanced Server components in the directories listed in the table below: +The RPM installers place EDB Postgres Advanced Server components in the directories listed in the table. | Component | Location | | --------------------------------- | ------------------------------------------ | @@ -36,7 +36,7 @@ The RPM installers place EDB Postgres Advanced Server components in the director ## Debian/Ubuntu Locations -The Debian package manager places EDB Postgres Advanced Server and supporting components in the directories listed in the following table: +The Debian package manager places EDB Postgres Advanced Server and supporting components in the directories listed in the table. | Component | Location | | -------------------------------- | --------------------------------------------------------------------------------------- | diff --git a/product_docs/docs/epas/15/installing/linux_install_details/index.mdx b/product_docs/docs/epas/15/installing/linux_install_details/index.mdx index ffe1db65e43..daf8a79205f 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/index.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/index.mdx @@ -13,6 +13,6 @@ If you need access to the EDB Postgres Advanced Server components after installa For information on available native packages from the EDB repository, see [Available native packages](rpm_packages). 
-To set up a local repository and install EDB Postgres Advanced server, see [Installing using a local EDB repository](installing_epas_using_local_repository). +To set up a local repository and install EDB Postgres Advanced Server, see [Installing using a local EDB repository](installing_epas_using_local_repository). -For information about managing start/stop/restart of services, managing authentication, and initializing new clusters see [Managing an EDB Postgres Advanced Server installation](managing_an_advanced_server_installation). +For information about managing start/stop/restart of services, managing authentication, and initializing new clusters, see [Managing an EDB Postgres Advanced Server installation](managing_an_advanced_server_installation). diff --git a/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx b/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx index cd9d9e6742a..bab3198552a 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx @@ -6,7 +6,7 @@ redirects: - /epas/latest/epas_inst_linux/installing_epas_using_local_repository/ --- -You can create a local repository to act as a host for the EDB Postgres Advanced Server native packages if the server on which you wish to install EDB Postgres Advanced Server (or supporting components) cannot directly access the EDB repository. This is a high-level listing of the steps requires; modify the process for your individual network. +You can create a local repository to act as a host for the EDB Postgres Advanced Server native packages if the server on which you want to install EDB Postgres Advanced Server or supporting components can't directly access the EDB repository. This is a high-level listing of the steps required. 
Modify the process for your network. To create and use a local repository, you must: @@ -69,6 +69,4 @@ After specifying the location and connection information for your local reposito dnf -y install edb-as15-server ``` -For more information about creating a local `yum` repository, visit: - - +For more information about creating a local `yum` repository, see the [CentOS wiki](https://wiki.centos.org/HowTos/CreateLocalRepos). diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx index 55b07622ddc..3910179a4b1 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx @@ -20,7 +20,7 @@ After extracting the metadata from the old cluster, `pg_upgrade` performs the bo `pg_upgrade` runs the `pg_dumpall` script against the new cluster to create (empty) database objects of the same shape and type as those found in the old cluster. Then, `pg_upgrade` links or copies each table and index from the old cluster to the new cluster. -If you are upgrading to EDB Postgres Advanced Server 14 and have installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, you must drop the extension before performing an upgrade. To drop the extension, connect to the server with the psql or PEM client, and invoke the commands: +If you are upgrading to EDB Postgres Advanced Server and have installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, you must drop the extension before performing an upgrade.
To drop the extension, connect to the server with the psql or PEM client, and invoke the commands: ```sql DROP EXTENSION edb_dblink_oci; diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx index c99f46b5457..fb66bad267f 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx @@ -64,7 +64,7 @@ Include the `-p` or `--old-port` keyword to specify the port number of the EDB P Include the `-P` or `--new-port` keyword to specify the port number of the new EDB Postgres Advanced Server installation. !!! Note - If the original EDB Postgres Advanced Server installation is using port number `5444` when you invoke the EDB Postgres Advanced Server 14 installer, the installer recommends using listener port `5445` for the new installation of EDB Postgres Advanced Server. + If the original EDB Postgres Advanced Server installation is using port number `5444` when you invoke the EDB Postgres Advanced Server installer, the installer recommends using listener port `5445` for the new installation of EDB Postgres Advanced Server. 
`-r` `--retain` diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx index cbcf9f1b6b7..9df291db6d9 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/ --- -When invoking `pg_upgrade`, you must specify the location of the old and new cluster's `PGDATA` and executable (`/bin`) directories, as well as the name of the EDB Postgres Advanced Server superuser, and the ports on which the installations are listening. A typical call to invoke `pg_upgrade` to migrate from EDB Postgres Advanced Server 13 to EDB Postgres Advanced Server 14 takes the form: +When invoking `pg_upgrade`, you must specify the location of the old and new cluster's `PGDATA` and executable (`/bin`) directories, as well as the name of the EDB Postgres Advanced Server superuser, and the ports on which the installations are listening. A typical call to invoke `pg_upgrade` to migrate from EDB Postgres Advanced Server 14 to EDB Postgres Advanced Server 15 takes the form: ```shell pg_upgrade @@ -20,11 +20,11 @@ Where: `--old-datadir path_to_13_data_directory` -Use the `--old-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 13 installation. +Use the `--old-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 14 installation. 
`--new-datadir path_to_14_data_directory` -Use the `--new-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 14 installation. +Use the `--new-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 15 installation. `--username superuser_name` @@ -34,19 +34,19 @@ If the EDB Postgres Advanced Server superuser name is not the same in both clust `--old-bindir path_to_13_bin_directory` -Use the `--old-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 13 installation. +Use the `--old-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 14 installation. `--new-bindir path_to_14_bin_directory` -Use the `--new-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 14 installation. +Use the `--new-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 15 installation. `--old-port 13_port` -Include the `--old-port` option to specify the port on which EDB Postgres Advanced Server 13 listens for connections. +Include the `--old-port` option to specify the port on which EDB Postgres Advanced Server 14 listens for connections. `--new-port 14_port` -Include the `--new-port` option to specify the port on which EDB Postgres Advanced Server 14 listens for connections. +Include the `--new-port` option to specify the port on which EDB Postgres Advanced Server 15 listens for connections.
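Taken together, the options above form a single command line. The following sketch assembles a typical call; the directories, superuser name, and ports are hypothetical, so substitute the values for your own version 14 and 15 clusters. The command is echoed rather than executed so you can review it first, and `--check` restricts `pg_upgrade` to a consistency check that changes neither cluster.

```shell
# Hypothetical cluster locations; adjust all four paths for your installation.
OLD_DATA=/var/lib/edb/as14/data
NEW_DATA=/var/lib/edb/as15/data
OLD_BIN=/usr/edb/as14/bin
NEW_BIN=/usr/edb/as15/bin

# Assemble the call. --check performs a dry run; drop it to run the real upgrade.
CMD="pg_upgrade --old-datadir $OLD_DATA --new-datadir $NEW_DATA \
--old-bindir $OLD_BIN --new-bindir $NEW_BIN \
--username enterprisedb --old-port 5444 --new-port 5445 --check"

# Review the assembled command before running it.
echo "$CMD"
```

Running the echoed command with `--check` reports incompatibilities without modifying either cluster; rerun it without `--check` to perform the actual upgrade.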
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx index 9a55d8004bb..235f5eb3e90 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx @@ -1,12 +1,12 @@ --- -title: "Upgrading to EDB Postgres Advanced Server 14" +title: "Upgrading to EDB Postgres Advanced Server 15" redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server/ --- -You can use `pg_upgrade` to upgrade from an existing installation of EDB Postgres Advanced Server into the cluster built by the EDB Postgres Advanced Server 14 installer or into an alternate cluster created using the `initdb` command. +You can use `pg_upgrade` to upgrade from an existing installation of EDB Postgres Advanced Server into the cluster built by the EDB Postgres Advanced Server installer or into an alternate cluster created using the `initdb` command. -The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server 14 installer, but you can omit Step 2 `(Empty the edb database)`, and substitute the location of the alternate cluster when specifying a target cluster for the upgrade. +The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server installer, but you can omit Step 2 `(Empty the edb database)`, and substitute the location of the alternate cluster when specifying a target cluster for the upgrade. 
If a problem occurs during the upgrade process, you can revert to the previous version. See [Reverting to the old cluster](06_reverting_to_the_old_cluster/#reverting_to_the_old_cluster) Section for detailed information about this process. @@ -14,11 +14,11 @@ You must be an operating system superuser or Windows Administrator to perform an **Step 1 - Install the new server** -Install EDB Postgres Advanced Server 14, specifying the same non-server components that were installed during the previous EDB Postgres Advanced Server installation. The new cluster and the old cluster must reside in different directories. +Install EDB Postgres Advanced Server 15, specifying the same non-server components that were installed during the previous EDB Postgres Advanced Server installation. The new cluster and the old cluster must reside in different directories. **Step 2 - Empty the target database** -The target cluster must not contain any data; you can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server 14. If you have installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`; if you have installed EDB Postgres Advanced Server in Oracle mode, it creates a database named `postgres` and a database named `edb`. +The target cluster must not contain any data; you can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server. If you have installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`; if you have installed EDB Postgres Advanced Server in Oracle mode, it creates a database named `postgres` and a database named `edb`. The easiest way to empty the target database is to drop the database and then create a new database. 
Before invoking the `DROP DATABASE` command, you must disconnect any users and halt any services that are currently using the database. @@ -48,7 +48,7 @@ CREATE DATABASE ; During the upgrade process, `pg_upgrade` connects to the old and new servers several times; to make the connection process easier, you can edit the `pg_hba.conf` file, setting the authentication mode to `trust`. To modify the `pg_hba.conf` file, navigate through the `Start` menu to the `EDB Postgres` menu; to the `EDB Postgres Advanced Server` menu, and open the `Expert Configuration` menu; select the `Edit pg_hba.conf` menu option to open the `pg_hba.conf` file. -You must allow trust authentication for the previous EDB Postgres Advanced Server installation, and EDB Postgres Advanced Server 14 servers. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the following figure. +You must allow trust authentication for both the previous and the new EDB Postgres Advanced Server installations. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the following figure. ![Configuring EDB Postgres Advanced Server to use trust authentication.](../images/configuring_advanced_server_to_use_trust_authentication.png) @@ -63,7 +63,7 @@ If the system is required to maintain `md5` authentication mode during the upgra **Step 4 - Stop all component services and servers** -Before you invoke `pg_upgrade`, you must stop any services that belong to the original EDB Postgres Advanced Server installation, EDB Postgres Advanced Server 14, or the supporting components. This ensures that a service doesn't attempt to access either cluster during the upgrade process. +Before you invoke `pg_upgrade`, you must stop any services that belong to the original EDB Postgres Advanced Server installation, the new EDB Postgres Advanced Server installation, or the supporting components.
This ensures that a service doesn't attempt to access either cluster during the upgrade process. The services that are most likely to be running in your installation are: @@ -192,7 +192,7 @@ pg_upgrade.exe During the consistency checking process, `pg_upgrade` logs any discrepancies that it finds to a file located in the directory from which `pg_upgrade` was invoked. When the consistency check completes, review the file to identify any missing components or upgrade conflicts. You must resolve any conflicts before invoking `pg_upgrade` to perform a version upgrade. -If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to add the component that contains the component. Before using StackBuilder Plus, you must restart the EDB Postgres Advanced Server 14 service. After restarting the service, open StackBuilder Plus by navigating through the `Start` menu to the `EDB Postgres Advanced Server 14` menu, and selecting `StackBuilder Plus`. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components. +If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to install the missing component. Before using StackBuilder Plus, you must restart the EDB Postgres Advanced Server service. After restarting the service, open StackBuilder Plus by navigating through the `Start` menu to the `EDB Postgres Advanced Server 15` menu, and selecting `StackBuilder Plus`. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components. When `pg_upgrade` has confirmed that the clusters are compatible, you can perform a version upgrade.
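As a sketch of step 2 above (emptying the target database), the commands below drop and re-create the `edb` database on the new cluster. The port and superuser name are assumptions for illustration, and the commands are echoed rather than executed, so nothing runs until you have disconnected all sessions and stopped dependent services.

```shell
# Hypothetical connection settings for the new cluster; adjust as needed.
NEW_PORT=5445
SUPERUSER=enterprisedb

# Drop and re-create the target database. Connect to the postgres
# database, since the database being dropped can't be the current one.
echo psql -p "$NEW_PORT" -U "$SUPERUSER" -d postgres -c "DROP DATABASE edb;"
echo psql -p "$NEW_PORT" -U "$SUPERUSER" -d postgres -c "CREATE DATABASE edb;"
```

Remove the leading `echo` on each line to run the commands for real.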
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx index ccf69cbb12d..4261ac05a49 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation/ --- -If your existing EDB Postgres Advanced Server installation uses pgAgent, you can use a script provided with the EDB Postgres Advanced Server 14 installer to update pgAgent. The script is named `dbms_job.upgrade.script.sql`, and is located in the `/share/contrib/` directory under your EDB Postgres Advanced Server installation. +If your existing EDB Postgres Advanced Server installation uses pgAgent, you can use a script provided with the EDB Postgres Advanced Server installer to update pgAgent. The script is named `dbms_job.upgrade.script.sql`, and is located in the `/share/contrib/` directory under your EDB Postgres Advanced Server installation. 
If you are using `pg_upgrade` to upgrade your installation, you should: diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx index 719c0739a2f..f46fe6b0c9c 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx @@ -20,7 +20,7 @@ If `pg_upgrade` reports that the new cluster is not empty, empty the new cluster ## Upgrade Error - Failed to load library -If the original EDB Postgres Advanced Server cluster included libraries that are not included in the EDB Postgres Advanced Server 14 cluster, `pg_upgrade` alerts you to the missing component during the consistency check by writing an entry to the `loadable_libraries.txt` file in the directory from which you invoked `pg_upgrade`. Generally, for missing libraries that are not part of a major component upgrade, perform the following steps: +If the original EDB Postgres Advanced Server cluster included libraries that are not included in the new EDB Postgres Advanced Server cluster, `pg_upgrade` alerts you to the missing component during the consistency check by writing an entry to the `loadable_libraries.txt` file in the directory from which you invoked `pg_upgrade`. Generally, for missing libraries that are not part of a major component upgrade, perform the following steps:
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx index a4effc919b4..1d70ff60ece 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx @@ -8,7 +8,7 @@ redirects: While minor upgrades between versions are fairly simple and require only the installation of new executables, past major version upgrades has been both expensive and time consuming. `pg_upgrade` facilitates migration between any version of EDB Postgres Advanced Server (version 9.0 or later), and any subsequent release of EDB Postgres Advanced Server that is supported on the same platform. -Without `pg_upgrade`, to migrate from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 14, you must export all of your data using `pg_dump`, install the new release, run `initdb` to create a new cluster, and then import your old data. +Without `pg_upgrade`, to migrate from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15, you must export all of your data using `pg_dump`, install the new release, run `initdb` to create a new cluster, and then import your old data. *pg_upgrade can reduce both the amount of time required and the disk space required for many major-version upgrades.* @@ -24,7 +24,7 @@ Before performing a version upgrade, `pg_upgrade` verifies that the two clusters If the upgrade involves a change in the on-disk representation of database objects or data, or involves a change in the binary representation of data types, `pg_upgrade` can't perform the upgrade; to upgrade, you have to `pg_dump` the old data and then import that data into the new cluster. 
-The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server 14, and is installed as part of the `Database Server` component; no additional installation or configuration steps are required. +The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server, and is installed as part of the `Database Server` component; no additional installation or configuration steps are required.
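For contrast with the `pg_upgrade` path described above, here is a sketch of the dump-and-reload alternative. The ports, superuser name, and dump file location are assumptions; `pg_dumpall` is used as the cluster-wide form of the `pg_dump` export the text mentions, and the commands are echoed so you can review them before running anything.

```shell
# Hypothetical settings; adjust for your environment.
OLD_PORT=5444
NEW_PORT=5445
DUMP_FILE=/tmp/old_cluster.sql

# 1. Export every database and global object from the old cluster.
echo pg_dumpall -p "$OLD_PORT" -U enterprisedb -f "$DUMP_FILE"

# 2. After installing the new release and running initdb,
#    import the dump into the new cluster.
echo psql -p "$NEW_PORT" -U enterprisedb -d postgres -f "$DUMP_FILE"
```

This path requires enough disk space for a full logical copy of the data, which is the cost `pg_upgrade` avoids.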
diff --git a/product_docs/docs/epas/15/upgrading/index.mdx b/product_docs/docs/epas/15/upgrading/index.mdx index 6308571528b..f6cf99e8e64 100644 --- a/product_docs/docs/epas/15/upgrading/index.mdx +++ b/product_docs/docs/epas/15/upgrading/index.mdx @@ -7,6 +7,6 @@ redirects: This section provides information about upgrading EDB Postgres Advanced Server, including: -- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 14. +- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15. - `yum` to perform a minor version upgrade on a Linux host. - `StackBuilder Plus` to perform a minor version upgrade on a Windows host. From 5d2e1eb6118c37e013ffcd16d80aea9c3bba1db1 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Thu, 23 Feb 2023 08:19:41 -0500 Subject: [PATCH 02/50] Some preliminary edits --- .../docs/edb_plus/41/installing/windows.mdx | 43 +++++++++++++------ 1 file changed, 31 insertions(+), 12 deletions(-) diff --git a/product_docs/docs/edb_plus/41/installing/windows.mdx b/product_docs/docs/edb_plus/41/installing/windows.mdx index bec5f2a0ff3..0df5d6b172b 100644 --- a/product_docs/docs/edb_plus/41/installing/windows.mdx +++ b/product_docs/docs/edb_plus/41/installing/windows.mdx @@ -4,32 +4,51 @@ navTitle: "On Windows" redirects: - /edb_plus/latest/03_installing_edb_plus/install_on_windows/ --- + + +EDB provides a graphical interactive installer for Windows. You can access it using StackBuilder Plus, which is installed as part of EDB Postgres Advanced Server. With StackBuilder Plus, you can download an installer package for EDB*Plus and invoke the graphical installer. See [Using StackBuilder Plus](/edb_plus/latest/installing/windows/#using-stackbuilder-plus). + +## Prerequisites + Before installing EDB\*Plus, you must first install Java (version 1.8 or later). 
For Windows, Java installers and instructions are available online at: +## Using StackBuilder Plus + +If you have installed EDB Postgres Advanced Server, you can use StackBuilder Plus to invoke the graphical installer for EDB*Plus. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). + +1. In StackBuilder Plus, follow the prompts until you get to the module selection page. + +1. Expand the **EnterpriseDB Tools** node and select **EDB\*Plus**. + +1. Proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic. + + + + Windows installers for EDB\*Plus are available via StackBuilder Plus; you can access StackBuilder Plus through the Windows start menu. After opening StackBuilder Plus and selecting the installation for which you want to install EDB\*Plus, expand the component selection screen tree control to select and download the EDB\*Plus installer. ![The EDBPlus Welcome window](../images/edb_plus_welcome.png) -
Fig. 1: The EDB*Plus Welcome window
- -The EDB\*Plus installer welcomes you to the setup wizard, as shown in the figure below. +1. The EDB\*Plus installer welcomes you to the setup wizard, as shown in the figure below. ![The Installation Directory window](../images/installation_directory_new.png) -
Fig. 2: The Installation Directory window
- -Use the `Installation Directory` field to specify the directory in which you wish to install the EDB\*Plus software. Then, click `Next` to continue. +1. Use the `Installation Directory` field to specify the directory in which you want to install the EDB\*Plus software. Then select **Next** to continue. ![The Ready to Install window](../images/ready_to_install.png) -
Fig. 4: The Ready to Install window
- -The `Ready to Install` window notifies you when the installer has all of the information needed to install EDB\*Plus on your system. Click `Next` to install EDB\*Plus. +1. The `Ready to Install` window notifies you when the installer has all of the information needed to install EDB\*Plus on your system. Select **Next** to install EDB\*Plus. ![The installation is complete](../images/installation_complete.png) -
Fig. 5: The installation is complete
- -The installer notifies you when the setup wizard has completed the EDB\*Plus installation. Click `Finish` to exit the installer. +1. When the installation has completed, select **Finish**. From 2ab6ca90e56e7c1f49d183d537cd2396337e48ce Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 23 Feb 2023 13:47:38 -0500 Subject: [PATCH 03/50] reread of security and parts of install --- .../01_sql_protect_overview.mdx | 20 ++++---- .../02_configuring_sql_protect.mdx | 48 +++++++++---------- .../03_common_maintenance_operations.mdx | 14 +++--- .../04_backing_up_restoring_sql_protect.mdx | 10 ++-- .../03_virtual_private_database.mdx | 12 ++--- .../15/epas_security_guide/04_sslutils.mdx | 4 +- .../epas_security_guide/05_data_redaction.mdx | 24 +++++----- .../epas/15/epas_security_guide/index.mdx | 1 - ...installing_epas_using_local_repository.mdx | 2 +- ...naging_an_advanced_server_installation.mdx | 6 +-- 10 files changed, 70 insertions(+), 71 deletions(-) diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx index b43d65b54be..7bd4b0fdccc 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/01_sql_protect_overview.mdx @@ -24,9 +24,9 @@ When SQL/Protect is switched to *passive* or *active* mode, the incoming queries ### Utility commands -A common technique used in SQL injection attacks is to run utility commands, which are typically SQL Data Definition Language (DDL) statements. An example is creating a user-defined function that can access other system resources. 
+A common technique used in SQL injection attacks is to run utility commands, which are typically SQL data definition language (DDL) statements. An example is creating a user-defined function that can access other system resources. -SQL/Protect can block the running of all utility commands that aren't normally needed during standard application processing. +SQL/Protect can block running all utility commands that aren't normally needed during standard application processing. ### SQL tautology @@ -40,7 +40,7 @@ Attackers usually start identifying security weaknesses using this technique. SQ ### Unbounded DML statements -A dangerous action taken during SQL injection attacks is running unbounded DML statements. These are `UPDATE` and `DELETE` statements with no `WHERE` clause. For example, an attacker mighy update all users’ passwords to a known value or initiate a denial of service attack by deleting all of the data in a key table. +A dangerous action taken during SQL injection attacks is running unbounded DML statements. These are `UPDATE` and `DELETE` statements with no `WHERE` clause. For example, an attacker might update all users’ passwords to a known value or initiate a denial of service attack by deleting all of the data in a key table. ## Monitoring SQL injection attacks @@ -52,11 +52,11 @@ Monitoring for SQL injection attacks involves analyzing SQL statements originati You can customize each protected role for the types of SQL injection attacks it's being monitored for, This approach provides different levels of protection by role and significantly reduces the user-maintenance load for DBAs. -You can't make a role with the superuser privilege a protected role. If a protected non-superuser role is later altered to become a superuser, certain behaviors are exhibited whenever that superuser tries to issue any command: +You can't make a role with the superuser privilege a protected role. 
If a protected non-superuser role later becomes a superuser, certain behaviors occur when that superuser tries to issue any command: - SQL/Protect issues a warning message for every command issued by the protected superuser. -- The statistic in column superusers of `edb_sql_protect_stats` is incremented with every command issued by the protected superuser. See [Attack attempt statistics](#attack-attempt-statistics) for information on the `edb_sql_protect_stats` view. -- When SQL/Protect is in active mode, all commands issued by the protected superuser are prevented from running. +- The statistic in the column superusers of `edb_sql_protect_stats` is incremented with every command issued by the protected superuser. See [Attack attempt statistics](#attack-attempt-statistics) for information on the `edb_sql_protect_stats` view. +- SQL/Protect in active mode prevents all commands issued by the protected superuser from running. Either alter a protected role that has the superuser privilege so that it's no longer a superuser, or revert it to an unprotected role. @@ -64,12 +64,12 @@ Either alter a protected role that has the superuser privilege so that it's no l SQL/Protect records each use of a command by a protected role that's considered an attack. It collects statistics by type of SQL injection attack, as discussed in [Types of SQL injection attacks](#types-of-injection-attacks). -You can access these statistics from view `edb_sql_protect_stats`. You can easily monitor this view to identify the start of a potential attack. +You can access these statistics from the view `edb_sql_protect_stats`. You can easily monitor this view to identify the start of a potential attack. The columns in `edb_sql_protect_stats` monitor the following: - **username.** Name of the protected role. -- **superusers.** Number of SQL statements issued when the protected role is a superuser. In effect, any SQL statement issued by a protected superuser increases this statistic. 
See [Protected roles](#protected-roles) for information on protected superusers. +- **superusers.** Number of SQL statements issued when the protected role is a superuser. In effect, any SQL statement issued by a protected superuser increases this statistic. See [Protected roles](#protected-roles) for information about protected superusers. - **relations.** Number of SQL statements issued referencing relations that weren't learned by a protected role. (These relations aren't in a role’s protected relations list.) - **commands.** Number of DDL statements issued by a protected role. - **tautology.** Number of SQL statements issued by a protected role that contained a tautological condition. @@ -84,9 +84,7 @@ If a role is protected in more than one database, the role’s statistics for at ### Attack attempt queries -Each use of a command by a protected role that's considered an attack by SQL/Protect is recorded in the `edb_sql_protect_queries` view. - -The `edb_sql_protect_queries` view contains the following columns: +Each use of a command by a protected role that's considered an attack by SQL/Protect is recorded in the `edb_sql_protect_queries` view, which contains the following columns: - **username.** Database user name of the attacker used to log into the database server. - **ip_address.** IP address of the machine from which the attack was initiated. 
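The statistics and attack-attempt views described in this section can be watched with plain queries. A minimal monitoring sketch — assuming the SQL/Protect objects live in the `sqlprotect` schema created by the setup script, and using only columns named in this section:

```sql
-- Per-role attack counters; a value that keeps climbing can signal
-- the start of an attack.
SELECT username, superusers, relations, commands, tautology
  FROM sqlprotect.edb_sql_protect_stats;

-- The recorded offending statements, for follow-up investigation.
SELECT username, ip_address
  FROM sqlprotect.edb_sql_protect_queries;
```

Running these from a scheduled job gives early warning before an attacker finds another way in.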
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx index da06afea927..806ee3c3fcf 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx @@ -8,36 +8,36 @@ legacyRedirectsGenerated: -Make sure the following prerequisites are met before configuring SQL/Protect: +Meet the following prerequisites before configuring SQL/Protect: -- The library file (`sqlprotect.so` on Linux, `sqlprotect.dll` on Windows) needed to run `SQL/Protect` is installed in the `lib` subdirectory of your EDB Postgres Advanced Server home directory. For Windows, the EDB Postgres Advanced Server installer does this. For Linux, install the `edb-asxx-server-sqlprotect` RPM package, where `xx` is the EDB Postgres Advanced Server version number. +- The library file (`sqlprotect.so` on Linux, `sqlprotect.dll` on Windows) needed to run SQL/Protect is installed in the `lib` subdirectory of your EDB Postgres Advanced Server home directory. For Windows, the EDB Postgres Advanced Server installer does this. For Linux, install the `edb-asxx-server-sqlprotect` RPM package, where `xx` is the EDB Postgres Advanced Server version number. - You need the SQL script file `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. 
-- You must configure the database server to use `SQL/Protect`, and you must configure each database that you want `SQL/Protect` to monitor: +- You must configure the database server to use SQL/Protect, and you must configure each database that you want SQL/Protect to monitor: - - You must modify the database server configuration file, `postgresql.conf`, by adding and enabling configuration parameters used by `SQL/Protect`. - - Install database objects used by `SQL/Protect` in each database that you want `SQL/Protect` to monitor. + - You must modify the database server configuration file `postgresql.conf` by adding and enabling configuration parameters used by SQL/Protect. + - Install database objects used by SQL/Protect in each database that you want SQL/Protect to monitor. 1. Edit the following configuration parameters in the `postgresql.conf` file located in the `data` subdirectory of your EDB Postgres Advanced Server home directory: - - **shared_preload_libraries.** Add `$libdir/sqlprotect` to the list of libraries. + - `shared_preload_libraries`. Add `$libdir/sqlprotect` to the list of libraries. - - **edb_sql_protect.enabled.** Controls whether `SQL/Protect` is actively monitoring protected roles by analyzing SQL statements issued by those roles and reacting according to the setting of `edb_sql_protect.level`. When you're ready to begin monitoring with `SQL/Protect`, set this parameter to `on`. Yhe default is `off`. + - `edb_sql_protect.enabled`. Controls whether SQL/Protect is actively monitoring protected roles by analyzing SQL statements issued by those roles and reacting according to the setting of `edb_sql_protect.level`. When you're ready to begin monitoring with SQL/Protect, set this parameter to `on`. The default is `off`. - - **edb_sql_protect.level.** Sets the action taken by `SQL/Protect` when a SQL statement is issued by a protected role. The default behavior is `passive`. Initially, set this parameter to `learn`. 
See [Setting the protection level](#setting-the-protection-level) for more information. + - `edb_sql_protect.level`. Sets the action taken by SQL/Protect when a SQL statement is issued by a protected role. The default behavior is `passive`. Initially, set this parameter to `learn`. See [Setting the protection level](#setting-the-protection-level) for more information. - - **edb_sql_protect.max_protected_roles.** Sets the maximum number of roles to protect. The default is `64`. + - `edb_sql_protect.max_protected_roles`. Sets the maximum number of roles to protect. The default is `64`. - - **edb_sql_protect.max_protected_relations.** Sets the maximum number of relations to protect per role. The default is `1024`. + - `edb_sql_protect.max_protected_relations`. Sets the maximum number of relations to protect per role. The default is `1024`. The total number of protected relations for the server is the number of protected relations times the number of protected roles. Every protected relation consumes space in shared memory. The space for the maximum possible protected relations is reserved during database server startup. - - **edb_sql_protect.max_queries_to_save.** Sets the maximum number of offending queries to save in the `edb_sql_protect_queries` view. The default is `5000`. If the number of offending queries reaches the limit, additional queries aren't saved in the view but are accessible in the database server log file. + - `edb_sql_protect.max_queries_to_save`. Sets the maximum number of offending queries to save in the `edb_sql_protect_queries` view. The default is `5000`. If the number of offending queries reaches the limit, additional queries aren't saved in the view but are accessible in the database server log file. The minimum valid value for this parameter is `100`. If you specify a value less than `100`, the database server starts using the default setting of `5000`. A warning message is recorded in the database server log file. 
- The following example shows the settings of these parameters in the `postgresql.conf` file: + This example shows the settings of these parameters in the `postgresql.conf` file: ```ini shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/sqlprotect' @@ -64,7 +64,7 @@ edb_sql_protect.max_queries_to_save = 5000 **On Windows:** Use the Windows Services applet to restart the service named `edb-as-14`. -3. For each database that you want to protect from SQL injection attacks, connect to the database as a superuser (either `enterprisedb` or `postgres`, depending upon your installation options). Then run the script `sqlprotect.sql` located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. The script creates the SQL/Protect database objects in a schema named `sqlprotect`. +3. For each database that you want to protect from SQL injection attacks, connect to the database as a superuser (either `enterprisedb` or `postgres`, depending on your installation options). Then run the script `sqlprotect.sql`, located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. The script creates the SQL/Protect database objects in a schema named `sqlprotect`. This example shows the process to set up protection for a database named `edb`: @@ -113,7 +113,7 @@ After you create the SQL/Protect database objects in a database, you can select For each database that you want to protect, you must determine the roles you want to monitor and then add those roles to the *protected roles list* of that database. -1. Connect as a superuser to a database that you want to protect with either `psql` or Postgres Enterprise Manager Client: +1. 
Connect as a superuser to a database that you want to protect with either `psql` or the Postgres Enterprise Manager client: ```sql $ /usr/edb/as14/bin/psql -d edb -U enterprisedb @@ -182,20 +182,20 @@ The `edb_sql_protect.level` configuration parameter sets the protection level, w You can set the `edb_sql_protect.level` configuration parameter in the `postgresql.conf` file to one of the following values to specify learn, passive, or active mode: - `learn`. Tracks the activities of protected roles and records the relations used by the roles. Use this mode when first configuring SQL/Protect so the expected behaviors of the protected applications are learned. -- `passive`. Issues warnings if protected roles are breaking the defined rules but doesn't stop any SQL statements from executing. This mode is the next step after SQL/Protect learns the expected behavior of the protected roles. It essentially behaves in intrusion detection mode and you can run this mode in production when properly monitored. -- `active`. Stops all invalid statements for a protected role. This mode behaves as a SQL firewall, preventing dangerous queries from running. This approach is particularly effective against early penetration testing when the attacker is trying to find the vulnerability point and the type of database behind the application. Not only does SQL/Protect close those vulnerability points, it tracks the blocked queries. This tracking allows administrators to be alerted before the attacker finds another way to penetrate the system. +- `passive`. Issues warnings if protected roles are breaking the defined rules but doesn't stop any SQL statements from executing. This mode is the next step after SQL/Protect learns the expected behavior of the protected roles. It essentially behaves in intrusion detection mode. You can run this mode in production when proper monitoring is in place. +- `active`. Stops all invalid statements for a protected role. 
This mode behaves as a SQL firewall, preventing dangerous queries from running. This approach is particularly effective against early penetration testing when the attacker is trying to find the vulnerability point and the type of database behind the application. Not only does SQL/Protect close those vulnerability points, it tracks the blocked queries. This tracking can alert administrators before the attacker finds another way to penetrate the system. The default mode is `passive`. -If you're using `SQL/Protect` for the first time, set `edb_sql_protect.level` to `learn`. +If you're using SQL/Protect for the first time, set `edb_sql_protect.level` to `learn`. ## Monitoring protected roles -After you configure SQL/Protect in a database, add roles to the protected roles list, and set the desired protection level, you can activate SQL/Protect in `learn`, `passive`, or `active`. You can then start running your applications. +After you configure SQL/Protect in a database, add roles to the protected roles list, and set the desired protection level, you can activate SQL/Protect in `learn`, `passive`, or `active` mode. You can then start running your applications. With a new SQL/Protect installation, the first step is to determine the relations that protected roles are allowed to access during normal operation. Learn mode allows a role to run applications during which time SQL/Protect is recording the relations that are accessed. These are added to the role’s *protected relations list* stored in table `edb_sql_protect_rel`. -Monitoring for protection against attack begins when you run SQL/Protect in passive or active mode. In passive and active modes, the role is permitted to access the relations in its protected relations list. These are the specifiedd relations the role can access during typical usage. +Monitoring for protection against attack begins when you run SQL/Protect in passive or active mode. 
In passive and active modes, the role is permitted to access the relations in its protected relations list. These are the specified relations the role can access during typical usage. However, if a role attempts to access a relation that isn't in its protected relations list, SQL/Protect returns a `WARNING` or `ERROR` severity-level message. The role’s attempted action on the relation might not be carried out, depending on whether the mode is passive or active. @@ -227,7 +227,7 @@ __OUTPUT__ 3. Allow the protected roles to run their applications. - For example, the following queries are issued in the `psql` application by protected role `appuser`: + For example, the following queries are issued in the `psql` application by the protected role `appuser`: ```sql edb=> SELECT * FROM dept; @@ -306,7 +306,7 @@ After a role’s applications have accessed all relations they need, you can cha Passive mode is a less restrictive protection mode than active. -1. To activate `SQL/Protect` in passive mode, set the following parameters in the `postgresql.conf` file: +1. To activate SQL/Protect in passive mode, set the following parameters in the `postgresql.conf` file: ```ini edb_sql_protect.enabled = on @@ -315,7 +315,7 @@ edb_sql_protect.level = passive 2. Reload the configuration file as shown in Step 2 of [Learn mode](#learn-mode). - Now SQL/Protect is in passive mode. For relations that were learned, such as the `dept` and `emp` tables of the prior examples, SQL statements are permitted. No special notification to the client by `SQL/Protect` is required, as shown by the following queries run by user `appuser`: + Now SQL/Protect is in passive mode. For relations that were learned, such as the `dept` and `emp` tables of the prior examples, SQL statements are permitted. 
No special notification to the client by SQL/Protect is required, as shown by the following queries run by user `appuser`: ```sql edb=> SELECT * FROM dept; @@ -425,7 +425,7 @@ __OUTPUT__ In active mode, disallowed SQL statements are prevented from executing. Also, the message issued by SQL/Protect has a higher severity level of `ERROR` instead of `WARNING`. -1. To activate `SQL/Protect` in active mode, set the following parameters in the `postgresql.conf` file: +1. To activate SQL/Protect in active mode, set the following parameters in the `postgresql.conf` file: ```ini edb_sql_protect.enabled = on @@ -434,7 +434,7 @@ edb_sql_protect.level = active 2. Reload the configuration file as shown in Step 2 of [Learn mode](#learn-mode). -This example shows SQL statements similar to those given in the examples of Step 2 in [Passive mode](#passive-mode). These statements are executed by user `appuser` when `edb_sql_protect.level` is set to `active`: +This example shows SQL statements similar to those given in the examples of Step 2 in [Passive mode](#passive-mode). These statements are executed by the user `appuser` when `edb_sql_protect.level` is set to `active`: ```sql edb=> CREATE TABLE appuser_tab_3 (f1 INTEGER); diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx index 8a84f44da92..be43c319694 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx @@ -8,11 +8,11 @@ legacyRedirectsGenerated: -You must be connected as a superuser to perform these operations and include the `sqlprotect` schema in your search path. 
+You must be connected as a superuser to perform these operations. Include the `sqlprotect` schema in your search path. ## Adding a role to the protected roles list -Add a role to the protected roles list run `protect_role('rolename')`, as shown in this example: +Add a role to the protected roles list. Run `protect_role('rolename')`, as shown in this example: ```sql edb=# SELECT protect_role('newuser'); @@ -95,7 +95,7 @@ The updated rules take effect on new sessions started by the role since the chan If SQL/Protect learns that a given relation is accessible for a given role, you can later remove that relation from the role’s protected relations list. -Delete its entry from the `edb_sql_protect_rel` table using any of the following functions: +Delete the entry from the `edb_sql_protect_rel` table using any of the following functions: ```sql unprotect_rel('rolename', 'relname') @@ -129,13 +129,13 @@ __OUTPUT__ (2 rows) ``` -SQL/Protect now issues a warning or completely blocks access (depending on the setting of `edb_sql_protect.level`) when the role attempts to utilize that relation. +SQL/Protect now issues a warning or completely blocks access (depending on the setting of `edb_sql_protect.level`) when the role attempts to use that relation. ## Deleting statistics -You can delete statistics from view `edb_sql_protect_stats` using either of the following functions: +You can delete statistics from the view `edb_sql_protect_stats` using either of the following functions: ```sql drop_stats('rolename') @@ -143,7 +143,7 @@ drop_stats('rolename') drop_stats(roleoid) ``` -The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s statistics using `drop_stats('rolename')`. 
If a query on `edb_sql_protect_stats` returns a value such as `unknown (OID=16458)` for the user name, use the `drop_stats(roleoid)` form of the function to remove the deleted role’s statistics from `edb_sql_protect_stats`. +The form of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s statistics using `drop_stats('rolename')`. If a query on `edb_sql_protect_stats` returns a value such as `unknown (OID=16458)` for the user name, use the `drop_stats(roleoid)` form of the function to remove the deleted role’s statistics from `edb_sql_protect_stats`. This example shows the `drop_stats` function: @@ -194,7 +194,7 @@ __OUTPUT__ ## Deleting offending queries -You can delete offending queries from view `edb_sql_protect_queries` using either of the following functions: +You can delete offending queries from the view `edb_sql_protect_queries` using either of the following functions: ```sql drop_queries('rolename') diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx index 33beea7ecaa..709f9840910 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx @@ -11,13 +11,13 @@ legacyRedirectsGenerated: Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are due mainly to the use of object identification numbers (OIDs) in the SQL/Protect tables. !!! 
Note - This information applies if your backup and restore procedures result in re-creating database objects in the new database with new OIDs, such as is when using the `pg_dump` backup program. + This information applies if your backup and restore procedures result in re-creating database objects in the new database with new OIDs, such as when using the `pg_dump` backup program. If you're backing up your EDB Postgres Advanced Server database server by using the operating system’s copy utility to create a binary image of the EDB Postgres Advanced Server data files (file system backup method), then this information doesn't apply. ## Object identification numbers in SQL/Protect tables -SQL/Protect uses two tables, `edb_sql_protect` and `edb_sql_protect_rel`, to store information on database objects such as databases, roles, and relations. References to these database objects in these tables are done using the objects’ OIDs, not the objects’ text names. The OID is a numeric data type used by EDB Postgres Advanced Server to uniquely identify each database object. +SQL/Protect uses two tables, `edb_sql_protect` and `edb_sql_protect_rel`, to store information on database objects such as databases, roles, and relations. References to these database objects in these tables are done using the objects’ OIDs, not their text names. The OID is a numeric data type used by EDB Postgres Advanced Server to uniquely identify each database object. When a database object is created, EDB Postgres Advanced Server assigns an OID to the object, which is then used when a reference to the object is needed in the database catalogs. If you create the same database object in two databases, such as a table with the same `CREATE TABLE` statement, each table is assigned a different OID in each database. @@ -54,7 +54,7 @@ __OUTPUT__ The files `/tmp/edb.dmp` and `/tmp/sqlprotect.dmp` comprise your total database backup. -## Restoring From the Backup Files +## Restoring from the backup files 1. 
Restore the backup file to the new database. @@ -77,7 +77,7 @@ CREATE SCHEMA 2. Connect to the new database as a superuser, and delete all rows from the `edb_sql_protect_rel` table. - This deletion removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored: + This deletion removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored. ```sql $ /usr/edb/as14/bin/psql -d newdb -U enterprisedb @@ -91,7 +91,7 @@ DELETE 2 3. Delete all rows from the `edb_sql_protect` table. - This deletion removes any existing rows in the `edb_sql_protect` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored: + This deletion removes any existing rows in the `edb_sql_protect` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored. ```sql newdb=# DELETE FROM sqlprotect.edb_sql_protect; diff --git a/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx b/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx index f4207eef16a..f2e2296f004 100644 --- a/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/03_virtual_private_database.mdx @@ -10,22 +10,22 @@ legacyRedirectsGenerated: Virtual Private Database is a type of *fine-grained access control* using security policies. Fine-grained access control means that you can control access to data down to specific rows as defined by the security policy. 
-The rules that encode a *security policy* are defined in a *policy function*, which is an SPL function with certain input parameters and return value. The security policy is the named association of the policy function to a particular database object, typically a table.
+The rules that encode a *security policy* are defined in a *policy function*. A policy function is an SPL function with certain input parameters and a return value. The security policy is the named association of the policy function to a particular database object, typically a table.

-In EDB Postgres Advanced Server, you can write the policy function in any language supported by EDB Postgres Advanced Server, such as SQL and PL/pgSQL, in addition to SPL.
+In EDB Postgres Advanced Server, you can write the policy function in any language it supports, such as SQL and PL/pgSQL, in addition to SPL.

!!! Note
-    The database objects currently supported by EDB Postgres Advanced Server Virtual Private Database are tables. You can apply policies to views or synonyms.
+    The database objects currently supported by EDB Postgres Advanced Server Virtual Private Database are tables. You can't apply policies to views or synonyms.

The following are advantages of using Virtual Private Database:

-- Provides a fine-grained level of security. Database-object-level privileges given by the `GRANT` command determine access privileges to the entire instance of a database object. Virtual Private Database provides access control for the individual rows of a database object instance.
+- It provides a fine-grained level of security. Database-object-level privileges given by the `GRANT` command determine access privileges to the entire instance of a database object. Virtual Private Database provides access control for the individual rows of a database object instance.
- You can apply a different security policy depending on the type of SQL command (`INSERT`, `UPDATE`, `DELETE`, or `SELECT`).
-The security policy can vary dynamically for each applicable SQL command affecting the database object depending on factors such as the session user of the application accessing the database object.
-Invoking the security policy is transparent to all applications that access the database object. You don't have to modify ndividual applications to apply the security policy.
+The security policy can vary dynamically for each applicable SQL command affecting the database object. It can depend on factors such as the session user of the application accessing the database object.
+Invoking the security policy is transparent to all applications that access the database object. You don't have to modify individual applications to apply the security policy.
- After you enable a security policy, no application (including new applications) can circumvent the security policy except by the system privilege described in the note that follows. Even superusers can't circumvent the security policy except by the noted system privilege.

!!! Note
-    The only way you can circumvent security policies is if the `EXEMPT ACCESS POLICY` system privilege is granted to a user. Use extreme care when granting the `EXEMPT ACCESS POLICY` privilege. A user with this privilege is exempted from all policies in the database.
+    The only way you can circumvent security policies is if the user is granted the `EXEMPT ACCESS POLICY` system privilege. Use extreme care when granting the `EXEMPT ACCESS POLICY` privilege. A user with this privilege is exempted from all policies in the database.

The `DBMS_RLS` package provides procedures to create policies, remove policies, enable policies, and disable policies.
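To make the policy-function concept concrete, here is an illustrative sketch. The `orders` table, its `added_by_user` column, and the function name are hypothetical; the pattern — an SPL function returning a predicate string, registered with the Oracle-compatible `DBMS_RLS.ADD_POLICY` procedure — is the one this chapter describes:

```sql
-- Hypothetical policy function. The server passes the schema and object
-- names and appends the returned predicate to statements on the table.
CREATE OR REPLACE FUNCTION verify_session_user (
    p_schema        VARCHAR2,
    p_object        VARCHAR2
)
RETURN VARCHAR2
IS
BEGIN
    -- Limit visible rows to those added by the current session user.
    RETURN 'added_by_user = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;

-- Name the association of the function with the (hypothetical) table.
EXEC DBMS_RLS.ADD_POLICY('public', 'orders', 'orders_rls_policy',
                         'public', 'verify_session_user');
```

Once the policy is enabled, an ordinary `SELECT * FROM orders` behaves as if the returned predicate were part of the query, for every application and every user not exempted as noted above.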
diff --git a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx index 8aac6f82243..ecdaeab36bc 100644 --- a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx @@ -6,7 +6,7 @@ title: "sslutils" `sslutils` is a Postgres extension that provides SSL certificate generation functions to EDB Postgres Advanced Server for use by the EDB Postgres Enterprise Manager server. Install `sslutils` by using the `edb-asxx-server-sslutils` RPM package, where `xx` is the EDB Postgres Advanced Server version number. -Each parameter in the function’s parameter list is described by `parameter n`, where `n` refers to the `nth` ordinal position (for example, first, second, third, etc.) in the function’s parameter list. +Each parameter in the function’s parameter list is described by `parameter n`, where `n` refers to the `nth` ordinal position (for example, first, second, or third) in the function’s parameter list. ## openssl_rsa_generate_key @@ -81,7 +81,7 @@ The function returns the self-signed certificate or certificate authority certif `parameter 3` - The path to the certificate authority’s private key or (if argument `2` is `NULL`) the path to a private key. + The path to the certificate authority’s private key or, if argument `2` is `NULL`, the path to a private key. ## openssl_rsa_generate_crl diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx index 71c4e78b858..743ce29bbdd 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx @@ -8,7 +8,7 @@ title: "Data redaction" For example, a social security number (SSN) is stored as `021-23-9567`. 
Privileged users can see the full SSN, while other users see only the last four digits: `xxx-xx-9567`. -You implement data redaction by defining a function for each field to which to apply redaction . The function returns the value to display to the users subject to the data redaction. +You implement data redaction by defining a function for each field to which to apply redaction. The function returns the value to display to the users subject to the data redaction. For example, for the SSN field, the redaction function returns `xxx-xx-9567` for an input SSN of `021-23-9567`. @@ -32,7 +32,7 @@ If the parameter is disabled by having it set to `FALSE` during the session, the - Superusers and the table owner bypass data redaction and see the original data. - All other users get an error. -You can change a redaction policy by using the `ALTER REDACTION POLICY` command. Or, you can eliminate it using the `DROP REDACTION POLICY` command. +You can change a redaction policy using the `ALTER REDACTION POLICY` command. Or, you can eliminate it using the `DROP REDACTION POLICY` command. ## CREATE REDACTION POLICY @@ -67,7 +67,7 @@ The `CREATE REDACTION POLICY` command defines a new column-level security policy `ADD [ COLUMN ]` - This optional form adds a column of the table to the data redaction policy. The `USING` specifies a redaction function expression. You can use multiple `ADD [ COLUMN ]` forms if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope or an exception to the data redaction policy to apply. If you don't specify the scope or exception, the default values for scope and exception are `query` and `none`, respectively. + This optional form adds a column of the table to the data redaction policy. The `USING` clause specifies a redaction function expression. 
You can use multiple `ADD [ COLUMN ]` forms if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope or an exception to the data redaction policy to apply. If you don't specify the scope or exception, the default value for scope is `query` and for exception is `none`.

### Parameters

@@ -77,7 +77,7 @@ The `CREATE REDACTION POLICY` command defines a new column-level security policy

`table_name`

-  The name (optionally schema-qualified) of the table the data redaction policy applies to.
+  The optionally schema-qualified name of the table the data redaction policy applies to.

`expression`

@@ -93,11 +93,11 @@ The `CREATE REDACTION POLICY` command defines a new column-level security policy

`scope_value`

-  The scope identifies the query part to apply redaction for the column. Scope value can be `query`, `top_tlist` or `top_tlist_or_error`. If the scope is `query`, then the redaction is applied on the column regardless of where it appears in the query. If the scope is `top_tlist`, then the redaction is applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is same as the `top_tlist` but throws an errors when the column appears anywhere else in the query.
+  The scope identifies the query part to apply redaction for the column. Scope value can be `query`, `top_tlist`, or `top_tlist_or_error`. If the scope is `query`, then the redaction is applied on the column regardless of where it appears in the query. If the scope is `top_tlist`, then the redaction is applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is the same as `top_tlist` but throws an error when the column appears anywhere else in the query.

`exception_value`

-  The exception identifies the query part where redaction is exempted.
Exception value can be `none`, `equal` or `leakproof`. If exception is `none`, then there is no exemption. If exception is `equal`, then the column isn't redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. + The exception identifies the query part where redaction is exempted. Exception value can be `none`, `equal`, or `leakproof`. If exception is `none`, then there's no exemption. If exception is `equal`, then the column isn't redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. ### Notes @@ -107,7 +107,9 @@ The superuser and the table owner are exempt from the data redaction policy. ### Examples -This example shows how you can use this feature in production environments. Create the components for a data redaction policy on the `employees` table: +This example shows how you can use this feature in production environments. + +Create the components for a data redaction policy on the `employees` table: ```sql CREATE TABLE employees ( @@ -148,7 +150,7 @@ CREATE OR REPLACE FUNCTION redact_salary () RETURN money IS BEGIN return END; ``` -Create a data redaction policy on `employees` to redact column `ssn`, which must be accessible in equality condition, and `salary` with default scope and exception. The redaction policy is exempt for the `hr` user. +Create a data redaction policy on `employees` to redact column `ssn` and `salary` with default scope and exception. Column `ssn` must be accessible in equality condition. The redaction policy is exempt for the `hr` user. ```sql CREATE REDACTION POLICY redact_policy_personal_info ON employees FOR (session_user != 'hr') @@ -215,7 +217,7 @@ __OUTPUT__ ### Caveats -- The data redaction policies created on inheritance hierarchies aren't cascaded. 
For example, if the data redaction policy is created for a parent, it isn't applied to the child table that inherits it, and vice versa. Someone who has access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/ddl-inherit.html). +- The data redaction policies created on inheritance hierarchies aren't cascaded. For example, if the data redaction policy is created for a parent, it isn't applied to the child table that inherits it, and vice versa. A user with access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/ddl-inherit.html). - If the superuser or the table owner created any materialized view on the table and provided the access rights `GRANT SELECT` on the table and the materialized view to any non-superuser, then the non-superuser can access the non-redacted data through the materialized view. @@ -307,7 +309,7 @@ To use `ALTER REDACTION POLICY`, you must own the table that the data redaction `table_name` - The name (optionally schema-qualified) of the table that the data redaction policy is on. + The optionally schema-qualified name of the table that the data redaction policy is on. `new_name` @@ -386,7 +388,7 @@ To use `DROP REDACTION POLICY`, you must own the table that the redaction policy `table_name` - The name (optionally schema-qualified) of the table that the data redaction policy is on. + The optionally schema-qualified name of the table that the data redaction policy is on.
`CASCADE` diff --git a/product_docs/docs/epas/15/epas_security_guide/index.mdx b/product_docs/docs/epas/15/epas_security_guide/index.mdx index 2a1f3d44a21..89183a7fb75 100644 --- a/product_docs/docs/epas/15/epas_security_guide/index.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/index.mdx @@ -11,5 +11,4 @@ EDB Postgres Advanced Server security features include: - [sslutils](04_sslutils/#sslutils) is a Postgres extension that allows you to generate SSL certificates. - [Data redaction](05_data_redaction/#data_redaction) functionality allows you to dynamically mask portions of data. - For information about Postgres authentication and security features, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/). \ No newline at end of file diff --git a/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx b/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx index bab3198552a..db196daa48e 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/installing_epas_using_local_repository.mdx @@ -36,7 +36,7 @@ To create and use a local repository, you must: - Copy the RPM installation packages to your local repository. You can download the individual packages or use a tarball to populate the repository. The packages are available from the EDB repository at . -- Sync the RPM packages and create the repository. +- Sync the RPM packages, and create the repository. 
```text reposync -r edbas15 -p /srv/repos diff --git a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx index 1cd839d7ecc..a376a6d1fa5 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx @@ -8,13 +8,13 @@ redirects: - /epas/latest/epas_inst_linux/managing_an_advanced_server_installation/ --- -Unless otherwise noted, the commands and paths noted in the following section assume that you have performed an installation using the native packages. +Unless otherwise noted, the commands and paths in these instructions assume that you performed an installation using the native packages. ## Starting and stopping services -A service is a program that runs in the background and requires no user interaction (in fact, a service provides no user interface); a service can be configured to start at boot time, or manually on demand. Services are best controlled using the platform-specific operating system service control utility. Many of the EDB Postgres Advanced Server supporting components are services. +A service is a program that runs in the background and requires no user interaction. In fact, a service provides no user interface. You can configure a service to start at boot time or manually on demand. Services are best controlled using the platform-specific operating system service control utility. Many of the EDB Postgres Advanced Server supporting components are services. 
-The following table lists the names of the services that control EDB Postgres Advanced Server and services that control EDB Postgres Advanced Server supporting components: +The following table lists the names of the services that control EDB Postgres Advanced Server and services that control EDB Postgres Advanced Server supporting components. | EDB Postgres Advanced Server component name | Linux service Name | Debian service name | | ------------------------------ | ------------------------ | --------------------------------------- | From efae7f5e93486f232e04ff58c2be960b331a5dd6 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 23 Feb 2023 17:02:36 -0500 Subject: [PATCH 04/50] first review of upgrade guide --- product_docs/docs/epas/15/index.mdx | 2 +- ...naging_an_advanced_server_installation.mdx | 122 ++++++++---------- .../linux_install_details/rpm_packages.mdx | 26 ++-- .../docs/epas/15/upgrading/03_limitations.mdx | 6 +- .../01_linking_versus_copying.mdx | 8 +- .../01_performing_an_upgrade/index.mdx | 20 +-- .../01_command_line_options_reference.mdx | 16 +-- .../02_invoking_pg_upgrade/index.mdx | 28 ++-- .../03_upgrading_to_advanced_server.mdx | 114 ++++++++-------- .../04_upgrading_a_pgAgent_installation.mdx | 4 +- .../05_pg_upgrade_troubleshooting.mdx | 16 +-- .../06_reverting_to_the_old_cluster.mdx | 8 +- .../index.mdx | 13 +- ..._version_update_of_an_rpm_installation.mdx | 7 +- ...plus_to_perform_a_minor_version_update.mdx | 45 +++---- product_docs/docs/epas/15/upgrading/index.mdx | 2 +- 16 files changed, 201 insertions(+), 236 deletions(-) diff --git a/product_docs/docs/epas/15/index.mdx b/product_docs/docs/epas/15/index.mdx index ab9d5f086a8..a23bd254eef 100644 --- a/product_docs/docs/epas/15/index.mdx +++ b/product_docs/docs/epas/15/index.mdx @@ -40,4 +40,4 @@ legacyRedirectsGenerated: - "/edb-docs/p/edb-postgres-advanced-server/9.5" --- - With EDB Postgres Advanced Server, EnterpriseDB continues to 
lead as the only worldwide company to deliver innovative and low-cost, open-source-derived database solutions with commercial quality, ease of use, compatibility, scalability, and performance for small or large-scale enterprises. +With EDB Postgres Advanced Server, EDB continues to lead. It's the only worldwide company to deliver innovative and low-cost, open-source-derived database solutions with commercial quality, ease of use, compatibility, scalability, and performance for small or large-scale enterprises. diff --git a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx index a376a6d1fa5..e9eabddc83b 100644 --- a/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx +++ b/product_docs/docs/epas/15/installing/linux_install_details/managing_an_advanced_server_installation.mdx @@ -12,7 +12,7 @@ Unless otherwise noted, the commands and paths in these instructions assume that ## Starting and stopping services -A service is a program that runs in the background and requires no user interaction. In fact, a service provides no user interface. You can configure a service to start at boot time or manually on demand. Services are best controlled using the platform-specific operating system service control utility. Many of the EDB Postgres Advanced Server supporting components are services. +A service is a program that runs in the background and doesn't require user interaction. A service provides no user interface. You can configure a service to start at boot time or manually on demand. Services are best controlled using the platform-specific operating system service control utility. Many of the EDB Postgres Advanced Server supporting components are services. 
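As an illustrative sketch of that service model — assuming a systemd-based host and the default Linux service name `edb-as-15` used elsewhere in these docs; substitute the name of the service for the component you want to control:

```text
systemctl status edb-as-15
systemctl restart edb-as-15
```

Because the operating system's service controller tracks state changes made this way, controlling the server through its service is generally preferable to manipulating the postmaster process directly.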
The following table lists the names of the services that control EDB Postgres Advanced Server and services that control EDB Postgres Advanced Server supporting components. @@ -31,7 +31,7 @@ You can use the Linux command line to control EDB Postgres Advanced Server's dat If your installation of EDB Postgres Advanced Server resides on version 7.x | 8.x of RHEL and CentOS, you must use the `systemctl` command to control the EDB Postgres Advanced Server service and supporting components. -The `systemctl` command must be in your search path and must be invoked with superuser privileges. To use the command, open a command line, and enter: +The `systemctl` command must be in your search path, and you must invoke it with superuser privileges. To use the command, open a command line, and enter: ```text systemctl ``` @@ -84,7 +84,13 @@ If your installation of EDB Postgres Advanced Server resides on version 18.04 | ### Using pg_ctl to control EDB Postgres Advanced Server -You can use the `pg_ctl` utility to control an EDB Postgres Advanced Server service from the command line on any platform. `pg_ctl` allows you to start, stop, or restart the EDB Postgres Advanced Server database server, reload the configuration parameters, or display the status of a running server. To invoke the utility, assume the identity of the cluster owner, navigate into the home directory of EDB Postgres Advanced Server, and issue the command: +You can use the `pg_ctl` utility to control an EDB Postgres Advanced Server service from the command line on any platform. `pg_ctl` allows you to: + +- Start, stop, or restart the EDB Postgres Advanced Server database server +- Reload the configuration parameters +- Display the status of a running server + +To invoke the utility, assume the identity of the cluster owner.
In the home directory of EDB Postgres Advanced Server, issue the command: ```text ./bin/pg_ctl -D ``` @@ -97,20 +103,18 @@ You can use the `pg_ctl` utility to control an EDB Postgres Advanced Server serv - `start` to start the service. - `stop` to stop the service. - `restart` to stop and then start the service. -- `reload` sends the server a `SIGHUP` signal, reloading configuration parameters +- `reload` to send the server a `SIGHUP` signal, reloading configuration parameters. - `status` to discover the current status of the service. -For more information about using the `pg_ctl` utility, or the command line options available, see the official PostgreSQL Core Documentation available at: - - +For more information about using the `pg_ctl` utility or the command line options available, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/app-pg-ctl.html). -**Choosing Between pg_ctl and the service Command** +#### Choosing between pg_ctl and the service command -You can use the `pg_ctl` utility to manage the status of an EDB Postgres Advanced Server cluster, but it is important to note that `pg_ctl` does not alert the operating system service controller to changes in the status of a server, so it is beneficial to use the `service` command whenever possible. +You can use the `pg_ctl` utility to manage the status of an EDB Postgres Advanced Server cluster. However, it's important to note that `pg_ctl` doesn't alert the operating system service controller to changes in the status of a server. We recommend using the `service` command when possible. ### Configuring component services to autostart at system reboot -After installing, configuring, and starting the services of EDB Postgres Advanced Server supporting components on a Linux system, you must manually configure your system to autostart the service when your system reboots.
To configure a service to autostart on a Linux system, open a command line, assume superuser privileges, and enter the following command. +After installing, configuring, and starting the services of EDB Postgres Advanced Server supporting components on a Linux system, you must manually configure your system to autostart the service when your system restarts. To configure a service to autostart on a Linux system, open a command line, assume superuser privileges, and enter the command. On a Redhat-compatible Linux system, enter: @@ -122,7 +126,7 @@ Where `service_name` specifies the name of the service. ## Connecting to EDB Postgres Advanced Server with edb-psql -`edb-psql` is a command line client application that allows you to execute SQL commands and view the results. To open the `edb-psql` client, the client must be in your search path. The executable resides in the `bin` directory, under your EDB Postgres Advanced Server installation. +`edb-psql` is a command line client application that allows you to execute SQL commands and view the results. To open the `edb-psql` client, the client must be in your search path. The executable resides in the `bin` directory under your EDB Postgres Advanced Server installation. Use the following command and options to start the `edb-psql` client: @@ -136,38 +140,32 @@ Where: `-U` specifies the identity of the database user to use for the session. -`edb-psql` is a symbolic link to a binary called `psql`, a modified version of the PostgreSQL community `psql`, with added support for Advanced Server features. For more information about using the command line client, see the PostgreSQL Core Documentation at: - - +`edb-psql` is a symbolic link to a binary called `psql`, a modified version of the PostgreSQL community `psql`, with added support for EDB Postgres Advanced Server features. 
For more information about using the command line client, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/app-psql.html). ### Managing authentication on a Debian or Ubuntu host By default, the server is running with the peer or md5 permission on a Debian or Ubuntu host. You can change the authentication method by modifying the `pg_hba.conf` file, located under `/etc/edb-as/15/main/`. -For more information about modifying the `pg_hba.conf` file, see the PostgreSQL core documentation available at: - - +For more information about modifying the `pg_hba.conf` file, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html). ## Configuring a package installation -The packages that install the database server component create a unit file (on version 7.x or 8.x hosts) and service startup scripts. +The packages that install the database server component create a unit file on version 7.x or 8.x hosts and service startup scripts. ### Creating a database cluster and starting the service -The PostgreSQL `initdb` command creates a database cluster; when installing EDB Postgres Advanced Server with an RPM package, the `initdb` executable is in `/usr/edb/asx.x/bin`. After installing EDB Postgres Advanced Server, you must manually configure the service and invoke `initdb` to create your cluster. When invoking `initdb`, you can: +The PostgreSQL `initdb` command creates a database cluster. When installing EDB Postgres Advanced Server with an RPM package, the `initdb` executable is in `/usr/edb/asx.x/bin`. After installing EDB Postgres Advanced Server, you must manually configure the service and invoke `initdb` to create your cluster. When invoking `initdb`, you can: - Specify environment options on the command line. - Include the `systemd` service manager on RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x and use a service configuration file to configure the environment. 
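As a rough sketch of that manual cluster-creation step — assuming an RPM installation of version 15, the default `enterprisedb` operating system user, and the default data directory; adjust the paths for your environment:

```text
su - enterprisedb
/usr/edb/as15/bin/initdb -D /var/lib/edb/as15/data
```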
-To review the `initdb` documentation, visit: +For more information, see the [`initdb` documentation](https://www.postgresql.org/docs/current/static/app-initdb.html). - - -After specifying any options in the service configuration file, you can create the database cluster and start the service; these steps are platform specific. +After specifying any options in the service configuration file, you can create the database cluster and start the service. The steps are platform specific. #### On RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x -To invoke `initdb` on a RHEL/CentOS 7.x or Rocky Linux/AlmaLinux 8.x system, with the options specified in the service configuration file, assume the identity of the operating system superuser: +To invoke `initdb` on a RHEL/CentOS 7.x or Rocky Linux/AlmaLinux 8.x system with the options specified in the service configuration file, assume the identity of the operating system superuser: ```text su - root @@ -175,7 +173,7 @@ su - root To initialize a cluster with the non-default values, you can use the `PGSETUP_INITDB_OPTIONS` environment variable. You can initialize the cluster using the `edb-as-15-setup` script under `EPAS_Home/bin`. -To invoke `initdb` export the `PGSETUP_INITDB_OPTIONS` environment variable with the following command: +To invoke `initdb`, export the `PGSETUP_INITDB_OPTIONS` environment variable: ```text PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as15/bin/edb-as-15-setup initdb @@ -195,25 +193,25 @@ You can initialize multiple clusters using the bundled scripts. To create a new /usr/bin/epas_createcluster 15 main2 ``` -To start a new cluster, use the following command: +To start a new cluster: ```text /usr/bin/epas_ctlcluster 15 main2 start   ``` -To list all the available clusters, use the following command: +To list all the available clusters: ```text /usr/bin/epas_lsclusters ``` !!! 
Note - The data directory is created under `/var/lib/edb-as/15/main2` and configuration directory is created under `/etc/edb-as/15/main/`. + The data directory is created under `/var/lib/edb-as/15/main2`, and the configuration directory is created under `/etc/edb-as/15/main/`. ## Specifying cluster options with INITDBOPTS -You can use the `INITDBOPTS` variable to specify your cluster configuration preferences. By default, the `INITDBOPTS` variable is commented out in the service configuration file; unless modified, when you run the service startup script, the new cluster is created in a mode compatible with Oracle databases. Clusters created in this mode contain a database named `edb`, and have a database superuser named `enterprisedb`. +You can use the `INITDBOPTS` variable to specify your cluster configuration preferences. By default, the `INITDBOPTS` variable is commented out in the service configuration file. Unless you modify it, when you run the service startup script, the new cluster is created in a mode compatible with Oracle databases. Clusters created in this mode contain a database named `edb` and have a database superuser named `enterprisedb`. ### Initializing the cluster in Oracle mode @@ -222,15 +220,15 @@ If you initialize the database using Oracle compatibility mode, the installation - Data dictionary views compatible with Oracle databases. - Oracle data type conversions. - Date values displayed in a format compatible with Oracle syntax. -- Support for Oracle-styled concatenation rules (if you concatenate a string value with a `NULL` value, the returned value is the value of the string). +- Support for Oracle-styled concatenation rules. If you concatenate a string value with a `NULL` value, the returned value is the value of the string. - Support for the following Oracle built-in packages. 
| Package | Functionality compatible with Oracle databases | | ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `dbms_alert` | Provides the capability to register for, send, and receive alerts. | -| `dbms_job` | Provides the capability for the creation, scheduling, and managing of jobs. | +| `dbms_job` | Provides the capability to create, schedule, and manage jobs. | | `dbms_lob` | Provides the capability to manage on large objects. | -| `dbms_output` | Provides the capability to send messages to a message buffer, or get messages from the message buffer. | +| `dbms_output` | Provides the capability to send messages to a message buffer or get messages from the message buffer. | | `dbms_pipe` | Provides the capability to send messages through a pipe within or between sessions connected to the same database cluster. | | `dbms_rls` | Enables the implementation of Virtual Private Database on certain EDB Postgres Advanced Server database objects. | | `dbms_sql` | Provides an application interface to the EDB dynamic SQL functionality. | @@ -239,7 +237,7 @@ If you initialize the database using Oracle compatibility mode, the installation | `dbms_aq` | Provides message queueing and processing for EDB Postgres Advanced Server. | | `dbms_profiler` | Collects and stores performance information about the PL/pgSQL and SPL statements that are executed during a performance profiling session. | | `dbms_random` | Provides a number of methods to generate random values. | -| `dbms_redact` | Enables the redacting or masking of data that is returned by a query. | +| `dbms_redact` | Enables redacting or masking data that's returned by a query. | | `dbms_lock` | Provides support for the `DBMS_LOCK.SLEEP` procedure. | | `dbms_scheduler` | Provides a way to create and manage jobs, programs, and job schedules. 
| | `dbms_crypto` | Provides functions and procedures to encrypt or decrypt RAW, BLOB or CLOB data. You can also use `DBMS_CRYPTO` functions to generate cryptographically strong random values. | @@ -247,10 +245,10 @@ If you initialize the database using Oracle compatibility mode, the installation | `dbms_session` | Provides support for the `DBMS_SESSION.SET_ROLE` procedure. | | `utl_encode` | Provides a way to encode and decode data. | | `utl_http` | Provides a way to use the HTTP or HTTPS protocol to retrieve information found at an URL. | -| `utl_file` | Provides the capability to read from, and write to files on the operating system’s file system. | +| `utl_file` | Provides the capability to read from and write to files on the operating system’s file system. | | `utl_smtp` | Provides the capability to send e-mails over the Simple Mail Transfer Protocol (SMTP). | -| `utl_mail` | Provides the capability to manage e-mail. | -| `utl_url` | Provides a way to escape illegal and reserved characters within an URL. | +| `utl_mail` | Provides the capability to manage email. | +| `utl_url` | Provides a way to escape illegal and reserved characters in a URL. | | `utl_raw` | Provides a way to manipulate or retrieve the length of raw data types. | @@ -258,14 +256,12 @@ If you initialize the database using Oracle compatibility mode, the installation Clusters created in PostgreSQL mode don't include compatibility features. To create a new cluster in PostgreSQL mode, remove the pound sign (#) in front of the `INITDBOPTS` variable, enabling the `"--no-redwood-compat"` option. Clusters created in PostgreSQL mode contain a database named `postgres` and have a database superuser named `postgres`. -You may also specify multiple `initdb` options. For example, the following statement: +You can also specify multiple `initdb` options. For example, the following statement creates a database cluster without compatibility features for Oracle. 
The cluster contains a database named `postgres` that's owned by a user named `alice`. The cluster uses `UTF-8` encoding. ```text INITDBOPTS="--no-redwood-compat -U alice --locale=en_US.UTF-8" ``` -Creates a database cluster (without compatibility features for Oracle) that contains a database named `postgres` that is owned by a user named `alice`; the cluster uses `UTF-8` encoding. - If you initialize the database using `"--no-redwood-compat"` mode, the installation includes the following package: | Package | Functionality noncompatible with Oracle databases | @@ -274,7 +270,7 @@ If you initialize the database using `"--no-redwood-compat"` mode, the installat | `dbms_aq` | Provides message queueing and processing for EDB Postgres Advanced Server. | | `edb_bulkload` | Provides direct/conventional data loading capability when loading huge amount of data into a database. | | `edb_gen` | Provides miscellaneous packages to run built-in packages. | -| `edb_objects` | Provides Oracle compatible objects such as packages, procedures etc. | +| `edb_objects` | Provides Oracle-compatible objects such as packages and procedures. | | `waitstates` | Provides monitor session blocking. | | `edb_dblink_libpq` | Provides link to foreign databases via libpq. | | `edb_dblink_oci` | Provides link to foreign databases via OCI. | @@ -286,21 +282,17 @@ In addition to the cluster configuration options documented in the PostgreSQL co `--no-redwood-compat` -Include the `--no-redwood-compat` keywords to instruct the server to create the cluster in PostgreSQL mode. When the cluster is created in PostgreSQL mode, the name of the database superuser is `postgres` and the name of the default database is `postgres`. The few Advanced Server’s features compatible with Oracle databases will be available with this mode. However, we recommend using the Advanced server in redwood compatibility mode to use all its features. 
+Include the `--no-redwood-compat` keywords to create the cluster in PostgreSQL mode. When the cluster is created in PostgreSQL mode, the name of the database superuser is `postgres`, and the name of the default database is `postgres`. The few EDB Postgres Advanced Server features compatible with Oracle databases are available with this mode. However, we recommend using EDB Postgres Advanced Server in redwood compatibility mode to use all its features. `--redwood-like` -Include the `--redwood-like` keywords to instruct the server to use an escape character (an empty string ('')) following the `LIKE` (or PostgreSQL-compatible `ILIKE`) operator in a SQL statement that is compatible with Oracle syntax. +Include the `--redwood-like` keywords to use an escape character, that is, an empty string (''), following the `LIKE` (or PostgreSQL-compatible `ILIKE`) operator in a SQL statement that's compatible with Oracle syntax. `--icu-short-form` -Include the `--icu-short-form` keywords to create a cluster that uses a default ICU (International Components for Unicode) collation for all databases in the cluster. For more information about Unicode collations, refer to the *EDB Postgres Advanced Server Guide* available at: - -[https://www.enterprisedb.com/docs](/epas/latest/) - -For more information about using `initdb`, and the available cluster configuration options, see the PostgreSQL Core Documentation available at: +Include the `--icu-short-form` keywords to create a cluster that uses a default International Components for Unicode (ICU) collation for all databases in the cluster. For more information about Unicode collations, see [Unicode collation algorithm](/epas/latest/epas_guide/03_database_administration/06_unicode_collation_algorithm/). - +For more information about using `initdb` and the available cluster configuration options, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/app-initdb.html).
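For example, a hypothetical service configuration entry that enables the Oracle-style `LIKE` escape behavior described above would uncomment and set `INITDBOPTS` like this:

```text
INITDBOPTS="--redwood-like"
```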
You can also view online help for `initdb` by assuming superuser privileges and entering: @@ -314,7 +306,7 @@ Where `path_to_initdb_installation_directory` specifies the location of the `ini ### On RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x -On a RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x host, the unit file is named `edb-as-15.service` and resides in `/usr/lib/systemd/system`. The unit file contains references to the location of the EDB Postgres Advanced Server `data` directory. Avoid making any modifications directly to the unit file because it might be overwritten during package upgrades. +On a RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x host, the unit file is named `edb-as-15.service` and resides in `/usr/lib/systemd/system`. The unit file contains references to the location of the EDB Postgres Advanced Server `data` directory. Avoid making any modifications directly to the unit file because they might be overwritten during package upgrades. By default, data files reside under `/var/lib/edb/as15/data` directory. To use a data directory that resides in a non-default location: @@ -333,7 +325,7 @@ By default, data files reside under `/var/lib/edb/as15/data` directory. To use a PIDFile=/var/lib/edb/as15/data/postmaster.pid ``` -- Delete the entire content of `/etc/systemd/system/edb-as-15.service` file, except the following line: +- Delete the content of the `/etc/systemd/system/edb-as-15.service` file except the following line: ```text .include /lib/systemd/system/edb-as-15.service @@ -359,9 +351,9 @@ By default, data files reside under `/var/lib/edb/as15/data` directory. To use a ### Configuring SELinux policy to change the data directory location on RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x -By default, the data files reside under `/var/lib/edb/as15/data` directory. 
To change the default data directory location depending on individual environment preferences, you must configure the SELinux policy: +By default, the data files reside under the `/var/lib/edb/as15/data` directory. To change the default data directory location depending on individual environment preferences, you must configure the SELinux policy: -- Stop the server using the following command: +- Stop the server: ```text systemctl stop edb-as-15 @@ -385,26 +377,26 @@ By default, the data files reside under `/var/lib/edb/as15/data` directory. To c Max kernel policy version: 31 ``` -- Use the following command to view the SELinux context of the default database location: +- View the SELinux context of the default database location: ```text ls -lZ /var/lib/edb/as15/data drwx------. enterprisedb enterprisedb unconfined_u:object_r:var_lib_t:s0 log ``` -- Create a new directory for a new location of the database using the following command: +- Create a new directory for a new location of the database: ```text mkdir /opt/edb ``` -- Use the following command to move the data directory to `/opt/edb`: +- Move the data directory to `/opt/edb`: ```text mv /var/lib/edb/as15/data /opt/edb/ ``` -- Create a file `edb-as-15.service` under `/etc/systemd/system` directory to include the location of a new data directory: +- Create a file `edb-as-15.service` under `/etc/systemd/system` to include the location of a new data directory: ```text .include /lib/systemd/system/edb-as-15.service @@ -419,19 +411,19 @@ By default, the data files reside under `/var/lib/edb/as15/data` directory. 
To c
   semanage fcontext --add --equal /var/lib/edb/as15/data /opt/edb
   ```

-- Apply the context mapping using `restorecon` utility:
+- Apply the context mapping using the `restorecon` utility:

   ```text
   restorecon -rv /opt/edb/
   ```

-- Reload `systemd` to modify the service script using the following command:
+- Reload `systemd` to modify the service script:

   ```text
   systemctl daemon-reload
   ```

-- Now, the `/opt/edb` location has been labeled correctly with the context, use the following command to start the service:
+- With the `/opt/edb` location labeled correctly with the context, start the service:

   ```text
   systemctl start edb-as-15
@@ -445,15 +437,15 @@ You can configure EDB Postgres Advanced Server to use multiple postmasters, each

 The `edb-as15-server-core` RPM for version 7.x | 8.x contains a unit file that starts the EDB Postgres Advanced Server instance. The file allows you to start multiple services, with unique `data` directories and monitor different ports. You need to have `root` access to invoke or modify the script.

-The example that follows creates an EDB Postgres Advanced Server installation with two instances; the secondary instance is named `secondary`:
+This example creates an EDB Postgres Advanced Server installation with two instances. The secondary instance is named `secondary`.

-- Make a copy of the default file with the new name. As noted at the top of the file, all modifications must reside under `/etc`. You must pick a name that is not already used in `/etc/systemd/system`.
+- Make a copy of the default file with the new name. As noted at the top of the file, all modifications must reside under `/etc`. You must pick a name that isn't already used in `/etc/systemd/system`.

   ```text
   cp /usr/lib/systemd/system/edb-as-15.service /etc/systemd/system/secondary-edb-as-15.service
   ```

-- Edit the file, changing `PGDATA` to point to the new `data` directory that you will create the cluster against.
+- Edit the file, changing `PGDATA` to point to the new `data` directory that you'll create the cluster against.

 - Create the target `PGDATA` with user `enterprisedb`.

@@ -463,15 +455,15 @@ The example that follows creates an EDB Postgres Advanced Server installation wi
   /usr/edb/as15/bin/edb-as-15-setup initdb secondary-edb-as-15
   ```

-- Edit the `postgresql.conf` file for the new instance, specifying the port, the IP address, TCP/IP settings, etc.
+- Edit the `postgresql.conf` file for the new instance, specifying the port, the IP address, TCP/IP settings, and so on.

-- Make sure that new cluster runs after a reboot:
+- Make sure that the new cluster runs after a reboot:

   ```text
   systemctl enable secondary-edb-as-15
   ```

-- Start the second cluster with the following command:
+- Start the second cluster:

   ```text
   systemctl start secondary-edb-as-15
diff --git a/product_docs/docs/epas/15/installing/linux_install_details/rpm_packages.mdx b/product_docs/docs/epas/15/installing/linux_install_details/rpm_packages.mdx
index 71ee2c27e70..654fc8463d4 100644
--- a/product_docs/docs/epas/15/installing/linux_install_details/rpm_packages.mdx
+++ b/product_docs/docs/epas/15/installing/linux_install_details/rpm_packages.mdx
@@ -4,7 +4,7 @@ redirects:
   - /epas/latest/epas_inst_linux/install_details/rpm_packages/
 ---

-EDB provides a number of native packages in the EDB repository. The packages vary slightly for the various Linux variations, see:
+EDB provides a number of native packages in the EDB repository. The packages vary slightly for the various Linux variations. See:

 - [RHEL/OL/Rocky Linux/AlmaLinux/CentOS/SLES Packages](#rhelolrocky-linuxalmalinuxcentossles-packages)
 - [Debian/Ubuntu Packages](#debianubuntu-packages)
@@ -45,7 +45,7 @@ Note: The available package list is subject to change.
 | edb-as15-server-llvmjit | Contains support for just-in-time (JIT) compiling parts of EDB Postgres Advanced Server's queries.
| | edb-as15-server-pldebugger | Implements an API for debugging PL/pgSQL functions on EDB Postgres Advanced Server. | | edb-as15-server-plperl | Installs the PL/Perl procedural language for EDB Postgres Advanced Server. The `edb-as15-server-plperl` package depends on the platform-supplied version of Perl. | -| edb-as15-server-plpython3 | Installs the PL/Python procedural language for EDB Postgres Advanced Server. The PL/Python2 support is no longer available from EDB Postgres Advanced Server version 15 and later. | +| edb-as15-server-plpython3 | Installs the PL/Python procedural language for EDB Postgres Advanced Server. The PL/Python2 support is no longer available in EDB Postgres Advanced Server version 15 and later. | | edb-as15-server-pltcl | Installs the PL/Tcl procedural language for EDB Postgres Advanced Server. The `edb-as15-pltcl` package depends on the platform-supplied version of TCL. | | edb-as15-server-sqlprofiler | Installs EDB Postgres Advanced Server's SQL Profiler feature. SQL Profiler helps identify and optimize SQL code. | | edb-as15-server-sqlprotect | Installs EDB Postgres Advanced Server's SQL Protect feature. SQL Protect provides protection against SQL injection attacks. | @@ -89,18 +89,18 @@ The following table lists the packages for EDB Postgres Advanced Server 15 suppo | edb-pem-agent | The `edb-pem-agent` is an agent component of Postgres Enterprise Manager. | | edb-pem-docs | Contains documentation for various languages, which are in HTML format. | | edb-pem-server | Contains server components of Postgres Enterprise Manager. | -| edb-pgadmin4 | It is a management tool for PostgreSQL capable of hosting the Python application and presenting it to the user as a desktop application. | +| edb-pgadmin4 | A management tool for PostgreSQL capable of hosting the Python application and presenting it to the user as a desktop application. | | edb-pgadmin4-desktop-common | Installs the desktop components of pgAdmin4 for all window managers. 
| | edb-pgadmin4-desktop-gnome | Installs the gnome desktop components of pgAdmin4 | | edb-pgadmin4-docs | Contains documentation of pgAdmin4. | | edb-pgadmin4-web | Contains the required files to run pgAdmin4 as a web application. | | edb-efm40 | Installs EDB Failover Manager that adds fault tolerance to database clusters to minimize downtime when a primary database fails by keeping data online in high availability configurations. | -| edb-rs | It is a java-based replication framework that provides asynchronous replication across Postgres and EPAS database servers. It supports primary-standby, primary-primary, and hybrid configurations. | -| edb-rs-client | It is a java-based command-line tool that is used to configure and operate a replication network via different commands by interacting with the EPRS server. | -| edb-rs-datavalidator | It is a java-based command-line tool that provides row and column level data comparison of a source and target database table. The supported RDBMS servers include PostgreSQL, EPAS, Oracle, and MS SQL Server. | +| edb-rs | A Java-based replication framework that provides asynchronous replication across Postgres and EPAS database servers. It supports primary-standby, primary-primary, and hybrid configurations. | +| edb-rs-client | A Java-based command-line tool that is used to configure and operate a replication network via different commands by interacting with the EPRS server. | +| edb-rs-datavalidator | A Java-based command-line tool that provides row and column level data comparison of a source and target database table. The supported RDBMS servers include PostgreSQL, EPAS, Oracle, and MS SQL Server. | | edb-rs-libs | Contains certain libraries that are commonly used by ERPS Server, EPRS Client, and Monitoring modules. | -| edb-rs-monitor | It is a java-based application that provides monitoring capabilities to ensure a smooth functioning of the EPRS replication cluster. 
| -| edb-rs-server | It is a java-based replication framework that provides asynchronous replication across Postgres and EPAS database servers. It supports primary-standby, primary-primary, and hybrid configurations. | +| edb-rs-monitor | A Java-based application that provides monitoring capabilities to ensure a smooth functioning of the EPRS replication cluster. | +| edb-rs-server | A Java-based replication framework that provides asynchronous replication across Postgres and EPAS database servers. It supports primary-standby, primary-primary, and hybrid configurations. | | edb-bart | Installs the Backup and Recovery Tool (BART) to support online backup and recovery across local and remote PostgreSQL and EDB EDB Postgres Advanced Servers. | | libevent-edb | Contains supporting library files. | | libiconv-edb | Contains supporting library files. | @@ -110,7 +110,7 @@ The following table lists the packages for EDB Postgres Advanced Server 15 suppo ### EDB Postgres Advanced Server Debian packages -The table that follows lists some of the Debian packages that are available from EDB. You can also use the `apt list` command to access a list of the packages that are currently available from your configured repository. Open a command line, assume superuser privileges, and enter: +The table lists some of the Debian packages that are available from EDB. You can also use the `apt list` command to access a list of the packages that are currently available from your configured repository. Open a command line, assume superuser privileges, and enter: ```text apt list edb* @@ -124,7 +124,7 @@ apt list edb* | edb-as15-server | Installs core components of the EDB Postgres Advanced Server database server. | | edb-as15-server-client | Includes client programs and utilities that you can use to access and manage EDB Postgres Advanced Server. | | edb-as15-server-core | Includes the programs needed to create the core functionality behind the EDB Postgres Advanced Server database. 
| -| edb-as15-server-dev | The `edb-as15-server-dev` package contains the header files and libraries needed to compile C or C++ applications that directly interact with an EDB Postgres Advanced Server server and the ecpg or ecpgPlus C preprocessor. | +| edb-as15-server-dev | Package that contains the header files and libraries needed to compile C or C++ applications that directly interact with an EDB Postgres Advanced Server server and the ecpg or ecpgPlus C preprocessor. | | edb-as15-server-doc | Installs the readme file. | | edb-as15-server-edb-modules | Installs supporting modules for EDB Postgres Advanced Server. | | edb-as15-server-indexadvisor | Installs EDB Postgres Advanced Server's Index Advisor feature. The Index Advisor utility helps to determine the columns to index to improve performance in a given workload. | @@ -197,10 +197,8 @@ If you have an existing EDB Postgres Advanced Server RPM installation, you can u ``` !!! Note - The `yum upgrade` or `dnf upgrade` command perform an update only between minor releases; to update between major releases, you must use `pg_upgrade`. + The `yum upgrade` or `dnf upgrade` commands perform an update only between minor releases. To update between major releases, use `pg_upgrade`. For more information about using yum commands and options, enter `yum --help` on your command line. -For more information about using `dnf` commands and options, visit: - - \ No newline at end of file +For more information about using `dnf` commands and options, see the [`dnf` documentation](https://docs.fedoraproject.org/en-US/quick-docs/dnf/). 
\ No newline at end of file diff --git a/product_docs/docs/epas/15/upgrading/03_limitations.mdx b/product_docs/docs/epas/15/upgrading/03_limitations.mdx index 2c280bde749..e12b74444a4 100644 --- a/product_docs/docs/epas/15/upgrading/03_limitations.mdx +++ b/product_docs/docs/epas/15/upgrading/03_limitations.mdx @@ -7,11 +7,11 @@ redirects: Consider the following when upgrading EDB Postgres Advanced Server: -- The `pg_upgrade` utility cannot upgrade a partitioned table if a foreign key refers to the partitioned table. -- If you are upgrading from the version 9.4 server or a lower version of EDB Postgres Advanced Server, and you use partitioned tables that include a `SUBPARTITION BY` clause, you must use `pg_dump` and `pg_restore` to upgrade an existing EDB Postgres Advanced Server installation to a later version of EDB Postgres Advanced Server. To upgrade, you must: +- The `pg_upgrade` utility can't upgrade a partitioned table if a foreign key refers to the partitioned table. +- If you're upgrading from the version 9.4 server or a lower version of EDB Postgres Advanced Server, and you use partitioned tables that include a `SUBPARTITION BY` clause, you must use `pg_dump` and `pg_restore` to upgrade an existing EDB Postgres Advanced Server installation to a later version of EDB Postgres Advanced Server. To upgrade, you must: 1. Use `pg_dump` to preserve the content of the subpartitioned table. 2. Drop the table from the EDB Postgres Advanced Server 9.4 database or a lower version of EDB Postgres Advanced Server database. 3. Use `pg_upgrade` to upgrade the rest of the EDB Postgres Advanced Server database to a more recent version. 4. Use `pg_restore` to restore the subpartitioned table to the latest upgraded EDB Postgres Advanced Server database. - If you perform an upgrade of the EDB Postgres Advanced Server installation, you must rebuild any hash-partitioned table on the upgraded server. 
-- If you are using an ODBC, JDBC, OCI, or .NET driver to connect to your database applications and upgrading to a new major version of EDB Postgres Advanced Server, upgrade your driver to the latest version when upgrading EDB Postgres Advanced Server. +- If you're using an ODBC, JDBC, OCI, or .NET driver to connect to your database applications and upgrading to a new major version of EDB Postgres Advanced Server, upgrade your driver to the latest version when upgrading EDB Postgres Advanced Server. diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx index aca4ba39144..cb3e8a9c39f 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx @@ -6,10 +6,10 @@ redirects: -When invoking `pg_upgrade`, you can use a command-line option to specify whether `pg_upgrade` should *copy* or *link* each table and index in the old cluster to the new cluster. +When invoking `pg_upgrade`, you can use a command-line option to specify whether to copy or link each table and index in the old cluster to the new cluster. -Linking is much faster because `pg_upgrade` simply creates a second name (a hard link) for each file in the cluster; linking also requires no extra workspace because `pg_upgrade` does not make a copy of the original data. When linking the old cluster and the new cluster, the old and new clusters share the data; note that after starting the new cluster, your data can no longer be used with the previous version of EDB Postgres Advanced Server. 
+Linking is much faster because `pg_upgrade` creates a second name (a hard link) for each file in the cluster. Linking also requires no extra workspace because `pg_upgrade` doesn't make a copy of the original data. When linking the old cluster and the new cluster, the old and new clusters share the data. After starting the new cluster, your data can no longer be used with the previous version of EDB Postgres Advanced Server. -If you choose to copy data from the old cluster to the new cluster, `pg_upgrade` still reduces the amount of time required to perform an upgrade compared to the traditional `dump/restore` procedure. `pg_upgrade` uses a file-at-a-time mechanism to copy data files from the old cluster to the new cluster (versus the row-by-row mechanism used by `dump/restore`). When you use `pg_upgrade`, you avoid building indexes in the new cluster; each index is simply copied from the old cluster to the new cluster. Finally, using a `dump/restore` procedure to upgrade requires a great deal of workspace to hold the intermediate text-based dump of all of your data, while `pg_upgrade` requires very little extra workspace. +If you choose to copy data from the old cluster to the new cluster, `pg_upgrade` still reduces the amount of time required to perform an upgrade compared to the traditional `dump/restore` procedure. `pg_upgrade` uses a file-at-a-time mechanism to copy data files from the old cluster to the new cluster versus the row-by-row mechanism used by `dump/restore`. When you use `pg_upgrade`, you avoid building indexes in the new cluster. Each index is instead copied from the old cluster to the new cluster. Finally, using a `dump/restore` procedure to upgrade requires a great deal of workspace to hold the intermediate text-based dump of all of your data, while `pg_upgrade` requires very little extra workspace. 
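The hard-link behavior described above can be illustrated with plain shell commands. This is only a sketch of link-versus-copy semantics using made-up file names, not `pg_upgrade` itself, which applies the same idea to every data file in the cluster:

```shell
# Sketch: how a hard link (what --link creates per data file) differs from a copy.
workdir=$(mktemp -d)
echo "relation data" > "$workdir/old_file"

ln "$workdir/old_file" "$workdir/linked_file"   # second name for the same file
cp "$workdir/old_file" "$workdir/copied_file"   # independent duplicate

# The link shares the original's inode; the copy gets its own.
[ "$(stat -c %i "$workdir/old_file")" = "$(stat -c %i "$workdir/linked_file")" ] && echo "link shares inode"
[ "$(stat -c %i "$workdir/old_file")" != "$(stat -c %i "$workdir/copied_file")" ] && echo "copy is independent"

rm -rf "$workdir"
```

Because the linked files are shared, changes made through the new cluster also affect the files the old cluster expects, which is why the old cluster can't be used once the new one starts.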
-Data that is stored in user-defined tablespaces is not copied to the new cluster; it stays in the same location in the file system, but is copied into a subdirectory whose name reflects the version number of the new cluster. To manually relocate files that are stored in a tablespace after upgrading, move the files to the new location and update the symbolic links (located in the `pg_tblspc` directory under your cluster's `data` directory) to point to the files. +Data that's stored in user-defined tablespaces isn't copied to the new cluster. It stays in the same location in the file system but is copied into a subdirectory whose name shows the version number of the new cluster. To manually relocate files that are stored in a tablespace after upgrading, move the files to the new location and update the symbolic links to point to the files. The symbolic links are located in the `pg_tblspc` directory under your cluster's `data` directory. diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx index 3910179a4b1..cd66e4ec2e4 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx @@ -6,28 +6,28 @@ redirects: To upgrade an earlier version of EDB Postgres Advanced Server to the current version, you must: -- Install the current version of EDB Postgres Advanced Server. The new installation must contain the same supporting server components as the old installation. -- Empty the target database or create a new target cluster with `initdb`. -- Place the `pg_hba.conf` file for both databases in `trust` authentication mode (to avoid authentication conflicts). 
-- Shut down the old and new EDB Postgres Advanced Server services.
-- Invoke the `pg_upgrade` utility.
+1. Install the current version of EDB Postgres Advanced Server. The new installation must contain the same supporting server components as the old installation.
+1. Empty the target database or create a new target cluster with `initdb`.
+1. To avoid authentication conflicts, place the `pg_hba.conf` file for both databases in `trust` authentication mode.
+1. Shut down the old and new EDB Postgres Advanced Server services.
+1. Invoke the `pg_upgrade` utility.

-When `pg_upgrade` starts, it performs a compatibility check to ensure that all required executables are present and contain the expected version numbers. The verification process also checks the old and new `$PGDATA` directories to ensure that the expected files and subdirectories are in place. If the verification process succeeds, `pg_upgrade` starts the old `postmaster` and runs `pg_dumpall --schema-only` to capture the metadata contained in the old cluster. The script produced by `pg_dumpall` is used in a later step to recreate all user-defined objects in the new cluster.
+When `pg_upgrade` starts, it performs a compatibility check to ensure that all required executables are present and contain the expected version numbers. The verification process also checks the old and new `$PGDATA` directories to ensure that the expected files and subdirectories are in place. If the verification process succeeds, `pg_upgrade` starts the old `postmaster` and runs `pg_dumpall --schema-only` to capture the metadata contained in the old cluster. The script produced by `pg_dumpall` is used later to re-create all user-defined objects in the new cluster.

-Note that the script produced by `pg_dumpall` recreates only user-defined objects and not system-defined objects. The new cluster *already* contains the system-defined objects created by the latest version of EDB Postgres Advanced Server.
+The script produced by `pg_dumpall` re-creates only user-defined objects and not system-defined objects. The new cluster already contains the system-defined objects created by the latest version of EDB Postgres Advanced Server.

 After extracting the metadata from the old cluster, `pg_upgrade` performs the bookkeeping tasks required to sync the new cluster with the existing data.

-`pg_upgrade` runs the `pg_dumpall` script against the new cluster to create (empty) database objects of the same shape and type as those found in the old cluster. Then, `pg_upgrade` links or copies each table and index from the old cluster to the new cluster.
+`pg_upgrade` runs the `pg_dumpall` script against the new cluster to create empty database objects of the same shape and type as those found in the old cluster. Then, `pg_upgrade` links or copies each table and index from the old cluster to the new cluster.

-If you are upgrading to EDB Postgres Advanced Server and have installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, you must drop the extension before performing an upgrade. To drop the extension, connect to the server with the psql or PEM client, and invoke the commands:
+If you're upgrading to EDB Postgres Advanced Server and installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, you must drop the extension before performing an upgrade. To drop the extension, connect to the server with the psql or PEM client, and invoke the commands:

 ```sql
 DROP EXTENSION edb_dblink_oci;
 DROP EXTENSION edb_dblink_libpq;
 ```

-When you have completed upgrading, you can use the `CREATE EXTENSION` command to add the current versions of the extensions to your installation.
+When you finish upgrading, you can use the `CREATE EXTENSION` command to add the current versions of the extensions to your installation.
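After the upgrade completes, the dropped extensions can be reinstalled at their current versions. A sketch, assuming the same two extensions that were dropped above:

```sql
-- Run against the upgraded cluster once pg_upgrade has finished.
CREATE EXTENSION edb_dblink_oci;
CREATE EXTENSION edb_dblink_libpq;
```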
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx index fb66bad267f..f65ff6004fb 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference/ --- -`pg_upgrade` accepts the following command line options; each option is available in a long form or a short form: +`pg_upgrade` accepts the following command line options. Each option is available in a long form or a short form: `-b path_to_old_bin_directory` `--old-bindir path_to_old_bin_directory` @@ -19,7 +19,7 @@ Use the `-B` or `--new-bindir` keyword to specify the location of the new cluste `-c` `--check` -Include the `-c` or `--check` keyword to specify that `pg_upgrade` should perform a consistency check on the old and new cluster without performing a version upgrade. +Include the `-c` or `--check` keyword to specify for `pg_upgrade` to perform a consistency check on the old and new cluster without performing a version upgrade. `-d path_to_old_data_directory` `--old-datadir path_to_old_data_directory` @@ -31,7 +31,7 @@ Use the `-d` or `--old-datadir` keyword to specify the location of the old clust Use the `-D` or `--new-datadir` keyword to specify the location of the new cluster's `data` directory. 
-Data that is stored in user-defined tablespaces is not copied to the new cluster; it stays in the same location in the file system, but is copied into a subdirectory whose name reflects the version number of the new cluster. To manually relocate files that are stored in a tablespace after upgrading, you must move the files to the new location and update the symbolic links (located in the `pg_tblspc` directory under your cluster's `data` directory) to point to the files. +Data that's stored in user-defined tablespaces isn't copied to the new cluster. It stays in the same location in the file system but is copied into a subdirectory whose name reflects the version number of the new cluster. To manually relocate files that are stored in a tablespace after upgrading, you must move the files to the new location and update the symbolic links (located in the `pg_tblspc` directory under your cluster's `data` directory) to point to the files. `-j` `--jobs` @@ -41,17 +41,17 @@ Include the `-j` or `--jobs` keyword to specify the number of simultaneous proce `-k` `--link` -Include the `-k` or `--link` keyword to create a hard link from the new cluster to the old cluster. See [Linking versus Copying](../01_performing_an_upgrade/01_linking_versus_copying/#linking_versus_copying) for more information about using a symbolic link. +Include the `-k` or `--link` keyword to create a hard link from the new cluster to the old cluster. See [Linking versus copying](../01_performing_an_upgrade/01_linking_versus_copying/#linking_versus_copying) for more information about using a symbolic link. `-o options` `--old-options options` -Use the `-o` or `--old-options` keyword to specify options to pass to the old `postgres` command. Enclose options in single or double quotes to ensure that they are passed as a group. +Use the `-o` or `--old-options` keyword to specify options to pass to the old `postgres` command. 
Enclose options in single or double quotes to ensure that they're passed as a group.

 `-O options`
 `--new-options options`

-Use the `-O` or `--new-options` keyword to specify options to pass to the new `postgres` command. Enclose options in single or double quotes to ensure that they are passed as a group.
+Use the `-O` or `--new-options` keyword to specify options to pass to the new `postgres` command. Enclose options in single or double quotes to ensure that they're passed as a group.

 `-p old_port_number`
 `--old-port old_port_number`
@@ -69,7 +69,7 @@ Include the `-P` or `--new-port` keyword to specify the port number of the new E

 `-r`
 `--retain`

-During the upgrade process, `pg_upgrade` creates four append-only log files; when the upgrade is completed, `pg_upgrade` deletes these files. Include the `-r` or `--retain` option to specify that the server should retain the `pg_upgrade` log files.
+During the upgrade process, `pg_upgrade` creates four append-only log files; when the upgrade is completed, `pg_upgrade` deletes these files. Include the `-r` or `--retain` option to specify for the server to retain the `pg_upgrade` log files.

 `-U user_name`
 `--username user_name`
@@ -90,4 +90,4 @@ Use the `-V` or `--version` keyword to display version information for `pg_upgra

 `-h`
 `--help`

-Use `-?, -h,` or `--help` options to display `pg_upgrade` help information.
+Use `-?`, `-h`, or `--help` options to display `pg_upgrade` help information.
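As an example of combining these options, a dry run that only checks cluster compatibility might look like the following. The paths, ports, and job count are illustrative only; substitute the values for your own installations:

```text
pg_upgrade --check \
  --old-bindir /usr/edb/as14/bin --new-bindir /usr/edb/as15/bin \
  --old-datadir /var/lib/edb/as14/data --new-datadir /var/lib/edb/as15/data \
  --username enterprisedb \
  --old-port 5444 --new-port 5445 \
  --link --jobs 4
```

Because `--check` performs only the consistency check, you can run this form safely before committing to the upgrade.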
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx
index 9df291db6d9..02da5a53207 100644
--- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx
+++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx
@@ -8,43 +8,43 @@ When invoking `pg_upgrade`, you must specify the location of the old and new clu

 ```shell
 pg_upgrade
---old-datadir <path_to_13_data_directory>
---new-datadir <path_to_14_data_directory>
+--old-datadir <path_to_14_data_directory>
+--new-datadir <path_to_15_data_directory>
 --user <superuser_name>
---old-bindir <path_to_13_bin_directory>
---new-bindir <path_to_14_bin_directory>
+--old-bindir <path_to_14_bin_directory>
+--new-bindir <path_to_15_bin_directory>
---old-port <13_port>
---new-port <14_port>
+--old-port <14_port>
+--new-port <15_port>
 ```

 Where:

-`--old-datadir path_to_13_data_directory`
+`--old-datadir path_to_14_data_directory`

-Use the `--old-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 14 installation.
+Use the `--old-datadir` option to specify the complete path to the `data` directory in the EDB Postgres Advanced Server 14 installation.

-`--new-datadir path_to_14_data_directory`
+`--new-datadir path_to_15_data_directory`

-Use the `--new-datadir` option to specify the complete path to the `data` directory within the EDB Postgres Advanced Server 15 installation.
+Use the `--new-datadir` option to specify the complete path to the `data` directory in the EDB Postgres Advanced Server 15 installation.

 `--username superuser_name`

-Include the `--username` option to specify the name of the EDB Postgres Advanced Server superuser. The superuser name should be the same in both versions of EDB Postgres Advanced Server. By default, when EDB Postgres Advanced Server is installed in Oracle mode, the superuser is named `enterprisedb`. If installed in PostgreSQL mode, the superuser is named `postgres`.
+Include the `--username` option to specify the name of the EDB Postgres Advanced Server superuser. The superuser name must be the same in both versions of EDB Postgres Advanced Server. By default, when EDB Postgres Advanced Server is installed in Oracle mode, the superuser is named `enterprisedb`. If installed in PostgreSQL mode, the superuser is named `postgres`. -If the EDB Postgres Advanced Server superuser name is not the same in both clusters, the clusters will not pass the `pg_upgrade` consistency check. +If the EDB Postgres Advanced Server superuser name isn't the same in both clusters, the clusters won't pass the `pg_upgrade` consistency check. -`--old-bindir path_to_13_bin_directory` +`--old-bindir path_to_14_bin_directory` Use the `--old-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 14 installation. -`--new-bindir path_to_14_bin_directory` +`--new-bindir path_to_15_bin_directory` Use the `--new-bindir` option to specify the complete path to the `bin` directory in the EDB Postgres Advanced Server 15 installation. -`--old-port 13_port` +`--old-port 14_port` Include the `--old-port` option to specify the port on which EDB Postgres Advanced Server 14 listens for connections. -`--new-port 14_port` +`--new-port 15_port` Include the `--new-port` option to specify the port on which EDB Postgres Advanced Server 15 listens for connections. 
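Putting these options together, a complete invocation with default EDB install locations might look like this. All values are examples only; adjust the paths and ports to match your environment:

```text
pg_upgrade \
  --old-datadir /var/lib/edb/as14/data \
  --new-datadir /var/lib/edb/as15/data \
  --username enterprisedb \
  --old-bindir /usr/edb/as14/bin \
  --new-bindir /usr/edb/as15/bin \
  --old-port 5444 --new-port 5445
```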
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx index 235f5eb3e90..15d549271f7 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx @@ -6,31 +6,31 @@ redirects: You can use `pg_upgrade` to upgrade from an existing installation of EDB Postgres Advanced Server into the cluster built by the EDB Postgres Advanced Server installer or into an alternate cluster created using the `initdb` command. -The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server installer, but you can omit Step 2 `(Empty the edb database)`, and substitute the location of the alternate cluster when specifying a target cluster for the upgrade. +The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server installer, but you can omit Step 2 (Empty the edb database) and substitute the location of the alternate cluster when specifying a target cluster for the upgrade. -If a problem occurs during the upgrade process, you can revert to the previous version. See [Reverting to the old cluster](06_reverting_to_the_old_cluster/#reverting_to_the_old_cluster) Section for detailed information about this process. +If a problem occurs during the upgrade process, you can revert to the previous version. See [Reverting to the old cluster](06_reverting_to_the_old_cluster/#reverting_to_the_old_cluster) for detailed information about this process. 
You must be an operating system superuser or Windows Administrator to perform an EDB Postgres Advanced Server upgrade. 

-**Step 1 - Install the new server**
+## Step 1 - Install the new server

Install EDB Postgres Advanced Server 15, specifying the same non-server components that were installed during the previous EDB Postgres Advanced Server installation. The new cluster and the old cluster must reside in different directories.

-**Step 2 - Empty the target database**
+## Step 2 - Empty the target database

-The target cluster must not contain any data; you can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server. If you have installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`; if you have installed EDB Postgres Advanced Server in Oracle mode, it creates a database named `postgres` and a database named `edb`.
+The target cluster must not contain any data. You can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server. If you installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`. If you installed EDB Postgres Advanced Server in Oracle mode, it creates a database named `postgres` and a database named `edb`.

The easiest way to empty the target database is to drop the database and then create a new database. Before invoking the `DROP DATABASE` command, you must disconnect any users and halt any services that are currently using the database.

-On Windows, navigate through the `Control Panel` to the `Services` manager; highlight each service in the `Services` list, and select `Stop`.
+On Windows, navigate through the Control Panel to the Services manager. Select each service in the **Services** list, and select **Stop**. 
-On Linux, open a terminal window, assume superuser privileges, and manually stop each service; for example, invoke the following command to stop the pgAgent service: +On Linux, open a terminal window, assume superuser privileges, and manually stop each service. For example, invoke the following command to stop the pgAgent service: ```shell service edb-pgagent-14 stop ``` -After stopping any services that are currently connected to EDB Postgres Advanced Server, you can use the EDB-PSQL command line client to drop and create a database. When the client opens, connect to the `template1` database as the database superuser; if prompted, provide authentication information. Then, use the following command to drop your database: +After stopping any services that are currently connected to EDB Postgres Advanced Server, you can use the EDB-PSQL command line client to drop and create a database. When the client opens, connect to the `template1` database as the database superuser. If prompted, provide authentication information. Then, use the following command to drop your database: ```sql DROP DATABASE ; @@ -44,24 +44,19 @@ Then, create an empty database based on the contents of the `template1` database CREATE DATABASE ; ``` -**Step 3 - Set both servers in trust mode** +## Step 3 - Set both servers in trust mode -During the upgrade process, `pg_upgrade` connects to the old and new servers several times; to make the connection process easier, you can edit the `pg_hba.conf` file, setting the authentication mode to `trust`. To modify the `pg_hba.conf` file, navigate through the `Start` menu to the `EDB Postgres` menu; to the `EDB Postgres Advanced Server` menu, and open the `Expert Configuration` menu; select the `Edit pg_hba.conf` menu option to open the `pg_hba.conf` file. +During the upgrade process, `pg_upgrade` connects to the old and new servers several times. 
To make the connection process easier, you can edit the `pg_hba.conf` file, setting the authentication mode to `trust`. To modify the `pg_hba.conf` file, from the Start menu, select **EDB Postgres > EDB Postgres Advanced Server > Expert Configuration**. Select **Edit pg_hba.conf** to open the `pg_hba.conf` file. -You must allow trust authentication for the previous EDB Postgres Advanced Server installation, and EDB Postgres Advanced Server servers. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the following figure. +You must allow trust authentication for the previous EDB Postgres Advanced Server installation and EDB Postgres Advanced Server servers. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the following figure. ![Configuring EDB Postgres Advanced Server to use trust authentication.](../images/configuring_advanced_server_to_use_trust_authentication.png) -
Fig. 1: Configuring EDB Postgres Advanced Server to use trust authentication
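The trust configuration shown in the figure amounts to a few `pg_hba.conf` entries like the following sketch. The addresses are assumptions for a local-only upgrade, and the `local` line applies only on Linux; restore your original authentication settings once the upgrade finishes.

```text
# TYPE  DATABASE        USER            ADDRESS                 METHOD
local   all             all                                     trust
host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust
```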
+After editing each file, save the file, and exit the editor. +If the system is required to maintain `md5` authentication mode during the upgrade process, you can specify user passwords for the database superuser in a password file (`pgpass.conf` on Windows, `.pgpass` on Linux). For more information about configuring a password file, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/libpq-pgpass.html). -After editing each file, save the file and exit the editor. - -If the system is required to maintain `md5` authentication mode during the upgrade process, you can specify user passwords for the database superuser in a password file (`pgpass.conf` on Windows, `.pgpass` on Linux). For more information about configuring a password file, see the PostgreSQL Core Documentation, available at: - - - -**Step 4 - Stop all component services and servers** +## Step 4 - Stop all component services and servers Before you invoke `pg_upgrade`, you must stop any services that belong to the original EDB Postgres Advanced Server installation, EDB Postgres Advanced Server, or the supporting components. This ensures that a service doesn't attempt to access either cluster during the upgrade process. @@ -92,17 +87,17 @@ The services that are most likely to be running in your installation are: | EDB Replication Server v6.x | edb-xdbpubserver | Publication Service for xDB Replication Server | | EDB Subscription Server v6.x | edb-xdbsubserver | Subscription Service for xDB Replication Server | -**To stop a service on Windows:** +### To stop a service on Windows -Open the `Services` applet; highlight each EDB Postgres Advanced Server or supporting component service displayed in the list, and select `Stop`. +Open the Services applet. Select each EDB Postgres Advanced Server or supporting component service displayed in the list, and select **Stop**. 
-**To stop a service on Linux:** +### To stop a service on Linux Open a terminal window and manually stop each service at the command line. -**Step 5 For Linux only - Assume the identity of the cluster owner** +## Step 5 For Linux only - Assume the identity of the cluster owner -If you are using Linux, assume the identity of the EDB Postgres Advanced Server cluster owner. (The following example assumes EDB Postgres Advanced Server was installed in the default, compatibility with Oracle database mode, thus assigning `enterprisedb` as the cluster owner. If installed in compatibility with PostgreSQL database mode, `postgres` is the cluster owner.) +If you're using Linux, assume the identity of the EDB Postgres Advanced Server cluster owner. The following example assumes EDB Postgres Advanced Server was installed in the default, compatibility-with-Oracle database mode, assigning `enterprisedb` as the cluster owner. If installed in compatibility-with-PostgreSQL database mode, `postgres` is the cluster owner. ```shell su - enterprisedb @@ -120,57 +115,49 @@ During the upgrade process, `pg_upgrade` writes a file to the current working di cd /tmp ``` -Proceed to Step 6. +## Step 5 For Windows only - Assume the identity of the cluster owner -**Step 5 For Windows only - Assume the identity of the cluster owner** +If you're using Windows, open a terminal window, assume the identity of the EDB Postgres Advanced Server cluster owner, and set the path to the `pg_upgrade` executable. -If you are using Windows, open a terminal window, assume the identity of the EDB Postgres Advanced Server cluster owner and set the path to the `pg_upgrade` executable. - -If the `--serviceaccount service_account_user` parameter was specified during the initial installation of EDB Postgres Advanced Server, then `service_account_user` is the EDB Postgres Advanced Server cluster owner and is the user to be given with the `RUNAS` command. 
+If the `--serviceaccount service_account_user` parameter was specified during the initial installation of EDB Postgres Advanced Server, then `service_account_user` is the EDB Postgres Advanced Server cluster owner and is the user to give with the `RUNAS` command. ```sql RUNAS /USER:service_account_user "CMD.EXE" SET PATH=%PATH%;C:\Program Files\edb\as14\bin ``` -During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user; you must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. After performing the above commands, navigate to a directory in which the service account user has sufficient privileges to write a file. +During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user. You must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. After performing the above commands, navigate to a directory in which the service account user has privileges to write a file: ```shell cd %TEMP% ``` -Proceed to Step 6. - -If the `--serviceaccount` parameter was omitted during the initial installation of EDB Postgres Advanced Server, then the default owner of the EDB Postgres Advanced Server service and the database cluster is `NT AUTHORITY\NetworkService`. +If you omitted the `--serviceaccount` parameter during the initial installation of EDB Postgres Advanced Server, then the default owner of the EDB Postgres Advanced Server service and the database cluster is `NT AUTHORITY\NetworkService`. -When `NT AUTHORITY\NetworkService` is the service account user, the `RUNAS` command may not be usable as it prompts for a password and the `NT AUTHORITY\NetworkService` account is not assigned a password. Thus, there is typically a failure with an error message such as, “Unable to acquire user password”. 
+When `NT AUTHORITY\NetworkService` is the service account user, the `RUNAS` command might not be usable. It prompts for a password, and the `NT AUTHORITY\NetworkService` account isn't assigned a password. Thus, there's typically a failure with an error message such as “Unable to acquire user password.” -Under this circumstance a Windows utility program named `PsExec` must be used to run `CMD.EXE` as the service account `NT AUTHORITY\NetworkService`. +Under this circumstance, you must use a Windows utility program named `PsExec` to run `CMD.EXE` as the service account `NT AUTHORITY\NetworkService`. -The `PsExec` program must be obtained by downloading `PsTools`, which is available at the following site: +Obtain the `PsExec` program by downloading `PsTools`, which is available at the [Microsoft site](https://technet.microsoft.com/en-us/sysinternals/bb897553.aspx). -. - -You can then use the following command to run `CMD.EXE` as `NT AUTHORITY\NetworkService`, and then set the path to the `pg_upgrade` executable. +You can then use the following command to run `CMD.EXE` as `NT AUTHORITY\NetworkService`. Then set the path to the `pg_upgrade` executable: ```shell psexec.exe -u "NT AUTHORITY\NetworkService" CMD.EXE SET PATH=%PATH%;C:\Program Files\edb\as14\bin ``` -During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user; you must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. After performing the above commands, navigate to a directory in which the service account user has sufficient privileges to write a file. +During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user. You must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. 
After performing the above commands, navigate to a directory in which the service account user has privileges to write a file:

```shell
cd %TEMP%
```

-Proceed with Step 6.
-
-**Step 6 - Perform a consistency check**
+## Step 6 - Perform a consistency check

-Before attempting an upgrade, perform a consistency check to assure that the old and new clusters are compatible and properly configured. Include the `--check` option to instruct `pg_upgrade` to perform the consistency check.
+Before attempting an upgrade, perform a consistency check to ensure that the old and new clusters are compatible and properly configured. Include the `--check` option to instruct `pg_upgrade` to perform the consistency check.

-The following example demonstrates invoking `pg_upgrade` to perform a consistency check on Linux:
+This example shows invoking `pg_upgrade` to perform a consistency check on Linux:

```shell
pg_upgrade -d /var/lib/edb/as13/data
@@ -180,7 +167,7 @@ pg_upgrade -d /var/lib/edb/as13/data

If the command is successful, it returns `*Clusters are compatible*`.

-If you are using Windows, you must quote any directory names that contain a space:
+If you're using Windows, you must quote any directory names that contain a space:

```shell
pg_upgrade.exe
@@ -190,13 +177,13 @@ pg_upgrade.exe
-B "C:\Program Files\edb\as14\bin" -p 5444 -P 5445 --check
```

-During the consistency checking process, `pg_upgrade` logs any discrepancies that it finds to a file located in the directory from which `pg_upgrade` was invoked. When the consistency check completes, review the file to identify any missing components or upgrade conflicts. You must resolve any conflicts before invoking `pg_upgrade` to perform a version upgrade.
+During the consistency checking process, `pg_upgrade` logs any discrepancies that it finds to a file located in the directory from which you invoked `pg_upgrade`. When the consistency check completes, review the file to identify any missing components or upgrade conflicts. 
You must resolve any conflicts before invoking `pg_upgrade` to perform a version upgrade.

-If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to add the component that contains the component. Before using StackBuilder Plus, you must restart the EDB Postgres Advanced Server service. After restarting the service, open StackBuilder Plus by navigating through the `Start` menu to the `EDB Postgres Advanced Server 15` menu, and selecting `StackBuilder Plus`. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components.
+If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to add the module that contains the missing component. Before using StackBuilder Plus, you must restart the EDB Postgres Advanced Server service. After restarting the service, open StackBuilder Plus by selecting **Start > EDB Postgres Advanced Server 15 > StackBuilder Plus**. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components.

-When `pg_upgrade` has confirmed that the clusters are compatible, you can perform a version upgrade.
+After `pg_upgrade` confirms that the clusters are compatible, you can perform a version upgrade.

-**Step 7 - Run pg_upgrade**
+## Step 7 - Run pg_upgrade

After confirming that the clusters are compatible, you can invoke `pg_upgrade` to upgrade the old cluster to the new version of EDB Postgres Advanced Server.

@@ -235,9 +222,12 @@ Checking for presence of required libraries ok
Checking database user is a superuser ok
Checking for prepared transactions ok

-If pg_upgrade fails after this point, you must re-initdb the
-new cluster before continuing.
+```
+If `pg_upgrade` fails after this point, you must re-initdb the
+new cluster before continuing. 
Otherwise, it continues as follows:
+
+```shell
Performing Upgrade
------------------
Analyzing all rows in the new cluster ok
@@ -267,7 +257,7 @@ Running this script will delete the old cluster's data files:
delete_old_cluster.sh
```

-While `pg_upgrade` runs, it may generate SQL scripts that handle special circumstances that it has encountered during your upgrade. For example, if the old cluster contains large objects, you may need to invoke a script that defines the default permissions for the objects in the new cluster. When performing the pre-upgrade consistency check `pg_upgrade` alerts you to any script that you may be required to run manually.
+While `pg_upgrade` runs, it might generate SQL scripts that handle special circumstances that it encountered during your upgrade. For example, if the old cluster contains large objects, you might need to invoke a script that defines the default permissions for the objects in the new cluster. When performing the pre-upgrade consistency check, `pg_upgrade` alerts you to any script that you might need to run manually.

You must invoke the scripts after `pg_upgrade` completes. To invoke the scripts, connect to the new cluster as a database superuser with the EDB-PSQL command line client, and invoke each script using the `\i` option:

@@ -275,26 +265,26 @@ You must invoke the scripts after `pg_upgrade` completes. To invoke the scripts,
\i complete_path_to_script/script.sql
```

-It is generally unsafe to access tables referenced in rebuild scripts until the rebuild scripts have completed; accessing the tables could yield incorrect results or poor performance. Tables not referenced in rebuild scripts can be accessed immediately.
+It's generally unsafe to access tables referenced in rebuild scripts until the rebuild scripts finish. Accessing the tables might yield incorrect results or poor performance. You can access tables not referenced in rebuild scripts immediately. 
-If `pg_upgrade` fails to complete the upgrade process, the old cluster is unchanged, except that `$PGDATA/global/pg_control` is renamed to `pg_control.old` and each tablespace is renamed to `tablespace.old`. To revert to the pre-invocation state: +If `pg_upgrade` fails to complete the upgrade process, the old cluster is unchanged, except that `$PGDATA/global/pg_control` is renamed to `pg_control.old`, and each tablespace is renamed to `tablespace.old`. To revert to the pre-invocation state: 1. Delete any tablespace directories created by the new cluster. 2. Rename `$PGDATA/global/pg_control`, removing the `.old` suffix. 3. Rename the old cluster tablespace directory names, removing the `.old` suffix. -4. Remove any database objects (from the new cluster) that may have been moved before the upgrade failed. +4. Remove any database objects from the new cluster that were moved before the upgrade failed. -After performing these steps, resolve any upgrade conflicts encountered before attempting the upgrade again. +Then, resolve any upgrade conflicts encountered and try the upgrade again. -When the upgrade is complete, `pg_upgrade` may also recommend vacuuming the new cluster and provides a script that allows you to delete the old cluster. +When the upgrade is complete, `pg_upgrade` might also recommend vacuuming the new cluster. It provides a script that allows you to delete the old cluster. !!! Note - Before removing the old cluster, ensure that the cluster has been upgraded as expected, and that you have preserved a backup of the cluster in case you need to revert to a previous version. + Before removing the old cluster, ensure that the cluster was upgraded as expected and that you have a backup of the cluster in case you need to revert to a previous version. 
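The rename-based revert boils down to a few file operations. This sketch, a demonstration only, replays them against a throwaway directory tree created on the spot; `PGDATA` and the tablespace path here are stand-ins, not your real cluster locations.

```shell
# Demonstrate the revert renames against a scratch tree (not a real cluster).
PGDATA=$(mktemp -d)/data
TBLSPC=$(mktemp -d)/my_tablespace

# Mimic the state left behind by a failed pg_upgrade run.
mkdir -p "$PGDATA/global" "${TBLSPC}.old"
touch "$PGDATA/global/pg_control.old"

# Restore pg_control by removing the .old suffix.
mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"

# Restore each tablespace directory name the same way.
mv "${TBLSPC}.old" "$TBLSPC"

ls "$PGDATA/global"
```

On a real cluster, repeat the tablespace rename for every tablespace directory, and remember to delete any tablespace directories the new cluster created first.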
-**Step 8 - Restore the authentication settings in the pg_hba.conf file** +## Step 8 - Restore the authentication settings in the pg_hba.conf file If you modified the `pg_hba.conf` file to permit `trust` authentication, update the contents of the `pg_hba.conf` file to reflect your preferred authentication settings. -**Step 9 - Move and identify user-defined tablespaces (Optional)** +## Step 9 - Move and identify user-defined tablespaces (optional) -If you have data stored in a user-defined tablespace, you must manually relocate tablespace files after upgrading; move the files to the new location and update the symbolic links (located in the `pg_tblspc` directory under your cluster's `data` directory) to point to the files. +If you have data stored in a user-defined tablespace, you must manually relocate tablespace files after upgrading. Move the files to the new location and update the symbolic links to point to the files. The symbolic links are located in the `pg_tblspc` directory under your cluster's `data` directory. diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx index 4261ac05a49..d27bce0e21b 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation.mdx @@ -4,9 +4,9 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/04_upgrading_a_pgAgent_installation/ --- -If your existing EDB Postgres Advanced Server installation uses pgAgent, you can use a script provided with the EDB Postgres Advanced Server installer to update pgAgent. 
The script is named `dbms_job.upgrade.script.sql`, and is located in the `/share/contrib/` directory under your EDB Postgres Advanced Server installation. +If your existing EDB Postgres Advanced Server installation uses pgAgent, you can use a script provided with the EDB Postgres Advanced Server installer to update pgAgent. The script is named `dbms_job.upgrade.script.sql` and is located in the `/share/contrib/` directory under your EDB Postgres Advanced Server installation. -If you are using `pg_upgrade` to upgrade your installation, you should: +If you're using `pg_upgrade` to upgrade your installation: 1. Perform the upgrade. 2. Invoke the `dbms_job.upgrade.script.sql` script to update the catalog files. If your existing pgAgent installation was performed with a script, the update converts the installation to an extension. diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx index f46fe6b0c9c..dade9bc417e 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/05_pg_upgrade_troubleshooting/ --- -The troubleshooting tips in this section address problems you may encounter when using `pg_upgrade`. +These troubleshooting tips address problems you might encounter when using `pg_upgrade`. 
## Upgrade Error - There seems to be a postmaster servicing the cluster @@ -12,22 +12,22 @@ If `pg_upgrade` reports that a postmaster is servicing the cluster, stop all EDB ## Upgrade Error - fe_sendauth: no password supplied -If `pg_upgrade` reports an authentication error that references a missing password, modify the `pg_hba.conf` files in the old and new cluster to enable `trust` authentication, or configure the system to use a `pgpass.conf` file. +If `pg_upgrade` reports an authentication error that references a missing password, modify the `pg_hba.conf` files in the old and new cluster to enable `trust` authentication, or configure the system to use a `pgpass.conf` file. ## Upgrade Error - New cluster is not empty; exiting -If `pg_upgrade` reports that the new cluster is not empty, empty the new cluster. The target cluster may not contain any user-defined databases. +If `pg_upgrade` reports that the new cluster isn't empty, empty the new cluster. The target cluster might not contain any user-defined databases. ## Upgrade Error - Failed to load library -If the original EDB Postgres Advanced Server cluster included libraries that are not included in the EDB Postgres Advanced Server cluster, `pg_upgrade` alerts you to the missing component during the consistency check by writing an entry to the `loadable_libraries.txt` file in the directory from which you invoked `pg_upgrade`. Generally, for missing libraries that are not part of a major component upgrade, perform the following steps: +If the original EDB Postgres Advanced Server cluster included libraries that aren't included in the EDB Postgres Advanced Server cluster, `pg_upgrade` alerts you to the missing component during the consistency check by writing an entry to the `loadable_libraries.txt` file in the directory from which you invoked `pg_upgrade`. Generally, for missing libraries that aren't part of a major component upgrade: 1. Restart the EDB Postgres Advanced Server service. 
- Use StackBuilder Plus to download and install the missing module. Then:
+2. Use StackBuilder Plus to download and install the missing module.

-2. Stop the EDB Postgres Advanced Server service.
+3. Stop the EDB Postgres Advanced Server service.

-3. Resume the upgrade process: invoke `pg_upgrade` to perform consistency checking.
+4. Resume the upgrade process. Invoke `pg_upgrade` to perform consistency checking.

-4. When you have resolved any remaining problems noted in the consistency checks, invoke `pg_upgrade` to perform the data migration from the old cluster to the new cluster.
+5. After you resolve any remaining problems noted in the consistency checks, invoke `pg_upgrade` to perform the data migration from the old cluster to the new cluster.
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx
index 3f6312a0feb..024f334af90 100644
--- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx
+++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx
@@ -8,7 +8,7 @@ redirects:

The method used to revert to a previous cluster varies with the options specified when invoking `pg_upgrade`:

-- If you specified the `--check` option when invoking `pg_upgrade`, an upgrade has not been performed, and no modifications have been made to the old cluster; you can re-use the old cluster at any time.
-- If you included the `--link` option when invoking `pg_upgrade`, the data files are shared between the old and new cluster after the upgrade completes. If you have started the server that is servicing the new cluster, the new server has written to those shared files and it is unsafe to use the old cluster. 
-- If you ran `pg_upgrade` without the `--link` specification or have not started the new server, the old cluster is unchanged, except that the `.old` suffix has been appended to the `$PGDATA/global/pg_control` and tablespace directories. -- To reuse the old cluster, delete the tablespace directories created by the new cluster and remove the `.old` suffix from `$PGDATA/global/pg_control` and the old cluster tablespace directory names and restart the server that services the old cluster. +- If you specified the `--check` option when invoking `pg_upgrade`, an upgrade wasn't performed and no modifications were made to the old cluster. You can reuse the old cluster at any time. +- If you included the `--link` option when invoking `pg_upgrade`, the data files are shared between the old and new cluster after the upgrade completes. If you started the server that's servicing the new cluster, the new server wrote to those shared files and it's unsafe to use the old cluster. +- If you ran `pg_upgrade` without the `--link` specification or haven't started the new server, the old cluster is unchanged, except that the `.old` suffix was appended to the `$PGDATA/global/pg_control` and tablespace directories. +- To reuse the old cluster, delete the tablespace directories created by the new cluster and remove the `.old` suffix from `$PGDATA/global/pg_control` and the old cluster tablespace directory names. Restart the server that services the old cluster. 
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx index 1d70ff60ece..5d4909ce2b4 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx @@ -6,11 +6,12 @@ redirects: -While minor upgrades between versions are fairly simple and require only the installation of new executables, past major version upgrades has been both expensive and time consuming. `pg_upgrade` facilitates migration between any version of EDB Postgres Advanced Server (version 9.0 or later), and any subsequent release of EDB Postgres Advanced Server that is supported on the same platform. +While minor upgrades between versions are fairly simple and require only installing new executables, past major version upgrades were both expensive and time consuming. `pg_upgrade` facilitates migration between any version of EDB Postgres Advanced Server (version 9.0 or later) and any subsequent release of EDB Postgres Advanced Server that's supported on the same platform. Without `pg_upgrade`, to migrate from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15, you must export all of your data using `pg_dump`, install the new release, run `initdb` to create a new cluster, and then import your old data. -*pg_upgrade can reduce both the amount of time required and the disk space required for many major-version upgrades.* +!!! Note + `pg_upgrade` can reduce both the amount of time and the disk space required for many major-version upgrades. The `pg_upgrade` utility performs an in-place transfer of existing data between EDB Postgres Advanced Server and any subsequent version. 
@@ -18,13 +19,13 @@ Several factors determine if an in-place upgrade is practical: - The on-disk representation of user-defined tables must not change between the original version and the upgraded version. - The on-disk representation of data types must not change between the original version and the upgraded version. -- To upgrade between major versions of EDB Postgres Advanced Server with `pg_upgrade`, both versions must share a common binary representation for each data type. Therefore, you cannot use `pg_upgrade` to migrate from a 32-bit to a 64-bit Linux platform. +- To upgrade between major versions of EDB Postgres Advanced Server with `pg_upgrade`, both versions must share a common binary representation for each data type. Therefore, you can't use `pg_upgrade` to migrate from a 32-bit to a 64-bit Linux platform. -Before performing a version upgrade, `pg_upgrade` verifies that the two clusters (the old cluster and the new cluster) are compatible. +Before performing a version upgrade, `pg_upgrade` verifies that the old cluster and the new cluster are compatible. -If the upgrade involves a change in the on-disk representation of database objects or data, or involves a change in the binary representation of data types, `pg_upgrade` can't perform the upgrade; to upgrade, you have to `pg_dump` the old data and then import that data into the new cluster. +If the upgrade involves a change in the on-disk representation of database objects or data or involves a change in the binary representation of data types, `pg_upgrade` can't perform the upgrade. To upgrade, you have to `pg_dump` the old data and then import that data to the new cluster. -The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server, and is installed as part of the `Database Server` component; no additional installation or configuration steps are required. 
+The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server and is installed as part of the Database Server component. No additional installation or configuration steps are required.
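The compatibility verification described in the hunk above can be exercised before committing to an upgrade. The following sketch shows what an invocation might look like; the binary and data directory paths are illustrative assumptions, not values taken from this patch, so substitute your own clusters' locations.

```shell
# Hypothetical locations of the old and new clusters -- adjust for your system.
OLD_BIN=/usr/edb/as14/bin
NEW_BIN=/usr/edb/as15/bin
OLD_DATA=/var/lib/edb/as14/data
NEW_DATA=/var/lib/edb/as15/data

# Dry run: verify the old and new clusters are compatible without changing anything.
"$NEW_BIN/pg_upgrade" -b "$OLD_BIN" -B "$NEW_BIN" -d "$OLD_DATA" -D "$NEW_DATA" --check

# If the check passes, perform the upgrade. Adding --link hard-links files
# instead of copying them, trading disk space for a one-way migration.
"$NEW_BIN/pg_upgrade" -b "$OLD_BIN" -B "$NEW_BIN" -d "$OLD_DATA" -D "$NEW_DATA"
```

Running with `--check` first leaves both clusters untouched, so a failed compatibility report costs nothing.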
diff --git a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx index f97fcf7e6d1..0fd7f1dcfd1 100644 --- a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx +++ b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx @@ -16,15 +16,14 @@ Where `package_name` is the search term for which you want to search for updates yum update ``` -Where `package_name` is the name of the package you wish to update. Include wild-card values in the update command to update multiple related packages with a single command. For example, use the following command to update all packages whose names include the expression `edb`: +Where `package_name` is the name of the package you want to update. Include wildcard values in the update command to update multiple related packages with a single command. For example, use the following command to update all packages whose names include the expression `edb`: ```shell yum update edb* ``` !!! Note - The `yum update` command performs an update only between minor releases; to update between major releases, you must use `pg_upgrade`. - -For more information about using yum commands and options, enter `yum --help` on your command line. + The `yum update` command performs an update only between minor releases. To update between major releases, use `pg_upgrade`. +For more information about using yum commands and options, enter `yum --help` at the command line. 
diff --git a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx index a2973b7cad0..12eca3c3d70 100644 --- a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx +++ b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx @@ -6,59 +6,44 @@ redirects: StackBuilder Plus is supported only on Windows systems. -The StackBuilder Plus utility provides a graphical interface that simplifies the process of updating, downloading, and installing modules that complement your EDB Postgres Advanced Server installation. When you install a module with StackBuilder Plus, StackBuilder Plus automatically resolves any software dependencies. +The StackBuilder Plus utility provides a graphical interface that simplifies the process of updating, downloading, and installing modules that complement your EDB Postgres Advanced Server installation. When you install a module with StackBuilder Plus, StackBuilder Plus resolves any software dependencies. -You can invoke StackBuilder Plus at any time after the installation has completed by selecting the `StackBuilder Plus` menu option from the `Apps` menu. Enter your system password (if prompted), and the StackBuilder Plus welcome window opens. +You can invoke StackBuilder Plus at any time after the installation has completed by selecting **Apps > StackBuilder Plus**. Enter your system password if prompted, and the StackBuilder Plus welcome window opens. ![The StackBuilder Plus welcome window](images/the_stackBuilder_plus_welcome.png) -
Fig. 1: The StackBuilder Plus welcome window
+Select your EDB Postgres Advanced Server installation. +StackBuilder Plus requires internet access. If your installation of EDB Postgres Advanced Server resides behind a firewall (with restricted internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy. Currently, all updates are transferred by an HTTP proxy, and the FTP proxy information isn't used. -Use the drop-down listbox on the welcome window to select your EDB Postgres Advanced Server installation. - -StackBuilder Plus requires Internet access; if your installation of EDB Postgres Advanced Server resides behind a firewall (with restricted Internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy; currently, all updates are transferred via an HTTP proxy and the FTP proxy information is not used. - -If the selected EDB Postgres Advanced Server installation has restricted Internet access, use the `Proxy Servers` on the `Welcome` window to open the `Proxy servers` dialog (shown in the following figure). +If the selected EDB Postgres Advanced Server installation has restricted internet access, on the Welcome screen, select **Proxy Servers** to open the Proxy servers dialog box: ![The Proxy servers dialog](images/the_proxy_servers_dialog.png) -
Fig. 2: The Proxy servers dialog
- +On the dialog box, enter the IP address and port number of the proxy server in the **HTTP proxy** box. Currently, all StackBuilder Plus modules are distributed by HTTP proxy (FTP proxy information is ignored). -Enter the IP address and port number of the proxy server in the `HTTP proxy` on the `Proxy servers` dialog. Currently, all StackBuilder Plus modules are distributed via HTTP proxy (FTP proxy information is ignored). Click `OK` to continue. +Select **OK**. ![The StackBuilder Plus module selection window](images/the_stackBuilder_plus_module_selection_window.png) -
Fig. 3: The StackBuilder Plus module selection window
+The tree control on the StackBuilder Plus module selection window displays a node for each module category. +To add a component to the selected EDB Postgres Advanced Server installation or to upgrade a component, select the box to the left of the module name and select **Next**. -The tree control on the StackBuilder Plus module selection window (shown in the figure) displays a node for each module category. - -To add a new component to the selected EDB Postgres Advanced Server installation or to upgrade a component, check the box to the left of the module name and click `Next`. If prompted, enter your email address and password on the StackBuilder Plus registration window. +If prompted, enter your email address and password on the StackBuilder Plus registration window. ![A summary window displays a list of selected packages](images/selected_packages_summary_window.png) -
Fig. 4: A summary window displays a list of selected packages
- +StackBuilder Plus confirms the packages selected. The Selected packages dialog box displays the name and version of the installer. Select **Next**. -StackBuilder Plus confirms the packages selected. The `Selected packages` dialog displays the name and version of the installer; click `Next` to continue. - -When the download completes, a window opens that confirms the installation files have been downloaded and are ready for installation. +When the download completes, a window opens that confirms the installation files were downloaded and are ready for installation. ![Confirmation that the download process is complete](images/download_complete_confirmation.png) -
Fig. 5: Confirmation that the download process is complete
- - -You can check the box next to `Skip Installation`, and select `Next` to exit StackBuilder Plus without installing the downloaded files, or leave the box unchecked and click `Next` to start the installation process. +Leave the **Skip Installation** check box cleared and select **Next** to start the installation process. (Select the check box and select **Next** to exit StackBuilder Plus without installing the downloaded files.) ![StackBuilder Plus confirms the completed installation](images/stackBuilder_plus_confirms_the_completed_installation.png) -
Fig. 6: StackBuilder Plus confirms the completed installation
- - -When the upgrade is complete, StackBuilder Plus alerts you to the success or failure of the installation of the requested package. If you were prompted by an installer to restart your computer, reboot now. +When the upgrade is complete, StackBuilder Plus alerts you to the success or failure of the installation of the requested package. If you were prompted by an installer to restart your computer, restart now. -!!! Note - If the update fails to install, StackBuilder Plus alerts you to the installation error with a popup dialog and writes a message to the log file at `%TEMP%`. +If the update fails to install, StackBuilder Plus alerts you to the installation error and writes a message to the log file at `%TEMP%`. diff --git a/product_docs/docs/epas/15/upgrading/index.mdx b/product_docs/docs/epas/15/upgrading/index.mdx index f6cf99e8e64..94de0ce35d1 100644 --- a/product_docs/docs/epas/15/upgrading/index.mdx +++ b/product_docs/docs/epas/15/upgrading/index.mdx @@ -5,7 +5,7 @@ redirects: - /epas/latest/epas_upgrade_guide/ --- -This section provides information about upgrading EDB Postgres Advanced Server, including: +Upgrading EDB Postgres Advanced Server involves the following: - `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15. - `yum` to perform a minor version upgrade on a Linux host. 
From 5e424b9e0c7512bc4b3c420bb8ad764a74804e44 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 23 Feb 2023 17:13:09 -0500 Subject: [PATCH 05/50] returned some deleted words to security doc --- .../03_common_maintenance_operations.mdx | 4 ++-- .../04_backing_up_restoring_sql_protect.mdx | 10 +++++----- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx index be43c319694..8f97578f71b 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/03_common_maintenance_operations.mdx @@ -38,7 +38,7 @@ Removing a role using these functions also removes the role’s protected relati To delete the statistics for a role that was removed, use the [drop_stats function](#drop_stats). -To delete the queries for a role that was removed, use the [drop_queries function](#drop_queries). +To delete the offending queries for a role that was removed, use the [drop_queries function](#drop_queries). This example shows the `unprotect_role` function: @@ -202,7 +202,7 @@ drop_queries('rolename') drop_queries(roleoid) ``` -The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s offending queries using `drop_queries('rolename')`. If a query on `edb_sql_protect_queries` returns a value such as `unknown (OID=16454)` for the user name, use the `drop_queries(roleoid)` form of the function to remove the deleted role’s queries from `edb_sql_protect_queries`. 
+The variation of the function using the OID is useful if you remove the role using the `DROP ROLE` or `DROP USER` SQL statement before deleting the role’s offending queries using `drop_queries('rolename')`. If a query on `edb_sql_protect_queries` returns a value such as `unknown (OID=16454)` for the user name, use the `drop_queries(roleoid)` form of the function to remove the deleted role’s offending queries from `edb_sql_protect_queries`. This example shows the `drop_queries` function: diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx index 709f9840910..c2ba6ce3ba4 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx @@ -98,9 +98,9 @@ newdb=# DELETE FROM sqlprotect.edb_sql_protect; DELETE 1 ``` -4. Delete any statistics that exist for the database. +4. Delete any of the database's statistics. - This deletion removes any existing statistics that exist for the database to which you're restoring the backup. The following query displays any existing statistics: + This deletion removes any existing statistics for the database to which you're restoring the backup. The following query displays any existing statistics: ```sql newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats; @@ -122,9 +122,9 @@ __OUTPUT__ (1 row) ``` -5. Delete any outdated queries that exist for the database. +5. Delete any of the database's offending queries. - This deletion removes any existing queries that exist for the database to which you're restoring the backup. 
This query displays any existing queries: + This deletion removes any existing queries for the database to which you're restoring the backup. This query displays any existing queries: ```sql edb=# SELECT * FROM sqlprotect.edb_sql_protect_queries; @@ -144,7 +144,7 @@ __OUTPUT__ (1 row) ``` -6. Make sure the role names that were protected by SQL/Protect in the original database exist in the database server where the new database resides. +6. Make sure the role names that were protected by SQL/Protect in the original database are in the database server where the new database resides. If the original and new databases reside in the same database server, then you don't need to do anything if you didn't delete any of these roles from the database server. From 2c4267a402a827bafc8b4ccfb2625a4f46536819 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 23 Feb 2023 17:16:14 -0500 Subject: [PATCH 06/50] small correction to uninstalling --- product_docs/docs/epas/15/uninstalling/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/15/uninstalling/index.mdx b/product_docs/docs/epas/15/uninstalling/index.mdx index 81efe322677..b91c9a83f58 100644 --- a/product_docs/docs/epas/15/uninstalling/index.mdx +++ b/product_docs/docs/epas/15/uninstalling/index.mdx @@ -5,4 +5,4 @@ redirects: - /epas/latest/uninstalling_epas/ --- -This section provides detailed information about uninstalling the EDB Postgres Advanced server on specific platforms. \ No newline at end of file +You can uninstall the EDB Postgres Advanced Server on specific platforms. 
\ No newline at end of file From f9ca41333b62471233552aaa4d6821ab8e4e1b52 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 28 Feb 2023 16:13:13 +0000 Subject: [PATCH 07/50] Positioning for Limitations section in pgd 4 and 5 --- product_docs/docs/pgd/4/index.mdx | 1 + product_docs/docs/pgd/4/limitations.mdx | 30 +++++++++++++++++++++++++ product_docs/docs/pgd/5/index.mdx | 1 + product_docs/docs/pgd/5/limitations.mdx | 30 +++++++++++++++++++++++++ 4 files changed, 62 insertions(+) create mode 100644 product_docs/docs/pgd/4/limitations.mdx create mode 100644 product_docs/docs/pgd/5/limitations.mdx diff --git a/product_docs/docs/pgd/4/index.mdx b/product_docs/docs/pgd/4/index.mdx index 294b7009a06..9fb52ede765 100644 --- a/product_docs/docs/pgd/4/index.mdx +++ b/product_docs/docs/pgd/4/index.mdx @@ -17,6 +17,7 @@ navigation: - choosing_server - choosing_durability - other_considerations + - limitations - "#Installing" - deployments - upgrades diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx new file mode 100644 index 00000000000..2665f4a5b00 --- /dev/null +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -0,0 +1,30 @@ +--- +title: "Limitations" +--- + +## Using Postgres Distributed for multiple databases on the same instance + +The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation. + +While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. 
The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used. + +Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. + +Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance: + +1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. + +1. Each database needs to be monitored separately, adding overhead. + +1. TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. + +1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. + +1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. + +1. When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. + +1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. 
Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. + +1. CLI and OTEL integration (new with v5) assumes one database. + diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx index 607a2aca3ed..659663bd225 100644 --- a/product_docs/docs/pgd/5/index.mdx +++ b/product_docs/docs/pgd/5/index.mdx @@ -15,6 +15,7 @@ navigation: - choosing_server - deployments - other_considerations + - limitations - "#Installing" - tpa - upgrades diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx new file mode 100644 index 00000000000..2665f4a5b00 --- /dev/null +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -0,0 +1,30 @@ +--- +title: "Limitations" +--- + +## Using Postgres Distributed for multiple databases on the same instance + +The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation. + +While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used. + +Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. 
As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. + +Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance: + +1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. + +1. Each database needs to be monitored separately, adding overhead. + +1. TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. + +1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. + +1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. + +1. When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. + +1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. + +1. CLI and OTEL integration (new with v5) assumes one database. 
+ From c8c708d48e4f115375939921bee29cfd734bcf74 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 28 Feb 2023 13:07:11 -0500 Subject: [PATCH 08/50] Second read of upgrading and uninstall text --- .../epas/15/uninstalling/linux_uninstall.mdx | 17 +++--- .../15/uninstalling/windows_uninstall.mdx | 33 ++++++------ .../01_linking_versus_copying.mdx | 4 +- .../01_performing_an_upgrade/index.mdx | 6 +-- .../01_command_line_options_reference.mdx | 4 +- .../02_invoking_pg_upgrade/index.mdx | 2 +- .../03_upgrading_to_advanced_server.mdx | 53 +++++++++---------- .../06_reverting_to_the_old_cluster.mdx | 8 +-- .../index.mdx | 15 ++++-- ..._version_update_of_an_rpm_installation.mdx | 3 +- ...plus_to_perform_a_minor_version_update.mdx | 39 +++++++------- product_docs/docs/epas/15/upgrading/index.mdx | 4 +- 12 files changed, 95 insertions(+), 93 deletions(-) diff --git a/product_docs/docs/epas/15/uninstalling/linux_uninstall.mdx b/product_docs/docs/epas/15/uninstalling/linux_uninstall.mdx index c7dd5a08b26..c6194f8c63c 100644 --- a/product_docs/docs/epas/15/uninstalling/linux_uninstall.mdx +++ b/product_docs/docs/epas/15/uninstalling/linux_uninstall.mdx @@ -8,21 +8,22 @@ legacyRedirects: - /edb-docs/d/edb-postgres-advanced-server/installation-getting-started/installation-guide/9.6/EDB_Postgres_Advanced_Server_Installation_Guide.1.77.html --- -Note that after uninstalling EDB Postgres Advanced Server, the cluster data files remain intact and the service user persists. You may manually remove the cluster `data` and service user from the system. +!!! Note + After uninstalling EDB Postgres Advanced Server, the cluster data files remain intact, and the service user persists. You can manually remove the cluster `data` and service user from the system. ## Uninstalling on RHEL/OL/AlmaLinux/Rocky Linux -You can use variations of the `rpm, yum` or `dnf` command to remove installed packages. 
Note that removing a package does not damage the EDB Postgres Advanced Server `data` directory. +You can use variations of the `rpm`, `yum`, or `dnf` command to remove installed packages. Removing a package doesn't damage the EDB Postgres Advanced Server `data` directory. -Include the `-e` option when invoking the `rpm` command to remove an installed package; the command syntax is: +Include the `-e` option when invoking the `rpm` command to remove an installed package: ```text rpm -e ``` -Where `package_name` is the name of the package that you would like to remove. +Where `package_name` is the name of the package that you want to remove. -You can use the `yum remove` or `dnf remove` command to remove a package installed by `yum` or `dnf`. To remove a package, open a terminal window, assume superuser privileges, and enter the command: +You can use the `yum remove` or `dnf remove` command to remove a package installed by `yum` or `dnf`. To remove a package, open a terminal window, assume superuser privileges, and enter the appropriate command. - On RHEL or CentOS 7: @@ -38,12 +39,12 @@ You can use the `yum remove` or `dnf remove` command to remove a package install Where `package_name` is the name of the package that you want to remove. -`yum` and RPM don't remove a package that is required by another package. If you attempt to remove a package that satisfies a package dependency, `yum` or RPM provides a warning. +`yum` and `rpm` don't remove a package that's required by another package. If you attempt to remove a package that satisfies a package dependency, `yum` or `rpm` provides a warning. !!! Note - In RHEL or Rocky Linux or AlmaLinux 8, removing a package also removes all its dependencies that are not required by other packages. To override this default behavior of RHEL or Rocky Linux or AlmaLinux 8, you must disable the `clean_requirements_on_remove` parameter in the `/etc/yum.conf` file. 
+ In RHEL or Rocky Linux or AlmaLinux 8, removing a package also removes all its dependencies that aren't required by other packages. To override this default behavior of RHEL or Rocky Linux or AlmaLinux 8, disable the `clean_requirements_on_remove` parameter in the `/etc/yum.conf` file. -To uninstall EDB Postgres Advanced Server and its dependent packages; use the following command: +To uninstall EDB Postgres Advanced Server and its dependent packages, use the appropriate command. - On RHEL or CentOS 7: diff --git a/product_docs/docs/epas/15/uninstalling/windows_uninstall.mdx b/product_docs/docs/epas/15/uninstalling/windows_uninstall.mdx index f1db0a88271..955d647e535 100644 --- a/product_docs/docs/epas/15/uninstalling/windows_uninstall.mdx +++ b/product_docs/docs/epas/15/uninstalling/windows_uninstall.mdx @@ -6,34 +6,33 @@ redirects: - /epas/latest/uninstalling_epas/on_windows/ --- -Note that after uninstalling EDB Postgres Advanced Server, the cluster data files remain intact and the service user persists. You may manually remove the cluster data and service user from the system. +!!! Note + After uninstalling EDB Postgres Advanced Server, the cluster data files remain intact, and the service user persists. You can manually remove the cluster data and service user from the system. ## Using EDB Postgres Advanced Server uninstallers at the command line -The EDB Postgres Advanced Server interactive installer creates an uninstaller that you can use to remove EDB Postgres Advanced Server or components that reside on a Windows host. The uninstaller is created in `C:\Program Files\edb\as15`. To open the uninstaller, assume superuser privileges, navigate into the directory that contains the uninstaller, and enter: +The EDB Postgres Advanced Server interactive installer creates an uninstaller that you can use to remove EDB Postgres Advanced Server or components that reside on a Windows host. The uninstaller is created in `C:\Program Files\edb\as15`. 
-```text -uninstall-edb-as15-server.exe -``` +1. Assume superuser privileges and, in the directory that contains the uninstaller, enter: -The uninstaller opens. + ```text + uninstall-edb-as15-server.exe + ``` -![The EDB Postgres Advanced Server uninstaller](images/advanced_server_uninstaller.png) + The uninstaller opens. -
Fig. 1: The EDB Postgres Advanced Server uninstaller
+ ![The EDB Postgres Advanced Server uninstaller](images/advanced_server_uninstaller.png) -You can remove the `Entire application` (the default), or select the radio button next to `Individual components` to select components for removal; if you select `Individual components`, a dialog prompts you to select the components you wish to remove. After making your selection, click `Next`. +1. By default, the installer removes the entire application. If you instead want to select components to remove, select **Individual components**. A dialog box prompts you to select the components you want to remove. Make your selections. -![Acknowledge that dependent components are removed first](images/acknowledging_components_removed.png) +1. Select **Next**. -
Fig. 2: Acknowledge that dependent components are removed first
+ If you selected components to remove that depend on EDB Postgres Advanced Server, those components are removed first. -If you have elected to remove components that are dependent on EDB Postgres Advanced Server, those components are removed first; click `Yes` to acknowledge that you wish to continue. + ![Acknowledge that dependent components are removed first](images/acknowledging_components_removed.png) -Progress bars are displayed as the software is removed. When the uninstallation is complete, an `Info` dialog opens to confirm that EDB Postgres Advanced Server (and/or its components) has been removed. - -![The uninstallation is complete](images/uninstallation_complete.png) - -
Fig. 3: The uninstallation is complete
+1. To continue, select **Yes**. + Progress bars are displayed as the software is removed. A confirmation reports when the uninstall process is complete. + ![The uninstall is complete](images/uninstallation_complete.png) diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx index cb3e8a9c39f..9e8ac90e11c 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/01_linking_versus_copying.mdx @@ -8,8 +8,8 @@ redirects: When invoking `pg_upgrade`, you can use a command-line option to specify whether to copy or link each table and index in the old cluster to the new cluster. -Linking is much faster because `pg_upgrade` creates a second name (a hard link) for each file in the cluster. Linking also requires no extra workspace because `pg_upgrade` doesn't make a copy of the original data. When linking the old cluster and the new cluster, the old and new clusters share the data. After starting the new cluster, your data can no longer be used with the previous version of EDB Postgres Advanced Server. +Linking is much faster because `pg_upgrade` creates a second name (a hard link) for each file in the cluster. Linking also requires no extra workspace because `pg_upgrade` doesn't make a copy of the original data. When linking the old cluster and the new cluster, the old and new clusters share the data. After starting the new cluster, you can no longer use your data with the previous version of EDB Postgres Advanced Server. 
-If you choose to copy data from the old cluster to the new cluster, `pg_upgrade` still reduces the amount of time required to perform an upgrade compared to the traditional `dump/restore` procedure. `pg_upgrade` uses a file-at-a-time mechanism to copy data files from the old cluster to the new cluster versus the row-by-row mechanism used by `dump/restore`. When you use `pg_upgrade`, you avoid building indexes in the new cluster. Each index is instead copied from the old cluster to the new cluster. Finally, using a `dump/restore` procedure to upgrade requires a great deal of workspace to hold the intermediate text-based dump of all of your data, while `pg_upgrade` requires very little extra workspace. +If you choose to copy data from the old cluster to the new cluster, `pg_upgrade` still reduces the time required to perform an upgrade compared to the traditional `dump/restore` procedure. `pg_upgrade` uses a file-at-a-time mechanism to copy data files from the old cluster to the new cluster versus the row-by-row mechanism used by `dump/restore`. When you use `pg_upgrade`, you avoid building indexes in the new cluster. Each index is instead copied from the old cluster to the new cluster. Finally, using a `dump/restore` procedure to upgrade requires a lot of workspace to hold the intermediate text-based dump of all of your data, while `pg_upgrade` requires very little extra workspace. Data that's stored in user-defined tablespaces isn't copied to the new cluster. It stays in the same location in the file system but is copied into a subdirectory whose name shows the version number of the new cluster. To manually relocate files that are stored in a tablespace after upgrading, move the files to the new location and update the symbolic links to point to the files. The symbolic links are located in the `pg_tblspc` directory under your cluster's `data` directory. 
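The link-versus-copy tradeoff described in the hunk above can be demonstrated with plain shell commands. This is an illustrative sketch using a scratch file in a temporary directory, not real cluster files:

```shell
# Simulate pg_upgrade's two transfer modes on a scratch file.
workdir=$(mktemp -d)
echo "row data" > "$workdir/old_file"

# --link mode: a hard link is a second name for the same file on disk.
ln "$workdir/old_file" "$workdir/linked_file"

# copy mode: an independent duplicate of the data.
cp "$workdir/old_file" "$workdir/copied_file"

# Writing through the "linked" name changes the shared on-disk data,
# so the old name now sees the new contents. The copy is unaffected.
echo "new data" > "$workdir/linked_file"
cat "$workdir/old_file"     # prints "new data"
cat "$workdir/copied_file"  # prints "row data"
```

Because a hard link is just a second name for the same on-disk file, any write through the new cluster's name is visible through the old cluster's name, which is why the old cluster can't be used after the new server starts in `--link` mode.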
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx index cd66e4ec2e4..ef480377cea 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/index.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/01_performing_an_upgrade/ --- -To upgrade an earlier version of EDB Postgres Advanced Server to the current version, you must: +To upgrade an earlier version of EDB Postgres Advanced Server to the current version: 1. Install the current version of EDB Postgres Advanced Server. The new installation must contain the same supporting server components as the old installation. 1. Empty the target database or create a new target cluster with `initdb`. @@ -20,14 +20,14 @@ After extracting the metadata from the old cluster, `pg_upgrade` performs the bo `pg_upgrade` runs the `pg_dumpall` script against the new cluster to create empty database objects of the same shape and type as those found in the old cluster. Then, `pg_upgrade` links or copies each table and index from the old cluster to the new cluster. -If you're upgrading to EDB Postgres Advanced Server and installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, you must drop the extension before performing an upgrade. To drop the extension, connect to the server with the psql or PEM client, and invoke the commands: +If you're upgrading to EDB Postgres Advanced Server and installed the `edb_dblink_oci` or `edb_dblink_libpq` extension, drop the extension before performing an upgrade. 
To drop the extension, connect to the server with the psql or PEM client, and invoke the commands: ```sql DROP EXTENSION edb_dblink_oci; DROP EXTENSION edb_dblink_libpq; ``` -When finish upgrading, you can use the `CREATE EXTENSION` command to add the current versions of the extensions to your installation. +When you finish upgrading, you can use the `CREATE EXTENSION` command to add the current versions of the extensions to your installation.
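As a companion to the `DROP EXTENSION` commands above, the post-upgrade step that the hunk mentions looks like this (a sketch assuming both extensions were installed; run it against the new cluster):

```sql
-- After the upgrade completes, re-create the current versions of the
-- extensions that were dropped before running pg_upgrade.
CREATE EXTENSION edb_dblink_oci;
CREATE EXTENSION edb_dblink_libpq;
```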
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx index f65ff6004fb..053609d6f69 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/01_command_line_options_reference.mdx @@ -56,7 +56,7 @@ Use the `-O` or `--new-options` keyword to specify options to pass to the new `p `-p old_port_number` `--old-port old_port_number` -Include the `-p` or `--old-port` keyword to specify the port number of the EDB Postgres Advanced Server installation that you are upgrading. +Include the `-p` or `--old-port` keyword to specify the port number of the EDB Postgres Advanced Server installation that you're upgrading. `-P new_port_number` `--new-port new_port_number` @@ -69,7 +69,7 @@ Include the `-P` or `--new-port` keyword to specify the port number of the new E `-r` `--retain` -During the upgrade process, `pg_upgrade` creates four append-only log files; when the upgrade is completed, `pg_upgrade` deletes these files. Include the `-r` or `--retain` option to specify for the server to retain the `pg_upgrade` log files. +During the upgrade process, `pg_upgrade` creates four append-only log files. When the upgrade is completed, `pg_upgrade` deletes these files. Include the `-r` or `--retain` option to retain the `pg_upgrade` log files. 
`-U user_name` `--username user_name` diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx index 02da5a53207..39953d6b9e6 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/index.mdx @@ -4,7 +4,7 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/02_invoking_pg_upgrade/ --- -When invoking `pg_upgrade`, you must specify the location of the old and new cluster's `PGDATA` and executable (`/bin`) directories, as well as the name of the EDB Postgres Advanced Server superuser, and the ports on which the installations are listening. A typical call to invoke `pg_upgrade` to migrate from EDB Postgres Advanced Server 14 to EDB Postgres Advanced Server 15 takes the form: +When invoking `pg_upgrade`, you must specify the location of the old and new cluster's `PGDATA` and executable (`/bin`) directories, the name of the EDB Postgres Advanced Server superuser, and the ports on which the installations are listening. 
A typical call to invoke `pg_upgrade` to migrate from EDB Postgres Advanced Server 14 to EDB Postgres Advanced Server 15 takes the form: ```shell pg_upgrade diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx index 15d549271f7..36c62586ccb 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx @@ -4,9 +4,9 @@ redirects: - /epas/latest/epas_upgrade_guide/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server/ --- -You can use `pg_upgrade` to upgrade from an existing installation of EDB Postgres Advanced Server into the cluster built by the EDB Postgres Advanced Server installer or into an alternate cluster created using the `initdb` command. +You can use `pg_upgrade` to upgrade from an existing installation of EDB Postgres Advanced Server into the cluster built by the EDB Postgres Advanced Server installer or into an alternative cluster created using the `initdb` command. -The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server installer, but you can omit Step 2 (Empty the edb database) and substitute the location of the alternate cluster when specifying a target cluster for the upgrade. +The basic steps to perform an upgrade into an empty cluster created with the `initdb` command are the same as the steps to upgrade into the cluster created by the EDB Postgres Advanced Server installer. 
However, you can omit Step 2 (Empty the edb database) and substitute the location of the alternative cluster when specifying a target cluster for the upgrade. If a problem occurs during the upgrade process, you can revert to the previous version. See [Reverting to the old cluster](06_reverting_to_the_old_cluster/#reverting_to_the_old_cluster) for detailed information about this process. @@ -18,11 +18,11 @@ Install EDB Postgres Advanced Server 15, specifying the same non-server componen ## Step 2 - Empty the target database -The target cluster must not contain any data. You can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server. If you installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`. If you installed EDB Postgres Advanced Server in Oracle mode, it creates a database named `postgres` and a database named `edb`. +The target cluster must not contain any data. You can create an empty cluster using the `initdb` command, or you can empty a database that was created during the installation of EDB Postgres Advanced Server. If you installed EDB Postgres Advanced Server in PostgreSQL mode, the installer creates a single database named `postgres`. Installing EDB Postgres Advanced Server in Oracle mode creates a database named `postgres` and a database named `edb`. The easiest way to empty the target database is to drop the database and then create a new database. Before invoking the `DROP DATABASE` command, you must disconnect any users and halt any services that are currently using the database. -On Windows, navigate through the Control Panel to the Services. manager. Select each service in the **Services** list, and select **Stop**. +On Windows, from the Control Panel, go to the Services manager. Select each service in the **Services** list, and select **Stop**.
On Linux, open a terminal window, assume superuser privileges, and manually stop each service. For example, invoke the following command to stop the pgAgent service: @@ -48,7 +48,7 @@ CREATE DATABASE ; During the upgrade process, `pg_upgrade` connects to the old and new servers several times. To make the connection process easier, you can edit the `pg_hba.conf` file, setting the authentication mode to `trust`. To modify the `pg_hba.conf` file, from the Start menu, select **EDB Postgres > EDB Postgres Advanced Server > Expert Configuration**. Select **Edit pg_hba.conf** to open the `pg_hba.conf` file. -You must allow trust authentication for the previous EDB Postgres Advanced Server installation and EDB Postgres Advanced Server servers. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the following figure. +You must allow trust authentication for the previous EDB Postgres Advanced Server installation and EDB Postgres Advanced Server servers. Edit the `pg_hba.conf` file for both installations of EDB Postgres Advanced Server as shown in the figure. ![Configuring EDB Postgres Advanced Server to use trust authentication.](../images/configuring_advanced_server_to_use_trust_authentication.png) @@ -58,9 +58,9 @@ If the system is required to maintain `md5` authentication mode during the upgra ## Step 4 - Stop all component services and servers -Before you invoke `pg_upgrade`, you must stop any services that belong to the original EDB Postgres Advanced Server installation, EDB Postgres Advanced Server, or the supporting components. This ensures that a service doesn't attempt to access either cluster during the upgrade process. +Before you invoke `pg_upgrade`, you must stop any services that belong to the original EDB Postgres Advanced Server installation, EDB Postgres Advanced Server, or the supporting components. Stopping these services ensures that a service doesn't attempt to access either cluster during the upgrade process. 
-The services that are most likely to be running in your installation are: +The services in the table are most likely to be running in your installation. | Service | On Linux | On Windows | | ---------------------------------------------- | -------------------------------------- | ---------------------------------------------------------- | @@ -95,38 +95,38 @@ Open the Services applet. Select each EDB Postgres Advanced Server or supporting Open a terminal window and manually stop each service at the command line. -## Step 5 For Linux only - Assume the identity of the cluster owner +## Step 5 for Linux only - Assume the identity of the cluster owner -If you're using Linux, assume the identity of the EDB Postgres Advanced Server cluster owner. The following example assumes EDB Postgres Advanced Server was installed in the default, compatibility-with-Oracle database mode, assigning `enterprisedb` as the cluster owner. If installed in compatibility-with-PostgreSQL database mode, `postgres` is the cluster owner. +If you're using Linux, assume the identity of the EDB Postgres Advanced Server cluster owner. This example assumes EDB Postgres Advanced Server was installed in the default, compatibility-with-Oracle database mode, assigning `enterprisedb` as the cluster owner. (If installed in compatibility-with-PostgreSQL database mode, `postgres` is the cluster owner.) ```shell su - enterprisedb ``` -Enter the EDB Postgres Advanced Server cluster owner password if prompted. Then, set the path to include the location of the `pg_upgrade` executable: +If prompted, enter the EDB Postgres Advanced Server cluster owner password. Then, set the path to include the location of the `pg_upgrade` executable: ```shell export PATH=$PATH:/usr/edb/as14/bin ``` -During the upgrade process, `pg_upgrade` writes a file to the current working directory of the `enterprisedb` user; you must invoke `pg_upgrade` from a directory where the `enterprisedb` user has `write` privileges. 
After performing the above commands, navigate to a directory in which the `enterprisedb` user has sufficient privileges to write a file. +During the upgrade process, `pg_upgrade` writes a file to the current working directory of the `enterprisedb` user. You must invoke `pg_upgrade` from a directory where the `enterprisedb` user has write privileges. After performing the previous commands, navigate to a directory in which the `enterprisedb` user has sufficient privileges to write a file. ```shell cd /tmp ``` -## Step 5 For Windows only - Assume the identity of the cluster owner +## Step 5 for Windows only - Assume the identity of the cluster owner If you're using Windows, open a terminal window, assume the identity of the EDB Postgres Advanced Server cluster owner, and set the path to the `pg_upgrade` executable. -If the `--serviceaccount service_account_user` parameter was specified during the initial installation of EDB Postgres Advanced Server, then `service_account_user` is the EDB Postgres Advanced Server cluster owner and is the user to give with the `RUNAS` command. +If the `--serviceaccount service_account_user` parameter was specified during the initial installation of EDB Postgres Advanced Server, then `service_account_user` is the EDB Postgres Advanced Server cluster owner. In that case, specify this user with the `RUNAS` command: ```sql RUNAS /USER:service_account_user "CMD.EXE" SET PATH=%PATH%;C:\Program Files\edb\as14\bin ``` -During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user. You must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. After performing the above commands, navigate to a directory in which the service account user has privileges to write a file: +During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user.
You must invoke `pg_upgrade` from a directory where the service account user has write privileges. After performing the previous commands, navigate to a directory in which the service account user has privileges to write a file: ```shell cd %TEMP% @@ -136,7 +136,7 @@ If you omitted the `--serviceaccount` parameter during the initial installation When `NT AUTHORITY\NetworkService` is the service account user, the `RUNAS` command might not be usable. It prompts for a password, and the `NT AUTHORITY\NetworkService` account isn't assigned a password. Thus, there's typically a failure with an error message such as “Unable to acquire user password.” -Under this circumstance, you must use a Windows utility program named `PsExec` to run `CMD.EXE` as the service account `NT AUTHORITY\NetworkService`. +Under this circumstance, you must use the Windows utility program `PsExec` to run `CMD.EXE` as the service account `NT AUTHORITY\NetworkService`. Obtain the `PsExec` program by downloading `PsTools`, which is available at the [Microsoft site](https://technet.microsoft.com/en-us/sysinternals/bb897553.aspx). @@ -147,7 +147,7 @@ psexec.exe -u "NT AUTHORITY\NetworkService" CMD.EXE SET PATH=%PATH%;C:\Program Files\edb\as14\bin ``` -During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user. You must invoke `pg_upgrade` from a directory where the service account user has `write` privileges. After performing the above commands, navigate to a directory in which the service account user has privileges to write a file: +During the upgrade process, `pg_upgrade` writes a file to the current working directory of the service account user. You must invoke `pg_upgrade` from a directory where the service account user has write privileges. 
After performing the previous commands, navigate to a directory in which the service account user has privileges to write a file: ```shell cd %TEMP% @@ -157,7 +157,7 @@ cd %TEMP% Before attempting an upgrade, perform a consistency check to ensure that the old and new clusters are compatible and properly configured. Include the `--check` option to instruct `pg_upgrade` to perform the consistency check. -Thid example dhoed invoking `pg_upgrade` to perform a consistency check on Linux: +This example shows invoking `pg_upgrade` to perform a consistency check on Linux: ```shell pg_upgrade -d /var/lib/edb/as13/data @@ -167,7 +167,7 @@ pg_upgrade -d /var/lib/edb/as13/data If the command is successful, it returns `*Clusters are compatible*`. -If you're using Windows, you must quote any directory names that contain a space: +If you're using Windows, quote any directory names that contain a space: ```shell pg_upgrade.exe @@ -177,9 +177,9 @@ pg_upgrade.exe -B "C:\Program Files\edb\as14\bin" -p 5444 -P 5445 --check ``` -During the consistency checking process, `pg_upgrade` logs any discrepancies that it finds to a file located in the directory from which you invoked `pg_upgrade`. When the consistency check completes, review the file to identify any missing components or upgrade conflicts. You must resolve any conflicts before invoking `pg_upgrade` to perform a version upgrade. +During the consistency checking process, `pg_upgrade` logs any discrepancies that it finds to a file located in the directory from which you invoked `pg_upgrade`. When the consistency check completes, review the file to identify any missing components or upgrade conflicts. Resolve any conflicts before invoking `pg_upgrade` to perform a version upgrade. -If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to add the component that contains the component. Before using StackBuilder Plus, you must restart the EDB Postgres Advanced Server service. 
After restarting the service, open StackBuilder Plus by selecting **Start > EDB Postgres Advanced Server 15 > StackBuilder Plus**. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components. +If `pg_upgrade` alerts you to a missing component, you can use StackBuilder Plus to add the module that contains the component. Before using StackBuilder Plus, restart the EDB Postgres Advanced Server service. Then, open StackBuilder Plus by selecting from the Start menu **EDB Postgres Advanced Server 15 > StackBuilder Plus**. Follow the onscreen advice of the StackBuilder Plus wizard to download and install the missing components. After `pg_upgrade` confirms that the clusters are compatible, you can perform a version upgrade. @@ -224,8 +224,7 @@ Checking for prepared transactions ok ``` -If `pg_upgrade` fails after this point, you must re-initdb the -new cluster before continuing. Otherwise, it continues as follows: +If `pg_upgrade` fails after this point, you must re-initdb the new cluster before continuing. Otherwise, it continues as follows: ```shell Performing Upgrade @@ -259,15 +258,15 @@ Running this script will delete the old cluster's data files: While `pg_upgrade` runs, it might generate SQL scripts that handle special circumstances that it encountered during your upgrade. For example, if the old cluster contains large objects, you might need to invoke a script that defines the default permissions for the objects in the new cluster. When performing the pre-upgrade consistency check, `pg_upgrade` alerts you to any script that you might need to run manually. -You must invoke the scripts after `pg_upgrade` completes.
To invoke the scripts, connect to the new cluster as a database superuser with the EDB-PSQL command-line client, and invoke each script using the `\i` option: ```shell \i complete_path_to_script/script.sql ``` -It's generally unsafe to access tables referenced in rebuild scripts until the rebuild scripts finish. Accessing the tables might yield incorrect results or poor performance. You cam access tables not referenced in rebuild scripts immediately. +It's generally unsafe to access tables referenced in rebuild scripts until the rebuild scripts finish. Accessing the tables might yield incorrect results or poor performance. You can access tables not referenced in rebuild scripts immediately. -If `pg_upgrade` fails to complete the upgrade process, the old cluster is unchanged, except that `$PGDATA/global/pg_control` is renamed to `pg_control.old`, and each tablespace is renamed to `tablespace.old`. To revert to the pre-invocation state: +If `pg_upgrade` fails to complete the upgrade process, the old cluster is unchanged except that `$PGDATA/global/pg_control` is renamed to `pg_control.old` and each tablespace is renamed to `tablespace.old`. To revert to the pre-invocation state: 1. Delete any tablespace directories created by the new cluster. 2. Rename `$PGDATA/global/pg_control`, removing the `.old` suffix. @@ -279,7 +278,7 @@ Then, resolve any upgrade conflicts encountered and try the upgrade again. When the upgrade is complete, `pg_upgrade` might also recommend vacuuming the new cluster. It provides a script that allows you to delete the old cluster. !!! Note - Before removing the old cluster, ensure that the cluster was upgraded as expected and that you have a backup of the cluster in case you need to revert to a previous version. + Before removing the old cluster, make sure that the cluster was upgraded as expected and that you have a backup of the cluster in case you need to revert to a previous version.
## Step 8 - Restore the authentication settings in the pg_hba.conf file @@ -287,4 +286,4 @@ If you modified the `pg_hba.conf` file to permit `trust` authentication, update ## Step 9 - Move and identify user-defined tablespaces (optional) -If you have data stored in a user-defined tablespace, you must manually relocate tablespace files after upgrading. Move the files to the new location and update the symbolic links to point to the files. The symbolic links are located in the `pg_tblspc` directory under your cluster's `data` directory. +If you have data stored in a user-defined tablespace, you must manually relocate tablespace files after upgrading. Move the files to the new location, and update the symbolic links to point to the files. The symbolic links are located in the `pg_tblspc` directory under your cluster's `data` directory. diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx index 024f334af90..10fa13ed52c 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/06_reverting_to_the_old_cluster.mdx @@ -6,9 +6,9 @@ redirects: -The method used to revert to a previous cluster varies with the options specified when invoking `pg_upgrade`: +The method you use to revert to a previous cluster varies with the options specified when invoking `pg_upgrade`: - If you specified the `--check` option when invoking `pg_upgrade`, an upgrade wasn't performed and no modifications were made to the old cluster. You can reuse the old cluster at any time. -- If you included the `--link` option when invoking `pg_upgrade`, the data files are shared between the old and new cluster after the upgrade completes. 
If you started the server that's servicing the new cluster, the new server wrote to those shared files and it's unsafe to use the old cluster. -- If you ran `pg_upgrade` without the `--link` specification or haven't started the new server, the old cluster is unchanged, except that the `.old` suffix was appended to the `$PGDATA/global/pg_control` and tablespace directories. -- To reuse the old cluster, delete the tablespace directories created by the new cluster and remove the `.old` suffix from `$PGDATA/global/pg_control` and the old cluster tablespace directory names. Restart the server that services the old cluster. +- If you included the `--link` option when invoking `pg_upgrade`, the data files are shared between the old and new cluster after the upgrade completes. If you started the server that's servicing the new cluster, the new server wrote to those shared files, and it's unsafe to use the old cluster. +- If you ran `pg_upgrade` without the `--link` specification or haven't started the new server, the old cluster is unchanged except that the `.old` suffix was appended to the `$PGDATA/global/pg_control` and tablespace directories. +- To reuse the old cluster, delete the tablespace directories created by the new cluster. Remove the `.old` suffix from `$PGDATA/global/pg_control` and the old cluster tablespace directory names. Restart the server that services the old cluster. 
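The revert steps in the hunk above can be sketched in shell. This simulation uses a scratch directory and a hypothetical tablespace directory name (`my_tablespace`) in place of a real `$PGDATA`:

```shell
# Simulate reverting a cluster where pg_upgrade appended the .old suffix.
PGDATA=$(mktemp -d)                       # stand-in for the real data directory
mkdir -p "$PGDATA/global"
touch "$PGDATA/global/pg_control.old"     # as left behind after pg_upgrade ran
mkdir "$PGDATA/my_tablespace.old"         # hypothetical old-cluster tablespace

# Remove the .old suffix from pg_control and each tablespace directory.
mv "$PGDATA/global/pg_control.old" "$PGDATA/global/pg_control"
mv "$PGDATA/my_tablespace.old" "$PGDATA/my_tablespace"

ls "$PGDATA/global"   # prints "pg_control"
```

After the renames, you would restart the server that services the old cluster, as the hunk describes.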
diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx index 5d4909ce2b4..a9f99cf05eb 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/index.mdx @@ -6,14 +6,19 @@ redirects: -While minor upgrades between versions are fairly simple and require only installing new executables, past major version upgrades were both expensive and time consuming. `pg_upgrade` facilitates migration between any version of EDB Postgres Advanced Server (version 9.0 or later) and any subsequent release of EDB Postgres Advanced Server that's supported on the same platform. +While minor upgrades between versions are fairly simple and require only installing new executables, past major version upgrades were both expensive and time consuming. `pg_upgrade` eases migration between any version of EDB Postgres Advanced Server (version 9.0 or later) and any later release of EDB Postgres Advanced Server that's supported on the same platform. -Without `pg_upgrade`, to migrate from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15, you must export all of your data using `pg_dump`, install the new release, run `initdb` to create a new cluster, and then import your old data. +Without `pg_upgrade`, to migrate from an earlier version of EDB Postgres Advanced Server to the newest version: + +1. Export all of your data using `pg_dump`. +1. Install the new release. +1. Run `initdb` to create a new cluster. +1. Import your old data. !!! Note `pg_upgrade` can reduce both the amount of time and the disk space required for many major-version upgrades. -The `pg_upgrade` utility performs an in-place transfer of existing data between EDB Postgres Advanced Server and any subsequent version. 
+The `pg_upgrade` utility performs an in-place transfer of existing data between EDB Postgres Advanced Server and any later version. Several factors determine if an in-place upgrade is practical: @@ -23,9 +28,9 @@ Several factors determine if an in-place upgrade is practical: Before performing a version upgrade, `pg_upgrade` verifies that the old cluster and the new cluster are compatible. -If the upgrade involves a change in the on-disk representation of database objects or data or involves a change in the binary representation of data types, `pg_upgrade` can't perform the upgrade. To upgrade, you have to `pg_dump` the old data and then import that data to the new cluster. +If the upgrade involves a change in the on-disk representation of database objects or data, or if it involves a change in the binary representation of data types, `pg_upgrade` can't perform the upgrade. To upgrade, you have to `pg_dump` the old data and then import that data to the new cluster. -The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server and is installed as part of the Database Server component. No additional installation or configuration steps are required. +The `pg_upgrade` executable is distributed with EDB Postgres Advanced Server and is installed as part of the Database Server component. You don't need to further install or configure it.
diff --git a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx index 0fd7f1dcfd1..fb1c932c7b3 100644 --- a/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx +++ b/product_docs/docs/epas/15/upgrading/05_performing_a_minor_version_update_of_an_rpm_installation.mdx @@ -25,5 +25,4 @@ yum update edb* !!! Note The `yum update` command performs an update only between minor releases. To update between major releases, use `pg_upgrade`. -For more information about using yum commands and options, enter `yum --help` at the command line. - +For more information about using `yum` commands and options, enter `yum --help` at the command line. diff --git a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx index 12eca3c3d70..c64b0af55ec 100644 --- a/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx +++ b/product_docs/docs/epas/15/upgrading/06_using_stackbuilder_plus_to_perform_a_minor_version_update.mdx @@ -4,45 +4,44 @@ redirects: - /epas/latest/epas_upgrade_guide/06_using_stackbuilder_plus_to_perform_a_minor_version_update/ --- -StackBuilder Plus is supported only on Windows systems. +!!! Note + StackBuilder Plus is supported only on Windows systems. The StackBuilder Plus utility provides a graphical interface that simplifies the process of updating, downloading, and installing modules that complement your EDB Postgres Advanced Server installation. When you install a module with StackBuilder Plus, StackBuilder Plus resolves any software dependencies. -You can invoke StackBuilder Plus at any time after the installation has completed by selecting the **Apps >StackBuilder Plus**. 
Enter your system password if prompted, and the StackBuilder Plus welcome window opens. +1. To invoke StackBuilder Plus at any time after the installation has completed, select **Apps > StackBuilder Plus**. Enter your system password if prompted, and the StackBuilder Plus welcome window opens. -![The StackBuilder Plus welcome window](images/the_stackBuilder_plus_welcome.png) + ![The StackBuilder Plus welcome window](images/the_stackBuilder_plus_welcome.png) -Select your EDB Postgres Advanced Server installation. +1. Select your EDB Postgres Advanced Server installation. -StackBuilder Plus requires internet access. If your installation of EDB Postgres Advanced Server resides behind a firewall (with restricted internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy. Currently, all updates are transferred by an HTTP proxy, and the FTP proxy information isn't used. + StackBuilder Plus requires internet access. If your installation of EDB Postgres Advanced Server is behind a firewall (with restricted internet access), StackBuilder Plus can download program installers through a proxy server. The module provider determines if the module can be accessed through an HTTP proxy or an FTP proxy. Currently, all updates are transferred by an HTTP proxy, and the FTP proxy information isn't used. -If the selected EDB Postgres Advanced Server installation has restricted Internet access, on the Welcome screen, select **Proxy Servers** ti open the Proxy servers dialog box: +1. 
If the selected EDB Postgres Advanced Server installation has restricted Internet access, on the Welcome screen, select **Proxy Servers** to open the Proxy servers dialog box: -![The Proxy servers dialog](images/the_proxy_servers_dialog.png) + ![The Proxy servers dialog](images/the_proxy_servers_dialog.png) -On the dialog box, nter the IP address and port number of the proxy server in the **HTTP proxy** box. Currently, all StackBuilder Plus modules are distributed by HTTP proxy (FTP proxy information is ignored). +1. On the dialog box, enter the IP address and port number of the proxy server in the **HTTP proxy** box. Currently, all StackBuilder Plus modules are distributed by HTTP proxy (FTP proxy information is ignored). Select **OK**. -Select **OK**. + ![The StackBuilder Plus module selection window](images/the_stackBuilder_plus_module_selection_window.png) -![The StackBuilder Plus module selection window](images/the_stackBuilder_plus_module_selection_window.png) + The tree control on the StackBuilder Plus module selection window displays a node for each module category. -The tree control on the StackBuilder Plus module selection window displays a node for each module category. +1. To add a component to the selected EDB Postgres Advanced Server installation or to upgrade a component, select the box to the left of the module name, and select **Next**. -To add a component to the selected EDB Postgres Advanced Server installation or to upgrade a component, select the box to the left of the module name and select **Next**. +1. If prompted, enter your email address and password on the StackBuilder Plus registration window. -If prompted, enter your email address and password on the StackBuilder Plus registration window. 
+ ![A summary window displays a list of selected packages](images/selected_packages_summary_window.png) -![A summary window displays a list of selected packages](images/selected_packages_summary_window.png) + StackBuilder Plus confirms the packages selected. The Selected packages dialog box displays the name and version of the installer. Select **Next**. -StackBuilder Plus confirms the packages selected. The Selected packages dialog box displays the name and version of the installer. Select **Next**. + When the download completes, a window opens that confirms the installation files were downloaded and are ready for installation. -When the download completes, a window opens that confirms the installation files were downloaded and are ready for installation. + ![Confirmation that the download process is complete](images/download_complete_confirmation.png) -![Confirmation that the download process is complete](images/download_complete_confirmation.png) +1. Leave the **Skip Installation** check box cleared and select **Next** to start the installation process. (Select the check box and select **Next** to exit StackBuilder Plus without installing the downloaded files.) -Leave the **Skip Installation** check box cleared and select **Next** to start the installation process. (Select the check box and select **Next** to exit StackBuilder Plus without installing the downloaded files.) - -![StackBuilder Plus confirms the completed installation](images/stackBuilder_plus_confirms_the_completed_installation.png) + ![StackBuilder Plus confirms the completed installation](images/stackBuilder_plus_confirms_the_completed_installation.png) When the upgrade is complete, StackBuilder Plus alerts you to the success or failure of the installation of the requested package. If you were prompted by an installer to restart your computer, restart now. 
diff --git a/product_docs/docs/epas/15/upgrading/index.mdx b/product_docs/docs/epas/15/upgrading/index.mdx index 94de0ce35d1..cea32194dc6 100644 --- a/product_docs/docs/epas/15/upgrading/index.mdx +++ b/product_docs/docs/epas/15/upgrading/index.mdx @@ -5,8 +5,8 @@ redirects: - /epas/latest/epas_upgrade_guide/ --- -Upgrading EDB Postgres Advanced Server involves the following: +Upgrading EDB Postgres Advanced Server involves: -- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to EDB Postgres Advanced Server 15. +- `pg_upgrade` to upgrade from an earlier version of EDB Postgres Advanced Server to the latest version. - `yum` to perform a minor version upgrade on a Linux host. - `StackBuilder Plus` to perform a minor version upgrade on a Windows host. From 99aa392653eb2ea4159898b3c0e4a2ab33e9c260 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 28 Feb 2023 13:16:09 -0500 Subject: [PATCH 09/50] Marked places to update with v14 --- .../03_built-in_packages/18_dbms_utility.mdx | 2 ++ ...namic_runtime_instrumentation_tools_architecture_DRITA.mdx | 2 ++ .../02_configuring_sql_protect.mdx | 4 ++++ .../04_backing_up_restoring_sql_protect.mdx | 2 +- .../reference_command_line_options.mdx | 4 +++- .../managing_an_advanced_server_installation/index.mdx | 2 +- .../03_upgrading_to_advanced_server.mdx | 2 ++ 7 files changed, 15 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx index 0b17b861698..48dcf7ddb19 100644 --- a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx +++ b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx @@ -322,6 +322,8 @@ DB_VERSION( OUT VARCHAR2, OUT VARCHAR2) The following anonymous block displays the database version 
information.

+
+
```sql
DECLARE
    v_version       VARCHAR2(150);
diff --git a/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx b/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
index 68b37c2aae4..d111236ac67 100644
--- a/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
+++ b/product_docs/docs/epas/15/epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx
@@ -578,6 +578,8 @@ edbreport(, )
The call to the `edbreport()` function returns a composite report that contains system information and the reports returned by the other statspack functions:
+
+
```sql
SELECT * FROM edbreport(9, 10);
__OUTPUT__
diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
index 806ee3c3fcf..fc85dd74422 100644
--- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx
@@ -68,6 +68,8 @@ edb_sql_protect.max_queries_to_save = 5000
This example shows the process to set up protection for a database named `edb`:
+
+
```sql
$ /usr/edb/as14/bin/psql -d edb -U enterprisedb
Password for user enterprisedb:
@@ -115,6 +117,8 @@ For each database that you want to protect, you must determine the roles you wan
1.
Connect as a superuser to a database that you want to protect with either `psql` or the Postgres Enterprise Manager client: + + ```sql $ /usr/edb/as14/bin/psql -d edb -U enterprisedb Password for user enterprisedb: diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx index c2ba6ce3ba4..6943392212f 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx @@ -78,7 +78,7 @@ CREATE SCHEMA 2. Connect to the new database as a superuser, and delete all rows from the `edb_sql_protect_rel` table. This deletion removes any existing rows in the `edb_sql_protect_rel` table that were backed up from the original database. These rows don't contain the correct OIDs relative to the database where the backup file was restored. 
- + ```sql $ /usr/edb/as14/bin/psql -d newdb -U enterprisedb Password for user enterprisedb: diff --git a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx index b145be32d5f..a2db1adaded 100644 --- a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx +++ b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx @@ -35,7 +35,7 @@ Use the `--disable-components` parameter to specify a list of EDB Postgres Advan `dbserver` -EDB Postgres Advanced Server 14. +EDB Postgres Advanced Server. 
`pgadmin4` @@ -163,6 +163,8 @@ Include `--unattendedmodeui minimalWithDialogs` to specify that the installer sh Include the `--version` parameter to retrieve version information about the installer: + + `EDB Postgres Advanced Server 14.0.3-1 --- Built on 2020-10-23 00:12:44 IB: 20.6.0-202008110127` `--workload_profile {oltp | mixed | reporting}` diff --git a/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx b/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx index c590ca7fed4..952a630c2d0 100644 --- a/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx +++ b/product_docs/docs/epas/15/installing/windows/managing_an_advanced_server_installation/index.mdx @@ -24,7 +24,7 @@ The following table lists the names of the services that control EDB Postgres Ad | EDB Postgres Advanced Server component name | Windows service name | | ------------------------------ | ------------------------------------------------ | | EDB Postgres Advanced Server | edb-as-14 | -| pgAgent | EDB Postgres Advanced Server 14 Scheduling Agent | +| pgAgent | EDB Postgres Advanced Server Scheduling Agent | | PgBouncer | edb-pgbouncer-1.14 | | Slony | edb-slony-replication-14 | diff --git a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx index 36c62586ccb..9c94347eaaa 100644 --- a/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx +++ b/product_docs/docs/epas/15/upgrading/04_upgrading_an_installation_with_pg_upgrade/03_upgrading_to_advanced_server.mdx @@ -62,6 +62,7 @@ Before you invoke `pg_upgrade`, you must stop any services that belong to the or The services in the table are most likely to be running in your 
installation. + | Service | On Linux | On Windows | | ---------------------------------------------- | -------------------------------------- | ---------------------------------------------------------- | | EnterprisEDB Postgres Advanced Server 9.6 | edb-as-9.6 | edb-as-9.6 | @@ -70,6 +71,7 @@ The services in the table are most likely to be running in your installation. | EnterprisEDB Postgres Advanced Server 12 | edb-as-12 | edb-as-12 | | EnterprisEDB Postgres Advanced Server 13 | edb-as-13 | edb-as-13 | | EnterprisEDB Postgres Advanced Server 14 | edb-as-14 | edb-as-14 | +| EnterprisEDB Postgres Advanced Server 15 | edb-as-15 | edb-as-15 | | EDB Postgres Advanced Server 9.6 Scheduling Agent (pgAgent) | edb-pgagent-9.6 | EnterprisEDB Postgres Advanced Server 9.6 Scheduling Agent | | Infinite Cache 9.6 | edb-icache | N/A | | Infinite Cache 10 | edb-icache | N/A | From c8733d36fc8b0def3c73f3c32c9a87819db0a087 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Tue, 28 Feb 2023 14:55:04 -0500 Subject: [PATCH 10/50] replaced placeholder content with the Limitations section from the Known issues topics --- product_docs/docs/pgd/4/known_issues.mdx | 28 +------------ product_docs/docs/pgd/4/limitations.mdx | 50 ++++++++++++------------ product_docs/docs/pgd/5/known_issues.mdx | 26 ------------ product_docs/docs/pgd/5/limitations.mdx | 49 ++++++++++++----------- 4 files changed, 50 insertions(+), 103 deletions(-) diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx index a60faa9de26..a3d0a95a5e4 100644 --- a/product_docs/docs/pgd/4/known_issues.mdx +++ b/product_docs/docs/pgd/4/known_issues.mdx @@ -93,30 +93,4 @@ release. attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it. -## List of limitations - -This is a (non-comprehensive) list of limitations that are -expected and are by design. They are not expected to be resolved in the -future. 
- -- Replacing a node with its physical standby doesn't work for nodes that - use CAMO/Eager/Group Commit. Combining physical standbys and BDR in - general isn't recommended, even if otherwise possible. - -- A `galloc` sequence might skip some chunks if the - sequence is created in a rolled back transaction and then created - again with the same name. This can also occur if it is created and dropped when DDL - replication isn't active and then it is created again when DDL - replication is active. - The impact of the problem is mild, because the sequence - guarantees aren't violated. The sequence skips only some - initial chunks. Also, as a workaround you can specify the - starting value for the sequence as an argument to the - `bdr.alter_sequence_set_kind()` function. - -- Legacy BDR synchronous replication uses a mechanism for transaction - confirmation different from the one used by CAMO, Eager, and Group Commit. - The two are not compatible and must not be used together. Therefore, nodes - that appear in `synchronous_standby_names` must not be part of CAMO, Eager, - or Group Commit configuration. Using synchronous replication to other nodes, - including both logical and physical standby is possible. + diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 2665f4a5b00..27535f6e356 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -2,29 +2,29 @@ title: "Limitations" --- -## Using Postgres Distributed for multiple databases on the same instance - -The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation. 
- -While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used. - -Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. - -Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance: - -1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. - -1. Each database needs to be monitored separately, adding overhead. - -1. TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. - -1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. - -1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. - -1. 
When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. - -1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. - -1. CLI and OTEL integration (new with v5) assumes one database. +This is a (non-comprehensive) list of limitations that are +expected and are by design. They are not expected to be resolved in the +future. + +- Replacing a node with its physical standby doesn't work for nodes that + use CAMO/Eager/Group Commit. Combining physical standbys and BDR in + general isn't recommended, even if otherwise possible. + +- A `galloc` sequence might skip some chunks if the + sequence is created in a rolled back transaction and then created + again with the same name. This can also occur if it is created and dropped when DDL + replication isn't active and then it is created again when DDL + replication is active. + The impact of the problem is mild, because the sequence + guarantees aren't violated. The sequence skips only some + initial chunks. Also, as a workaround you can specify the + starting value for the sequence as an argument to the + `bdr.alter_sequence_set_kind()` function. + +- Legacy BDR synchronous replication uses a mechanism for transaction + confirmation different from the one used by CAMO, Eager, and Group Commit. + The two are not compatible and must not be used together. Therefore, nodes + that appear in `synchronous_standby_names` must not be part of CAMO, Eager, + or Group Commit configuration. Using synchronous replication to other nodes, + including both logical and physical standby is possible. 
\ No newline at end of file diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx index bf801110808..30ffd8ac323 100644 --- a/product_docs/docs/pgd/5/known_issues.mdx +++ b/product_docs/docs/pgd/5/known_issues.mdx @@ -73,29 +73,3 @@ release. attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it. -## List of limitations - -This is a (non-comprehensive) list of limitations that are -expected and are by design. They are not expected to be resolved in the -future. - -- Replacing a node with its physical standby doesn't work for nodes that - use CAMO/Eager/Group Commit. Combining physical standbys and BDR in - general isn't recommended, even if otherwise possible. - -- A `galloc` sequence might skip some chunks if the - sequence is created in a rolled back transaction and then created - again with the same name. This can also occur if it is created and dropped when DDL - replication isn't active and then it is created again when DDL - replication is active. - The impact of the problem is mild, because the sequence - guarantees aren't violated. The sequence skips only some - initial chunks. Also, as a workaround you can specify the - starting value for the sequence as an argument to the - `bdr.alter_sequence_set_kind()` function. - -- Legacy BDR synchronous replication uses a mechanism for transaction - confirmation different from the one used by CAMO, Eager, and Group Commit. - The two are not compatible and must not be used together. Using synchronous - replication to other non-BDR nodes, including both logical and physical - standby is possible. 
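The `galloc` sequence workaround described in the limitations above — specifying the starting value as an argument to `bdr.alter_sequence_set_kind()` — can be sketched as follows. This is a hypothetical example: the database name, user, sequence name, and starting value are illustrative assumptions, not values from the source:

```shell
# Hypothetical sketch of the skipped-chunks workaround for galloc
# sequences. Database, user, sequence name, and start value are
# assumptions to adapt to your own cluster.
psql -d bdrdb -U enterprisedb <<'SQL'
-- Re-register the sequence as galloc, supplying an explicit
-- starting value so the skipped initial chunks are stepped over.
SELECT bdr.alter_sequence_set_kind('public.order_id_seq'::regclass,
                                   'galloc',
                                   10000);
SQL
```

Because the sequence guarantees aren't violated either way, this step matters mainly when you need the sequence to begin at a predictable value.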
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index 2665f4a5b00..1cae37d1257 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -2,29 +2,28 @@ title: "Limitations" --- -## Using Postgres Distributed for multiple databases on the same instance - -The documentation for EDB Postgres Distributed states that it’s best practice for EDB Postgres Distributed to be used for a single database. This is codified in the default deployment automation with TPA and tooling like the CLI and proxy. Additionally, each VM or physical server hosting EDB Postgres Distributed should have only a single Postgres installation. - -While the documentation also states up to 10 databases are allowed, the content is included in the “Limitations” section of the documentation and intended to communicate it as an “exception” situation rather than a “best practice”. The documentation for existing PGD versions will be updated in the near future to provide more clarity about the limitations and challenges with the current product when multiple databases are used. - -Support for using EDB Postgres Distributed for multiple databases on the same Postgres instance will be deprecated in the near term and not supported in future releases. As we extend the capabilities of the product to include sharding and write anywhere functionality, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. - -Limitations/Risks when using EDB Postgres Distributed for multiple databases on the same instance: - -1. Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. - -1. Each database needs to be monitored separately, adding overhead. - -1. 
TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. - -1. HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. - -1. Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. - -1. When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. - -1. Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. - -1. CLI and OTEL integration (new with v5) assumes one database. +This is a (non-comprehensive) list of limitations that are +expected and are by design. They are not expected to be resolved in the +future. + +- Replacing a node with its physical standby doesn't work for nodes that + use CAMO/Eager/Group Commit. Combining physical standbys and BDR in + general isn't recommended, even if otherwise possible. + +- A `galloc` sequence might skip some chunks if the + sequence is created in a rolled back transaction and then created + again with the same name. This can also occur if it is created and dropped when DDL + replication isn't active and then it is created again when DDL + replication is active. + The impact of the problem is mild, because the sequence + guarantees aren't violated. 
The sequence skips only some + initial chunks. Also, as a workaround you can specify the + starting value for the sequence as an argument to the + `bdr.alter_sequence_set_kind()` function. + +- Legacy BDR synchronous replication uses a mechanism for transaction + confirmation different from the one used by CAMO, Eager, and Group Commit. + The two are not compatible and must not be used together. Using synchronous + replication to other non-BDR nodes, including both logical and physical + standby is possible. From 40b344c1c9859a6341d6094533754d41ffa35f55 Mon Sep 17 00:00:00 2001 From: kelpoole <44814688+kelpoole@users.noreply.github.com> Date: Tue, 28 Feb 2023 14:51:25 -0700 Subject: [PATCH 11/50] Update limitations.mdx --- product_docs/docs/pgd/5/limitations.mdx | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index 1cae37d1257..fce135d814b 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -5,11 +5,11 @@ title: "Limitations" This is a (non-comprehensive) list of limitations that are expected and are by design. They are not expected to be resolved in the -future. +future and should be taken under consideration when planning your deployment. - Replacing a node with its physical standby doesn't work for nodes that - use CAMO/Eager/Group Commit. Combining physical standbys and BDR in - general isn't recommended, even if otherwise possible. + use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres + Distributed isn't recommended, even if possible. - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created @@ -22,8 +22,8 @@ future. starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. 
-- Legacy BDR synchronous replication uses a mechanism for transaction +- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Using synchronous - replication to other non-BDR nodes, including both logical and physical + replication to other non-PGD nodes, including both logical and physical standby is possible. From 7227a631198d0a6bb9cd2f48585ebb910271a42b Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Wed, 1 Mar 2023 11:54:55 +0530 Subject: [PATCH 12/50] Correct documentation for edb_wait_states in Performance Diagnostic Performance Diagnostic, Prerequisites content change in both doc and OLH --- .../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +- .../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 6a89925ef07..1a78f111397 100644 --- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -20,7 +20,7 @@ You can analyze the Wait States data on multiple levels by narrowing down your s Prerequisite: -- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you need to install `edb-as-server-edb-modules`, Where `` is the version of Advanced Server. +- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. 
You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).

- Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below:

diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
index 170a6881e96..3ecd0010f3d 100644
--- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
+++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx
@@ -21,7 +21,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel
 
 ## Prerequisites
 
-- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you need to install the `edb-as-server-edb-modules`, where `` is the version of EDB Postgres Advanced Server.
+- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states).
- After you install the EDB Wait States module of EDB Postgres Advanced Server: 1. Configure the list of libraries in the `postgresql.conf` file as shown: From 18cd061ec1ba8f1bf0eaf5114822e548a44f7cd9 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 1 Mar 2023 09:26:10 +0000 Subject: [PATCH 13/50] Correct version number in "This section" --- product_docs/docs/pgd/5/known_issues.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx index 30ffd8ac323..01e8ff48cfb 100644 --- a/product_docs/docs/pgd/5/known_issues.mdx +++ b/product_docs/docs/pgd/5/known_issues.mdx @@ -2,7 +2,7 @@ title: 'Known issues' --- -This section discusses currently known issues in EDB Postgres Distributed 4. +This section discusses currently known issues in EDB Postgres Distributed 5. ## Data Consistency From 458cc73095ddec15ad126e91ab78b3fc6c2da2c7 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 1 Mar 2023 14:29:30 +0000 Subject: [PATCH 14/50] Limitations updated, known issues linked, PGD --- product_docs/docs/pgd/4/bdr/index.mdx | 19 ------------ product_docs/docs/pgd/4/index.mdx | 2 +- product_docs/docs/pgd/4/known_issues.mdx | 1 + product_docs/docs/pgd/4/limitations.mdx | 29 +++++++++++++++++- product_docs/docs/pgd/5/index.mdx | 4 +-- product_docs/docs/pgd/5/known_issues.mdx | 1 + product_docs/docs/pgd/5/limitations.mdx | 34 +++++++++++++++++++++- product_docs/docs/pgd/5/overview/index.mdx | 27 ----------------- src/constants/products.js | 2 +- src/pages/index.js | 2 +- 10 files changed, 68 insertions(+), 53 deletions(-) diff --git a/product_docs/docs/pgd/4/bdr/index.mdx b/product_docs/docs/pgd/4/bdr/index.mdx index 278829faa71..cf6376d7d21 100644 --- a/product_docs/docs/pgd/4/bdr/index.mdx +++ b/product_docs/docs/pgd/4/bdr/index.mdx @@ -241,22 +241,3 @@ BDR provides controls to report and manage any skew that exists. 
BDR also provides row-version conflict detection, as described in [Conflict detection](conflicts). -## Limits - -BDR can run hundreds of nodes on good-enough hardware and network. However, -for mesh-based deployments, we generally don't recommend running more than -32 nodes in one cluster. -Each master node can be protected by multiple physical or logical standby nodes. -There's no specific limit on the number of standby nodes, -but typical usage is to have 2–3 standbys per master. Standby nodes don't -add connections to the mesh network, so they aren't included in the -32-node recommendation. - -BDR currently has a hard limit of no more than 1000 active nodes, as this is the -current maximum Raft connections allowed. - -BDR places a limit that at most 10 databases in any one PostgreSQL instance -can be BDR nodes across different BDR node groups. However, BDR works best if -you use only one BDR database per PostgreSQL instance. - -The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details). diff --git a/product_docs/docs/pgd/4/index.mdx b/product_docs/docs/pgd/4/index.mdx index 9fb52ede765..bf565641779 100644 --- a/product_docs/docs/pgd/4/index.mdx +++ b/product_docs/docs/pgd/4/index.mdx @@ -1,5 +1,5 @@ --- -title: "EDB Postgres Distributed" +title: "EDB Postgres Distributed (PGD)" indexCards: none redirects: - /pgd/4/compatibility_matrix diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx index a3d0a95a5e4..f203f3f069d 100644 --- a/product_docs/docs/pgd/4/known_issues.mdx +++ b/product_docs/docs/pgd/4/known_issues.mdx @@ -94,3 +94,4 @@ release. 
using a specific commit scope have finished before altering or removing it.
+Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index 27535f6e356..0d30fd60e0a 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -3,6 +3,31 @@
+This section covers design limitations of BDR that should be taken into account
+when planning your deployment.
+
+## Limits
+
+- BDR can run hundreds of nodes on good-enough hardware and network. However,
+for mesh-based deployments, we generally don't recommend running more than
+32 nodes in one cluster.
+Each master node can be protected by multiple physical or logical standby nodes.
+There's no specific limit on the number of standby nodes,
+but typical usage is to have 2–3 standbys per master. Standby nodes don't
+add connections to the mesh network, so they aren't included in the
+32-node recommendation.
+
+- BDR currently has a hard limit of no more than 1000 active nodes, as this is the
+current maximum Raft connections allowed.
+
+- BDR places a limit that at most 10 databases in any one PostgreSQL instance
+can be BDR nodes across different BDR node groups. However, BDR works best if
+you use only one BDR database per PostgreSQL instance.
+
+- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details).
+
+## Other Limitations
+
 This is a (non-comprehensive) list of limitations that are expected and are by design.
They are not expected to be resolved in the future. @@ -27,4 +52,6 @@ future. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. Using synchronous replication to other nodes, - including both logical and physical standby is possible. \ No newline at end of file + including both logical and physical standby is possible. + + diff --git a/product_docs/docs/pgd/5/index.mdx b/product_docs/docs/pgd/5/index.mdx index 659663bd225..adc2c7c3f43 100644 --- a/product_docs/docs/pgd/5/index.mdx +++ b/product_docs/docs/pgd/5/index.mdx @@ -1,5 +1,5 @@ --- -title: "EDB Postgres Distributed" +title: "EDB Postgres Distributed (PGD)" indexCards: none redirects: - /pgd/5/compatibility_matrix @@ -44,7 +44,7 @@ navigation: --- -EDB Postgres Distributed provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed PostgreSQL clusters with high availability up to five 9s. +EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed PostgreSQL clusters with high availability up to five 9s. By default EDB Postgres Distributed uses asynchronous replication, applying changes on the peer nodes only after the local commit. Additional levels of synchronicity can diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx index 01e8ff48cfb..665fb4beafb 100644 --- a/product_docs/docs/pgd/5/known_issues.mdx +++ b/product_docs/docs/pgd/5/known_issues.mdx @@ -73,3 +73,4 @@ release. attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it. 
+Details of other design or implementation [limitations](limitations) are also available. \ No newline at end of file diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index fce135d814b..109a9c2044c 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -2,8 +2,40 @@ title: "Limitations" --- +This section covers design limitations of PGD, that should be taken into account +when planning your deployment. -This is a (non-comprehensive) list of limitations that are +## Limits + +- PGD can run hundreds of nodes on good-enough hardware and network. However, +for mesh-based deployments, we generally don't recommend running more than +32 nodes in one cluster. +Each master node can be protected by multiple physical or logical standby nodes. +There's no specific limit on the number of standby nodes, +but typical usage is to have 2–3 standbys per master. Standby nodes don't +add connections to the mesh network, so they aren't included in the +32-node recommendation. + +- PGD currently has a hard limit of no more than 1000 active nodes, as this is the +current maximum Raft connections allowed. + +- Support for using EDB Postgres Distributed for multiple databases on the same +Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and +will no longer be supported with EDB Postgres Distributed 6. As we extend the +capabilities of the product, the additional complexity introduced operationally +and functionally is no longer viable in a multi-database design. + +- The minimum recommended number of nodes in a group is three to provide fault +tolerance for PGD's consensus mechanism. With just two nodes, consensus would +fail if one of the nodes was unresponsive. Consensus is required for some PGD +operations such as distributed sequence generation. 
For more information about +the consensus mechanism used by EDB Postgres Distributed, see +[Architectural details](../architectures/#architecture-details). + + +## Other Limitations + +This is a (non-comprehensive) list of other limitations that are expected and are by design. They are not expected to be resolved in the future and should be taken under consideration when planning your deployment. diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx index a949553eb82..19ecc0c52a5 100644 --- a/product_docs/docs/pgd/5/overview/index.mdx +++ b/product_docs/docs/pgd/5/overview/index.mdx @@ -221,32 +221,5 @@ PGD provides controls to report and manage any skew that exists. PGD also provides row-version conflict detection, as described in [Conflict detection](../consistency/conflicts). -## Limits - -PGD can run hundreds of nodes on good-enough hardware and network. However, -for mesh-based deployments, we generally don't recommend running more than -32 nodes in one cluster. -Each master node can be protected by multiple physical or logical standby nodes. -There's no specific limit on the number of standby nodes, -but typical usage is to have 2–3 standbys per master. Standby nodes don't -add connections to the mesh network, so they aren't included in the -32-node recommendation. - -PGD currently has a hard limit of no more than 1000 active nodes, as this is the -current maximum Raft connections allowed. - -Support for using EDB Postgres Distributed for multiple databases on the same -Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and -will no longer be supported with EDB Postgres Distributed 6. As we extend the -capabilities of the product, the additional complexity introduced operationally -and functionally is no longer viable in a multi-database design. - -The minimum recommended number of nodes in a group is three to provide fault -tolerance for PGD's consensus mechanism. 
With just two nodes, consensus would -fail if one of the nodes was unresponsive. Consensus is required for some PGD -operations such as distributed sequence generation. For more information about -the consensus mechanism used by EDB Postgres Distributed, see -[Architectural details](../architectures/#architecture-details). - diff --git a/src/constants/products.js b/src/constants/products.js index 242ad8c70ab..e8a9ea8574b 100644 --- a/src/constants/products.js +++ b/src/constants/products.js @@ -46,7 +46,7 @@ export const products = { pem: { name: "Postgres Enterprise Manager", iconName: IconNames.EDB_PEM }, pgBackRest: { name: "pgBackRest" }, pgbouncer: { name: "PgBouncer", iconName: IconNames.POSTGRESQL }, - pgd: { name: "EDB Postgres Distributed" }, + pgd: { name: "EDB Postgres Distributed (PGD)" }, pge: { name: "EDB Postgres Extended Server" }, pgpool: { name: "PgPool-II", iconName: IconNames.POSTGRESQL }, pglogical: { name: "pglogical" }, diff --git a/src/pages/index.js b/src/pages/index.js index 13bb1c87e5f..8c739df920f 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -229,7 +229,7 @@ const Page = () => ( headingText="High Availability" > - EDB Postgres Distributed + EDB Postgres Distributed (PGD) Failover Manager From 3451f25fa9712b0336dcc8f41362505c575e36b2 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 1 Mar 2023 11:09:58 -0500 Subject: [PATCH 15/50] possible first draft --- .../docs/edb_plus/41/installing/windows.mdx | 37 ++++++------------- 1 file changed, 11 insertions(+), 26 deletions(-) diff --git a/product_docs/docs/edb_plus/41/installing/windows.mdx b/product_docs/docs/edb_plus/41/installing/windows.mdx index 0df5d6b172b..e687a7819cb 100644 --- a/product_docs/docs/edb_plus/41/installing/windows.mdx +++ b/product_docs/docs/edb_plus/41/installing/windows.mdx @@ -6,7 +6,7 @@ redirects: --- -EDB provides a graphical interactive installer for Windows. 
You can access it using StackBuilder Plus, which is installed as part of EDB Postgres Advanced Server. With StackBuilder Plus, you can download an installer package for EDB*Plus and invoke the graphical installer. See [Using StackBuilder Plus](/edb_plus/latest/installing/windows/#using-stackbuilder-plus). +EDB provides a graphical interactive installer for Windows. You access it using StackBuilder Plus, which is installed as part of EDB Postgres Advanced Server. ## Prerequisites @@ -16,39 +16,24 @@ Before installing EDB\*Plus, you must first install Java (version 1.8 or later). ## Using StackBuilder Plus -If you have installed EDB Postgres Advanced Server, you can use StackBuilder Plus to invoke the graphical installer for EDB*Plus. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). +After installing EDB Postgres Advanced Server, you can use StackBuilder Plus to invoke the graphical installer for EDB*Plus. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). -1. In StackBuilder Plus, follow the prompts until you get to the module selection page. +1. Using the Windows start menu, open StackBuilder Plus and follow the prompts until you get to the module selection page. -1. Expand the **EnterpriseDB Tools** node and select **Replication Server**. +1. Expand the **Add-ons, tools, and utilities** node and select **EDB*Plus**. -1. Proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic. +1. Select **Next** and proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic. -. See [Using StackBuilder Plus](/epas/latest/epas_inst_windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). +## Using the graphical installer -1. 
In StackBuilder Plus, follow the prompts until you get to the module selection page. +1. Select the installation language and select **OK**. -1. Expand the **EnterpriseDB Tools** node and select **Replication Server**. +1. On the Setup EDB*Plus page, select **Next**. -1. Proceed to the [Using the graphical installer](#using-the-graphical-installer) section in this topic. +1. Browse to a directory where you want EDB*Plus to be installed, or allow the installer to install it in the default location. Select **Next**. +1. On the Ready to Install page, select **Next**. - - -Windows installers for EDB\*Plus are available via StackBuilder Plus; you can access StackBuilder Plus through the Windows start menu. After opening StackBuilder Plus and selecting the installation for which you want to install EDB\*Plus, expand the component selection screen tree control to select and download the EDB\*Plus installer. - -![The EDBPlus Welcome window](../images/edb_plus_welcome.png) - -1. The EDB\*Plus installer welcomes you to the setup wizard, as shown in the figure below. - -![The Installation Directory window](../images/installation_directory_new.png) - -1. Use the `Installation Directory` field to specify the directory in which you wish to install the EDB\*Plus software. Then, click `Next` to continue. - -![The Ready to Install window](../images/ready_to_install.png) - -1. The `Ready to Install` window notifies you when the installer has all of the information needed to install EDB\*Plus on your system. Click `Next` to install EDB\*Plus. - -![The installation is complete](../images/installation_complete.png) + An information box shows installation progress. This may take a few minutes. 1. When the installation has completed, select **Finish**. 
From 71cdd157ab582cbe24bf769830ef812ccff5cf66 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 1 Mar 2023 11:51:24 -0500 Subject: [PATCH 16/50] minor edits to one step --- product_docs/docs/eprs/7/installing/windows.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/eprs/7/installing/windows.mdx b/product_docs/docs/eprs/7/installing/windows.mdx index d779136b376..a300e3ff69b 100644 --- a/product_docs/docs/eprs/7/installing/windows.mdx +++ b/product_docs/docs/eprs/7/installing/windows.mdx @@ -62,7 +62,7 @@ If you are using EDB Postgres Advanced Server, you can invoke the graphical inst 1. If you do not want a particular Replication Server component installed, uncheck the box next to the component name. Select **Next**. -1. On the Account Registration page, select the option that applies to you. Select **Next**. +1. On the Account Registration page provide user account information and then select **Next**. - If you do not have an EnterpriseDB user account, you are directed to the registration page of the EnterpriseDB website. 
From 560dec3512fafcf19ec2a2ac523f7448d446536e Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Thu, 2 Mar 2023 16:39:51 +0530 Subject: [PATCH 17/50] Corrected one link Performance Diagnostic topic --- .../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +- .../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 1a78f111397..9f4a93bbc0b 100644 --- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -20,7 +20,7 @@ You can analyze the Wait States data on multiple levels by narrowing down your s Prerequisite: -- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). +- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. 
For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). - Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below: diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx index 3ecd0010f3d..2de298c6180 100644 --- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx @@ -21,7 +21,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel ## Prerequisites -- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). +- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). 
For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). - After you install the EDB Wait States module of EDB Postgres Advanced Server: 1. Configure the list of libraries in the `postgresql.conf` file as shown: From dde44682d7fa10c7a3addf0a8ca0a3563094d881 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Thu, 2 Mar 2023 09:39:07 -0500 Subject: [PATCH 18/50] Removed step for account registration --- product_docs/docs/eprs/7/installing/windows.mdx | 6 ------ 1 file changed, 6 deletions(-) diff --git a/product_docs/docs/eprs/7/installing/windows.mdx b/product_docs/docs/eprs/7/installing/windows.mdx index a300e3ff69b..accdf3071b5 100644 --- a/product_docs/docs/eprs/7/installing/windows.mdx +++ b/product_docs/docs/eprs/7/installing/windows.mdx @@ -62,12 +62,6 @@ If you are using EDB Postgres Advanced Server, you can invoke the graphical inst 1. If you do not want a particular Replication Server component installed, uncheck the box next to the component name. Select **Next**. -1. On the Account Registration page provide user account information and then select **Next**. - - - If you do not have an EnterpriseDB user account, you are directed to the registration page of the EnterpriseDB website. - - - If you already have an EnterpriseDB user account, enter the email address and password for your EnterpriseDB user account. Select **Next**. - 1. Enter information for the Replication Server administrator. !!! 
Note From 150f38c824b4129bec7cd9f78842f25605d62c1a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 3 Mar 2023 09:28:55 +0000 Subject: [PATCH 19/50] Adding in Multiple Databases/Single Limitations --- product_docs/docs/pgd/4/limitations.mdx | 54 ++++++++++++++++++++- product_docs/docs/pgd/5/limitations.mdx | 63 ++++++++++++++++++++----- 2 files changed, 105 insertions(+), 12 deletions(-) diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 0d30fd60e0a..03ce2c60af1 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -24,7 +24,59 @@ current maximum Raft connections allowed. can be BDR nodes across different BDR node groups. However, BDR works best if you use only one BDR database per PostgreSQL instance. -- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details). +- The minimum recommended number of nodes in a group is three to provide fault +tolerance for BDR's consensus mechanism. With just two nodes, consensus would +fail if one of the nodes was unresponsive. Consensus is required for some BDR +operations such as distributed sequence generation. For more information about +the consensus mechanism used by EDB Postgres Distributed, see +[Architectural details](/pgd/4/architectures/#architecture-details). + +- Support for using BDR for multiple databases on the same +Postgres instance is deprecated beginning with PGD 5 and +will no longer be supported with PGD 6. 
As we extend the
+capabilities of the product, the additional complexity introduced operationally
+and functionally is no longer viable in a multi-database design.
+
+## Limitations of Multiple Databases on a Single Instance
+
+As a best practice, configure only one database per PGD instance. The
+deployment automation with TPA and the tooling such as the CLI and proxy
+already codify that recommendation. Also, as noted above, support for
+multiple databases on the same PGD instance is being deprecated in PGD 5 and
+will no longer be supported in PGD 6.
+
+While it is still possible to host up to ten databases in a single instance,
+this incurs many immediate risks and current limitations:
+
+- Administrative commands need to be executed for each database if PGD
+  configuration changes are needed, which increases the risk of
+  inconsistencies and errors.
+
+- Each database needs to be monitored separately, adding overhead.
+
+- TPAexec assumes one database; additional coding is needed by customers
+  or PS in a post-deploy hook to set up replication for additional databases.
+
+- HARP works at the Postgres instance level, not at the database level,
+  meaning the leader node will be the same for all databases.
+
+- Each additional database increases the resource requirements on the server.
+  Each one needs its own set of worker processes maintaining replication
+  (e.g. logical workers, WAL senders, and WAL receivers). Each one also
+  needs its own set of connections to other instances in the replication
+  cluster. This might severely impact performance of all databases.
+
+- When rebuilding or adding a node, the physical initialization method
+  (`bdr_init_physical`) can be used for only one database; all other databases
+  must be initialized by logical replication, which can be problematic for
+  large databases because of the time it might take.
+
+- Synchronous replication methods (e.g.
CAMO, Group Commit) won’t work as
+  expected. Since the Postgres WAL is shared between the databases, a synchronous
+  commit confirmation may come from any database, not necessarily in the right
+  order of commits.
+
+- CLI and OTEL integration (new with v5) assumes one database.

 ## Other Limitations

diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 109a9c2044c..de5df5005c4 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -2,8 +2,8 @@
 title: "Limitations"
 ---

-This section covers design limitations of PGD, that should be taken into account
-when planning your deployment.
+This section covers design limitations of EDB Postgres Distributed (PGD) that
+should be taken into account when planning your deployment.

 ## Limits

@@ -19,12 +19,6 @@ add connections to the mesh network, so they aren't included in the

 - PGD currently has a hard limit of no more than 1000 active nodes, as this is the
 current maximum Raft connections allowed.

-- Support for using EDB Postgres Distributed for multiple databases on the same
-Postgres instance is deprecated beginning with EDB Postgres Distributed 5 and
-will no longer be supported with EDB Postgres Distributed 6. As we extend the
-capabilities of the product, the additional complexity introduced operationally
-and functionally is no longer viable in a multi-database design.
-
 - The minimum recommended number of nodes in a group is three to provide fault
 tolerance for PGD's consensus mechanism. With just two nodes, consensus would
 fail if one of the nodes was unresponsive. Consensus is required for some PGD
 operations such as distributed sequence generation. For more information about
 the consensus mechanism used by EDB Postgres Distributed, see
 [Architectural details](../architectures/#architecture-details).

+- Support for using PGD for multiple databases on the same
+Postgres instance is deprecated beginning with PGD 5 and
+will no longer be supported with PGD 6. As we extend the
+capabilities of the product, the additional complexity introduced operationally
+and functionally is no longer viable in a multi-database design.
+
+## Limitations of Multiple Databases on a Single Instance
+
+As a best practice, configure only one database per PGD instance. The
+deployment automation with TPA and the tooling such as the CLI and proxy
+already codify that recommendation. Also, as noted above, support for
+multiple databases on the same PGD instance is being deprecated in PGD 5 and
+will no longer be supported in PGD 6.
+
+While it is still possible to host up to ten databases in a single instance,
+this incurs many immediate risks and current limitations:
+
+- Administrative commands need to be executed for each database if PGD
+  configuration changes are needed, which increases the risk of
+  inconsistencies and errors.
+
+- Each database needs to be monitored separately, adding overhead.
+
+- TPAexec assumes one database; additional coding is needed by customers
+  or PS in a post-deploy hook to set up replication for additional databases.
+
+- HARP works at the Postgres instance level, not at the database level,
+  meaning the leader node will be the same for all databases.
+
+- Each additional database increases the resource requirements on the server.
+  Each one needs its own set of worker processes maintaining replication
+  (e.g. logical workers, WAL senders, and WAL receivers). Each one also
+  needs its own set of connections to other instances in the replication
+  cluster. This might severely impact performance of all databases.
+
+- When rebuilding or adding a node, the physical initialization method
+  (`bdr_init_physical`) can be used for only one database; all other databases
+  must be initialized by logical replication, which can be problematic for
+  large databases because of the time it might take.
+
+- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as
+  expected. Since the Postgres WAL is shared between the databases, a synchronous
+  commit confirmation may come from any database, not necessarily in the right
+  order of commits.
+
+- CLI and OTEL integration (new with v5) assumes one database.
+
 ## Other Limitations

-This is a (non-comprehensive) list of other limitations that are
-expected and are by design. They are not expected to be resolved in the
-future and should be taken under consideration when planning your deployment.
+This is a (non-comprehensive) list of other limitations that are expected and
+are by design. They are not expected to be resolved in the future and should be taken
+into consideration when planning your deployment.
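The node-count guidance these limitations sections repeat (a three-node minimum for consensus fault tolerance, a 32-node recommendation for mesh deployments) follows from simple arithmetic. As an illustrative aside only — plain Python for explanation, not PGD code or configuration:

```python
# Illustrative sketch of the node-count arithmetic behind the limits text.
# Not PGD code: quorum() models a generic Raft-style majority.

def quorum(n: int) -> int:
    """Smallest majority among n voting nodes."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Nodes that can fail while a majority remains reachable."""
    return n - quorum(n)

def mesh_links(n: int) -> int:
    """Pairwise connections in a fully meshed cluster of n nodes."""
    return n * (n - 1) // 2

print(tolerated_failures(2))  # 0 -- two nodes tolerate no failure
print(tolerated_failures(3))  # 1 -- three nodes survive one unresponsive node
print(mesh_links(32))         # 496 pairwise links in a 32-node mesh
```

With two nodes, losing either one leaves the survivor short of the two-vote majority, so consensus-dependent operations such as distributed sequence generation stall; a third node restores a reachable majority.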
Combining physical standbys and EDB Postgres From fd62d55b5c73c3c826d05559e26a1b08f40b5ff4 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Fri, 3 Mar 2023 16:49:43 +0530 Subject: [PATCH 20/50] Changes based on review comments --- .../04_toc_pem_features/21_performance_diagnostic.mdx | 6 +++++- .../pem/9/tuning_performance/performance_diagnostic.mdx | 6 +++++- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 9f4a93bbc0b..b9729420142 100644 --- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -20,7 +20,11 @@ You can analyze the Wait States data on multiple levels by narrowing down your s Prerequisite: -- For PostgreSQL, you need to install `edb_wait_states_` package from `edb.repo` where `` is the version of PostgreSQL Server. You can refer to [EDB Build Repository](https://repos.enterprisedb.com/) for the steps to install this package. For Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). +- Install the EDB wait states package: + + - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/). + + - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). 
- Once you ensure that EDB Wait States module of EDB Postgres Advanced Server is installed, then configure the list of libraries in the `postgresql.conf` file as below: diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx index 2de298c6180..650b136fa90 100644 --- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx @@ -21,7 +21,11 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel ## Prerequisites -- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you must install EDB wait states; see [EDB Wait States Background Worker (EWSBW)](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states-background-worker-ewsbw) under [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). +- Install the EDB wait states package: + + - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/). + + - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). - After you install the EDB Wait States module of EDB Postgres Advanced Server: 1. 
Configure the list of libraries in the `postgresql.conf` file as shown: From 259ac63f5dc637c88bf6d269884a9cb08d8d50f7 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Fri, 3 Mar 2023 12:23:49 +0000 Subject: [PATCH 21/50] Fix heading cases --- product_docs/docs/pgd/4/limitations.mdx | 4 ++-- product_docs/docs/pgd/5/limitations.mdx | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 03ce2c60af1..4fabf0fe124 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -37,7 +37,7 @@ will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. -## Limitations of Multiple Databases on a Single Instance +## Limitations of multiple databases on a single instance It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI @@ -78,7 +78,7 @@ this incurs many immediate risks and current limitations: - CLI and OTEL integration (new with v5) assumes one database. -## Other Limitations +## Other limitations This is a (non-comprehensive) list of limitations that are expected and are by design. They are not expected to be resolved in the diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index de5df5005c4..af7580268db 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -32,7 +32,7 @@ will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. 
-## Limitations of Multiple Databases on a Single Instance +## Limitations of multiple databases on a single instance It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI @@ -74,7 +74,7 @@ this incurs many immediate risks and current limitations: - CLI and OTEL integration (new with v5) assumes one database. -## Other Limitations +## Other limitations This is a (non-comprehensive) list of other limitations that are expected and are by design. They are not expected to be resolved in the future and should be taken From f48bf67a5f057010ae5504d0957c1f0148512ee5 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 20 Feb 2023 18:02:12 +0530 Subject: [PATCH 22/50] PEM: Adding db server matrix as per PEM-4741 --- .../pem_server_inst_linux/prerequisites.mdx | 4 ++++ product_docs/docs/pem/8/supported_platforms.mdx | 6 ++---- product_docs/docs/pem/9/installing/prerequisites.mdx | 10 +++++++--- product_docs/docs/pem/9/supported_platforms.mdx | 6 ++---- 4 files changed, 15 insertions(+), 11 deletions(-) diff --git a/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx b/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx index 9f8018488e0..6b51e83dc07 100644 --- a/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx +++ b/product_docs/docs/pem/8/installing_pem_server/pem_server_inst_linux/prerequisites.mdx @@ -115,3 +115,7 @@ Make sure the components Postgres Enterprise Manager depends on, such as python3 ```shell zypper update ``` + +## Supported locales + +Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors. 
diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index c677ea357aa..943c46177e1 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Supported platforms and locales" +title: "Platform compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. --- @@ -8,6 +8,4 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. -## Supported locales - -Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors. +## Database compatibility \ No newline at end of file diff --git a/product_docs/docs/pem/9/installing/prerequisites.mdx b/product_docs/docs/pem/9/installing/prerequisites.mdx index a1a24edcac8..26b1ec90678 100644 --- a/product_docs/docs/pem/9/installing/prerequisites.mdx +++ b/product_docs/docs/pem/9/installing/prerequisites.mdx @@ -133,6 +133,10 @@ To install a Postgres Enterprise Manager server on Linux, you may need to perfor For SLES: - ```shell - zypper update - ``` + ```shell + zypper update + ``` + +## Supported locales + +Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors. 
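The supported-locales prerequisite above can be verified on the host before installing the PEM server. This is an illustrative sketch using generic POSIX locale queries, not a PEM-supplied tool; the `en_US` and period-separator expectations come from the prerequisite text:

```shell
# Show the locale that the PEM server processes will inherit.
# PEM expects English(US) (en_US).
locale | grep '^LANG='

# Show the decimal separator currently in effect; PEM expects a period (.).
locale -k LC_NUMERIC | grep '^decimal_point='
```

If `decimal_point` is anything other than `"."`, set `LANG` or `LC_ALL` to an `en_US` locale before running the installer.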
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index c677ea357aa..9c91759489c 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Supported platforms and locales" +title: "Platform compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. --- @@ -8,6 +8,4 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. -## Supported locales - -Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors. +## Database compatibility From 7e09b61e3e684b233e13d8f330277a0901189a21 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Mon, 20 Feb 2023 18:14:57 +0530 Subject: [PATCH 23/50] Updated title --- product_docs/docs/pem/8/supported_platforms.mdx | 2 +- product_docs/docs/pem/9/supported_platforms.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index 943c46177e1..6361603d256 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Platform compatibility" +title: "Product compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. 
--- diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 9c91759489c..1b02c515179 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Platform compatibility" +title: "Product compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. --- From c214bfd58e40d258bd2f4bc846986bde9c275540 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Mon, 20 Feb 2023 23:48:32 +0530 Subject: [PATCH 24/50] Added compatibility matrix table and text Added compatibility matrix to the Product compatibility topic in v8 and 9 --- product_docs/docs/pem/8/supported_platforms.mdx | 12 +++++++++++- product_docs/docs/pem/9/supported_platforms.mdx | 10 ++++++++++ 2 files changed, 21 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index 6361603d256..d540a62bb14 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -8,4 +8,14 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. -## Database compatibility \ No newline at end of file +## Database compatibility + +The following table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x. 
+ +| |**PEM 8.x** | | +|:----------|:-----------------------------|:-------------------| +| |**As a monitored instance** |**As a backend** | +|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGD** |3, 4 |3, 4 | diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 1b02c515179..258898121bc 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -9,3 +9,13 @@ For information about the platforms and versions supported by PEM, see [Platform Postgres Enterprise Manager 8.3 and later is supported on SLES. ## Database compatibility + +The following table provides information about the PEM versions and their supported corresponding versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD). + +| |**PEM 8.x** | |**PEM 9.x** | | +|:----------|:-----------------------------|:-------------------|:------------------------------|:------------------------| +| |**As a monitored instance** |**As a backend** |**As a monitored instance** |**As a backend** | +|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGD** |3, 4 |3, 4 |3, 4, 5 |3, 4, 5 | From 813055c43f9c8130145d9bf691344a67bdcb2b8b Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Tue, 21 Feb 2023 11:26:42 +0530 Subject: [PATCH 25/50] Changes based on review comments Changes to both v8 and 9 product compatibility topics --- product_docs/docs/pem/8/supported_platforms.mdx | 16 ++++++++-------- product_docs/docs/pem/9/supported_platforms.mdx | 16 ++++++++-------- 2 files changed, 16 insertions(+), 16 deletions(-) diff --git 
a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index d540a62bb14..dcb40ed9a28 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -10,12 +10,12 @@ For information about the platforms and versions supported by PEM, see [Platform ## Database compatibility -The following table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x. +This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD). -| |**PEM 8.x** | | -|:----------|:-----------------------------|:-------------------| -| |**As a monitored instance** |**As a backend** | -|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | -|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGD** |3, 4 |3, 4 | +| |**PEM 8.x** | | +|:----------|:---------------------------|:----------------| +| |**As a monitored instance** |**As a backend** | +|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGD** |3, 4 |3, 4 | diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 258898121bc..29ee9db5667 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -10,12 +10,12 @@ For information about the platforms and versions supported by PEM, see [Platform ## Database compatibility -The following table provides information about the PEM versions and their supported corresponding versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD). 
+This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD). -| |**PEM 8.x** | |**PEM 9.x** | | -|:----------|:-----------------------------|:-------------------|:------------------------------|:------------------------| -| |**As a monitored instance** |**As a backend** |**As a monitored instance** |**As a backend** | -|**PG** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGD** |3, 4 |3, 4 |3, 4, 5 |3, 4, 5 | +| |**PEM 9.x** | | +|:----------|:---------------------------|:-------------------| +| |**As a monitored instance** |**As a backend** | +|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGD** |3, 4, 5 |3, 4, 5 | From c56f6bdc3957682bc32e06676d9ff6a86948391a Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Tue, 21 Feb 2023 14:07:23 +0530 Subject: [PATCH 26/50] Update 1 based on review comments Both v8 and 9 topic changed --- product_docs/docs/pem/8/supported_platforms.mdx | 15 +++++++-------- product_docs/docs/pem/9/supported_platforms.mdx | 5 ++--- 2 files changed, 9 insertions(+), 11 deletions(-) diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index dcb40ed9a28..ac02813ddcd 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -10,12 +10,11 @@ For information about the platforms and versions supported by PEM, see [Platform ## Database compatibility -This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and 
EDB Postgres Distributed (PGD). +This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x. -| |**PEM 8.x** | | -|:----------|:---------------------------|:----------------| -| |**As a monitored instance** |**As a backend** | -|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGD** |3, 4 |3, 4 | +| |**As a monitored instance** |**As a backend** | +|:----------|:---------------------------|:------------------| +|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGD** |3, 4 |3, 4 | diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 29ee9db5667..6c4073bd54a 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -10,11 +10,10 @@ For information about the platforms and versions supported by PEM, see [Platform ## Database compatibility -This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD). +This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x. 
-| |**PEM 9.x** | | -|:----------|:---------------------------|:-------------------| | |**As a monitored instance** |**As a backend** | +|:----------|:---------------------------|:-------------------| |**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | |**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | |**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | From 350c261f4bf1133ee1a6766abb37bc460cf313bf Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Tue, 21 Feb 2023 15:00:48 +0530 Subject: [PATCH 27/50] Changed table header --- product_docs/docs/pem/8/supported_platforms.mdx | 12 ++++++------ product_docs/docs/pem/9/supported_platforms.mdx | 12 ++++++------ 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index ac02813ddcd..4961d6c19a5 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -12,9 +12,9 @@ For information about the platforms and versions supported by PEM, see [Platform This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x. 
-| |**As a monitored instance** |**As a backend** | -|:----------|:---------------------------|:------------------| -|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGD** |3, 4 |3, 4 | +| |**Monitored Instance** |**Backend Instance** | +|:----------|:----------------------|:----------------------| +|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PGD** |3, 4 |3, 4 | diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 6c4073bd54a..86fe37c3527 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -12,9 +12,9 @@ For information about the platforms and versions supported by PEM, see [Platform This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x. 
-| |**As a monitored instance** |**As a backend** | -|:----------|:---------------------------|:-------------------| -|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGD** |3, 4, 5 |3, 4, 5 | +| |**Monitored Instance** |**Backend Instance** | +|:----------|:---------------------------|:-----------------------| +|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGD** |3, 4, 5 |3, 4, 5 | From d829db3c1045e45c05c228300ed13a68423b596d Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Tue, 21 Feb 2023 19:44:07 +0530 Subject: [PATCH 28/50] Updated supported_platforms file for v8 based on review comments --- product_docs/docs/pem/8/supported_platforms.mdx | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index 4961d6c19a5..66b0efffd8a 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -8,13 +8,13 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. -## Database compatibility +## Postgres compatibility -This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 8.x. +The table lists the compatibility matrix information for PEM 8.x. 
-| |**Monitored Instance** |**Backend Instance** | -|:----------|:----------------------|:----------------------| -|**EPAS** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PG** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGE** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PGD** |3, 4 |3, 4 | +| |**Monitored Instance** |**Backend Instance** | +|:-----------------------------------------|:----------------------|:----------------------| +|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EDB Postgres Distributed (PGD)** |3, 4 | | From 9f58d25d8425b6160860a835582190742784260e Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Thu, 23 Feb 2023 10:08:58 +0530 Subject: [PATCH 29/50] Added footnote for PGD5 supported from PEM9.1 and later --- product_docs/docs/pem/9/supported_platforms.mdx | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 86fe37c3527..6586045672f 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -17,4 +17,7 @@ This table provides information about the supported versions of PostgreSQL (PG), |**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | |**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | |**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGD** |3, 4, 5 |3, 4, 5 | +|**PGD** |3, 4, 5[^1] | | + +[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported. 
+ From 79d6d029803116eae53f860ec1a4af179601ba19 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Thu, 23 Feb 2023 12:59:03 +0530 Subject: [PATCH 30/50] Moved Postgres compatibility to landing page for version 8 and separate postgres compatibility page added for PEM 9 --- product_docs/docs/pem/8/index.mdx | 14 +++++++++++++- product_docs/docs/pem/8/supported_platforms.mdx | 13 +------------ product_docs/docs/pem/9/index.mdx | 1 + .../docs/pem/9/supported_database_versions.mdx | 14 ++++++++++++++ product_docs/docs/pem/9/supported_platforms.mdx | 16 +--------------- 5 files changed, 30 insertions(+), 28 deletions(-) create mode 100644 product_docs/docs/pem/9/supported_database_versions.mdx diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx index 7942425f9cc..4418144c2c7 100644 --- a/product_docs/docs/pem/8/index.mdx +++ b/product_docs/docs/pem/8/index.mdx @@ -58,4 +58,16 @@ redirects: Welcome to Postgres Enterprise Manager (PEM). PEM consists of components that provide the management and analytical functionality for your EDB Postgres Advanced Server or PostgreSQL database. PEM is based on the Open Source pgAdmin 4 project. -PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users alike, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects. +PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects and the monitoring of multiple Postgres servers through a single interface. + +## Postgres compatibility + +The table lists the compatibility matrix information for PEM 8.x.
+ +| |**Monitored Instance** |**Backend Instance** | +|:-----------------------------------------|:----------------------|:----------------------| +|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EDB Postgres Distributed (PGD)** |3, 4 | | + diff --git a/product_docs/docs/pem/8/supported_platforms.mdx b/product_docs/docs/pem/8/supported_platforms.mdx index 66b0efffd8a..dd9453fb949 100644 --- a/product_docs/docs/pem/8/supported_platforms.mdx +++ b/product_docs/docs/pem/8/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Product compatibility" +title: "Platform compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. --- @@ -7,14 +7,3 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. - -## Postgres compatibility - -The table lists the compatibility matrix information for PEM 8.x. 
- -| |**Monitored Instance** |**Backend Instance** | -|:-----------------------------------------|:----------------------|:----------------------| -|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**EDB Postgres Distributed (PGD)** |3, 4 | | diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx index 85c491df05b..884f03c2e41 100644 --- a/product_docs/docs/pem/9/index.mdx +++ b/product_docs/docs/pem/9/index.mdx @@ -5,6 +5,7 @@ directoryDefaults: navigation: - pem_rel_notes - supported_platforms + - supported_database_versions - prerequisites_for_installing_pem_server - "#Planning" - pem_architecture diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx new file mode 100644 index 00000000000..45658066f2f --- /dev/null +++ b/product_docs/docs/pem/9/supported_database_versions.mdx @@ -0,0 +1,14 @@ +--- +title: "Postgres compatibility" +--- + +This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x. + +| |**Monitored Instance** |**Backend Instance** | +|:----------|:---------------------------|:-----------------------| +|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PGD** |3, 4, 5[^1] | | + +[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported. 
diff --git a/product_docs/docs/pem/9/supported_platforms.mdx b/product_docs/docs/pem/9/supported_platforms.mdx index 6586045672f..dd9453fb949 100644 --- a/product_docs/docs/pem/9/supported_platforms.mdx +++ b/product_docs/docs/pem/9/supported_platforms.mdx @@ -1,5 +1,5 @@ --- -title: "Product compatibility" +title: "Platform compatibility" # This is a new file and content is moved from 02_pem_hardware_software_requirements.dx file. --- @@ -7,17 +7,3 @@ For information about the platforms and versions supported by PEM, see [Platform !!! Note Postgres Enterprise Manager 8.3 and later is supported on SLES. - -## Database compatibility - -This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x. - -| |**Monitored Instance** |**Backend Instance** | -|:----------|:---------------------------|:-----------------------| -|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGD** |3, 4, 5[^1] | | - -[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported. 
- From feba22a83f303b8868bbd1a56494b347b487abf9 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 24 Feb 2023 16:33:36 +0530 Subject: [PATCH 31/50] Update product_docs/docs/pem/9/supported_database_versions.mdx Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> --- product_docs/docs/pem/9/supported_database_versions.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx index 45658066f2f..e40170631d2 100644 --- a/product_docs/docs/pem/9/supported_database_versions.mdx +++ b/product_docs/docs/pem/9/supported_database_versions.mdx @@ -11,4 +11,4 @@ This table provides information about the supported versions of PostgreSQL (PG), |**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | |**PGD** |3, 4, 5[^1] | | -[^1]: From PEM version 9.1 and later, EDB Postgres Distributed (PGD) 5 is supported. +[^1]: PEM version 9.1 and later supports EDB Postgres Distributed (PGD) 5. From 3328038812560f962d57f5eac4164db3bbc34c2f Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 24 Feb 2023 16:34:52 +0530 Subject: [PATCH 32/50] Update product_docs/docs/pem/8/index.mdx Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> --- product_docs/docs/pem/8/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx index 4418144c2c7..f4d100f9801 100644 --- a/product_docs/docs/pem/8/index.mdx +++ b/product_docs/docs/pem/8/index.mdx @@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to ## Postgres compatibility -The table lists the compatibility matrix information for PEM 8.x. +The table provides information about the supported versions of Postgres for PEM 8.x. 
| |**Monitored Instance** |**Backend Instance** | |:-----------------------------------------|:----------------------|:----------------------| From f99b747442e1d751fc3813b58eb83b24a0bc3ba0 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Fri, 3 Mar 2023 13:58:46 +0530 Subject: [PATCH 33/50] Updated matrix table, PGD row removed These updates are based on new information provided by Simon --- product_docs/docs/pem/8/index.mdx | 12 ++++++------ .../docs/pem/9/supported_database_versions.mdx | 16 +++++++--------- 2 files changed, 13 insertions(+), 15 deletions(-) diff --git a/product_docs/docs/pem/8/index.mdx index f4d100f9801..e08dd30d766 100644 --- a/product_docs/docs/pem/8/index.mdx +++ b/product_docs/docs/pem/8/index.mdx @@ -64,10 +64,10 @@ PEM is a comprehensive database design and management system. PEM is designed to The table provides information about the supported versions of Postgres for PEM 8.x. -| |**Monitored Instance** |**Backend Instance** | -|:-----------------------------------------|:----------------------|:----------------------| -|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |11, 12, 13, 14 | -|**EDB Postgres Distributed (PGD)** |3, 4 | | +| |**Monitored Instance** |**Backend Instance** | +|:-----------------------------------------|:----------------------|:--------------------| +|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**PostgreSQL (PG)** |11, 12, 13, 14 |11, 12, 13, 14 | +|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14 |Note[^1] | +Note[^1]: PEM will support PGE as a backend when `sslutils` is available for this server distribution. It is expected to be available in the second quarter of 2023.
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx index e40170631d2..9fc8b529e77 100644 --- a/product_docs/docs/pem/9/supported_database_versions.mdx +++ b/product_docs/docs/pem/9/supported_database_versions.mdx @@ -1,14 +1,12 @@ --- title: "Postgres compatibility" --- +The table provides information about the supported versions of Postgres for PEM 9.x. -This table provides information about the supported versions of PostgreSQL (PG), EDB Postgres Extended Server (PGE), EDB Postgres Advanced Server (EPAS), and EDB Postgres Distributed (PGD) for PEM 9.x. +| |**Monitored Instance** |**Backend Instance** | +|:-----------------------------------------|:---------------------------|:---------------------| +|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] | -| |**Monitored Instance** |**Backend Instance** | -|:----------|:---------------------------|:-----------------------| -|**EPAS** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PG** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGE** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PGD** |3, 4, 5[^1] | | - -[^1]: PEM version 9.1 and later supports EDB Postgres Distributed (PGD) 5. +Note[^1]: PEM will support PGE as a backend when sslutils is available for this server distribution. It is expected to be available in the second quarter of 2023. 
From f916a475a76d189defee4a66be267487e4e74317 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Mon, 6 Mar 2023 11:33:23 +0530 Subject: [PATCH 34/50] Shifted matrix to index files similar to v8 Based on Simon comments --- product_docs/docs/pem/9/index.mdx | 12 ++++++++++++ .../docs/pem/9/supported_database_versions.mdx | 12 ------------ 2 files changed, 12 insertions(+), 12 deletions(-) delete mode 100644 product_docs/docs/pem/9/supported_database_versions.mdx diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx index 884f03c2e41..3be9ae244b5 100644 --- a/product_docs/docs/pem/9/index.mdx +++ b/product_docs/docs/pem/9/index.mdx @@ -59,3 +59,15 @@ redirects: Welcome to Postgres Enterprise Manager (PEM). PEM consists of components that provide the management and analytical functionality for your EDB Postgres Advanced Server or PostgreSQL database. PEM is based on the Open Source pgAdmin 4 project. PEM is a comprehensive database design and management system. PEM is designed to meet the needs of both novice and experienced Postgres users alike, providing a powerful graphical interface that simplifies the creation, maintenance, and use of database objects. + +## Postgres compatibility + +The table provides information about the supported versions of Postgres for PEM 9.x. + +| |**Monitored Instance** |**Backend Instance** | +|:-----------------------------------------|:---------------------------|:---------------------| +|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | +|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] | + +Note[^1]: PEM will support PGE as a backend when sslutils is available for this server distribution. It is expected to be available in the second quarter of 2023. 
diff --git a/product_docs/docs/pem/9/supported_database_versions.mdx b/product_docs/docs/pem/9/supported_database_versions.mdx deleted file mode 100644 index 9fc8b529e77..00000000000 --- a/product_docs/docs/pem/9/supported_database_versions.mdx +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: "Postgres compatibility" ---- -The table provides information about the supported versions of Postgres for PEM 9.x. - -| |**Monitored Instance** |**Backend Instance** | -|:-----------------------------------------|:---------------------------|:---------------------| -|**EDB Postgres Advanced Server (EPAS)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**PostgreSQL (PG)** |11, 12, 13, 14, 15 |11, 12, 13, 14, 15 | -|**EDB Postgres Extended Server (PGE)** |11, 12, 13, 14, 15 |Note[^1] | - -Note[^1]: PEM will support PGE as a backend when sslutils is available for this server distribution. It is expected to be available in the second quarter of 2023. From 86564650b850a9e176ed51e24a38580432f5f613 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 6 Mar 2023 10:52:26 +0000 Subject: [PATCH 35/50] Tidied formatting, removed "above" reference --- product_docs/docs/pgd/4/limitations.mdx | 94 +++++----------------- product_docs/docs/pgd/5/limitations.mdx | 100 ++++++------------------ 2 files changed, 42 insertions(+), 152 deletions(-) diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx index 4fabf0fe124..fdfdcbbde29 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -8,102 +8,46 @@ when planning your deployment. ## Limits -- BDR can run hundreds of nodes on good-enough hardware and network. However, -for mesh-based deployments, we generally don't recommend running more than -32 nodes in one cluster. -Each master node can be protected by multiple physical or logical standby nodes. -There's no specific limit on the number of standby nodes, -but typical usage is to have 2–3 standbys per master. 
Standby nodes don't -add connections to the mesh network, so they aren't included in the +- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation. -- BDR currently has a hard limit of no more than 1000 active nodes, as this is the -current maximum Raft connections allowed. +- BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed. -- BDR places a limit that at most 10 databases in any one PostgreSQL instance -can be BDR nodes across different BDR node groups. However, BDR works best if -you use only one BDR database per PostgreSQL instance. +- BDR places a limit that at most 10 databases in any one PostgreSQL instance can be BDR nodes across different BDR node groups. However, BDR works best if you use only one BDR database per PostgreSQL instance. -- The minimum recommended number of nodes in a group is three to provide fault -tolerance for BDR's consensus mechanism. With just two nodes, consensus would -fail if one of the nodes was unresponsive. Consensus is required for some BDR -operations such as distributed sequence generation. For more information about -the consensus mechanism used by EDB Postgres Distributed, see -[Architectural details](/pgd/4/architectures/#architecture-details). +- The minimum recommended number of nodes in a group is three to provide fault tolerance for BDR's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some BDR operations such as distributed sequence generation. 
For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details). -- Support for using BDR for multiple databases on the same -Postgres instance is deprecated beginning with PGD 5 and -will no longer be supported with PGD 6. As we extend the -capabilities of the product, the additional complexity introduced operationally -and functionally is no longer viable in a multi-database design. +- Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. ## Limitations of multiple databases on a single instance -It is best practice and recommended that only one database per PGD instance be -configured. The deployment automation with TPA and the tooling such as the CLI -and proxy already codify that recommendation. Also, as noted above, support for -multiple databases on the same PGD instance is being deprecated in PGD 5 and -will no longer be supported in PGD 6. +It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted [in the Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6. 
-While it is still possible to host up to ten databases in a single instance, -this incurs many immediate risks and current limitations: +While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations: -- Administrative commands need to be executed for each database if PGD - configuration changes are needed, which increases risk for potential - inconsistencies and errors. +- Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. - Each database needs to be monitored separately, adding overhead. -- TPAexec assumes one database; additional coding is needed by customers - or PS in a post-deploy hook to set up replication for additional databases. +- TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. -- HARP works at the Postgres instance level, not at the database level, - meaning the leader node will be the same for all databases. +- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. -- Each additional database increases the resource requirements on the server. - Each one needs its own set of worker processes maintaining replication - (e.g. logical workers, WAL senders, and WAL receivers). Each one also - needs its own set of connections to other instances in the replication - cluster. This might severely impact performance of all databases. +- Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. 
-- When rebuilding or adding a node, the physical initialization method - (“bdr_init_physical”) for one database can only be used for one node, - all other databases will have to be initialized by logical replication, - which can be problematic for large databases because of the time it might take. +- When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. -- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as - expected. Since the Postgres WAL is shared between the databases, a synchronous - commit confirmation may come from any database, not necessarily in the right - order of commits. +- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. - CLI and OTEL integration (new with v5) assumes one database. ## Other limitations -This is a (non-comprehensive) list of limitations that are -expected and are by design. They are not expected to be resolved in the -future. - -- Replacing a node with its physical standby doesn't work for nodes that - use CAMO/Eager/Group Commit. Combining physical standbys and BDR in - general isn't recommended, even if otherwise possible. - -- A `galloc` sequence might skip some chunks if the - sequence is created in a rolled back transaction and then created - again with the same name. This can also occur if it is created and dropped when DDL - replication isn't active and then it is created again when DDL - replication is active. - The impact of the problem is mild, because the sequence - guarantees aren't violated. The sequence skips only some - initial chunks. 
Also, as a workaround you can specify the - starting value for the sequence as an argument to the - `bdr.alter_sequence_set_kind()` function. - -- Legacy BDR synchronous replication uses a mechanism for transaction - confirmation different from the one used by CAMO, Eager, and Group Commit. - The two are not compatible and must not be used together. Therefore, nodes - that appear in `synchronous_standby_names` must not be part of CAMO, Eager, - or Group Commit configuration. Using synchronous replication to other nodes, - including both logical and physical standby is possible. +This is a (non-comprehensive) list of limitations that are expected and are by design. They are not expected to be resolved in the future. +- Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and BDR in general isn't recommended, even if otherwise possible. + +- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. + +- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. Using synchronous replication to other nodes, including both logical and physical standby is possible. 
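The `galloc` workaround described in the limitation above (specifying a starting value as an argument to `bdr.alter_sequence_set_kind()`) could be sketched roughly as follows. This is an illustration, not taken from this patch: `my_seq` is a hypothetical sequence name, and passing the starting value as a third positional argument is assumed from the prose; check the function's signature for your BDR version before relying on it.

```sql
-- Hedged sketch: 'my_seq' is a hypothetical sequence name, and the third
-- argument as the starting value is assumed from the limitation text
-- ("specify the starting value ... as an argument").
SELECT bdr.alter_sequence_set_kind('my_seq'::regclass, 'galloc', 10000);
```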
diff --git a/product_docs/docs/pgd/5/limitations.mdx index af7580268db..49a668064d7 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -2,101 +2,47 @@ title: "Limitations" --- -This section covers design limitations of EDB Postgres Distributed (PGD), that -should be taken into account when planning your deployment. +This section covers design limitations of EDB Postgres Distributed (PGD) that should be taken into account when planning your deployment. ## Limits -- PGD can run hundreds of nodes on good-enough hardware and network. However, -for mesh-based deployments, we generally don't recommend running more than -32 nodes in one cluster. -Each master node can be protected by multiple physical or logical standby nodes. -There's no specific limit on the number of standby nodes, -but typical usage is to have 2–3 standbys per master. Standby nodes don't -add connections to the mesh network, so they aren't included in the -32-node recommendation. - -- PGD currently has a hard limit of no more than 1000 active nodes, as this is the -current maximum Raft connections allowed. - -- The minimum recommended number of nodes in a group is three to provide fault -tolerance for PGD's consensus mechanism. With just two nodes, consensus would -fail if one of the nodes was unresponsive. Consensus is required for some PGD -operations such as distributed sequence generation. For more information about -the consensus mechanism used by EDB Postgres Distributed, see -[Architectural details](../architectures/#architecture-details). - -- Support for using PGD for multiple databases on the same -Postgres instance is deprecated beginning with PGD 5 and -will no longer be supported with PGD 6. As we extend the -capabilities of the product, the additional complexity introduced operationally -and functionally is no longer viable in a multi-database design.
+- PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation. + +- PGD currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed. + +- The minimum recommended number of nodes in a group is three to provide fault tolerance for PGD's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some PGD operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](../architectures/#architecture-details). + +- Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. ## Limitations of multiple databases on a single instance -It is best practice and recommended that only one database per PGD instance be -configured. The deployment automation with TPA and the tooling such as the CLI -and proxy already codify that recommendation. Also, as noted above, support for -multiple databases on the same PGD instance is being deprecated in PGD 5 and -will no longer be supported in PGD 6. +It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. 
Also, as noted in the [Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6. -While it is still possible to host up to ten databases in a single instance, -this incurs many immediate risks and current limitations: +While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations: -- Administrative commands need to be executed for each database if PGD - configuration changes are needed, which increases risk for potential - inconsistencies and errors. +- Administrative commands need to be executed for each database if PGD configuration changes are needed, which increases risk for potential inconsistencies and errors. - Each database needs to be monitored separately, adding overhead. -- TPAexec assumes one database; additional coding is needed by customers - or PS in a post-deploy hook to set up replication for additional databases. +- TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases. -- HARP works at the Postgres instance level, not at the database level, - meaning the leader node will be the same for all databases. +- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases. -- Each additional database increases the resource requirements on the server. - Each one needs its own set of worker processes maintaining replication - (e.g. logical workers, WAL senders, and WAL receivers). Each one also - needs its own set of connections to other instances in the replication - cluster. This might severely impact performance of all databases. +- Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). 
Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases. -- When rebuilding or adding a node, the physical initialization method - (“bdr_init_physical”) for one database can only be used for one node, - all other databases will have to be initialized by logical replication, - which can be problematic for large databases because of the time it might take. +- When rebuilding or adding a node, the physical initialization method (“bdr_init_physical”) for one database can only be used for one node, all other databases will have to be initialized by logical replication, which can be problematic for large databases because of the time it might take. -- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as - expected. Since the Postgres WAL is shared between the databases, a synchronous - commit confirmation may come from any database, not necessarily in the right - order of commits. +- Synchronous replication methods (e.g. CAMO, Group Commit) won’t work as expected. Since the Postgres WAL is shared between the databases, a synchronous commit confirmation may come from any database, not necessarily in the right order of commits. - CLI and OTEL integration (new with v5) assumes one database. ## Other limitations -This is a (non-comprehensive) list of other limitations that are expected and -are by design. They are not expected to be resolved in the future and should be taken -under consideration when planning your deployment. - -- Replacing a node with its physical standby doesn't work for nodes that - use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres - Distributed isn't recommended, even if possible. - -- A `galloc` sequence might skip some chunks if the - sequence is created in a rolled back transaction and then created - again with the same name. 
This can also occur if it is created and dropped when DDL - replication isn't active and then it is created again when DDL - replication is active. - The impact of the problem is mild, because the sequence - guarantees aren't violated. The sequence skips only some - initial chunks. Also, as a workaround you can specify the - starting value for the sequence as an argument to the - `bdr.alter_sequence_set_kind()` function. - -- Legacy synchronous replication uses a mechanism for transaction - confirmation different from the one used by CAMO, Eager, and Group Commit. - The two are not compatible and must not be used together. Using synchronous - replication to other non-PGD nodes, including both logical and physical - standby is possible. +This is a (non-comprehensive) list of other limitations that are expected and are by design. They are not expected to be resolved in the future and should be taken under consideration when planning your deployment. + +- Replacing a node with its physical standby doesn't work for nodes that use CAMO/Eager/Group Commit. Combining physical standbys and EDB Postgres Distributed isn't recommended, even if possible. + +- A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. + +- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. 
Using synchronous replication to other non-PGD nodes, including both logical and physical standby is possible. From 0f070a691387656b3cfb296d6ba79dc72410dd86 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 6 Mar 2023 17:59:36 +0000 Subject: [PATCH 36/50] Revised layout --- product_docs/docs/pgd/5/known_issues.mdx | 69 +++---------------- product_docs/docs/pgd/5/limitations.mdx | 8 +-- .../docs/pgd/5/other_considerations.mdx | 58 +++++++--------- 3 files changed, 40 insertions(+), 95 deletions(-) diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx index 665fb4beafb..c57c6be7df0 100644 --- a/product_docs/docs/pgd/5/known_issues.mdx +++ b/product_docs/docs/pgd/5/known_issues.mdx @@ -2,75 +2,26 @@ title: 'Known issues' --- -This section discusses currently known issues in EDB Postgres Distributed 5. +This section discusses currently known issues in EDB Postgres Distributed 5. These issues are tracked in EDB's ticketing system and are expected to be resolved in a future release. -## Data Consistency +- If the resolver for the `update_origin_change` conflict is set to `skip`, `synchronous_commit=remote_apply` is used, and concurrent updates of the same row are repeatedly applied on two different nodes, then one of the update statements might hang due to a deadlock with the BDR writer. As mentioned in the [Conflicts](consistency/conflicts/) chapter, `skip` is not the default resolver for the `update_origin_change` conflict, and this combination isn't intended to be used in production. It discards one of the two conflicting updates based on the order of arrival on that node, which is likely to cause a divergent cluster. In the rare situation that you do choose to use the `skip` conflict resolver, note the issue with the use of the `remote_apply` mode. -Read about [Conflicts](consistency/conflicts/) to understand -the implications of the asynchronous operation mode in terms of data -consistency. 
+- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled. -## List of issues +- Lag control doesn't adjust commit delay in any way on a fully isolated node, that is, in case all other nodes are unreachable or not operational. As soon as at least one node is connected, replication lag control picks up its work and adjusts the BDR commit delay again. -These known issues are tracked in BDR's -ticketing system and are expected to be resolved in a future -release. +- For time-based lag control, BDR currently uses the lag time (measured by commit timestamps) rather than the estimated catchup time that's based on historic apply rate. -- If the resolver for the `update_origin_change` conflict - is set to `skip`, `synchronous_commit=remote_apply` is used, and - concurrent updates of the same row are repeatedly applied on two - different nodes, then one of the update statements might hang due - to a deadlock with the BDR writer. As mentioned in the - [Conflicts](consistency/conflicts/) chapter, `skip` is not the default - resolver for the `update_origin_change` conflict, and this - combination isn't intended to be used in production. It discards - one of the two conflicting updates based on the order of arrival - on that node, which is likely to cause a divergent cluster. - In the rare situation that you do choose to use the `skip` - conflict resolver, note the issue with the use of the - `remote_apply` mode. +- Changing the CAMO partners in a CAMO pair isn't currently possible. It's possible only to add or remove a pair. Adding or removing a pair doesn't need a restart of Postgres or even a reload of the configuration. -- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. - Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` - disabled. 
+- Group Commit cannot be combined with [CAMO](durability/camo/) or [Eager All Node replication](consistency/eager/). Eager Replication currently only works by using the "global" BDR commit scope. -- Lag control doesn't adjust commit delay in any way on a fully - isolated node, that is, in case all other nodes are unreachable or not - operational. As soon as at least one node is connected, replication - lag control picks up its work and adjusts the BDR commit delay - again. - -- For time-based lag control, BDR currently uses the lag time (measured - by commit timestamps) rather than the estimated catchup time that's - based on historic apply rate. - -- Changing the CAMO partners in a CAMO pair isn't currently possible. - It's possible only to add or remove a pair. - Adding or removing a pair doesn't need a restart of Postgres or even a - reload of the configuration. - -- Group Commit cannot be combined with [CAMO](durability/camo/) or [Eager All Node - replication](consistency/eager/). Eager Replication currently only works by using the - "global" BDR commit scope. - -- Transactions using Eager Replication can't yet execute DDL, - nor do they support explicit two-phase commit. - The TRUNCATE command is allowed. +- Transactions using Eager Replication can't yet execute DDL, nor do they support explicit two-phase commit. The TRUNCATE command is allowed. - Not all DDL can be run when either CAMO or Group Commit is used. -- Parallel apply is not currently supported in combination with Group - Commit, please make sure to disable it when using Group Commit by - either setting `num_writers` to 1 for the node group (using - [`bdr.alter_node_group_config`](nodes#bdralter_node_group_config)) or - via the GUC `bdr.writers_per_subscription` (see - [Configuration of Generic Replication](configuration#generic-replication)). 
+- Parallel apply isn't currently supported in combination with Group Commit. Make sure to disable it when using Group Commit, either by setting `num_writers` to 1 for the node group (using [`bdr.alter_node_group_config`](nodes#bdralter_node_group_config)) or by setting the GUC `bdr.writers_per_subscription` (see [Configuration of Generic Replication](configuration#generic-replication)). -- There currently is no protection against altering or removing a commit - scope. Running transactions in a commit scope that is concurrently - being altered or removed can lead to the transaction blocking or - replication stalling completely due to an error on the downstream node - attempting to apply the transaction. Ensure that any transactions - using a specific commit scope have finished before altering or removing it. +- There's currently no protection against altering or removing a commit scope. Running transactions in a commit scope that's concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it. Details of other design or implementation [limitations](limitations) are also available. \ No newline at end of file diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx index 49a668064d7..79abe37150f 100644 --- a/product_docs/docs/pgd/5/limitations.mdx +++ b/product_docs/docs/pgd/5/limitations.mdx @@ -4,7 +4,7 @@ title: "Limitations" This section covers design limitations of EDB Postgres Distributed (PGD), that should be taken into account when planning your deployment. -## Limits +## Limits on nodes - PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster.
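The parallel-apply limitation noted above offers two ways to disable parallel apply when using Group Commit. A hedged sketch of both, where the node group name `mygroup` is an assumption for illustration and the statements must run against a PGD-enabled database:

```sql
-- Option 1: set one writer per subscription for the whole node group.
-- 'mygroup' is a hypothetical node group name.
SELECT bdr.alter_node_group_config('mygroup', num_writers := 1);

-- Option 2: set the GUC instead, then reload the configuration.
ALTER SYSTEM SET bdr.writers_per_subscription = 1;
SELECT pg_reload_conf();
```

The node group setting applies cluster-wide through the group configuration, while the GUC takes effect per instance on reload.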
Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation. @@ -12,11 +12,11 @@ This section covers design limitations of EDB Postgres Distributed (PGD), that s - The minimum recommended number of nodes in a group is three to provide fault tolerance for PGD's consensus mechanism. With just two nodes, consensus would fail if one of the nodes was unresponsive. Consensus is required for some PGD operations such as distributed sequence generation. For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](../architectures/#architecture-details). -- Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. - ## Limitations of multiple databases on a single instance -It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted in the [Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6. +Support for using PGD for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. 
+ +It's best practice and recommended to configure only one database per PGD instance. The deployment automation with TPA and tooling such as the CLI and proxy already codify that recommendation. While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations: diff --git a/product_docs/docs/pgd/5/other_considerations.mdx b/product_docs/docs/pgd/5/other_considerations.mdx index e3ac2c5c7a3..286a801f9a3 100644 --- a/product_docs/docs/pgd/5/other_considerations.mdx +++ b/product_docs/docs/pgd/5/other_considerations.mdx @@ -4,43 +4,37 @@ title: "Other considerations" Review these other considerations when planning your deployment. -## Deployment and sizing considerations - -For production deployments, EDB recommends a minimum of 4 cores for each -Postgres data node. Witness nodes don't participate in the data replication -operation and don't have to meet this requirement. Always size logical standbys -exactly like the data nodes to avoid performance degradations in case of a node -promotion. In production deployments, PGD proxy nodes require minimum of 1 core, -and should increase incrementally in correlation with an increase in the number -of database cores in approximately a 1:10 ratio. EDB recommends detailed -benchmarking of your specific performance requirements to determine appropriate -sizing based on your workload. The EDB Professional Services team is available -to assist if needed. - -For development purposes, don't assign Postgres data nodes fewer than two cores. -The sizing of Barman nodes depends on the database size and the data change -rate. - -You can deploy Postgres data nodes, Barman nodes, and PGD proxy nodes on virtual -machines or in a bare metal deployment mode. However, don't deploy multiple data -nodes on VMs that are on the same physical hardware, as that reduces resiliency.
-Also don't deploy multiple PGD proxy nodes on VMs on the same physical hardware, -as that, too, reduces resiliency. +## Data consistency + +Read about [Conflicts](consistency/conflicts/) to understand +the implications of the asynchronous operation mode in terms of data +consistency. + +## Deployment + +EDB PGD is intended to be deployed in one of a small number of known-good configurations, +using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach +and deployment architecture approved by Technical Support. + +Manual deployment isn't recommended and might not be supported. + +Log messages and documentation are currently available only in English. + +## Sizing considerations + +For production deployments, EDB recommends a minimum of 4 cores for each Postgres data node. Witness nodes don't participate in the data replication operation and don't have to meet this requirement. Always size logical standbys exactly like the data nodes to avoid performance degradations in case of a node promotion. In production deployments, PGD proxy nodes require a minimum of 1 core, and should increase incrementally in correlation with an increase in the number of database cores in approximately a 1:10 ratio. EDB recommends detailed benchmarking of your specific performance requirements to determine appropriate sizing based on your workload. The EDB Professional Services team is available to assist if needed. + +For development purposes, don't assign Postgres data nodes fewer than two cores. The sizing of Barman nodes depends on the database size and the data change rate. + +You can deploy Postgres data nodes, Barman nodes, and PGD proxy nodes on virtual machines or in a bare metal deployment mode. However, don't deploy multiple data nodes on VMs that are on the same physical hardware, as that reduces resiliency.
Single PGD Proxy nodes can be co-located with single PGD data nodes. ## Clocks and timezones -EDB Postgres Distributed has been designed to operate with nodes in multiple -timezones, allowing a truly worldwide database cluster. Individual servers do -not need to be configured with matching timezones, though we do recommend using -log_timezone = UTC to ensure the human readable server log is more accessible -and comparable. +EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers don't need to be configured with matching timezones, though we do recommend using `log_timezone = UTC` to ensure the human-readable server log is more accessible and comparable. Server clocks should be synchronized using NTP or other solutions. -Clock synchronization is not critical to performance, as is the case with some -other solutions. Clock skew can impact Origin Conflict Detection, though EDB -Postgres Distributed provides controls to report and manage any skew that -exists. EDB Postgres Distributed also provides Row Version Conflict Detection, -as described in [Conflict Detection](consistency/conflicts).
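The `log_timezone = UTC` recommendation above can be applied with standard Postgres commands. A minimal sketch, to be run on each node of the cluster:

```sql
-- Keep server logs comparable across nodes in different timezones.
ALTER SYSTEM SET log_timezone = 'UTC';
SELECT pg_reload_conf();

-- Confirm the active setting.
SHOW log_timezone;
```

`ALTER SYSTEM` writes the setting to `postgresql.auto.conf`, so it survives restarts; `log_timezone` is reloadable and doesn't require restarting the server.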
+ From 604a2983930e74e94820febeebfd664a0bfc5e2e Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 6 Mar 2023 18:17:08 +0000 Subject: [PATCH 37/50] Updated overview to move clocks etc out --- product_docs/docs/pgd/5/overview/index.mdx | 197 ++++----------------- 1 file changed, 38 insertions(+), 159 deletions(-) diff --git a/product_docs/docs/pgd/5/overview/index.mdx b/product_docs/docs/pgd/5/overview/index.mdx index 19ecc0c52a5..d648a51cfb8 100644 --- a/product_docs/docs/pgd/5/overview/index.mdx +++ b/product_docs/docs/pgd/5/overview/index.mdx @@ -3,223 +3,102 @@ title: "Overview" redirect: bdr --- -EDB Postgres Distributed (PGD) provides multi-master replication and data -distribution with advanced conflict management, data-loss protection, and -throughput up to 5X faster than native logical replication, and enables -distributed Postgres clusters with high availability up to five 9s. +EDB Postgres Distributed (PGD) provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and throughput up to 5X faster than native logical replication, and enables distributed Postgres clusters with high availability up to five 9s. -PGD provides loosely coupled, multi-master logical replication -using a mesh topology. This means that you can write to any server and the -changes are sent directly, row-by-row, to all the -other servers that are part of the same PGD group. +PGD provides loosely coupled, multi-master logical replication using a mesh topology. This means that you can write to any server and the changes are sent directly, row-by-row, to all the other servers that are part of the same PGD group. -By default, PGD uses asynchronous replication, applying changes on -the peer nodes only after the local commit. Multiple synchronous replication -options are also available. +By default, PGD uses asynchronous replication, applying changes on the peer nodes only after the local commit. 
Multiple synchronous replication options are also available. ## Basic architecture ### Multiple groups -A PGD node is a member of at least one *node group*, and in the most -basic architecture there is a single node group for the whole PGD -cluster. +A PGD node is a member of at least one *node group*, and in the most basic architecture there is a single node group for the whole PGD cluster. ### Multiple masters -Each node (database) participating in a PGD group both receives -changes from other members and can be written to directly by the user. +Each node (database) participating in a PGD group both receives changes from other members and can be written to directly by the user. -This is distinct from hot or warm standby, where only one master -server accepts writes, and all the other nodes are standbys that -replicate either from the master or from another standby. +This is distinct from hot or warm standby, where only one master server accepts writes, and all the other nodes are standbys that replicate either from the master or from another standby. -You don't have to write to all the masters all of the time. -A frequent configuration directs writes mostly to just one master. +You don't have to write to all the masters all of the time. A frequent configuration directs writes mostly to just one master. ### Asynchronous, by default -Changes made on one PGD node aren't replicated to other nodes until -they're committed locally. As a result, the data isn't exactly the -same on all nodes at any given time. Some nodes have data that -hasn't yet arrived at other nodes. PostgreSQL's block-based replication -solutions default to asynchronous replication as well. In PGD, -because there are multiple masters and, as a result, multiple data streams, -data on different nodes might differ even when -`synchronous_commit` and `synchronous_standby_names` are used. +Changes made on one PGD node aren't replicated to other nodes until they're committed locally. 
As a result, the data isn't exactly the same on all nodes at any given time. Some nodes have data that hasn't yet arrived at other nodes. PostgreSQL's block-based replication solutions default to asynchronous replication as well. In PGD, because there are multiple masters and, as a result, multiple data streams, data on different nodes might differ even when `synchronous_commit` and `synchronous_standby_names` are used. ### Mesh topology -PGD is structured around a mesh network where every node connects to every -other node and all nodes exchange data directly with each other. There's no -forwarding of data in PGD except in special circumstances such as adding and removing nodes. -Data can arrive from outside the EDB Postgres Distributed cluster or -be sent onwards using native PostgreSQL logical replication. +PGD is structured around a mesh network where every node connects to every other node and all nodes exchange data directly with each other. There's no forwarding of data in PGD except in special circumstances such as adding and removing nodes. Data can arrive from outside the EDB Postgres Distributed cluster or be sent onwards using native PostgreSQL logical replication. ### Logical replication -Logical replication is a method of replicating data rows and their changes -based on their replication identity (usually a primary key). -We use the term *logical* in contrast to *physical* replication, which uses -exact block addresses and byte-by-byte replication. Index changes aren't -replicated, thereby avoiding write amplification and reducing bandwidth. +Logical replication is a method of replicating data rows and their changes based on their replication identity (usually a primary key). We use the term *logical* in contrast to *physical* replication, which uses exact block addresses and byte-by-byte replication. Index changes aren't replicated, thereby avoiding write amplification and reducing bandwidth. 
-Logical replication starts by copying a snapshot of the data from the -source node. Once that is done, later commits are sent to other nodes as -they occur in real time. Changes are replicated without re-executing SQL, -so the exact data written is replicated quickly and accurately. +Logical replication starts by copying a snapshot of the data from the source node. Once that is done, later commits are sent to other nodes as they occur in real time. Changes are replicated without re-executing SQL, so the exact data written is replicated quickly and accurately. -Nodes apply data in the order in which commits were made on the source node, -ensuring transactional consistency is guaranteed for the changes from -any single node. Changes from different nodes are applied independently of -other nodes to ensure the rapid replication of changes. +Nodes apply data in the order in which commits were made on the source node, ensuring transactional consistency is guaranteed for the changes from any single node. Changes from different nodes are applied independently of other nodes to ensure the rapid replication of changes. Replicated data is sent in binary form, when it's safe to do so. ### Connection management -[Connection management](../routing) leverages consensus-driven quorum to determine -the correct connection end-point in a semi-exclusive manner to prevent unintended -multi-node writes from an application. This reduces the potential for data conflicts. +[Connection management](../routing) leverages consensus-driven quorum to determine the correct connection end-point in a semi-exclusive manner to prevent unintended multi-node writes from an application. This reduces the potential for data conflicts. -[PGD Proxy](../routing/proxy) is the tool for application connection management -provided as part of EDB Postgres Distributed. +[PGD Proxy](../routing/proxy) is the tool for application connection management provided as part of EDB Postgres Distributed. 
### High availability -Each master node can be protected by one or more standby nodes, so any node -that goes down can be quickly replaced and continue. Each standby node can -be either a logical or a physical standby node. +Each master node can be protected by one or more standby nodes, so any node that goes down can be quickly replaced and continue. Each standby node can be either a logical or a physical standby node. -Replication continues between currently connected nodes even if one or more -nodes are currently unavailable. When the node recovers, replication -can restart from where it left off without missing any changes. +Replication continues between currently connected nodes even if one or more nodes are currently unavailable. When the node recovers, replication can restart from where it left off without missing any changes. -Nodes can run different release levels, negotiating the required protocols -to communicate. As a result, EDB Postgres Distributed clusters can use rolling upgrades, even -for major versions of database software. +Nodes can run different release levels, negotiating the required protocols to communicate. As a result, EDB Postgres Distributed clusters can use rolling upgrades, even for major versions of database software. -DDL is replicated across nodes by default. DDL execution can -be user controlled to allow rolling application upgrades, if desired. +DDL is replicated across nodes by default. DDL execution can be user controlled to allow rolling application upgrades, if desired. ## Architectural options and performance ### Always On architectures -A number of different architectures can be configured, each of which has -different performance and scalability characteristics. +A number of different architectures can be configured, each of which has different performance and scalability characteristics. -The group is the basic building block consisting of 2+ nodes -(servers). 
In a group, each node is in a different availability zone, with dedicated router -and backup, giving immediate switchover and high availability. Each group has a -dedicated replication set defined on it. If the group loses a node, you can easily -repair or replace it by copying an existing node from the group. +The group is the basic building block consisting of 2+ nodes (servers). In a group, each node is in a different availability zone, with dedicated router and backup, giving immediate switchover and high availability. Each group has a dedicated replication set defined on it. If the group loses a node, you can easily repair or replace it by copying an existing node from the group. -The Always On architectures are built from either one group in a single location -or two groups in two separate locations. Each group provides high availability. When two -groups are leveraged in remote locations, they together also provide disaster recovery (DR). +The Always On architectures are built from either one group in a single location or two groups in two separate locations. Each group provides high availability. When two groups are leveraged in remote locations, they together also provide disaster recovery (DR). -Tables are created across both groups, so any change goes to all nodes, not just to -nodes in the local group. +Tables are created across both groups, so any change goes to all nodes, not just to nodes in the local group. -One node in each group is the target for the main application. All other nodes are described as -shadow nodes (or "read-write replica"), waiting to take over when needed. If a node -loses contact, we switch immediately to a shadow node to continue processing. If a -group fails, we can switch to the other group. Scalability isn't the goal of this -architecture. +One node in each group is the target for the main application. All other nodes are described as shadow nodes (or "read-write replica"), waiting to take over when needed. 
If a node loses contact, we switch immediately to a shadow node to continue processing. If a group fails, we can switch to the other group. Scalability isn't the goal of this architecture. -Since we write mainly to only one node, the possibility of contention between is -reduced to almost zero. As a result, performance impact is much reduced. +Since we write mainly to only one node, the possibility of contention between nodes is reduced to almost zero. As a result, the performance impact is much reduced. -Secondary applications might execute against the shadow nodes, although these are -reduced or interrupted if the main application begins using that node. +Secondary applications might execute against the shadow nodes, although these are reduced or interrupted if the main application begins using that node. -In the future, one node will be elected as the main replicator to other groups, limiting CPU -overhead of replication as the cluster grows and minimizing the bandwidth to other groups. +In the future, one node will be elected as the main replicator to other groups, limiting CPU overhead of replication as the cluster grows and minimizing the bandwidth to other groups. ### Supported Postgres database servers -PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) and [EDB Postgres Advanced Server](/epas/latest) -and is deployed as a standard Postgres extension named BDR. See the [Compatibility matrix](../#compatibility-matrix) -for details of supported version combinations. - -Some key PGD features depend on certain core -capabilities being available in the targeted Postgres database server. -Therefore, PGD users must also adopt the Postgres -database server distribution that's best suited to their business needs.
For -example, if having the PGD feature Commit At Most Once (CAMO) is mission -critical to your use case, don't adopt the community -PostgreSQL distribution because it doesn't have the core capability required to handle -CAMO. See the full feature matrix compatibility in -[Choosing a Postgres distribution](../choosing_server/). - -PGD offers close to native Postgres compatibility. However, some access -patterns don't necessarily work as well in multi-node setup as they do on a -single instance. There are also some limitations in what can be safely -replicated in multi-node setting. [Application usage](../appusage) -goes into detail on how PGD behaves from an application development perspective. +PGD is compatible with [PostgreSQL](https://www.postgresql.org/), [EDB Postgres Extended Server](https://techsupport.enterprisedb.com/customer_portal/sw/2ndqpostgres/) and [EDB Postgres Advanced Server](/epas/latest) and is deployed as a standard Postgres extension named BDR. See the [Compatibility matrix](../#compatibility-matrix) for details of supported version combinations. -### Characteristics affecting performance - -By default, PGD keeps one copy of each table on each node in the group, and any -changes propagate to all nodes in the group. - -Since copies of data are everywhere, SELECTs need only ever access the local node. -On a read-only cluster, performance on any one node isn't affected by the -number of nodes and is immune to replication conflicts on other nodes caused by -long-running SELECT queries. Thus, adding nodes increases linearly the total possible SELECT -throughput. - -If an INSERT, UPDATE, and DELETE (DML) is performed locally, then the changes -propagate to all nodes in the group. The overhead of DML apply is less than the -original execution, so if you run a pure write workload on multiple nodes -concurrently, a multi-node cluster can handle more TPS than a single node. - -Conflict handling has a cost that acts to reduce the throughput. 
The throughput -then depends on how much contention the application displays in practice. -Applications with very low contention perform better than a single node. -Applications with high contention can perform worse than a single node. -These results are consistent with any multi-master technology. They aren't particular to PGD. - -Synchronous replilcation options can send changes concurrently to multiple nodes -so that the replication lag is minimized. Adding more nodes means using more CPU for -replication, so peak TPS reduces slightly as each node is added. - -If the workload tries to use all CPU resources, then this resource constrains -replication, which can then affect the replication lag. +Some key PGD features depend on certain core capabilities being available in the targeted Postgres database server. Therefore, PGD users must also adopt the Postgres database server distribution that's best suited to their business needs. For example, if having the PGD feature Commit At Most Once (CAMO) is mission critical to your use case, don't adopt the community PostgreSQL distribution because it doesn't have the core capability required to handle CAMO. See the full feature matrix compatibility in [Choosing a Postgres distribution](../choosing_server/). -In summary, adding more master nodes to a PGD group doesn't result in significant write -throughput increase when most tables are replicated because all the writes will -be replayed on all nodes. Because PGD writes are in general more effective -than writes coming from Postgres clients by way of SQL, some performance increase -can be achieved. Read throughput generally scales linearly with the number of -nodes. - -## Deployment - -PGD is intended to be deployed in one of a small number of known-good configurations, -using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach -and deployment architecture approved by Technical Support. 
- -Manual deployment isn't recommended and might not be supported. +PGD offers close to native Postgres compatibility. However, some access patterns don't necessarily work as well in a multi-node setup as they do on a single instance. There are also some limitations in what can be safely replicated in a multi-node setting. [Application usage](../appusage) goes into detail on how PGD behaves from an application development perspective. -Log messages and documentation are currently available only in English. - -## Clocks and timezones ### Characteristics affecting performance -PGD is designed to operate with nodes in multiple timezones, allowing a -truly worldwide database cluster. Individual servers don't need to be configured -with matching timezones, although we do recommend using `log_timezone = UTC` to -ensure the human-readable server log is more accessible and comparable. +By default, PGD keeps one copy of each table on each node in the group, and any changes propagate to all nodes in the group. -Synchronize server clocks using NTP or other solutions. +Since copies of data are everywhere, SELECTs need only ever access the local node. On a read-only cluster, performance on any one node isn't affected by the number of nodes and is immune to replication conflicts on other nodes caused by long-running SELECT queries. Thus, adding nodes linearly increases the total possible SELECT throughput. -Clock synchronization isn't critical to performance, as it is with some -other solutions. Clock skew can impact origin conflict detection, although -PGD provides controls to report and manage any skew that exists. PGD also -provides row-version conflict detection, as described in [Conflict detection](../consistency/conflicts). +If an INSERT, UPDATE, or DELETE (DML) is performed locally, then the changes propagate to all nodes in the group.
The overhead of DML apply is less than the original execution, so if you run a pure write workload on multiple nodes concurrently, a multi-node cluster can handle more TPS than a single node. +Conflict handling has a cost that acts to reduce the throughput. The throughput then depends on how much contention the application displays in practice. Applications with very low contention perform better than a single node. Applications with high contention can perform worse than a single node. These results are consistent with any multi-master technology. They aren't particular to PGD. +Synchronous replication options can send changes concurrently to multiple nodes so that the replication lag is minimized. Adding more nodes means using more CPU for replication, so peak TPS reduces slightly as each node is added. +If the workload tries to use all CPU resources, then this resource constrains replication, which can then affect the replication lag. +In summary, adding more master nodes to a PGD group doesn't result in significant write +throughput increase when most tables are replicated because all the writes will be replayed on all nodes. Because PGD writes are in general more effective than writes coming from Postgres clients by way of SQL, some performance increase can be achieved. Read throughput generally scales linearly with the number of nodes. 
From faf333c72bff407aeada9e84eb31dd917dd10c32 Mon Sep 17 00:00:00 2001 From: Arup Roy Date: Wed, 8 Mar 2023 12:17:42 +0530 Subject: [PATCH 38/50] Changes based on today's review --- .../04_toc_pem_features/21_performance_diagnostic.mdx | 2 +- .../docs/pem/9/tuning_performance/performance_diagnostic.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index b9729420142..56691461802 100644 --- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -22,7 +22,7 @@ Prerequisite: - Install the EDB wait states package: - - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/). + - For PostgreSQL, see [EDB Repository](https://repos.enterprisedb.com/). - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx index 650b136fa90..4364f8b2514 100644 --- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx @@ -23,7 +23,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel - Install the EDB wait states package: - - For PostgreSQL, see [EDB Build Repository](https://repos.enterprisedb.com/). + - For PostgreSQL, see [EDB Repository](https://repos.enterprisedb.com/). - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). 
From 641aef85106451b330cf4df8a53c884b0a981cac Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 13:54:54 +0530 Subject: [PATCH 39/50] Update 04_backing_up_restoring_sql_protect.mdx --- .../04_backing_up_restoring_sql_protect.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx index 6943392212f..6ff0e7ad6a7 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/04_backing_up_restoring_sql_protect.mdx @@ -8,7 +8,7 @@ legacyRedirectsGenerated: -Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are due mainly to the use of object identification numbers (OIDs) in the SQL/Protect tables. +Backing up a database that's configured with SQL/Protect and then restoring the backup file to a new database requires considerations in addition to those normally associated with backup and restore procedures. These added considerations are mainly due to the use of object identification numbers (OIDs) in the SQL/Protect tables. !!! Note This information applies if your backup and restore procedures result in re-creating database objects in the new database with new OIDs, such as when using the `pg_dump` backup program. 
From 576373ed5bbd56d39183c8c4a9ff7bec3fd7d436 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 15:20:24 +0530 Subject: [PATCH 40/50] Update product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx --- .../docs/epas/15/epas_security_guide/05_data_redaction.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx index 743ce29bbdd..a9fc118bfe2 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx @@ -329,7 +329,7 @@ To use `ALTER REDACTION POLICY`, you must own the table that the data redaction `scope_value` - The scope identifies the query part to apply redaction to for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details. + The scope identifies the query part to apply redaction for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details. 
`exception_value` From 9868652d4ca5cd762f3ca589323b934edd998f58 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 15:21:54 +0530 Subject: [PATCH 41/50] Update product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx --- .../docs/epas/15/epas_security_guide/05_data_redaction.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx index a9fc118bfe2..74b5fd7d442 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx @@ -388,7 +388,7 @@ To use `DROP REDACTION POLICY`, you must own the table that the redaction policy `table_name` - The optionally sechem-qualified name of the table that the data redaction policy is on. + The optionally schema-qualified name of the table that the data redaction policy is on. `CASCADE` From 9ac19b0c988af9fec4c64e0d0024f2171f5b4017 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 16:30:48 +0530 Subject: [PATCH 42/50] Updated the example --- .../03_built-in_packages/18_dbms_utility.mdx | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx index 48dcf7ddb19..2a92d3bbfce 100644 --- a/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx +++ b/product_docs/docs/epas/15/epas_compat_bip_guide/03_built-in_packages/18_dbms_utility.mdx @@ -322,7 +322,6 @@ DB_VERSION( OUT VARCHAR2, OUT VARCHAR2) The following anonymous block displays the database version information. 
- ```sql DECLARE @@ -334,10 +333,10 @@ BEGIN DBMS_OUTPUT.PUT_LINE('Compatibility: ' || v_compat); END; -Version: EnterpriseDB 14.0.0 on i686-pc-linux-gnu, compiled by GCC gcc -(GCC) 4.1.2 20080704 (Red Hat 4.1.2-48), 32-bit -Compatibility: EnterpriseDB 14.0.0 on i686-pc-linux-gnu, compiled by GCC -gcc (GCC) 4.1.220080704 (Red Hat 4.1.2-48), 32-bit +Version: PostgreSQL 15.2 (EnterpriseDB Advanced Server 15.2.0 (Debian 15.2.0-1.bullseye)) on x86_64-pc-linux-gnu, +compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit +Compatibility: PostgreSQL 15.2 (EnterpriseDB Advanced Server 15.2.0 (Debian 15.2.0-1.bullseye)) on x86_64-pc-linux-gnu, +compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit ``` ## EXEC_DDL_STATEMENT From 562c25aa421f7c19114884f4516724118a4d8c04 Mon Sep 17 00:00:00 2001 From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> Date: Wed, 8 Mar 2023 08:34:28 -0500 Subject: [PATCH 43/50] Apply suggestions from code review Shortening stem sentence in 8 --- product_docs/docs/pem/8/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx index e08dd30d766..788ced7a919 100644 --- a/product_docs/docs/pem/8/index.mdx +++ b/product_docs/docs/pem/8/index.mdx @@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to ## Postgres compatibility -The table provides information about the supported versions of Postgres for PEM 8.x. 
+Supported versions of Postgres for PEM 8.x: | |**Monitored Instance** |**Backend Instance** | |:-----------------------------------------|:----------------------|:--------------------| From 53302068928d491b16a36ae3b0b9967f13ede205 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 19:04:31 +0530 Subject: [PATCH 44/50] minor edit done --- .../reference_command_line_options.mdx | 4 ---- 1 file changed, 4 deletions(-) diff --git a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx index a2db1adaded..2a847ccdae7 100644 --- a/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx +++ b/product_docs/docs/epas/15/installing/windows/installing_advanced_server_with_the_interactive_installer/invoking_the_graphical_installer_from_the_command_line/reference_command_line_options.mdx @@ -163,10 +163,6 @@ Include `--unattendedmodeui minimalWithDialogs` to specify that the installer sh Include the `--version` parameter to retrieve version information about the installer: - - -`EDB Postgres Advanced Server 14.0.3-1 --- Built on 2020-10-23 00:12:44 IB: 20.6.0-202008110127` - `--workload_profile {oltp | mixed | reporting}` Use the `--workload_profile` parameter to specify an initial value for the `edb_dynatune_profile` configuration parameter. `edb_dynatune_profile` controls aspects of performance-tuning based on the type of work that the server performs. 
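The `--workload_profile` installer option sets an initial value for the `edb_dynatune_profile` configuration parameter, which can also be adjusted after installation. As a minimal sketch, using one of the values the option itself documents, the equivalent `postgresql.conf` setting is:

```ini
# Tune dynamic-tuning behavior for an online-transaction-processing workload.
# The other documented values are 'mixed' and 'reporting'.
edb_dynatune_profile = 'oltp'
```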
From 340c752125c96fadb0020ae4247c1acb39b378ce Mon Sep 17 00:00:00 2001 From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> Date: Wed, 8 Mar 2023 08:34:46 -0500 Subject: [PATCH 45/50] Shortening stem sentence in 9 --- product_docs/docs/pem/9/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/9/index.mdx b/product_docs/docs/pem/9/index.mdx index 3be9ae244b5..6196514ff6b 100644 --- a/product_docs/docs/pem/9/index.mdx +++ b/product_docs/docs/pem/9/index.mdx @@ -62,7 +62,7 @@ PEM is a comprehensive database design and management system. PEM is designed to ## Postgres compatibility -The table provides information about the supported versions of Postgres for PEM 9.x. +Supported versions of Postgres for PEM 9.x: | |**Monitored Instance** |**Backend Instance** | |:-----------------------------------------|:---------------------------|:---------------------| From 7ae813e1e2f42acf1a0b5b4406c3009e60c81a42 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 8 Mar 2023 19:09:38 +0530 Subject: [PATCH 46/50] formatting edits done --- .../04_toc_pem_features/21_performance_diagnostic.mdx | 9 ++++----- .../pem/9/tuning_performance/performance_diagnostic.mdx | 8 ++++---- 2 files changed, 8 insertions(+), 9 deletions(-) diff --git a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 56691461802..3a9aa58ba16 100644 --- a/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -7,16 +7,15 @@ legacyRedirectsGenerated: -You can use the Performance Diagnostic dashboard to analyze the database performance for Postgres instances by monitoring the wait events. 
To display the diagnostic graphs, PEM uses the data collected by EDB Wait States module. +The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB wait states module. For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). -Peformance Diagnostic feature is supported for Advanced Server databases from PEM 7.6 version onwards and for PostgreSQL databases it is supported from PEM 8.0 onwards. +To analyze the wait states data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level. !!! Note - For PostgreSQL databases, Performance Diagnostics is supported only for versions 10, 11, 12, and 13 installed on supported platforms. + - For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported CentOS or RHEL platforms. -For more information on EDB Wait States, see [EDB wait states docs](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). + - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported CentOS or RHEL platforms. -You can analyze the Wait States data on multiple levels by narrowing down your selection of data. Each level of the graph is populated on the basis of your selection of data at the higher level. 
Prerequisite: diff --git a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx index 4364f8b2514..bb53ee3d807 100644 --- a/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/9/tuning_performance/performance_diagnostic.mdx @@ -8,16 +8,15 @@ redirects: - /pem/latest/pem_ent_feat/15_performance_diagnostic/ --- -The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB Wait States module. +The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB wait states module. For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). + +To analyze the wait states data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level. !!! Note - For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported CentOS or RHEL platforms. - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported CentOS or RHEL platforms. -For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). - -To analyze the Wait States data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level. 
## Prerequisites @@ -28,6 +27,7 @@ To analyze the Wait States data on multiple levels, narrow down the data you sel - For EDB Postgres Advanced Server, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). - After you install the EDB Wait States module of EDB Postgres Advanced Server: + 1. Configure the list of libraries in the `postgresql.conf` file as shown: ```ini From f579e452d8830bebab7bab8611ee91dd7e2993fa Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 8 Mar 2023 15:09:39 +0000 Subject: [PATCH 47/50] Sync changes back to v4 --- product_docs/docs/pgd/4/known_issues.mdx | 96 ++++--------------- product_docs/docs/pgd/4/limitations.mdx | 16 ++-- .../docs/pgd/4/other_considerations.mdx | 10 +- product_docs/docs/pgd/5/known_issues.mdx | 3 +- product_docs/docs/pgd/5/limitations.mdx | 2 +- .../docs/pgd/5/other_considerations.mdx | 8 +- 6 files changed, 37 insertions(+), 98 deletions(-) diff --git a/product_docs/docs/pgd/4/known_issues.mdx b/product_docs/docs/pgd/4/known_issues.mdx index f203f3f069d..bd46f91504b 100644 --- a/product_docs/docs/pgd/4/known_issues.mdx +++ b/product_docs/docs/pgd/4/known_issues.mdx @@ -2,96 +2,36 @@ title: 'Known issues' --- -This section discusses currently known issues in EDB Postgres Distributed 4. - -## Data Consistency - -Read about [Conflicts](/pgd/4/bdr/conflicts/) to understand -the implications of the asynchronous operation mode in terms of data -consistency. - -## List of issues - -These known issues are tracked in BDR's -ticketing system and are expected to be resolved in a future -release. - -- Performance of HARP in terms of failover and switchover time depends - non-linearly on the latencies between DCS nodes. Which is why - we currently recommend using etcd cluster per region for HARP in case - of EDB Postgres Distributed deployment over multiple regions (typically - the Gold and Platinum layouts). 
TPAexec already sets up the etcd do run - per region cluster for these when `harp_consensus_protocol` option - is set to `etcd` in the `config.yml`. - - It's recommended to increase the `leader_lease_duration` HARP option - (`harp_leader_lease_duration` in TPAexec) for DCS deployments across higher - latency network. - -- If the resolver for the `update_origin_change` conflict - is set to `skip`, `synchronous_commit=remote_apply` is used, and - concurrent updates of the same row are repeatedly applied on two - different nodes, then one of the update statements might hang due - to a deadlock with the BDR writer. As mentioned in the - [Conflicts](/pgd/4/bdr/conflicts/) chapter, `skip` is not the default - resolver for the `update_origin_change` conflict, and this - combination isn't intended to be used in production. It discards - one of the two conflicting updates based on the order of arrival - on that node, which is likely to cause a divergent cluster. - In the rare situation that you do choose to use the `skip` - conflict resolver, note the issue with the use of the - `remote_apply` mode. - -- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. - Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` - disabled. +This section discusses currently known issues in EDB Postgres Distributed 4. These issues are tracked in BDR's +ticketing system and are expected to be resolved in a future release. + +- Performance of HARP in terms of failover and switchover time depends non-linearly on the latencies between DCS nodes. This is why we currently recommend using an etcd cluster per region for HARP in case of EDB Postgres Distributed deployment over multiple regions (typically the Gold and Platinum layouts). TPAexec already sets up etcd to run as a per-region cluster for these layouts when the `harp_consensus_protocol` option is set to `etcd` in `config.yml`.
It's recommended to increase the `leader_lease_duration` HARP option (`harp_leader_lease_duration` in TPAexec) for DCS deployments across higher latency network. + +- If the resolver for the `update_origin_change` conflict is set to `skip`, `synchronous_commit=remote_apply` is used, and concurrent updates of the same row are repeatedly applied on two different nodes, then one of the update statements might hang due to a deadlock with the BDR writer. As mentioned in the [Conflicts](/pgd/4/bdr/conflicts/) chapter, `skip` is not the default resolver for the `update_origin_change` conflict, and this combination isn't intended to be used in production. It discards one of the two conflicting updates based on the order of arrival on that node, which is likely to cause a divergent cluster. In the rare situation that you do choose to use the `skip` conflict resolver, note the issue with the use of the `remote_apply` mode. + +- The Decoding Worker feature doesn't work with CAMO/EAGER/Group Commit. Installations using CAMO/Eager/Group Commit must keep `enable_wal_decoder` disabled. - Decoding Worker works only with the default replication sets. -- Lag control doesn't adjust commit delay in any way on a fully - isolated node, that is, in case all other nodes are unreachable or not - operational. As soon as at least one node is connected, replication - lag control picks up its work and adjusts the BDR commit delay - again. +- Lag control doesn't adjust commit delay in any way on a fully isolated node, that is, in case all other nodes are unreachable or not operational. As soon as at least one node is connected, replication lag control picks up its work and adjusts the BDR commit delay again. -- For time-based lag control, BDR currently uses the lag time (measured - by commit timestamps) rather than the estimated catchup time that's - based on historic apply rate. 
+- For time-based lag control, BDR currently uses the lag time (measured by commit timestamps) rather than the estimated catchup time that's based on historic apply rate. -- Changing the CAMO partners in a CAMO pair isn't currently possible. - It's possible only to add or remove a pair. - Adding or removing a pair doesn't need a restart of Postgres or even a - reload of the configuration. +- Changing the CAMO partners in a CAMO pair isn't currently possible. It's possible only to add or remove a pair. Adding or removing a pair doesn't need a restart of Postgres or even a reload of the configuration. -- Group Commit cannot be combined with [CAMO](/pgd/4/bdr/camo/) or [Eager All Node - replication](/pgd/4/bdr/eager/). Eager Replication currently only works by using the - "global" BDR commit scope. +- Group Commit cannot be combined with [CAMO](/pgd/4/bdr/camo/) or [Eager All Node replication](/pgd/4/bdr/eager/). Eager Replication currently only works by using the "global" BDR commit scope. -- Neither Eager replication nor Group Commit support - `synchronous_replication_availability = 'async'`. +- Neither Eager replication nor Group Commit support `synchronous_replication_availability = 'async'`. -- Group Commit doesn't support a timeout of the - commit after `bdr.global_commit_timeout`. +- Group Commit doesn't support a timeout of the commit after `bdr.global_commit_timeout`. -- Transactions using Eager Replication can't yet execute DDL, - nor do they support explicit two-phase commit. - The TRUNCATE command is allowed. +- Transactions using Eager Replication can't yet execute DDL, nor do they support explicit two-phase commit. The TRUNCATE command is allowed. - Not all DDL can be run when either CAMO or Group Commit is used. 
-- Parallel apply is not currently supported in combination with Group - Commit, please make sure to disable it when using Group Commit by - either setting `num_writers` to 1 for the node group (using - [`bdr.alter_node_group_config`](/pgd/4/bdr/nodes#bdralter_node_group_config)) or - via the GUC `bdr.writers_per_subscription` (see - [Configuration of Generic Replication](/pgd/4/bdr/configuration#generic-replication)). - -- There currently is no protection against altering or removing a commit - scope. Running transactions in a commit scope that is concurrently - being altered or removed can lead to the transaction blocking or - replication stalling completely due to an error on the downstream node - attempting to apply the transaction. Ensure that any transactions - using a specific commit scope have finished before altering or removing it. +- Parallel apply is not currently supported in combination with Group Commit. Make sure to disable it when using Group Commit by either setting `num_writers` to 1 for the node group (using [`bdr.alter_node_group_config`](/pgd/4/bdr/nodes#bdralter_node_group_config)) or via the GUC `bdr.writers_per_subscription` (see [Configuration of Generic Replication](/pgd/4/bdr/configuration#generic-replication)). + +- There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that is concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it. Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file diff --git a/product_docs/docs/pgd/4/limitations.mdx index fdfdcbbde29..b30707b72e8 100644 --- a/product_docs/docs/pgd/4/limitations.mdx +++ b/product_docs/docs/pgd/4/limitations.mdx @@ -3,13 +3,11 @@ title: "Limitations" --- -This section covers design limitations of BDR, that should be taken into account -when planning your deployment. +This section covers design limitations of BDR that should be taken into account when planning your deployment. -## Limits +## Limits on nodes -- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the -32-node recommendation. +- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation. - BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed. @@ -17,11 +15,12 @@ when planning your deployment.
For more information about the consensus mechanism used by EDB Postgres Distributed, see [Architectural details](/pgd/4/architectures/#architecture-details). -- Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. ## Limitations of multiple databases on a single instance -It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. Also, as noted [in the Limits section](#limits), support for multiple databases on the same PGD instance is being deprecated in PGD 5 and will no longer be supported in PGD 6. +Support for using BDR for multiple databases on the same Postgres instance is deprecated beginning with PGD 5 and will no longer be supported with PGD 6. As we extend the capabilities of the product, the additional complexity introduced operationally and functionally is no longer viable in a multi-database design. + +It is best practice and recommended that only one database per PGD instance be configured. The deployment automation with TPA and the tooling such as the CLI and proxy already codify that recommendation. While it is still possible to host up to ten databases in a single instance, this incurs many immediate risks and current limitations: @@ -49,5 +48,4 @@ This is a (non-comprehensive) list of limitations that are expected and are by d - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. 
The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function. -- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. Using synchronous replication to other nodes, including both logical and physical standby is possible. - +- Legacy BDR synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Therefore, nodes that appear in `synchronous_standby_names` must not be part of CAMO, Eager, or Group Commit configuration. \ No newline at end of file diff --git a/product_docs/docs/pgd/4/other_considerations.mdx index 9fd8f4de3f5..f96982165bf 100644 --- a/product_docs/docs/pgd/4/other_considerations.mdx +++ b/product_docs/docs/pgd/4/other_considerations.mdx @@ -4,6 +4,11 @@ title: "Other considerations" Review these other considerations when planning your deployment. +## Data Consistency + +Read about [Conflicts](/pgd/4/bdr/conflicts/) to understand the implications of the asynchronous operation mode in terms of data +consistency. + ## Deployment and sizing considerations For production deployments, EDB recommends a minimum of four cores for each Postgres data node and each logical standby. Witness nodes don't participate in the data replication operation and don't have to meet this requirement. Always size logical standbys exactly like the data nodes to avoid performance degradations in case of a node promotion.
In production deployments, HARP proxy nodes require a minimum of two cores each. EDB recommends detailed benchmarking based on your performance requirements to determine the correct sizing for your environment. EDB’s Professional Services team is available to assist, if needed. @@ -16,9 +21,8 @@ You can deploy single HARP proxy nodes with single data nodes on the same physic ## Clocks and timezones -EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers do not need to be configured with matching timezones, though we do recommend using log_timezone = UTC to ensure the human readable server log is more accessible and comparable. +EDB Postgres Distributed has been designed to operate with nodes in multiple timezones, allowing a truly worldwide database cluster. Individual servers do not need to be configured with matching timezones, though we do recommend using `log_timezone = UTC` to ensure the human readable server log is more accessible and comparable. Server clocks should be synchronized using NTP or other solutions. -Clock synchronization is not critical to performance, as is the case with some other solutions. Clock skew can impact Origin Conflict Detection, though -EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides Row Version Conflict Detection, as described in [Conflict Detection](/pgd/4/bdr/conflicts). +Clock synchronization is not critical to performance, as is the case with some other solutions. Clock skew can impact Origin Conflict Detection, though EDB Postgres Distributed provides controls to report and manage any skew that exists. EDB Postgres Distributed also provides Row Version Conflict Detection, as described in [Conflict Detection](/pgd/4/bdr/conflicts). 
diff --git a/product_docs/docs/pgd/5/known_issues.mdx b/product_docs/docs/pgd/5/known_issues.mdx
index c57c6be7df0..9ae33626d07 100644
--- a/product_docs/docs/pgd/5/known_issues.mdx
+++ b/product_docs/docs/pgd/5/known_issues.mdx
@@ -24,4 +24,5 @@ This section discusses currently known issues in EDB Postgres Distributed 5. The
 
 - There currently is no protection against altering or removing a commit scope. Running transactions in a commit scope that is concurrently being altered or removed can lead to the transaction blocking or replication stalling completely due to an error on the downstream node attempting to apply the transaction. Ensure that any transactions using a specific commit scope have finished before altering or removing it.
 
-Details of other design or implementation [limitations](limitations) are also available.
\ No newline at end of file
+Details of other design or implementation [limitations](limitations) are also available.
+
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 79abe37150f..f2b4b37e0bf 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -45,4 +45,4 @@ This is a (non-comprehensive) list of other limitations that are expected and ar
 
 - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
 
-- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together. Using synchronous replication to other non-PGD nodes, including both logical and physical standby is possible.
+- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.
\ No newline at end of file
diff --git a/product_docs/docs/pgd/5/other_considerations.mdx b/product_docs/docs/pgd/5/other_considerations.mdx
index 286a801f9a3..3a7996607aa 100644
--- a/product_docs/docs/pgd/5/other_considerations.mdx
+++ b/product_docs/docs/pgd/5/other_considerations.mdx
@@ -6,15 +6,11 @@ Review these other considerations when planning your deployment.
 
 ## Data Consistency
 
-Read about [Conflicts](consistency/conflicts/) to understand
-the implications of the asynchronous operation mode in terms of data
-consistency.
+Read about [Conflicts](consistency/conflicts/) to understand the implications of the asynchronous operation mode in terms of data consistency.
 
 ## Deployment
 
-EDB PGD is intended to be deployed in one of a small number of known-good configurations,
-using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach
-and deployment architecture approved by Technical Support.
+EDB PGD is intended to be deployed in one of a small number of known-good configurations, using either [Trusted Postgres Architect](/tpa/latest) or a configuration management approach and deployment architecture approved by Technical Support.
 
 Manual deployment isn't recommended and might not be supported.
 
From cf8c380fc6a3c0b7bbd931c06249ca0600eb4670 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 8 Mar 2023 17:00:16 +0000
Subject: [PATCH 48/50] Updated nodes limitations

---
 product_docs/docs/pgd/5/limitations.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index f2b4b37e0bf..fc58d560248 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -6,7 +6,7 @@ This section covers design limitations of EDB Postgres Distributed (PGD), that s
 
 ## Limits on nodes
 
-- PGD can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
+- PGD can run hundreds of nodes, assuming adequate hardware and network. However, for mesh-based deployments, we generally don’t recommend running more than 48 nodes in one cluster. If extra read scalability is needed beyond the 48-node limit, subscriber-only nodes can be added without adding connections to the mesh network.
 
 - PGD currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
 
From cf3aaebb78601ba18666c9c1c988c102b1d04561 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Wed, 8 Mar 2023 17:04:14 +0000
Subject: [PATCH 49/50] Updated to new text

---
 product_docs/docs/pgd/4/limitations.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pgd/4/limitations.mdx b/product_docs/docs/pgd/4/limitations.mdx
index b30707b72e8..321ba978ad3 100644
--- a/product_docs/docs/pgd/4/limitations.mdx
+++ b/product_docs/docs/pgd/4/limitations.mdx
@@ -7,7 +7,7 @@ This section covers design limitations of BDR, that should be taken into account
 
 ## Limits on nodes
 
-- BDR can run hundreds of nodes on good-enough hardware and network. However, for mesh-based deployments, we generally don't recommend running more than 32 nodes in one cluster. Each master node can be protected by multiple physical or logical standby nodes. There's no specific limit on the number of standby nodes, but typical usage is to have 2–3 standbys per master. Standby nodes don't add connections to the mesh network, so they aren't included in the 32-node recommendation.
+- BDR can run hundreds of nodes, assuming adequate hardware and network. However, for mesh-based deployments, we generally don’t recommend running more than 48 nodes in one cluster. If extra read scalability is needed beyond the 48-node limit, subscriber-only nodes can be added without adding connections to the mesh network.
 
 - BDR currently has a hard limit of no more than 1000 active nodes, as this is the current maximum Raft connections allowed.
 
From 6c1e091d0e5ddc97a6dcacaae32f783b0320a4b8 Mon Sep 17 00:00:00 2001
From: kelpoole <44814688+kelpoole@users.noreply.github.com>
Date: Wed, 8 Mar 2023 10:32:51 -0700
Subject: [PATCH 50/50] Update limitations.mdx

---
 product_docs/docs/pgd/5/limitations.mdx | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index fc58d560248..2b40ec1b509 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -26,7 +26,7 @@ While it is still possible to host up to ten databases in a single instance, thi
 
 - TPAexec assumes one database; additional coding is needed by customers or PS in a post-deploy hook to set up replication for additional databases.
 
-- HARP works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
+- PGD-Proxy works at the Postgres instance level, not at the database level, meaning the leader node will be the same for all databases.
 
 - Each additional database increases the resource requirements on the server. Each one needs its own set of worker processes maintaining replication (e.g. logical workers, WAL senders, and WAL receivers). Each one also needs its own set of connections to other instances in the replication cluster. This might severely impact performance of all databases.
 
@@ -45,4 +45,4 @@ This is a (non-comprehensive) list of other limitations that are expected and ar
 
 - A `galloc` sequence might skip some chunks if the sequence is created in a rolled back transaction and then created again with the same name. This can also occur if it is created and dropped when DDL replication isn't active and then it is created again when DDL replication is active. The impact of the problem is mild, because the sequence guarantees aren't violated. The sequence skips only some initial chunks. Also, as a workaround you can specify the starting value for the sequence as an argument to the `bdr.alter_sequence_set_kind()` function.
 
-- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.
\ No newline at end of file
+- Legacy synchronous replication uses a mechanism for transaction confirmation different from the one used by CAMO, Eager, and Group Commit. The two are not compatible and must not be used together.