From 46219066f5ea0b1ba6b5ece26698f43bbfe81ade Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Wed, 28 Jun 2023 15:09:56 -0400 Subject: [PATCH 01/18] TDE: performance --- product_docs/docs/tde/15/index.mdx | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/product_docs/docs/tde/15/index.mdx b/product_docs/docs/tde/15/index.mdx index 6ec90a36060..748ac06c688 100644 --- a/product_docs/docs/tde/15/index.mdx +++ b/product_docs/docs/tde/15/index.mdx @@ -29,7 +29,6 @@ It encrypts any user data stored in the database system. This encryption is tran TDE encrypts: - - The files underlying tables, sequences, indexes, including TOAST tables and system catalogs, and including all forks. These files are known as *data files*. - The write-ahead log (WAL). @@ -63,7 +62,7 @@ The following aren't encrypted or otherwise disguised by TDE: ### How does TDE affect performance? -Performance is in line with the general overhead for AES. +The performance impact of TDE is low. For details, see the [Transparent Data Encryption Impacts on EDB Postgres Advanced Server 15](https://www.enterprisedb.com/blog/TDE-Postgres-Advanced-Server-15-Launch) blog. ## How does TDE work? From 9b6896ff7df439e8675182929301ac36b78f4f5c Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 5 Jul 2023 12:03:44 -0400 Subject: [PATCH 02/18] first draft of cookbook for adding new platform --- install_template/README.md | 40 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/install_template/README.md b/install_template/README.md index 470d7c3c770..4352fe2bf6d 100644 --- a/install_template/README.md +++ b/install_template/README.md @@ -225,3 +225,43 @@ In a handful of situations it is useful to employ conditionals to include or mod Be wary of over-using these; prefer inheritance structures that push this content down to sufficiently-specific "leaf" templates instead. As a general rule, use conditionals only when the test is much, much simpler than the alternative inheritance structure, and be willing to abandon it when (over time) that simplicity is lost. In particular, avoid the trap of setting a flag in a leaf template and then checking it in a base template: this separates the context from the decision, making it extremely difficult to judge when the test has lost its value. + +#### Adding a new platform + +1. Modify **config.yaml**. For example, when adding the RHEL 9 platform, the following entries were made to each product: + +- name: RHEL 9 + arch: ppc64le + supported versions: [] +- name: AlmaLinux 9 or Rocky Linux 9 + arch: x86_64 + supported versions: [] +- name: RHEL 9 or OL 9 + arch: x86_64 + supported versions: [] + +1. In **templates/platformBase_deploymentConstants.njk**, update the `map_platform` and `map_platform_old` blocks. For example, for RHEL 9, the following lines were added to both blocks of code: + +"AlmaLinux 9 or Rocky Linux 9": "other_linux9", +. +. +. +"RHEL 9 or OL 9": "rhel9", +"RHEL 9": "rhel9", + +1. In the **templates/platformBase** folder, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file to create a new version. + - Update the name to the next version, such as **rhel-9-or-ol-9.njk**. + - Update content as necessary. For example, the file may include a reference to "latest-8.noarch.rpm" which should be updated to "latest-9.noarch.rpm". + -The number of topics that need to be updated will vary, depending on the platform being added. 
For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. + +1. Update **templates/platformBase/ppc64le_index.njk** to add an entry in the navigation block in the front matter. For example, for RHEL 9, this entry was added: + - {{productShortname}}_rhel_9 + +1. Update **templates/platformBase/x86_64_index.njk** to add an entry in the navigation block in the front matter. For example, for RHEL 9, these entries were added: + - {{productShortname}}_rhel_9 + - {{productShortname}}_other_linux_9 + +1. In the **templates/products** folder, for each platform, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file create a new version. + - Update the name to the next version, such as **rhel-9-or-ol-9.njk**. + - In each file, update the entry for `platformBaseTemplate` so it points to the appropriate template, either in the **templates/platformBase** folder or in the current **templates/products** folder. + - Check content to determine if other references require updating. The number of topics that need to be updated will vary, depending on the platform being added. For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. From 53f0341e3d38f940eaf28dcfb02935d87ddd6341 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Thu, 6 Jul 2023 08:54:40 -0400 Subject: [PATCH 03/18] minor edits --- install_template/README.md | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/install_template/README.md b/install_template/README.md index 4352fe2bf6d..47e0f35c083 100644 --- a/install_template/README.md +++ b/install_template/README.md @@ -249,19 +249,20 @@ In particular, avoid the trap of setting a flag in a leaf template and then chec "RHEL 9 or OL 9": "rhel9", "RHEL 9": "rhel9", -1. In the **templates/platformBase** folder, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file to create a new version. +1. In the **templates/platformBase** folder, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied files to create new versions. - Update the name to the next version, such as **rhel-9-or-ol-9.njk**. - - Update content as necessary. For example, the file may include a reference to "latest-8.noarch.rpm" which should be updated to "latest-9.noarch.rpm". - -The number of topics that need to be updated will vary, depending on the platform being added. For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. + - Update content as necessary. For example, the file may include a reference to "latest-8.noarch.rpm", which should be updated to "latest-9.noarch.rpm". + -The number of topics that need to be updated will vary depending on the platform being added. For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. 1. Update **templates/platformBase/ppc64le_index.njk** to add an entry in the navigation block in the front matter. For example, for RHEL 9, this entry was added: - {{productShortname}}_rhel_9 -1. Update **templates/platformBase/x86_64_index.njk** to add an entry in the navigation block in the front matter. For example, for RHEL 9, these entries were added: +1. Update **templates/platformBase/x86_64_index.njk** to add entires in the navigation block in the front matter. 
For example, for RHEL 9, these entries were added: - {{productShortname}}_rhel_9 - {{productShortname}}_other_linux_9 1. In the **templates/products** folder, for each platform, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file create a new version. - Update the name to the next version, such as **rhel-9-or-ol-9.njk**. - In each file, update the entry for `platformBaseTemplate` so it points to the appropriate template, either in the **templates/platformBase** folder or in the current **templates/products** folder. - - Check content to determine if other references require updating. The number of topics that need to be updated will vary, depending on the platform being added. For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. + - Check content to determine if other references require updating. + - The number of topics that need to be updated will vary depending on the platform being added. For RHEL 9, two new topics were created: **rhel-9-or-ol-9.njk** and **almalinux-9-or-rocky-linux-9.njk**. From 0060068aa4de02d5b34f112a09a709501c57aba6 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 12 Jul 2023 14:46:12 -0400 Subject: [PATCH 04/18] wording suggestion from Nidhi Co-authored-by: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> --- install_template/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/install_template/README.md b/install_template/README.md index 47e0f35c083..7d05bc6fa76 100644 --- a/install_template/README.md +++ b/install_template/README.md @@ -261,7 +261,7 @@ In particular, avoid the trap of setting a flag in a leaf template and then chec - {{productShortname}}_rhel_9 - {{productShortname}}_other_linux_9 -1. In the **templates/products** folder, for each platform, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file create a new version. +1. In the **templates/products** folder, for each product, copy existing topics, such as **rhel-8-or-ol-8.njk**, and use the copied file create a new version. - Update the name to the next version, such as **rhel-9-or-ol-9.njk**. - In each file, update the entry for `platformBaseTemplate` so it points to the appropriate template, either in the **templates/platformBase** folder or in the current **templates/products** folder. - Check content to determine if other references require updating. 
From f3b2591c8fd348b938c3c3551fd78c6ce2a7d128 Mon Sep 17 00:00:00 2001 From: francoughlin Date: Thu, 13 Jul 2023 11:31:39 -0400 Subject: [PATCH 05/18] EPAS reorg: Phase 2 Security branch topic restructure Breaking up the Redacting data topic and the EDB*Wrap topic into multiple subtopics; tested/fixed all links in branch; misc edits --- .../02_configuring_sql_protect.mdx | 2 +- .../03_edb_wrap/edb_wrap_key_concepts.mdx | 17 + .../epas_security_guide/03_edb_wrap/index.mdx | 23 + .../obfuscating_source_code.mdx} | 44 +- .../profile_overview.mdx | 2 +- .../epas_security_guide/05_data_redaction.mdx | 450 ------------------ .../creating_a_data_redaction_policy.mdx | 200 ++++++++ .../data_redaction_key_concepts.mdx | 36 ++ .../data_redaction_system_catalogs.mdx | 39 ++ .../05_data_redaction/index.mdx | 15 + .../modifying_a_data_redaction_policy.mdx | 130 +++++ .../removing_a_data_redaction_policy.mdx | 55 +++ ...audit_logging_configuration_parameters.mdx | 2 +- .../02_selecting_sql_statements_to_audit.mdx | 2 + .../04_audit_log_file.mdx | 2 + .../08_audit_log_archiving.mdx | 4 + .../05_edb_audit_logging/index.mdx | 2 +- 17 files changed, 536 insertions(+), 489 deletions(-) create mode 100644 product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/edb_wrap_key_concepts.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/index.mdx rename product_docs/docs/epas/15/epas_security_guide/{03_edb_wrap.mdx => 03_edb_wrap/obfuscating_source_code.mdx} (71%) delete mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/creating_a_data_redaction_policy.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_key_concepts.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_system_catalogs.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/index.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/modifying_a_data_redaction_policy.mdx create mode 100644 product_docs/docs/epas/15/epas_security_guide/05_data_redaction/removing_a_data_redaction_policy.mdx diff --git a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx index d2b03eeca56..ae809754d3b 100644 --- a/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/02_protecting_against_sql_injection_attacks/02_configuring_sql_protect.mdx @@ -69,7 +69,7 @@ edb_sql_protect.max_queries_to_save = 5000 systemctl restart edb-as-14 ``` - **On Windows:** Use the Windows Services applet to restart the service named `edb-as-14`. + - **On Windows:** Use the Windows Services applet to restart the service named `edb-as-14`. 3. For each database that you want to protect from SQL injection attacks, connect to the database as a superuser (either `enterprisedb` or `postgres`, depending on your installation options). Then run the script `sqlprotect.sql`, located in the `share/contrib` subdirectory of your EDB Postgres Advanced Server home directory. The script creates the SQL/Protect database objects in a schema named `sqlprotect`. 
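As a concrete illustration of step 3, the following sketch loads the SQL/Protect objects into a single database from psql. The database name, role, and installation path are assumptions for a default Linux installation of EDB Postgres Advanced Server; substitute the values for your environment.

```sql
-- Connect to the database you want to protect as a superuser.
-- The database name (edb) and role (enterprisedb) are assumptions
-- based on a default installation; adjust as needed.
\c edb enterprisedb

-- Load the SQL/Protect objects into the sqlprotect schema.
-- The path assumes the default EDB Postgres Advanced Server home
-- directory on Linux (/usr/edb/as15); adjust it for your installation.
\i /usr/edb/as15/share/contrib/sqlprotect.sql
```
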
diff --git a/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/edb_wrap_key_concepts.mdx b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/edb_wrap_key_concepts.mdx new file mode 100644 index 00000000000..7bfde479819 --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/edb_wrap_key_concepts.mdx @@ -0,0 +1,17 @@ +--- +title: "EDB*Wrap key concepts" +description: "Describes the benefits and basic operation of the EDB*Wrap feature" +--- + +The EDB\*Wrap program translates a plaintext file that contains SPL or PL/pgSQL source code into a file that contains the same code in a form that's nearly impossible to read. Once you have the obfuscated form of the code, you can send that code to the PostgreSQL server, and the server stores those programs in obfuscated form. While EDB\*Wrap does obscure code, table definitions are still exposed. + +Everything you wrap is stored in obfuscated form. If you wrap an entire package, the package body source, as well as the prototypes contained in the package header and the functions and procedures contained in the package body, are stored in obfuscated form. + +If you wrap a `CREATE PACKAGE` statement, you hide the package API from other developers. You might want to wrap the package body but not the package header so users can see the package prototypes and other public variables that are defined in the package body. To allow users to see the prototypes the package contains, use EDBWrap to obfuscate only the `CREATE PACKAGE BODY` statement in the `edbwrap` input file, omitting the `CREATE PACKAGE` statement. The package header source is stored as plaintext, while the package body source and package functions and procedures are obfuscated. + +![image](../../images/epas_tools_utility_edb_wrap.png) + +You can't unwrap or debug wrapped source code and programs. Reverse engineering is possible but very difficult. + +The entire source file is wrapped into one unit. Any `psql` meta-commands included in the wrapped file aren't recognized when the file is executed. Executing an obfuscated file that contains a psql meta-command causes a syntax error. `edbwrap` doesn't validate SQL source code. If the plaintext form contains a syntax error, `edbwrap` doesn't report it. Instead, the server reports an error and aborts the entire file when you try to execute the obfuscated form. + diff --git a/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/index.mdx b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/index.mdx new file mode 100644 index 00000000000..9d04f25dd22 --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/index.mdx @@ -0,0 +1,23 @@ +--- +title: "Protecting proprietary source code" +description: "Describes how to use the EDB*Wrap utility to obfuscate proprietary source code and programs" +indexCards: simple +navigation: +- edb_wrap_key_concepts +- obfuscating_source_code +legacyRedirectsGenerated: + # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. 
+ - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.17.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.18.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.316.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.317.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.077.html" +redirects: + - ../../epas_compat_tools_guide/03_edb_wrap #generated for docs/epas/reorg-role-use-case-mode +--- + + + +The EDB\*Wrap utility protects proprietary source code and programs like functions, stored procedures, triggers, and packages from unauthorized scrutiny. + + diff --git a/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap.mdx b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/obfuscating_source_code.mdx similarity index 71% rename from product_docs/docs/epas/15/epas_security_guide/03_edb_wrap.mdx rename to product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/obfuscating_source_code.mdx index c60440919bc..f059bcbedef 100644 --- a/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/03_edb_wrap/obfuscating_source_code.mdx @@ -1,41 +1,12 @@ --- -title: "Protecting proprietary source code" -description: "Describes how to use the EDB*Wrap utility to obfuscate proprietary source code and programs" -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.17.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.18.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.316.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.317.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.077.html" -redirects: - - ../../epas_compat_tools_guide/03_edb_wrap #generated for docs/epas/reorg-role-use-case-mode +title: "Obfuscating source code" +description: "Describes how to use the EDB*Wrap utility protects proprietary source code and programs" --- - - -The EDB\*Wrap utility protects proprietary source code and programs like functions, stored procedures, triggers, and packages from unauthorized scrutiny. - -## Overview of the utility - -The EDB\*Wrap program translates a plaintext file that contains SPL or PL/pgSQL source code into a file that contains the same code in a form that's nearly impossible to read. 
Once you have the obfuscated form of the code, you can send that code to the PostgreSQL server, and the server stores those programs in obfuscated form. While EDB\*Wrap does obscure code, table definitions are still exposed. - -Everything you wrap is stored in obfuscated form. If you wrap an entire package, the package body source, as well as the prototypes contained in the package header and the functions and procedures contained in the package body, are stored in obfuscated form. - -If you wrap a `CREATE PACKAGE` statement, you hide the package API from other developers. You might want to wrap the package body but not the package header so users can see the package prototypes and other public variables that are defined in the package body. To allow users to see the prototypes the package contains, use EDBWrap to obfuscate only the `CREATE PACKAGE BODY` statement in the `edbwrap` input file, omitting the `CREATE PACKAGE` statement. The package header source is stored as plaintext, while the package body source and package functions and procedures are obfuscated. - -![image](../images/epas_tools_utility_edb_wrap.png) - -You can't unwrap or debug wrapped source code and programs. Reverse engineering is possible but very difficult. - -The entire source file is wrapped into one unit. Any `psql` meta-commands included in the wrapped file aren't recognized when the file is executed. Executing an obfuscated file that contains a psql meta-command causes a syntax error. `edbwrap` doesn't validate SQL source code. If the plaintext form contains a syntax error, `edbwrap` doesn't report it. Instead, the server reports an error and aborts the entire file when you try to execute the obfuscated form. - - - -## Using EDB\*Wrap to obfuscate source code - EDB\*Wrap is a command line utility that accepts a single input source file, obfuscates the contents, and returns a single output file. When you invoke the `edbwrap` utility, you must provide the name of the file that contains the source code to obfuscate. You can also specify the name of the file where `edbwrap` writes the obfuscated form of the code. +## Overview of the command-line styles + `edbwrap` offers three different command-line styles. The first style is compatible with Oracle's `wrap` utility: ```shell @@ -65,9 +36,9 @@ In summary, to obfuscate code with EDB\*Wrap, you: 2. Invoke EDB\*Wrap to obfuscate the code. 3. Import the file as if it were in plaintext form. -The following sequence shows how to use `edbwrap`. +## Creating the source code file -Create the source code for the `list_emp` procedure in plaintext form: +To use the EDB\*Wrap utility, create the source code for the `list_emp` procedure in plaintext form: ```sql [bash] cat listemp.sql @@ -133,6 +104,7 @@ __EDBwrapped__ edb=# quit ``` +## Invoking EDB\*Wrap Ofuscate the plaintext file with EDB\*Wrap: @@ -162,6 +134,8 @@ $__EDBwrapped__$ The second line of the wrapped file contains an encoding name. In this case, the encoding is UTF8. When you obfuscate a file, `edbwrap` infers the encoding of the input file by examining the locale. For example, if you're running `edbwrap` while your locale is set to `en_US.utf8`, `edbwrap` assumes that the input file is encoded in UTF8. Be sure to examine the output file after running `edbwrap`. If the locale contained in the wrapped file doesn't match the encoding of the input file, change your locale and rewrap the input file. 
+## Importing the obfuscated code to the PostgreSQL server + You can import the obfuscated code to the PostgreSQL server using the same tools that work with plaintext code: ```sql diff --git a/product_docs/docs/epas/15/epas_security_guide/04_profile_management/profile_overview.mdx b/product_docs/docs/epas/15/epas_security_guide/04_profile_management/profile_overview.mdx index 9121bf06425..c400f84cabc 100644 --- a/product_docs/docs/epas/15/epas_security_guide/04_profile_management/profile_overview.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/04_profile_management/profile_overview.mdx @@ -1,5 +1,5 @@ --- -title: "Profile overview" +title: "Profile management key concepts" description: "Provides an overview of how to manage user profiles" --- diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx deleted file mode 100644 index 283a7d2cf61..00000000000 --- a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction.mdx +++ /dev/null @@ -1,450 +0,0 @@ ---- -title: "Redacting data" -description: "Use the EPAS Data redaction feature to limit exposure to sensitive data by dynamically changing the data as it is displayed for certain users" ---- - - - -*Data redaction* limits sensitive data exposure by dynamically changing data as it's displayed for certain users. - -For example, a social security number (SSN) is stored as `021-23-9567`. Privileged users can see the full SSN, while other users see only the last four digits: `xxx-xx-9567`. - -You implement data redaction by defining a function for each field to which to apply redaction. The function returns the value to display to the users subject to the data redaction. - -For example, for the SSN field, the redaction function returns `xxx-xx-9567` for an input SSN of `021-23-9567`. - -For a salary field, a redaction function always returns `$0.00`, regardless of the input salary value. - -These functions are then incorporated into a redaction policy by using the `CREATE REDACTION POLICY` command. In addition to other options, this command specifies: - -- The table on which the policy applies -- The table columns affected by the specified redaction functions -- Expressions to determine the affect session users - -The `edb_data_redaction` parameter in the `postgresql.conf` file then determines whether to apply data redaction. - -By default, the parameter is enabled, so the redaction policy is in effect. The following occurs: - -- Superusers and the table owner bypass data redaction and see the original data. -- All other users have the redaction policy applied and see the reformatted data. - -If the parameter is disabled by having it set to `FALSE` during the session, then the following occurs: - -- Superusers and the table owner bypass data redaction and see the original data. -- All other users get an error. - -You can change a redaction policy using the `ALTER REDACTION POLICY` command. Or, you can eliminate it using the `DROP REDACTION POLICY` command. - -## CREATE REDACTION POLICY - -`CREATE REDACTION POLICY` defines a new data redaction policy for a table. - -### Synopsis - -```sql -CREATE REDACTION POLICY ON - [ FOR ( ) ] - [ ADD [ COLUMN ] USING - [ WITH OPTIONS ( [ ] - [, ] ) - ] - ] [, ...] 
-``` - -Where `redaction_option` is: - -```sql -{ SCOPE | - EXCEPTION } -``` - -### Description - -The `CREATE REDACTION POLICY` command defines a new column-level security policy for a table by redacting column data using a redaction function. A newly created data redaction policy is enabled by default. You can disable the policy using `ALTER REDACTION POLICY ... DISABLE`. - -`FOR ( expression )` - - This form adds a redaction policy expression. - -`ADD [ COLUMN ]` - - This optional form adds a column of the table to the data redaction policy. The `USING` clause specifies a redaction function expression. You can use multiple `ADD [ COLUMN ]` forms if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope or an exception to the data redaction policy to apply. If you don't specify the scope or exception, the default value for scope is `query` and for exception is `none`. - -### Parameters - -`name` - - The name of the data redaction policy to create. This must be distinct from the name of any other existing data redaction policy for the table. - -`table_name` - - The optionally schema-qualified name of the table the data redaction policy applies to. - -`expression` - - The data redaction policy expression. No redaction is applied if this expression evaluates to false. - -`column_name` - - Name of the existing column of the table on which the data redaction policy is being created. - -`funcname_clause` - - The data redaction function that decides how to compute the redacted column value. Return type of the redaction function must be the same as the column type on which the data redaction policy is being added. - -`scope_value` - - The scope identifies the query part to apply redaction for the column. Scope value can be `query`, `top_tlist`, or `top_tlist_or_error`. If the scope is `query`, then the redaction is applied on the column regardless of where it appears in the query. If the scope is `top_tlist`, then the redaction is applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is the same as the `top_tlist` but throws an errors when the column appears anywhere else in the query. - -`exception_value` - - The exception identifies the query part where redaction is exempted. Exception value can be `none`, `equal`, or `leakproof`. If exception is `none`, then there's no exemption. If exception is `equal`, then the column isn't redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. - -### Notes - -You must be the owner of a table to create or change data redaction policies for it. - -The superuser and the table owner are exempt from the data redaction policy. - -### Examples - -This example shows how you can use this feature in production environments. 
- -Create the components for a data redaction policy on the `employees` table: - -```sql -CREATE TABLE employees ( - id integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, - name varchar(40) NOT NULL, - ssn varchar(11) NOT NULL, - phone varchar(10), - birthday date, - salary money, - email varchar(100) -); - --- Insert some data -INSERT INTO employees (name, ssn, phone, birthday, salary, email) -VALUES -( 'Sally Sample', '020-78-9345', '5081234567', '1961-02-02', 51234.34, -'sally.sample@enterprisedb.com'), -( 'Jane Doe', '123-33-9345', '6171234567', '1963-02-14', 62500.00, -'jane.doe@gmail.com'), -( 'Bill Foo', '123-89-9345', '9781234567','1963-02-14', 45350, -'william.foe@hotmail.com'); - --- Create a user hr who can see all the data in employees -CREATE USER hr; --- Create a normal user -CREATE USER alice; -GRANT ALL ON employees TO hr, alice; - --- Create redaction function in which actual redaction logic resides -CREATE OR REPLACE FUNCTION redact_ssn (ssn varchar(11)) RETURN varchar(11) IS -BEGIN - /* replaces 020-12-9876 with xxx-xx-9876 */ - return overlay (ssn placing 'xxx-xx' from 1) ; -END; - -CREATE OR REPLACE FUNCTION redact_salary () RETURN money IS BEGIN return -0::money; -END; -``` - -Create a data redaction policy on `employees` to redact column `ssn` and `salary` with default scope and exception. Column `ssn` must be accessible in equality condition. The redaction policy is exempt for the `hr` user. - -```sql -CREATE REDACTION POLICY redact_policy_personal_info ON employees FOR (session_user != 'hr') -ADD COLUMN ssn USING redact_ssn(ssn) WITH OPTIONS (SCOPE query, EXCEPTION equal), -ADD COLUMN salary USING redact_salary(); -``` - -The visible data for the `hr` user is: - -```sql --- hr can view all columns data -edb=# \c edb hr -edb=> SELECT * FROM employees; -__OUTPUT__ - id | name | ssn | phone | birthday | - salary | email -----+--------------+-------------+------------+--------------------+--- ---+--------------------- - 1 | Sally Sample | 020-78-9345 | 5081234567 | 02-FEB-61 00:00:00 | - $51,234.34 | sally.sample@enterprisedb.com - 2 | Jane Doe | 123-33-9345 | 6171234567 | 14-FEB-63 00:00:00 | - $62,500.00 | jane.doe@gmail.com - 3 | Bill Foo | 123-89-9345 | 9781234567 | 14-FEB-63 00:00:00 | - $45,350.00 | william.foe@hotmail.com -(3 rows) -``` - -The visible data for the normal user `alice` is: - -```sql --- Normal user cannot see salary and ssn number. 
-edb=> \c edb alice -edb=> SELECT * FROM employees; -__OUTPUT__ -id | name | ssn | phone | birthday | salary | -email -----+--------------+-------------+------------+--------------------+--------+- ------------------------------- - 1 | Sally Sample | xxx-xx-9345 | 5081234567 | 02-FEB-61 00:00:00 | $0.00 | - sally.sample@enterprisedb.com - 2 | Jane Doe | xxx-xx-9345 | 6171234567 | 14-FEB-63 00:00:00 | $0.00 | - jane.doe@gmail.com - 3 | Bill Foo | xxx-xx-9345 | 9781234567 | 14-FEB-63 00:00:00 | $0.00 | - william.foe@hotmail.com -(3 rows) -``` - -But `ssn` data is accessible when used for equality check due to the `exception_value` setting: - -```sql --- Get ssn number starting from 123 -edb=> SELECT * FROM employees WHERE substring(ssn from 0 for 4) = '123'; -__OUTPUT__ - id | name | ssn | phone | birthday | salary | - email -----+----------+-------------+------------+--------------------+--------+----- --------------------- - 2 | Jane Doe | xxx-xx-9345 | 6171234567 | 14-FEB-63 00:00:00 | $0.00 | - jane.doe@gmail.com - 3 | Bill Foo | xxx-xx-9345 | 9781234567 | 14-FEB-63 00:00:00 | $0.00 | - william.foe@hotmail.com -(2 rows) -``` - -### Caveats - -- The data redaction policies created on inheritance hierarchies aren't cascaded. For example, if the data redaction policy is created for a parent, it isn't applied to the child table that inherits it, and vice versa. A user with access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/ddl-inherit.html). - -- If the superuser or the table owner created any materialized view on the table and provided the access rights `GRANT SELECT` on the table and the materialized view to any non-superuser, then the non-superuser can access the non-redacted data through the materialized view. - -- The objects accessed in the redaction function body must be schema qualified. Otherwise `pg_dump` might fail. - -### Compatibility - -`CREATE REDACTION POLICY` is an EDB extension. - -### See also - -`ALTER REDACTION POLICY, DROP REDACTION POLICY` - -## ALTER REDACTION POLICY - -`ALTER REDACTION POLICY` changes the definition of data redaction policy for a table. - -### Synopsis - -```sql -ALTER REDACTION POLICY ON RENAME TO - -ALTER REDACTION POLICY ON FOR ( ) - -ALTER REDACTION POLICY ON { ENABLE | DISABLE} - -ALTER REDACTION POLICY ON - ADD [ COLUMN ] USING - [ WITH OPTIONS ( [ ] - [, ] ) - ] - -ALTER REDACTION POLICY ON - MODIFY [ COLUMN ] - { - [ USING ] - | - [ WITH OPTIONS ( [ ] - [, ] ) - ] - } - -ALTER REDACTION POLICY ON - DROP [ COLUMN ] -``` - -Where `redaction_option` is: - -```sql -{ SCOPE | - EXCEPTION } -``` - -### Description - -`ALTER REDACTION POLICY` changes the definition of an existing data redaction policy. - -To use `ALTER REDACTION POLICY`, you must own the table that the data redaction policy applies to. - -`FOR ( expression )` - - This form adds or replaces the data redaction policy expression. - -`ENABLE` - - Enables the previously disabled data redaction policy for a table. - -`DISABLE` - - Disables the data redaction policy for a table. - -`ADD [ COLUMN ]` - - This form adds a column of the table to the existing redaction policy. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. - -`MODIFY [ COLUMN ]` - - This form modifies the data redaction policy on the column of the table. You can update the redaction function clause or the redaction options for the column. 
The `USING` clause specifies the redaction function expression to update. The `WITH OPTIONS ( ... )` clause specifies the scope or the exception. For more details on the redaction function clause, the redaction scope, and the redaction exception, see [`CREATE REDACTION POLICY`](#create-redaction-policy). - -`DROP [ COLUMN ]` - - This form removes the column of the table from the data redaction policy. - -### Parameters - -`name` - - The name of an existing data redaction policy to alter. - -`table_name` - - The optionally schema-qualified name of the table that the data redaction policy is on. - -`new_name` - - The new name for the data redaction policy. This must be distinct from the name of any other existing data redaction policy for the table. - -`expression` - - The data redaction policy expression. - -`column_name` - - Name of existing column of the table on which the data redaction policy is being altered or dropped. - -`funcname_clause` - - The data redaction function expression for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for details. - -`scope_value` - - The scope identifies the query part to apply redaction for the column. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details. - -`exception_value` - - The exception identifies the query part where redaction are exempted. See [`CREATE REDACTION POLICY`](#create-redaction-policy) for the details. - -### Examples - -Update the data redaction policy called `redact_policy_personal_info` on the table named `employees`: - -```sql -ALTER REDACTION POLICY redact_policy_personal_info ON employees -FOR (session_user != 'hr' AND session_user != 'manager'); -``` - -To update the data redaction function for the column `ssn` in the same policy: - -```sql -ALTER REDACTION POLICY redact_policy_personal_info ON employees -MODIFY COLUMN ssn USING redact_ssn_new(ssn); -``` - -### Compatibility - -`ALTER REDACTION POLICY` is an EDB extension. - -### See also - -`CREATE REDACTION POLICY, DROP REDACTION POLICY` - -## DROP REDACTION POLICY - -`DROP REDACTION POLICY` removes a data redaction policy from a table. - -### Synopsis - -```sql -DROP REDACTION POLICY [ IF EXISTS ] ON - [ CASCADE | RESTRICT ] -``` - -### Description - -`DROP REDACTION POLICY` removes the specified data redaction policy from the table. - -To use `DROP REDACTION POLICY`, you must own the table that the redaction policy applies to. - -### Parameters - -`IF EXISTS` - - Don't throw an error if the data redaction policy doesn't exist. A notice is issued in this case. - -`name` - - The name of the data redaction policy to drop. - -`table_name` - - The optionally schema-qualified name of the table that the data redaction policy is on. - -`CASCADE` - -`RESTRICT` - - These keywords don't have any effect, as there are no dependencies on the data redaction policies. - -### Examples - -To drop the data redaction policy called `redact_policy_personal_info` on the table named `employees`: - -```sql -DROP REDACTION POLICY redact_policy_personal_info ON employees; -``` - -### Compatibility - -`DROP REDACTION POLICY` is an EDB extension. - -### See also - -`CREATE REDACTION POLICY, ALTER REDACTION POLICY` - -## System catalogs - -System catalogs store the redaction policy information. - -### edb_redaction_column - -The `edb_redaction_column` system catalog stores information about the data redaction policy attached to the columns of a table. 
- -| Column | Type | References | Description | -| ------------- | -------------- | -------------------------- | --------------------------------------------------------------------------- | -| `oid` | `oid` | | Row identifier (hidden attribute, must be explicitly selected) | -| `rdpolicyid` | `oid` | `edb_redaction_policy.oid` | The data redaction policy that applies to the described column | -| `rdrelid` | `oid` | `pg_class.oid` | The table that the described column belongs to | -| `rdattnum` | `int2` | `pg_attribute.attnum` | The number of the described column | -| `rdscope` | `int2` | | The redaction scope: `1` = query, `2` = top_tlist, `4` = top_tlist_or_error | -| `rdexception` | `int2` | | The redaction exception: `8` = none, `16` = equal, `32` = leakproof | -| `rdfuncexpr` | `pg_node_tree` | | Data redaction function expression | - -!!! Note - The described column is redacted if the redaction policy `edb_redaction_column.rdpolicyid` on the table is enabled and the redaction policy expression `edb_redaction_policy.rdexpr` evaluates to `true`. - -### edb_redaction_policy - -The catalog `edb_redaction_policy` stores information about the redaction policies for tables. - -| Column | Type | References | Description | -| ---------- | -------------- | -------------- | -------------------------------------------------------------- | -| `oid` | `oid` | | Row identifier (hidden attribute, must be explicitly selected) | -| `rdname` | `name` | | The name of the data redaction policy | -| `rdrelid` | `oid` | `pg_class.oid` | The table to which the data redaction policy applies | -| `rdenable` | `boolean` | | Is the data redaction policy enabled? | -| `rdexpr` | `pg_node_tree` | | The data redaction policy expression | - -!!! Note - The data redaction policy applies for the table if it's enabled and the expression ever evaluated true. diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/creating_a_data_redaction_policy.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/creating_a_data_redaction_policy.mdx new file mode 100644 index 00000000000..03b3f1656fb --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/creating_a_data_redaction_policy.mdx @@ -0,0 +1,200 @@ +--- +title: "Creating a data redaction policy" +description: "How to use the CREATE REDACTION POLICY command to define a new data redaction policy for a table" +--- + +The `CREATE REDACTION POLICY` command defines a new data redaction policy for a table. + +## Synopsis + +```sql +CREATE REDACTION POLICY ON + [ FOR ( ) ] + [ ADD [ COLUMN ] USING + [ WITH OPTIONS ( [ ] + [, ] ) + ] + ] [, ...] +``` + +Where `redaction_option` is: + +```sql +{ SCOPE | + EXCEPTION } +``` + +## Description + +The `CREATE REDACTION POLICY` command defines a new column-level security policy for a table by redacting column data using a redaction function. A newly created data redaction policy is enabled by default. You can disable the policy using `ALTER REDACTION POLICY ... DISABLE`. + +`FOR ( expression )` + + This form adds a redaction policy expression. + +`ADD [ COLUMN ]` + + This optional form adds a column of the table to the data redaction policy. The `USING` clause specifies a redaction function expression. You can use multiple `ADD [ COLUMN ]` forms if you want to add multiple columns of the table to the data redaction policy being created. The optional `WITH OPTIONS ( ... )` clause specifies a scope or an exception to the data redaction policy to apply. 
If you don't specify the scope or exception, the default value for scope is `query` and for exception is `none`. + +## Parameters + +`name` + + The name of the data redaction policy to create. This must be distinct from the name of any other existing data redaction policy for the table. + +`table_name` + + The optionally schema-qualified name of the table the data redaction policy applies to. + +`expression` + + The data redaction policy expression. No redaction is applied if this expression evaluates to false. + +`column_name` + + Name of the existing column of the table on which the data redaction policy is being created. + +`funcname_clause` + + The data redaction function that decides how to compute the redacted column value. Return type of the redaction function must be the same as the column type on which the data redaction policy is being added. + +`scope_value` + + The scope identifies the query part to apply redaction for the column. Scope value can be `query`, `top_tlist`, or `top_tlist_or_error`. If the scope is `query`, then the redaction is applied on the column regardless of where it appears in the query. If the scope is `top_tlist`, then the redaction is applied on the column only when it appears in the query’s top target list. If the scope is `top_tlist_or_error`, the behavior is the same as the `top_tlist` but throws an errors when the column appears anywhere else in the query. + +`exception_value` + + The exception identifies the query part where redaction is exempted. Exception value can be `none`, `equal`, or `leakproof`. If exception is `none`, then there's no exemption. If exception is `equal`, then the column isn't redacted when used in an equality test. If exception is `leakproof`, the column isn't redacted when a leakproof function is applied to it. + +## Notes + +You must be the owner of a table to create or change data redaction policies for it. + +The superuser and the table owner are exempt from the data redaction policy. + +## Examples + +This example shows how you can use this feature in production environments. + +Create the components for a data redaction policy on the `employees` table: + +```sql +CREATE TABLE employees ( + id integer GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, + name varchar(40) NOT NULL, + ssn varchar(11) NOT NULL, + phone varchar(10), + birthday date, + salary money, + email varchar(100) +); + +-- Insert some data +INSERT INTO employees (name, ssn, phone, birthday, salary, email) +VALUES +( 'Sally Sample', '020-78-9345', '5081234567', '1961-02-02', 51234.34, +'sally.sample@enterprisedb.com'), +( 'Jane Doe', '123-33-9345', '6171234567', '1963-02-14', 62500.00, +'jane.doe@gmail.com'), +( 'Bill Foo', '123-89-9345', '9781234567','1963-02-14', 45350, +'william.foe@hotmail.com'); + +-- Create a user hr who can see all the data in employees +CREATE USER hr; +-- Create a normal user +CREATE USER alice; +GRANT ALL ON employees TO hr, alice; + +-- Create redaction function in which actual redaction logic resides +CREATE OR REPLACE FUNCTION redact_ssn (ssn varchar(11)) RETURN varchar(11) IS +BEGIN + /* replaces 020-12-9876 with xxx-xx-9876 */ + return overlay (ssn placing 'xxx-xx' from 1) ; +END; + +CREATE OR REPLACE FUNCTION redact_salary () RETURN money IS BEGIN return +0::money; +END; +``` + +Create a data redaction policy on `employees` to redact column `ssn` and `salary` with default scope and exception. Column `ssn` must be accessible in equality condition. The redaction policy is exempt for the `hr` user. 
+ +```sql +CREATE REDACTION POLICY redact_policy_personal_info ON employees FOR (session_user != 'hr') +ADD COLUMN ssn USING redact_ssn(ssn) WITH OPTIONS (SCOPE query, EXCEPTION equal), +ADD COLUMN salary USING redact_salary(); +``` + +The visible data for the `hr` user is: + +```sql +-- hr can view all columns data +edb=# \c edb hr +edb=> SELECT * FROM employees; +__OUTPUT__ + id | name | ssn | phone | birthday | + salary | email +----+--------------+-------------+------------+--------------------+--- +--+--------------------- + 1 | Sally Sample | 020-78-9345 | 5081234567 | 02-FEB-61 00:00:00 | + $51,234.34 | sally.sample@enterprisedb.com + 2 | Jane Doe | 123-33-9345 | 6171234567 | 14-FEB-63 00:00:00 | + $62,500.00 | jane.doe@gmail.com + 3 | Bill Foo | 123-89-9345 | 9781234567 | 14-FEB-63 00:00:00 | + $45,350.00 | william.foe@hotmail.com +(3 rows) +``` + +The visible data for the normal user `alice` is: + +```sql +-- Normal user cannot see salary and ssn number. +edb=> \c edb alice +edb=> SELECT * FROM employees; +__OUTPUT__ +id | name | ssn | phone | birthday | salary | +email +----+--------------+-------------+------------+--------------------+--------+- +------------------------------ + 1 | Sally Sample | xxx-xx-9345 | 5081234567 | 02-FEB-61 00:00:00 | $0.00 | + sally.sample@enterprisedb.com + 2 | Jane Doe | xxx-xx-9345 | 6171234567 | 14-FEB-63 00:00:00 | $0.00 | + jane.doe@gmail.com + 3 | Bill Foo | xxx-xx-9345 | 9781234567 | 14-FEB-63 00:00:00 | $0.00 | + william.foe@hotmail.com +(3 rows) +``` + +But `ssn` data is accessible when used for equality check due to the `exception_value` setting: + +```sql +-- Get ssn number starting from 123 +edb=> SELECT * FROM employees WHERE substring(ssn from 0 for 4) = '123'; +__OUTPUT__ + id | name | ssn | phone | birthday | salary | + email +----+----------+-------------+------------+--------------------+--------+----- +-------------------- + 2 | Jane Doe | xxx-xx-9345 | 6171234567 | 14-FEB-63 00:00:00 | $0.00 | + jane.doe@gmail.com + 3 | Bill Foo | xxx-xx-9345 | 9781234567 | 14-FEB-63 00:00:00 | $0.00 | + william.foe@hotmail.com +(2 rows) +``` + +## Caveats + +- The data redaction policies created on inheritance hierarchies aren't cascaded. For example, if the data redaction policy is created for a parent, it isn't applied to the child table that inherits it, and vice versa. A user with access to these child tables can see the non-redacted data. For information about inheritance hierarchies, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/ddl-inherit.html). + +- If the superuser or the table owner created any materialized view on the table and provided the access rights `GRANT SELECT` on the table and the materialized view to any non-superuser, then the non-superuser can access the non-redacted data through the materialized view. + +- The objects accessed in the redaction function body must be schema qualified. Otherwise `pg_dump` might fail. + +## Compatibility + +`CREATE REDACTION POLICY` is an EDB extension. 
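As a quick functional check to complement the examples earlier in this topic, the following sketch shows how the `edb_data_redaction` configuration parameter interacts with the new policy for a non-privileged session. It assumes the `employees` table, the `redact_policy_personal_info` policy, and the `alice` user created in the examples. The behavior when the parameter is turned off (an error for ordinary users, while superusers and the table owner still see the original data) follows the description in the data redaction key concepts topic; whether an ordinary user is permitted to change the parameter at all depends on your configuration.

```sql
-- As the non-privileged user alice, redaction is applied by default.
\c edb alice
SHOW edb_data_redaction;                  -- expected: on
SELECT name, ssn, salary FROM employees;  -- ssn and salary appear redacted

-- Turning the parameter off in this session does not reveal the data.
-- Per the key concepts description, ordinary users get an error instead;
-- only superusers and the table owner see the original values.
SET edb_data_redaction TO off;
SELECT name, ssn, salary FROM employees;  -- fails with an error for alice
```
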
+ +## See also + +[`ALTER REDACTION POLICY`](modifying_a_data_redaction_policy), [`DROP REDACTION POLICY`](removing_a_data_redaction_policy) + diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_key_concepts.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_key_concepts.mdx new file mode 100644 index 00000000000..1b150518ae8 --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_key_concepts.mdx @@ -0,0 +1,36 @@ +--- +title: "Data redaction key concepts" +description: "Describes the benefits and basic operation of the data redaction feature" +--- + + + +The DB Postgres Advanced Server *Data redaction* feature limits sensitive data exposure by dynamically changing data as it's displayed for certain users. + +For example, a social security number (SSN) is stored as `021-23-9567`. Privileged users can see the full SSN, while other users see only the last four digits: `xxx-xx-9567`. + +You implement data redaction by defining a function for each field to which to apply redaction. The function returns the value to display to the users subject to the data redaction. + +For example, for the SSN field, the redaction function returns `xxx-xx-9567` for an input SSN of `021-23-9567`. + +For a salary field, a redaction function always returns `$0.00`, regardless of the input salary value. + +These functions are then incorporated into a redaction policy by using the `CREATE REDACTION POLICY` command. In addition to other options, this command specifies: + +- The table on which the policy applies +- The table columns affected by the specified redaction functions +- Expressions to determine the affect session users + +The `edb_data_redaction` parameter in the `postgresql.conf` file then determines whether to apply data redaction. + +By default, the parameter is enabled, so the redaction policy is in effect. The following occurs: + +- Superusers and the table owner bypass data redaction and see the original data. +- All other users have the redaction policy applied and see the reformatted data. + +If the parameter is disabled by having it set to `FALSE` during the session, then the following occurs: + +- Superusers and the table owner bypass data redaction and see the original data. +- All other users get an error. + +You can change a redaction policy using the `ALTER REDACTION POLICY` command. Or, you can eliminate it using the `DROP REDACTION POLICY` command. diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_system_catalogs.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_system_catalogs.mdx new file mode 100644 index 00000000000..e3ccc9b047d --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/data_redaction_system_catalogs.mdx @@ -0,0 +1,39 @@ +--- +title: "Data redaction system catalogs" +description: "Describes the system catalogs related to the data redaction feature" +--- + +System catalogs store the redaction policy information. + +## edb_redaction_column + +The `edb_redaction_column` system catalog stores information about the data redaction policy attached to the columns of a table. 
+ +| Column | Type | References | Description | +| ------------- | -------------- | -------------------------- | --------------------------------------------------------------------------- | +| `oid` | `oid` | | Row identifier (hidden attribute, must be explicitly selected) | +| `rdpolicyid` | `oid` | `edb_redaction_policy.oid` | The data redaction policy that applies to the described column | +| `rdrelid` | `oid` | `pg_class.oid` | The table that the described column belongs to | +| `rdattnum` | `int2` | `pg_attribute.attnum` | The number of the described column | +| `rdscope` | `int2` | | The redaction scope: `1` = query, `2` = top_tlist, `4` = top_tlist_or_error | +| `rdexception` | `int2` | | The redaction exception: `8` = none, `16` = equal, `32` = leakproof | +| `rdfuncexpr` | `pg_node_tree` | | Data redaction function expression | + +!!! Note + The described column is redacted if the redaction policy `edb_redaction_column.rdpolicyid` on the table is enabled and the redaction policy expression `edb_redaction_policy.rdexpr` evaluates to `true`. + +## edb_redaction_policy + +The catalog `edb_redaction_policy` stores information about the redaction policies for tables. + +| Column | Type | References | Description | +| ---------- | -------------- | -------------- | -------------------------------------------------------------- | +| `oid` | `oid` | | Row identifier (hidden attribute, must be explicitly selected) | +| `rdname` | `name` | | The name of the data redaction policy | +| `rdrelid` | `oid` | `pg_class.oid` | The table to which the data redaction policy applies | +| `rdenable` | `boolean` | | Is the data redaction policy enabled? | +| `rdexpr` | `pg_node_tree` | | The data redaction policy expression | + +!!! Note + The data redaction policy applies for the table if it's enabled and the expression ever evaluated true. + diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/index.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/index.mdx new file mode 100644 index 00000000000..1df6a6de4f5 --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/index.mdx @@ -0,0 +1,15 @@ +--- +title: "Redacting data" +description: "Use the EPAS Data redaction feature to limit exposure to sensitive data by dynamically changing the data as it is displayed for certain users" +indexCards: simple +navigation: +- data_redaction_key_concepts +- creating_a_data_redaction_policy +- modifying_a_data_redaction_policy +- removing_a_data_redaction_policy +- data_redaction_system_catalogs +--- + +EDB Postgres Advanced Server includes features to help you to maintain, secure, and operate EDB Postgres Advanced Server databases. The DB Postgres Advanced Server *Data redaction* feature limits sensitive data exposure by dynamically changing data as it's displayed for certain users. 
+ + diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/modifying_a_data_redaction_policy.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/modifying_a_data_redaction_policy.mdx new file mode 100644 index 00000000000..45d45b3013c --- /dev/null +++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/modifying_a_data_redaction_policy.mdx @@ -0,0 +1,130 @@ +--- +title: "Modifying a data redaction policy" +description: "How to use the ALTER REDACTION POLICY command to change the definition of a data redaction policy for a table" +--- + +The `ALTER REDACTION POLICY` command changes the definition of data redaction policy for a table. + +## Synopsis + +```sql +ALTER REDACTION POLICY ON RENAME TO + +ALTER REDACTION POLICY ON FOR ( ) + +ALTER REDACTION POLICY ON { ENABLE | DISABLE} + +ALTER REDACTION POLICY ON + ADD [ COLUMN ] USING + [ WITH OPTIONS ( [ ] + [, ] ) + ] + +ALTER REDACTION POLICY ON + MODIFY [ COLUMN ] + { + [ USING ] + | + [ WITH OPTIONS ( [ ] + [, ] ) + ] + } + +ALTER REDACTION POLICY ON + DROP [ COLUMN ] +``` + +Where `redaction_option` is: + +```sql +{ SCOPE | + EXCEPTION } +``` + +## Description + +`ALTER REDACTION POLICY` changes the definition of an existing data redaction policy. + +To use `ALTER REDACTION POLICY`, you must own the table that the data redaction policy applies to. + +`FOR ( expression )` + + This form adds or replaces the data redaction policy expression. + +`ENABLE` + + Enables the previously disabled data redaction policy for a table. + +`DISABLE` + + Disables the data redaction policy for a table. + +`ADD [ COLUMN ]` + + This form adds a column of the table to the existing redaction policy. See [`CREATE REDACTION POLICY`](creating_a_data_redaction_policy) for details. + +`MODIFY [ COLUMN ]` + + This form modifies the data redaction policy on the column of the table. You can update the redaction function clause or the redaction options for the column. The `USING` clause specifies the redaction function expression to update. The `WITH OPTIONS ( ... )` clause specifies the scope or the exception. For more details on the redaction function clause, the redaction scope, and the redaction exception, see [`CREATE REDACTION POLICY`](creating_a_data_redaction_policy). + +`DROP [ COLUMN ]` + + This form removes the column of the table from the data redaction policy. + +## Parameters + +`name` + + The name of an existing data redaction policy to alter. + +`table_name` + + The optionally schema-qualified name of the table that the data redaction policy is on. + +`new_name` + + The new name for the data redaction policy. This must be distinct from the name of any other existing data redaction policy for the table. + +`expression` + + The data redaction policy expression. + +`column_name` + + Name of existing column of the table on which the data redaction policy is being altered or dropped. + +`funcname_clause` + + The data redaction function expression for the column. See [`CREATE REDACTION POLICY`](creating_a_data_redaction_policy) for details. + +`scope_value` + + The scope identifies the query part to apply redaction for the column. See [`CREATE REDACTION POLICY`](creating_a_data_redaction_policy) for the details. + +`exception_value` + + The exception identifies the query part where redaction are exempted. See [`CREATE REDACTION POLICY`](creating_a_data_redaction_policy) for the details. 
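The `RENAME TO`, `ENABLE`, and `DISABLE` forms from the synopsis aren't covered in the examples below, so here is a minimal sketch of them. It assumes the `redact_policy_personal_info` policy on the `employees` table from the `CREATE REDACTION POLICY` examples; the new policy name is purely illustrative.

```sql
-- Temporarily suspend the policy, then re-enable it.
ALTER REDACTION POLICY redact_policy_personal_info ON employees DISABLE;
ALTER REDACTION POLICY redact_policy_personal_info ON employees ENABLE;

-- Rename the policy (redact_policy_pii is a hypothetical new name).
ALTER REDACTION POLICY redact_policy_personal_info ON employees
RENAME TO redact_policy_pii;
```
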
+
+## Examples
+
+Update the data redaction policy called `redact_policy_personal_info` on the table named `employees`:
+
+```sql
+ALTER REDACTION POLICY redact_policy_personal_info ON employees
+FOR (session_user != 'hr' AND session_user != 'manager');
+```
+
+To update the data redaction function for the column `ssn` in the same policy:
+
+```sql
+ALTER REDACTION POLICY redact_policy_personal_info ON employees
+MODIFY COLUMN ssn USING redact_ssn_new(ssn);
+```
+
+## Compatibility
+
+`ALTER REDACTION POLICY` is an EDB extension.
+
+## See also
+
+[`CREATE REDACTION POLICY`](creating_a_data_redaction_policy), [`DROP REDACTION POLICY`](removing_a_data_redaction_policy)
diff --git a/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/removing_a_data_redaction_policy.mdx b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/removing_a_data_redaction_policy.mdx
new file mode 100644
index 00000000000..60509c6b1ac
--- /dev/null
+++ b/product_docs/docs/epas/15/epas_security_guide/05_data_redaction/removing_a_data_redaction_policy.mdx
@@ -0,0 +1,55 @@
+---
+title: "Removing a data redaction policy"
+description: "How to use the DROP REDACTION POLICY command to remove a data redaction policy from a table"
+---
+
+The `DROP REDACTION POLICY` command removes a data redaction policy from a table.
+
+## Synopsis
+
+```sql
+DROP REDACTION POLICY [ IF EXISTS ] <name> ON <table_name>
+  [ CASCADE | RESTRICT ]
+```
+
+## Description
+
+`DROP REDACTION POLICY` removes the specified data redaction policy from the table.
+
+To use `DROP REDACTION POLICY`, you must own the table that the redaction policy applies to.
+
+## Parameters
+
+`IF EXISTS`
+
+ Don't throw an error if the data redaction policy doesn't exist. A notice is issued in this case.
+
+`name`
+
+ The name of the data redaction policy to drop.
+
+`table_name`
+
+ The optionally schema-qualified name of the table that the data redaction policy is on.
+
+`CASCADE`
+
+`RESTRICT`
+
+ These keywords don't have any effect, as there are no dependencies on the data redaction policies.
+
+## Examples
+
+To drop the data redaction policy called `redact_policy_personal_info` on the table named `employees`:
+
+```sql
+DROP REDACTION POLICY redact_policy_personal_info ON employees;
+```
+
+## Compatibility
+
+`DROP REDACTION POLICY` is an EDB extension.
+
+## See also
+
+[`CREATE REDACTION POLICY`](creating_a_data_redaction_policy), [`ALTER REDACTION POLICY`](modifying_a_data_redaction_policy)
diff --git a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/01_audit_logging_configuration_parameters.mdx b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/01_audit_logging_configuration_parameters.mdx
index 87587a8a4a3..751b338c05a 100644
--- a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/01_audit_logging_configuration_parameters.mdx
+++ b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/01_audit_logging_configuration_parameters.mdx
@@ -5,7 +5,7 @@ redirects:
   - ../../../epas_guide/03_database_administration/05_edb_audit_logging/01_audit_logging_configuration_parameters #generated for docs/epas/reorg-role-use-case-mode
 ---
 
-Use the following configuration parameters to control database auditing. 
See [Summary of configuration parameters](/../../database_administration/01_configuration_parameters/02_summary_of_configuration_parameters/#summary_of_configuration_parameters) to determine if a change to the configuration parameter: +Use the following configuration parameters to control database auditing. See [Summary of configuration parameters](/epas/latest/database_administration/01_configuration_parameters/02_summary_of_configuration_parameters/) to determine if a change to the configuration parameter: - Takes effect immediately - Requires reloading the configuration diff --git a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/02_selecting_sql_statements_to_audit.mdx b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/02_selecting_sql_statements_to_audit.mdx index f15bfb1b8e0..c1febf688f1 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/02_selecting_sql_statements_to_audit.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/02_selecting_sql_statements_to_audit.mdx @@ -15,6 +15,8 @@ edb_audit_statement = 'value_1[, value_2]...' The comma-separated values can include or omit space characters following the comma. You can specify the values in any combination of lowercase or uppercase letters. +## Overview of the parameters + The basic parameter values are the following: - `all` — Audit and log every statement including any error messages on statements. diff --git a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/04_audit_log_file.mdx b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/04_audit_log_file.mdx index 26356a8357e..0ef413b5fb7 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/04_audit_log_file.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/04_audit_log_file.mdx @@ -11,6 +11,8 @@ You can generate the audit log file in CSV or XML format. The format is determin The information in the audit log is based on the logging performed by PostgreSQL, as described in "Using CSV-Format Log Output” under “Error Reporting and Logging” in the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/runtime-config-logging.html). +## Overview of the CSV audit log format + The following table lists the fields in the order they appear in the CSV audit log format. The table contains the following information: - **Field** — Name of the field as shown in the sample table definition in the PostgreSQL documentation. diff --git a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/08_audit_log_archiving.mdx b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/08_audit_log_archiving.mdx index 7847d73d1a0..4fe056839cd 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/08_audit_log_archiving.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/08_audit_log_archiving.mdx @@ -17,8 +17,12 @@ The audit archiver helps to: - Determine the log files to remove based on the `edb_audit_archiver_expire_time_limit` and `edb_audit_archiver_expire_size_limit` parameter. - Execute the expiration command specified in the `edb_audit_archiver_expire_command` parameter to remove the log files. +## Rotating out older audit log files + To rotate out the older audit log files, you can set the log file rotation day when the new file is created. To do so, set the parameter `edb_audit_rotation_day` to the desired value. 
The audit log records are overwritten on a first-in, first-out basis if space isn't available for more audit log records. +## Enabling compression and expiration of log files + To configure EDB Postgres Advanced Server to enable compression and expiration of the log files: 1. Enable audit log archiving by setting the `edb_audit_archiver` parameter to `on` in the `postgresql.conf` file. diff --git a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/index.mdx b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/index.mdx index 18b8b91794e..5bc991e54db 100644 --- a/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/index.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/05_edb_audit_logging/index.mdx @@ -9,7 +9,7 @@ redirects: -EDB Postgres Advanced Server allows database and security administrators, auditors, and operators to track and analyze database activities using EDB *audit logging*. EDB audit logging generates audit log files, which contain all of the relevant information. You can configure the audit logs to record information such as: +EDB Postgres Advanced Server allows database and security administrators, auditors, and operators to track and analyze database activities using EDB *audit logging*. EDB audit logging generates audit log files, which can be configured to record information such as: - When a role establishes a connection to an EDB Postgres Advanced Server database - The database objects a role creates, modifies, or deletes when connected to EDB Postgres Advanced Server From 5367fd2a3b6718cfe8951289bf944c1e787a7350 Mon Sep 17 00:00:00 2001 From: francoughlin Date: Thu, 13 Jul 2023 15:39:02 -0400 Subject: [PATCH 06/18] EPAS reorg: Phase 2 Application programming branch topic restructure Broke out subsections for the Using enhanced SQL and other miscellaneous features topic and the Debugging programs topic into individual child topics; add index cards and descriptions --- .../application_programming/12_debugger.mdx | 250 ------------------ .../12_debugger/configuring_debugger.mdx | 38 +++ .../12_debugger/debugger_interface.mdx | 52 ++++ .../12_debugger/debugging_a_program.mdx | 115 ++++++++ .../12_debugger/index.mdx | 32 +++ .../12_debugger/starting_debugger.mdx | 31 +++ .../comment_command.mdx} | 73 +---- .../index.mdx | 18 ++ .../logical_decoding.mdx | 17 ++ .../obtaining_version_information.mdx | 40 +++ ...lding_executing_dynamic_sql_statements.mdx | 13 +- 11 files changed, 356 insertions(+), 323 deletions(-) delete mode 100644 product_docs/docs/epas/15/application_programming/12_debugger.mdx create mode 100644 product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx create mode 100644 product_docs/docs/epas/15/application_programming/12_debugger/debugger_interface.mdx create mode 100644 product_docs/docs/epas/15/application_programming/12_debugger/debugging_a_program.mdx create mode 100644 product_docs/docs/epas/15/application_programming/12_debugger/index.mdx create mode 100644 product_docs/docs/epas/15/application_programming/12_debugger/starting_debugger.mdx rename product_docs/docs/epas/15/application_programming/{15_enhanced_sql_and_other_misc_features.mdx => 15_enhanced_sql_and_other_misc_features/comment_command.mdx} (62%) create mode 100644 product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/index.mdx create mode 100644 product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/logical_decoding.mdx create mode 
100644 product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/obtaining_version_information.mdx diff --git a/product_docs/docs/epas/15/application_programming/12_debugger.mdx b/product_docs/docs/epas/15/application_programming/12_debugger.mdx deleted file mode 100644 index 7604ae2b59d..00000000000 --- a/product_docs/docs/epas/15/application_programming/12_debugger.mdx +++ /dev/null @@ -1,250 +0,0 @@ ---- -title: "Debugging programs" -description: "How to use the debugger to identify ways to make your program run faster, more efficiently, and more reliably" -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.41.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.42.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.40.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.110.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.112.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.111.html" -redirects: - - ../../epas_guide/12_debugger #generated for docs/epas/reorg-role-use-case-mode ---- - - - -The debugger gives developers and DBAs the ability to test and debug server-side programs using a graphical, dynamic environment. The types of programs that you can debug are: -- SPL stored procedures -- functions -- triggers -- packages -- PL/pgSQL functions and triggers. - -The debugger is integrated with pgAdmin 4 and EDB Postgres Enterprise Manager. If you installed EDB Postgres Advanced Server on a Windows host, pgAdmin 4 is automatically installed. The pgAdmin 4 icon is in the Windows Start menu. - -If your EDB Postgres Advanced Server host is on a CentOS or Linux system, you can use `yum` to install pgAdmin4. Open a command line, assume superuser privileges, and enter: - -```shell -yum install edb-pgadmin4* -``` - -On Linux, you must also install the `edb-asxx-server-pldebugger` RPM package, where `xx` is the EDB Postgres Advanced Server version number. Information about pgAdmin 4 is available at . - -The RPM installation adds the pgAdmin4 icon to your Applications menu. - -## Using the debugger - -You can use the debugger in two basic ways to test programs: - -- **Standalone debugging** — Use the debugger to start the program to test. Supply any input parameter values required by the program. You can immediately observe and step through the code of the program. Standalone debugging is the typical method used for new programs and for initial problem investigation. -- **In-context debugging** — In-context debugging is useful if it's difficult to reproduce a problem using standalone debugging due to complex interaction with the calling application. Using this approach, the program to test is started by an application other than the debugger. You set a *global breakpoint* on the program to test. The application that makes the first call to the program encounters the global breakpoint. Then the application suspends execution. At that point, the debugger takes control of the called program. 
You can then observe and step through the code of the called program as it runs in the context of the calling application. - - After you have completely stepped through the code of the called program in the debugger, the suspended application resumes executing. - -The debugging tools and operations are the same whether using standalone or in-context debugging. The difference is in how to invoke the program being debugged. - -These instructions use the standalone debugging method. To start the debugger for in-context debugging, see [Setting global breakpoint for in-context debugging](#setting_global_breakpoint_for_in_context_debugging). - -## Configuring the debugger - -Before using the debugger, edit the `postgresql.conf` file (located in the `data` subdirectory of your EDB Postgres Advanced Server home directory). Add `$libdir/plugin_debugger` to the libraries listed in the `shared_preload_libraries` configuration parameter: - -```ini -shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/plugin_debugger' -``` - -- On Linux, the `postgresql.conf` file is located in: `/var/lib/edb/asxx/data` -- On Windows, the `postgresql.conf` file is located in: `C:\Program Files\edb\asxx\data` - -Where `xx` is the version of EDB Postgres Advanced Server. - -After modifying the `shared_preload_libraries` parameter, restart the database server. - -## Starting the debugger - -Use pgAdmin 4 to access the debugger for standalone debugging. To open the debugger: - -1. Select the name of the stored procedure or function you want to debug in the pgAdmin 4 **Browser** panel. Or, to debug a package, select the specific procedure or function under the package node of the package you want to debug. - -1. Select **Object > Debugging > Debug**. - -You can't debug triggers using standalone debugging. You must use in-context debugging. See [Setting global breakpoint for in-context debugging](#setting_global_breakpoint_for_in_context_debugging) for information. - -## The Debugger window - -You can use the Debugger window to pass parameter values when you are standalone debugging a program that expects parameters. When you start the debugger, the Debugger window opens to display any `IN` or `IN OUT` parameters the program expects. If the program declares no `IN` or `IN OUT` parameters, the Debugger window doesn't open. - -Use the fields on the Debugger window to provide a value for each parameter: - -- The **Name** field contains the formal parameter name. -- The **Type** field contains the parameter data type. -- Select the **Null?** check box to indicate that the parameter is a `NULL` value. -- Select the **Expression?** check box if the `Value` field contains an expression. -- The **Value** field contains the parameter value that's passed to the program. -- Select the **Use Default?** check box to indicate for the program to use the value in the **Default Value** field. -- The **Default Value** field contains the default value of the parameter. - -If you're debugging a procedure or function that's a member of a package that has an initialization section, select the **Debug Package Initializer** check box to step into the package initialization section, This setting allows you to debug the initialization section code before debugging the procedure or function. If you don't select the check box, the debugger executes the package initialization section without allowing you to see or step through the individual lines of code as they execute. 
- -After entering the desired parameter values, select **Debug** to start the debugging process. - -!!! Note - The Debugger window doesn't open during in-context debugging. Instead, the application calling the program to debug must supply any required input parameter values. - -After you complete a full debugging cycle by stepping through the program code, the Debugger window reopens. You can enter new parameter values and repeat the debugging cycle or end the debugging session. - -## Main debugger window - -The main debugger window contains two panels: - -- The top Program Body panel displays the program source code. -- The bottom Tabs panel provides a set of tabs for different information. - -Use the tool bar icons located at the top panel to access debugging functions. - -### The Program Body panel - -The Program Body panel displays the source code of the program that's being debugged. The figure shows that the debugger is about to execute the `SELECT` statement. The blue indicator in the program body highlights the next statement to execute. - -![The Program Body](../images/program_body.png) - -### The Tabs panel - -You can use the bottom Tabs panel to view or modify parameter values or local variables or to view messages generated by `RAISE INFO` and function results. - -The following is the information displayed by the tabs in the panel: - -- The **Parameters** tab displays the current parameter values. -- The **Local variables** tab displays the value of any variables declared in the program. -- The **Messages** tab displays any results returned by the program as it executes. -- The **Results** tab displays any program results, such as the value from the `RETURN` statement of a function. -- The **Stack** tab displays the call stack. - -### The Stack tab - - - -The **Stack** tab displays a list of programs that are currently on the call stack, that is, programs that were invoked but that haven't yet completed. When a program is called, the name of the program is added to the top of the list displayed in the **Stack** tab. When the program ends, its name is removed from the list. - -The **Stack** tab also displays information about program calls. The information includes: - -- The location of the call in the program -- The call arguments -- The name of the program being called - -Reviewing the call stack can help you trace the course of execution through a series of nested programs. -The figure shows that `emp_query_caller` is about to call a subprogram named `emp_query`. `emp_query_caller` is currently at the top of the call stack. - -![A debugged program calling a subprogram](../images/stack_tab.png) - -After the call to `emp_query` executes, `emp_query` is displayed at the top of the **Stack** tab, and its code is displayed in the Program Body panel. - -![Debugging the called subprogram](../images/stack_tab.png) - -After completing execution of the subprogram, control returns to the calling program (`emp_query_caller`), now displayed at the top of the **Stack** tab. - -## Debugging a program - -You can perform the following operations to debug a program: - -- Step through the program one line at a time. -- Execute the program until you reach a breakpoint. -- View and change local variable values within the program. - -### Stepping through the code - -Use the tool bar icons to step through a program with the debugger. The icons serve the following purposes: - -- **Step into.** Execute the currently highlighted line of code. 
-- **Step over.** Execute a line of code, stepping over any subfunctions invoked by the code. The subfunction executes but is debugged only if it contains a breakpoint. -- **Continue/Start.** Execute the highlighted code and continue until the program encounters a breakpoint or completes. -- **Stop.** Halt a program. - -### Using breakpoints - -As the debugger executes a program, it pauses when it reaches a breakpoint. When the debugger pauses, you can observe or change local variables or navigate to an entry in the call stack to observe variables or set other breakpoints. The next step into, step over, or continue operation forces the debugger to resume executing with the next line of code following the breakpoint. - -These are the two types of breakpoints: - -- **Local breakpoint** — You can set a local breakpoint at any executable line of code in a program. The debugger pauses execution when it reaches a line where a local breakpoint was set. - -- **Global breakpoint** — A global breakpoint triggers when any session reaches that breakpoint. Set a global breakpoint if you want to perform in-context debugging of a program. When you set a global breakpoint on a program, the debugging session that set the global breakpoint waits until that program is invoked in another session. Only a superuser can set a global breakpoint. - -To create a local breakpoint, select the grey shaded margin to the left of the line of code where you want the local breakpoint set. The spot you select must be close to the right side of the margin as in the spot where the breakpoint dot is shown on source code line 12. When the breakpoint is created, the debugger displays a dark dot in the margin, indicating a breakpoint was set at the selected line of code. - -![Set a breakpoint by clicking in left-hand margin](../images/setting_global_breakpoint_from_left-hand_margin.png) - -You can set as many local breakpoints as you want. Local breakpoints remain in effect for the rest of a debugging session until you remove them. - -#### Removing a local breakpoint - -To remove a local breakpoint, select the breakpoint dot. The dot disappears. - -To remove all of the breakpoints from the program that currently appears in the Program Body frame, select the **Clear all breakpoints** icon. - -!!! Note - When you perform any of these actions, only the breakpoints in the program that currently appears in the Program Body panel are removed. Breakpoints in called subprograms or breakpoints in programs that call the program currently appearing in the Program Body panel aren't removed. - -### Setting a global breakpoint for in-context debugging - - - -To set a global breakpoint for in-context debugging: - -1. In the Browser panel, select the stored procedure, function, or trigger on which you want to set the breakpoint. - -1. Select **Object > Debugging > Set Breakpoint**. - -To set a global breakpoint on a trigger: - -1. Expand the table node that contains the trigger. - -1. Select the specific trigger you want to debug. - -1. Select **Object > Debugging > Set Breakpoint**. - -To set a global breakpoint in a package: - -1. Select the specific procedure or function under the package node of the package you want to debug. - -1. Select **Object > Debugging > Set Breakpoint**. - -After you select **Set Breakpoint**, the Debugger window opens and waits for an application to call the program to debug. - -The PSQL client invokes the `select_emp` function on which a global breakpoint was set. 
- -```sql -$ psql edb enterprisedb -psql.bin (14.0.0, server 14.0.0) -Type "help" for help. - -edb=# SELECT select_emp(7900); -``` - -The `select_emp` function doesn't finish until you step through the program in the debugger. - -![Program on which a global breakpoint was set](../images/parameters_tab.png) - -You can now debug the program using the operations like step into, step over, and continue. Or you can set local breakpoints. After you step through executing the program, the calling application (PSQL) regains control, the `select_emp` function finishes executing, and its output is displayed. - -```sql -$ psql edb enterprisedb -psql.bin (14.0.0, server 14.0.0) -Type "help" for help. - -edb=# SELECT select_emp(7900); -__OUTPUT__ -INFO: Number : 7900 -INFO: Name : JAMES -INFO: Hire Date : 12/03/1981 -INFO: Salary : 950.00 -INFO: Commission: 0.00 -INFO: Department: SALES - select_emp ------------- -(1 row) -``` - -At this point, you can end the debugger session. If you don't end the debugger session, the next application that invokes the program encounters the global breakpoint, and the debugging cycle begins again. diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx new file mode 100644 index 00000000000..e4fd8d20b7f --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx @@ -0,0 +1,38 @@ +--- +title: "Configuring the debugger" +description: "Describes how to configure the debugger program prior to use" +--- + +The debugger is integrated with pgAdmin 4 and EDB Postgres Enterprise Manager. If you installed EDB Postgres Advanced Server on a Windows host, pgAdmin 4 is automatically installed. The pgAdmin 4 icon is in the Windows Start menu. + +You can use the debugger in two basic ways to test programs: + +- **Standalone debugging** — Use the debugger to start the program to test. Supply any input parameter values required by the program. You can immediately observe and step through the code of the program. Standalone debugging is the typical method used for new programs and for initial problem investigation. +- **In-context debugging** — In-context debugging is useful if it's difficult to reproduce a problem using standalone debugging due to complex interaction with the calling application. Using this approach, the program to test is started by an application other than the debugger. You set a *global breakpoint* on the program to test. The application that makes the first call to the program encounters the global breakpoint. Then the application suspends execution. At that point, the debugger takes control of the called program. You can then observe and step through the code of the called program as it runs in the context of the calling application. + + After you have completely stepped through the code of the called program in the debugger, the suspended application resumes executing. + +The debugging tools and operations are the same whether using standalone or in-context debugging. The difference is in how to invoke the program being debugged. + +If your EDB Postgres Advanced Server host is on a CentOS or Linux system, you can use `yum` to install pgAdmin4. 
Open a command line, assume superuser privileges, and enter: + +```shell +yum install edb-pgadmin4* +``` + +On Linux, you must also install the `edb-asxx-server-pldebugger` RPM package, where `xx` is the EDB Postgres Advanced Server version number. Information about pgAdmin 4 is available at . + +The RPM installation adds the pgAdmin4 icon to your Applications menu. + +Before using the debugger, edit the `postgresql.conf` file (located in the `data` subdirectory of your EDB Postgres Advanced Server home directory). Add `$libdir/plugin_debugger` to the libraries listed in the `shared_preload_libraries` configuration parameter: + +```ini +shared_preload_libraries = '$libdir/dbms_pipe,$libdir/edb_gen,$libdir/plugin_debugger' +``` + +- On Linux, the `postgresql.conf` file is located in: `/var/lib/edb/asxx/data` +- On Windows, the `postgresql.conf` file is located in: `C:\Program Files\edb\asxx\data` + +Where `xx` is the version of EDB Postgres Advanced Server. + +After modifying the `shared_preload_libraries` parameter, restart the database server. diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/debugger_interface.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/debugger_interface.mdx new file mode 100644 index 00000000000..6ee1511d447 --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/12_debugger/debugger_interface.mdx @@ -0,0 +1,52 @@ +--- +title: "Debugger interface overview" +description: "Provides an overview of the main window of the debugger program" +--- + +The main debugger window contains two panels: + +- The top Program Body panel displays the program source code. +- The bottom Tabs panel provides a set of tabs for different information. + +Use the tool bar icons located at the top panel to access debugging functions. + +## The Program Body panel + +The Program Body panel displays the source code of the program that's being debugged. The figure shows that the debugger is about to execute the `SELECT` statement. The blue indicator in the program body highlights the next statement to execute. + +![The Program Body](../../images/program_body.png) + +## The Tabs panel + +You can use the bottom Tabs panel to view or modify parameter values or local variables or to view messages generated by `RAISE INFO` and function results. + +The following is the information displayed by the tabs in the panel: + +- The **Parameters** tab displays the current parameter values. +- The **Local variables** tab displays the value of any variables declared in the program. +- The **Messages** tab displays any results returned by the program as it executes. +- The **Results** tab displays any program results, such as the value from the `RETURN` statement of a function. +- The **Stack** tab displays the call stack. + +## The Stack tab + + + +The **Stack** tab displays a list of programs that are currently on the call stack, that is, programs that were invoked but that haven't yet completed. When a program is called, the name of the program is added to the top of the list displayed in the **Stack** tab. When the program ends, its name is removed from the list. + +The **Stack** tab also displays information about program calls. The information includes: + +- The location of the call in the program +- The call arguments +- The name of the program being called + +Reviewing the call stack can help you trace the course of execution through a series of nested programs. 
+The figure shows that `emp_query_caller` is about to call a subprogram named `emp_query`. `emp_query_caller` is currently at the top of the call stack. + +![A debugged program calling a subprogram](../../images/stack_tab.png) + +After the call to `emp_query` executes, `emp_query` is displayed at the top of the **Stack** tab, and its code is displayed in the Program Body panel. + +![Debugging the called subprogram](../../images/stack_tab.png) + +After completing execution of the subprogram, control returns to the calling program (`emp_query_caller`), now displayed at the top of the **Stack** tab. diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/debugging_a_program.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/debugging_a_program.mdx new file mode 100644 index 00000000000..89bd1e6754e --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/12_debugger/debugging_a_program.mdx @@ -0,0 +1,115 @@ +--- +title: "Running the debugger" +description: "Describes the operations you can perform to debug a program" +--- + +You can perform the following operations to debug a program: + +- Step through the program one line at a time. +- Execute the program until you reach a breakpoint. +- View and change local variable values within the program. + +## Considerations when using the program + +- These instructions use the standalone debugging method. To start the debugger for in-context debugging, see [Setting global breakpoint for in-context debugging](#setting_global_breakpoint_for_in_context_debugging). + +- You can't debug triggers using standalone debugging. You must use in-context debugging. See [Setting global breakpoint for in-context debugging](#setting_global_breakpoint_for_in_context_debugging) for information. + + +## Stepping through the code + +Use the tool bar icons to step through a program with the debugger. The icons serve the following purposes: + +- **Step into.** Execute the currently highlighted line of code. +- **Step over.** Execute a line of code, stepping over any subfunctions invoked by the code. The subfunction executes but is debugged only if it contains a breakpoint. +- **Continue/Start.** Execute the highlighted code and continue until the program encounters a breakpoint or completes. +- **Stop.** Halt a program. + +## Using breakpoints + +As the debugger executes a program, it pauses when it reaches a breakpoint. When the debugger pauses, you can observe or change local variables or navigate to an entry in the call stack to observe variables or set other breakpoints. The next step into, step over, or continue operation forces the debugger to resume executing with the next line of code following the breakpoint. + +These are the two types of breakpoints: + +- **Local breakpoint** — You can set a local breakpoint at any executable line of code in a program. The debugger pauses execution when it reaches a line where a local breakpoint was set. + +- **Global breakpoint** — A global breakpoint triggers when any session reaches that breakpoint. Set a global breakpoint if you want to perform in-context debugging of a program. When you set a global breakpoint on a program, the debugging session that set the global breakpoint waits until that program is invoked in another session. Only a superuser can set a global breakpoint. + +### Setting a local breakpoint + +To create a local breakpoint, select the grey shaded margin to the left of the line of code where you want the local breakpoint set. 
The spot you select must be close to the right side of the margin as in the spot where the breakpoint dot is shown on source code line 12. When the breakpoint is created, the debugger displays a dark dot in the margin, indicating a breakpoint was set at the selected line of code. + +![Set a breakpoint by clicking in left-hand margin](../../images/setting_global_breakpoint_from_left-hand_margin.png) + +You can set as many local breakpoints as you want. Local breakpoints remain in effect for the rest of a debugging session until you remove them. + +### Removing a local breakpoint + +To remove a local breakpoint, select the breakpoint dot. The dot disappears. + +To remove all of the breakpoints from the program that currently appears in the Program Body frame, select the **Clear all breakpoints** icon. + +!!! Note + When you perform any of these actions, only the breakpoints in the program that currently appears in the Program Body panel are removed. Breakpoints in called subprograms or breakpoints in programs that call the program currently appearing in the Program Body panel aren't removed. + +### Setting a global breakpoint for in-context debugging + + + +To set a global breakpoint for in-context debugging: + +1. In the Browser panel, select the stored procedure, function, or trigger on which you want to set the breakpoint. + +1. Select **Object > Debugging > Set Breakpoint**. + +To set a global breakpoint on a trigger: + +1. Expand the table node that contains the trigger. + +1. Select the specific trigger you want to debug. + +1. Select **Object > Debugging > Set Breakpoint**. + +To set a global breakpoint in a package: + +1. Select the specific procedure or function under the package node of the package you want to debug. + +1. Select **Object > Debugging > Set Breakpoint**. + +After you select **Set Breakpoint**, the Debugger window opens and waits for an application to call the program to debug. + +The PSQL client invokes the `select_emp` function on which a global breakpoint was set. + +```sql +$ psql edb enterprisedb +psql.bin (14.0.0, server 14.0.0) +Type "help" for help. + +edb=# SELECT select_emp(7900); +``` + +The `select_emp` function doesn't finish until you step through the program in the debugger. + +![Program on which a global breakpoint was set](../../images/parameters_tab.png) + +You can now debug the program using the operations like step into, step over, and continue. Or you can set local breakpoints. After you step through executing the program, the calling application (PSQL) regains control, the `select_emp` function finishes executing, and its output is displayed. + +```sql +$ psql edb enterprisedb +psql.bin (14.0.0, server 14.0.0) +Type "help" for help. + +edb=# SELECT select_emp(7900); +__OUTPUT__ +INFO: Number : 7900 +INFO: Name : JAMES +INFO: Hire Date : 12/03/1981 +INFO: Salary : 950.00 +INFO: Commission: 0.00 +INFO: Department: SALES + select_emp +------------ +(1 row) +``` + +At this point, you can end the debugger session. If you don't end the debugger session, the next application that invokes the program encounters the global breakpoint, and the debugging cycle begins again. 
diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/index.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/index.mdx new file mode 100644 index 00000000000..40af6f0ca01 --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/12_debugger/index.mdx @@ -0,0 +1,32 @@ +--- +title: "Debugging programs" +description: "How to use the debugger to identify ways to make your program run faster, more efficiently, and more reliably" +indexCards: simple +navigation: +- configuring_debugger +- starting_debugger +- debugger_interface +- debugging_a_program +legacyRedirectsGenerated: + # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.41.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.42.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.40.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.110.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.112.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.111.html" +redirects: + - ../../epas_guide/12_debugger #generated for docs/epas/reorg-role-use-case-mode +--- + + + +The debugger gives developers and DBAs the ability to test and debug server-side programs using a graphical, dynamic environment. The types of programs that you can debug are: +- SPL stored procedures +- functions +- triggers +- packages +- PL/pgSQL functions and triggers. + + + diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/starting_debugger.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/starting_debugger.mdx new file mode 100644 index 00000000000..7443ef2a04c --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/12_debugger/starting_debugger.mdx @@ -0,0 +1,31 @@ +--- +title: "Starting the debugger" +description: "Describes how to open the debugger program" +--- + +Use pgAdmin 4 to access the debugger for standalone debugging. To open the debugger: + +1. Select the name of the stored procedure or function you want to debug in the pgAdmin 4 **Browser** panel. Or, to debug a package, select the specific procedure or function under the package node of the package you want to debug. + +1. Select **Object > Debugging > Debug**. + +You can use the Debugger window to pass parameter values when you are standalone debugging a program that expects parameters. When you start the debugger, the Debugger window opens to display any `IN` or `IN OUT` parameters the program expects. If the program declares no `IN` or `IN OUT` parameters, the Debugger window doesn't open. + +Use the fields on the Debugger window to provide a value for each parameter: + +- The **Name** field contains the formal parameter name. +- The **Type** field contains the parameter data type. +- Select the **Null?** check box to indicate that the parameter is a `NULL` value. +- Select the **Expression?** check box if the `Value` field contains an expression. +- The **Value** field contains the parameter value that's passed to the program. +- Select the **Use Default?** check box to indicate for the program to use the value in the **Default Value** field. 
+- The **Default Value** field contains the default value of the parameter. + +If you're debugging a procedure or function that's a member of a package that has an initialization section, select the **Debug Package Initializer** check box to step into the package initialization section, This setting allows you to debug the initialization section code before debugging the procedure or function. If you don't select the check box, the debugger executes the package initialization section without allowing you to see or step through the individual lines of code as they execute. + +After entering the desired parameter values, select **Debug** to start the debugging process. + +!!! Note + The Debugger window doesn't open during in-context debugging. Instead, the application calling the program to debug must supply any required input parameter values. + +After you complete a full debugging cycle by stepping through the program code, the Debugger window reopens. You can enter new parameter values and repeat the debugging cycle or end the debugging session. diff --git a/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features.mdx b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/comment_command.mdx similarity index 62% rename from product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features.mdx rename to product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/comment_command.mdx index 50bed4621be..f79c2742d94 100644 --- a/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features.mdx +++ b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/comment_command.mdx @@ -1,21 +1,8 @@ --- -title: "Using enhanced SQL and other miscellaneous features" -description: "How to use the enhanced SQL functionality and additional productivity features included in EDB Postgres Advanced Server" -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.60.html" - - "/edb-docs/d/edb-postgres-advanced-server/reference/database-compatibility-for-oracle-developers-reference-guide/9.6/Database_Compatibility_for_Oracle_Developers_Reference_Guide.1.032.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.062.html" -redirects: - - ../../epas_guide/15_enhanced_sql_and_other_misc_features #generated for docs/epas/reorg-role-use-case-mode +title: "Using the COMMENT command" +description: "Describes how to add comments to objects" --- - - -EDB Postgres Advanced Server includes enhanced SQL functionality and other features that add flexibility and convenience. - -## COMMENT - In addition to allowing comments on objects supported by the PostgreSQL `COMMENT` command, EDB Postgres Advanced Server supports comments on other object types. The complete supported syntax is: ```sql @@ -73,7 +60,7 @@ Where `aggregate_signature` is: ORDER BY [ ] [ ] [ , ... ] ``` -### Parameters +## Parameters `object_name` @@ -163,7 +150,7 @@ The comment, written as a string literal, or `NULL` to drop the comment. !!! 
Note Names of tables, aggregates, collations, conversions, domains, foreign tables, functions, indexes, operators, operator classes, operator families, packages, procedures, sequences, text search objects, types, and views can be schema qualified. -### Example +## Example This example adds a comment to a table named `new_emp`: @@ -173,55 +160,3 @@ employees.'; ``` For more information about using the `COMMENT` command, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/sql-comment.html). - -## Output of function version() - -The text string output of the `version()` function displays the name of the product, its version, and the host system on which it was installed. - -For EDB Postgres Advanced Server, the `version()` output is in a format similar to the PostgreSQL community version. The first text word is *PostgreSQL* instead of *EnterpriseDB* as in EDB Postgres Advanced Server version 10 and earlier. - -The general format of the `version()` output is: - -```text -PostgreSQL $PG_VERSION_EXT (EnterpriseDB EDB Postgres Advanced Server $PG_VERSION) on $host -``` - -So for the current EDB Postgres Advanced Server, the version string appears as follows: - -```sql -edb@45032=#select version(); -__OUTPUT__ -version ------------------------------------------------------------------------------------------------ ------------------------------------------------- -PostgreSQL 14.0 (EnterpriseDB EDB Postgres Advanced Server 14.0.0) on x86_64-pc-linux-gnu, compiled by gcc -(GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit -(1 row) -``` - -In contrast, for EDB Postgres Advanced Server 10, the version string was the following: - -```sql -edb=# select version(); -__OUTPUT__ - version ------------------------------------------------------------------------------------------- -------------------- -EnterpriseDB 10.4.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat -4.4.7-18), 64-bit -(1 row) -``` - -## Logical decoding on standby - -Logical decoding on a standby server allows you to create a logical replication slot on a standby server that can respond to API operations such as `get`, `peek`, and `advance`. - -For more information about logical decoding, refer to the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html). - -For a logical slot on a standby server to work, you must set the `hot_standby_feedback` parameter to `ON` on the standby. The `hot_standby_feedback` parameter prevents `VACCUM` from removing recently dead rows that are required by an existing logical replication slot on the standby server. If a slot conflict occurs on the standby, the slots are dropped. - -For logical decoding on a standby to work, you must set `wal_level` to `logical` on both the primary and standby servers. If you set `wal_level` to a value other than `logical`, then slots aren't created. If you set `wal_level` to a value other than `logical` on primary, and if existing logical slots are on standby, such slots are dropped. You can't create new slots. - -When transactions are written to the primary server, the activity triggers the creation of a logical slot on the standby server. If a primary server is idle, creating a logical slot on a standby server might take noticeable time. - -For more information about functions that support replication, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION). 
See also this [logical decoding example](https://www.postgresql.org/docs/current/logicaldecoding-example.html).
diff --git a/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/index.mdx b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/index.mdx
new file mode 100644
index 00000000000..8fe07fa37d3
--- /dev/null
+++ b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/index.mdx
@@ -0,0 +1,18 @@
+---
+title: "Using enhanced SQL and other miscellaneous features"
+description: "How to use the enhanced SQL functionality and additional productivity features included in EDB Postgres Advanced Server"
+indexCards: simple
+legacyRedirectsGenerated:
+  # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
+  - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.6/EDB_Postgres_Advanced_Server_Guide.1.60.html"
+  - "/edb-docs/d/edb-postgres-advanced-server/reference/database-compatibility-for-oracle-developers-reference-guide/9.6/Database_Compatibility_for_Oracle_Developers_Reference_Guide.1.032.html"
+  - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.062.html"
+redirects:
+  - ../../epas_guide/15_enhanced_sql_and_other_misc_features #generated for docs/epas/reorg-role-use-case-mode
+---
+
+
+
+
+EDB Postgres Advanced Server includes enhanced SQL functionality and other features that add flexibility and convenience.
+
+
diff --git a/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/logical_decoding.mdx b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/logical_decoding.mdx
new file mode 100644
index 00000000000..c739ab937e1
--- /dev/null
+++ b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/logical_decoding.mdx
@@ -0,0 +1,17 @@
+---
+title: "Configuring logical decoding on standby"
+description: "Describes how to create a logical replication slot on a standby server"
+---
+
+Logical decoding on a standby server allows you to create a logical replication slot on a standby server that can respond to API operations such as `get`, `peek`, and `advance`.
+
+For more information about logical decoding, refer to the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/logicaldecoding-explanation.html).
+
+For a logical slot on a standby server to work, you must set the `hot_standby_feedback` parameter to `ON` on the standby. The `hot_standby_feedback` parameter prevents `VACUUM` from removing recently dead rows that are required by an existing logical replication slot on the standby server. If a slot conflict occurs on the standby, the slots are dropped.
+
+For logical decoding on a standby to work, you must set `wal_level` to `logical` on both the primary and standby servers. If you set `wal_level` to a value other than `logical`, then slots aren't created. If you set `wal_level` to a value other than `logical` on the primary, and if existing logical slots are on the standby, such slots are dropped. You can't create new slots.
+
+When transactions are written to the primary server, the activity triggers the creation of a logical slot on the standby server. If a primary server is idle, creating a logical slot on a standby server might take noticeable time.
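As a minimal illustration of the slot operations mentioned above, the following sketch creates and reads a slot on the standby. It assumes `wal_level = logical` on both servers, `hot_standby_feedback = on` on the standby, and the `test_decoding` output plugin that ships with PostgreSQL.

```sql
-- Run on the standby server.

-- Create a logical replication slot:
SELECT * FROM pg_create_logical_replication_slot('standby_slot', 'test_decoding');

-- Peek at pending changes without consuming them:
SELECT * FROM pg_logical_slot_peek_changes('standby_slot', NULL, NULL);

-- Consume (get) the changes:
SELECT * FROM pg_logical_slot_get_changes('standby_slot', NULL, NULL);

-- Advance the slot without returning changes:
SELECT pg_replication_slot_advance('standby_slot', pg_last_wal_replay_lsn());
```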
+ +For more information about functions that support replication, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-REPLICATION). See also this [logical decoding example](https://www.postgresql.org/docs/current/logicaldecoding-example.html). + diff --git a/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/obtaining_version_information.mdx b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/obtaining_version_information.mdx new file mode 100644 index 00000000000..0b333766426 --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/15_enhanced_sql_and_other_misc_features/obtaining_version_information.mdx @@ -0,0 +1,40 @@ +--- +title: "Obtaining version information" +description: "Describes how to display the product name, version, and the host system on which it was installed." +--- + +The text string output of the `version()` function displays the name of the product, its version, and the host system on which it was installed. + +For EDB Postgres Advanced Server, the `version()` output is in a format similar to the PostgreSQL community version. The first text word is *PostgreSQL* instead of *EnterpriseDB* as in EDB Postgres Advanced Server version 10 and earlier. + +The general format of the `version()` output is: + +```text +PostgreSQL $PG_VERSION_EXT (EnterpriseDB EDB Postgres Advanced Server $PG_VERSION) on $host +``` + +So for the current EDB Postgres Advanced Server, the version string appears as follows: + +```sql +edb@45032=#select version(); +__OUTPUT__ +version +----------------------------------------------------------------------------------------------- +------------------------------------------------ +PostgreSQL 14.0 (EnterpriseDB EDB Postgres Advanced Server 14.0.0) on x86_64-pc-linux-gnu, compiled by gcc +(GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit +(1 row) +``` + +In contrast, for EDB Postgres Advanced Server 10, the version string was the following: + +```sql +edb=# select version(); +__OUTPUT__ + version +------------------------------------------------------------------------------------------ +------------------- +EnterpriseDB 10.4.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat +4.4.7-18), 64-bit +(1 row) +``` diff --git a/product_docs/docs/epas/15/application_programming/ecpgplus_guide/05_building_executing_dynamic_sql_statements.mdx b/product_docs/docs/epas/15/application_programming/ecpgplus_guide/05_building_executing_dynamic_sql_statements.mdx index a72f18a7d21..7c60e142479 100644 --- a/product_docs/docs/epas/15/application_programming/ecpgplus_guide/05_building_executing_dynamic_sql_statements.mdx +++ b/product_docs/docs/epas/15/application_programming/ecpgplus_guide/05_building_executing_dynamic_sql_statements.mdx @@ -16,10 +16,12 @@ redirects: The following examples show four techniques for building and executing dynamic SQL statements. Each example shows processing a different combination of statement and input types: -- The first example shows processing and executing a SQL statement that doesn't contain a `SELECT` statement and doesn't require input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 1. -- The second example shows processing and executing a SQL statement that doesn't contain a `SELECT` statement and contains a known number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 2. 
-- The third example shows processing and executing a SQL statement that might contain a `SELECT` statement and includes a known number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 3. -- The fourth example shows processing and executing a SQL statement that might contain a `SELECT` statement and includes an unknown number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 4. +- The [first example](#executing_a_nonquery_statement_without_parameters) shows processing and executing a SQL statement that doesn't contain a `SELECT` statement and doesn't require input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 1. +- The [second example](#executing_a_nonquery_statement_with_a_specified_number_of_placeholders) shows processing and executing a SQL statement that doesn't contain a `SELECT` statement and contains a known number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 2. +- The [third example](#executing_a_query_statement_with_known_number_of_placeholders) shows processing and executing a SQL statement that might contain a `SELECT` statement and includes a known number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 3. +- The [fourth example](#executing_query_with_unknown_number_of_variables) shows processing and executing a SQL statement that might contain a `SELECT` statement and includes an unknown number of input variables. This example corresponds to the techniques used by Oracle Dynamic SQL Method 4. + + ## Example: Executing a nonquery statement without parameters @@ -126,6 +128,8 @@ static void handle_error(void) } ``` + + ## Example: Executing a nonquery statement with a specified number of placeholders To execute a nonquery command that includes a known number of parameter placeholders, you must first `PREPARE` the statement (providing a *statement handle*) and then `EXECUTE` the statement using the statement handle. When the application executes the statement, it must provide a value for each placeholder found in the statement. @@ -239,6 +243,7 @@ static void handle_error(void) exit(EXIT_FAILURE); } ``` + ## Example: Executing a query with a known number of placeholders From 868cacc07f8f4d6d1da3258159fb982551194796 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 14 Jul 2023 15:23:02 +0530 Subject: [PATCH 07/18] NET connector - added EDBDataSource to overview section --- .../7.0.4.1/03_the_advanced_server_net_connector_overview.mdx | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx index 138f23121a6..6c53ef798e5 100644 --- a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx +++ b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx @@ -20,6 +20,10 @@ The .NET Connector supports the following frameworks: The .NET class hierarchy contains classes that you can use to create objects that control a connection to the EDB Postgres Advanced Server database and manipulate the data stored on the server. The following are a few of the most commonly used object classes. 
+`EDBDataSource` + +The `EDBDataSource` is the entry point for all the connections made to the database. It is responsible for issuing connections to the server and efficiently managing them. With `EDB .NET Connector 7.0.4.1`, you no longer need direct instantiation of `EDBConnection`. Instantiate `EDBDataSource` and use the method provided to create commands or execute queries. + `EDBConnection` The `EDBConnection` class represents a connection to EDB Postgres Advanced Server. An `EDBConnection` object contains a `ConnectionString` that instructs the .NET client how to connect to an EDB Postgres Advanced Server database. From a6c554300f38217c2dcc7f1c41e8e94ef4c93111 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 14 Jul 2023 15:42:06 +0530 Subject: [PATCH 08/18] Update product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> --- .../7.0.4.1/03_the_advanced_server_net_connector_overview.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx index 6c53ef798e5..5e53bda5720 100644 --- a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx +++ b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx @@ -22,7 +22,7 @@ The .NET class hierarchy contains classes that you can use to create objects tha `EDBDataSource` -The `EDBDataSource` is the entry point for all the connections made to the database. It is responsible for issuing connections to the server and efficiently managing them. With `EDB .NET Connector 7.0.4.1`, you no longer need direct instantiation of `EDBConnection`. Instantiate `EDBDataSource` and use the method provided to create commands or execute queries. +The `EDBDataSource` is the entry point for all the connections made to the database. It is responsible for issuing connections to the server and efficiently managing them. Starting with EDB .NET Connector 7.0.4.1, you no longer need direct instantiation of `EDBConnection`. Instantiate `EDBDataSource` and use the method provided to create commands or execute queries. `EDBConnection` From a77769913f9fe71ae785c89f1392c0f9134764a3 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Fri, 14 Jul 2023 15:46:26 +0530 Subject: [PATCH 09/18] updated EDBConnection as per comment from Moazzum --- .../7.0.4.1/03_the_advanced_server_net_connector_overview.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx index 5e53bda5720..d0956a818b4 100644 --- a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx +++ b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx @@ -26,7 +26,7 @@ The `EDBDataSource` is the entry point for all the connections made to the datab `EDBConnection` - The `EDBConnection` class represents a connection to EDB Postgres Advanced Server. 
An `EDBConnection` object contains a `ConnectionString` that instructs the .NET client how to connect to an EDB Postgres Advanced Server database. + The `EDBConnection` class represents a connection to EDB Postgres Advanced Server. An `EDBConnection` object contains a `ConnectionString` that instructs the .NET client how to connect to an EDB Postgres Advanced Server database. `EDBConnection` should be obtained from an `EDBDataSource` instance and used directly only in specific scenario such as transactions. `EDBCommand` From 339096055753c774c5fd5a4d4a78e62f08107bae Mon Sep 17 00:00:00 2001 From: francoughlin Date: Fri, 14 Jul 2023 15:43:43 -0400 Subject: [PATCH 10/18] DRITA and Partitioning restructures Broke out subsections for the DRITA topic in Managing performance and the Table partitioning topic in Application programming; add index cards and descriptions --- .../defining_a_default_partition.mdx} | 95 +-- .../defining_a_maxvalue_partition.mdx | 86 +++ .../index.mdx | 19 + .../index.mdx | 40 ++ .../performance_tuning_recommendations.mdx | 102 +++ .../simulating_statspack_reports.mdx} | 671 +----------------- .../taking_a_snapshot.mdx | 48 ++ .../using_drita_functions.mdx | 471 ++++++++++++ 8 files changed, 782 insertions(+), 750 deletions(-) rename product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/{05_handling_stray_values_in_a_list_or_range_partitioned_table.mdx => 05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx} (62%) create mode 100644 product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx create mode 100644 product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/index.mdx create mode 100644 product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/index.mdx create mode 100644 product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/performance_tuning_recommendations.mdx rename product_docs/docs/epas/15/managing_performance/{04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx => 04_dynamic_runtime_instrumentation_tools_architecture_DRITA/simulating_statspack_reports.mdx} (52%) create mode 100644 product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx create mode 100644 product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/using_drita_functions.mdx diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx similarity index 62% rename from product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table.mdx rename to product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx index 777dcdfe1b4..d0f59ccc4c2 100644 --- 
a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table.mdx +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx @@ -1,18 +1,7 @@ --- -title: "Handling stray values in a LIST or RANGE partitioned table" -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.6/Database_Compatibility_for_Oracle_Developers_Guide_v9.6.1.118.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.343.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.067.html" -redirects: - - ../../../epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table #generated for docs/epas/reorg-role-use-case-mode +title: "Defining a DEFAULT partition" --- - - -A `DEFAULT` or `MAXVALUE` partition or subpartition captures any rows that don't meet the other partitioning rules defined for a table. - ## Defining a DEFAULT partition A `DEFAULT` partition captures any rows that don't fit into any other partition in a `LIST` partitioned or subpartitioned table. If you don't include a `DEFAULT` rule, any row that doesn't match one of the values in the partitioning constraints causes an error. Each `LIST` partition or subpartition can have its own `DEFAULT` rule. @@ -212,85 +201,3 @@ sales_africa | 40 | 4519b | KENYA | 08-APR-12 00:00:00 |120000 sales_others_1 | 50 | 3788a | CHINA | 12-MAY-12 00:00:00 | 4950 (9 rows) ``` - -## Defining a MAXVALUE partition - -A `MAXVALUE` partition or subpartition captures any rows that don't fit into any other partition in a range-partitioned or subpartitioned table. If you don't include a `MAXVALUE` rule, any row that exceeds the maximum limit specified by the partitioning rules causes in an error. Each partition or subpartition can have its own `MAXVALUE` partition. - -The syntax of a `MAXVALUE` rule is: - -```sql -PARTITION [] VALUES LESS THAN (MAXVALUE) -``` - -Where `partition_name` specifies the name of the partition that stores any rows that don't match the rules specified for other partitions. - -The last example created a range-partitioned table in which the data was partitioned based on the value of the `date` column. If you attempt to add a row with a `date` value that exceeds a date listed in the partitioning constraints, EDB Postgres Advanced Server reports an error. - -```sql -edb=# INSERT INTO sales VALUES -edb-# (40, '3000x', 'IRELAND', '01-Mar-2013', '45000'); -ERROR: no partition of relation "sales" found for row -DETAIL: Partition key of the failing row contains (date) = (01-MAR-13 00:00:00). -``` - -This `CREATE TABLE` command creates the same table but with a `MAXVALUE` partition. Instead of throwing an error, the server stores any rows that don't match the previous partitioning constraints in the `others` partition. 
- -```sql -CREATE TABLE sales -( - dept_no number, - part_no varchar2, - country varchar2(20), - date date, - amount number -) -PARTITION BY RANGE(date) -( - PARTITION q1_2012 VALUES LESS THAN('2012-Apr-01'), - PARTITION q2_2012 VALUES LESS THAN('2012-Jul-01'), - PARTITION q3_2012 VALUES LESS THAN('2012-Oct-01'), - PARTITION q4_2012 VALUES LESS THAN('2013-Jan-01'), - PARTITION others VALUES LESS THAN (MAXVALUE) -); -``` - -To test the `MAXVALUE` partition, add a row with a value in the `date` column that exceeds the last date value listed in a partitioning rule. The server stores the row in the `others` partition. - -```sql -INSERT INTO sales VALUES - (40, '3000x', 'IRELAND', '01-Mar-2013', '45000'); -``` - -Query the contents of the `sales` table to confirm that the previously rejected row is now stored in the `sales_others` partition: - -```sql -edb=# SELECT tableoid::regclass, * FROM sales; -__OUTPUT__ - tableoid | dept_no | part_no | country | date | amount ----------------+---------+---------+----------+--------------------+-------- - sales_q1_2012 | 10 | 4519b | FRANCE | 17-JAN-12 00:00:00 | 45000 - sales_q1_2012 | 20 | 3788a | INDIA | 01-MAR-12 00:00:00 | 75000 - sales_q1_2012 | 30 | 9519b | CANADA | 01-FEB-12 00:00:00 | 75000 - sales_q2_2012 | 40 | 9519b | US | 12-APR-12 00:00:00 | 145000 - sales_q2_2012 | 20 | 3788a | PAKISTAN | 04-JUN-12 00:00:00 | 37500 - sales_q2_2012 | 30 | 4519b | CANADA | 08-APR-12 00:00:00 | 120000 - sales_q2_2012 | 40 | 3788a | US | 12-MAY-12 00:00:00 | 4950 - sales_q3_2012 | 10 | 9519b | ITALY | 07-JUL-12 00:00:00 | 15000 - sales_q3_2012 | 10 | 9519a | FRANCE | 18-AUG-12 00:00:00 | 650000 - sales_q3_2012 | 10 | 9519b | FRANCE | 18-AUG-12 00:00:00 | 650000 - sales_q3_2012 | 20 | 3788b | INDIA | 21-SEP-12 00:00:00 | 5090 - sales_q3_2012 | 40 | 4788a | US | 23-SEP-12 00:00:00 | 4950 - sales_q4_2012 | 40 | 4577b | US | 11-NOV-12 00:00:00 | 25000 - sales_q4_2012 | 30 | 7588b | CANADA | 14-DEC-12 00:00:00 | 50000 - sales_q4_2012 | 40 | 4788b | US | 09-OCT-12 00:00:00 | 15000 - sales_q4_2012 | 20 | 4519a | INDIA | 18-OCT-12 00:00:00 | 650000 - sales_q4_2012 | 20 | 4519b | INDIA | 02-DEC-12 00:00:00 | 5090 - sales_others | 40 | 3000x | IRELAND | 01-MAR-13 00:00:00 | 45000 -(18 rows) -``` - -EDB Postgres Advanced Server doesn't have a way to reassign the contents of a `MAXVALUE` partition or subpartition. - -- You can't use the `ALTER TABLE… ADD PARTITION` statement to add a partition to a table with a `MAXVALUE` rule. However, you can use the `ALTER TABLE… SPLIT PARTITION` statement to split an existing partition. -- You can't use the `ALTER TABLE… ADD SUBPARTITION` statement to add a subpartition to a table with a `MAXVALUE` rule. However, you can split an existing subpartition with the `ALTER TABLE… SPLIT SUBPARTITION` statement. 
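Because the contents of a `MAXVALUE` partition can't be reassigned directly, splitting the partition is the usual way to carve a range back out of it. The following is a minimal sketch, assuming the `sales` table and `others` partition shown above; the new partition names are illustrative only:

```sql
-- Split the open-ended "others" partition at 2013-Apr-01.
-- Rows dated before that value move to q1_2013; later rows remain in
-- the second partition, which keeps the MAXVALUE upper bound.
ALTER TABLE sales SPLIT PARTITION others
  AT ('2013-Apr-01')
  INTO (PARTITION q1_2013, PARTITION others_new);
```

Afterward, querying `tableoid::regclass` as in the earlier examples would confirm which partition each row landed in.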
diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx new file mode 100644 index 00000000000..d65880dd2b4 --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx @@ -0,0 +1,86 @@ +--- +title: "Defining a MAXVALUE partition" +--- + + +## Defining a MAXVALUE partition + +A `MAXVALUE` partition or subpartition captures any rows that don't fit into any other partition in a range-partitioned or subpartitioned table. If you don't include a `MAXVALUE` rule, any row that exceeds the maximum limit specified by the partitioning rules causes in an error. Each partition or subpartition can have its own `MAXVALUE` partition. + +The syntax of a `MAXVALUE` rule is: + +```sql +PARTITION [] VALUES LESS THAN (MAXVALUE) +``` + +Where `partition_name` specifies the name of the partition that stores any rows that don't match the rules specified for other partitions. + +The last example created a range-partitioned table in which the data was partitioned based on the value of the `date` column. If you attempt to add a row with a `date` value that exceeds a date listed in the partitioning constraints, EDB Postgres Advanced Server reports an error. + +```sql +edb=# INSERT INTO sales VALUES +edb-# (40, '3000x', 'IRELAND', '01-Mar-2013', '45000'); +ERROR: no partition of relation "sales" found for row +DETAIL: Partition key of the failing row contains (date) = (01-MAR-13 00:00:00). +``` + +This `CREATE TABLE` command creates the same table but with a `MAXVALUE` partition. Instead of throwing an error, the server stores any rows that don't match the previous partitioning constraints in the `others` partition. + +```sql +CREATE TABLE sales +( + dept_no number, + part_no varchar2, + country varchar2(20), + date date, + amount number +) +PARTITION BY RANGE(date) +( + PARTITION q1_2012 VALUES LESS THAN('2012-Apr-01'), + PARTITION q2_2012 VALUES LESS THAN('2012-Jul-01'), + PARTITION q3_2012 VALUES LESS THAN('2012-Oct-01'), + PARTITION q4_2012 VALUES LESS THAN('2013-Jan-01'), + PARTITION others VALUES LESS THAN (MAXVALUE) +); +``` + +To test the `MAXVALUE` partition, add a row with a value in the `date` column that exceeds the last date value listed in a partitioning rule. The server stores the row in the `others` partition. 
+ +```sql +INSERT INTO sales VALUES + (40, '3000x', 'IRELAND', '01-Mar-2013', '45000'); +``` + +Query the contents of the `sales` table to confirm that the previously rejected row is now stored in the `sales_others` partition: + +```sql +edb=# SELECT tableoid::regclass, * FROM sales; +__OUTPUT__ + tableoid | dept_no | part_no | country | date | amount +---------------+---------+---------+----------+--------------------+-------- + sales_q1_2012 | 10 | 4519b | FRANCE | 17-JAN-12 00:00:00 | 45000 + sales_q1_2012 | 20 | 3788a | INDIA | 01-MAR-12 00:00:00 | 75000 + sales_q1_2012 | 30 | 9519b | CANADA | 01-FEB-12 00:00:00 | 75000 + sales_q2_2012 | 40 | 9519b | US | 12-APR-12 00:00:00 | 145000 + sales_q2_2012 | 20 | 3788a | PAKISTAN | 04-JUN-12 00:00:00 | 37500 + sales_q2_2012 | 30 | 4519b | CANADA | 08-APR-12 00:00:00 | 120000 + sales_q2_2012 | 40 | 3788a | US | 12-MAY-12 00:00:00 | 4950 + sales_q3_2012 | 10 | 9519b | ITALY | 07-JUL-12 00:00:00 | 15000 + sales_q3_2012 | 10 | 9519a | FRANCE | 18-AUG-12 00:00:00 | 650000 + sales_q3_2012 | 10 | 9519b | FRANCE | 18-AUG-12 00:00:00 | 650000 + sales_q3_2012 | 20 | 3788b | INDIA | 21-SEP-12 00:00:00 | 5090 + sales_q3_2012 | 40 | 4788a | US | 23-SEP-12 00:00:00 | 4950 + sales_q4_2012 | 40 | 4577b | US | 11-NOV-12 00:00:00 | 25000 + sales_q4_2012 | 30 | 7588b | CANADA | 14-DEC-12 00:00:00 | 50000 + sales_q4_2012 | 40 | 4788b | US | 09-OCT-12 00:00:00 | 15000 + sales_q4_2012 | 20 | 4519a | INDIA | 18-OCT-12 00:00:00 | 650000 + sales_q4_2012 | 20 | 4519b | INDIA | 02-DEC-12 00:00:00 | 5090 + sales_others | 40 | 3000x | IRELAND | 01-MAR-13 00:00:00 | 45000 +(18 rows) +``` + +EDB Postgres Advanced Server doesn't have a way to reassign the contents of a `MAXVALUE` partition or subpartition. + +- You can't use the `ALTER TABLE… ADD PARTITION` statement to add a partition to a table with a `MAXVALUE` rule. However, you can use the `ALTER TABLE… SPLIT PARTITION` statement to split an existing partition. +- You can't use the `ALTER TABLE… ADD SUBPARTITION` statement to add a subpartition to a table with a `MAXVALUE` rule. However, you can split an existing subpartition with the `ALTER TABLE… SPLIT SUBPARTITION` statement. diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/index.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/index.mdx new file mode 100644 index 00000000000..5ec02c257f9 --- /dev/null +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/index.mdx @@ -0,0 +1,19 @@ +--- +title: "Handling stray values in a LIST or RANGE partitioned table" +indexCards: simple +navigation: +- defining_a_default_partition +- defining_a_maxvalue_partition +legacyRedirectsGenerated: + # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. 
+ - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.6/Database_Compatibility_for_Oracle_Developers_Guide_v9.6.1.118.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.343.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.067.html" +redirects: + - ../../../epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table #generated for docs/epas/reorg-role-use-case-mode +--- + + + +A `DEFAULT` or `MAXVALUE` partition or subpartition captures any rows that don't meet the other partitioning rules defined for a table. + diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/index.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/index.mdx new file mode 100644 index 00000000000..e0707bd2790 --- /dev/null +++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/index.mdx @@ -0,0 +1,40 @@ +--- +title: "Using the dynamic runtime instrumentation tools architecture (DRITA)" +description: "How to use Dynamic Runtime Instrumentation Tools Architecture to query catalog views" +indexCards: simple +navigation: +- taking_a_snapshot +- using_drita_functions +- performance_tuning_recommendations +- simulating_statspack_reports +legacyRedirectsGenerated: + # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.20.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.19.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.21.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.22.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.23.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.24.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.322.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.323.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.321.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.318.html" + - 
"/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.320.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.319.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.145.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.144.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.143.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.141.html" + - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.142.html" +redirects: + - ../../epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA #generated for docs/epas/reorg-role-use-case-mode +--- + + + +The Dynamic Runtime Instrumentation Tools Architecture (DRITA) allows a DBA to query catalog views to determine the *wait events* that affect the performance of individual sessions or the whole system. DRITA records the number of times each event occurs as well as the time spent waiting. You can use this information to diagnose performance problems. DRITA consumes minimal system resources. + +DRITA compares *snapshots* to evaluate the performance of a system. A snapshot is a saved set of system performance data at a given point in time. A unique ID number identifies each snapshot. You can use snapshot ID numbers with DRITA reporting functions to return system performance statistics. + + + diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/performance_tuning_recommendations.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/performance_tuning_recommendations.mdx new file mode 100644 index 00000000000..1522ee9f318 --- /dev/null +++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/performance_tuning_recommendations.mdx @@ -0,0 +1,102 @@ +--- +title: "Performance tuning recommendations" +description: "Explains how to review DRITA reports for performance tuning recommendations" +--- + + + +## Reviewing the reports + +To use Dynamic Runtime Instrumentation Tools Architecture (DRITA) reports for performance tuning, review the top five events in a report. Look for any event that takes an especially large percentage of resources. In a streamlined system, user I/O generally makes up the largest number of waits. Evaluate waits in the context of CPU usage and total time. An event might not be significant if it takes two minutes out of a total measurement interval of two hours and the rest of the time is consumed by CPU time. Evaluate the component of response time (CPU "work" time or other "wait" time) that consumes the highest percentage of overall time. + +When evaluating events, watch for: + +| Event type | Description | +| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | +| Checkpoint waits | Checkpoint waits might indicate that checkpoint parameters need to be adjusted (`checkpoint_segments` and `checkpoint_timeout`). 
| +| WAL-related waits | WAL-related waits might indicate `wal_buffers` are undersized. | +| SQL Parse waits | If the number of waits is high, try to use prepared statements. | +| db file random reads | If high, check for appropriate indexes and statistics. | +| db file random writes | If high, might need to decrease `bgwriter_delay`. | +| btree random lock acquires | Might indicate indexes are being rebuilt. Schedule index builds during less active time. | + +Also look at the hardware, the operating system, the network, and the application SQL statements in performance reviews. + + + +## Event descriptions + +The following table lists the basic wait events that are displayed by DRITA. + +| Event name | Description | +| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `add in shmem lock acquire` | Obsolete/unused. | +| `bgwriter communication lock acquire` | The bgwriter (background writer) process has waited for the short-term lock that synchronizes messages between the bgwriter and a backend process. | +| `btree vacuum lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available vacuum cycle ID. | +| `buffer free list lock acquire` | The server has waited for the short-term lock that synchronizes access to the list of free buffers (in shared memory). | +| `checkpoint lock acquire` | A server process has waited for the short-term lock that prevents simultaneous checkpoints. | +| `checkpoint start lock acquire` | The server has waited for the short-term lock that synchronizes access to the bgwriter checkpoint schedule. | +| `clog control lock acquire` | The server has waited for the short-term lock that synchronizes access to the commit log. | +| `control file lock acquire` | The server has waited for the short-term lock that synchronizes write access to the control file. This is usually a low number. | +| `db file extend` | A server process has waited for the operating system while adding a new page to the end of a file. |e +| `db file read` | A server process has waited for a read from disk to complete. | +| `db file write` | A server process has waited for a write to disk to complete. | +| `db file sync` | A server process has waited for the operating system to flush all changes to disk. | +| `first buf mapping lock acquire` | The server has waited for a short-term lock that synchronizes access to the shared-buffer mapping table. | +| `freespace lock acquire` | The server has waited for the short-term lock that synchronizes access to the freespace map. | +| `lwlock acquire` | The server has waited for a short-term lock that isn't described elsewhere in this table. | +| `multi xact gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available multi-transaction ID (when a SELECT...FOR SHARE statement executes). | +| `multi xact member lock acquire` | The server has waited for the short-term lock that synchronizes access to the multi-transaction member file (when a SELECT...FOR SHARE statement executes). | +| `multi xact offset lock acquire` | The server has waited for the short-term lock that synchronizes access to the multi-transaction offset file (when a SELECT...FOR SHARE statement executes). 
| +| `oid gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available OID (object ID). | +| `query plan` | The server has computed the execution plan for a SQL statement. | +| `rel cache init lock acquire` | The server has waited for the short-term lock that prevents simultaneous relation-cache loads/unloads. | +| `shmem index lock acquire` | The server has waited for the short-term lock that synchronizes access to the shared-memory map. | +| `sinval lock acquire` | The server has waited for the short-term lock that synchronizes access to the cache invalidation state. | +| `sql parse` | The server has parsed a SQL statement. | +| `subtrans control lock acquire` | The server has waited for the short-term lock that synchronizes access to the subtransaction log. | +| `tablespace create lock acquire` | The server has waited for the short-term lock that prevents simultaneous `CREATE TABLESPACE` or `DROP TABLESPACE` commands. | +| `two phase state lock acquire` | The server has waited for the short-term lock that synchronizes access to the list of prepared transactions. | +| `wal insert lock acquire` | The server has waited for the short-term lock that synchronizes write access to the write-ahead log. A high number can indicate that WAL buffers are sized too small. | +| `wal write lock acquire` | The server has waited for the short-term lock that synchronizes write-ahead log flushes. | +| `wal file sync` | The server has waited for the write-ahead log to sync to disk. This is related to the `wal_sync_method` parameter which, by default, is 'fsync'. You can gain better performance by changing this parameter to `open_sync`. | +| `wal flush` | The server has waited for the write-ahead log to flush to disk. | +| `wal write` | The server has waited for a write to the write-ahead log buffer. Expect this value to be high. | +| `xid gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available transaction ID. | + +When wait events occur for *lightweight locks*, DRITA displays them as well. It uses a lightweight lock to protect a particular data structure in shared memory. + +Certain wait events can be due to the server process waiting for one of a group of related lightweight locks, which is referred to as a *lightweight lock tranche*. DRITA doesn't display individual lightweight lock tranches, but it displays their summation with a single event named `other lwlock acquire`. + +For a list and description of lightweight locks displayed by DRITA, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-SETUP). Under [Viewing Statistics](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS), see the Wait Event Type table for more details. + +This example displays lightweight locks `ProcArrayLock`, `CLogControlLock`, `WALBufMappingLock`, and `XidGenLock`. 
+ +```sql +postgres=# select * from sys_rpt(40,70,20); +__OUTPUT__ + sys_rpt +---------------------------------------------------------------------------- + WAIT NAME COUNT WAIT TIME % WAIT +---------------------------------------------------------------------------- + wal flush 56107 44.456494 47.65 + db file read 66123 19.543968 20.95 + wal write 32886 12.780866 13.70 + wal file sync 32933 11.792972 12.64 + query plan 223576 4.539186 4.87 + db file extend 2339 0.087038 0.09 + other lwlock acquire 402 0.066591 0.07 + ProcArrayLock 135 0.012942 0.01 + CLogControlLock 212 0.010333 0.01 + WALBufMappingLock 47 0.006068 0.01 + XidGenLock 53 0.005296 0.01 +(13 rows) +``` + +DRITA also displays wait events that are related to certain EDB Postgres Advanced Server product features. These events and the `other lwlock acquire` event are listed in the following table. + +| Event name | Description | +| ----------------------- | ----------------------------------------------------------------- | +| `BulkLoadLock` | The server has waited for access related to EDB\*Loader. | +| `EDBResoureManagerLock` | The server has waited for access related to EDB Resource Manager. | +| `other lwlock acquire` | Summation of waits for lightweight lock tranches. | diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/simulating_statspack_reports.mdx similarity index 52% rename from product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx rename to product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/simulating_statspack_reports.mdx index 9f484d2a271..73b465f90f1 100644 --- a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA.mdx +++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/simulating_statspack_reports.mdx @@ -1,555 +1,11 @@ --- -title: "Using the dynamic runtime instrumentation tools architecture (DRITA)" -description: "How to use Dynamic Runtime Instrumentation Tools Architecture to query catalog views" -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. 
- - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.20.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.19.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.21.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.22.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.23.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-tools-and-utilities-guide/9.6/DB_Compat_for_Oracle_Dev_Tools_Guide.1.24.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.322.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.323.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.321.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.318.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.320.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/database-compatibility-for-oracle-developers-guide/9.5/Database_Compatibility_for_Oracle_Developers_Guide.1.319.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.145.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.144.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.143.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.141.html" - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/9.5/EDB_Postgres_Enterprise_Guide.1.142.html" -redirects: - - ../../epas_compat_tools_guide/04_dynamic_runtime_instrumentation_tools_architecture_DRITA #generated for docs/epas/reorg-role-use-case-mode +title: "Simulating Statspack AWR reports" +description: "Describes how to use DRITA functions to simulate Oracle Statspack Automatic Workload Repository (AWR) reports" --- - - -The Dynamic Runtime Instrumentation Tools Architecture (DRITA) allows a DBA to query catalog views to determine the *wait events* that affect the performance of individual sessions or the whole system. DRITA records the number of times each event occurs as well as the time spent waiting. You can use this information to diagnose performance problems. DRITA consumes minimal system resources. - -DRITA compares *snapshots* to evaluate the performance of a system. 
A snapshot is a saved set of system performance data at a given point in time. A unique ID number identifies each snapshot. You can use snapshot ID numbers with DRITA reporting functions to return system performance statistics. - - - -## Configuring and using DRITA - -EDB Postgres Advanced Server's `postgresql.conf` file includes a configuration parameter named `timed_statistics` that controls collecting timing data. The valid parameter values are `TRUE` or `FALSE`. The default value is `FALSE`. - -`timed_statistics` is a dynamic parameter that you can modify in the `postgresql.conf` file or while a session is in progress. To enable DRITA, you must either: - -- Modify the `postgresql.conf` file, setting the `timed_statistics` parameter to `TRUE`. - -- Connect to the server with the EDB-PSQL client and invoke the command: - - Connect to the server with the EDB-PSQL client, and invoke the command: - -```sql -SET timed_statistics = TRUE -``` - -After modifying the `timed_statistics` parameter, take a starting snapshot. A snapshot captures the current state of each timer and event counter. The server compares the starting snapshot to a later snapshot to gauge system performance. - -Use the `edbsnap()` function to take the beginning snapshot: - -```sql -edb=# SELECT * FROM edbsnap(); -__OUTPUT__ - edbsnap ----------------------- - Statement processed. -(1 row) -``` - -Then, run the workload that you want to evaluate. When the workload is complete or at a strategic point during the workload, take another snapshot: - -```sql -edb=# SELECT * FROM edbsnap(); -__OUTPUT__ - edbsnap ----------------------- - Statement processed. -(1 row) -``` - -You can capture multiple snapshots during a session. Then, use the DRITA functions and reports to manage and compare the snapshots to evaluate performance information. - - - -## DRITA functions - -You can use DRITA functions to gather wait information and manage snapshots. DRITA functions are fully supported by EDB Postgres Advanced Server 14 whether your installation is made compatible with Oracle databases or is in PostgreSQL-compatible mode. - - - -### get_snaps() - -The `get_snaps()` function returns a list of the current snapshots. The signature is: - -```sql -get_snaps() -``` - -This example uses the `get_snaps()` function to display a list of snapshots: - -```sql -SELECT * FROM get_snaps(); -__OUTPUT__ - get_snaps ------------------------------- - 1 25-JUL-18 09:49:04.224597 - 2 25-JUL-18 09:49:09.310395 - 3 25-JUL-18 09:49:14.378728 - 4 25-JUL-18 09:49:19.448875 - 5 25-JUL-18 09:49:24.52103 - 6 25-JUL-18 09:49:29.586889 - 7 25-JUL-18 09:49:34.65529 - 8 25-JUL-18 09:49:39.723095 - 9 25-JUL-18 09:49:44.788392 - 10 25-JUL-18 09:49:49.855821 - 11 25-JUL-18 09:49:54.919954 - 12 25-JUL-18 09:49:59.987707 -(12 rows) -``` - -The first column in the result list displays the snapshot identifier. The second column displays the date and time that the snapshot was captured. - - - -### sys_rpt() - -The `sys_rpt()` function returns system wait information. The signature is: - -```sql -sys_rpt(, , ) -``` - -#### Parameters - -`beginning_id` - - An integer value that represents the beginning session identifier. - -`ending_id` - - An integer value that represents the ending session identifier. - -`top_n` - - The number of rows to return. 
- -This example shows a call to the `sys_rpt()` function: - -```sql -SELECT * FROM sys_rpt(9, 10, 10); -__OUTPUT__ - sys_rpt ------------------------------------------------------------------------------ -WAIT NAME COUNT WAIT TIME % WAIT ---------------------------------------------------------------------------- -wal flush 8359 1.357593 30.62 -wal write 8358 1.349153 30.43 -wal file sync 8358 1.286437 29.02 -query plan 33439 0.439324 9.91 -db file extend 54 0.000585 0.01 -db file read 31 0.000307 0.01 -other lwlock acquire 0 0.000000 0.00 -ProcArrayLock 0 0.000000 0.00 -CLogControlLock 0 0.000000 0.00 -(11 rows) -``` - -The information displayed in the result set includes: - -| Column name | Description | -| ----------- | -------------------------------------------------------------------------| -| `WAIT NAME` | The name of the wait | -| `COUNT` | The number of times that the wait event occurred | -| `WAIT TIME` | The time of the wait event in seconds | -| `% WAIT` | The percentage of the total wait time used by this wait for this session | - - - -### sess_rpt() - -The `sess_rpt()` function returns session wait information. The signature is: - -```sql -sess_rpt(, , ) -``` - -#### Parameters - -`beginning_id` - - An integer value that represents the beginning session identifier. - -`ending_id` - - An integer value that represents the ending session identifier. - -`top_n` - - The number of rows to return. - -This example shows a call to the `sess_rpt()` function: - -```sql -SELECT * FROM sess_rpt(8, 9, 10); -__OUTPUT__ - sess_rpt -------------------------------------------------------------------------------------- -ID USER WAIT NAME COUNT TIME % WAIT SES % WAIT ALL -------------------------------------------------------------------------------------- -3501 enterprise wal flush 8354 1.354958 30.61 30.61 -3501 enterprise wal write 8354 1.348192 30.46 30.46 -3501 enterprise wal file sync 8354 1.285607 29.04 29.04 -3501 enterprise query plan 33413 0.436901 9.87 9.87 -3501 enterprise db file extend 54 0.000578 0.01 0.01 -3501 enterprise db file read 56 0.000541 0.01 0.01 -3501 enterprise ProcArrayLock 0 0.000000 0.00 0.00 -3501 enterprise CLogControlLock 0 0.000000 0.00 0.00 -(10 rows) -``` - -The information displayed in the result set includes: - -| Column name | Description | -| ------------ | -------------------------------------------------------------------------- | -| `ID` | The processID of the session | -| `USER` | The name of the user incurring the wait | -| `WAIT NAME` | The name of the wait event | -| `COUNT` | The number of times that the wait event occurred | -| `TIME` | The length of the wait event in seconds | -| `% WAIT SES` | The percentage of the total wait time used by this wait for this session | -| `% WAIT ALL` | The percentage of the total wait time used by this wait for all sessions | - - - -### sessid_rpt() - -The `sessid_rpt()` function returns session ID information for a specified backend. The signature is: - -```sql -sessid_rpt(, , ) -``` - -#### Parameters - -`beginning_id` - - An integer value that represents the beginning session identifier. - -`ending_id` - - An integer value that represents the ending session identifier. - -`backend_id` - - An integer value that represents the backend identifier. 
- -This example shows a call to `sessid_rpt()`: - -```sql -SELECT * FROM sessid_rpt(8, 9, 3501); -__OUTPUT__ - sessid_rpt -------------------------------------------------------------------------------------- -ID USER WAIT NAME COUNT TIME % WAIT SES % WAIT ALL -------------------------------------------------------------------------------------- -3501 enterprise CLogControlLock 0 0.000000 0.00 0.00 -3501 enterprise ProcArrayLock 0 0.000000 0.00 0.00 -3501 enterprise db file read 56 0.000541 0.01 0.01 -3501 enterprise db file extend 54 0.000578 0.01 0.01 -3501 enterprise query plan 33413 0.436901 9.87 9.87 -3501 enterprise wal file sync 8354 1.285607 29.04 29.04 -3501 enterprise wal write 8354 1.348192 30.46 30.46 -3501 enterprise wal flush 8354 1.354958 30.61 30.61 -(10 rows) -``` - -The information displayed in the result set includes: - -| Column name | Description | -| ------------ | -------------------------------------------------------------------------- | -| `ID` | The process ID of the wait | -| `USER` | The name of the user that owns the session | -| `WAIT NAME` | The name of the wait event | -| `COUNT` | The number of times that the wait event occurred | -| `TIME` | The length of the wait in seconds | -| `% WAIT SES` | The percentage of the total wait time used by this wait for this session | -| `% WAIT ALL` | The percentage of the total wait time used by this wait for all sessions | - - - -### sesshist_rpt() - -The `sesshist_rpt()` function returns session wait information for a specified backend. The signature is: - -```sql -sesshist_rpt(, ) -``` - -#### Parameters - -`snapshot_id` - - An integer value that identifies the snapshot. - -`session_id` - - An integer value that represents the session. - -This example shows a call to the `sesshist_rpt()` function: - -!!! Note - The example was shortened. Over 1300 rows are actually generated. - -```sql -SELECT * FROM sesshist_rpt (9, 3501); -__OUTPUT__ - sesshist_rpt ------------------------------------------------------------------------------ ----------- - ID USER SEQ WAIT NAME ELAPSED File Name # - of Blk Sum of Blks ------------------------------------------------------------------------------ ---------- - 3501 enterprise 1 query plan 13 0 N/A -0 0 - 3501 enterprise 1 query plan 13 0 edb_password_history -0 0 - 3501 enterprise 1 query plan 13 0 edb_password_history -0 0 - 3501 enterprise 1 query plan 13 0 edb_password_history -0 0 - 3501 enterprise 1 query plan 13 0 edb_profile -0 0 - 3501 enterprise 1 query plan 13 0 edb_profile_name_ind -0 0 - 3501 enterprise 1 query plan 13 0 edb_profile_oid_inde -0 0 - 3501 enterprise 1 query plan 13 0 edb_profile_password -0 0 - 3501 enterprise 1 query plan 13 0 edb_resource_group -0 0 - 3501 enterprise 1 query plan 13 0 edb_resource_group_n -0 0 - 3501 enterprise 1 query plan 13 0 edb_resource_group_o -0 0 - 3501 enterprise 1 query plan 13 0 pg_attribute -0 0 - 3501 enterprise 1 query plan 13 0 pg_attribute_relid_a -0 0 - 3501 enterprise 1 query plan 13 0 pg_attribute_relid_a -0 0 - 3501 enterprise 1 query plan 13 0 pg_auth_members -0 0 - 3501 enterprise 1 query plan 13 0 pg_auth_members_memb -0 0 - 3501 enterprise 1 query plan 13 0 pg_auth_members_role -0 0 - . - . - . 
- 3501 enterprise 2 wal flush 149 0 N/A -0 0 - 3501 enterprise 2 wal flush 149 0 edb_password_history -0 0 - 3501 enterprise 2 wal flush 149 0 edb_password_history -0 0 - 3501 enterprise 2 wal flush 149 0 edb_password_history -0 0 - 3501 enterprise 2 wal flush 149 0 edb_profile -0 0 - 3501 enterprise 2 wal flush 149 0 edb_profile_name_ind -0 0 - 3501 enterprise 2 wal flush 149 0 edb_profile_oid_inde -0 0 - 3501 enterprise 2 wal flush 149 0 edb_profile_password -0 0 - 3501 enterprise 2 wal flush 149 0 edb_resource_group -0 0 - 3501 enterprise 2 wal flush 149 0 edb_resource_group_n -0 0 - 3501 enterprise 2 wal flush 149 0 edb_resource_group_o -0 0 - 3501 enterprise 2 wal flush 149 0 pg_attribute -0 0 - 3501 enterprise 2 wal flush 149 0 pg_attribute_relid_a -0 0 - 3501 enterprise 2 wal flush 149 0 pg_attribute_relid_a -0 0 - 3501 enterprise 2 wal flush 149 0 pg_auth_members -0 0 - 3501 enterprise 2 wal flush 149 0 pg_auth_members_memb -0 0 - 3501 enterprise 2 wal flush 149 0 pg_auth_members_role -0 0 - . - . - . - 3501 enterprise 3 wal write 148 0 N/A -0 0 - 3501 enterprise 3 wal write 148 0 edb_password_history -0 0 - 3501 enterprise 3 wal write 148 0 edb_password_history -0 0 - 3501 enterprise 3 wal write 148 0 edb_password_history -0 0 - 3501 enterprise 3 wal write 148 0 edb_profile -0 0 - 3501 enterprise 3 wal write 148 0 edb_profile_name_ind -0 0 - 3501 enterprise 3 wal write 148 0 edb_profile_oid_inde -0 0 - 3501 enterprise 3 wal write 148 0 edb_profile_password -0 0 - 3501 enterprise 3 wal write 148 0 edb_resource_group -0 0 - 3501 enterprise 3 wal write 148 0 edb_resource_group_n -0 0 - 3501 enterprise 3 wal write 148 0 edb_resource_group_o -0 0 - 3501 enterprise 3 wal write 148 0 pg_attribute -0 0 - 3501 enterprise 3 wal write 148 0 pg_attribute_relid_a -0 0 - 3501 enterprise 3 wal write 148 0 pg_attribute_relid_a -0 0 - 3501 enterprise 3 wal write 148 0 pg_auth_members -0 0 - 3501 enterprise 3 wal write 148 0 pg_auth_members_memb -0 0 - 3501 enterprise 3 wal write 148 0 pg_auth_members_role -0 0 - . - . - . - 3501 enterprise 24 wal write 130 0 pg_toast_1255 -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_1255_index -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_2396 -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_2396_index -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_2964 -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_2964_index -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_3592 -0 0 - 3501 enterprise 24 wal write 130 0 pg_toast_3592_index -0 0 - 3501 enterprise 24 wal write 130 0 pg_type -0 0 - 3501 enterprise 24 wal write 130 0 pg_type_oid_index -0 0 - 3501 enterprise 24 wal write 130 0 pg_type_typname_nsp_ -0 0 -(1304 rows) -``` - -The information displayed in the result set includes: - -| Column name | Description | -| ------------- | ---------------------------------------------------------------------- | -| `ID` | The system-assigned identifier of the wait | -| `USER` | The name of the user that incurred the wait | -| `SEQ` | The sequence number of the wait event | -| `WAIT NAME` | The name of the wait event | -| `ELAPSED` | The length of the wait event in microseconds | -| `File` | The relfilenode number of the file | -| `Name` | If available, the name of the file name related to the wait event | -| `# of Blk` | The block number read or written for a specific instance of the event | -| `Sum of Blks` | The number of blocks read | - - - -### purgesnap() - -The `purgesnap()` function purges a range of snapshots from the snapshot tables. 
The signature is: - -```sql -purgesnap(, ) -``` - -#### Parameters - -`beginning_id` - - An integer value that represents the beginning session identifier. - -`ending_id` - - An integer value that represents the ending session identifier. - -`purgesnap()` removes all snapshots between `beginning_id` and `ending_id`, inclusive: - -```sql -SELECT * FROM purgesnap(6, 9); -__OUTPUT__ - purgesnap ------------------------------------- - Snapshots in range 6 to 9 deleted. -(1 row) -``` - -A call to the `get_snaps()` function after executing the example shows that snapshots `6` through `9` were purged from the snapshot tables: - -```sql -SELECT * FROM get_snaps(); -__OUTPUT__ - get_snaps ------------------------------- - 1 25-JUL-18 09:49:04.224597 - 2 25-JUL-18 09:49:09.310395 - 3 25-JUL-18 09:49:14.378728 - 4 25-JUL-18 09:49:19.448875 - 5 25-JUL-18 09:49:24.52103 - 10 25-JUL-18 09:49:49.855821 - 11 25-JUL-18 09:49:54.919954 - 12 25-JUL-18 09:49:59.987707 -(8 rows) -``` - - - -### truncsnap() - -Use the truncsnap() function to delete all records from the snapshot table. The signature is: - -```sql -truncsnap() -``` - -For example: - -```sql -SELECT * FROM truncsnap(); -__OUTPUT__ - truncsnap ----------------------- - Snapshots truncated. -(1 row) -``` - -A call to the `get_snaps()` function after calling the `truncsnap()` function shows that all records were removed from the snapshot tables: - -```sql -SELECT * FROM get_snaps(); -__OUTPUT__ - get_snaps ------------ -(0 rows) -``` - -## Simulating Statspack AWR reports - -These functions return information comparable to the information contained in an Oracle Statspack/Automatic Workload Repository (AWR) report. When taking a snapshot, performance data from system catalog tables is saved into history tables. These reporting functions report on the differences between two given snapshots: +The following functions return information comparable to the information contained in an Oracle Statspack/Automatic Workload Repository (AWR) report. When taking a snapshot, performance data from system catalog tables is saved into history tables. These reporting functions report on the differences between two given snapshots: - `stat_db_rpt()` - `stat_tables_rpt()` @@ -561,7 +17,7 @@ You can execute the reporting functions individually or you can execute all five -### edbreport() +## edbreport() The `edbreport()` function includes data from the other reporting functions, plus system information. 
The signature is: @@ -569,7 +25,7 @@ The `edbreport()` function includes data from the other reporting functions, plu edbreport(, ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1026,7 +482,7 @@ The information displayed in the `Database Parameters from postgresql.conf` sect -### stat_db_rpt() +## stat_db_rpt() The signature is: @@ -1034,7 +490,7 @@ The signature is: stat_db_rpt(, ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1073,7 +529,7 @@ The information displayed in the DATA from `pg_stat_database` section of the rep -### stat_tables_rpt() +## stat_tables_rpt() The signature is: @@ -1081,7 +537,7 @@ The signature is: function_name(, , , ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1201,7 +657,7 @@ The information displayed in the `DATA from pg_stat_all_tables ordered by rel tu -### statio_tables_rpt() +## statio_tables_rpt() The signature is: @@ -1209,7 +665,7 @@ The signature is: statio_tables_rpt(, , , ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1288,7 +744,7 @@ The information displayed in the `DATA from pg_statio_all_tables` section includ -### stat_indexes_rpt() +## stat_indexes_rpt() The signature is: @@ -1296,7 +752,7 @@ The signature is: stat_indexes_rpt(, , , ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1368,7 +824,7 @@ The information displayed in the `DATA from pg_stat_all_indexes` section include -### statio_indexes_rpt() +## statio_indexes_rpt() The signature is: @@ -1376,7 +832,7 @@ The signature is: statio_indexes_rpt(, , , ) ``` -#### Parameters +### Parameters `beginning_id` @@ -1445,100 +901,3 @@ The information displayed in the `DATA from pg_statio_all_indexes` report includ | `IDX BLKS READ` | The number of index blocks read | | `IDX BLKS HIT` | The number of index blocks hit | - - -## Performance tuning recommendations - -To use DRITA reports for performance tuning, review the top five events in a report. Look for any event that takes an especially large percentage of resources. In a streamlined system, user I/O generally makes up the largest number of waits. Evaluate waits in the context of CPU usage and total time. An event might not be significant if it takes two minutes out of a total measurement interval of two hours and the rest of the time is consumed by CPU time. Evaluate the component of response time (CPU "work" time or other "wait" time) that consumes the highest percentage of overall time. - -When evaluating events, watch for: - -| Event type | Description | -| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | -| Checkpoint waits | Checkpoint waits might indicate that checkpoint parameters need to be adjusted (`checkpoint_segments` and `checkpoint_timeout`). | -| WAL-related waits | WAL-related waits might indicate `wal_buffers` are undersized. | -| SQL Parse waits | If the number of waits is high, try to use prepared statements. | -| db file random reads | If high, check for appropriate indexes and statistics. | -| db file random writes | If high, might need to decrease `bgwriter_delay`. | -| btree random lock acquires | Might indicate indexes are being rebuilt. Schedule index builds during less active time. | - -Also look at the hardware, the operating system, the network, and the application SQL statements in performance reviews. - - - -## Event descriptions - -The following table lists the basic wait events that are displayed by DRITA. 
- -| Event name | Description | -| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `add in shmem lock acquire` | Obsolete/unused. | -| `bgwriter communication lock acquire` | The bgwriter (background writer) process has waited for the short-term lock that synchronizes messages between the bgwriter and a backend process. | -| `btree vacuum lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available vacuum cycle ID. | -| `buffer free list lock acquire` | The server has waited for the short-term lock that synchronizes access to the list of free buffers (in shared memory). | -| `checkpoint lock acquire` | A server process has waited for the short-term lock that prevents simultaneous checkpoints. | -| `checkpoint start lock acquire` | The server has waited for the short-term lock that synchronizes access to the bgwriter checkpoint schedule. | -| `clog control lock acquire` | The server has waited for the short-term lock that synchronizes access to the commit log. | -| `control file lock acquire` | The server has waited for the short-term lock that synchronizes write access to the control file. This is usually a low number. | -| `db file extend` | A server process has waited for the operating system while adding a new page to the end of a file. |e -| `db file read` | A server process has waited for a read from disk to complete. | -| `db file write` | A server process has waited for a write to disk to complete. | -| `db file sync` | A server process has waited for the operating system to flush all changes to disk. | -| `first buf mapping lock acquire` | The server has waited for a short-term lock that synchronizes access to the shared-buffer mapping table. | -| `freespace lock acquire` | The server has waited for the short-term lock that synchronizes access to the freespace map. | -| `lwlock acquire` | The server has waited for a short-term lock that isn't described elsewhere in this table. | -| `multi xact gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available multi-transaction ID (when a SELECT...FOR SHARE statement executes). | -| `multi xact member lock acquire` | The server has waited for the short-term lock that synchronizes access to the multi-transaction member file (when a SELECT...FOR SHARE statement executes). | -| `multi xact offset lock acquire` | The server has waited for the short-term lock that synchronizes access to the multi-transaction offset file (when a SELECT...FOR SHARE statement executes). | -| `oid gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available OID (object ID). | -| `query plan` | The server has computed the execution plan for a SQL statement. | -| `rel cache init lock acquire` | The server has waited for the short-term lock that prevents simultaneous relation-cache loads/unloads. | -| `shmem index lock acquire` | The server has waited for the short-term lock that synchronizes access to the shared-memory map. | -| `sinval lock acquire` | The server has waited for the short-term lock that synchronizes access to the cache invalidation state. | -| `sql parse` | The server has parsed a SQL statement. 
| -| `subtrans control lock acquire` | The server has waited for the short-term lock that synchronizes access to the subtransaction log. | -| `tablespace create lock acquire` | The server has waited for the short-term lock that prevents simultaneous `CREATE TABLESPACE` or `DROP TABLESPACE` commands. | -| `two phase state lock acquire` | The server has waited for the short-term lock that synchronizes access to the list of prepared transactions. | -| `wal insert lock acquire` | The server has waited for the short-term lock that synchronizes write access to the write-ahead log. A high number can indicate that WAL buffers are sized too small. | -| `wal write lock acquire` | The server has waited for the short-term lock that synchronizes write-ahead log flushes. | -| `wal file sync` | The server has waited for the write-ahead log to sync to disk. This is related to the `wal_sync_method` parameter which, by default, is 'fsync'. You can gain better performance by changing this parameter to `open_sync`. | -| `wal flush` | The server has waited for the write-ahead log to flush to disk. | -| `wal write` | The server has waited for a write to the write-ahead log buffer. Expect this value to be high. | -| `xid gen lock acquire` | The server has waited for the short-term lock that synchronizes access to the next available transaction ID. | - -When wait events occur for *lightweight locks*, DRITA displays them as well. It uses a lightweight lock to protect a particular data structure in shared memory. - -Certain wait events can be due to the server process waiting for one of a group of related lightweight locks, which is referred to as a *lightweight lock tranche*. DRITA doesn't display individual lightweight lock tranches, but it displays their summation with a single event named `other lwlock acquire`. - -For a list and description of lightweight locks displayed by DRITA, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-SETUP). Under [Viewing Statistics](https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-STATS-VIEWS), see the Wait Event Type table for more details. - -This example displays lightweight locks `ProcArrayLock`, `CLogControlLock`, `WALBufMappingLock`, and `XidGenLock`. - -```sql -postgres=# select * from sys_rpt(40,70,20); -__OUTPUT__ - sys_rpt ----------------------------------------------------------------------------- - WAIT NAME COUNT WAIT TIME % WAIT ----------------------------------------------------------------------------- - wal flush 56107 44.456494 47.65 - db file read 66123 19.543968 20.95 - wal write 32886 12.780866 13.70 - wal file sync 32933 11.792972 12.64 - query plan 223576 4.539186 4.87 - db file extend 2339 0.087038 0.09 - other lwlock acquire 402 0.066591 0.07 - ProcArrayLock 135 0.012942 0.01 - CLogControlLock 212 0.010333 0.01 - WALBufMappingLock 47 0.006068 0.01 - XidGenLock 53 0.005296 0.01 -(13 rows) -``` - -DRITA also displays wait events that are related to certain EDB Postgres Advanced Server product features. These events and the `other lwlock acquire` event are listed in the following table. - -| Event name | Description | -| ----------------------- | ----------------------------------------------------------------- | -| `BulkLoadLock` | The server has waited for access related to EDB\*Loader. | -| `EDBResoureManagerLock` | The server has waited for access related to EDB Resource Manager. | -| `other lwlock acquire` | Summation of waits for lightweight lock tranches. 
                    |
diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx
new file mode 100644
index 00000000000..3a8a1f9bea5
--- /dev/null
+++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx
@@ -0,0 +1,46 @@
+---
+title: "Taking a snapshot"
+description: "Describes how to take a snapshot of system performance data"
+---
+
+
+
+EDB Postgres Advanced Server's `postgresql.conf` file includes a configuration parameter named `timed_statistics` that controls collecting timing data. The valid parameter values are `TRUE` or `FALSE`. The default value is `FALSE`.
+
+`timed_statistics` is a dynamic parameter that you can modify in the `postgresql.conf` file or while a session is in progress. To enable DRITA, you must either:
+
+- Modify the `postgresql.conf` file, setting the `timed_statistics` parameter to `TRUE`.
+
+- Connect to the server with the EDB-PSQL client, and invoke the command:
+
+```sql
+SET timed_statistics = TRUE
+```
+
+After modifying the `timed_statistics` parameter, take a starting snapshot. A snapshot captures the current state of each timer and event counter. The server compares the starting snapshot to a later snapshot to gauge system performance.
+
+Use the `edbsnap()` function to take the beginning snapshot:
+
+```sql
+edb=# SELECT * FROM edbsnap();
+__OUTPUT__
+ edbsnap
+----------------------
+ Statement processed.
+(1 row)
+```
+
+Then, run the workload that you want to evaluate. When the workload is complete or at a strategic point during the workload, take another snapshot:
+
+```sql
+edb=# SELECT * FROM edbsnap();
+__OUTPUT__
+ edbsnap
+----------------------
+ Statement processed.
+(1 row)
+```
+
+You can capture multiple snapshots during a session. Then, use the DRITA functions and reports to manage and compare the snapshots to evaluate performance information.
+
+
diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/using_drita_functions.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/using_drita_functions.mdx
new file mode 100644
index 00000000000..ad6c294e4b7
--- /dev/null
+++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/using_drita_functions.mdx
@@ -0,0 +1,471 @@
+---
+title: "Using DRITA functions"
+description: "Describes how to use DRITA functions to gather wait information and manage snapshots"
+---
+
+You can use DRITA functions to gather wait information and manage snapshots. DRITA functions are fully supported by EDB Postgres Advanced Server 15 whether your installation is made compatible with Oracle databases or is in PostgreSQL-compatible mode.
+
+
+
+## get_snaps()
+
+The `get_snaps()` function returns a list of the current snapshots.
The signature is: + +```sql +get_snaps() +``` + +This example uses the `get_snaps()` function to display a list of snapshots: + +```sql +SELECT * FROM get_snaps(); +__OUTPUT__ + get_snaps +------------------------------ + 1 25-JUL-18 09:49:04.224597 + 2 25-JUL-18 09:49:09.310395 + 3 25-JUL-18 09:49:14.378728 + 4 25-JUL-18 09:49:19.448875 + 5 25-JUL-18 09:49:24.52103 + 6 25-JUL-18 09:49:29.586889 + 7 25-JUL-18 09:49:34.65529 + 8 25-JUL-18 09:49:39.723095 + 9 25-JUL-18 09:49:44.788392 + 10 25-JUL-18 09:49:49.855821 + 11 25-JUL-18 09:49:54.919954 + 12 25-JUL-18 09:49:59.987707 +(12 rows) +``` + +The first column in the result list displays the snapshot identifier. The second column displays the date and time that the snapshot was captured. + + + +## sys_rpt() + +The `sys_rpt()` function returns system wait information. The signature is: + +```sql +sys_rpt(, , ) +``` + +### Parameters + +`beginning_id` + + An integer value that represents the beginning session identifier. + +`ending_id` + + An integer value that represents the ending session identifier. + +`top_n` + + The number of rows to return. + +This example shows a call to the `sys_rpt()` function: + +```sql +SELECT * FROM sys_rpt(9, 10, 10); +__OUTPUT__ + sys_rpt +----------------------------------------------------------------------------- +WAIT NAME COUNT WAIT TIME % WAIT +--------------------------------------------------------------------------- +wal flush 8359 1.357593 30.62 +wal write 8358 1.349153 30.43 +wal file sync 8358 1.286437 29.02 +query plan 33439 0.439324 9.91 +db file extend 54 0.000585 0.01 +db file read 31 0.000307 0.01 +other lwlock acquire 0 0.000000 0.00 +ProcArrayLock 0 0.000000 0.00 +CLogControlLock 0 0.000000 0.00 +(11 rows) +``` + +The information displayed in the result set includes: + +| Column name | Description | +| ----------- | -------------------------------------------------------------------------| +| `WAIT NAME` | The name of the wait | +| `COUNT` | The number of times that the wait event occurred | +| `WAIT TIME` | The time of the wait event in seconds | +| `% WAIT` | The percentage of the total wait time used by this wait for this session | + + + +## sess_rpt() + +The `sess_rpt()` function returns session wait information. The signature is: + +```sql +sess_rpt(, , ) +``` + +### Parameters + +`beginning_id` + + An integer value that represents the beginning session identifier. + +`ending_id` + + An integer value that represents the ending session identifier. + +`top_n` + + The number of rows to return. 
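+
+The snapshot identifiers passed as `beginning_id` and `ending_id` come from earlier calls to `edbsnap()`; you can look them up with `get_snaps()`. The following sketch shows the overall flow. The snapshot IDs `8` and `9` are assumptions for illustration; use the identifiers that `get_snaps()` reports on your system:
+
+```sql
+-- Take a snapshot, run the workload of interest, then take a second snapshot.
+SELECT * FROM edbsnap();
+-- ... run the workload ...
+SELECT * FROM edbsnap();
+
+-- List the available snapshots and note the two new snapshot identifiers.
+SELECT * FROM get_snaps();
+
+-- Report the top 10 session waits recorded between the two snapshots
+-- (snapshot IDs 8 and 9 are assumed here).
+SELECT * FROM sess_rpt(8, 9, 10);
+```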
+ +This example shows a call to the `sess_rpt()` function: + +```sql +SELECT * FROM sess_rpt(8, 9, 10); +__OUTPUT__ + sess_rpt +------------------------------------------------------------------------------------- +ID USER WAIT NAME COUNT TIME % WAIT SES % WAIT ALL +------------------------------------------------------------------------------------- +3501 enterprise wal flush 8354 1.354958 30.61 30.61 +3501 enterprise wal write 8354 1.348192 30.46 30.46 +3501 enterprise wal file sync 8354 1.285607 29.04 29.04 +3501 enterprise query plan 33413 0.436901 9.87 9.87 +3501 enterprise db file extend 54 0.000578 0.01 0.01 +3501 enterprise db file read 56 0.000541 0.01 0.01 +3501 enterprise ProcArrayLock 0 0.000000 0.00 0.00 +3501 enterprise CLogControlLock 0 0.000000 0.00 0.00 +(10 rows) +``` + +The information displayed in the result set includes: + +| Column name | Description | +| ------------ | -------------------------------------------------------------------------- | +| `ID` | The processID of the session | +| `USER` | The name of the user incurring the wait | +| `WAIT NAME` | The name of the wait event | +| `COUNT` | The number of times that the wait event occurred | +| `TIME` | The length of the wait event in seconds | +| `% WAIT SES` | The percentage of the total wait time used by this wait for this session | +| `% WAIT ALL` | The percentage of the total wait time used by this wait for all sessions | + + + +## sessid_rpt() + +The `sessid_rpt()` function returns session ID information for a specified backend. The signature is: + +```sql +sessid_rpt(, , ) +``` + +### Parameters + +`beginning_id` + + An integer value that represents the beginning session identifier. + +`ending_id` + + An integer value that represents the ending session identifier. + +`backend_id` + + An integer value that represents the backend identifier. + +This example shows a call to `sessid_rpt()`: + +```sql +SELECT * FROM sessid_rpt(8, 9, 3501); +__OUTPUT__ + sessid_rpt +------------------------------------------------------------------------------------- +ID USER WAIT NAME COUNT TIME % WAIT SES % WAIT ALL +------------------------------------------------------------------------------------- +3501 enterprise CLogControlLock 0 0.000000 0.00 0.00 +3501 enterprise ProcArrayLock 0 0.000000 0.00 0.00 +3501 enterprise db file read 56 0.000541 0.01 0.01 +3501 enterprise db file extend 54 0.000578 0.01 0.01 +3501 enterprise query plan 33413 0.436901 9.87 9.87 +3501 enterprise wal file sync 8354 1.285607 29.04 29.04 +3501 enterprise wal write 8354 1.348192 30.46 30.46 +3501 enterprise wal flush 8354 1.354958 30.61 30.61 +(10 rows) +``` + +The information displayed in the result set includes: + +| Column name | Description | +| ------------ | -------------------------------------------------------------------------- | +| `ID` | The process ID of the wait | +| `USER` | The name of the user that owns the session | +| `WAIT NAME` | The name of the wait event | +| `COUNT` | The number of times that the wait event occurred | +| `TIME` | The length of the wait in seconds | +| `% WAIT SES` | The percentage of the total wait time used by this wait for this session | +| `% WAIT ALL` | The percentage of the total wait time used by this wait for all sessions | + + + +## sesshist_rpt() + +The `sesshist_rpt()` function returns session wait information for a specified backend. The signature is: + +```sql +sesshist_rpt(, ) +``` + +### Parameters + +`snapshot_id` + + An integer value that identifies the snapshot. 
+ +`session_id` + + An integer value that represents the session. + +This example shows a call to the `sesshist_rpt()` function: + +!!! Note + The example was shortened. Over 1300 rows are actually generated. + +```sql +SELECT * FROM sesshist_rpt (9, 3501); +__OUTPUT__ + sesshist_rpt +----------------------------------------------------------------------------- +---------- + ID USER SEQ WAIT NAME ELAPSED File Name # + of Blk Sum of Blks +----------------------------------------------------------------------------- +--------- + 3501 enterprise 1 query plan 13 0 N/A +0 0 + 3501 enterprise 1 query plan 13 0 edb_password_history +0 0 + 3501 enterprise 1 query plan 13 0 edb_password_history +0 0 + 3501 enterprise 1 query plan 13 0 edb_password_history +0 0 + 3501 enterprise 1 query plan 13 0 edb_profile +0 0 + 3501 enterprise 1 query plan 13 0 edb_profile_name_ind +0 0 + 3501 enterprise 1 query plan 13 0 edb_profile_oid_inde +0 0 + 3501 enterprise 1 query plan 13 0 edb_profile_password +0 0 + 3501 enterprise 1 query plan 13 0 edb_resource_group +0 0 + 3501 enterprise 1 query plan 13 0 edb_resource_group_n +0 0 + 3501 enterprise 1 query plan 13 0 edb_resource_group_o +0 0 + 3501 enterprise 1 query plan 13 0 pg_attribute +0 0 + 3501 enterprise 1 query plan 13 0 pg_attribute_relid_a +0 0 + 3501 enterprise 1 query plan 13 0 pg_attribute_relid_a +0 0 + 3501 enterprise 1 query plan 13 0 pg_auth_members +0 0 + 3501 enterprise 1 query plan 13 0 pg_auth_members_memb +0 0 + 3501 enterprise 1 query plan 13 0 pg_auth_members_role +0 0 + . + . + . + 3501 enterprise 2 wal flush 149 0 N/A +0 0 + 3501 enterprise 2 wal flush 149 0 edb_password_history +0 0 + 3501 enterprise 2 wal flush 149 0 edb_password_history +0 0 + 3501 enterprise 2 wal flush 149 0 edb_password_history +0 0 + 3501 enterprise 2 wal flush 149 0 edb_profile +0 0 + 3501 enterprise 2 wal flush 149 0 edb_profile_name_ind +0 0 + 3501 enterprise 2 wal flush 149 0 edb_profile_oid_inde +0 0 + 3501 enterprise 2 wal flush 149 0 edb_profile_password +0 0 + 3501 enterprise 2 wal flush 149 0 edb_resource_group +0 0 + 3501 enterprise 2 wal flush 149 0 edb_resource_group_n +0 0 + 3501 enterprise 2 wal flush 149 0 edb_resource_group_o +0 0 + 3501 enterprise 2 wal flush 149 0 pg_attribute +0 0 + 3501 enterprise 2 wal flush 149 0 pg_attribute_relid_a +0 0 + 3501 enterprise 2 wal flush 149 0 pg_attribute_relid_a +0 0 + 3501 enterprise 2 wal flush 149 0 pg_auth_members +0 0 + 3501 enterprise 2 wal flush 149 0 pg_auth_members_memb +0 0 + 3501 enterprise 2 wal flush 149 0 pg_auth_members_role +0 0 + . + . + . 
+ 3501 enterprise 3 wal write 148 0 N/A +0 0 + 3501 enterprise 3 wal write 148 0 edb_password_history +0 0 + 3501 enterprise 3 wal write 148 0 edb_password_history +0 0 + 3501 enterprise 3 wal write 148 0 edb_password_history +0 0 + 3501 enterprise 3 wal write 148 0 edb_profile +0 0 + 3501 enterprise 3 wal write 148 0 edb_profile_name_ind +0 0 + 3501 enterprise 3 wal write 148 0 edb_profile_oid_inde +0 0 + 3501 enterprise 3 wal write 148 0 edb_profile_password +0 0 + 3501 enterprise 3 wal write 148 0 edb_resource_group +0 0 + 3501 enterprise 3 wal write 148 0 edb_resource_group_n +0 0 + 3501 enterprise 3 wal write 148 0 edb_resource_group_o +0 0 + 3501 enterprise 3 wal write 148 0 pg_attribute +0 0 + 3501 enterprise 3 wal write 148 0 pg_attribute_relid_a +0 0 + 3501 enterprise 3 wal write 148 0 pg_attribute_relid_a +0 0 + 3501 enterprise 3 wal write 148 0 pg_auth_members +0 0 + 3501 enterprise 3 wal write 148 0 pg_auth_members_memb +0 0 + 3501 enterprise 3 wal write 148 0 pg_auth_members_role +0 0 + . + . + . + 3501 enterprise 24 wal write 130 0 pg_toast_1255 +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_1255_index +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_2396 +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_2396_index +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_2964 +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_2964_index +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_3592 +0 0 + 3501 enterprise 24 wal write 130 0 pg_toast_3592_index +0 0 + 3501 enterprise 24 wal write 130 0 pg_type +0 0 + 3501 enterprise 24 wal write 130 0 pg_type_oid_index +0 0 + 3501 enterprise 24 wal write 130 0 pg_type_typname_nsp_ +0 0 +(1304 rows) +``` + +The information displayed in the result set includes: + +| Column name | Description | +| ------------- | ---------------------------------------------------------------------- | +| `ID` | The system-assigned identifier of the wait | +| `USER` | The name of the user that incurred the wait | +| `SEQ` | The sequence number of the wait event | +| `WAIT NAME` | The name of the wait event | +| `ELAPSED` | The length of the wait event in microseconds | +| `File` | The relfilenode number of the file | +| `Name` | If available, the name of the file name related to the wait event | +| `# of Blk` | The block number read or written for a specific instance of the event | +| `Sum of Blks` | The number of blocks read | + + + +## purgesnap() + +The `purgesnap()` function purges a range of snapshots from the snapshot tables. The signature is: + +```sql +purgesnap(, ) +``` + +### Parameters + +`beginning_id` + + An integer value that represents the beginning session identifier. + +`ending_id` + + An integer value that represents the ending session identifier. + +`purgesnap()` removes all snapshots between `beginning_id` and `ending_id`, inclusive: + +```sql +SELECT * FROM purgesnap(6, 9); +__OUTPUT__ + purgesnap +------------------------------------ + Snapshots in range 6 to 9 deleted. 
+(1 row) +``` + +A call to the `get_snaps()` function after executing the example shows that snapshots `6` through `9` were purged from the snapshot tables: + +```sql +SELECT * FROM get_snaps(); +__OUTPUT__ + get_snaps +------------------------------ + 1 25-JUL-18 09:49:04.224597 + 2 25-JUL-18 09:49:09.310395 + 3 25-JUL-18 09:49:14.378728 + 4 25-JUL-18 09:49:19.448875 + 5 25-JUL-18 09:49:24.52103 + 10 25-JUL-18 09:49:49.855821 + 11 25-JUL-18 09:49:54.919954 + 12 25-JUL-18 09:49:59.987707 +(8 rows) +``` + + + +## truncsnap() + +Use the truncsnap() function to delete all records from the snapshot table. The signature is: + +```sql +truncsnap() +``` + +For example: + +```sql +SELECT * FROM truncsnap(); +__OUTPUT__ + truncsnap +---------------------- + Snapshots truncated. +(1 row) +``` + +A call to the `get_snaps()` function after calling the `truncsnap()` function shows that all records were removed from the snapshot tables: + +```sql +SELECT * FROM get_snaps(); +__OUTPUT__ + get_snaps +----------- +(0 rows) +``` From 1435e2813e9fabf8b96ba6f8a30789e15d950d96 Mon Sep 17 00:00:00 2001 From: francoughlin Date: Fri, 14 Jul 2023 15:57:15 -0400 Subject: [PATCH 11/18] Additional edits to table partitioning Minor edits --- .../defining_a_default_partition.mdx | 5 +++-- .../defining_a_maxvalue_partition.mdx | 5 +---- ...ltiple_partitioning_keys_in_a_range_partitioned_table.mdx | 2 -- 3 files changed, 4 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx index d0f59ccc4c2..5cc9db3346e 100644 --- a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_default_partition.mdx @@ -2,8 +2,6 @@ title: "Defining a DEFAULT partition" --- -## Defining a DEFAULT partition - A `DEFAULT` partition captures any rows that don't fit into any other partition in a `LIST` partitioned or subpartitioned table. If you don't include a `DEFAULT` rule, any row that doesn't match one of the values in the partitioning constraints causes an error. Each `LIST` partition or subpartition can have its own `DEFAULT` rule. The syntax of a `DEFAULT` rule is: @@ -14,6 +12,8 @@ PARTITION [] VALUES (DEFAULT) Where `partition_name` specifies the name of the partition or subpartition that stores any rows that don't match the rules specified for other partitions. +## Adding a DEFAULT partition + You can create a list-partitioned table in which the server decides the partition for storing the data based on the value of the `country` column. 
In that case, if you attempt to add a row in which the value of the `country` column contains a value not listed in the rules, an error is reported: ```sql @@ -42,6 +42,7 @@ PARTITION BY LIST(country) PARTITION others VALUES (DEFAULT) ); ``` +## Testing the DEFAULT partition To test the `DEFAULT` partition, add a row with a value in the `country` column that doesn't match one of the countries specified in the partitioning constraints: diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx index d65880dd2b4..7078f946a0b 100644 --- a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/05_handling_stray_values_in_a_list_or_range_partitioned_table/defining_a_maxvalue_partition.mdx @@ -2,9 +2,6 @@ title: "Defining a MAXVALUE partition" --- - -## Defining a MAXVALUE partition - A `MAXVALUE` partition or subpartition captures any rows that don't fit into any other partition in a range-partitioned or subpartitioned table. If you don't include a `MAXVALUE` rule, any row that exceeds the maximum limit specified by the partitioning rules causes in an error. Each partition or subpartition can have its own `MAXVALUE` partition. The syntax of a `MAXVALUE` rule is: @@ -15,7 +12,7 @@ PARTITION [] VALUES LESS THAN (MAXVALUE) Where `partition_name` specifies the name of the partition that stores any rows that don't match the rules specified for other partitions. -The last example created a range-partitioned table in which the data was partitioned based on the value of the `date` column. If you attempt to add a row with a `date` value that exceeds a date listed in the partitioning constraints, EDB Postgres Advanced Server reports an error. +[This example](/epas/latest/application_programming/epas_compat_table_partitioning/06_specifying_multiple_partitioning_keys_in_a_range_partitioned_table/) created a range-partitioned table in which the data was partitioned based on the value of the `date` column. If you attempt to add a row with a `date` value that exceeds a date listed in the partitioning constraints, EDB Postgres Advanced Server reports an error. ```sql edb=# INSERT INTO sales VALUES diff --git a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/06_specifying_multiple_partitioning_keys_in_a_range_partitioned_table.mdx b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/06_specifying_multiple_partitioning_keys_in_a_range_partitioned_table.mdx index ad482e415fc..99b75b6dab7 100644 --- a/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/06_specifying_multiple_partitioning_keys_in_a_range_partitioned_table.mdx +++ b/product_docs/docs/epas/15/application_programming/epas_compat_table_partitioning/06_specifying_multiple_partitioning_keys_in_a_range_partitioned_table.mdx @@ -13,8 +13,6 @@ redirects: You can often improve performance by specifying multiple key columns for a `RANGE` partitioned table. 
If you often select rows using comparison operators on a small set of columns based on a greater-than or less-than value, consider using those columns in `RANGE` partitioning rules. -## Specifying multiple keys in a range-partitioned table - Range-partitioned table definitions can include multiple columns in the partitioning key. To specify multiple partitioning keys for a range-partitioned table, include the column names in a comma-separated list after the `PARTITION BY RANGE` clause: ```sql From bf7db9236aecacc5b3366b6cc02e29e68d75326d Mon Sep 17 00:00:00 2001 From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> Date: Mon, 17 Jul 2023 07:14:03 -0400 Subject: [PATCH 12/18] Update product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx --- .../7.0.4.1/03_the_advanced_server_net_connector_overview.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx index d0956a818b4..e9c14a95774 100644 --- a/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx +++ b/product_docs/docs/net_connector/7.0.4.1/03_the_advanced_server_net_connector_overview.mdx @@ -26,7 +26,7 @@ The `EDBDataSource` is the entry point for all the connections made to the datab `EDBConnection` - The `EDBConnection` class represents a connection to EDB Postgres Advanced Server. An `EDBConnection` object contains a `ConnectionString` that instructs the .NET client how to connect to an EDB Postgres Advanced Server database. `EDBConnection` should be obtained from an `EDBDataSource` instance and used directly only in specific scenario such as transactions. + The `EDBConnection` class represents a connection to EDB Postgres Advanced Server. An `EDBConnection` object contains a `ConnectionString` that instructs the .NET client how to connect to an EDB Postgres Advanced Server database. `EDBConnection` should be obtained from an `EDBDataSource` instance and used directly only in specific scenarios such as transactions. `EDBCommand` From 8c10e67cc2f935ab8ce6d828594ac7a06420068e Mon Sep 17 00:00:00 2001 From: Fran Coughlin <132373434+francoughlin@users.noreply.github.com> Date: Mon, 17 Jul 2023 08:27:20 -0400 Subject: [PATCH 13/18] Update product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx Co-authored-by: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> --- .../12_debugger/configuring_debugger.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx b/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx index e4fd8d20b7f..b04c2d86626 100644 --- a/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx +++ b/product_docs/docs/epas/15/application_programming/12_debugger/configuring_debugger.mdx @@ -20,7 +20,7 @@ If your EDB Postgres Advanced Server host is on a CentOS or Linux system, you ca yum install edb-pgadmin4* ``` -On Linux, you must also install the `edb-asxx-server-pldebugger` RPM package, where `xx` is the EDB Postgres Advanced Server version number. Information about pgAdmin 4 is available at . +On Linux, you must also install the `edb-asxx-server-pldebugger` RPM package, where `xx` is the EDB Postgres Advanced Server version number. 
Information about pgAdmin 4 is available at the [pgAdmin website](https://www.pgadmin.org/). The RPM installation adds the pgAdmin4 icon to your Applications menu. From 05618682a54325741d99652efc2550b08e2f794b Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Mon, 17 Jul 2023 12:47:42 -0400 Subject: [PATCH 14/18] Partner docs: restoring Security section on home page --- src/pages/index.js | 25 ++++++++++++++++++++++--- 1 file changed, 22 insertions(+), 3 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 1abfb30bc02..0b92ca18c0d 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -100,9 +100,9 @@ const Page = () => (

Use the new reference section in EDB Postgres Distributed to - quickly look up views, catalogs, functions, and variables. It's - a new view of the documentation designed to centralize essential - information and speed up your development. + quickly look up views, catalogs, functions, and variables. + It's a new view of the documentation designed to centralize + essential information and speed up your development.

@@ -353,6 +353,25 @@ const Page = () => ( SIB Visions VisionX + + Security + + + Hashicorp Vault + + + Hashicorp Vault Transit Secrets Engine + + + Imperva Data Security Fabric + + + Thales CipherTrust Manager + + + Thales CipherTrust Transparent Encryption + + Data Movement From 9ca54749434d0712a7a1ad4fae628a986398e640 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 17 Jul 2023 13:00:18 -0400 Subject: [PATCH 15/18] Some edits to the restructured content This has been edited before so I stopped after a few files. --- .../control_file_examples.mdx | 4 +- .../building_the_control_file/index.mdx | 40 +++++++++---------- 2 files changed, 21 insertions(+), 23 deletions(-) diff --git a/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/control_file_examples.mdx b/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/control_file_examples.mdx index 12278ee581d..605a0192551 100644 --- a/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/control_file_examples.mdx +++ b/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/control_file_examples.mdx @@ -222,7 +222,7 @@ __OUTPUT__ ## Field types with length specification -This control file contains the field type clauses with the length specification: +This control file contains the field-type clauses with the length specification: ```sql LOAD DATA @@ -396,7 +396,7 @@ __OUTPUT__ ## Multiple INTO TABLE clauses -This example uses multiple `INTO TABLE` clauses. For this example, two empty tables are created with the same data definition as the `emp` table. The following `CREATE TABLE` commands create these two empty tables, without inserting rows from the original `emp` table: +This example uses multiple `INTO TABLE` clauses. For this example, two empty tables are created with the same data definition as the `emp` table. The following `CREATE TABLE` commands create these two empty tables without inserting rows from the original `emp` table: ```sql CREATE TABLE emp_research AS SELECT * FROM emp WHERE deptno = 99; diff --git a/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/index.mdx b/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/index.mdx index 6f146088e73..825193a3655 100644 --- a/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/index.mdx +++ b/product_docs/docs/epas/15/database_administration/02_edb_loader/building_the_control_file/index.mdx @@ -64,28 +64,26 @@ ZONED EXTERNAL [()] | ZONED [( [,])] ## Description -The specification of `data_file`, `bad_file`, and `discard_file` can include the full directory path or a relative directory path to the file name. If the file name is specified alone or with a relative directory path, the file is then assumed to exist, in the case of `data_file`, relative to the current working directory from which you invoke `edbldr. In the case of `bad_file` or `discard_file`, it's created. +The specification of `data_file`, `bad_file`, and `discard_file` can include the full directory path or a relative directory path to the filename. If the filename is specified alone or with a relative directory path, the file is then assumed to exist, in the case of `data_file`, relative to the current working directory from which you invoke `edbldr`. In the case of `bad_file` or `discard_file`, it's created. 
-You can include references to environment variables in the EDB\*Loader control file when referring to a directory path or file name. Environment variable references are formatted differently on Windows systems than on Linux systems: +You can include references to environment variables in the EDB\*Loader control file when referring to a directory path or filename. Environment variable references are formatted differently on Windows systems than on Linux systems: - On Linux, the format is `$ENV_VARIABLE` or `${ENV_VARIABLE}`. - On Windows, the format is `%ENV_VARIABLE%`. -Where `ENV_VARIABLE` is the environment variable that's set to the directory path or file name. +Where `ENV_VARIABLE` is the environment variable that's set to the directory path or filename. Set the `EDBLDR_ENV_STYLE` environment variable to interpret environment variable references as Windows-styled references or Linux-styled references regardless of the operating system on which EDB\*Loader resides. You can use this environment variable to create portable control files for EDB\*Loader. - On a Windows system, set `EDBLDR_ENV_STYLE` to `linux` or `unix` to recognize Linux-style references in the control file. - On a Linux system, set `EDBLDR_ENV_STYLE` to `windows` to recognize Windows-style references in the control file. -The operating system account `enterprisedb` must have read permission on the directory and file specified by `data_file`. - -The operating system account enterprisedb must have write permission on the directories where `bad_file` and `discard_file` are written. +The operating system account enterprisedb must have read permission on the directory and file specified by `data_file`. It must have write permission to the directories where `bad_file` and `discard_file` are written. !!! Note - The file names for `data_file`, `bad_file`, and `discard_file` must have extensions `.dat`, `.bad`, and `.dsc`, respectively. If the provided file name doesn't have an extension, EDB\*Loader assumes the actual file name includes the appropriate extension. + The filenames for `data_file`, `bad_file`, and `discard_file` must have extensions `.dat`, `.bad`, and `.dsc`, respectively. If the provided filename doesn't have an extension, EDB\*Loader assumes the actual filename includes the appropriate extension. -Suppose an EDB\*Loader session results in data format errors, the `BADFILE` clause isn't specified, and the BAD parameter isn't given on the command line when `edbldr` is invoked. In this case, a bad file is created with the name `control_file_base.bad` in the directory from which `edbldr` is invoked. `control_file_base` is the base name of the control file used in the `edbldr` session. +Suppose an EDB\*Loader session results in data format errors, the `BADFILE` clause isn't specified, and the `BAD` parameter isn't given on the command line when `edbldr` is invoked. In this case, a bad file is created with the name `control_file_base.bad` in the directory from which `edbldr` is invoked. `control_file_base` is the base name of the control file used in the `edbldr` session. If all of the following conditions are true, the discard file isn't created even if the EDB\*Loader session results in discarded records: @@ -95,10 +93,10 @@ If all of the following conditions are true, the discard file isn't created even - The `DISCARDS` clause for specifying the maximum number of discarded records isn't included in the control file. 
- The `DISCARDMAX` parameter for specifying the maximum number of discarded records isn't included on the command line. -Suppose you don't specify the `DISCARDFILE` clause and the `DISCARD` parameter for explicitly specifying the discard file name, but you do specify `DISCARDMAX` or `DISCARDS`. In this case, the EDB\*Loader session creates a discard file using the data file name with an extension of `.dsc`. +Suppose you don't specify the `DISCARDFILE` clause and the `DISCARD` parameter for explicitly specifying the discard filename, but you do specify `DISCARDMAX` or `DISCARDS`. In this case, the EDB\*Loader session creates a discard file using the data filename with an extension of `.dsc`. !!! Note - The keywords `DISCARD` and `DISCARDS` differ. `DISCARD` is an EDB\*Loader command line parameter used to specify the discard file name. `DISCARDS` is a clause of the `LOAD DATA` directive that can appear only in the control file. Keywords `DISCARDS` and `DISCARDMAX` provide the same functionality of specifying the maximum number of discarded records allowed before terminating the EDB\*Loader session. Records loaded into the database before terminating the EDB\*Loader session due to exceeding the `DISCARDS` or `DISCARDMAX` settings are kept in the database. They aren't rolled back. + The keywords `DISCARD` and `DISCARDS` differ. `DISCARD` is an EDB\*Loader command line parameter used to specify the discard filename. `DISCARDS` is a clause of the `LOAD DATA` directive that can appear only in the control file. Keywords `DISCARDS` and `DISCARDMAX` provide the same functionality of specifying the maximum number of discarded records allowed before terminating the EDB\*Loader session. Records loaded into the database before terminating the EDB\*Loader session due to exceeding the `DISCARDS` or `DISCARDMAX` settings are kept in the database. They aren't rolled back. Specifying one of `INSERT`, `APPEND`, `REPLACE`, or `TRUNCATE` establishes the default action for adding rows to target tables. The default action is `INSERT`. @@ -153,7 +151,7 @@ If you specify the `FIELDS TERMINATED BY` clause, then you can't specify the `PO If `SKIP_INDEX_MAINTENANCE` is `TRUE`, index maintenance isn't performed as part of a direct path load, and indexes on the loaded table are marked as invalid. The default value of `SKIP_INDEX_MAINTENANCE` is `FALSE`. !!! Note - During a parallel direct path load, target table indexes aren't updated. They are marked as invalid after the load is complete. + During a parallel direct path load, target table indexes aren't updated. They're marked as invalid after the load is complete. You can use the `REINDEX` command to rebuild an index. For more information about the `REINDEX` command, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/sql-reindex.html). @@ -169,12 +167,12 @@ If you specify the `FIELDS TERMINATED BY` clause, then you can't specify the `PO File containing the data to load into `target_table`. Each record in the data file corresponds to a row to insert into `target_table`. - If you don't include an extension in the file name, EDB\*Loader assumes the file has an extension of `.dat`, for example, `mydatafile.dat`. + If you don't include an extension in the filename, EDB\*Loader assumes the file has an extension of `.dat`, for example, `mydatafile.dat`. !!! Note If you specify the `DATA` parameter on the command line when invoking `edbldr`, the file given by the command line `DATA` parameter is used instead. 
- If you omit the `INFILE` clause and the command line `DATA` parameter, then the data file name is assumed to be the same as the control file name but with the extension `.dat`. + If you omit the `INFILE` clause and the command line `DATA` parameter, then the data filename is assumed to be the same as the control filename but with the extension `.dat`. `stdin` @@ -186,16 +184,16 @@ If you specify the `FIELDS TERMINATED BY` clause, then you can't specify the `PO For EDB Postgres Advanced Server version 12 and later, a bad file is generated only if there are any bad or rejected records. However, if an existing bad file has the same name and location, and no bad records are generated after invoking a new version of `edbldr`, the existing bad file remains intact. - If you don't include an extension in the file name, EDB\*Loader assumes the file has an extension of `.bad`, for example, `mybadfile.bad`. + If you don't include an extension in the filename, EDB\*Loader assumes the file has an extension of `.bad`, for example, `mybadfile.bad`. !!! Note If you specify the `BAD` parameter on the command line when invoking `edbldr`, the file given with the command line `BAD` parameter is used instead. `discard_file` - File that receives input data records that aren't loaded into any table. This input data records are discarded because none of the selection criteria are met for tables with the `WHEN` clause and there are no tables without a `WHEN` clause. All records meet the selection criteria of a table without a `WHEN` clause. + File that receives input data records that aren't loaded into any table. This input data records are discarded because none of the selection criteria are met for tables with the `WHEN` clause, and there are no tables without a `WHEN` clause. All records meet the selection criteria of a table without a `WHEN` clause. - If you don't include an extension with the file name, EDB\*Loader assumes the file has an extension of `.dsc`, for example, `mydiscardfile.dsc`. + If you don't include an extension with the filename, EDB\*Loader assumes the file has an extension of `.dsc`, for example, `mydiscardfile.dsc`. !!! Note If you specify the `DISCARD` parameter on the command line when invoking `edbldr`, the file given with the command line `DISCARD` parameter is used instead. @@ -238,7 +236,7 @@ If you specify the `FIELDS TERMINATED BY` clause, then you can't specify the `PO The `PRESERVE BLANKS` option works only with the `OPTIONALLY ENCLOSED BY` clause. It retains leading and trailing whitespaces for both delimited and predetermined size fields. - In case of `NO PRESERVE BLANKS`, if the fields are delimited, then only leading whitespaces are omitted. If any trailing whitespaces are present, they are left intact. In the case of predetermined-sized fields with `NO PRESERVE BLANKS`, the trailing whitespaces are omitted. Any leading whitespaces are left intact. + In case of `NO PRESERVE BLANKS`, if the fields are delimited, then only leading whitespaces are omitted. If any trailing whitespaces are present, they're left intact. In the case of predetermined-sized fields with `NO PRESERVE BLANKS`, the trailing whitespaces are omitted. Any leading whitespaces are left intact. !!! Note If you don't explicitly provide `PRESERVE BLANKS` or `NO PRESERVE BLANKS`, then the behavior defaults to `NO PRESERVE BLANKS`. This option doesn't work for ideographic whitespaces. 
@@ -263,7 +261,7 @@ If you specify the `FIELDS TERMINATED BY` clause, then you can't specify the `PO Using (`start`:`end`) or `column_name` defines the portion of the record in `data_file` to compare with the value specified by `val` to evaluate as either true or false. - All characters used in the `field_condition` text (particularly in the `val` string) must be valid in the database encoding. For performing data conversion, EDB\*Loader first converts the characters in `val` string to the database encoding and then to the data file encoding. + All characters used in the `field_condition` text (particularly in the `val` string) must be valid in the database encoding. For performing data conversion, EDB\*Loader first converts the characters in the `val` string to the database encoding and then to the data file encoding. In the `WHEN field_condition [ AND field_condition ]` clause, if all such conditions evaluate to `TRUE` for a given record, then EDB\*Loader attempts to insert that record into `target_table`. If the insert operation fails, the record is written to `bad_file`. @@ -332,7 +330,7 @@ ZONED EXTERNAL [()] | ZONED [([,])] !!! Note Specifying a field type is optional for descriptive purposes and has no effect on whether EDB\*Loader successfully inserts the data in the field into the table column. Successful loading depends on the compatibility of the column data type and the field value. For example, a column with data type `NUMBER(7,2)` successfully accepts a field containing `2600`. However, if the field contains a value such as `26XX`, the insertion fails, and the record is written to `bad_file`. -`ZONED` data is not human readable. `ZONED` data is stored in an internal format where each digit is encoded in a separate nibble/nybble/4-bit field. In each `ZONED` value, the last byte contains a single digit in the high-order 4 bits and the sign in the low-order 4 bits. +`ZONED` data isn't human readable. `ZONED` data is stored in an internal format where each digit is encoded in a separate nibble/nybble/4-bit field. In each `ZONED` value, the last byte contains a single digit in the high-order 4 bits and the sign in the low-order 4 bits. `length` @@ -370,7 +368,7 @@ ZONED EXTERNAL [()] | ZONED [([,])] - `second field` specifies the datatype. - `third field` specifies the datemask. -If you want to provide an SQL expression, then a workaround is to specify the datemask and SQL expression using the `TO_CHAR` function as: +If you want to provide a SQL expression, then a workaround is to specify the datemask and SQL expression using the `TO_CHAR` function as: ```sql time_stamp timestamp "yyyymmddhh24miss" "to_char(to_timestamp(:time_stamp, 'yyyymmddhh24miss'), 'yyyymmddhh24miss')" @@ -386,7 +384,7 @@ If you want to provide an SQL expression, then a workaround is to specify the da The `PRESERVE BLANKS` option works only with the `OPTIONALLY ENCLOSED BY` clause and retains leading and trailing whitespaces for both delimited and predetermined size fields. - In case of `NO PRESERVE BLANKS`, if the fields are delimited, then only leading whitespaces are omitted. If any trailing whitespaces are present, they are left intact. In the case of predetermined-sized fields with `NO PRESERVE BLANKS`, the trailing whitespaces are omitted, and any leading whitespaces are left intact. + In case of `NO PRESERVE BLANKS`, if the fields are delimited, then only leading whitespaces are omitted. If any trailing whitespaces are present, they're left intact. 
In the case of predetermined-sized fields with `NO PRESERVE BLANKS`, the trailing whitespaces are omitted, and any leading whitespaces are left intact. !!! Note If you don't provide `PRESERVE BLANKS` or `NO PRESERVE BLANKS`, then the behavior defaults to `NO PRESERVE BLANKS`. This option doesn't work for ideographic whitespaces. From 2e03ad31614e7a2207fd594adb9c99b70efe5f81 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Mon, 17 Jul 2023 13:03:42 -0400 Subject: [PATCH 16/18] moved Data Movement category --- src/pages/index.js | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 0b92ca18c0d..2ff25b345f7 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -337,6 +337,13 @@ const Page = () => ( Veritas NetBackup for PostgreSQL + + Data Movement + + + Precisely Connect CDC + + Developer Tools @@ -372,13 +379,6 @@ const Page = () => ( Thales CipherTrust Transparent Encryption - - Data Movement - - - Precisely Connect CDC - - Other From b3a351d5489fb1ddc7ea9857b49e99daefd60879 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 17 Jul 2023 13:20:17 -0400 Subject: [PATCH 17/18] edits to upgrading topic --- .../upgrading_pem_installation_linux_rpm.mdx | 26 ++++++++----------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx index b4ec715e968..a03ec84b4eb 100644 --- a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx +++ b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx @@ -13,17 +13,16 @@ To upgrade PEM component software on Linux hosts, install a newer version of the 1. Invoke the PEM agent package installer on each monitored node except the PEM server host. 2. Invoke the PEM server package installer. It upgrades both the PEM server and the PEM agent that resides on the PEM server host. -During an installation, the component installation automatically detects an existing installation and performs an upgrade. After upgrading the PEM agent and server, you can upgrade SQL Profiler if required. That step is platform specific. +During an installation, the component installation detects an existing installation and performs an upgrade. After upgrading the PEM agent and server, you can upgrade SQL Profiler if required. That step is platform specific. !!! Note - If you already configured or are planning to configure any shell/batch script run by a Linux agent that's upgraded from any earlier version to version 7.11 or later, you must speciy the user for the `batch_script_user` parameter in the `agent.cfg` file. We strongly recommended that you use a non-root user to run the scripts. Using the root user can result in compromising the data security and operating system security. However, if you want to restore the pemagent to its original settings using a root user to run the scripts, then you must set the `batch_script_user` parameter to `root`. + If you already configured or are planning to configure any shell/batch script run by a Linux agent that's upgraded from any earlier version to version 7.11 or later, you must specify the user for the `batch_script_user` parameter in the `agent.cfg` file. 
We strongly recommended that you use a non-root user to run the scripts. Using the root user can result in compromising the data security and operating system security. However, if you want to restore the pemagent to its original settings using a root user to run the scripts, then you must set the `batch_script_user` parameter to `root`. ## Upgrading a PEM server installation The commands to upgrade the PEM server are platform specific. -If you want to upgrade a PEM server that is installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information about creating a PEM repository on an isolated network, see [Creating an EDB repository -on an isolated network](/pem/latest/installing/creating_pem_repository_in_isolated_network/). +If you want to upgrade a PEM server that's installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information about, see [Creating an EDB repository on an isolated network](/pem/latest/installing/creating_pem_repository_in_isolated_network/). ### On a CentOS, Rocky Linux, AlmaLinux, or RHEL host @@ -40,9 +39,9 @@ dnf upgrade edb-pem ``` !!! Note - If you're doing a fresh installation of the PEM server on CentOS or RHEL 7.x host, the installer installs the `edb-python3-mod_wsgi` package along with the installation. The package is a requirement of the operating system. If you are upgrading the PEM server on CentOS or RHEL 7.x host, the`the edb-python3-mod_wsgi` packages replaces the `mod_wsgi package` package to meet the requirements of the operating system. + If you're doing a fresh installation of the PEM server on CentOS or RHEL 7.x host, the installer installs the `edb-python3-mod_wsgi` package along with the installation. The package is a requirement of the operating system. If you're upgrading the PEM server on a CentOS or RHEL 7.x host, the`the edb-python3-mod_wsgi` packages replaces the `mod_wsgi package` package to meet the requirements of the operating system. -After upgrading the PEM server using yum or the `dnf` command, you must configure the PEM server. For detailed information, see [Configuring the PEM server](#configuring-the-pem-server). +After upgrading the PEM server using yum or the `dnf` command, you must configure the PEM server. For details, see [Configuring the PEM server](#configuring-the-pem-server). ### On a Debian or Ubuntu host @@ -52,7 +51,7 @@ You can use the `apt-get` package manager to upgrade the installed version of th apt-get upgrade edb-pem ``` -After upgrading the PEM server with `apt-get`, you need to configure the PEM server. For detailed information, see [Configuring the PEM server](#configuring-the-pem-server). +After upgrading the PEM server with `apt-get`, you need to configure the PEM server. For details, see [Configuring the PEM server](#configuring-the-pem-server). ### On a SLES host @@ -62,10 +61,10 @@ You can use the zypper package manager to upgrade the installed version of the P zypper update edb-pem ``` -After upgrading the PEM server using zypper, you need to configure the PEM server. For detailed information, see [Configuring the PEM server](#configuring-the-pem-server). +After upgrading the PEM server using zypper, you need to configure the PEM server. For details, see [Configuring the PEM server](#configuring-the-pem-server). !!! 
 !!! Note
-    If you upgrade the PEM backend database server and the PEM server, update the `PG_INSTALL_PATH` and `DB_UNIT_FILE` parameters pointing to the new version in the `/usr/edb/pem/share/.install-config` file before you run the configure script.
+    If you upgrade the PEM backend database server and the PEM server, update the `PG_INSTALL_PATH` and `DB_UNIT_FILE` parameters to point to the new version in the `/usr/edb/pem/share/.install-config` file before you run the configure script.

 ## Configuring the PEM server

@@ -75,9 +74,7 @@ After upgrading the PEM server, you can use the following command to configure t
 /usr/edb/pem/bin/configure-pem-server.sh
 ```

-The configure script uses the values from the old PEM server configuration file while running the script.
-
-For detailed information, see [Configuring the PEM server on Linux platforms](/pem/latest/installing/configuring_the_pem_server_on_linux/).
+The configure script uses the values from the old PEM server configuration file while running the script. For details, see [Configuring the PEM server on Linux platforms](/pem/latest/installing/configuring_the_pem_server_on_linux/).

 !!! Note
     - The configure script requires a superuser password only after the upgrade process.

@@ -88,7 +85,7 @@ For detailed information, see [Configuring the PEM server on Linux platforms](/p
      SELECT agent_id FROM pem.agent_config WHERE param='alert_threads' AND value > 0;
      ```

-     Stop the running agents and re-run the configure script.
+     Stop the running agents and rerun the configure script.

      If the problem persists, then run the query to terminate the stuck alert processes:

      SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE query='SELECT pem.process_one_alert()';
      ```

-     Then re-run the configure script.
+     Then rerun the configure script.

 ## Upgrading a PEM agent installation

@@ -131,4 +128,3 @@ For SLES, use the following command:
 ```shell
 zypper update edb-pem-agent
 ```
-

From 51230d3077da5f298b8842ce2c54241689321c25 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Mon, 17 Jul 2023 13:21:24 -0400
Subject: [PATCH 18/18] Update upgrading_pem_installation_linux_rpm.mdx

---
 .../upgrading_pem_installation_linux_rpm.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx
index a03ec84b4eb..eac140fd123 100644
--- a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx
+++ b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx
@@ -22,7 +22,7 @@ During an installation, the component installation detects an existing installat

 The commands to upgrade the PEM server are platform specific.

-If you want to upgrade a PEM server that's installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information about, see [Creating an EDB repository on an isolated network](/pem/latest/installing/creating_pem_repository_in_isolated_network/).
+If you want to upgrade a PEM server that's installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information, see [Creating an EDB repository on an isolated network](/pem/latest/installing/creating_pem_repository_in_isolated_network/).

 ### On a CentOS, Rocky Linux, AlmaLinux, or RHEL host
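Pulling the pieces of this topic together, the end-to-end flow on a RHEL-family host is roughly the sketch below — agents on the monitored nodes first, then the server package on the PEM server host, then the configure script. This is a condensed sketch; see the individual sections for the full steps and notes.

```shell
# On each monitored node (every host except the PEM server host):
sudo dnf upgrade -y edb-pem-agent

# On the PEM server host (this also upgrades the agent that runs there):
sudo dnf upgrade -y edb-pem
sudo /usr/edb/pem/bin/configure-pem-server.sh
```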