From 9a94c0f3de0c03ece0d7e52e9e5c0426aef46071 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 11 Jul 2023 15:20:00 -0400 Subject: [PATCH 001/370] Sample edits for HPE --- .../HPE/04-ConfiguringHPEGreenlake.mdx | 28 ++++++++----------- .../partner_docs/HPE/07-SupportandLogging.mdx | 28 ++++++++++++------- 2 files changed, 30 insertions(+), 26 deletions(-) diff --git a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx index dbd9fdb6c88..ed1553b5ce4 100644 --- a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx +++ b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx @@ -1,35 +1,31 @@ --- title: 'Configuration' -description: 'Walkthrough on configuring the integration' +description: 'Walkthrough of configuring the integration' --- Implementing EDB software on HPE requires the following components: -!!! Note - The EDB Postgres Advanced Server, EDB Postgres Extended Server and PostgreSQL Server products will be referred to as Postgres distribution. The specific distribution type will be dependent upon customer need or preference. - Postgres distribution - HPE system components configured per your requirements -Sample deployment: - -![HPE Sample Deployment](Images/SampleDeployment.png) - -## Prerequisites - -- HPE servers set up per your requirements - +!!! Note + We refer to EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server as *Postgres distribution*. The specific distribution type depends on your needs and preferences. -## Login to Server and Deploy Postgres Distribution +The figure shows a sample deployment. +![HPE sample deployment](Images/SampleDeployment.png) -1. Login to your server per your chosen method, for example if on a Windows system accessing a RHEL Server, you would want to use a utility like PuTTy to SSH into your server to access it. +## Prerequisites -2. Login as the `Root` user via credentials you established with HPE during your server setup. +Set up HPE servers per your requirements. -3. Install your preferred Postgres distribution. For example, for EDB Postgres Advanced Server refer to the [EDB Postgres Advanced Server documentation](https://www.enterprisedb.com/docs/epas/latest/) or for EDB Postgres Extended Server refer to the [EDB Postgres Extended Server documentation](https://www.enterprisedb.com/docs/pge/latest/). +## Log in to server and deploy Postgres distribution -4. Install the other EDB tools, such as [Failover Manager (EFM)](https://www.enterprisedb.com/docs/efm/latest/), [Postgres Enterprise Manager (PEM)](https://www.enterprisedb.com/docs/pem/latest/), or [Barman](https://www.enterprisedb.com/docs/supported-open-source/barman/), as needed for your configuration in the appropriate servers. Refer to the [EDB documentation](https://www.enterprisedb.com/docs) for any other software needs. +1. Log in to your server using your chosen method. For example, on a Windows system accessing a RHEL server, use a utility like PuTTy to SSH into your server. +2. Log in as the root user using credentials you established with HPE during your server setup. +3. Install your preferred Postgres distribution. For EDB Postgres Advanced Server, see the [EDB Postgres Advanced Server documentation](https://www.enterprisedb.com/docs/epas/latest/). For EDB Postgres Extended Server, see the [EDB Postgres Extended Server documentation](https://www.enterprisedb.com/docs/pge/latest/). +4. 
Install the other EDB tools, such as [Failover Manager (EFM)](https://www.enterprisedb.com/docs/efm/latest/), [Postgres Enterprise Manager (PEM)](https://www.enterprisedb.com/docs/pem/latest/), or [Barman](https://www.enterprisedb.com/docs/supported-open-source/barman/), in the appropriate servers as needed for your configuration. See the [EDB documentation](https://www.enterprisedb.com/docs) for any other software needs. diff --git a/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx index 309c9340288..92dd10bf4b4 100644 --- a/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx +++ b/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx @@ -1,25 +1,33 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and HPE. A proper support contract is required to be in place at both EDB and HPE. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. +Both EDB and HPE provide technical support for the use of these products. To get support, you must have a support contract in place at both EDB and HPE. You can open a support ticket with either vendor to start the process. + +If the support ticket shows that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs** +The log location depends on the product. + +### EDB Postgres Advanced Server logs + +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Alternatively, you can navigate to the `postgresql.conf` file, which you can use to customize logging options or enable `edb_audit` logs. + +An example of the full path to view EDB Postgres Advanced Server logs is `/var/lib/edb/as15/data/log`. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs: `/var/lib/edb/as15/data/log`. +### EDB Postgres Extended Server logs -**EDB Postgres Extended Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`. Alternatively, you can navigate to the `postgresql.conf` file, which you can use to customize logging options. -Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance and from here you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs: `/var/lib/edb-pge/15/data/log`. +An example of the full path to view EDB Postgres Extended logs is `/var/lib/edb-pge/15/data/log`. 
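As a cross-check, a connected session can also report where the server is writing its logs. This is a minimal sketch, assuming you can reach the instance with a SQL client such as psql; the paths above are only examples, and yours depend on the installation:

```sql
-- Ask the running server for its data and log locations.
SHOW data_directory;
SHOW log_directory;

-- Reports the log file currently in use (requires the logging collector).
SELECT pg_current_logfile();
```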
-**PostgreSQL Server Logs**
+### PostgreSQL Server logs
 
-The default log directories for PostgreSQL logs vary depending on the operating system:
+The default log directories for PostgreSQL logs depend on the operating system:
 
 - Debian-based system: `/var/log/postgresql/postgresql-x.x.main.log. X.x.`
 
@@ -27,6 +35,6 @@ The default log directories for PostgreSQL logs vary depending on the operating
 
 - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log`
 
-**HPE Logs**
+### HPE logs
 
-For HPE logging and support, please contact the HPE Support team to assist you.
\ No newline at end of file
+For HPE logging and support, contact the HPE Support team.
\ No newline at end of file

From 130adbce9c615344b7818a00189344336791c532 Mon Sep 17 00:00:00 2001
From: Chris Estes <106166814+ccestes@users.noreply.github.com>
Date: Thu, 27 Jul 2023 09:14:15 -0400
Subject: [PATCH 002/370] update

---
 product_docs/docs/edb_plus/41/04_using_edb_plus.mdx | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/product_docs/docs/edb_plus/41/04_using_edb_plus.mdx b/product_docs/docs/edb_plus/41/04_using_edb_plus.mdx
index b166fce038c..5d8713e8707 100644
--- a/product_docs/docs/edb_plus/41/04_using_edb_plus.mdx
+++ b/product_docs/docs/edb_plus/41/04_using_edb_plus.mdx
@@ -32,7 +32,7 @@ edbplus [ -S[ILENT ] ] [ <login> | /NOLOG ] [ @<scriptfile>[.<ext> ] ]
 `connectstring` is the database connection string with the following format:
 
 ```text
-<host>[:<port>][/<dbname>][?ssl={true | false}]
+<host>[:<port>],<host2>[:<port2>],<host3>[:<port3>],..[/<dbname>][?ssl={true | false}][&targetServerType={primary}]
 ```
 
 `host` is the hostname or IP address on which the database server resides. If you don't specify `@connectstring`, `@variable`, or `/NOLOG`, the default host is assumed to be the localhost.
@@ -114,6 +114,12 @@ define edb="localhost:5445/edb"
 define hr_5445="localhost:5445/hr"
 ```
 
+Multi-host database connection strings can also be defined in the `login.sql` file with the `?targetServerType=primary` parameter included in the connection string. The following shows an example of how a multi-host connection string can be defined in `login.sql`:
+
+```ini
+define edb="192.168.2.24:5444,192.168.2.25:5445,192.168.2.26:5446/edb"
+```
+
 The following example executes a script file, `dept_query.sql`, after connecting to database `edb` on server localhost at port 5444.
 
 ```sql

From 14d3d1e0739a3b1aa77b3217f01fe89d4733f639 Mon Sep 17 00:00:00 2001
From: Bobby Bissett
Date: Thu, 27 Jul 2023 11:37:19 -0400
Subject: [PATCH 003/370] Adding warning message about openjdk 11 issue on
 redhat

Pushing this first and will correct formatting after I see the generated
pages.
---
 product_docs/docs/efm/4/13_troubleshooting.mdx       | 6 +++++-
 product_docs/docs/efm/4/installing/prerequisites.mdx | 7 ++++++-
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/product_docs/docs/efm/4/13_troubleshooting.mdx b/product_docs/docs/efm/4/13_troubleshooting.mdx
index 62749d37f2e..96c3d93de75 100644
--- a/product_docs/docs/efm/4/13_troubleshooting.mdx
+++ b/product_docs/docs/efm/4/13_troubleshooting.mdx
@@ -1,6 +1,6 @@
 ---
 title: "Troubleshooting"
-redirects: 
+redirects:
 - ../efm_user/13_troubleshooting
 legacyRedirectsGenerated:
   # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -47,3 +47,7 @@ openjdk version "1.8.0_191"
 OpenJDK Runtime Environment (build 1.8.0_191-b12)
 OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
 ```
+!!! Note:
+    There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives.
When starting Failover Manager, you may see an error like the following: + `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` + If so, the workaround is to manually install the missing package, e.g. `dnf install tzdata-java` diff --git a/product_docs/docs/efm/4/installing/prerequisites.mdx b/product_docs/docs/efm/4/installing/prerequisites.mdx index 24e2b64e048..7a55ea3e34a 100644 --- a/product_docs/docs/efm/4/installing/prerequisites.mdx +++ b/product_docs/docs/efm/4/installing/prerequisites.mdx @@ -1,6 +1,6 @@ --- title: "Prerequisites" -redirects: +redirects: - ../efm_user/02_failover_manager_overview/01_prerequisites - /efm/latest/01_prerequisites/ legacyRedirectsGenerated: @@ -17,6 +17,11 @@ Before configuring a Failover Manager cluster, you must satisfy the prerequisite Before using Failover Manager, you must first install Java (version 1.8 or later). Failover Manager is tested with OpenJDK, and we strongly recommend installing that version of Java. [Installation instructions for Java](https://openjdk.java.net/install/) are platform specific. +!!! Note: + There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following: + `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` + If so, the workaround is to manually install the missing package, e.g. `dnf install tzdata-java` + ## Provide an SMTP server You can receive notifications from Failover Manager as specified by a user-defined notification script, by email, or both. From e15922d3ed9e017567c407a541206797dbd3edaa Mon Sep 17 00:00:00 2001 From: Bobby Bissett Date: Thu, 27 Jul 2023 13:16:01 -0400 Subject: [PATCH 004/370] Trying to add newlines. --- product_docs/docs/efm/4/13_troubleshooting.mdx | 2 ++ product_docs/docs/efm/4/installing/prerequisites.mdx | 2 ++ 2 files changed, 4 insertions(+) diff --git a/product_docs/docs/efm/4/13_troubleshooting.mdx b/product_docs/docs/efm/4/13_troubleshooting.mdx index 96c3d93de75..44da9ff6493 100644 --- a/product_docs/docs/efm/4/13_troubleshooting.mdx +++ b/product_docs/docs/efm/4/13_troubleshooting.mdx @@ -49,5 +49,7 @@ OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode) ``` !!! Note: There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following: + `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` + If so, the workaround is to manually install the missing package, e.g. `dnf install tzdata-java` diff --git a/product_docs/docs/efm/4/installing/prerequisites.mdx b/product_docs/docs/efm/4/installing/prerequisites.mdx index 7a55ea3e34a..83a7845f59c 100644 --- a/product_docs/docs/efm/4/installing/prerequisites.mdx +++ b/product_docs/docs/efm/4/installing/prerequisites.mdx @@ -19,7 +19,9 @@ Before using Failover Manager, you must first install Java (version 1.8 or later !!! Note: There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following: + `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` + If so, the workaround is to manually install the missing package, e.g. 
`dnf install tzdata-java` ## Provide an SMTP server From 40ed6bbd43906002918af9e86823368846a7e588 Mon Sep 17 00:00:00 2001 From: Bobby Bissett Date: Thu, 27 Jul 2023 13:54:07 -0400 Subject: [PATCH 005/370] Somehow "Note" was lost. Trying it like other examples in the doc but can't find one with multiple lines yet. --- product_docs/docs/efm/4/13_troubleshooting.mdx | 2 +- product_docs/docs/efm/4/installing/prerequisites.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/efm/4/13_troubleshooting.mdx b/product_docs/docs/efm/4/13_troubleshooting.mdx index 44da9ff6493..ff558cdb8bc 100644 --- a/product_docs/docs/efm/4/13_troubleshooting.mdx +++ b/product_docs/docs/efm/4/13_troubleshooting.mdx @@ -47,7 +47,7 @@ openjdk version "1.8.0_191" OpenJDK Runtime Environment (build 1.8.0_191-b12) OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode) ``` -!!! Note: +!!! Note There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you may see an error like the following: `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` diff --git a/product_docs/docs/efm/4/installing/prerequisites.mdx b/product_docs/docs/efm/4/installing/prerequisites.mdx index 83a7845f59c..7c221116dc5 100644 --- a/product_docs/docs/efm/4/installing/prerequisites.mdx +++ b/product_docs/docs/efm/4/installing/prerequisites.mdx @@ -17,7 +17,7 @@ Before configuring a Failover Manager cluster, you must satisfy the prerequisite Before using Failover Manager, you must first install Java (version 1.8 or later). Failover Manager is tested with OpenJDK, and we strongly recommend installing that version of Java. [Installation instructions for Java](https://openjdk.java.net/install/) are platform specific. -!!! Note: +!!! Note There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. 
When starting Failover Manager, you may see an error like the following: `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)` From c5a380833724b23b4aff960beb1a7a374943141c Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 1 Aug 2023 11:47:42 -0400 Subject: [PATCH 006/370] Finished editing HPE content --- .../partner_docs/HPE/02-PartnerInformation.mdx | 10 +++++----- advocacy_docs/partner_docs/HPE/03-SolutionSummary.mdx | 8 ++++---- .../partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx | 11 +++++------ .../partner_docs/HPE/05-UsingHPEGreenlake.mdx | 4 ++-- .../partner_docs/HPE/06-CertificationEnvironment.mdx | 8 ++++---- .../partner_docs/HPE/07-SupportandLogging.mdx | 10 ++++------ advocacy_docs/partner_docs/HPE/index.mdx | 9 ++++----- 7 files changed, 28 insertions(+), 32 deletions(-) diff --git a/advocacy_docs/partner_docs/HPE/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/HPE/02-PartnerInformation.mdx index 3d0da9ad59d..8ffbb5b7030 100644 --- a/advocacy_docs/partner_docs/HPE/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/HPE/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | HPE | -| **Web Site** | https://www.hpe.com/us/en/greenlake.html | -| **Partner Product** | HPE Servers | -| **Product Description** | Whether on-prem or in the cloud, HPE provides customers with simple, secure systems to deploy their databases. HPE allows you to deploy your EDB Postgres Advanced Server, EDB Postgres Extended Server, PostgreSQL and other EDB software in a fast and secure environment. | +| **Partner name** | HPE | +| **Website** | https://www.hpe.com/us/en/greenlake.html | +| **Partner product** | HPE servers | +| **Product description** | Whether on premises or in the cloud, HPE provides you with simple, secure systems to deploy your databases. HPE allows you to deploy your EDB Postgres Advanced Server, EDB Postgres Extended Server, PostgreSQL, and other EDB software in a fast and secure environment. | diff --git a/advocacy_docs/partner_docs/HPE/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/HPE/03-SolutionSummary.mdx index 3e2919819ca..bcb00836edf 100644 --- a/advocacy_docs/partner_docs/HPE/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/HPE/03-SolutionSummary.mdx @@ -1,12 +1,12 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -EDB Postgres Advanced Server, EDB Postgres Extended Server, PostgreSQL, Failover Manager, Postgres Enterprise Manager, and Barman can each be deployed on HPE hardware that is customizable per customer needs. Furthermore, using the HPE GreenLake Database model with EDB Postgres allows for a simpler end-to-end solution for the entire lifecycle of the database environment. +EDB Postgres Advanced Server, EDB Postgres Extended Server, PostgreSQL, Failover Manager, Postgres Enterprise Manager, and Barman can each be deployed on HPE hardware that's customizable per your needs. Furthermore, using the HPE GreenLake Database model with EDB Postgres allows for a simpler end-to-end solution for the entire lifecycle of the database environment. 
-HPE GreenLake Database works to remove some of the complexities of getting a database up and running which then allows you to deploy all of your EDB products quickly and securely. HPE does this by taking on some of the in depth pieces like designing, implementing and operating a database so customers do not have to put as much focus into these areas. HPE provides customers with complete, scalable solutions for all of their server needs in order to run their databases efficiently. +HPE GreenLake Database works to remove some of the complexities of getting a database up and running, which then allows you to deploy all of your EDB products quickly and securely. HPE does this by taking on some of the in-depth pieces, like designing, implementing, and operating a database so you don't have to put as much focus into these areas. HPE provides you with complete, scalable solutions for all of your server needs to run your databases efficiently. -The following diagram shows what EDB products were tested on HPE Servers: +The following diagram shows the EDB products that were tested on HPE servers. ![EDB Products on HPE Servers](Images/HPESolutionSummaryImage.png) diff --git a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx index ed1553b5ce4..b49fd7a5abb 100644 --- a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx +++ b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx @@ -3,14 +3,14 @@ title: 'Configuration' description: 'Walkthrough of configuring the integration' --- +!!! Note + We refer to EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server as the Postgres distribution. The specific distribution type depends on your needs and preferences. + Implementing EDB software on HPE requires the following components: - Postgres distribution - HPE system components configured per your requirements -!!! Note - We refer to EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server as *Postgres distribution*. The specific distribution type depends on your needs and preferences. - The figure shows a sample deployment. ![HPE sample deployment](Images/SampleDeployment.png) @@ -21,11 +21,10 @@ Set up HPE servers per your requirements. ## Log in to server and deploy Postgres distribution - 1. Log in to your server using your chosen method. For example, on a Windows system accessing a RHEL server, use a utility like PuTTy to SSH into your server. 2. Log in as the root user using credentials you established with HPE during your server setup. -3. Install your preferred Postgres distribution. For EDB Postgres Advanced Server, see the [EDB Postgres Advanced Server documentation](https://www.enterprisedb.com/docs/epas/latest/). For EDB Postgres Extended Server, see the [EDB Postgres Extended Server documentation](https://www.enterprisedb.com/docs/pge/latest/). +3. Install your preferred Postgres distribution. For EDB Postgres Advanced Server, see the [EDB Postgres Advanced Server documentation](/epas/latest/). For EDB Postgres Extended Server, see the [EDB Postgres Extended Server documentation](/pge/latest/). -4. Install the other EDB tools, such as [Failover Manager (EFM)](https://www.enterprisedb.com/docs/efm/latest/), [Postgres Enterprise Manager (PEM)](https://www.enterprisedb.com/docs/pem/latest/), or [Barman](https://www.enterprisedb.com/docs/supported-open-source/barman/), in the appropriate servers as needed for your configuration. 
See the [EDB documentation](https://www.enterprisedb.com/docs) for any other software needs. +4. Install the other EDB tools, such as [Failover Manager (EFM)](/efm/latest/), [Postgres Enterprise Manager (PEM)](/pem/latest/), or [Barman](/supported-open-source/barman/), in the appropriate servers as needed for your configuration. See the [EDB documentation](https://www.enterprisedb.com/docs) for any other software needs. diff --git a/advocacy_docs/partner_docs/HPE/05-UsingHPEGreenlake.mdx b/advocacy_docs/partner_docs/HPE/05-UsingHPEGreenlake.mdx index d20c653d2b0..9c0e4bb4642 100644 --- a/advocacy_docs/partner_docs/HPE/05-UsingHPEGreenlake.mdx +++ b/advocacy_docs/partner_docs/HPE/05-UsingHPEGreenlake.mdx @@ -5,9 +5,9 @@ description: 'Walkthrough of example usage scenarios' HPE systems are easy to deploy, turn on and off, and install your Postgres distribution products on, while ensuring speed and security. -To use HPE System Components: +To use HPE system components: -1. Access your server, either via GUI or SSH depending on your system setup. +1. Access your server, either using the UI or SSH, depending on your system setup. 1. Install and deploy your Postgres distribution products as needed: diff --git a/advocacy_docs/partner_docs/HPE/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/HPE/06-CertificationEnvironment.mdx index 3371353886c..bd8a037e9b8 100644 --- a/advocacy_docs/partner_docs/HPE/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/HPE/06-CertificationEnvironment.mdx @@ -1,15 +1,15 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment' --- -## HPE DL380 Gen10 Plus Server Test Environment +## HPE DL380 Gen10 Plus Server test environment |   |   | | ----------- | ----------- | -| **Certification Test Date** | May 31, 2023 | +| **Certification test date** | May 31, 2023 | | **EDB Postgres Advanced Server** | 12,13,14,15 | | **EDB Postgres Extended Server** | 12,13,14,15 | | **Postgres Enterprise Manager** | 9.1.1 | | **EDB Failover Manager** | 4.6 | | **Barman** | 3.4.0 | -| **HPE Server** | Proliant DL380 Gen10 Plus | \ No newline at end of file +| **HPE server** | Proliant DL380 Gen10 Plus | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx index 92dd10bf4b4..1197e5fc217 100644 --- a/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx +++ b/advocacy_docs/partner_docs/HPE/07-SupportandLogging.mdx @@ -5,23 +5,21 @@ description: 'Details of the support process and logging information' ## Support -Both EDB and HPE provide technical support for the use of these products. To get support, you must have a support contract in place at both EDB and HPE. You can open a support ticket with either vendor to start the process. - -If the support ticket shows that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. +Both EDB and HPE provide technical support for the use of these products. To get support, you must have a support contract in place at both EDB and HPE. You can open a support ticket with either vendor to start the process. If the support ticket shows that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. 
This approach allows both companies to work together to help you as needed. ## Logging -The log location depends on the product. +The following log files are available. ### EDB Postgres Advanced Server logs -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Alternatively, you can navigate to the `postgresql.conf` file, which you can use to customize logging options or enable `edb_audit` logs. +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file, which you can use to customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs is `/var/lib/edb/as15/data/log`. ### EDB Postgres Extended Server logs -Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`. Alternatively, you can navigate to the `postgresql.conf` file, which you can use to customize logging options. +Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`. Or, you can navigate to the `postgresql.conf` file, which you can use to customize logging options. An example of the full path to view EDB Postgres Extended logs is `/var/lib/edb-pge/15/data/log`. diff --git a/advocacy_docs/partner_docs/HPE/index.mdx b/advocacy_docs/partner_docs/HPE/index.mdx index 0fd33dbf849..5bb17a486d9 100644 --- a/advocacy_docs/partner_docs/HPE/index.mdx +++ b/advocacy_docs/partner_docs/HPE/index.mdx @@ -1,14 +1,13 @@ --- -title: 'HPE Servers Implementation Guide' +title: 'Implementing HPE Servers' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+![Partner Program Logo](Images/PartnerProgram.jpg.png)
 
 EDB GlobalConnect Technology Partner Implementation Guide
 
 HPE Servers
 
-This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.
+This content is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.
\ No newline at end of file From 3fb9f83633d7563b46b5917c28e22fb8d49d4b21 Mon Sep 17 00:00:00 2001 From: Chris Estes <106166814+ccestes@users.noreply.github.com> Date: Tue, 1 Aug 2023 09:07:59 -0400 Subject: [PATCH 007/370] update to CONNECT section added code back and made some consistency fixes follow up --- product_docs/docs/edb_plus/41/06_command_summary.mdx | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/edb_plus/41/06_command_summary.mdx b/product_docs/docs/edb_plus/41/06_command_summary.mdx index f83c7ae75b6..a43cfdfe2d2 100644 --- a/product_docs/docs/edb_plus/41/06_command_summary.mdx +++ b/product_docs/docs/edb_plus/41/06_command_summary.mdx @@ -315,7 +315,7 @@ C:\Users\Administrator\AppData\Roaming\postgresql\pgpass.conf `variable` is a variable defined in the `login.sql` file that contains a database connection string. The `login.sql` file is in the `edbplus` subdirectory of the EDB Postgres Advanced Server home directory. -In this example, the database connection is changed to database `edb` on the localhost at port 5445 with username smith. +In this example, the database connection is changed to database `edb` on the localhost at port `5445` with username `smith`. ```sql SQL> CONNECT smith/mypassword@localhost:5445/edb @@ -323,7 +323,7 @@ Disconnected from EnterpriseDB Database. Connected to EnterpriseDB 14.0.0 (localhost:5445/edb) AS smith ``` -In this session, the connection is changed to user name enterprisedb. The host defaults to the localhost, the port defaults to 5444 (which isn't the same as the port previously used), and the database defaults to `edb`. +In this session, the connection is changed to user name `enterprisedb`. The host defaults to the `localhost`, the port defaults to `5444` (which isn't the same as the port previously used), and the database defaults to `edb`. ```sql SQL> CONNECT enterprisedb/password @@ -331,6 +331,14 @@ Disconnected from EnterpriseDB Database. Connected to EnterpriseDB 14.0.0 (localhost:5444/edb) AS enterprisedb ``` +The following example illustrates connectivity for a multi-node cluster (one primary node and two secondary nodes) setup. The given multi-host `connectstring` syntax is used to establish a connection with the active primary database server. In this case, using `CONNECT` command, the connection is established with the primary database node on host `192.168.22.24` at port `5444`. + +```sql +SQL> CONNECT enterprisedb/edb@192.168.22.24:5444,192.168.22.25:5445,192.168.22.26:5446/edb?targetServerType=primary +Disconnected from EnterpriseDB Database. +Connected to EnterpriseDB 15.3.0 (192.168.22.24:5444/edb) AS enterprisedb +``` + ## DEFINE The `DEFINE` command creates or replaces the value of a *user variable* (also called a *substitution variable*). 
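As an illustrative sketch tying this to the multi-host support added above, such a variable could hold the connection string and then be reused with `CONNECT`. The variable name, host addresses, and credentials below are placeholders rather than values from the patch:

```sql
SQL> DEFINE edb = "192.168.22.24:5444,192.168.22.25:5445,192.168.22.26:5446/edb?targetServerType=primary"
SQL> CONNECT enterprisedb/password@edb
```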
From 8289f29c5bf6c7b79b6070dde7e05b9350b63471 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 09:34:17 -0400 Subject: [PATCH 008/370] A few fixes per Jennifer's global comments --- advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx | 2 +- advocacy_docs/partner_docs/HPE/index.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx index b49fd7a5abb..1b58ca03f8e 100644 --- a/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx +++ b/advocacy_docs/partner_docs/HPE/04-ConfiguringHPEGreenlake.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuration' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/HPE/index.mdx b/advocacy_docs/partner_docs/HPE/index.mdx index 5bb17a486d9..b6cbd09a76f 100644 --- a/advocacy_docs/partner_docs/HPE/index.mdx +++ b/advocacy_docs/partner_docs/HPE/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing HPE Servers' +title: 'HPE Servers Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 0ce57511a60a42f2cca20c3b7d39dd3f44058aeb Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 10 Aug 2023 14:21:21 -0400 Subject: [PATCH 009/370] edits to PGD release PR4560 with queries --- product_docs/docs/pgd/5/parallelapply.mdx | 43 +++++++----- .../docs/pgd/5/reference/autopartition.mdx | 13 ++-- .../docs/pgd/5/reference/catalogs-visible.mdx | 10 +-- .../pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx | 68 +++++++++---------- .../docs/pgd/5/routing/installing_proxy.mdx | 10 +-- product_docs/docs/pgd/5/routing/proxy.mdx | 6 +- .../docs/pgd/5/upgrades/upgrade_paths.mdx | 2 +- 7 files changed, 78 insertions(+), 74 deletions(-) diff --git a/product_docs/docs/pgd/5/parallelapply.mdx b/product_docs/docs/pgd/5/parallelapply.mdx index d8d8c63f92c..997148802d2 100644 --- a/product_docs/docs/pgd/5/parallelapply.mdx +++ b/product_docs/docs/pgd/5/parallelapply.mdx @@ -6,35 +6,44 @@ navTitle: Parallel Apply ## What is Parallel Apply? !!! Note Improved in PGD 5.2 -This feature has been enhanced in PGD 5.2 and is recommended for more scenarios. +This feature was enhanced in PGD 5.2 and is recommended for more scenarios. !!! Parallel Apply is a feature of PGD that allows a PGD node to use multiple writers per subscription. This behavior generally increases the throughput of a subscription and improves replication performance. When Parallel Apply is operating, the transactional changes from the subscription are written by multiple writers. However, each writer ensures that the final commit of its transaction doesn't violate the commit order as executed on the origin node. If there's a violation, an error is generated and the transaction can be rolled back. -This mechanism has previously meant that if a transaction is pending commit and modifies a row that another transaction needs to change, and that other transaction executed on the origin node before the pending transaction did, then the resulting error could manifest itself as a deadlock if the pending transaction has taken out a lock request. 
+This mechanism previously meant that when the following are all true, the resulting error could manifest as a deadlock: -Additionally, handling the error could increase replication lag, due to a combination of the time taken to detect the deadlock, the time taken for the client to rollback its transaction, the time taken for indirect garbage collection of the already applied changes, and the time taken to redo the work. +- A transaction is pending commit and modifies a row that another transaction needs to change. +- That other transaction executed on the origin node before the pending transaction did. +- The pending transaction took out a lock request. -This is where Parallel Apply’s deadlock mitigation, introduced in PGD 5.2, comes into play. For any transaction, Parallel Apply looks at transactions already scheduled for any row (tuple) that the current transaction wants to write. If it finds one, the row is marked as needing to wait until the other transaction is committed before applying its change to the row. +Additionally, handling the error could increase replication lag, due to a combination of the time taken: -This ensures that rows are written in the correct order. +- To detect the deadlock +- For the client to roll back its transaction +- For indirect garbage collection of the changes that were already applied +- To redo the work + +This is where Parallel Apply’s deadlock mitigation, introduced in PGD 5.2, can help. For any transaction, Parallel Apply looks at transactions already scheduled for any row (tuple) that the current transaction wants to write. If it finds one, the row is marked as needing to wait until the other transaction is committed before applying its change to the row. + +This approach ensures that rows are written in the correct order. ### Configuring Parallel Apply -Two variables which control Parallel Apply in PGD 5, [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). +Two variables control Parallel Apply in PGD 5: [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) (defaults to 8) and [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) (defaults to 2). ```plain bdr.max_writers_per_subscription = 8 bdr.writers_per_subscription = 2 ``` -This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to 8 writers for a subscription. +This configuration gives each subscription two writers. However, in some circumstances, the system might allocate up to eight writers for a subscription. You can only change [`bdr.max_writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrmax_writers_per_subscription) with a server restart. -You can change [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription, without a restart by: +You can change [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) for a specific subscription without a restart by: 1. Halting the subscription using [`bdr.alter_subscription_disable`](/pgd/latest/reference/nodes-management-interfaces#bdralter_subscription_disable). 1. Setting the new value. 
 SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
 ```
 
 ### When to use Parallel Apply
 
-Parallel Apply is always on by default and, for most operations, we recommend that it's left on.
+Parallel Apply is always on by default and, for most operations, we recommend leaving it on.
 
-From PGD 5.2, Parallel Apply works with CAMO. It is not compatible with Group Commit or Eager Replication and should be disabled if Group Commit or Eager Replication are in use.
+From PGD 5.2, Parallel Apply works with CAMO. It isn't compatible with Group Commit or Eager Replication, so disable it if Group Commit or Eager Replication are in use.
 
-Up to and including PGD 5.1, Parallel Apply should not be used with Group Commit, CAMO and eager replication. You should disable Parallel Apply in these scenarios. If, using PGD 5.1 or earlier, you are experiencing a large number of deadlocks, you may also want to disable Parallel Apply or consider upgrading.
+Up to and including PGD 5.1, don't use Parallel Apply with Group Commit, CAMO, and Eager Replication. Disable Parallel Apply in these scenarios. If, using PGD 5.1 or earlier, you're experiencing a large number of deadlocks, you might also want to disable Parallel Apply or consider upgrading.
 
 ### Monitoring Parallel Apply
 
-To support Parallel Apply's deadlock mitigation, PGD 5.2 adds columns to [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription). The new columns are `nprovisional_waits`, `ntuple_waits` and `ncommmit_waits`. These
-are metrics which indicate how well Parallel Apply is managing what would have previously been deadlocks. They are not
-indicative of overall system performance.
+To support Parallel Apply's deadlock mitigation, PGD 5.2 adds columns to [`bdr.stat_subscription`](/pgd/latest/reference/catalogs-visible#bdrstat_subscription). The new columns are `nprovisional_waits`, `ntuple_waits`, and `ncommit_waits`. These
+are metrics that indicate how well Parallel Apply is managing what previously would have been deadlocks. They don't indicate overall system performance.
 
-The `nprovisional_waits` value reflects the number of operations on the same tuples being performed by concurrent apply transactions. These are provisional waits which aren't actually waiting yet but could.
+The `nprovisional_waits` value reflects the number of operations on the same tuples being performed by concurrent apply transactions. These are provisional waits that aren't actually waiting yet but could.
 
-If a tuple's write needs to wait until it can be safely applied, it is counted in `ntuple_waits`. Fully applied transactions which waited before being committed are counted in `ncommit_waits`.
+If a tuple's write needs to wait until it can be safely applied, it's counted in `ntuple_waits`. Fully applied transactions that waited before being committed are counted in `ncommit_waits`.
 
 ### Disabling Parallel Apply
 
-To disable Parallel Apply set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to 1.
-
+To disable Parallel Apply, set [`bdr.writers_per_subscription`](/pgd/latest/reference/pgd-settings#bdrwriters_per_subscription) to `1`.
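As an illustrative sketch, a cluster-wide change to this setting can follow the same halt, set, and re-enable steps described above. The subscription name here is the same placeholder used in the earlier example:

```sql
-- Halt the subscription before changing the writer count.
SELECT bdr.alter_subscription_disable ('bdr_bdrdb_bdrgroup_node2_node1');

-- Record the new value and ask the server to re-read its configuration.
ALTER SYSTEM SET bdr.writers_per_subscription = 1;
SELECT pg_reload_conf();

-- Resume the subscription with the new setting in effect.
SELECT bdr.alter_subscription_enable ('bdr_bdrdb_bdrgroup_node2_node1');
```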
diff --git a/product_docs/docs/pgd/5/reference/autopartition.mdx b/product_docs/docs/pgd/5/reference/autopartition.mdx index 909cbe5b641..ab5c1d080b9 100644 --- a/product_docs/docs/pgd/5/reference/autopartition.mdx +++ b/product_docs/docs/pgd/5/reference/autopartition.mdx @@ -5,8 +5,8 @@ indexdepth: 3 rootisheading: false --- -Autopartition allows you to split tables into several partitions. Read more about it in the main -[Scaling](../scaling) section. +Autopartition allows you to split tables into several partitions. For more information, see +[Scaling](../scaling). ### `bdr.autopartition` @@ -33,7 +33,8 @@ bdr.autopartition(relation regclass, - `partition_initial_lowerbound` — If the table has no partition, then the first partition with this lower bound and `partition_increment` apart upper bound is created. -- `partition_autocreate_expression` — Used to detect if it's time to create new partitions. + +- `partition_autocreate_expression` — Detects if it's time to create new partitions. - `minimum_advance_partitions` — The system attempts to always have at least `minimum_advance_partitions` partitions. - `maximum_advance_partitions` — Number of partitions to create in a single @@ -57,7 +58,7 @@ unitsales int bdr.autopartition('measurement', '1 day', data_retention_period := '30 days'); ``` -Create five advance partitions when there are only two more partitions remaining. Each partition can hold 1 billion orders. +Create five advance partitions when only two more partitions are remaining. Each partition can hold 1 billion orders. ```sql bdr.autopartition('Orders', '1000000000', @@ -229,7 +230,3 @@ This function places a DDL lock on the parent table before using `DROP TABLE` on chosen partition table. This function is an internal function used by AutoPartition for partition management. We recommend that you don't use the function directly. - - - - diff --git a/product_docs/docs/pgd/5/reference/catalogs-visible.mdx b/product_docs/docs/pgd/5/reference/catalogs-visible.mdx index abac3dab33b..20fca8df41c 100644 --- a/product_docs/docs/pgd/5/reference/catalogs-visible.mdx +++ b/product_docs/docs/pgd/5/reference/catalogs-visible.mdx @@ -889,13 +889,13 @@ is enabled. | start_lsn | pg_lsn | LSN from which this subscription requested to start replication from the upstream | | retries_at_same_lsn | bigint | Number of attempts the subscription was restarted from the same LSN value | | curr_ncommit | bigint | Number of commits this subscription did after the current connection was established | -| npre_commit_confirmations | bigint | Number of pre-commit confirmations by CAMO partners | -| npre_commit | bigint | Number of pre-commits | +| npre_commit_confirmations | bigint | Number of precommit confirmations by CAMO partners | +| npre_commit | bigint | Number of precommits | | ncommit_prepared | bigint | Number of prepared transaction commits | | nabort_prepared | bigint | Number of prepared transaction aborts | -| nprovisional_waits | bigint | Number of update/delete operations on same tuples by concurrent apply transactions. These are provisional waits. See [Parallel apply](../parallelapply) | -| ntuple_waits | bigint | Number of update/delete operations that waited to be safely applied. See [Parallel apply](../parallelapply) | -| ncommit_waits | bigint | Number of fully applied transactions that had to wait before being committed. See [Parallel apply](../parallelapply) | +| nprovisional_waits | bigint | Number of update/delete operations on same tuples by concurrent apply transactions. 
These are provisional waits. See [Parallel Apply](../parallelapply) | +| ntuple_waits | bigint | Number of update/delete operations that waited to be safely applied. See [Parallel Apply](../parallelapply) | +| ncommit_waits | bigint | Number of fully applied transactions that had to wait before being committed. See [Parallel Apply](../parallelapply) | | stats_reset | timestamp with time zone | Time when these subscription statistics were reset | ### `bdr.subscription` diff --git a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx index f1b8720d038..7ed48a0c20f 100644 --- a/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx +++ b/product_docs/docs/pgd/5/rel_notes/pgd_5.2.0_rel_notes.mdx @@ -7,9 +7,9 @@ EDB Postgres Distributed version 5.2.0 is a minor version of EDB Postgres Distri ## Highlights of EDB Postgres Distributed 5.2 -* Parallel apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope and improving replication performance. -* Parallel apply for native Postgres asynchronous and synchronous replication has been improved for workloads where the same key is being modified concurrently by multiple transactions to maintain commit sequence and avoid deadlocks -* PGD Proxy has added HTTP(S) APIs to allow the health of the proxy to be monitored directly for readiness and liveness. See [Proxy Health Check](../routing/proxy/#proxy-health-check) +* Parallel Apply is now available for PGD’s Commit at Most Once (CAMO) synchronous commit scope and improving replication performance. +* Parallel Apply for native Postgres asynchronous and synchronous replication has been improved for workloads where the same key is being modified concurrently by multiple transactions to maintain commit sequence and avoid deadlocks. +* PGD Proxy has added HTTP(S) APIs to allow the health of the proxy to be monitored directly for readiness and liveness. See [Proxy health check](../routing/proxy/#proxy-health-check). !!! Important Recommended upgrade We recommend that users of PGD 5.1 upgrade to PGD 5.2. @@ -21,34 +21,34 @@ This version is required for EDB Postgres Advanced Server versions 12.15, 13.11, | Component | Version | Type | Description | |-----------|---------|-----------------|--------------| -| BDR | 5.2.0 | Feature | Parallel apply for synchronous commit scopes with CAMO | -| BDR | 5.2.0 | Feature | Allow multiple SYNCHRONOUS_COMMIT clauses in a Commit Scope rule | -| BDR | 5.2.0 | Enhancement | Allow transaction streaming with SYNCHRONOUS_COMMIT and LAG CONTROL Commit Scopes | -| BDR | 5.2.0 | Enhancement | Improve handling of concurrent workloads with parallel apply | -| BDR | 5.2.0 | Enhancement | Modify `bdr.stat_subscription` for new columns | -| BDR | 5.2.0 | Bug fix | Allow a logical join of node if there are foreign key constraints violations. (RT91745) | -| BDR | 5.2.0 | Bug fix | Change to `group_raft_details` view to avoid deadlock possibility | -| BDR | 5.2.0 | Bug fix | Log the extension upgrade | -| BDR | 5.2.0 | Bug fix | Add check for conflicting node names | -| BDR | 5.2.0 | Bug fix | Fix a crash during Raft manual snapshot restore | -| BDR | 5.2.0 | Bug fix | Don't try to establish consensus connections to parting or parted nodes | -| BDR | 5.2.0 | Bug fix | Fix `tcp_user_timeout` GUC to use the correct unit | -| BDR | 5.2.0 | Bug fix | Fix the consensus snapshot compatibility with PGD 3.7. 
(RT93022) | -| BDR | 5.2.0 | Bug fix | Address a crash when BDR is used together with pgaudit | -| BDR | 5.2.0 | Bug fix | Skip parting synchronization to witness node | -| BDR | 5.2.0 | Bug fix | Generate correct keepalive parameters in connection strings | -| BDR | 5.2.0 | Bug fix | Enable various scenarios of switching nodes between groups and their subgroups. For example, transition node from a group to any of the nested sub-groups| -| BDR | 5.2.0 | Bug fix | Reduce the amount of WAL produced by consensus on idle server | -| BDR | 5.2.0 | Bug fix | Fixed deadlock on autopartition catalogs when a concurrent `DROP EXTENSION` is executing | -| BDR | 5.2.0 | Bug fix | Fix sporadic failure when dropping extension after node restart | -| BDR | 5.2.0 | Bug fix | Add workaround for crash due to pgaudit bug 212 | -| BDR | 5.2.0 | Bug fix | Fix deadlock between consensus and global monitoring queries | -| BDR | 5.2.0 | Bug fix | Fix query cancellation propagation across `bdr.run_on_all_nodes` | -| BDR | 5.2.0 | Bug fix | Disallow invoking `bdr.run_on_nodes()`, `bdr.run_on_group()` and `bdr.run_on_all_nodes()` on parted nodes | -| CLI | 5.2.0 | Enhancement | Add new GUCs verification in `verify-settings` command | -| CLI | 5.2.0 | Bug fix | Truncate the long value of GUC in tabular output of `verify-settings` | -| CLI | 5.2.0 | Bug fix | Upgrade database driver library version which fixes `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` | -| Proxy | 5.2.0 | Feature | Add HTTP(S) APIs for Proxy health check | -| Proxy | 5.2.0 | Enhancement | Improve route change events handling mechanism | -| Proxy | 5.2.0 | Enhancement | Add retry mechanism on consensus query error | -| Proxy | 5.2.0 | Bug fix | Upgrade database driver library version which fixes `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` | +| BDR | 5.2.0 | Feature | Added Parallel Apply for synchronous commit scopes with CAMO. | +| BDR | 5.2.0 | Feature | Allow multiple SYNCHRONOUS_COMMIT clauses in a commit scope rule. | +| BDR | 5.2.0 | Enhancement | BDR extension now allows transaction streaming with SYNCHRONOUS_COMMIT and LAG CONTROL commit scopes. | +| BDR | 5.2.0 | Enhancement | Improved handling of concurrent workloads with Parallel Apply. | +| BDR | 5.2.0 | Enhancement | Modified `bdr.stat_subscription` for new columns. | +| BDR | 5.2.0 | Bug fix | Fixed an issue by allowing a logical join of node if there are foreign key constraints violations. (RT91745) | +| BDR | 5.2.0 | Bug fix | Changed `group_raft_details` view to avoid deadlock possibility. | +| BDR | 5.2.0 | Bug fix | Fixed an issue by adding ability to log the extension upgrade. | +| BDR | 5.2.0 | Bug fix | Added check for conflicting node names. | +| BDR | 5.2.0 | Bug fix | Fixed a crash during Raft manual snapshot restore. | +| BDR | 5.2.0 | Bug fix | Fixed an issue whereby BDR extension was attempting to establish consensus connections to parting or parted nodes. | +| BDR | 5.2.0 | Bug fix | Fixed `tcp_user_timeout` GUC to use the correct unit. | +| BDR | 5.2.0 | Bug fix | Fixed the consensus snapshot compatibility with PGD 3.7. (RT93022) | +| BDR | 5.2.0 | Bug fix | Fixed an issue whereby a crash occurred when BDR extension is used with pgaudit. | +| BDR | 5.2.0 | Bug fix | Fixed an issue by skipping parting synchronization to witness node. | +| BDR | 5.2.0 | Bug fix | Fixed an issue by now generating correct keepalive parameters in connection strings. 
| +| BDR | 5.2.0 | Bug fix | Enabled various scenarios of switching nodes between groups and their subgroups, for example, transition node from a group to any of the nested subgroups.| +| BDR | 5.2.0 | Bug fix | Reduced the amount of WAL produced by consensus on idle server. | +| BDR | 5.2.0 | Bug fix | Fixed deadlock on autopartition catalogs when a concurrent `DROP EXTENSION` is executing. | +| BDR | 5.2.0 | Bug fix | Fixed sporadic failure when dropping extension after node restart | +| BDR | 5.2.0 | Bug fix | Added a workaround for crash due to pgaudit bug. | +| BDR | 5.2.0 | Bug fix | Fixed deadlock between consensus and global monitoring queries. | +| BDR | 5.2.0 | Bug fix | Fixed query cancellation propagation across `bdr.run_on_all_nodes`. | +| BDR | 5.2.0 | Bug fix | Fixed an issue by disallowing invoking `bdr.run_on_nodes()`, `bdr.run_on_group()` and `bdr.run_on_all_nodes()` on parted nodes. | +| CLI | 5.2.0 | Enhancement | Added new GUCs verification in `verify-settings` command. | +| CLI | 5.2.0 | Bug fix | Fixed an issue by truncating the long value of GUC in tabular output of `verify-settings`. | +| CLI | 5.2.0 | Bug fix | Fixed `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` by upgrading database driver library version. | +| Proxy | 5.2.0 | Feature | Added HTTP(S) APIs for Proxy health check. | +| Proxy | 5.2.0 | Enhancement | Improved route change events handling mechanism. | +| Proxy | 5.2.0 | Enhancement | Added retry mechanism on consensus query error. | +| Proxy | 5.2.0 | Bug fix | Fixed `connect_timeout` issue when `sslmode=allow` or `sslmode=prefer` by upgrading database driver library version. | diff --git a/product_docs/docs/pgd/5/routing/installing_proxy.mdx b/product_docs/docs/pgd/5/routing/installing_proxy.mdx index c7eb9a0c779..d9828e46c15 100644 --- a/product_docs/docs/pgd/5/routing/installing_proxy.mdx +++ b/product_docs/docs/pgd/5/routing/installing_proxy.mdx @@ -55,7 +55,7 @@ PGD Proxy uses endpoints given in the local config file only at proxy startup. A ##### Configuring health check -PGD Proxy provides [HTTP(S) health check APIs](proxy#proxy-health-check). If the health checks are required, you can enable them by adding the configuration parameters below to the pgd-proxy configuration file. By default it is disabled. +PGD Proxy provides [HTTP(S) health check APIs](proxy#proxy-health-check). If the health checks are required, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, it's disabled. ```yaml cluster: @@ -78,15 +78,15 @@ cluster: timeout: 10s ``` -The API can be enabled by adding the config `cluster.proxy.http.enable: true`. When enabled, an HTTP server will listen on the default port, `8080`, with a ten second `timeout` and no HTTPS support. +You can enable the API by adding the config `cluster.proxy.http.enable: true`. When enabled, an HTTP server listens on the default port, `8080`, with a 10-second `timeout` and no HTTPS support. -To enable HTTPS set the config parameter `cluster.proxy.http.secure: true`. If it is set to `true`, the `cert_file` and `key_file` must also be set. +To enable HTTPS, set the config parameter `cluster.proxy.http.secure: true`. If it's set to `true`, the `cert_file` and `key_file` must also be set. -The `cluster.proxy.endpoint` is an endpoint used by the proxy to connect to the current write leader as part of its checks. When `cluster.proxy.http.enable` is `true`, `cluster.proxy.endpoint` must also be set. 
It could be same as BDR node [routing_dsn](../routing#configuration) where host will be `listen_address` and port will be `listen_port` [proxy options](../routing#configuration). If required, user can add additional connection string parameters in this endpoint, like `sslmode`, `sslrootcert`, `user`, etc. +The `cluster.proxy.endpoint` is an endpoint used by the proxy to connect to the current write leader as part of its checks. When `cluster.proxy.http.enable` is `true`, `cluster.proxy.endpoint` must also be set. It can be the same as BDR node [routing_dsn](../routing#configuration), where host is `listen_address` and port is `listen_port` [proxy options](../routing#configuration). If required, you can add connection string parameters in this endpoint, like `sslmode`, `sslrootcert`, `user`, and so on. #### PGD Proxy user -The database user specified in the endpoint doesn't need to be a superuser. Typically, in the TPA environment, `pgdproxy` is an OS user as well as a database user with the `bdr_superuser` role. +The database user specified in the endpoint doesn't need to be a superuser. Typically, in the TPA environment, pgdproxy is an OS user as well as a database user with the bdr_superuser role. #### PGD Proxy service diff --git a/product_docs/docs/pgd/5/routing/proxy.mdx b/product_docs/docs/pgd/5/routing/proxy.mdx index bb76f0933ec..34326b2088d 100644 --- a/product_docs/docs/pgd/5/routing/proxy.mdx +++ b/product_docs/docs/pgd/5/routing/proxy.mdx @@ -63,7 +63,7 @@ See [Connection management](../routing) for more information on the PGD side of ### Proxy health check -PGD Proxy provides the following HTTP(s) health check API endpoints. The API endpoints respond to `GET` requests. They need to be enabled and configured before use; see [Configurations](installing_proxy#configuring-health-check) for how to do this. +PGD Proxy provides the following HTTP(s) health check API endpoints. The API endpoints respond to `GET` requests. You need to enable and configure them before using them. See [Configurations](installing_proxy#configuring-health-check). ``` @@ -73,12 +73,12 @@ GET /health/is-live #### Readiness -On receiving a valid 'GET' request, the proxy checks if it can successfully route connections to the current write leader. If the check returns successfully, the API responds with a body containing `true` and an HTTP status code `200 (OK)`. Otherwise, it returns a body containing `false` with the HTTP status code `500 (Internal Server Error)`. +On receiving a valid `GET` request, the proxy checks if it can successfully route connections to the current write leader. If the check returns successfully, the API responds with a body containing `true` and an HTTP status code `200 (OK)`. Otherwise, it returns a body containing `false` with the HTTP status code `500 (Internal Server Error)`. #### Liveness -Liveness check either return `true` with HTTP status code `200 (OK)` or an error but never `false`. This is because the HTTP server listening for request is automatically stopped if the Proxy service fails to start or exits. +Liveness checks return either `true` with HTTP status code `200 (OK)` or an error. They never return `false` because the HTTP server listening for the request is stopped if the PGD Proxy service fails to start or exits. 
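As a sketch of what a readiness or liveness probe might call, assuming the health-check API was enabled on the default port `8080` shown in the configuration section (the host, scheme, and port depend on your deployment):

```shell
# Expect HTTP 200 with a body of "true" when the proxy can route to the write leader.
curl -i http://localhost:8080/health/is-ready

# Expect HTTP 200 with a body of "true" while the proxy service is running.
curl -i http://localhost:8080/health/is-live
```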
## Proxy log location diff --git a/product_docs/docs/pgd/5/upgrades/upgrade_paths.mdx b/product_docs/docs/pgd/5/upgrades/upgrade_paths.mdx index 2a114d3af7c..0802df449da 100644 --- a/product_docs/docs/pgd/5/upgrades/upgrade_paths.mdx +++ b/product_docs/docs/pgd/5/upgrades/upgrade_paths.mdx @@ -27,7 +27,7 @@ Upgrades from PGD 4 to PGD 5 are supported from version 4.3.0. For older version ## Upgrading from version 3.7 to version 5 -Currently there are no direct upgrade paths from 3.7 to 5. You must first upgrade your cluster to 4.3.0 or later before upgrading to 5. See [Upgrading from version 3.7 to version 4](/pgd/4/upgrades/upgrade_paths/#upgrading-from-version-37-to-version-4) documentation for more information. +Currently there are no direct upgrade paths from 3.7 to 5. You must first upgrade your cluster to 4.3.0 or later before upgrading to 5. See [Upgrading from version 3.7 to version 4](/pgd/4/upgrades/upgrade_paths/#upgrading-from-version-37-to-version-4) for more information. - `partition_autocreate_expression` — Detects if it's time to create new partitions. - `minimum_advance_partitions` — The system attempts to always have at least `minimum_advance_partitions` partitions. From cc80b2eca140bc827b08445c34117d529b811f7e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 10 Aug 2023 14:50:19 -0400 Subject: [PATCH 011/370] Update autopartition.mdx --- product_docs/docs/pgd/5/reference/autopartition.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/reference/autopartition.mdx b/product_docs/docs/pgd/5/reference/autopartition.mdx index 7f506444b64..b4d9feddbff 100644 --- a/product_docs/docs/pgd/5/reference/autopartition.mdx +++ b/product_docs/docs/pgd/5/reference/autopartition.mdx @@ -57,7 +57,7 @@ unitsales int bdr.autopartition('measurement', '1 day', data_retention_period := '30 days'); ``` -Create five advance partitions when only two more partitions are remaining. Each partition can hold 1 billion orders. +Create five advance partitions when only two more partitions remain. Each partition can hold 1 billion orders. ```sql bdr.autopartition('Orders', '1000000000', From b278fb3cd4243ef35ee5cac4505b3822c0d6f2bd Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 10 Aug 2023 17:16:05 -0400 Subject: [PATCH 012/370] Edits to PR4538 First pass --- .../01_setting_new_parameters.mdx | 16 ++++---- .../01_configuration_parameters/index.mdx | 2 +- .../02_edb_loader/data_loading_methods.mdx | 6 +-- .../edb_loader_overview_and_restrictions.mdx | 4 +- .../running_edb_loader.mdx | 39 +++++++++++-------- .../dirty_buffer_throttling.mdx | 4 +- .../copying_a_remote_schema.mdx | 2 +- .../setting_up_edb_clone_schema.mdx | 4 +- 8 files changed, 42 insertions(+), 35 deletions(-) diff --git a/product_docs/docs/epas/15/database_administration/01_configuration_parameters/01_setting_new_parameters.mdx b/product_docs/docs/epas/15/database_administration/01_configuration_parameters/01_setting_new_parameters.mdx index b8ce0d69c9d..4a30417ed58 100644 --- a/product_docs/docs/epas/15/database_administration/01_configuration_parameters/01_setting_new_parameters.mdx +++ b/product_docs/docs/epas/15/database_administration/01_configuration_parameters/01_setting_new_parameters.mdx @@ -14,7 +14,7 @@ redirects: -Set each configuration parameter using a name/value pair. Parameter names are not case sensitive. 
The parameter name is typically separated from its value by an optional equals sign (`=`).
+Set each configuration parameter using a name/value pair. Parameter names aren't case sensitive. The parameter name is typically separated from its value by an optional equals sign (`=`).

This example shows some configuration parameter settings in the `postgresql.conf` file:

@@ -56,19 +56,19 @@ The multiplier for memory units is 1024.

A number of parameter settings are set when the EDB Postgres Advanced Server database product is built. These are read-only parameters, and you can't change their values. A couple of parameters are also permanently set for each database when the database is created. These parameters are read-only and you can't later change them for the database. However, there are a number of ways to specify the configuration parameter settings:

-- The initial settings for almost all configurable parameters across the entire database cluster are listed **in the `postgresql.conf`** configuration file. These settings are put into effect upon database server start or restart. You can override some of these initial parameter settings. All configuration parameters have built-in default settings that are in effect unless they are explicitly overridden.
+- The initial settings for almost all configurable parameters across the entire database cluster are listed in the `postgresql.conf` configuration file. These settings are put into effect upon database server start or restart. You can override some of these initial parameter settings. All configuration parameters have built-in default settings that are in effect unless they're explicitly overridden.

-- Configuration parameters in the `postgresql.conf` file are overridden when the same parameters are included **in the `postgresql.auto.conf` file**. The `ALTER SYSTEM` command is used to manage the configuration parameters in the `postgresql.auto.conf` file.
+- Configuration parameters in the `postgresql.conf` file are overridden when the same parameters are included in the `postgresql.auto.conf` file. The `ALTER SYSTEM` command is used to manage the configuration parameters in the `postgresql.auto.conf` file.

-- You can modify parameter settings **in the configuration file while the database server is running**. If the configuration file is then reloaded (meaning a SIGHUP signal is issued), for certain parameter types, the changed parameters settings immediately take effect. For some of these parameter types, the new settings are available in a currently running session immediately after the reload. For other of these parameter types, you must start a new session to use the new settings. And yet for other parameter types, modified settings don't take effect until the database server is stopped and restarted. See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/config-setting.html) for information on how to reload the configuration file:
+- You can modify parameter settings in the configuration file while the database server is running. If the configuration file is then reloaded (meaning a SIGHUP signal is issued), for certain parameter types, the changed parameter settings immediately take effect. For some of these parameter types, the new settings are available in a currently running session immediately after the reload. For others, you must start a new session to use the new settings. And for some others, modified settings don't take effect until the database server is stopped and restarted.
See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/config-setting.html) for information on how to reload the configuration file.

-- You can **use the SQL commands** `ALTER DATABASE`, `ALTER ROLE`, or `ALTER ROLE IN DATABASE` to modify certain parameter settings. The modified parameter settings take effect for new sessions after you execute the command. `ALTER DATABASE` affects new sessions connecting to the specified database. `ALTER ROLE` affects new sessions started by the specified role. `ALTER ROLE IN DATABASE` affects new sessions started by the specified role connecting to the specified database. Parameter settings established by these SQL commands remain in effect indefinitely, across database server restarts, overriding settings established by the other methods. Parameter settings established using the `ALTER DATABASE`, `ALTER ROLE`, or `ALTER ROLE IN DATABASE` commands can be changed only by either:
+- You can use the SQL commands `ALTER DATABASE`, `ALTER ROLE`, or `ALTER ROLE IN DATABASE` to modify certain parameter settings. The modified parameter settings take effect for new sessions after you execute the command. `ALTER DATABASE` affects new sessions connecting to the specified database. `ALTER ROLE` affects new sessions started by the specified role. `ALTER ROLE IN DATABASE` affects new sessions started by the specified role connecting to the specified database. Parameter settings established by these SQL commands remain in effect indefinitely, across database server restarts, overriding settings established by the other methods. You can change parameter settings established using the `ALTER DATABASE`, `ALTER ROLE`, or `ALTER ROLE IN DATABASE` commands by either:

  - Reissuing these commands with a different parameter value.

-  - Issuing these commands using either of the `SET parameter TO DEFAULT` clause or the `RESET parameter` clause. These clauses change the parameter back to using the setting set by the other methods. See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/sql-commands.html) for the exact syntax of these SQL commands.
+  - Issuing these commands using the `SET parameter TO DEFAULT` clause or the `RESET parameter` clause. These clauses change the parameter back to using the setting set by the other methods. See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/sql-commands.html) for the exact syntax of these SQL commands.

-- You can make changes for certain parameter settings for the duration of individual sessions **using the `PGOPTIONS` environment variable** or by **using the `SET` command in the EDB-PSQL or PSQL command-line programs**. Parameter settings made this way override settings established using any of the methods descussed earlier, but only during that session.
+- You can make changes for certain parameter settings for the duration of individual sessions using the `PGOPTIONS` environment variable or by using the `SET` command in the EDB-PSQL or PSQL command-line programs. Parameter settings made this way override settings established using any of the methods discussed earlier, but only during that session.

## Modifying the postgresql.conf file

@@ -88,7 +88,7 @@

SELECT name FROM pg_settings WHERE context = 'postmaster';
```

Appropriate authentication methods provide protection and security. Entries in the `pg_hba.conf` file specify the authentication methods that the server uses with connecting clients.
Before connecting to the server, you might need to modify the authentication properties specified in the `pg_hba.conf` file.

When you invoke the initdb utility to create a cluster, the utility creates a `pg_hba.conf` file for that cluster that specifies the type of authentication required from connecting clients. You can modify this file. After modifying the authentication settings in the `pg_hba.conf` file, restart the server to apply the changes. For more information about authentication and modifying the `pg_hba.conf` file, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html).

When the server receives a connection request, it verifies the credentials provided against the authentication settings in the `pg_hba.conf` file before allowing a connection to a database. To log the `pg_hba.conf` file entry used to authenticate a connection to the server, set the `log_connections` parameter to `ON` in the `postgresql.conf` file.

diff --git a/product_docs/docs/epas/15/database_administration/01_configuration_parameters/index.mdx b/product_docs/docs/epas/15/database_administration/01_configuration_parameters/index.mdx
index 2b48a5f7eeb..fecb50262fa 100644
--- a/product_docs/docs/epas/15/database_administration/01_configuration_parameters/index.mdx
+++ b/product_docs/docs/epas/15/database_administration/01_configuration_parameters/index.mdx
@@ -14,7 +14,7 @@ redirects:

The EDB Postgres Advanced Server configuration parameters control various aspects of the database server’s behavior and environment, such as data file and log file locations, connection, authentication, and security settings, resource allocation and consumption, archiving and replication settings, error logging and statistics gathering, optimization and performance tuning, and locale and formatting settings.

-Configuration parameters that apply only to EDB Postgres Advanced Server are noted in the [Summary of configuration parameters](/epas/latest/reference/database_administrator_reference/02_summary_of_configuration_parameters/) topic, which lists all EDB Postgres Advanced Server configuration parameters along with a number of key attributes of the parameters.
+Configuration parameters that apply only to EDB Postgres Advanced Server are noted in [Summary of configuration parameters](/epas/latest/reference/database_administrator_reference/02_summary_of_configuration_parameters/), which lists all EDB Postgres Advanced Server configuration parameters along with a number of key attributes of the parameters.

You can find more information about configuration parameters in the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/runtime-config.html).
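To illustrate the reload behavior these sections describe, here is a hedged sketch — it assumes a superuser session on a running cluster — that enables connection logging without a restart and then confirms the setting:

```sql
-- log_connections doesn't require a server restart; signaling a reload
-- is enough for new connections to pick up the change.
ALTER SYSTEM SET log_connections = on;
SELECT pg_reload_conf();

-- Verify the current value and the context that governs when changes apply.
SELECT name, setting, context FROM pg_settings WHERE name = 'log_connections';
```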
diff --git a/product_docs/docs/epas/15/database_administration/02_edb_loader/data_loading_methods.mdx b/product_docs/docs/epas/15/database_administration/02_edb_loader/data_loading_methods.mdx index d75c1ccd3ea..5bde3714902 100644 --- a/product_docs/docs/epas/15/database_administration/02_edb_loader/data_loading_methods.mdx +++ b/product_docs/docs/epas/15/database_administration/02_edb_loader/data_loading_methods.mdx @@ -6,11 +6,11 @@ description: "Description of the data loading methods supported by EDB*Loader" As with Oracle SQL\*Loader, EDB\*Loader supports three data loading methods: -- *Conventional path load* — Conventional path load is the default method used by EDB\*Loader. Use basic insert processing to add rows to the table. The advantage of a conventional path load is that table constraints and database objects defined on the table are enforced during a conventional path load. Table constraints and database objects include primary keys, not null constraints, check constraints, unique indexes, foreign key constraints, triggers, and so on. One exception is that the EDB Postgres Advanced Server rules defined on the table aren't enforced. EDB\*Loader can load tables on which rules are defined. However, the rules aren't executed. As a consequence, you can't load partitioned tables implemented using rules with EDB\*Loader. See [Conventional path load](invoking_edb_loader/conventional_path_load.mdx) +- **Conventional path load** — Conventional path load is the default method used by EDB\*Loader. Use basic insert processing to add rows to the table. The advantage of a conventional path load is that table constraints and database objects defined on the table are enforced during a conventional path load. Table constraints and database objects include primary keys, not null constraints, check constraints, unique indexes, foreign key constraints, triggers, and so on. One exception is that the EDB Postgres Advanced Server rules defined on the table aren't enforced. EDB\*Loader can load tables on which rules are defined. However, the rules aren't executed. As a consequence, you can't load partitioned tables implemented using rules with EDB\*Loader. See [Conventional path load](invoking_edb_loader/conventional_path_load.mdx). -- *Direct path load* — A direct path load is faster than a conventional path load but requires removing most types of constraints and triggers from the table. See [Direct path load](invoking_edb_loader/direct_path_load.mdx). +- **Direct path load** — A direct path load is faster than a conventional path load but requires removing most types of constraints and triggers from the table. See [Direct path load](invoking_edb_loader/direct_path_load.mdx). -- *Parallel direct path load* — A parallel direct path load provides even greater performance improvement by permitting multiple EDB\*Loader sessions to run simultaneously to load a single table. See [Parallel direct path load](invoking_edb_loader/parallel_direct_path_load.mdx). +- **Parallel direct path load** — A parallel direct path load provides even greater performance improvement by permitting multiple EDB\*Loader sessions to run simultaneously to load a single table. See [Parallel direct path load](invoking_edb_loader/parallel_direct_path_load.mdx). !!! Note Create EDB Postgres Advanced Server rules using the `CREATE RULE` command. EDB Postgres Advanced Server rules aren't the same database objects as rules and rule sets used in Oracle. 
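The load method is chosen when you invoke the loader. The following sketch is illustrative only — the database name, credentials, and control file are placeholders — and uses the `DIRECT` parameter documented later in this patch:

```shell
# Conventional path load (the default): constraints and triggers on the
# target table are enforced during the load.
edbldr -d edb USERID=alice/secret CONTROL=emp.ctl

# Direct path load: faster, but most constraint types and triggers must be
# removed from the target table first.
edbldr -d edb USERID=alice/secret CONTROL=emp.ctl DIRECT=TRUE
```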
diff --git a/product_docs/docs/epas/15/database_administration/02_edb_loader/edb_loader_overview_and_restrictions.mdx b/product_docs/docs/epas/15/database_administration/02_edb_loader/edb_loader_overview_and_restrictions.mdx index 9e87a46566c..bbd6d241e79 100644 --- a/product_docs/docs/epas/15/database_administration/02_edb_loader/edb_loader_overview_and_restrictions.mdx +++ b/product_docs/docs/epas/15/database_administration/02_edb_loader/edb_loader_overview_and_restrictions.mdx @@ -21,9 +21,9 @@ EDB\*Loader features include: The important version compatibility restrictions between the EDB\*Loader client and the database server are: -- When you invoke the EDB\*Loader program (called `edbldr`), you pass in parameters and directive information to the database server. We strongly recommend that you use the version 14 EDB\*Loader client (the edbldr program supplied with EDB Postgres Advanced Server 14) to load data only into version 14 of the database server. In general, use the same version for the EDB\*Loader client and database server. +- When you invoke the EDB\*Loader program (called `edbldr`), you pass in parameters and directive information to the database server. We strongly recommend that you use the version 14 EDB\*Loader client (the `edbldr` program supplied with EDB Postgres Advanced Server 14) to load data only into version 14 of the database server. In general, use the same version for the EDB\*Loader client and database server. -- Using EDB\*Loader with connection poolers such as PgPool-II and PgBouncer isn't supported. EDB\*Loader must connect directly to EDB Postgres Advanced Server version 14. Alternatively, there are commands you can use for loading data through connection poolers: +- Using EDB\*Loader with connection poolers such as PgPool-II and PgBouncer isn't supported. EDB\*Loader must connect directly to EDB Postgres Advanced Server version 14. Alternatively, you can use these commands for loading data through connection poolers: ```shell psql \copy diff --git a/product_docs/docs/epas/15/database_administration/02_edb_loader/invoking_edb_loader/running_edb_loader.mdx b/product_docs/docs/epas/15/database_administration/02_edb_loader/invoking_edb_loader/running_edb_loader.mdx index ab018dcd42a..49a929473f1 100644 --- a/product_docs/docs/epas/15/database_administration/02_edb_loader/invoking_edb_loader/running_edb_loader.mdx +++ b/product_docs/docs/epas/15/database_administration/02_edb_loader/invoking_edb_loader/running_edb_loader.mdx @@ -33,13 +33,20 @@ You can specify parameters listed in the syntax diagram in a *parameter file*. E You can include the full directory path or a relative directory path to the file name when specifying `control_file`, `data_file`, `bad_file`, `discard_file`, `log_file`, and `param_file`. If you specify the file name alone or with a relative directory path, the file is assumed to exist in the case of `control_file`, `data_file`, or `param_file` relative to the current working directory from which `edbldr` is invoked. In the case of `bad_file`, `discard_file`, or `log_file`, the file is created. -If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults for the database, port, and host are determined by the same rules as other EDB Postgres Advanced Server (EPAS) utility programs, such as `edb-psql`. +If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults for the database, port, and host are determined by the same rules as other EDB Postgres Advanced Server utility programs, such as edb-psql. 
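As a sketch of how these connection options fit together — the host address, port, database name, and credentials below are placeholders for your own environment:

```shell
# Explicit -d, -p, and -h options override the defaults described above.
edbldr -d edb -p 5444 -h 192.168.1.14 USERID=alice/secret CONTROL=emp.ctl
```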
## Requirements

- The control file must exist in the character set encoding of the client where `edbldr` is invoked. If the client is in an encoding different from the database encoding, then you must set the `PGCLIENTENCODING` environment variable on the client to the client’s encoding prior to invoking `edbldr`. This technique ensures character set conversion between the client and the database server is done correctly.

-- The file names for `control_file`, `data_file`, `bad_file`, `discard_file`, and `log_file` must include the extensions `.ctl`, `.dat`, `.bad`, `.dsc`, and `.log`, respectively. If the provided file name doesn't contain an extension, EDB\*Loader assumes the actual file name includes the appropriate extension.
+- The file names must include these extensions:
+  - `control_file` must use the `.ctl` extension.
+  - `data_file` must use the `.dat` extension.
+  - `bad_file` must use the `.bad` extension.
+  - `discard_file` must use the `.dsc` extension.
+  - `log_file` must use the `.log` extension.
+
+  If the provided file name doesn't have an extension, EDB\*Loader assumes the actual file name includes the appropriate extension.

- The operating system account used to invoke `edbldr` must have read permission on the directories and files specified by `control_file`, `data_file`, and `param_file`.

@@ -59,18 +66,18 @@ If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults f

  IP address of the host on which the database server is running.

-`USERID={ username/password | username/ | username | / }`
+`USERID={ username/password | username/ | username | / }`

-  EDB\*Loader connects to the database with `username`. `username` must be a superuser or a username with the required privileges. `password` is the password for `username`. If you omit the `USERID` parameter, EDB\*Loader prompts for `username` and `password`. If you specify `USERID=username/`, then EDB\*Loader either:
+  EDB\*Loader connects to the database with `username`. `username` must be a superuser or a username with the required privileges. `password` is the password for `username`. If you omit the `USERID` parameter, EDB\*Loader prompts for `username` and `password`. If you specify `USERID=username/`, then EDB\*Loader either:

-  - Uses the password file specified by environment variable `PGPASSFILE` if `PGPASSFILE` is set
+  - Uses the password file specified by the environment variable `PGPASSFILE` if `PGPASSFILE` is set
  - Uses the `.pgpass` password file (`pgpass.conf` on Windows systems) if `PGPASSFILE` isn't set

  If you specify `USERID=username`, then EDB\*Loader prompts for `password`. If you specify `USERID=/`, the connection is attempted using the operating system account as the user name.

!!! Note
-    EDB\*Loader ignores the EPAS connection environment variables `PGUSER` and `PGPASSWORD`. See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/libpq-pgpass.html) for information on the `PGPASSFILE` environment variable and the password file.
+    EDB\*Loader ignores the EDB Postgres Advanced Server connection environment variables `PGUSER` and `PGPASSWORD`. See the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/libpq-pgpass.html) for information on the `PGPASSFILE` environment variable and the password file.

`-c CONNECTION_STRING`

@@ -92,13 +99,13 @@ If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults f

`BAD=bad_file`

-  `bad_file` specifies the name of a file that receives input data records that can't be loaded due to errors. Specifying a `bad_file` on the command line overrides any `BADFILE` clause specified in the control file.
+ `bad_file` specifies the name of a file that receives input data records that can't be loaded due to errors. Specifying `bad_file` on the command line overrides any `BADFILE` clause specified in the control file. For more information about `bad_file`, see [Building the EDB\*Loader control file](../building_the_control_file/). `DISCARD=discard_file` - `discard_file` is the name of the file that receives input data records that don't meet any table’s selection criteria. Specifying a `discard_file` on the command line overrides the `DISCARDFILE` clause in the control file. + `discard_file` is the name of the file that receives input data records that don't meet any table’s selection criteria. Specifying `discard_file` on the command line overrides the `DISCARDFILE` clause in the control file. For more information about `discard_file`, see [Building the EDB\*Loader control file](../building_the_control_file/). @@ -131,7 +138,7 @@ If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults f `DIRECT= { FALSE | TRUE }` - If `DIRECT` is set to `TRUE` EDB\*Loader performs a direct path load instead of a conventional path load. The default value of `DIRECT` is `FALSE`. + If `DIRECT` is set to `TRUE`, EDB\*Loader performs a direct path load instead of a conventional path load. The default value of `DIRECT` is `FALSE`. Don't set `DIRECT=true` when loading the data into a replicated table. If you're using EDB\*Loader to load data into a replicated table and set `DIRECT=true`, indexes might omit rows that are in a table or might contain references to rows that were deleted. EnterpriseDB doesn't support direct inserts to load data into replicated tables. @@ -139,7 +146,7 @@ If you omit the `-d` option, the `-p` option, or the `-h` option, the defaults f `FREEZE= { FALSE | TRUE }` - Set `FREEZE` to `TRUE` to copy the data with the rows `frozen`. A tuple guaranteed to be visible to all current and future transactions is marked as frozen to prevent transaction ID wraparound. For more information about frozen tuples, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/routine-vacuuming.html). + Set `FREEZE` to `TRUE` to copy the data with the rows *frozen*. A tuple guaranteed to be visible to all current and future transactions is marked as frozen to prevent transaction ID wraparound. For more information about frozen tuples, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/static/routine-vacuuming.html). You must specify a data-loading type of `TRUNCATE` in the control file when using the `FREEZE` option. `FREEZE` isn't supported for direct loading. @@ -191,7 +198,7 @@ EDB*Loader: Copyright (c) 2007-2021, EnterpriseDB Corporation. Successfully loaded (4) records ``` -In this example, EDB\*Loader prompts for the user name and password since they are omitted from the command line. In addition, the files for the bad file and log file are specified with the `BAD` and `LOG` command line parameters. +In this example, EDB\*Loader prompts for the user name and password since they're omitted from the command line. In addition, the files for the bad file and log file are specified with the `BAD` and `LOG` command line parameters. ```shell $ /usr/edb/as14/bin/edbldr -d edb CONTROL=emp.ctl BAD=/tmp/emp.bad @@ -234,7 +241,7 @@ EDB*Loader: Copyright (c) 2007-2021, EnterpriseDB Corporation. Successfully loaded (4) records ``` -This example invokes EDB\*Loader using a normal user. 
For this example, one empty table `bar` is created and a normal user `bob` is created. The `bob` user is granted all privileges on the table `bar`. The CREATE TABLE command creates the empty table. The CREATE USER command creates the user and the GRANT command gives required privileges to the user `bob` on the `bar` table: +This example invokes EDB\*Loader using a normal user. For this example, one empty table `bar` is created and a normal user `bob` is created. The `bob` user is granted all privileges on the table `bar`. The CREATE TABLE command creates the empty table. The CREATE USER command creates the user, and the GRANT command gives required privileges to the user `bob` on the `bar` table: ```sql CREATE TABLE bar(i int); @@ -278,8 +285,8 @@ When EDB\*Loader exits, it returns one of the following codes: | Exit code | Description | | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `0` | Indicates that all rows loaded successfully. | -| `1` | Indicates that EDB\*Loader encountered command line or syntax errors or aborted the load operation due to an unrecoverable error. | -| `2` | Indicates that the load completed, but some (or all) rows were rejected or discarded. | -| `3` | Indicates that EDB\*Loader encountered fatal errors (such as OS errors). This class of errors is equivalent to the `FATAL` or `PANIC` severity levels of PostgreSQL errors. | +| `0` | All rows loaded successfully. | +| `1` | EDB\*Loader encountered command line or syntax errors or aborted the load operation due to an unrecoverable error. | +| `2` | The load completed, but some (or all) rows were rejected or discarded. | +| `3` | EDB\*Loader encountered fatal errors (such as OS errors). This class of errors is equivalent to the `FATAL` or `PANIC` severity levels of PostgreSQL errors. 
| diff --git a/product_docs/docs/epas/15/database_administration/10_edb_resource_manager/dirty_buffer_throttling.mdx b/product_docs/docs/epas/15/database_administration/10_edb_resource_manager/dirty_buffer_throttling.mdx index b76c12a008a..401da0a3db5 100644 --- a/product_docs/docs/epas/15/database_administration/10_edb_resource_manager/dirty_buffer_throttling.mdx +++ b/product_docs/docs/epas/15/database_administration/10_edb_resource_manager/dirty_buffer_throttling.mdx @@ -107,7 +107,7 @@ edb=# INSERT INTO t1 VALUES (generate_series (1,10000), 'aaa'); INSERT 0 10000 ``` -The following example shows the results from the `INSERT` command: +This example shows the results from the `INSERT` command: ```sql edb=# SELECT query, rows, total_time, shared_blks_dirtied FROM @@ -152,7 +152,7 @@ edb=# INSERT INTO t1 VALUES (generate_series (1,10000), 'aaa'); INSERT 0 10000 ``` -The following example shows the results from the `INSERT` command without the use of a resource group: +This example shows the results from the `INSERT` command without the use of a resource group: ```sql edb=# SELECT query, rows, total_time, shared_blks_dirtied FROM diff --git a/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/copying_a_remote_schema.mdx b/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/copying_a_remote_schema.mdx index 80ee0ea6a19..0ada4c31486 100644 --- a/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/copying_a_remote_schema.mdx +++ b/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/copying_a_remote_schema.mdx @@ -110,7 +110,7 @@ __OUTPUT__ (1 row) ``` -The following example displays the status from the log file during various points in the cloning process: +This example displays the status from the log file during various points in the cloning process: ```sql tgtdb=# SELECT edb_util.process_status_from_log('clone_rmt_src_tgt'); diff --git a/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/setting_up_edb_clone_schema.mdx b/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/setting_up_edb_clone_schema.mdx index adaf5809c41..fcaf094944f 100644 --- a/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/setting_up_edb_clone_schema.mdx +++ b/product_docs/docs/epas/15/database_administration/14_edb_clone_schema/setting_up_edb_clone_schema.mdx @@ -150,7 +150,7 @@ CREATE USER MAPPING FOR enterprisedb SERVER local_server For more information about using the `CREATE USER MAPPING` command, see the [PostgreSQL core documentation](https://www.postgresql.org/docs/current/sql-createusermapping.html). 
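In addition to the psql meta-commands shown next, you can confirm a mapping from plain SQL. This minimal check uses only a standard catalog view and assumes a mapping already exists:

```sql
-- Lists each foreign server together with the database role mapped to it.
SELECT srvname, usename FROM pg_user_mappings;
```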
-These `psql` commands show the foreign server and user mapping: +These psql commands show the foreign server and user mapping: ```sql edb=# \des+ @@ -235,7 +235,7 @@ CREATE USER MAPPING FOR enterprisedb SERVER src_server ``` ## Displaying foreign servers and user mappings -These `psql` commands show the foreign servers and user mappings: +These psql commands show the foreign servers and user mappings: ```sql tgtdb=# \des+ From b33b6ee7cbc90bff55f584f4f56c7736c762409a Mon Sep 17 00:00:00 2001 From: francoughlin Date: Thu, 10 Aug 2023 15:38:08 -0400 Subject: [PATCH 013/370] Edits for Working with Oracle Data section Also includes some edits for tools and utilities --- .../taking_a_snapshot.mdx | 58 +++++++++--------- .../06_unicode_collation_algorithm.mdx | 40 ++++++++----- .../11_libpq_c_library.mdx | 8 +-- .../02_enhanced_compatibility_features.mdx | 60 +++++++++---------- .../02_calling_dblink_ora_functions.mdx | 6 ++ .../06_dblink_ora/connecting_to_oracle.mdx | 7 +++ .../06_dblink_ora/index.mdx | 4 +- 7 files changed, 103 insertions(+), 80 deletions(-) diff --git a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx index 1b4c78de659..239812cfd24 100644 --- a/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx +++ b/product_docs/docs/epas/15/managing_performance/04_dynamic_runtime_instrumentation_tools_architecture_DRITA/taking_a_snapshot.mdx @@ -9,35 +9,35 @@ EDB Postgres Advanced Server's `postgresql.conf` file includes a configuration p `timed_statistics` is a dynamic parameter that you can modify in the `postgresql.conf` file or while a session is in progress. To enable DRITA, you must either: -- Modify the `postgresql.conf` file, setting the `timed_statistics` parameter to `TRUE`. - -- Connect to the server with the EDB-PSQL client and invoke the command: - -```sql -SET timed_statistics = TRUE -``` - -- After modifying the `timed_statistics` parameter, take a starting snapshot. A snapshot captures the current state of each timer and event counter. The server compares the starting snapshot to a later snapshot to gauge system performance. Use the `edbsnap()` function to take the beginning snapshot: - -```sql -edb=# SELECT * FROM edbsnap(); -__OUTPUT__ - edbsnap ----------------------- - Statement processed. -(1 row) -``` - -- Run the workload that you want to evaluate. When the workload is complete or at a strategic point during the workload, take another snapshot: - -```sql -edb=# SELECT * FROM edbsnap(); -__OUTPUT__ - edbsnap ----------------------- - Statement processed. -(1 row) -``` +1. Modify the `postgresql.conf` file, setting the `timed_statistics` parameter to `TRUE`. + +2. Connect to the server with the EDB-PSQL client and invoke the command: + + ```sql + SET timed_statistics = TRUE + ``` + +3. After modifying the `timed_statistics` parameter, take a starting snapshot. A snapshot captures the current state of each timer and event counter. The server compares the starting snapshot to a later snapshot to gauge system performance. Use the `edbsnap()` function to take the beginning snapshot: + + ```sql + edb=# SELECT * FROM edbsnap(); + __OUTPUT__ + edbsnap + ---------------------- + Statement processed. + (1 row) + ``` + +4. Run the workload that you want to evaluate. 
When the workload is complete or at a strategic point during the workload, take another snapshot: + + ```sql + edb=# SELECT * FROM edbsnap(); + __OUTPUT__ + edbsnap + ---------------------- + Statement processed. + (1 row) + ``` You can capture multiple snapshots during a session. Finally, you can use the DRITA functions and reports to manage and compare the snapshots to evaluate performance information. diff --git a/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/06_unicode_collation_algorithm.mdx b/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/06_unicode_collation_algorithm.mdx index 4e973df61f2..c3151e28ebe 100644 --- a/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/06_unicode_collation_algorithm.mdx +++ b/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/06_unicode_collation_algorithm.mdx @@ -19,7 +19,9 @@ redirects: The Unicode Collation Algorithm (UCA) is a specification (*Unicode Technical Report #10*) that defines a customizable method of collating and comparing Unicode data. *Collation* means how data is sorted, as with a `SELECT … ORDER BY` clause. *Comparison* is relevant for searches that use ranges with less than, greater than, or equal to operators. -Customizability is an important factor for various reasons such as: +## Benefits + +Customizability is an important factor for various reasons: - Unicode supports many languages. Letters that might be common to several languages might collate in different orders depending on the language. - Characters that appear with letters in certain languages, such as accents or umlauts, have an impact on the expected collation depending on the language. @@ -42,11 +44,11 @@ The basic concept behind the Unicode Collation Algorithm is the use of multileve If the order can be determined based on the primary level, then the algorithm is done. If the order can't be determined based on the primary level, then the secondary level, level 2, is applied. If the order can be determined based on the secondary level, then the algorithm is done, otherwise the tertiary level is applied, and so on. There is typically a final, tie-breaking level to determine the order if it can't be resolved by the prior levels. -- **Level 1 – Primary level for base characters.** The order of basic characters such as letters and digits determines the difference, such as `A < B`. -- **Level 2 – Secondary level for accents.** If there are no primary level differences, then the presence or absence of accents and other such characters determines the order, such as `a < á`. -- **Level 3 – Tertiary level for case.** If there are no primary level or secondary level differences, then a difference in case determines the order, such as `a < A`. -- **Level 4 – Quaternary level for punctuation.** If there are no primary, secondary, or tertiary level differences, then the presence or absence of white-space characters, control characters, and punctuation determine the order, such as `-A < A`. -- **Level 5 – Identical level for tie breaking.** If there are no primary, secondary, tertiary, or quaternary level differences, then some other difference such as the code point values determines the order. +- **Level 1 – Primary level for base characters**. The order of basic characters such as letters and digits determines the difference, such as `A < B`. +- **Level 2 – Secondary level for accents**. 
If there are no primary level differences, then the presence or absence of accents and other such characters determines the order, such as `a < á`. +- **Level 3 – Tertiary level for case**. If there are no primary level or secondary level differences, then a difference in case determines the order, such as `a < A`. +- **Level 4 – Quaternary level for punctuation**. If there are no primary, secondary, or tertiary level differences, then the presence or absence of white-space characters, control characters, and punctuation determine the order, such as `-A < A`. +- **Level 5 – Identical level for tie breaking**. If there are no primary, secondary, tertiary, or quaternary level differences, then some other difference such as the code point values determines the order. ## International components for Unicode @@ -111,14 +113,14 @@ For the complete and precise meaning and usage of collation attributes, see “C Each collation attribute is represented by an uppercase letter. The possible valid values for each attribute are given by codes shown in the parentheses. Some codes have general meanings for all attributes. **X** means to set the attribute off. **O** means to set the attribute on. **D** means to set the attribute to its default value. -- **A – Alternate (N, S, D).** Handles treatment of *variable* characters such as white spaces, punctuation marks, and symbols. When set to non-ignorable (N), differences in variable characters are treated with the same importance as differences in letters. When set to shifted (S), then differences in variable characters are of minor importance (that is, the variable character is ignored when comparing base characters). -- **C – Case first (X, L, U, D).** Controls whether a lowercase letter sorts before the same uppercase letter (L), or the uppercase letter sorts before the same lowercase letter (U). You typically specify off (X) when you want lowercase first (L). -- **E – Case level (X, O, D).** Set in combination with the Strength attribute, use the Case Level attribute when you want to ignore accents but not case. -- **F – French collation (X, O, D).** When set to on, secondary differences (presence of accents) are sorted from the back of the string as done in the French Canadian locale. -- **H – Hiragana quaternary (X, O, D).** Introduces an additional level to distinguish between the Hiragana and Katakana characters for compatibility with the JIS X 4061 collation of Japanese character strings. -- **N – Normalization checking (X, O, D).** Controls whether text is thoroughly normalized for comparison. Normalization deals with the issue of canonical equivalence of text whereby different code point sequences represent the same character. This occurrence then presents issues when sorting or comparing such characters. For languages such as Arabic, ancient Greek, Hebrew, Hindi, Thai, or Vietnamese, set normalization checking to on. -- **S – Strength (1, 2, 3, 4, I, D).** Maximum collation level used for comparison. Influences whether accents or case are taken into account when collating or comparing strings. Each number represents a level. A setting of I represents identical strength (that is, level 5). -- **T – Variable top (hexadecimal digits).** Applies only when the Alternate attribute isn't set to non-ignorable (N). The hexadecimal digits specify the highest character sequence to consider ignorable. 
For example, if white space is ignorable but visible variable characters aren't, then set Variable Top to 0020 along with the Alternate attribute set to S and the Strength attribute set to 3. (The space character is hexadecimal 0020. Other nonvisible variable characters, such as backspace, tab, line feed, and carriage return, have values less than 0020. All visible punctuation marks have values greater than 0020.) +- **A – Alternate (N, S, D)**. Handles treatment of *variable* characters such as white spaces, punctuation marks, and symbols. When set to non-ignorable (N), differences in variable characters are treated with the same importance as differences in letters. When set to shifted (S), then differences in variable characters are of minor importance (that is, the variable character is ignored when comparing base characters). +- **C – Case first (X, L, U, D)**. Controls whether a lowercase letter sorts before the same uppercase letter (L), or the uppercase letter sorts before the same lowercase letter (U). You typically specify off (X) when you want lowercase first (L). +- **E – Case level (X, O, D)**. Set in combination with the Strength attribute, use the Case Level attribute when you want to ignore accents but not case. +- **F – French collation (X, O, D)**. When set to on, secondary differences (presence of accents) are sorted from the back of the string as done in the French Canadian locale. +- **H – Hiragana quaternary (X, O, D)**. Introduces an additional level to distinguish between the Hiragana and Katakana characters for compatibility with the JIS X 4061 collation of Japanese character strings. +- **N – Normalization checking (X, O, D)**. Controls whether text is thoroughly normalized for comparison. Normalization deals with the issue of canonical equivalence of text whereby different code point sequences represent the same character. This occurrence then presents issues when sorting or comparing such characters. For languages such as Arabic, ancient Greek, Hebrew, Hindi, Thai, or Vietnamese, set normalization checking to on. +- **S – Strength (1, 2, 3, 4, I, D)**. Maximum collation level used for comparison. Influences whether accents or case are taken into account when collating or comparing strings. Each number represents a level. A setting of I represents identical strength (that is, level 5). +- **T – Variable top (hexadecimal digits)**. Applies only when the Alternate attribute isn't set to non-ignorable (N). The hexadecimal digits specify the highest character sequence to consider ignorable. For example, if white space is ignorable but visible variable characters aren't, then set Variable Top to 0020 along with the Alternate attribute set to S and the Strength attribute set to 3. (The space character is hexadecimal 0020. Other nonvisible variable characters, such as backspace, tab, line feed, and carriage return, have values less than 0020. All visible punctuation marks have values greater than 0020.) A set of collation attributes and their values is represented by a text string consisting of the collation attribute letter concatenated with the desired attribute value. Each attribute/value pair is joined to the next pair with an underscore character: @@ -208,6 +210,8 @@ INSERT INTO collate_tbl VALUES (10, '-B'); INSERT INTO collate_tbl VALUES (11, ' B'); ``` +### Using the default collation + The following query sorts on column `c2` using the default collation. 
Variable characters (white space and punctuation marks) with `id` column values of `9`, `10`, and `11` are ignored and sort with the letter `B`.

```sql
@@ -229,6 +233,8 @@ __OUTPUT__
(11 rows)
```

+### Using collation icu_collate_lowercase
+
The following query sorts on column `c2` using collation `icu_collate_lowercase`. This collation forces the lowercase form of a letter to sort before the uppercase form of the same base letter. The `AN` attribute forces the sort order to include variable characters at the same level when comparing base characters. Thus rows with `id` values of `9`, `10`, and `11` appear at the beginning of the sort list before all letters and numbers.

```sql
@@ -250,6 +256,8 @@ __OUTPUT__
(11 rows)
```

+### Using collation icu_collate_uppercase
+
The following query sorts on column `c2` using collation `icu_collate_uppercase`. This collation forces the uppercase form of a letter to sort before the lowercase form of the same base letter.

```sql
@@ -271,6 +279,8 @@ __OUTPUT__
(11 rows)
```

+### Using collation icu_collate_ignore_punct
+
The following query sorts on column `c2` using collation `icu_collate_ignore_punct`. This collation causes variable characters to be ignored so rows with `id` values of `9`, `10`, and `11` sort with the letter `B`, the character immediately following the ignored variable character.

```sql
@@ -293,6 +303,8 @@ __OUTPUT__
(11 rows)
```

+### Using collation icu_collate_ignore_white_sp
+
The following query sorts on column `c2` using collation `icu_collate_ignore_white_sp`. The `AS` and `T0020` attributes of the collation cause variable characters with code points less than or equal to hexadecimal `0020` to be ignored while variable characters with code points greater than hexadecimal `0020` are included in the sort. The row with `id` value of `11`, which starts with a space character (hexadecimal `0020`), sorts with the letter `B`. The rows with `id` values of `9` and `10`, which start with visible punctuation marks greater than hexadecimal `0020`, appear at the beginning of the sort list. These particular variable characters are included in the sort order at the same level when comparing base characters.

diff --git a/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/11_libpq_c_library.mdx b/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/11_libpq_c_library.mdx
index dad156fea8f..fbacee19c20 100644
--- a/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/11_libpq_c_library.mdx
+++ b/product_docs/docs/epas/15/tools_utilities_and_components/application_developer_tools/11_libpq_c_library.mdx
@@ -47,7 +47,7 @@ You can now use `PQexec()` and `PQgetvalue()` to retrieve a `REFCURSOR` returned

!!! Note
    The examples that follow don't include the error-handling code required in a real-world client application.

-## Returning a single REFCURSOR
+### Returning a single REFCURSOR

This example shows an SPL function that returns a value of type `REFCURSOR`:

@@ -319,7 +319,7 @@ If you call `getEmpsAndDepts(20, 30)`, the server returns a cursor that contains
 30,SALES,CHICAGO
 ```

-## Array binding
+## Performing array binding

EDB Postgres Advanced Server's array binding functionality allows you to send an array of data across the network in a single round trip. When the back end receives the bulk data, it can use the data to perform insert or update operations.
@@ -398,7 +398,7 @@ PGresult *PQexecBulkPrepared(PGconn *conn, const int *paramFormats); ``` -### Example code (Using PQBulkStart, PQexecBulk, PQBulkFinish) +### Using PQBulkStart, PQexecBulk, PQBulkFinish This example uses `PGBulkStart`, `PQexecBulk`, and `PQBulkFinish`: @@ -456,7 +456,7 @@ void InsertDataUsingBulkStyle( PGconn *conn ) } ``` -### Example code (Using PQexecBulkPrepared) +### Using PQexecBulkPrepared This example uses `PQexecBulkPrepared`: diff --git a/product_docs/docs/epas/15/working_with_oracle_data/02_enhanced_compatibility_features.mdx b/product_docs/docs/epas/15/working_with_oracle_data/02_enhanced_compatibility_features.mdx index 55fdbf44d07..d3918121d22 100644 --- a/product_docs/docs/epas/15/working_with_oracle_data/02_enhanced_compatibility_features.mdx +++ b/product_docs/docs/epas/15/working_with_oracle_data/02_enhanced_compatibility_features.mdx @@ -26,16 +26,16 @@ EDB Postgres Advanced Server includes extended functionality that provides compa ## Enabling compatibility features -You can install EDB Postgres Advanced Server in several ways to take advantage of compatibility features: +You can install EDB Postgres Advanced Server in several ways to enable compatibility features: -- Use the `INITDBOPTS` variable in the EDB Postgres Advanced Server service configuration file to specify `--redwood-like` before initializing your cluster. -- When invoking `initdb` to initialize your cluster compatible with Oracle mode, include the `--redwood-like` or `--no-redwood-compat` option to initialize your cluster in Oracle non-compatible mode. +- Use the `INITDBOPTS` variable in the EDB Postgres Advanced Server service configuration file to specify `--redwood-like` **before initializing your cluster**. +- Include the `--redwood-like` parameter when using `initdb` to initialize your cluster. See [Configuration parameters compatible with Oracle databases](../reference/database_administrator_reference/02_summary_of_configuration_parameters/) and [Managing an EDB Postgres Advanced Server installation](../installing/linux_install_details/managing_an_advanced_server_installation/) for more information about the installation options supported by the EDB Postgres Advanced Server installers. ## Stored procedural language -EDB Postgres Advanced Server supports a highly productive procedural language that allows you to write custom procedures, functions, triggers, and packages. The procedural language: +EDB Postgres Advanced Server supports a highly productive procedural language that allows you to write custom procedures, functions, triggers, and packages. This procedural language: - Complements the SQL language and built-in packages. - Provides a seamless development and testing environment. @@ -86,31 +86,31 @@ EDB Postgres Advanced Server supports a number of built-in packages that provide | Package name | Description | | -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| DBMS_ALERT | The DBMS_ALERT package lets you register for, send, and receive alerts. | -| DBMS_AQ | The DBMS_AQ package provides message queueing and processing for EDB Postgres Advanced Server. | -| DBMS_AQADM | The DBMS_AQADM package provides supporting procedures for Advanced Queueing functionality. | -| DBMS_CRYPTO | The DBMS_CRYPTO package provides functions and procedures that allow you to encrypt or decrypt RAW, BLOB, or CLOB data. 
You can also use DBMS_CRYPTO functions to generate cryptographically strong random values. | -| DBMS_JOB | The DBMS_JOB package provides for creating, scheduling, and managing jobs. | -| DBMS_LOB | The DBMS_LOB package lets you operate on large objects. | -| DBMS_LOCK | EDB Postgres Advanced Server provides support for the DBMS_LOCK.SLEEP procedure. | -| DBMS_MVIEW | Use procedures in the DBMS_MVIEW package to manage and refresh materialized views and their dependencies. | -| DBMS_OUTPUT | The DBMS_OUTPUT package lets you send messages to a message buffer or get messages from the message buffer. | -| DBMS_PIPE | The DBMS_PIPE package lets you send messages through a pipe in or between sessions connected to the same database cluster. | -| DBMS_PROFILER | The DBMS_PROFILER package collects and stores performance information about the PL/pgSQL and SPL statements that are executed during a performance profiling session. | -| DBMS_RANDOM | The DBMS_RANDOM package provides methods to generate random values. | -| DBMS_REDACT | The DBMS_REDACT package enables redacting or masking data that's returned by a query. | -| DBMS_RLS | The DBMS_RLS package enables implementating Virtual Private Database on certain EDB Postgres Advanced Server database objects. | -| DBMS_SCHEDULER | The DBMS_SCHEDULER package lets you create and manage jobs, programs, and job schedules. | -| DBMS_SESSION | EDB Postgres Advanced Server provides support for the DBMS_SESSION.SET_ROLE procedure. | -| DBMS_SQL | The DBMS_SQL package provides an application interface to the EDB dynamic SQL functionality. | -| DBMS_UTILITY | The DBMS_UTILITY package provides various utility programs. | -| UTL_ENCODE | The UTL_ENCODE package provides a way to encode and decode data. | -| UTL_FILE | The UTL_FILE package lets you read from and write to files on the operating system’s file system. | -| UTL_HTTP | The UTL_HTTP package lets you use the HTTP or HTTPS protocol to retrieve information found at a URL. | -| UTL_MAIL | The UTL_MAIL package lets you manage email. | -| UTL_RAW | The UTL_RAW package allows you to manipulate or retrieve the length of raw data types. | -| UTL_SMTP | The UTL_SMTP package lets you send emails over the Simple Mail Transfer Protocol (SMTP). | -| UTL_URL | The UTL_URL package provides a way to escape illegal and reserved characters in a URL. | +| DBMS_ALERT | Lets you register for, send, and receive alerts. | +| DBMS_AQ | Provides message queueing and processing for EDB Postgres Advanced Server. | +| DBMS_AQADM | Provides supporting procedures for Advanced Queueing functionality. | +| DBMS_CRYPTO | Provides functions and procedures that allow you to encrypt or decrypt RAW, BLOB, or CLOB data. You can also use DBMS_CRYPTO functions to generate cryptographically strong random values. | +| DBMS_JOB | Provides for creating, scheduling, and managing jobs. | +| DBMS_LOB | Lets you operate on large objects. | +| DBMS_LOCK | Provides support for the DBMS_LOCK.SLEEP procedure. | +| DBMS_MVIEW | Use to manage and refresh materialized views and their dependencies. | +| DBMS_OUTPUT | Lets you send messages to a message buffer or get messages from the message buffer. | +| DBMS_PIPE | Lets you send messages through a pipe in or between sessions connected to the same database cluster. | +| DBMS_PROFILER | Collects and stores performance information about the PL/pgSQL and SPL statements that are executed during a performance profiling session. | +| DBMS_RANDOM | Provides methods to generate random values. 
|
+| DBMS_REDACT | Enables redacting or masking data that's returned by a query. |
+| DBMS_RLS | Enables implementing Virtual Private Database on certain EDB Postgres Advanced Server database objects. |
+| DBMS_SCHEDULER | Lets you create and manage jobs, programs, and job schedules. |
+| DBMS_SESSION | Provides support for the DBMS_SESSION.SET_ROLE procedure. |
+| DBMS_SQL | Provides an application interface to the EDB dynamic SQL functionality. |
+| DBMS_UTILITY | Provides various utility programs. |
+| UTL_ENCODE | Provides a way to encode and decode data. |
+| UTL_FILE | Lets you read from and write to files on the operating system’s file system. |
+| UTL_HTTP | Lets you use the HTTP or HTTPS protocol to retrieve information found at a URL. |
+| UTL_MAIL | Lets you manage email. |
+| UTL_RAW | Allows you to manipulate or retrieve the length of raw data types. |
+| UTL_SMTP | Lets you send emails over the Simple Mail Transfer Protocol (SMTP). |
+| UTL_URL | Provides a way to escape illegal and reserved characters in a URL. |

See [Built-in packages](../reference/oracle_compatibility_reference/epas_compat_bip_guide/03_built-in_packages/) for detailed information about the procedures and functions available in each package.

@@ -169,7 +169,7 @@ See [Protecting proprietary source code](../epas_security_guide/03_edb_wrap/) fo

### Dynamic Runtime Instrumentation Tools Architecture (DRITA)

-The Dynamic Runtime Instrumentation Tools Architecture (DRITA) allows a DBA to query catalog views to determine the *wait events* that affect the performance of individual sessions or the system as a whole. DRITA records the number of times each event occurs as well as the time spent waiting. You can use this information to diagnose performance problems. DRITA offers this functionality while consuming minimal system resources.
+DRITA allows a DBA to query catalog views to determine the *wait events* that affect the performance of individual sessions or the system as a whole. DRITA records the number of times each event occurs as well as the time spent waiting. You can use this information to diagnose performance problems. DRITA offers this functionality while consuming minimal system resources.

DRITA compares *snapshots* to evaluate the performance of a system. A snapshot is a saved set of system performance data at a given point in time. A unique ID number identifies each snapshot. You can use snapshot ID numbers with DRITA reporting functions to return system performance statistics.

diff --git a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/02_calling_dblink_ora_functions.mdx b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/02_calling_dblink_ora_functions.mdx
index 66d7aa114c2..8e32516d61f 100644
--- a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/02_calling_dblink_ora_functions.mdx
+++ b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/02_calling_dblink_ora_functions.mdx
@@ -11,6 +11,8 @@ redirects:

+## Using the dblink_ora_connect function
+
The following command establishes a connection using the `dblink_ora_connect()` function:

```sql
@@ -25,6 +27,8 @@ The example connects to:

You can use the connection name `acctg` to refer to this connection when calling other dblink_ora functions.

+## Using the dblink_ora_copy function
+
The following command uses the `dblink_ora_copy()` function over a connection named `edb_conn`.
It copies the `empid` and `deptno` columns from a table on an Oracle server named `ora_acctg` to a table located in the `public` schema on an instance of EDB Postgres Advanced Server named `as_acctg`. The `TRUNCATE` option is enforced, and a feedback count of `3` is specified: ```sql @@ -44,6 +48,8 @@ INFO: Row: 12 (1 row) ``` +## Using the dblink_ora_record function + The following `SELECT` statement uses the `dblink_ora_record()` function and the `acctg` connection to retrieve information from the Oracle server: ```sql diff --git a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/connecting_to_oracle.mdx b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/connecting_to_oracle.mdx index 37c3668fdfe..2ae2b741faa 100644 --- a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/connecting_to_oracle.mdx +++ b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/connecting_to_oracle.mdx @@ -5,6 +5,8 @@ description: "Describes how to create a link to an Oracle server" To enable Oracle connectivity, download Oracle's freely available OCI drivers from [their website](http://www.oracle.com/technetwork/database/database-technologies/instant-client/overview/index.html). +## Creating a symbolic link + For Linux, if the Oracle instant client that you downloaded doesn't include the `libclntsh.so` library, you must create a symbolic link named `libclntsh.so` that points to the downloaded version. Navigate to the instant client directory and execute the following command: ```shell @@ -16,9 +18,12 @@ Where `version` is the version number of the `libclntsh.so` library. For example ```shell ln -s libclntsh.so.12.1 libclntsh.so ``` +## Setting the environment variable Before creating a link to an Oracle server, you must direct EDB Postgres Advanced Server to the correct Oracle home directory. Set the `LD_LIBRARY_PATH` environment variable on Linux or `PATH` on Windows to the `lib` directory of the Oracle client installation directory. +## Setting the oracle_home configuration parameter + Alternatively, you can set the value of the `oracle_home` configuration parameter in the `postgresql.conf` file. The value specified in the `oracle_home` configuration parameter overrides the `LD_LIBRARY_PATH` environment variable in Linux and `PATH` environment variable in Windows. !!! Note @@ -32,6 +37,8 @@ oracle_home = 'lib_directory' In place of ``, substitute the name of the `oracle_home` path to the Oracle client installation directory that contains `libclntsh.so` in Linux and `oci.dll` in Windows. +## Restarting the server + After setting the `oracle_home` configuration parameter, you must restart the server for the changes to take effect. Restart the server: - On Linux, using the `systemctl` command or `pg_ctl` services. diff --git a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/index.mdx b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/index.mdx index 2ac481d2dbd..ad2be4cde70 100644 --- a/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/index.mdx +++ b/product_docs/docs/epas/15/working_with_oracle_data/06_dblink_ora/index.mdx @@ -11,6 +11,4 @@ redirects: --- -dblink_ora enables you to issue arbitrary queries to a remote Oracle server. It provides an OCI-based database link that allows you to `SELECT`, `INSERT`, `UPDATE`, or `DELETE` data stored on an Oracle system from EDB Postgres Advanced Server. - -The following topics describe how to use the dblink_ora feature. 
+dblink_ora enables you to issue arbitrary queries to a remote Oracle server. It provides an OCI-based database link that enables you to `SELECT`, `INSERT`, `UPDATE`, or `DELETE` data stored on an Oracle system from EDB Postgres Advanced Server. From 38d1888ae3d9610b798e3c0998153857de0cb750 Mon Sep 17 00:00:00 2001 From: francoughlin Date: Fri, 11 Aug 2023 15:56:37 -0400 Subject: [PATCH 014/370] Edits for Reference branch Edits for Application programming branch and Database administration branch --- .../06_user_defined_pl_sql_subtypes.mdx | 12 +++-- .../05_using_the_returning_into_clause.mdx | 6 ++- .../04_basic_statements/06_select_into.mdx | 10 +++- .../04_basic_statements/07_update.mdx | 2 +- .../08_obtaining_the_result_status.mdx | 2 +- .../01_if_statement/01_if_then.mdx | 4 ++ .../01_if_statement/02_if_then_else.mdx | 4 ++ .../04_if_then_elseif_else.mdx | 4 ++ .../01_if_statement/index.mdx | 1 + .../02_return_statement.mdx | 4 ++ .../03_goto_statement.mdx | 12 ++++- .../01_selector_case_expression.mdx | 4 ++ .../02_searched_case_expression.mdx | 4 ++ .../01_selector_case_statement.mdx | 4 ++ .../02_searched_case_statement.mdx | 4 ++ .../05_case_statement/index.mdx | 1 + .../06_loops/02_exit.mdx | 4 ++ .../06_loops/04_while.mdx | 4 ++ .../06_loops/05_for_integer_variant.mdx | 4 ++ .../05_control_structures/06_loops/index.mdx | 1 + .../07_exception_handling.mdx | 17 ++++-- .../08_user_defined_exceptions.mdx | 4 ++ .../09_pragma_exception_init.mdx | 10 +++- .../10_raise_application_error.mdx | 4 +- .../11_collection_methods/01_count.mdx | 8 ++- .../11_collection_methods/02_deletes.mdx | 7 +++ .../11_collection_methods/03_exists.mdx | 8 ++- .../11_collection_methods/04_extend.mdx | 10 +++- .../11_collection_methods/05_first.mdx | 8 ++- .../11_collection_methods/06_last.mdx | 8 ++- .../11_collection_methods/08_next.mdx | 8 ++- .../11_collection_methods/09_prior.mdx | 9 +++- .../11_collection_methods/10_trim.mdx | 8 ++- .../17_advanced_server_keywords.mdx | 2 + .../02_edb_redwood_raw_names.mdx | 4 ++ .../03_edb_redwood_strings.mdx | 8 ++- .../01_introduction/04_edb_stmt_level_tx.mdx | 4 ++ .../01_introduction/05_oracle_home.mdx | 4 ++ ...02_summary_of_configuration_parameters.mdx | 8 +-- .../edb_loader_control_file_parameters.mdx | 52 +++++++++---------- 40 files changed, 227 insertions(+), 55 deletions(-) diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/01_basic_spl_elements/06_user_defined_pl_sql_subtypes.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/01_basic_spl_elements/06_user_defined_pl_sql_subtypes.mdx index 90e6626dad3..c8e67ed7ab6 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/01_basic_spl_elements/06_user_defined_pl_sql_subtypes.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/01_basic_spl_elements/06_user_defined_pl_sql_subtypes.mdx @@ -11,7 +11,11 @@ redirects: -EDB Postgres Advanced Server supports user-defined PL/SQL subtypes and subtype aliases. A subtype is a data type with an optional set of constraints that restrict the values that can be stored in a column of that type. The rules that apply to the type on which the subtype is based are still enforced, but you can use additional constraints to place limits on the precision or scale of values stored in the type. 
+EDB Postgres Advanced Server supports user-defined PL/SQL subtypes and subtype aliases. + +## About subtypes + +A subtype is a data type with an optional set of constraints that restrict the values that can be stored in a column of that type. The rules that apply to the type on which the subtype is based are still enforced, but you can use additional constraints to place limits on the precision or scale of values stored in the type. You can define a subtype in the declaration of a PL function, procedure, anonymous block, or package. The syntax is: @@ -92,10 +96,10 @@ This example creates a subtype named `acct_balance` that shares all of the attri An argument declaration (in a function or procedure header) is a *formal argument*. The value passed to a function or procedure is an *actual argument*. When invoking a function or procedure, the caller provides 0 or more actual arguments. Each actual argument is assigned to a formal argument that holds the value in the body of the function or procedure. -If a formal argument is declared as a constrained subtype: +If a formal argument is declared as a constrained subtype, EDB Postgres Advanced Server: -- EDB Postgres Advanced Server doesn't enforce subtype constraints when assigning an actual argument to a formal argument when invoking a function. -- EDB Postgres Advanced Server enforces subtype constraints when assigning an actual argument to a formal argument when invoking a procedure. +- Enforces subtype constraints when assigning an actual argument to a formal argument when invoking a procedure. +- Doesn't enforce subtype constraints when assigning an actual argument to a formal argument when invoking a function. ## Using the %TYPE operator diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/05_using_the_returning_into_clause.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/05_using_the_returning_into_clause.mdx index 374a4aaa21d..b7d4467c968 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/05_using_the_returning_into_clause.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/05_using_the_returning_into_clause.mdx @@ -9,7 +9,7 @@ redirects: You can append the `INSERT`, `UPDATE`, and `DELETE` commands with the optional `RETURNING INTO` clause. This clause allows the SPL program to capture the newly added, modified, or deleted values from the results of an `INSERT`, `UPDATE`, or `DELETE` command, respectively. -The following is the syntax: +## Syntax ```sql { | | } @@ -30,6 +30,8 @@ If the `INSERT`, `UPDATE`, or `DELETE` command returns a result set with more th !!! Note A variation of `RETURNING INTO` using the `BULK COLLECT` clause allows a result set of more than one row that's returned into a collection. See [Using the BULK COLLECT clause](/epas/latest/application_programming/epas_compat_spl/12_working_with_collections/04_using_the_bulk_collect_clause/#using_the_bulk_collect_clause) for more information. +## Adding the RETURNING INTO clause + This example modifies the `emp_comp_update` procedure introduced in [UPDATE](07_update/#update). 
It adds the `RETURNING INTO` clause: ```sql @@ -88,6 +90,8 @@ New Salary : 6540.00 New Commission : 1200.00 ``` +## Adding the RETURNING INTO clause using record types + This example modifies the `emp_delete` procedure, adding the `RETURNING INTO` clause using record types: ```sql diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/06_select_into.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/06_select_into.mdx index c79f922c11c..6c52566060c 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/06_select_into.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/06_select_into.mdx @@ -12,7 +12,11 @@ The `SELECT INTO` statement is an SPL variation of the SQL `SELECT` command. The - `SELECT INTO` assigns the results to variables or records where they can then be used in SPL program statements. - The accessible result set of `SELECT INTO` is at most one row. -Other than these differences, all of the clauses of the `SELECT` command, such as `WHERE`, `ORDER BY`, `GROUP BY`, and `HAVING`, are valid for `SELECT INTO`. The following are the two variations of `SELECT INTO`: +Other than these differences, all of the clauses of the `SELECT` command, such as `WHERE`, `ORDER BY`, `GROUP BY`, and `HAVING`, are valid for `SELECT INTO`. + +## Syntax + +These examples show two variations of `SELECT INTO`: ```sql SELECT INTO FROM ...; @@ -33,6 +37,8 @@ If the query returns zero rows, null values are assigned to the targets. If the - There is a variation of `SELECT INTO` using the `BULK COLLECT` clause that allows a result set of more than one row that's returned into a collection. See [SELECT BULK COLLECT](/epas/latest/application_programming/epas_compat_spl/12_working_with_collections/04_using_the_bulk_collect_clause/01_select_bulk_collect/#select_bulk_collect) for more information. +## Including the WHEN NO_DATA_FOUND clause + You can use the `WHEN NO_DATA_FOUND` clause in an `EXCEPTION` block to determine whether the assignment was successful, that is, at least one row was returned by the query. This version of the `emp_sal_query` procedure uses the variation of `SELECT INTO` that returns the result set into a record. It also uses the `EXCEPTION` block containing the `WHEN NO_DATA_FOUND` conditional expression. @@ -77,6 +83,8 @@ EXEC emp_sal_query(0); Employee # 0 not found ``` +## Including a TOO_MANY_ROWS exception + Another conditional clause useful in the `EXCEPTION` section with `SELECT INTO` is the `TOO_MANY_ROWS` exception. If more than one row is selected by the `SELECT INTO` statement, SPL throws an exception. 
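+For instance, here's a minimal sketch that traps both conditions in one block. The `emp` sample table and the department number are illustrative assumptions:
+
+```sql
+DECLARE
+    v_ename   emp.ename%TYPE;
+BEGIN
+    -- Selecting by department can match zero, one, or many rows
+    SELECT ename INTO v_ename FROM emp WHERE deptno = 20;
+    DBMS_OUTPUT.PUT_LINE('Employee: ' || v_ename);
+EXCEPTION
+    WHEN NO_DATA_FOUND THEN
+        DBMS_OUTPUT.PUT_LINE('No employee found in department 20');
+    WHEN TOO_MANY_ROWS THEN
+        DBMS_OUTPUT.PUT_LINE('Department 20 has more than one employee');
+END;
+```
+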
When the following block is executed, the `TOO_MANY_ROWS` exception is thrown since there are many employees in the specified department: diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/07_update.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/07_update.mdx index 806dd0e583e..f7c42b4fd64 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/07_update.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/07_update.mdx @@ -33,7 +33,7 @@ END; The `SQL%FOUND` conditional expression returns `TRUE` if a row is updated, `FALSE` otherwise. See [Obtaining the result status](08_obtaining_the_result_status/#obtaining_the_result_status) for a discussion of `SQL%FOUND` and other similar expressions. -The following shows the update on the employee using this procedure: +This example shows the update on the employee: ```sql EXEC emp_comp_update(9503, 6540, 1200); diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/08_obtaining_the_result_status.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/08_obtaining_the_result_status.mdx index 1e299f9e41a..c890897d3b6 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/08_obtaining_the_result_status.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/04_basic_statements/08_obtaining_the_result_status.mdx @@ -9,7 +9,7 @@ redirects: You can use several attributes to determine the effect of a command. `SQL%FOUND` is a Boolean that returns `TRUE` if at least one row was affected by an `INSERT`, `UPDATE` or `DELETE` command or a `SELECT INTO` command retrieved one or more rows. -The following anonymous block inserts a row and then displays the fact that the row was inserted: +This anonymous block inserts a row and then displays the fact that the row was inserted: ```sql BEGIN diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/01_if_then.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/01_if_then.mdx index 04811ae9c8d..4063d33e5d6 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/01_if_then.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/01_if_then.mdx @@ -7,6 +7,8 @@ redirects: +## Syntax + ```sql IF boolean-expression THEN @@ -15,6 +17,8 @@ END IF; `IF-THEN` statements are the simplest form of `IF`. The statements between `THEN` and `END IF` are executed if the condition is `TRUE`. Otherwise, they are skipped. 
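+A quick sketch of the shape first (the variable name and value are hypothetical):
+
+```sql
+DECLARE
+    v_comm   NUMBER := 400;
+BEGIN
+    -- The statements run only when the condition evaluates to TRUE
+    IF v_comm > 0 THEN
+        DBMS_OUTPUT.PUT_LINE('Commission: ' || v_comm);
+    END IF;
+END;
+```
+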
+## Example + This example uses `IF-THEN` statement to test and display employees who have a commission: ```sql diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/02_if_then_else.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/02_if_then_else.mdx index ff47f3ad23a..43036e577cb 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/02_if_then_else.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/02_if_then_else.mdx @@ -7,6 +7,8 @@ redirects: +## Syntax + ```sql IF boolean-expression THEN @@ -17,6 +19,8 @@ END IF; `IF-THEN-ELSE` statements add to `IF-THEN` by letting you specify an alternative set of statements to execute if the condition evaluates to false. +## Example + This example shows an `IF-THEN-ELSE` statement being used to display the text `Non-commission` if an employee doesn't get a commission: ```sql diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/04_if_then_elseif_else.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/04_if_then_elseif_else.mdx index bbe3ebe2173..5be6ab73913 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/04_if_then_elseif_else.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/04_if_then_elseif_else.mdx @@ -7,6 +7,8 @@ redirects: +## Syntax + ```sql IF boolean-expression THEN @@ -21,6 +23,8 @@ END IF; `IF-THEN-ELSIF-ELSE` provides a method of checking many alternatives in one statement. Formally it is equivalent to nested `IF-THEN-ELSE-IF-THEN` commands, but only one `END IF` is needed. 
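+A quick sketch of the ladder first (the salary value and band boundaries are hypothetical):
+
+```sql
+DECLARE
+    v_sal   NUMBER := 30000;
+BEGIN
+    -- Conditions are tested from the top; the first TRUE branch runs
+    IF v_sal < 25000 THEN
+        DBMS_OUTPUT.PUT_LINE('Band 1: under 25,000');
+    ELSIF v_sal < 50000 THEN
+        DBMS_OUTPUT.PUT_LINE('Band 2: 25,000 to 49,999');
+    ELSE
+        DBMS_OUTPUT.PUT_LINE('Band 3: 50,000 and up');
+    END IF;
+END;
+```
+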
+## Example + The following example uses an `IF-THEN-ELSIF-ELSE` statement to count the number of employees by compensation ranges of $25,000: ```sql diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/index.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/index.mdx index 21cb499ac41..58abf3ff2d7 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/index.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/01_if_statement/index.mdx @@ -1,5 +1,6 @@ --- title: "IF statement" +indexCards: simple redirects: - /epas/latest/epas_compat_spl/05_control_structures/01_if_statement/ #generated for docs/epas/reorg-role-use-case-mode - /epas/latest/application_programming/epas_compat_spl/05_control_structures/01_if_statement/ diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/02_return_statement.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/02_return_statement.mdx index 5044f4ba18f..2cefb6d86dd 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/02_return_statement.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/02_return_statement.mdx @@ -13,6 +13,8 @@ redirects: The `RETURN` statement terminates the current function, procedure, or anonymous block and returns control to the caller. +## Syntax + The `RETURN` statement has two forms. The first form of the `RETURN` statement terminates a procedure or function that returns `void`. The syntax of this form is: ```sql @@ -27,6 +29,8 @@ RETURN ; `expression` must evaluate to the same data type as the return type of the function. +## Example + This example uses the `RETURN` statement and returns a value to the caller: ```sql diff --git a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/03_goto_statement.mdx b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/03_goto_statement.mdx index 6978d316ebf..95b9fab493d 100644 --- a/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/03_goto_statement.mdx +++ b/product_docs/docs/epas/15/reference/application_programmer_reference/stored_procedural_language_reference/05_control_structures/03_goto_statement.mdx @@ -11,7 +11,11 @@ redirects: -The `GOTO` statement causes the point of execution to jump to the statement with the specified label. The syntax of a `GOTO` statement is: +The `GOTO` statement causes the point of execution to jump to the statement with the specified label. + +## Syntax + +The syntax of a `GOTO` statement is: ```sql GOTO