diff --git a/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/05_schema_diff.mdx b/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/05_schema_diff.mdx
index 274e82b83b8..2888bd9bbb3 100644
--- a/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/05_schema_diff.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/05_schema_diff.mdx
@@ -11,15 +11,15 @@ legacyRedirectsGenerated:
The Schema Diff feature allows you to:
-> - Compare and synchronize the database objects (from source to target).
-> - Visualize the differences between database objects.
-> - List the differences in SQL statement for target database objects.
-> - Generate synchronization scripts.
+- Compare and synchronize the database objects (from source to target).
+- Visualize the differences between database objects.
+- List the differences in SQL statements for target database objects.
+- Generate synchronization scripts.
**Note:**
-> - The source and target database servers must be of the same major version.
-> - If you compare two **schemas** then dependencies won't be resolved.
+- The source and target database servers must be of the same major version.
+- If you compare two **schemas**, then dependencies won't be resolved.
Click on *Schema Diff* under the *Tools* menu to open a selection panel. To compare **databases** choose the source and target servers, and databases. To compare **schemas** choose the source and target servers, databases, and schemas. After selecting the objects, click on the *Compare* button.
@@ -29,9 +29,9 @@ You can open multiple copies of `Schema Diff` in individual tabs simultaneously.
Use the [Preferences](../03_toc_pem_client/04_preferences/#preferences) dialog to specify the following:
-> - *Schema Diff* should open in a new browser tab. Set *Open in new browser tab* option to true.
-> - *Schema Diff* should ignore the whitespaces while comparing string objects. Set *Ignore whitespaces* option to true.
-> - *Schema Diff* should ignore the owner while comparing objects. Set *Ignore owner* option to true.
+- *Schema Diff* should open in a new browser tab. Set the *Open in new browser tab* option to true.
+- *Schema Diff* should ignore whitespace while comparing string objects. Set the *Ignore whitespaces* option to true.
+- *Schema Diff* should ignore the owner while comparing objects. Set the *Ignore owner* option to true.
The `Schema Diff` panel is divided into two panels; an Object Comparison panel and a DDL Comparison panel.
@@ -51,10 +51,10 @@ Use the drop-down lists of Database Objects to view the DDL statements.
In the upper-right hand corner of the object comparison panel is a *Filter* option that you can use to filter the database objects based on the following comparison criteria:
-> - Identical – If the object is found in both databases with the same SQL statement, then the comparison result is identical.
-> - Different – If the object is found in both databases but have different SQL statements, then the comparison result is different.
-> - Source Only – If the object is found in source database only and not in target database, then the comparison result is source only.
-> - Target Only – If the object is found in target database only and not in source database, then the comparison result is target only.
+- Identical – If the object is found in both databases with the same SQL statement, then the comparison result is identical.
+- Different – If the object is found in both databases but has different SQL statements, then the comparison result is different.
+- Source Only – If the object is found only in the source database and not in the target database, then the comparison result is source only.
+- Target Only – If the object is found only in the target database and not in the source database, then the comparison result is target only.
![Schema diff filter option](../images/schema_diff_filter_option.png)
@@ -78,6 +78,9 @@ Also, you can generate the SQL script of the differences found in the target dat
Select the database objects and click on the *Generate Script* button to open the `Query Tool` in a new tab, with the difference in the SQL statement displayed in the `Query Editor`.
+!!! Note
+ If the `ENABLE_DATA_ACCESS_TOOLS` configuration option is set to False, the `Generate Script` option is disabled.
+
If you have clicked on the database object to check the difference generated in the `DDL Comparison` Panel, and you have not selected the checkbox of the database object, PEM will open the `Query Tool` in a new tab, with the differences in the SQL statements displayed in the `Query Editor`.
You can also use the `Copy` button to copy the difference generated in the `DDL Comparison` panel.
diff --git a/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/06_erd_tool.mdx b/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/06_erd_tool.mdx
index d2b14ed1911..b55cfd64e37 100644
--- a/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/06_erd_tool.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/08_toc_pem_developer_tools/06_erd_tool.mdx
@@ -40,6 +40,9 @@ Hover over an icon on Toolbar to display a tooltip that describes the icon's fun
| `Generate SQL` | Click the `Generate SQL` icon to generate the DDL SQL for the diagram and open a query tool with the generated SQL ready for execution. | Option + Ctrl + S |
| `Download image` | Click the `Download image` icon to save the ERD diagram in an image format. | Option + Ctrl + I |
+!!! Note
+ If the `ENABLE_DATA_ACCESS_TOOLS` configuration option is set to False, the `Generate SQL` option is disabled.
+
## Editing Options
| **Icon** | **Behavior** | **Shortcut** |
From adca97455be0e96a13775e9e35b5746b742c442e Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Tue, 29 Mar 2022 15:52:59 -0400
Subject: [PATCH 23/39] first half of editing pass
---
.../09_jdbc_42.3.2.1_rel_notes.mdx | 16 ++--
.../10_jdbc_42.2.24.1_rel_notes.mdx | 2 +-
.../12_jdbc_42.2.19.1_rel_notes.mdx | 2 +-
.../18_jdbc_42.2.8.1_rel_notes.mdx | 2 +-
.../42.3.2.1/01_jdbc_rel_notes/index.mdx | 4 +-
...ling_the_connector_with_an_rpm_package.mdx | 16 ++--
...lling_the_connector_on_an_sles_12_host.mdx | 14 ++--
...deb_package_on_a_debian_or_ubuntu_host.mdx | 2 +-
...cal_installer_to_install_the_connector.mdx | 2 +-
...ing_the_advanced_server_jdbc_connector.mdx | 4 +-
.../index.mdx | 10 +--
...ing_the_advanced_server_jdbc_connector.mdx | 10 +--
.../01_additional_connection_properties.mdx | 26 +++----
...synchronous_secondary_database_servers.mdx | 78 +++++++++----------
.../02_connecting_to_the_database/index.mdx | 11 ++-
...l_statements_through_statement_objects.mdx | 26 +++----
...ieving_results_from_a_resultset_object.mdx | 14 ++--
.../05_freeing_resources.mdx | 6 +-
.../06_handling_errors.mdx | 16 ++--
.../index.mdx | 22 +++---
...cing_client-side_resource_requirements.mdx | 24 +++---
...reparedstatements_to_send_sql_commands.mdx | 16 ++--
.../03_executing_stored_procedures.mdx | 64 ++++++++-------
.../04_using_ref_cursors_with_java.mdx | 21 +++--
.../05_using_bytea_data_with_java.mdx | 37 +++++----
...object_types_and_collections_with_java.mdx | 28 ++++---
...ification_handling_with_noticelistener.mdx | 20 ++---
.../index.mdx | 6 +-
28 files changed, 238 insertions(+), 261 deletions(-)
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx
index c32078e50d3..b3ac1218955 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/09_jdbc_42.3.2.1_rel_notes.mdx
@@ -9,15 +9,15 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect
| Type | Description |
| -------------- | ------------------------------------------------------------------------------- |
-| Upstream Merge | Merged with the upstream community driver version `42.3.2`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/changelog.html#version_42.3.2) for details. |
-| New Feature | `org.checkerframework.*` was previously packaged in the EDB JDBC jar file; causing conflicts with other applications utilizing `org.checkerfamework.*` with different versions. New feature is packaging the checker framework under a custom namespace in the connector using the shade plugin. [Support Ticket: #74134] |
-| New Feature | JMS based API to interact with DBMS_AQ package seamlessly. This API has been made part of edb-jdbc code under com.edb.jms and com.edb.aq packages. |
+| Upstream merge | Merged with the upstream community driver version `42.3.2`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/changelog.html#version_42.3.2) for details. |
+| New feature | `org.checkerframework.*` was previously packaged in the EDB JDBC jar file, causing conflicts with other applications that use `org.checkerframework.*` with different versions. The checker framework is now packaged under a custom namespace in the connector using the shade plugin. [Support Ticket: #74134] |
+| New feature | JMS-based API to interact with the DBMS_AQ package seamlessly. This API is part of the edb-jdbc code under the com.edb.jms and com.edb.aq packages. |
| Enhancement | New property oidTimestamp used to change the default behavior of the driver when using the setTimeStamp method for preparedStatement. If the property oidTimestamp is set to true, the driver sets the oid to Oid.TIMESTAMP; otherwise it uses the default behavior. |
-| Bug Fix | Issue: Change in date format nls_date_format=’YYYY/MM/DD’ in EDB\*PLUS gives error. [Support Ticket: #75812] |
-| Bug Fix | Rounding differences between EDB and Oracle. [Support Ticket: #72708] |
-| Security Fix | CVE-2022-21724 as part of community merge with v42.3.2 |
-| Security Fix | CVE-2021-36373 - Removed dependency for org.apache.ant |
-| Security Fix | CVE-2020-15250 - junit fix for temporary folder. |
+| Bug fix | Issue: Change in date format nls_date_format='YYYY/MM/DD' in EDB\*PLUS gives an error. [Support Ticket: #75812] |
+| Bug fix | Rounding differences between EDB and Oracle. [Support Ticket: #72708] |
+| Security fix | CVE-2022-21724 as part of community merge with v42.3.2 |
+| Security fix | CVE-2021-36373 - Removed dependency for org.apache.ant |
+| Security fix | CVE-2020-15250 - junit fix for temporary folder. |
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx
index e5eed7a3f39..3d7bf56167c 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/10_jdbc_42.2.24.1_rel_notes.mdx
@@ -9,4 +9,4 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect
| Type | Description |
| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Upstream Merge | Merged with the upstream community driver version `42.2.24`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
\ No newline at end of file
+| Upstream merge | Merged with the upstream community driver version `42.2.24`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
\ No newline at end of file
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx
index b2ea7e65634..e39f49b9cb0 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/12_jdbc_42.2.19.1_rel_notes.mdx
@@ -8,7 +8,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect
| Type | Description |
| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Upstream Merge | Merged with the upstream community driver version `42.2.19`. See the [community JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
+| Upstream merge | Merged with the upstream community driver version `42.2.19`. See the [community JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
| Enhancement | EDB JDBC Connector now supports GSSAPI encrypted connection. See [Support for GSSAPI Encrypted Connection](../09_security_and_encryption/03_support_for_gssapi_encrypted_connection). |
!!! Note
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx
index a9405bae07b..1ec3264c78b 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/18_jdbc_42.2.8.1_rel_notes.mdx
@@ -9,7 +9,7 @@ New features, enhancements, bug fixes, and other changes in the EDB JDBC Connect
| Type | Description |
| -------------- | ------------------------------------------------------------------------------- |
-| Upstream Merge | Merged with the upstream community driver version `42.2.8`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
+| Upstream merge | Merged with the upstream community driver version `42.2.8`. See the community [JDBC documentation](https://jdbc.postgresql.org/documentation/head/index.html) for details. |
| Enhancement | EDB JDBC Connector now supports EDB Postgres Advanced Server 12. |
| Enhancement | EDB JDBC Connector is now supported on the Windows Server 2019 platform. |
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/index.mdx
index 2f70051ea19..a10261274ce 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/01_jdbc_rel_notes/index.mdx
@@ -1,10 +1,10 @@
---
-title: "Release Notes"
+title: "Release notes"
---
The EDB JDBC connector documentation describes the latest version of EDB JDBC connector.
-These release notes describe what is new in each release. When a minor or patch release introduces new functionality, indicators in the content identify which version introduced the new feature.
+These release notes describe what's new in each release. When a minor or patch release introduces new functionality, indicators in the content identify the version that introduced the new feature.
| Version | Release Date |
| ---------------------------------------- | ------------ |
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/01_installing_the_connector_with_an_rpm_package.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/01_installing_the_connector_with_an_rpm_package.mdx
index f3b1da80e26..7d6425551c4 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/01_installing_the_connector_with_an_rpm_package.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/01_installing_the_connector_with_an_rpm_package.mdx
@@ -1,5 +1,5 @@
---
-title: "Installing the Connector with an RPM Package"
+title: "Installing the Connector with an RPM package"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -47,7 +47,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install `edb-jdbc`.
-**Creating a Repository Configuration File**
+**Creating a repository configuration file**
To create the repository configuration file, assume superuser privileges, and invoke the following command:
@@ -112,7 +112,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install `edb-jdbc`.
-**Creating a Repository Configuration File**
+**Creating a repository configuration file**
To create the repository configuration file, assume superuser privileges, and invoke the following command:
@@ -173,7 +173,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install `edb-jdbc`.
-**Creating a Repository Configuration File**
+**Creating a repository configuration file**
To create the repository configuration file, assume superuser privileges, and invoke the following command:
@@ -237,7 +237,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install `edb-jdbc`.
-**Creating a Repository Configuration File**
+**Creating a repository configuration file**
To create the repository configuration file, assume superuser privileges, and invoke the following command:
@@ -288,7 +288,7 @@ To log in as a superuser:
sudo su -
```
-#### Setting up the Repository
+#### Setting up the repository
1. To register with EDB to receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
@@ -328,14 +328,14 @@ sudo su -
dnf -qy module disable postgresql
```
-#### Installing the Package
+#### Installing the package
```shell
dnf -y install edb-jdbc
```
-## Updating an RPM Installation
+## Updating an RPM installation
If you have an existing JDBC Connector RPM installation, you can use yum or dnf to upgrade your repository configuration file and update to a more recent product version. To update the `edb.repo` file, assume superuser privileges and enter:
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/02_installing_the_connector_on_an_sles_12_host.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/02_installing_the_connector_on_an_sles_12_host.mdx
index 600da17987c..647564d53a4 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/02_installing_the_connector_on_an_sles_12_host.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/02_installing_the_connector_on_an_sles_12_host.mdx
@@ -1,5 +1,5 @@
---
-title: "Installing the Connector on an SLES Host"
+title: "Installing the Connector on an SLES host"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -32,7 +32,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-### Setting up the Repository
+### Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -57,7 +57,7 @@ SUSEConnect -p PackageHub/15.3/x86_64
zypper refresh
```
-### Installing the Package
+### Installing the package
```shell
zypper -n install edb-jdbc
@@ -81,7 +81,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-### Setting up the Repository
+### Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -106,7 +106,7 @@ SUSEConnect -p PackageHub/15.3/ppc64le
zypper refresh
```
-### Installing the Package
+### Installing the package
```shell
zypper -n install edb-jdbc
@@ -187,7 +187,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-### Setting up the Repository
+### Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -216,7 +216,7 @@ zypper refresh
zypper -n install java-1_8_0-openjdk
```
-### Installing the Package
+### Installing the package
```shell
zypper -n install edb-jdbc
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/03_installing_a_deb_package_on_a_debian_or_ubuntu_host.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/03_installing_a_deb_package_on_a_debian_or_ubuntu_host.mdx
index fb1310dc538..599e3e941bc 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/03_installing_a_deb_package_on_a_debian_or_ubuntu_host.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/03_installing_a_deb_package_on_a_debian_or_ubuntu_host.mdx
@@ -1,5 +1,5 @@
---
-title: "Installing the Connector on a Debian or Ubuntu Host"
+title: "Installing the Connector on a Debian or Ubuntu host"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/04_using_the_graphical_installer_to_install_the_connector.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/04_using_the_graphical_installer_to_install_the_connector.mdx
index b4e452f66d5..4fd727ac1b6 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/04_using_the_graphical_installer_to_install_the_connector.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/04_using_the_graphical_installer_to_install_the_connector.mdx
@@ -1,5 +1,5 @@
---
-title: "Using the Graphical Installer to Install the Connector"
+title: "Using the graphical installer to install the connector"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/05_configuring_the_advanced_server_jdbc_connector.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/05_configuring_the_advanced_server_jdbc_connector.mdx
index 2a42c153631..e6c00b861b1 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/05_configuring_the_advanced_server_jdbc_connector.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/05_configuring_the_advanced_server_jdbc_connector.mdx
@@ -1,5 +1,5 @@
---
-title: "Configuring the Advanced Server JDBC Connector"
+title: "Configuring the EDB Postgres Advanced Server JDBC Connector"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -15,6 +15,6 @@ edb-jdbc18.jar supports JDBC version 4.2.
To make the JDBC driver available to Java, you must either copy the appropriate java `.jar` file for the JDBC version that you are using to your `$java_home/jre/lib/ext` directory or append the location of the `.jar` file to the `CLASSPATH` environment variable.
-Note that if you choose to append the location of the `jar` file to the `CLASSPATH` environment variable, you must include the complete pathname:
+If you choose to append the location of the `jar` file to the `CLASSPATH` environment variable, you must include the complete pathname:
`/usr/edb/jdbc/edb-jdbc18.jar`
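+
+As a quick way to confirm that the driver `jar` file is visible to Java, you can run a short check like the following sketch. The pathname shown above is the assumed install location; adjust it if your installation differs.
+
+```text
+public class ClasspathCheck {
+    public static void main(String[] args) throws Exception {
+        // Print the effective classpath; /usr/edb/jdbc/edb-jdbc18.jar should appear in it
+        // (unless the jar file was copied to $java_home/jre/lib/ext instead).
+        System.out.println("CLASSPATH: " + System.getProperty("java.class.path"));
+
+        // If the jar file is reachable, loading the driver class succeeds;
+        // otherwise a ClassNotFoundException is reported.
+        Class.forName("com.edb.Driver");
+        System.out.println("com.edb.Driver is on the classpath");
+    }
+}
+```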
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/index.mdx
index 72268f84723..70e19ad4e62 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/04_installing_and_configuring_the_jdbc_connector/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Installing and Configuring the JDBC Connector"
+title: "Installing and configuring the JDBC Connector"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,12 +11,8 @@ legacyRedirectsGenerated:
-This chapter describes how to install and configure the EDB JDBC Connector.
+Before installing the EDB JDBC Connector, you must have Java installed on your system. You can download a Java installer that matches your environment from the Oracle Java Downloads [website](http://www.oracle.com/technetwork/java/javase/downloads/index.html). Documentation that contains detailed installation instructions is available through the associated `Installation Instruction` links on the same page.
-Before installing the EDB JDBC Connector, you must have Java installed on your system; you can download a Java installer that matches your environment from the Oracle Java Downloads [website](http://www.oracle.com/technetwork/java/javase/downloads/index.html). Documentation that contains detailed installation instructions is available through the associated `Installation Instruction` links on the same page.
-
-You can use the Advanced Server graphical installer or an RPM package to add the EDB JDBC Connector to your installation.
-
-The following sections describe these installation methods.
+You can use the EDB Postgres Advanced Server graphical installer or an RPM package to add the EDB JDBC Connector to your installation.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx
index e33344bb25e..e1f6c6d6fa9 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/01_loading_the_advanced_server_jdbc_connector.mdx
@@ -1,5 +1,5 @@
---
-title: "Loading the Advanced Server JDBC Connector"
+title: "Loading the EDB Postgres Advanced Server JDBC Connector"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,12 +11,12 @@ legacyRedirectsGenerated:
-The Advanced Server JDBC driver is written in Java and is distributed in the form of a compiled JAR (Java Archive) file. Use the `Class.forName()` method to load the driver. The `forName()` method dynamically loads a Java class at runtime. When an application calls the `forName()` method, the JVM (Java Virtual Machine) attempts to find the compiled form (the bytecode) that implements the requested class.
+The EDB Postgres Advanced Server JDBC driver is written in Java and is distributed in the form of a compiled Java Archive (JAR) file. Use the `Class.forName()` method to load the driver. The `forName()` method dynamically loads a Java class at runtime. When an application calls the `forName()` method, the Java Virtual Machine (JVM) attempts to find the compiled form (the bytecode) that implements the requested class.
-The Advanced Server JDBC driver is named `com.edb.Driver`:
+The EDB Postgres Advanced Server JDBC driver is named `com.edb.Driver`:
`Class.forName("com.edb.Driver");`
-After loading the bytecode for the driver, the driver registers itself with another JDBC class (named `DriverManager`) that is responsible for managing all the JDBC drivers installed on the current system.
+After loading the bytecode for the driver, the driver registers itself with another JDBC class (named `DriverManager`) that's responsible for managing all the JDBC drivers installed on the current system.
-If the JVM is unable to locate the named driver, it throws a `ClassNotFound` exception (which is intercepted with a `catch` block near the end of the program). The `DriverManager` is designed to handle multiple JDBC driver objects. You can write a Java application that connects to more than one database system via JDBC. The next section explains how to select a specific driver.
+If the JVM can't locate the named driver, it reports a `ClassNotFound` exception (which is intercepted with a `catch` block near the end of the program). The `DriverManager` is designed to handle multiple JDBC driver objects. You can write a Java application that connects to more than one database system by way of JDBC.
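+
+The following minimal, self-contained sketch illustrates the loading sequence described above. The host, port, database name, and credentials are placeholders for your own environment, not part of the sample application shipped with the connector.
+
+```text
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+
+public class LoadDriverExample {
+    public static void main(String[] args) {
+        try {
+            // Dynamically load the EDB JDBC driver; it registers itself with DriverManager.
+            Class.forName("com.edb.Driver");
+
+            // Once registered, DriverManager can hand out connections for jdbc:edb URLs.
+            Connection con = DriverManager.getConnection(
+                "jdbc:edb://localhost:5444/edb", "enterprisedb", "password");
+            System.out.println("Driver loaded and connection established");
+            con.close();
+        } catch (ClassNotFoundException e) {
+            // Reported when the edb-jdbc jar file isn't on the classpath.
+            System.err.println("EDB JDBC driver not found: " + e.getMessage());
+        } catch (SQLException e) {
+            System.err.println("Connection failed: " + e.getMessage());
+        }
+    }
+}
+```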
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx
index 19c91c5c22f..996998c4e41 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/01_additional_connection_properties.mdx
@@ -1,5 +1,5 @@
---
-title: "Additional Connection Properties"
+title: "Additional connection properties"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,9 +11,7 @@ legacyRedirectsGenerated:
-In addition to the standard connection parameters, the Advanced Server JDBC driver supports connection properties that control behavior specific to `EDB`. You can specify these properties in the connection URL or as a `Properties` object parameter passed to `DriverManager.getConnection()`. Listing 1.2 demonstrates how to use a `Properties` object to specify additional connection properties:
-
-Listing 1.2
+In addition to the standard connection parameters, the EDB Postgres Advanced Server JDBC driver supports connection properties that control behavior specific to `EDB`. You can specify these properties in the connection URL or as a `Properties` object parameter passed to `DriverManager.getConnection()`. The example shows how to use a `Properties` object to specify additional connection properties:
```text
String url = "jdbc:edb://localhost/edb";
@@ -28,25 +26,23 @@ Connection con = DriverManager.getConnection(url, props);
```
!!! Note
- By default the combination of `SSL=true` and setting the connection URL parameter `sslfactory=org.postgresql.ssl.NonValidatingFactory` encrypts the connection but does not validate the SSL certificate. To enforce certificate validation, you must use a `Custom SSLSocketFactory`. For more details about writing a `Custom SSLSocketFactory`, review [the PostgreSQL JDBC driver documentation](https://jdbc.postgresql.org/documentation/head/ssl-factory.html).
+ By default, the combination of `SSL=true` and setting the connection URL parameter `sslfactory=org.postgresql.ssl.NonValidatingFactory` encrypts the connection but doesn't validate the SSL certificate. To enforce certificate validation, you must use a `Custom SSLSocketFactory`. For more details about writing a `Custom SSLSocketFactory`, see [the PostgreSQL JDBC driver documentation](https://jdbc.postgresql.org/documentation/head/ssl-factory.html).
To specify additional connection properties in the URL, add a question mark and an ampersand-separated list of keyword-value pairs:
`String url = "jdbc:edb://localhost/edb?user=enterprisedb&ssl=true";`
-Some of the additional connection properties are shown in the following table:
-
-Table 5-2 - Additional Connection Properties
+Some of the additional connection properties are shown in the following table.
| Name | Type | Description |
| --------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| user | String | The database user on whose behalf the connection is being made. |
| password | String | The database user’s password. |
-| ssl | Boolean | Requests an authenticated, encrypted SSL connection |
-| loggerLevel | String | The logger level of the driver. The allowed values are OFF, DEBUG or TRACE. This enables the java.util.logging.Logger Level of the driver based on the following mapping of levels: DEBUG -> FINE, TRACE -> FINEST. The loggerLevel property is intended for debugging the driver, and not for general SQL query debugging. |
-| loggerFile | String | The file name output of the logger. This parameter must be used together with loggerLevel. If set, the Logger uses a java.util.logging.FileHandler to write to a specified file. If the parameter is not set or the file can’t be created, the java.util.logging.ConsoleHandler is used instead of java.util.logging.FileHandler. |
-| charSet | String | The value of charSet determines the character set used for data sent to or received from the database. |
-| prepareThreshold | Integer | The value of prepareThreshold determines the number of PreparedStatement executions required before switching to server side prepared statements. The default is five. |
+| ssl | Boolean | Requests an authenticated, encrypted SSL connection. |
+| loggerLevel | String | The logger level of the driver. The allowed values are `OFF`, `DEBUG`, or `TRACE`. This enables the `java.util.logging.Logger` level of the driver based on the following mapping of levels: `DEBUG` -> `FINE`, `TRACE` -> `FINEST`. The `loggerLevel` property is intended for debugging the driver and not for general SQL query debugging. |
+| loggerFile | String | The file name output of the logger. This parameter must be used together with `loggerLevel`. If set, the logger uses a `java.util.logging.FileHandler` to write to a specified file. If the parameter isn't set or the file can’t be created, the `java.util.logging.ConsoleHandler` is used instead of `java.util.logging.FileHandler`. |
+| charSet | String | The value of `charSet` determines the character set used for data sent to or received from the database. |
+| prepareThreshold | Integer | The value of `prepareThreshold` determines the number of `PreparedStatement` executions required before switching to server-side prepared statements. The default is five. |
| loadBalanceHosts | Boolean | In default mode (disabled) hosts are connected in the given order. If enabled, hosts are chosen randomly from the set of suitable candidates. |
-| targetServerType | String | Allows opening connections to only servers with the required state. The allowed values are any, primary, secondary, preferSecondary, and preferSyncSecondary. The primary/secondary distinction is currently done by observing if the server allows writes. The value preferSecondary tries to connect to secondaries if any are available, otherwise allows connecting to the primary. The Advanced Server JDBC Connector supports preferSyncSecondary, which permits connection to only synchronous secondaries or the primary if there are no active synchronous secondaries. |
-| skipQuotesOnReturning | Boolean | When set to true, column names from the RETURNING clause are not quoted. This eliminates a case-sensitive comparison of the column name. When set to false (the default setting), column names are quoted. |
+| targetServerType | String | Allows opening connections to only servers with the required state. The allowed values are `any`, `primary`, `secondary`, `preferSecondary`, and `preferSyncSecondary`. The primary/secondary distinction is currently done by observing if the server allows writes. The value `preferSecondary` tries to connect to secondaries if any are available, otherwise allows connecting to the primary. The EDB Postgres Advanced Server JDBC Connector supports `preferSyncSecondary`, which permits connection to only synchronous secondaries or the primary if there are no active synchronous secondaries. |
+| skipQuotesOnReturning | Boolean | When set to `true`, column names from the `RETURNING` clause aren't quoted. This eliminates a case-sensitive comparison of the column name. When set to `false` (the default setting), column names are quoted. |
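+
+To complement the table, the following sketch passes a few of these properties through a `Properties` object. The connection URL and credentials are placeholders, and the property values are arbitrary examples rather than recommendations.
+
+```text
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+public class ConnectionPropertiesExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("com.edb.Driver");
+
+        String url = "jdbc:edb://localhost:5444/edb";
+
+        Properties props = new Properties();
+        props.setProperty("user", "enterprisedb");
+        props.setProperty("password", "password");
+        // Properties from the table; the values here are illustrative only.
+        props.setProperty("loggerLevel", "DEBUG");
+        props.setProperty("prepareThreshold", "3");
+        props.setProperty("skipQuotesOnReturning", "true");
+
+        try (Connection con = DriverManager.getConnection(url, props)) {
+            System.out.println("Connected with additional connection properties");
+        }
+    }
+}
+```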
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx
index e4aed7b4220..6a5c1b3f568 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/02_preferring_synchronous_secondary_database_servers.mdx
@@ -1,5 +1,5 @@
---
-title: "Preferring Synchronous Secondary Database Servers"
+title: "Preferring synchronous secondary database servers"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,9 +11,9 @@ legacyRedirectsGenerated:
-The Advanced Server JDBC Connector supports the `preferSyncSecondary` option for the `targetServerType` connection property as noted in Table 5-2.
+The EDB Postgres Advanced Server JDBC Connector supports the `preferSyncSecondary` option for the `targetServerType` connection property.
-The `preferSyncSecondary` option provides a preference for synchronous, standby servers for failover connection, and thus ignoring asynchronous servers.
+The `preferSyncSecondary` option provides a preference for synchronous standby servers for failover connections, ignoring asynchronous servers.
The specification of this capability in the connection URL is shown by the following syntax:
@@ -22,19 +22,22 @@ jdbc:edb://primary:port,secondary_1:port_1,secondary_2:port_2,.../
database?targetServerType=preferSyncSecondary
```
-**Parameters**
+## Parameters
`primary:port`
-The IP address or a name assigned to the primary database server followed by its port number. If `primary` is a name, it must be specified with its IP address in the `/etc/hosts` file on the host running the Java program. **Note:** The primary database server can be specified in any location in the list. It does not have to precede the secondary database servers.
+The IP address or a name assigned to the primary database server followed by its port number. If `primary` is a name, you must specify it with its IP address in the `/etc/hosts` file on the host running the Java program.
+
+!!! Note
+ You can specify the primary database server in any location in the list. It doesn't have to precede the secondary database servers.
`secondary_n:port_n`
-The IP address or a name assigned to a standby, secondary database server followed by its port number. If `secondary_n` is a name, it must be specified with its IP address in the `/etc/hosts` file on the host running the Java program.
+The IP address or a name assigned to a standby, secondary database server followed by its port number. If `secondary_n` is a name, you must specify it with its IP address in the `/etc/hosts` file on the host running the Java program.
`database`
-The name of the database to which the connection is to be made.
+The name of the database to which to make the connection.
The following is an example of the connection URL:
@@ -45,35 +48,31 @@ con = DriverManager.getConnection(url, "enterprisedb", "edb");
The following characteristics apply to the `preferSyncSecondary` option:
-- The primary database server may be specified in any location in the connection list.
-- Connection for accessing the database for usage by the Java program is first attempted on a synchronous secondary. The secondary servers are available for read-only operations.
+- You can specify the primary database server in any location in the connection list.
+- Connection for accessing the database for use by the Java program is first attempted on a synchronous secondary. The secondary servers are available for read-only operations.
- No connection attempt is made to any servers running in asynchronous mode.
-- The order in which connection attempts are made is determined by the `loadBalanceHosts` connection property as described in Table 5‑2. If disabled, which is the default setting, connection attempts are made in the left-to-right order specified in the connection list. If enabled, connection attempts are made randomly.
-- If connection cannot be made to a synchronous secondary, then connection to the primary database server is used. If the primary database server is not active, then the connection attempt fails.
+- The order in which connection attempts are made is determined by the `loadBalanceHosts` connection property. If disabled, which is the default setting, connection attempts are made in the left-to-right order specified in the connection list. If enabled, connection attempts are made randomly.
+- If connection can't be made to a synchronous secondary, then connection to the primary database server is used. If the primary database server isn't active, then the connection attempt fails.
-The synchronous secondaries to be used for the `preferSyncSecondary` option must be configured for hot standby usage.
+The synchronous secondaries to use for the `preferSyncSecondary` option must be configured for hot standby usage.
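+
+Putting the pieces together, a short sketch of a connection that prefers synchronous secondaries might look like the following. The host names, ports, and credentials are placeholders; each host name must resolve (for example, through `/etc/hosts`) on the machine running the Java program.
+
+```text
+import java.sql.Connection;
+import java.sql.DriverManager;
+
+public class PreferSyncSecondaryExample {
+    public static void main(String[] args) throws Exception {
+        Class.forName("com.edb.Driver");
+
+        // The primary can appear anywhere in the host list.
+        String url = "jdbc:edb://primary:5444,secondary1:5445,secondary2:5446/edb?targetServerType=preferSyncSecondary";
+
+        try (Connection con = DriverManager.getConnection(url, "enterprisedb", "password")) {
+            // The driver tries the synchronous secondaries first and falls back to the
+            // primary only if no synchronous secondary accepts the connection.
+            System.out.println("Connected: " + con.getMetaData().getURL());
+        }
+    }
+}
+```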
-The following section provides a brief overview of setting up the primary and secondary database servers for hot standby, synchronous replication.
+## Overview of configuring primary and secondary database servers
-## Configuring Primary and Secondary Database Servers Overview
-
-The process for configuring a primary and secondary database servers are described in the PostgreSQL documentation.
+The process for configuring primary and secondary database servers is described in the PostgreSQL documentation.
For general information on hot standby usage, which is needed for the `preferSyncSecondary` option, see [the PostgreSQL core documentation](https://www.postgresql.org/docs/12/static/hot-standby.html).
-For information about creating a base backup for the secondary database server from the primary, see Section 25.3.2, *Making a Base Backup* (describes usage of the pg_basebackup utility program) or Section 25.3.3, *Making a Base Backup Using the Low Level API* within Section 25.3 *Continuous Archiving and Point-in-Time Recovery (PITR)* in [The PostgreSQL Core Documentation](https://www.postgresql.org/docs/12/static/continuous-archiving.html).
-
-For information on the configuration parameters that must be set for hot standby usage, see [Section 19.6, Replication](https://www.postgresql.org/docs/12/static/runtime-config-replication.html).
+For information about creating a base backup for the secondary database server from the primary, see Section 25.3.2, *Making a Base Backup* (describes usage of the `pg_basebackup` utility program) or Section 25.3.3, *Making a Base Backup Using the Low Level API* in Section 25.3 *Continuous Archiving and Point-in-Time Recovery (PITR)* in [The PostgreSQL Core Documentation](https://www.postgresql.org/docs/12/static/continuous-archiving.html).
-The following section provides a basic example of setting up the primary and secondary database servers.
+For information on the configuration parameters to set for hot standby usage, see [Section 19.6, Replication](https://www.postgresql.org/docs/12/static/runtime-config-replication.html).
-## Example: Primary and Secondary Database Servers
+## Example: Primary and secondary database servers
In the example that follows, the:
-- primary database server resides on host `192.168.2.24`, port `5444`
-- Secondary database server is named `secondary1` and resides on host `192.168.2.22`, port `5445`
-- Secondary database server is named `secondary2` and resides on host `192.162.2.24`, port `5446` (same host as the primary)
+- Primary database server resides on host `192.168.2.24`, port `5444`.
+- Secondary database server is named `secondary1` and resides on host `192.168.2.22`, port `5445`.
+- Secondary database server is named `secondary2` and resides on host `192.162.2.24`, port `5446` (same host as the primary).
In the primary database server’s `pg_hba.conf` file, there must be a replication entry for each unique replication database `USER/ADDRESS` combination for all secondary database servers. In the following example, the database superuser `enterprisedb` is used as the replication database user for both the `secondary1` database server on `192.168.2.22` and the `secondary2` database server that is local relative to the primary.
@@ -83,7 +82,7 @@ host replication enterprisedb 192.168.2.22/32 md5
host replication enterprisedb 127.0.0.1/32 md5
```
-After the primary database server has been configured in the `postgresql.conf` file along with its `pg_hba.conf` file, database server `secondary1` is created by invoking the following command on host `192.168.2.22` for `secondary1`:
+After the primary database server is configured in the `postgresql.conf` file along with its `pg_hba.conf` file, database server `secondary1` is created by invoking the following command on host `192.168.2.22`:
```text
su – enterprisedb
@@ -91,7 +90,7 @@ Password:
-bash-4.1$ pg_basebackup -D /opt/secondary1 -h 192.168.2.24 -p 5444 -Fp -R -X stream -l 'Secondary1'
```
-On the secondary database server, `/opt/secondary1`, a `recovery.conf` file is generated in the database cluster, which has been edited in the following example by adding the `application_name=secondary1` setting as part of the `primary_conninfo` string and removal of some of the other unneeded options automatically generated by `pg_basebackup`. Also note the use of the `standby_mode = 'on'` parameter.
+On the secondary database server, `/opt/secondary1`, a `recovery.conf` file is generated in the database cluster, which was edited in the following example by adding the `application_name=secondary1` setting as part of the `primary_conninfo` string and removing some of the other unneeded options automatically generated by `pg_basebackup`. Also note the use of the `standby_mode = 'on'` parameter.
```text
standby_mode = 'on'
@@ -122,7 +121,7 @@ NOTICE: pg_stop_backup complete, all required WAL segments have been archived
(1 row)
```
-On the secondary database server `/opt/secondary2`, create the `recovery.conf` file in the database cluster. Note the `application_name=secondary2` setting as part of the `primary_conninfo` string as shown in the following example. Also be sure to include the `standby_mode = 'on'` parameter.
+On the secondary database server `/opt/secondary2`, create the `recovery.conf` file in the database cluster. The `application_name=secondary2` setting is part of the `primary_conninfo` string as shown in the following example. Also be sure to include the `standby_mode = 'on'` parameter.
```text
standby_mode = 'on'
@@ -134,24 +133,23 @@ The application name `secondary2` must be included in the `synchronous_standby_n
You must ensure the configuration parameter settings in the `postgresql.conf` file of the secondary database servers are properly set (particularly `hot_standby=on`).
!!! Note
- As of EDB Postgres Advanced Server v12, the `recovery.conf` file is no longer valid; it is replaced by the `standby.signal` file. As a result, `primary_conninfo` is moved from the `recovery.conf` file to the `postgresql.conf` file. The presence of `standby.signal` file signals the cluster to run in standby mode. Please note that even if you try to create a `recovery.conf` file manually and keep it under the `data` directory, the server will fail to start and throw an error.
+ As of EDB Postgres Advanced Server v12, the `recovery.conf` file is no longer valid. It's replaced by the `standby.signal` file. As a result, `primary_conninfo` is moved from the `recovery.conf` file to the `postgresql.conf` file. The presence of the `standby.signal` file signals the cluster to run in standby mode. Even if you try to create a `recovery.conf` file manually and keep it under the `data` directory, the server fails to start and reports an error.
The parameter `standby_mode=on` is also removed from EDB Postgres Advanced Server v12, and the `trigger_file` parameter name is changed to `promote_trigger_file`.
-The following table lists the basic `postgresql.conf` configuration parameter settings of the primary database server as compared to the secondary database servers:
+The following table lists the basic `postgresql.conf` configuration parameter settings of the primary database server as compared to the secondary database servers.
-Table - Primary/Secondary Configuration Parameters
| Parameter | Primary | Secondary | Description |
| ------------------------- | ----------------------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| archive_mode | on | off | Completed WAL segments sent to archive storage |
-| archive_command | cp %p /*archive_dir*/%f | n/a | Archive completed WAL segments |
-| wal_level (9.5 or prior) | hot_standby | minimal | Information written to WAL segment |
-| wal_level (9.6 or later) | replica | minimal | Information written to WAL segment |
-| max_wal_senders | *n* (positive integer) | 0 | Maximum concurrent connections from standby servers |
-| wal_keep_segments | *n* (positive integer) | 0 | Minimum number of past log segments to keep for standby servers |
-| synchronous_standby_names | *n*(*secondary1*, *secondary2*,...) | n/a | List of standby servers for synchronous replication. Must be present to enable synchronous replication. These are obtained from the application_name option of the primary_conninfo parameter in the recovery.conf file of each standby server. |
-| hot_standby | off | on | Client application can connect and run queries on the secondary server in standby mode |
+| archive_mode | on | off | Completed WAL segments sent to archive storage. |
+| archive_command | cp %p /*archive_dir*/%f | n/a | Archive completed WAL segments. |
+| wal_level (9.5 or prior) | hot_standby | minimal | Information written to WAL segment. |
+| wal_level (9.6 or later) | replica | minimal | Information written to WAL segment. |
+| max_wal_senders | *n* (positive integer) | 0 | Maximum concurrent connections from standby servers. |
+| wal_keep_segments | *n* (positive integer) | 0 | Minimum number of past log segments to keep for standby servers. |
+| synchronous_standby_names | *n*(*secondary1*, *secondary2*,...) | n/a | List of standby servers for synchronous replication. Must be present to enable synchronous replication. These are obtained from the `application_name` option of the `primary_conninfo` parameter in the `recovery.conf` file of each standby server. |
+| hot_standby | off | on | Client application can connect and run queries on the secondary server in standby mode. |
The secondary database server (`secondary1`) is started:
@@ -167,7 +165,7 @@ The secondary database server (`secondary2`) is started:
server starting
```
-To ensure that the secondary database servers are properly set up in synchronous mode, use the following query on the primary database server. Note that the `sync_state` column lists applications `secondary1` and `secondary2` as sync.
+To ensure that the secondary database servers are properly set up in synchronous mode, use the following query on the primary database server. The `sync_state` column lists applications `secondary1` and `secondary2` as sync.
```text
edb=# SELECT usename, application_name, client_addr, client_port, sync_state FROM pg_stat_replication;
@@ -193,7 +191,7 @@ The `/etc/hosts` file on the host running the Java program contains the followin
192.168.2.24 localhost.localdomain secondary2
```
-For this example, the preferred synchronous secondary connection option results in the first usage attempt made on `secondary1`, then on `secondary2` if `secondary1` is not active, then on the primary if both `secondary1` and `secondary2` are not active as demonstrated by the following program that displays the IP address and port of the database server to which the connection is made.
+For this example, the preferred synchronous secondary connection option results in the first usage attempt made on `secondary1`, then on `secondary2` if `secondary1` isn't active, and then on the primary if both `secondary1` and `secondary2` aren't active, as shown by the following program. The program displays the IP address and port of the database server to which the connection is made.
```text
import java.sql.*;
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx
index 5d67ded2d63..2ef70ecb390 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Connecting to the Database"
+title: "Connecting to the database"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -20,13 +20,13 @@ String password = "enterprisedb";
Connection con = DriverManager.getConnection(url, user, password);
```
-All JDBC connections start with the `DriverManager`. The `DriverManager` class offers a static method called `getConnection()` that is responsible for creating a connection to the database. When you call the `getConnection()` method, the `DriverManager` must decide which JDBC driver to use to connect to the database; that decision is based on a URL (Uniform Resource Locator) that you pass to `getConnection()`.
+All JDBC connections start with the `DriverManager`. The `DriverManager` class offers a static method called `getConnection()` that's responsible for creating a connection to the database. When you call the `getConnection()` method, the `DriverManager` must decide which JDBC driver to use to connect to the database. The decision is based on a URL that you pass to `getConnection()`.
A JDBC URL takes the following general format:
`jdbc:<driver>:<connection parameters>`
-The first component in a JDBC URL is always `jdbc`. When using the Advanced Server JDBC Connector, the second component (the driver) is `edb`.
+The first component in a JDBC URL is always `jdbc`. When using the EDB Postgres Advanced Server JDBC Connector, the second component (the driver) is `edb`.
The Advanced Server JDBC URL takes one of the following forms:
@@ -36,14 +36,13 @@ The Advanced Server JDBC URL takes one of the following forms:
`jdbc:edb://<host>:<port>/<database>`
-The following table shows the various connection parameters:
+The following table shows the various connection parameters.
-Table - Connection Parameters
| Name | Description |
| -------- | -------------------------------------------------------------------------------------------------------- |
| host | The host name of the server. Defaults to localhost. |
-| port | The port number the server is listening on. Defaults to the Advanced Server standard port number (5444). |
+| port | The port number the server is listening on. Defaults to the EDB Postgres Advanced Server standard port number (5444). |
| database | The database name. |
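
As an illustration only (the host, port, database name, and credentials below are placeholder values, not taken from this document), a complete connection attempt built from these parameters might look like the following sketch:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectSketch
{
    public static void main(String[] args) throws SQLException
    {
        // jdbc:edb://<host>:<port>/<database> -- placeholder values only
        String url  = "jdbc:edb://localhost:5444/edb";
        String user = "enterprisedb";
        String password = "password";

        try (Connection con = DriverManager.getConnection(url, user, password))
        {
            System.out.println("Connected to " + con.getMetaData().getURL());
        }
    }
}
```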
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx
index 3bb9d3d5cb2..0e10cbbb489 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/03_executing_sql_statements_through_statement_objects.mdx
@@ -1,5 +1,5 @@
---
-title: "Executing SQL Statements through Statement Objects"
+title: "Executing SQL statements through statement objects"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,15 +11,15 @@ legacyRedirectsGenerated:
-After loading the Advanced Server JDBC Connector driver and connecting to the server, the code in the sample application builds a JDBC `Statement` object, executes an SQL query, and displays the results.
+After loading the EDB Postgres Advanced Server JDBC Connector driver and connecting to the server, the code in the sample application builds a JDBC `Statement` object, executes a SQL query, and displays the results.
-A `Statement` object sends SQL statements to a database. There are three kinds of Statement objects. Each is specialized to send a particular type of SQL statement:
+A `Statement` object sends SQL statements to a database. There are three kinds of `Statement` objects. Each is specialized to send a particular type of SQL statement:
- A `Statement` object is used to execute a simple SQL statement with no parameters.
-- A `PreparedStatement` object is used to execute a pre-compiled SQL statement with or without IN parameters.
+- A `PreparedStatement` object is used to execute a precompiled SQL statement with or without `IN` parameters.
- A `CallableStatement` object is used to execute a call to a database stored procedure.
-You must construct a `Statement` object before executing an SQL statement. The `Statement` object offers a way to send a SQL statement to the server (and gain access to the result set). Each `Statement` object belongs to a `Connection`; use the `createStatement()` method to ask the `Connection` to create the `Statement` object.
+You must construct a `Statement` object before executing a SQL statement. The `Statement` object offers a way to send a SQL statement to the server (and gain access to the result set). Each `Statement` object belongs to a `Connection`. Use the `createStatement()` method to ask the `Connection` to create the `Statement` object.
A `Statement` object defines several methods to execute different types of SQL statements. In the sample application, the `executeQuery()` method executes a `SELECT` statement:
@@ -28,27 +28,27 @@ Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM emp");
```
-The `executeQuery()` method expects a single argument: the SQL statement that you want to execute. `executeQuery()` returns data from the query in a `ResultSet` object. If the server encountered an error while executing the SQL statement provided, it throws an `SQLException` (and does not return a `ResultSet`).
+The `executeQuery()` method expects a single argument: the SQL statement that you want to execute. `executeQuery()` returns data from the query in a `ResultSet` object. If the server encounters an error while executing the SQL statement, it returns an `SQLException` rather than a `ResultSet`.
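
For instance, a caller can wrap `executeQuery()` in a `try/catch` block so the failure is reported rather than left unhandled. This is a sketch only; it assumes `import java.sql.*;`, an open `Connection con` as in the sample application, and a deliberately invalid table name:

```java
try
{
    Statement stmt = con.createStatement();
    // The server rejects this statement, so no ResultSet is produced
    ResultSet rs = stmt.executeQuery("SELECT * FROM no_such_table");
}
catch (SQLException exp)
{
    System.out.println("SQL Exception: " + exp.getMessage());
}
```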
-## Using Named Notation with a CallableStatement Object
+## Using named notation with a CallableStatement object
-The JDBC Connector (Advanced Server version 9.6 and later) supports the use of named parameters when instantiating a `CallableStatement` object. This syntax is an extension of JDBC supported syntax, and does not conform to the JDBC standard.
+The JDBC Connector (EDB Postgres Advanced Server version 9.6 and later) supports the use of named parameters when instantiating a `CallableStatement` object. This syntax is an extension of JDBC supported syntax and doesn't conform to the JDBC standard.
You can use a `CallableStatement` object to pass parameter values to a stored procedure. You can assign values to `IN`, `OUT`, and `INOUT` parameters with a `CallableStatement` object.
When using the `CallableStatement` class, you can use ordinal notation or named notation to specify values for actual arguments. You must set a value for each `IN` or `INOUT` parameter marker in a statement.
-When using ordinal notation to pass values to a `CallableStatement` object, you should use the setter method that corresponds to the parameter type. For example, when passing a `STRING` value, use the `setString` setter method. Each parameter marker within a statement (`?`) represents an ordinal value. When using ordinal parameters, you should pass the actual parameter values to the statement in the order that the formal arguments are specified within the procedure definition.
+When using ordinal notation to pass values to a `CallableStatement` object, use the setter method that corresponds to the parameter type. For example, when passing a `STRING` value, use the `setString` setter method. Each parameter marker in a statement (`?`) represents an ordinal value. When using ordinal parameters, pass the actual parameter values to the statement in the order that the formal arguments are specified in the procedure definition.
You can also use named parameter notation when specifying argument values for a `CallableStatement` object. Named parameter notation allows you to supply values for only those parameters that are required by the procedure, omitting any parameters that have acceptable default values. You can also specify named parameters in any order.
-When using named notation, each parameter name should correspond to a `COLUMN_NAME` returned by a call to the `DatabaseMetaData.getProcedureColumns` method. You should use the `=>` token when including a named parameter in a statement call.
+When using named notation, each parameter name must correspond to a `COLUMN_NAME` returned by a call to the `DatabaseMetaData.getProcedureColumns` method. Use the `=>` token when including a named parameter in a statement call.
Use the `registerOutParameter` method to identify each `OUT` or `INOUT` parameter marker in the statement.
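
As a sketch only (the procedure `update_emp` and its parameters `p_empno` and `p_bonus` are hypothetical, not objects defined in this document, and `con` is an open `Connection`), a call that combines named notation with an `OUT` parameter might look like this:

```java
// Hypothetical signature: update_emp(p_empno IN NUMBER, p_bonus OUT NUMBER)
CallableStatement cstmt = con.prepareCall(
    "{call update_emp(p_empno => ?, p_bonus => ?)}");

cstmt.setInt(1, 7369);                          // bind the IN parameter
cstmt.registerOutParameter(2, Types.NUMERIC);   // register the OUT parameter's JDBC type
cstmt.execute();

System.out.println("Bonus: " + cstmt.getBigDecimal(2));
```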
-**Examples**
+### Examples
-The following examples demonstrate using the `CallableStatement` method to provide parameters to a procedure with the following signature:
+The following examples show using the `CallableStatement` method to provide parameters to a procedure with the following signature:
```text
PROCEDURE hire_emp (ename VARCHAR2
@@ -76,7 +76,7 @@ cstmt.setInt(6, 7566);
cstmt.setInt(7, 30);
```
-The following example uses named notation to provide parameters; using named notation, you can omit parameters that have default values or re-order parameters:
+The following example uses named notation to provide parameters. Using named notation, you can omit parameters that have default values or reorder parameters:
```text
CallableStatement cstmt = con.prepareCall
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx
index 5946cd42665..d84c04f9c92 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/04_retrieving_results_from_a_resultset_object.mdx
@@ -1,5 +1,5 @@
---
-title: "Retrieving Results from a ResultSet Object"
+title: "Retrieving results from a ResultSet object"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,11 +11,11 @@ legacyRedirectsGenerated:
-A `ResultSet` object is the primary storage mechanism for the data returned by an SQL statement. Each `ResultSet` object contains both data and metadata (in the form of a `ResultSetMetaData` object). `ResultSetMetaData` includes useful information about results returned by the SQL command: column names, column count, row count, column length, and so on.
+A `ResultSet` object is the primary storage mechanism for the data returned by a SQL statement. Each `ResultSet` object contains both data and metadata in the form of a `ResultSetMetaData` object. `ResultSetMetaData` includes useful information about results returned by the SQL command: column names, column count, row count, column length, and so on.
-To access the row data stored in a `ResultSet` object, an application calls one or more `getter` methods. A `getter` method retrieves the value in a particular column of the current row. There are many different `getter` methods; each method returns a value of a particular type. For example, the `getString()` method returns a `STRING` type; the `getDate()` method returns a `Date`, and the `getInt()` method returns an `INT` type. When an application calls a `getter` method, JDBC tries to convert the value into the requested type.
+To access the row data stored in a `ResultSet` object, an application calls one or more `getter` methods. A `getter` method retrieves the value in a particular column of the current row. There are many different `getter` methods. Each method returns a value of a particular type. For example, the `getString()` method returns a `STRING` type, the `getDate()` method returns a `Date`, and the `getInt()` method returns an `INT` type. When an application calls a `getter` method, JDBC tries to convert the value into the requested type.
-Each `ResultSet` keeps an internal pointer that point to the current row. When the `executeQuery()` method returns a `ResultSet`, the pointer is positioned before the first row; if an application calls a `getter` method before moving the pointer, the `getter` method will fail. To advance to the next (or first) row, call the `ResultSet’s next()` method. `ResultSet.next()` is a boolean method; it returns `TRUE` if there is another row in the `ResultSet` or `FALSE` if you have moved past the last row.
+Each `ResultSet` keeps an internal pointer that points to the current row. When the `executeQuery()` method returns a `ResultSet`, the pointer is positioned before the first row. If an application calls a `getter` method before moving the pointer, the `getter` method fails. To advance to the next (or first) row, call the `ResultSet` object's `next()` method. `ResultSet.next()` is a Boolean method. It returns `TRUE` if there's another row in the `ResultSet` or `FALSE` if you moved past the last row.
After moving the pointer to the first row, the sample application uses the `getString()` `getter` method to retrieve the value in the first column and then prints that value. Since `ListEmployees` calls `rs.next()` and `rs.getString()` in a loop, it processes each row in the result set. `ListEmployees` exits the loop when `rs.next()` moves the pointer past the last row and returns `FALSE`.
@@ -26,8 +26,8 @@ System.out.println(rs.getString(1));
}
```
-When using the `ResultSet` interface, remember:
+When using the `ResultSet` interface:
- You must call `next()` before reading any values. `next()` returns `true` if another row is available and prepares the row for processing.
-- Under the JDBC specification, an application should access each row in the `ResultSet` only once. It is safest to stick to this rule, although at the current time, the Advanced Server JDBC driver will allow you to access a field as many times as you want.
-- When you’ve finished using a `ResultSet`, call the `close()` method to free the resources held by that object.
+- Under the JDBC specification, an application must access each row in the `ResultSet` only once. It's safest to stick to this rule, although currently the EDB Postgres Advanced Server JDBC driver allows you to access a field as many times as you want.
+- When you finish using a `ResultSet`, call the `close()` method to free the resources held by that object.
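
Putting these rules together, a typical read loop looks like the following sketch (it assumes `import java.sql.*;` and an open `Connection con`; the query itself is only an example):

```java
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT empno, ename FROM emp");

while (rs.next())                   // call next() before reading any values
{
    int    id   = rs.getInt(1);     // read each column of the current row once
    String name = rs.getString(2);
    System.out.println(id + ": " + name);
}

rs.close();                         // free the resources held by the ResultSet
stmt.close();
```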
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx
index ea58ec28b37..c7c9775e2e6 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/05_freeing_resources.mdx
@@ -1,5 +1,5 @@
---
-title: "Freeing Resources"
+title: "Freeing resources"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,7 +11,7 @@ legacyRedirectsGenerated:
-Every JDBC object consumes some number of resources. A `ResultSet` object, for example, may contain a copy of every row returned by a query; a `Statement` object may contain the text of the last command executed, and so forth. It’s usually a good idea to free up those resources when the application no longer needs them. The sample application releases the resources consumed by the `Result`, `Statement`, and `Connection` objects by calling each object’s `close()` method:
+Every JDBC object consumes resources. A `ResultSet` object, for example, might contain a copy of every row returned by a query. A `Statement` object might contain the text of the last command executed. It’s usually a good idea to free up those resources when the application no longer needs them. The sample application releases the resources consumed by the `Result`, `Statement`, and `Connection` objects by calling each object’s `close()` method:
```text
rs.close();
@@ -19,4 +19,4 @@ stmt.close();
con.close();
```
-If you attempt to use a JDBC object after closing it, that object will throw an error.
+If you attempt to use a JDBC object after closing it, that object returns an error.
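
One way to make this cleanup automatic (a sketch, not part of the original sample) is the `try`-with-resources form, which closes each object even when an error occurs. It assumes the same `url`, `user`, and `password` values used to open the connection, inside a method that declares `throws SQLException`:

```java
try (Connection con  = DriverManager.getConnection(url, user, password);
     Statement  stmt = con.createStatement();
     ResultSet  rs   = stmt.executeQuery("SELECT * FROM emp"))
{
    while (rs.next())
    {
        System.out.println(rs.getString(1));
    }
}   // rs, stmt, and con are closed here automatically, in reverse order
```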
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
index 120bc5f39cc..f38a1d76d26 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
@@ -1,5 +1,5 @@
---
-title: "Handling Errors"
+title: "Handling errors"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,15 +11,15 @@ legacyRedirectsGenerated:
-When connecting to an external resource (such as a database server), errors are bound to occur; your code should include a way to handle these errors. Both JDBC and the Advanced Server JDBC Connector provide various types of error handling. The [ListEmployees class example](./#using_the_advanced_server_jdbc_connector_with_java_applications) demonstrates how to handle an error using `try/catch` blocks.
+When connecting to an external resource (such as a database server), errors are bound to occur. Your code must include a way to handle these errors. Both JDBC and the EDB Postgres Advanced Server JDBC Connector provide various types of error handling. The [ListEmployees class example](./#using_the_advanced_server_jdbc_connector_with_java_applications) shows how to handle an error using `try/catch` blocks.
-When a JDBC object throws an error (an object of type `SQLException` or of a type derived from `SQLException`), the `SQLException` object exposes three different pieces of error information:
+When a JDBC object returns an error (an object of type `SQLException` or of a type derived from `SQLException`), the `SQLException` object exposes three pieces of error information:
-- The error message.
-- The SQL State.
-- A vendor-specific error code.
+- The error message
+- The SQL state
+- A vendor-specific error code
-In the example, the following code displays the value of these components should an error occur:
+In this example, the code displays the value of these components if an error occurs:
```text
System.out.println("SQL Exception: " + exp.getMessage());
@@ -27,7 +27,7 @@ System.out.println("SQL State: " + exp.getSQLState());
System.out.println("Vendor Error: " + exp.getErrorCode());
```
-For example, if the server tries to connect to a database that does not exist on the specified host, the following error message is displayed:
+For example, if the server tries to connect to a database that doesn't exist on the specified host, the following error message is displayed:
```text
SQL Exception: FATAL: database "acctg" does not exist
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx
index b970bf66231..9f57ee8653f 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Using the Advanced Server JDBC Connector with Java applications"
+title: "Using the EDB Postgres Advanced Server JDBC Connector with Java applications"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,9 +11,7 @@ legacyRedirectsGenerated:
-With Java and the EDB JDBC Connector in place, a Java application can access an Advanced Server database. Listing 1.1 creates an application that executes a query and prints the result set.
-
-Listing 1.1
+With Java and the EDB JDBC Connector in place, a Java application can access an EDB Postgres Advanced Server database. This example creates an application that executes a query and prints the result set.
```text
import java.sql.*;
@@ -54,15 +52,15 @@ public class ListEmployees
}
```
-This example is simple, but it demonstrates the fundamental steps required to interact with an Advanced Server database from a Java application:
+This example is simple, but it shows the fundamental steps required to interact with an EDB Postgres Advanced Server database from a Java application:
-- Load the JDBC driver
-- Build connection properties
-- Connect to the database server
-- Execute an SQL statement
-- Process the result set
-- Clean up
-- Handle any errors that may occur
+- Load the JDBC driver.
+- Build connection properties.
+- Connect to the database server.
+- Execute a SQL statement.
+- Process the result set.
+- Clean up.
+- Handle any errors that occur.
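
A condensed sketch of those steps is shown below. The connection values are placeholders and the query is illustrative; the `ListEmployees` sample in this chapter remains the authoritative version:

```java
import java.sql.*;

public class ListEmployeesSketch
{
    public static void main(String[] args)
    {
        // Placeholder connection values; with JDBC 4 and later, DriverManager
        // locates the driver from the jdbc:edb URL automatically.
        String url  = "jdbc:edb://localhost:5444/edb";
        String user = "enterprisedb";
        String password = "password";

        try (Connection con = DriverManager.getConnection(url, user, password); // connect
             Statement stmt = con.createStatement();
             ResultSet rs   = stmt.executeQuery("SELECT ename FROM emp"))       // execute a SQL statement
        {
            while (rs.next())                       // process the result set
            {
                System.out.println(rs.getString(1));
            }
        }                                           // cleanup is automatic here
        catch (SQLException exp)                    // handle any errors that occur
        {
            System.out.println("SQL Exception: " + exp.getMessage());
        }
    }
}
```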
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx
index e0290c7bb4b..7c34a6ea7fe 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/01_reducing_client-side_resource_requirements.mdx
@@ -1,5 +1,5 @@
---
-title: "Reducing Client-side Resource Requirements"
+title: "Reducing client-side resource requirements"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,23 +11,23 @@ legacyRedirectsGenerated:
-The Advanced Server JDBC driver retrieves the results of a SQL query as a `ResultSet` object. If a query returns a large number of rows, using a batched `ResultSet` will:
+The EDB Postgres Advanced Server JDBC driver retrieves the results of a SQL query as a `ResultSet` object. If a query returns a large number of rows, using a batched `ResultSet`:
-- Reduce the amount of time it takes to retrieve the first row.
-- Save time by retrieving only the rows that you need.
-- Reduce the memory requirement of the client.
+- Reduces the amount of time it takes to retrieve the first row.
+- Saves time by retrieving only the rows that you need.
+- Reduces the memory requirement of the client.
-When you reduce the fetch size of a `ResultSet` object, the driver doesn’t copy the entire `ResultSet` across the network (from the server to the client). Instead, the driver requests a small number of rows at a time; as the client application moves through the result set, the driver fetches the next batch of rows from the server.
+When you reduce the fetch size of a `ResultSet` object, the driver doesn’t copy the entire `ResultSet` across the network (from the server to the client). Instead, the driver requests a small number of rows at a time. As the client application moves through the result set, the driver fetches the next batch of rows from the server.
-Batched result sets cannot be used in all situations. Not adhering to the following restrictions will make the driver silently fall back to fetching the whole `ResultSet` at once:
+You can't use batched result sets in all situations. Not adhering to the following restrictions causes the driver to silently fall back to fetching the whole `ResultSet` at once:
- The client application must disable `autocommit`.
-- The `Statement` object must be created with a `ResultSet` type of `TYPE_FORWARD_ONLY` type (which is the default). `TYPE_FORWARD_ONLY` result sets can only step forward through the ResultSet.
+- You must create the `Statement` object with a `ResultSet` type of `TYPE_FORWARD_ONLY` (the default). `TYPE_FORWARD_ONLY` result sets can only step forward through the `ResultSet`.
- The query must consist of a single SQL statement.
-## Modifying the Batch Size of a Statement Object
+## Modifying the batch size of a statement object
-Limiting the batch size of a `ResultSet` object can speed the retrieval of data and reduce the resources needed by a client-side application. Listing 1.6 creates a `Statement` object with a batch size limited to five rows:
+Limiting the batch size of a `ResultSet` object can speed the retrieval of data and reduce the resources needed by a client-side application. The following code creates a `Statement` object with a batch size limited to five rows:
```text
// Make sure autocommit is off
@@ -65,6 +65,4 @@ For each row in the `ResultSet` object, the call to `println()` prints `a row wa
System.out.println("a row was returned.");
```
-Remember, while the `ResultSet` contains all of the rows in the table, they are only fetched from the server five rows at a time. From the client’s point of view, the only difference between a `batched` result set and an `unbatched` result set is that a batched result may return the first row in less time.
-
-Next, we will look at another feature (`the PreparedStatement`) that you can use to increase the performance of certain JDBC applications.
+While the `ResultSet` contains all of the rows in the table, they are fetched from the server only five rows at a time. From the client’s point of view, the only difference between a batched result set and an unbatched result set is that a batched result can return the first row in less time.
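
A minimal sketch of the pattern described in this section (assuming an open `Connection con`; the table name and batch size are illustrative):

```java
con.setAutoCommit(false);                       // batched fetching requires autocommit off

Statement stmt = con.createStatement();         // TYPE_FORWARD_ONLY is the default ResultSet type
stmt.setFetchSize(5);                           // request five rows per round trip to the server

ResultSet rs = stmt.executeQuery("SELECT * FROM emp");
while (rs.next())
{
    System.out.println("a row was returned.");  // rows arrive from the server in batches of five
}
rs.close();
stmt.close();
```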
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx
index 6ec39402d51..3a5c20f00f8 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/02_using_preparedstatements_to_send_sql_commands.mdx
@@ -1,5 +1,5 @@
---
-title: "Using PreparedStatements to Send SQL Commands"
+title: "Using PreparedStatements to send SQL commands"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,11 +11,9 @@ legacyRedirectsGenerated:
-Many applications execute the same SQL statement over and over again, changing one or more of the data values in the statement between each iteration. If you use a `Statement` object to repeatedly execute a SQL statement, the server must parse, plan, and optimize the statement every time. JDBC offers another `Statement` derivative, the `PreparedStatement` to reduce the amount of work required in such a scenario.
+Many applications execute the same SQL statement over and over again, changing one or more of the data values in the statement between each iteration. If you use a `Statement` object to repeatedly execute a SQL statement, the server must parse, plan, and optimize the statement every time. JDBC offers another `Statement` derivative, the `PreparedStatement`, to reduce the amount of work required in this scenario.
-Listing 1.6 demonstrates invoking a `PreparedStatement` that accepts an employee ID and employee name and inserts that employee information in the `emp` table:
-
-Listing 1.6
+The following code shows invoking a `PreparedStatement` that accepts an employee ID and employee name and inserts that employee information in the `emp` table:
```text
public void AddEmployee(Connection con)
@@ -40,7 +38,7 @@ public void AddEmployee(Connection con)
}
```
-Instead of hard-coding data values in the SQL statement, you insert `placeholders` to represent the values that will change with each iteration. Listing 1.6 shows an `INSERT` statement that includes two placeholders (each represented by a question mark):
+Instead of hard coding data values in the SQL statement, you insert placeholders to represent the values that change with each iteration. The following shows an `INSERT` statement that includes two placeholders (each represented by a question mark):
```text
String command = "INSERT INTO emp(empno,ename) VALUES(?,?)";
@@ -52,9 +50,9 @@ With the parameterized SQL statement in hand, the `AddEmployee()` method can ask
PreparedStatement stmt = con.prepareStatement(command);
```
-At this point, the `PreparedStatement` has parsed and planned the `INSERT` statement, but it does not know what values to add to the table. Before executing the `PreparedStatement`, you must supply a value for each placeholder by calling a `setter` method. `setObject()` expects two arguments:
+At this point, the `PreparedStatement` has parsed and planned the `INSERT` statement, but it doesn't know the values to add to the table. Before executing the `PreparedStatement`, you must supply a value for each placeholder by calling a `setter` method. `setObject()` expects two arguments:
-- A parameter number; parameter number one corresponds to the first question mark, parameter number two corresponds to the second question mark, etc.
+- A parameter number. Parameter number one corresponds to the first question mark, parameter number two corresponds to the second question mark, and so on.
- The value to substitute for the placeholder.
The `AddEmployee()` method prompts the user for an employee ID and name and calls `setObject()` with the values supplied by the user:
@@ -64,7 +62,7 @@ stmt.setObject(1,new Integer(c.readLine("ID:")));
stmt.setObject(2, c.readLine("Name:"));
```
-And then asks the `PreparedStatement` object to execute the statement:
+It then asks the `PreparedStatement` object to execute the statement:
```text
stmt.execute();
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx
index b35b5c955d2..26c75e853d8 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/03_executing_stored_procedures.mdx
@@ -1,5 +1,5 @@
---
-title: "Executing Stored Procedures"
+title: "Executing stored procedures"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,15 +11,15 @@ legacyRedirectsGenerated:
-A stored procedure is a module that is written in EDB’s SPL and stored in the database. A stored procedure may define input parameters to supply data to the procedure and output parameters to return data from the procedure. Stored procedures execute within the server and consist of database access commands (SQL), control statements, and data structures that manipulate the data obtained from the database.
+A stored procedure is a module that's written in EDB’s SPL and stored in the database. A stored procedure can define input parameters to supply data to the procedure and output parameters to return data from the procedure. Stored procedures execute in the server and consist of database access commands (SQL), control statements, and data structures that manipulate the data obtained from the database.
-Stored procedures are especially useful when extensive data manipulation is required before storing data from the client. It is also efficient to use a stored procedure to manipulate data in a batch program.
+Stored procedures are especially useful when extensive data manipulation is required before storing data from the client. It's also efficient to use a stored procedure to manipulate data in a batch program.
-## Invoking Stored Procedures
+## Invoking stored procedures
The `CallableStatement` class provides a way for a Java program to call stored procedures. A `CallableStatement` object can have a variable number of parameters used for input (`IN` parameters), output (`OUT` parameters), or both (`IN OUT` parameters).
-The syntax for invoking a stored procedure in JDBC is shown below. Note that the square brackets indicate optional parameters; they are not part of the command syntax.
+The syntax for invoking a stored procedure in JDBC is shown below. The square brackets indicate optional parameters. They aren't part of the command syntax.
```text
{call procedure_name([?, ?, ...])}
@@ -31,11 +31,11 @@ The syntax to invoke a procedure that returns a result parameter is:
{? = call procedure_name([?, ?, ...])}
```
-Each question mark serves as a placeholder for a parameter. The stored procedure determines if the placeholders represent `IN`, `OUT`, or `IN OUT` parameters and the Java code must match. We will show you how to supply values for `IN` (or `IN OUT`) parameters and how to retrieve values returned in `OUT` (or `IN OUT`) parameters in a moment.
+Each question mark serves as a placeholder for a parameter. The stored procedure determines if the placeholders represent `IN`, `OUT`, or `IN OUT` parameters and the Java code must match.
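
For example (a sketch only; `procedure_name` is a placeholder, and `con` is an open `Connection`), the call text above is passed unchanged to `prepareCall()`:

```java
// A procedure that takes two parameters
CallableStatement stmt = con.prepareCall("{call procedure_name(?, ?)}");

// A procedure that returns a result parameter
CallableStatement stmtWithResult = con.prepareCall("{? = call procedure_name(?, ?)}");
```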
-### Executing a Simple Stored Procedure
+### Executing a simple stored procedure
-Listing 1.7-a shows a stored procedure that increases the salary of each employee by `10%`. `increaseSalary` expects no arguments from the caller and does not return any information:
+The following shows a stored procedure that increases the salary of each employee by 10%. `increaseSalary` expects no arguments from the caller and doesn't return any information:
```sql
CREATE OR REPLACE PROCEDURE increaseSalary
@@ -45,7 +45,7 @@ IS
END;
```
-Listing 1.7-b demonstrates how to invoke the `increaseSalary` procedure:
+The following shows how to invoke the `increaseSalary` procedure:
```java
public void SimpleCallSample(Connection con)
@@ -71,21 +71,19 @@ To invoke a stored procedure from a Java application, use a `CallableStatement`
CallableStatement stmt = con.prepareCall("{call increaseSalary()}");
```
-As the name implies, the `prepareCall()` method prepares the statement, but does not execute it. As you will see in the next example, an application typically binds parameter values between the call to `prepareCall()` and the call to `execute()`. To invoke the stored procedure on the server, call the `execute()` method.
+As the name implies, the `prepareCall()` method prepares the statement but doesn't execute it. As [Executing stored procedures with IN parameters](#executing_stored_procedures_with_in_parameters) shows, an application typically binds parameter values between the call to `prepareCall()` and the call to `execute()`. To invoke the stored procedure on the server, call the `execute()` method.
```java
stmt.execute();
```
-This stored procedure (`increaseSalary`) did not expect any `IN` parameters and did not return any information to the caller (using `OUT` parameters) so invoking the procedure is simply a matter of creating a `CallableStatement` object and then calling that object’s `execute()` method.
+This stored procedure (`increaseSalary`) didn't expect any `IN` parameters and didn't return any information to the caller (using `OUT` parameters), so invoking the procedure is a matter of creating a `CallableStatement` object and then calling that object’s `execute()` method.
-The next section demonstrates how to invoke a stored procedure that requires data (`IN` parameters) from the caller.
+### Executing stored procedures with IN parameters
-### Executing Stored Procedures with IN parameters
+The code in the next example first creates and then invokes a stored procedure named `empInsert`. `empInsert` requires `IN` parameters that contain employee information: `empno`, `ename`, `job`, `sal`, `comm`, `deptno`, and `mgr`. `empInsert` then inserts that information into the `emp` table.
-The code in the next example first creates and then invokes a stored procedure named `empInsert`; `empInsert` requires `IN` parameters that contain employee information: `empno`, `ename`, `job`, `sal`, `comm`, `deptno`, and `mgr`. `empInsert` then inserts that information into the `emp` table.
-
-Listing 1.8-a creates the stored procedure in the Advanced Server database:
+The following creates the stored procedure in the EDB Postgres Advanced Server database:
```sql
CREATE OR REPLACE PROCEDURE empInsert(
@@ -109,7 +107,7 @@ BEGIN
END;
```
-Listing 1.8-b demonstrates how to invoke the stored procedure from Java:
+The following shows how to invoke the stored procedure from Java:
```java
public void CallExample2(Connection con)
@@ -136,7 +134,7 @@ public void CallExample2(Connection con)
}
```
-Each placeholder (?) in the command (`commandText`) represents a point in the command that is later replaced with data:
+Each placeholder (?) in the command (`commandText`) represents a point in the command that's later replaced with data:
```java
String commandText = "{call EMP_INSERT(?,?,?,?,?,?)}";
@@ -156,9 +154,9 @@ stmt.setObject(6, new Integer(c.readLine("Manager")));
After supplying a value for each placeholder, this method executes the statement by calling the `execute()` method.
-### Executing Stored Procedures with OUT parameters
+### Executing stored procedures with OUT parameters
-The next example creates and invokes an SPL stored procedure called `deptSelect`. This procedure requires one `IN` parameter (department number) and returns two `OUT` parameters (the department name and location) corresponding to the department number. The code in Listing 1.9-a creates the `deptSelect` procedure:
+The next example creates and invokes an SPL stored procedure called `deptSelect`. This procedure requires one `IN` parameter (department number) and returns two `OUT` parameters (the department name and location) corresponding to the department number:
```sql
CREATE OR REPLACE PROCEDURE deptSelect
@@ -178,7 +176,7 @@ BEGIN
END;
```
-Listing 1.9-b shows the Java code required to invoke the `deptSelect` stored procedure:
+The following shows the Java code required to invoke the `deptSelect` stored procedure:
```java
public void GetDeptInfo(Connection con)
@@ -204,27 +202,27 @@ public void GetDeptInfo(Connection con)
}
```
-Each placeholder (?) in the command (`commandText`) represents a point in the command that is later replaced with data:
+Each placeholder (?) in the command (`commandText`) represents a point in the command that's later replaced with data:
```java
String commandText = "{call deptSelect(?,?,?)}";
CallableStatement stmt = con.prepareCall(commandText);
```
-The `setObject()` method binds a value to an `IN` or `IN OUT` placeholder. When calling `setObject()` you must identify a placeholder (by its ordinal number) and provide a value to substitute in place of that placeholder:
+The `setObject()` method binds a value to an `IN` or `IN OUT` placeholder. When calling `setObject()`, you must identify a placeholder (by its ordinal number) and provide a value to substitute in place of that placeholder:
```java
stmt.setObject(1, new Integer(c.readLine("Dept No :")));
```
-The JDBC type of each `OUT` parameter must be registered before the `CallableStatement` objects can be executed. Registering the JDBC type is done with the `registerOutParameter()` method.
+Before executing the `CallableStatement` object, register the JDBC type of each `OUT` parameter by calling the `registerOutParameter()` method.
```java
stmt.registerOutParameter(2, Types.VARCHAR);
stmt.registerOutParameter(3, Types.VARCHAR);
```
-After executing the statement, the `CallableStatement’s`getter method retrieves the `OUT` parameter values: to retrieve a `VARCHAR` value, use the `getString()` getter method.
+After executing the statement, the `CallableStatement` getter method retrieves the `OUT` parameter values. To retrieve a `VARCHAR` value, use the `getString()` getter method.
```java
stmt.execute();
@@ -232,13 +230,13 @@ System.out.println("Dept Name: " + stmt.getString(2));
System.out.println("Location : " + stmt.getString(3));
```
-In the current example `GetDeptInfo()` registers two `OUT` parameters and (after executing the stored procedure) retrieves the values returned in the `OUT` parameters. Since both `OUT` parameters are defined as `VARCHAR` values, `GetDeptInfo()` uses the `getString()` method to retrieve the `OUT` parameters.
+In this example, `GetDeptInfo()` registers two `OUT` parameters and (after executing the stored procedure) retrieves the values returned in the `OUT` parameters. Since both `OUT` parameters are defined as `VARCHAR` values, `GetDeptInfo()` uses the `getString()` method to retrieve the `OUT` parameters.
-## Executing Stored Procedures with IN OUT parameters
+## Executing stored procedures with IN OUT parameters
-The code in the next example creates and invokes a stored procedure named `empQuery` defined with one `IN` parameter (`p_deptno`), two`IN OUT`parameters (`p_empno` and `p_ename`) and three `OUT` parameters (`p_job`,`p_hiredate` and `p_sal`). `empQuery` then returns information about the employee in the two `IN OUT` parameters and three `OUT` parameters.
+The code in the next example creates and invokes a stored procedure named `empQuery` defined with one `IN` parameter (`p_deptno`), two `IN OUT` parameters (`p_empno` and `p_ename`), and three `OUT` parameters (`p_job`, `p_hiredate`, and `p_sal`). `empQuery` then returns information about the employee in the two `IN OUT` parameters and three `OUT` parameters.
-Listing 1.10-a creates a stored procedure named `empQuery` :
+This code creates a stored procedure named `empQuery`:
```sql
CREATE OR REPLACE PROCEDURE empQuery
@@ -261,7 +259,7 @@ BEGIN
END;
```
-Listing 1.10-b demonstrates invoking the `empQuery` procedure, providing values for the `IN` parameters, and handling the `OUT` and`IN OUT`parameters:
+The following code shows invoking the `empQuery` procedure, providing values for the `IN` parameters, and handling the `OUT` and `IN OUT` parameters:
```java
public void CallSample4(Connection con)
@@ -295,7 +293,7 @@ public void CallSample4(Connection con)
}
```
-Each placeholder (?) in the command (`commandText`) represents a point in the command that is later replaced with data:
+Each placeholder (?) in the command (`commandText`) represents a point in the command that's later replaced with data:
```java
String commandText = "{call empQuery(?,?,?,?,?,?)}";
@@ -315,7 +313,7 @@ The `setString()` method binds a `String` value to an `IN` or `IN OUT` placehold
stmt.setString(3, new String(c.readLine("Employee Name:")));
```
-Before executing the `CallableStatement` , you must register the JDBC type of each `OUT` parameter by calling the `registerOutParameter()` method.
+Before executing the `CallableStatement`, you must register the JDBC type of each `OUT` parameter by calling the `registerOutParameter()` method.
```java
stmt.registerOutParameter(2, Types.INTEGER);
@@ -325,7 +323,7 @@ stmt.registerOutParameter(5, Types.TIMESTAMP);
stmt.registerOutParameter(6, Types.NUMERIC);
```
-Remember, before calling a procedure with an `IN` parameter, you must assign a value to that parameter with a setter method. Before calling a procedure with an `OUT` parameter, you register the type of that parameter; then you can retrieve the value returned by calling a getter method. When calling a procedure that defines an `IN OUT` parameter, you must perform all three actions:
+Before calling a procedure with an `IN` parameter, you must assign a value to that parameter with a setter method. Before calling a procedure with an `OUT` parameter, you register the type of that parameter. Then you can retrieve the value returned by calling a getter method. When calling a procedure that defines an `IN OUT` parameter, you must perform all three actions:
- Assign a value to the parameter.
- Register the type of the parameter.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx
index eac9a462f4f..11a99aae23a 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/04_using_ref_cursors_with_java.mdx
@@ -11,17 +11,17 @@ legacyRedirectsGenerated:
-A `REF CURSOR` is a cursor variable that contains a pointer to a query result set returned by an `OPEN` statement. Unlike a static cursor, a `REF CURSOR` is not tied to a particular query. You may open the same `REF CURSOR` variable any number of times with the `OPEN` statement containing different queries; each time, a new result set is created for that query and made available via the cursor variable. A `REF CURSOR` can also pass a result set from one procedure to another.
+A `REF CURSOR` is a cursor variable that contains a pointer to a query result set returned by an `OPEN` statement. Unlike a static cursor, a `REF CURSOR` isn't tied to a particular query. You can open the same `REF CURSOR` variable any number of times with the `OPEN` statement containing different queries. Each time, a new result set is created for that query and made available by way of the cursor variable. A `REF CURSOR` can also pass a result set from one procedure to another.
-Advanced Server supports the declaration of both `strongly-typed` and `weakly-typed` `REF CURSORs`. A strongly-typed cursor must declare the `shape` (the type of each column) of the expected result set. You can only use a strongly-typed cursor with a query that returns the declared columns; opening the cursor with a query that returns a result set with a different shape will cause the server to throw an exception. On the other hand, a weakly-typed cursor can work with a result set of any shape.
+EDB Postgres Advanced Server supports the declaration of both strongly typed and weakly typed `REF CURSOR` variables. A strongly typed cursor must declare the `shape` (the type of each column) of the expected result set. You can use only a strongly typed cursor with a query that returns the declared columns. Opening the cursor with a query that returns a result set with a different shape causes the server to return an exception. On the other hand, a weakly typed cursor can work with a result set of any shape.
-To declare a strongly-typed `REF CURSOR`:
+To declare a strongly typed `REF CURSOR`:
```Text
TYPE <cursor_type_name> IS REF CURSOR RETURN <return_type>;
```
-To declare a weakly-typed `REF_CURSOR`:
+To declare a weakly typed `REF CURSOR`:
```Text
name SYS_REFCURSOR;
@@ -29,9 +29,8 @@ name SYS_REFCURSOR;
## Using a REF CURSOR to retrieve a ResultSet
-The stored procedure shown in Listing 1.11-a (`getEmpNames`) builds two `REF CURSORs` on the server; the first `REF CURSOR` contains a list of commissioned employees in the `emp` table, while the second `REF CURSOR` contains a list of salaried employees in the `emp` table:
+The following stored procedure (`getEmpNames`) builds two `REF CURSOR` variables on the server. The first `REF CURSOR` contains a list of commissioned employees in the `emp` table. The second `REF CURSOR` contains a list of salaried employees in the `emp` table:
-Listing 1.11-a
```text
CREATE OR REPLACE PROCEDURE getEmpNames
@@ -46,9 +45,7 @@ BEGIN
END;
```
-The `RefCursorSample()` method (see Listing 1.11-b) invokes the `getEmpName()` stored procedure and displays the names returned in each of the two `REF CURSOR` variables:
-
-Listing 1.11-b
+The following `RefCursorSample()` method invokes the `getEmpNames()` stored procedure and displays the names returned in each of the two `REF CURSOR` variables:
```text
public void RefCursorSample(Connection con)
@@ -92,13 +89,13 @@ String commandText = "{call getEmpNames(?,?)}";
CallableStatement stmt = con.prepareCall(commandText);
```
-The call to `registerOutParameter()` registers the parameter type (`Types.REF`) of the first REF CURSOR (`commissioned`) :
+The call to `registerOutParameter()` registers the parameter type (`Types.REF`) of the first `REF CURSOR` (`commissioned`):
```text
stmt.registerOutParameter(1, Types.REF);
```
-Another call to `registerOutParameter()` registers the second parameter type (`Types.REF`) of the second REF CURSOR (`salaried`) :
+Another call to `registerOutParameter()` registers the second parameter type (`Types.REF`) of the second `REF CURSOR` (`salaried`):
```text
stmt.registerOutParameter(2, Types.REF);
@@ -120,7 +117,7 @@ while(commissioned.next())
}
```
-The same getter method retrieves the `ResultSet` from the second parameter and `RefCursorExample` iterates through that cursor, printing the name of each salaried employee:
+The same getter method retrieves the `ResultSet` from the second parameter, and `RefCursorExample` iterates through that cursor, printing the name of each salaried employee:
```text
ResultSet salaried = (ResultSet)stmt.getObject(2);
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx
index 43cf2b9e2b0..81a4967c3bb 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/05_using_bytea_data_with_java.mdx
@@ -1,5 +1,5 @@
---
-title: "Using BYTEA Data with Java"
+title: "Using BYTEA data with Java"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,11 +11,14 @@ legacyRedirectsGenerated:
-The `BYTEA` data type stores a binary string in a sequence of bytes; digital ../images and sound files are often stored as binary data. Advanced Server can store and retrieve binary data via the `BYTEA` data type.
+The `BYTEA` data type stores a binary string in a sequence of bytes. Digital images and sound files are often stored as binary data. EDB Postgres Advanced Server can store and retrieve binary data by way of the `BYTEA` data type.
-The following Java sample stores `BYTEA` data in an Advanced Server database and then demonstrates how to retrieve that data. The example requires a bit of setup; Listings 1.12-a, 1.12-b, and 1.12-c create the server-side environment for the Java example.
+The following Java sample stores `BYTEA` data in an EDB Postgres Advanced Server database and then shows how to retrieve that data.
-Listing 1.12-a creates a table (`emp_detail`) that stores `BYTEA` data. `emp_detail` contains two columns: the first column stores an employee’s ID number (type `INT`) and serves as the primary key for the table; the second column stores a photograph of the employee in `BYTEA` format:
+First, the following creates a table (`emp_detail`) that stores `BYTEA` data. `emp_detail` contains two columns:
+
+- The first column stores an employee’s ID number (type `INT`) and serves as the primary key for the table.
+- The second column stores a photograph of the employee in `BYTEA` format.
```text
CREATE TABLE emp_detail
@@ -25,7 +28,7 @@ CREATE TABLE emp_detail
);
```
-Listing 1.12-b creates a procedure (`ADD_PIC`) that inserts a row into the `emp_detail` table:
+The following creates a procedure (`ADD_PIC`) that inserts a row into the `emp_detail` table:
```text
CREATE OR REPLACE PROCEDURE ADD_PIC(p_empno IN int4, p_photo IN bytea) AS
@@ -34,7 +37,7 @@ BEGIN
END;
```
-And finally, Listing 1.12-c creates a function (`GET_PIC`) that returns the photograph for a given employee:
+Then, the following creates a function (`GET_PIC`) that returns the photograph for a given employee:
```text
CREATE OR REPLACE FUNCTION GET_PIC(p_empno IN int4) RETURN BYTEA IS
@@ -46,9 +49,9 @@ BEGIN
END;
```
-## Inserting BYTEA Data into an Advanced Server Database
+## Inserting BYTEA data into an EDB Postgres Advanced Server database
-Listing 1.13 shows a Java method that invokes the `ADD_PIC` procedure (see Listing 1.12-b) to copy a photograph from the client file system to the `emp_detail` table on the server:
+The following shows a Java method that invokes the `ADD_PIC` procedure to copy a photograph from the client file system to the `emp_detail` table on the server:
```text
public void InsertPic(Connection con)
@@ -88,7 +91,7 @@ int empno = Integer.parseInt(c.readLine("Employee No :"));
String fileName = c.readLine("Image filename :");
```
-If the requested file does not exist, `InsertPic()` displays an error message and terminates:
+If the requested file doesn't exist, `InsertPic()` displays an error message and terminates:
```text
File f = new File(fileName);
@@ -100,7 +103,7 @@ if(!f.exists())
}
```
-Next, `InsertPic()` prepares a `CallableStatement` object (`stmt`) that calls the `ADD_PIC` procedure. The first placeholder (?) represents the first parameter expected by `ADD_PIC (p_empno)`; the second placeholder represents the second parameter (`p_photo`). To provide actual values for those placeholders, `InsertPic()` calls two setter methods. Since the first parameter is of type `INTEGER`, `InsertPic()` calls the `setInt()` method to provide a value for `p_empno`. The second parameter is of type `BYTEA`, so `InsertPic()` uses a binary setter method; in this case, the method is `setBinaryStream()`:
+Next, `InsertPic()` prepares a `CallableStatement` object (`stmt`) that calls the `ADD_PIC` procedure. The first placeholder (?) represents the first parameter expected by `ADD_PIC (p_empno)`. The second placeholder represents the second parameter (`p_photo`). To provide actual values for those placeholders, `InsertPic()` calls two setter methods. Since the first parameter is of type `INTEGER`, `InsertPic()` calls the `setInt()` method to provide a value for `p_empno`. The second parameter is of type `BYTEA`, so `InsertPic()` uses a binary setter method. In this case, the method is `setBinaryStream()`:
```text
CallableStatement stmt = con.prepareCall("{call ADD_PIC(?, ?)}");
@@ -108,13 +111,13 @@ stmt.setInt(1, empno);
stmt.setBinaryStream(2 ,new FileInputStream(f), f.length());
```
-Now that the placeholders are bound to actual values, `InsertPic()` executes the `CallableStatement`:
+Once the placeholders are bound to actual values, `InsertPic()` executes the `CallableStatement`:
```text
stmt.execute();
```
-If all goes well, `InsertPic()` displays a message verifying that the image has been added to the table. If an error occurs, the `catch` block displays a message to the user:
+If all goes well, `InsertPic()` displays a message verifying that the image was added to the table. If an error occurs, the `catch` block displays a message to the user:
```text
System.out.println("Added image for Employee \""+empno);
@@ -126,9 +129,9 @@ catch(Exception err)
}
```
-## Retrieving BYTEA Data from an Advanced Server Database
+## Retrieving BYTEA data from an EDB Postgres Advanced Server database
-Now that you know how to insert `BYTEA` data from a Java application, Listing 1.14 demonstrates how to retrieve `BYTEA` data from the server:
+Now that you know how to insert `BYTEA` data from a Java application, the following shows how to retrieve `BYTEA` data from the server:
```text
public static void GetPic(Connection con)
@@ -179,13 +182,13 @@ stmt.execute();
byte[] b = stmt.getBytes(1);
```
-The program prompts the user for the name of the file where it will store the photograph:
+The program prompts the user for the name of the file to store the photograph:
```text
String fileName = c.readLine("Destination filename :");
```
-The `FileOutputStream` object writes the binary data that contains the photograph to the destination filename:
+The `FileOutputStream` object writes the binary data that contains the photograph to the destination file:
```text
FileOutputStream fos = new FileOutputStream(new File(fileName));
@@ -193,7 +196,7 @@ fos.write(b);
fos.close();
```
-Finally, `GetPic()` displays a message confirming that the file has been saved at the new location:
+Finally, `GetPic()` displays a message confirming that the file was saved at the new location:
```text
System.out.println("File saved at \""+fileName+"\"");
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx
index f8b00a19398..dee1f6bd0aa 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/06_using_object_types_and_collections_with_java.mdx
@@ -1,5 +1,5 @@
---
-title: "Using Object Types and Collections with Java"
+title: "Using object types and collections with Java"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,17 +11,15 @@ legacyRedirectsGenerated:
-The SQL `CREATE TYPE` command is used to create a user-defined `object type`, which is stored in the Advanced Server database. The `CREATE TYPE` command is also used to create a `collection`, commonly referred to as an `array`, which is also stored in the Advanced Server database.
+The SQL `CREATE TYPE` command is used to create a user-defined object type, which is stored in the EDB Postgres Advanced Server database. The `CREATE TYPE` command is also used to create a collection, commonly referred to as an array, which is also stored in the EDB Postgres Advanced Server database.
-These user-defined types can then be referenced within SPL procedures, SPL functions, and Java programs.
+These user-defined types can then be referenced in SPL procedures, SPL functions, and Java programs.
The basic object type is created with the `CREATE TYPE AS OBJECT` command along with optional usage of the `CREATE TYPE BODY` command.
-A `nested table type` collection is created using the `CREATE TYPE AS TABLE OF` command. A `varray type` collection is created with the `CREATE TYPE VARRAY` command.
+A nested table type collection is created using the `CREATE TYPE AS TABLE OF` command. A varray type collection is created with the `CREATE TYPE VARRAY` command.
-Example usage of an object type and a collection are shown in the following sections.
-
-Listing 1.15 shows a Java method used by both examples to establish the connection to the Advanced Server database.
+The following shows a Java method used by both upcoming examples to establish the connection to the EDB Postgres Advanced Server database.
```text
public static Connection getEDBConnection() throws
@@ -35,9 +33,9 @@ public static Connection getEDBConnection() throws
}
```
-## Using an Object Type
+## Using an object type
-Create the object types in the Advanced Server database. Object type `addr_object_type` defines the attributes of an address:
+Create the object types in the EDB Postgres Advanced Server database. Object type `addr_object_type` defines the attributes of an address:
```text
CREATE OR REPLACE TYPE addr_object_type AS OBJECT
@@ -49,7 +47,7 @@ CREATE OR REPLACE TYPE addr_object_type AS OBJECT
);
```
-Object type `emp_obj_typ` defines the attributes of an employee. Note that one of these attributes is object type `ADDR_OBJECT_TYPE` as previously described. The object type body contains a method that displays the employee information:
+Object type `emp_obj_typ` defines the attributes of an employee. One of these attributes is object type `ADDR_OBJECT_TYPE`. The object type body contains a method that displays the employee information:
```text
CREATE OR REPLACE TYPE emp_obj_typ AS OBJECT
@@ -73,7 +71,7 @@ CREATE OR REPLACE TYPE BODY emp_obj_typ AS
END;
```
-Listing 1.16 is a Java method that includes these user-defined object types:
+The following is a Java method that includes these user-defined object types:
```text
public static void testUDT() throws SQLException {
@@ -168,16 +166,16 @@ System.out.println("state: " + attrAddress[2]);
System.out.println("zip: " + attrAddress[3]);
```
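
The `attrAddress` array printed above holds the attributes of the nested address object. One common way to obtain such an array in plain JDBC is through the standard `java.sql.Struct` interface. The following is only a hedged sketch, not the documentation's listing; it assumes a `CallableStatement` named `stmt` whose first parameter was registered as an OUT parameter of `Types.STRUCT` for `EMP_OBJ_TYP`, and that the address is the third attribute of the employee object:

```text
// Hedged sketch: unpack the returned object type with java.sql.Struct.
Struct emp = (Struct) stmt.getObject(1);
Object[] attrEmp = emp.getAttributes();

// Position of the nested ADDR_OBJECT_TYPE attribute is assumed here.
Struct addr = (Struct) attrEmp[2];
Object[] attrAddress = addr.getAttributes();

System.out.println("state: " + attrAddress[2]);
System.out.println("zip: "   + attrAddress[3]);
```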
-## Using a Collection
+## Using a collection
-Create collection types `NUMBER_ARRAY` and `CHAR_ARRAY` in the Advanced Server database:
+Create collection types `NUMBER_ARRAY` and `CHAR_ARRAY` in the EDB Postgres Advanced Server database:
```text
CREATE OR REPLACE TYPE NUMBER_ARRAY AS TABLE OF NUMBER;
CREATE OR REPLACE TYPE CHAR_ARRAY AS TABLE OF VARCHAR(50);
```
-Listing 1.17-a is an SPL function that uses collection types `NUMBER_ARRAY` and `CHAR_ARRAY` as `IN` parameters and `CHAR_ARRAY` as the `OUT` parameter.
+The following is an SPL function that uses collection types `NUMBER_ARRAY` and `CHAR_ARRAY` as `IN` parameters and `CHAR_ARRAY` as the `OUT` parameter.
The function concatenates the employee ID from the `NUMBER_ARRAY IN` parameter with the employee name in the corresponding row from the `CHAR_ARRAY IN` parameter. The resulting concatenated entries are returned in the `CHAR_ARRAY OUT` parameter.
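
From the Java side, collections like these are typically passed and retrieved through the standard `java.sql.Array` interface, as the documentation's own Java method (shown after the function body below) demonstrates. The sketch below is only illustrative: the function name `concat_emp_info` is hypothetical, and it assumes the driver accepts the collection type names in `createArrayOf()` and `registerOutParameter()`:

```text
// Hedged sketch with a hypothetical function name; not the documentation's listing.
Array ids   = con.createArrayOf("NUMBER_ARRAY", new Object[]{7369, 7499, 7521});
Array names = con.createArrayOf("CHAR_ARRAY", new Object[]{"SMITH", "ALLEN", "WARD"});

CallableStatement stmt = con.prepareCall("{call concat_emp_info(?, ?, ?)}");
stmt.setArray(1, ids);
stmt.setArray(2, names);
stmt.registerOutParameter(3, Types.ARRAY, "CHAR_ARRAY");
stmt.execute();

// Read the CHAR_ARRAY OUT parameter back as a Java array.
Object[] combined = (Object[]) stmt.getArray(3).getArray();
for (Object entry : combined)
    System.out.println(entry);
```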
@@ -200,7 +198,7 @@ BEGIN
END;
```
-Listing 1.17-b is a Java method that calls the Listing 1.17-a function, passing and retrieving the collection types:
+The following is a Java method that calls the previous function, passing and retrieving the collection types:
```text
public static void testTableOfAsInOutParams() throws SQLException {
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx
index ca1dfdfeb5c..6bc11f71662 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/07_asynchronous_notification_handling_with_noticelistener.mdx
@@ -1,5 +1,5 @@
---
-title: "Asynchronous Notification Handling with NoticeListener"
+title: "Asynchronous notification handling with NoticeListener"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,11 +11,11 @@ legacyRedirectsGenerated:
-The Advanced Server JDBC Connector provides asynchronous notification handling functionality. A `notification` is a message generated by the server when an SPL (or PL/pgSQL) program executes a `RAISE NOTICE` statement. Each notification is sent from the server to the client application. To intercept a notification in a JDBC client, an application must create a `NoticeListener` object (or, more typically, an object derived from `NoticeListener`).
+The EDB Postgres Advanced Server JDBC Connector provides asynchronous notification handling functionality. A notification is a message generated by the server when an SPL (or PL/pgSQL) program executes a `RAISE NOTICE` statement. Each notification is sent from the server to the client application. To intercept a notification in a JDBC client, an application must create a `NoticeListener` object (or, more typically, an object derived from `NoticeListener`).
-It is important to understand that a notification is sent to the client as a result of executing an SPL (or PL/pgSQL) program. To generate a notification, you must execute an SQL statement that invokes a stored procedure, function, or trigger: the notification is delivered to the client as the SQL statement executes. Notifications work with any type of statement object; `CallableStatement` objects, `PreparedStatement` objects, or simple `Statement` objects. A JDBC program intercepts a notification by associating a `NoticeListener` with a `Statement` object. When the `Statement` object executes an SQL statement that raises a notice, JDBC invokes the `noticeReceived()` method in the associated `NoticeListener`.
+It's important to understand that a notification is sent to the client as a result of executing an SPL (or PL/pgSQL) program. To generate a notification, you must execute an SQL statement that invokes a stored procedure, function, or trigger. The notification is delivered to the client as the SQL statement executes. Notifications work with any type of statement object: `CallableStatement` objects, `PreparedStatement` objects, or simple `Statement` objects. A JDBC program intercepts a notification by associating a `NoticeListener` with a `Statement` object. When the `Statement` object executes an SQL statement that raises a notice, JDBC invokes the `noticeReceived()` method in the associated `NoticeListener`.
-Listing 1.18-a shows an SPL procedure that loops through the `emp` table and gives each employee a 10% raise. As each employee is processed, `adjustSalary` executes a `RAISE NOTICE` statement (in this case, the message contained in the notification reports progress to the client application):
+The following shows an SPL procedure that loops through the `emp` table and gives each employee a 10% raise. As each employee is processed, `adjustSalary` executes a `RAISE NOTICE` statement. (In this case, the message contained in the notification reports progress to the client application.)
```text
CREATE OR REPLACE PROCEDURE adjustSalary
@@ -36,7 +36,7 @@ BEGIN
END;
```
-Listing 1.18-b shows how to create a `NoticeListener` that intercepts notifications in a JDBC application:
+The following shows how to create a `NoticeListener` that intercepts notifications in a JDBC application:
```text
public void NoticeExample(Connection con)
@@ -71,13 +71,13 @@ class MyNoticeListener implements NoticeListener
}
```
-The `NoticeExample()` method is straightforward; it expects a single argument, a `Connection` object, from the caller:
+The `NoticeExample()` method is straightforward. It expects a single argument from the caller, a `Connection` object:
```text
public void NoticeExample(Connection con)
```
-`NoticeExample()` begins by preparing a call to the `adjustSalary` procedure shown in example 1.10-a. As you would expect, `con.prepareCall()` returns a `CallableStatement` object. Before executing the `CallableStatement`, you must create an object that implements the `NoticeListener` interface and add that object to the list of `NoticeListeners` associated with the `CallableStatement`:
+`NoticeExample()` begins by preparing a call to the `adjustSalary` procedure shown previously. As you would expect, `con.prepareCall()` returns a `CallableStatement` object. Before executing the `CallableStatement`, you must create an object that implements the `NoticeListener` interface and add that object to the list of `NoticeListeners` associated with the `CallableStatement`:
```text
CallableStatement stmt = con.prepareCall("{call adjustSalary()}");
@@ -85,14 +85,14 @@ MyNoticeListener listener = new MyNoticeListener();
((BaseStatement)stmt).addNoticeListener(listener);
```
-Once the `NoticeListener` is in place, `NoticeExample` method executes the `CallableStatement` (invoking the `adjustSalary` procedure on the server) and displays a message to the user:
+Once the `NoticeListener` is in place, the `NoticeExample` method executes the `CallableStatement` (invoking the `adjustSalary` procedure on the server) and displays a message to the user:
```text
stmt.execute();
System.out.println("Finished");
```
-Each time the `adjustSalary` procedure executes a `RAISE NOTICE` statement, the server sends the text of the message (`"Salary increased for ..."`) to the `Statement` (or derivative) object in the client application. JDBC invokes the `noticeReceived()` method (possibly many times) `before` the call to `stmt.execute()` completes.
+Each time the `adjustSalary` procedure executes a `RAISE NOTICE` statement, the server sends the text of the message (`"Salary increased for ..."`) to the `Statement` (or derivative) object in the client application. JDBC invokes the `noticeReceived()` method (possibly many times) before the call to `stmt.execute()` completes.
```text
class MyNoticeListener implements NoticeListener
@@ -110,4 +110,4 @@ class MyNoticeListener implements NoticeListener
When JDBC calls the `noticeReceived()` method, it creates an `SQLWarning` object that contains the text of the message generated by the `RAISE NOTICE` statement on the server.
-Notice that each `Statement` object keeps a `list` of `NoticeListeners` . When the JDBC driver receives a notification from the server, it consults the list maintained by the `Statement` object. If the list is empty, the notification is saved in the `Statement` object (you can retrieve the notifications by calling `stmt.getWarnings()` once the call to `execute()` completes). If the list is not empty, the JDBC driver delivers an `SQLWarning` to each listener, in the order in which the listeners were added to the `Statement` .
+Each `Statement` object keeps a list of `NoticeListeners`. When the JDBC driver receives a notification from the server, it consults the list maintained by the `Statement` object. If the list is empty, the notification is saved in the `Statement` object. (You can retrieve the notifications by calling `stmt.getWarnings()` once the call to `execute()` completes.) If the list isn't empty, the JDBC driver delivers an `SQLWarning` to each listener in the order in which the listeners were added to the `Statement`.
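
When no listener is registered, the saved notifications can be drained after execution through the standard `SQLWarning` chain. This is a minimal sketch, assuming a `Statement` (or `CallableStatement`) named `stmt` that has already executed:

```text
// Walk the SQLWarning chain saved on the Statement object.
SQLWarning warn = stmt.getWarnings();
while (warn != null)
{
    System.out.println("NOTICE: " + warn.getMessage());
    warn = warn.getNextWarning();
}
```

Calling `stmt.clearWarnings()` afterwards clears the chain if the statement is reused.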
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/index.mdx
index 67e081a3810..b5c08d3faf0 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/08_advanced_jdbc_connector_functionality/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Advanced JDBC Connector Functionality"
+title: "Advanced JDBC Connector functionality"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,9 +11,7 @@ legacyRedirectsGenerated:
-The previous example created a graphical user interface that displayed a result set in a `JTable`. Now we will switch gears and show you some of the more advanced features of the Advanced Server JDBC Connector.
-
-To avoid unnecessary clutter, the rest of the code samples in this document will use the console to interact with the user instead of creating a graphical user interface.
+These examples show you some of the advanced features of the EDB Postgres Advanced Server JDBC Connector.
From c36dbe1f3b8733d2a31bb70bea8e8c2de1ae6858 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 30 Mar 2022 14:42:39 +0530
Subject: [PATCH 24/39] Added the content to Online help as per PR #69
Added the content to Online help as per https://github.com/EnterpriseDB/pem/pull/69
---
.../09_pem_alerting/03_pem_alert_templates.mdx | 1 +
1 file changed, 1 insertion(+)
diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/09_pem_alerting/03_pem_alert_templates.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/09_pem_alerting/03_pem_alert_templates.mdx
index a47c2ce04f5..000b3eb199e 100644
--- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/09_pem_alerting/03_pem_alert_templates.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/09_pem_alerting/03_pem_alert_templates.mdx
@@ -136,6 +136,7 @@ Within the table, the alerts are sorted by the target of the alert. The `Templat
| BDR Group Raft Leader ID not matching | BDR group raft leader id not matching | Yes |
| BDR Group Versions check | BDR/pglogical version mismatched in BDR group | Yes |
| BDR worker error detected | BDR worker error detected reported for BDR node | |
+| Transaction ID exhaustion (wraparound) | Check for Transaction ID exhaustion (wraparound). | Yes |
## Templates applicable on Database
From 5b092764f09bbc734535a2ad33483dcc06322883 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 30 Mar 2022 15:07:35 +0530
Subject: [PATCH 25/39] Added the content to online help as per PR#71
Added the content to online help as per https://github.com/EnterpriseDB/pem/pull/71
---
.../pem/8/images/pgagent_schedule_repeat.png | 4 +-
.../pem/8/images/pgagent_scheduledetails2.png | 4 +-
.../8/images/pgagent_scheduleproperties.png | 4 +-
.../01_pem_config_options.mdx | 1 +
.../10_pgagent/03_pgagent_jobs.mdx | 73 ++++++++++---------
5 files changed, 44 insertions(+), 42 deletions(-)
diff --git a/product_docs/docs/pem/8/images/pgagent_schedule_repeat.png b/product_docs/docs/pem/8/images/pgagent_schedule_repeat.png
index bc2734f2923..776fbf94ef0 100644
--- a/product_docs/docs/pem/8/images/pgagent_schedule_repeat.png
+++ b/product_docs/docs/pem/8/images/pgagent_schedule_repeat.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:d37950a128677bc45c12a6a5eb7bd029766ac0bff771f3d1087ea6bf32e3082c
-size 133047
+oid sha256:10435500700ba294017dc06f8f6a0aab69a7b41f79a062b7012c11bd07b3750b
+size 228625
diff --git a/product_docs/docs/pem/8/images/pgagent_scheduledetails2.png b/product_docs/docs/pem/8/images/pgagent_scheduledetails2.png
index 72b9aaba4a9..28b50776955 100644
--- a/product_docs/docs/pem/8/images/pgagent_scheduledetails2.png
+++ b/product_docs/docs/pem/8/images/pgagent_scheduledetails2.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:e86610d3bdcd765922463b2755036cca85509b2099ba85eead9ff15b73f8e238
-size 308296
+oid sha256:ec74108f45aaa8a2123285746fe3028d7c6f74a00256d5879b1000620fbccb84
+size 284669
diff --git a/product_docs/docs/pem/8/images/pgagent_scheduleproperties.png b/product_docs/docs/pem/8/images/pgagent_scheduleproperties.png
index e515a681eac..5c4d2df39fd 100644
--- a/product_docs/docs/pem/8/images/pgagent_scheduleproperties.png
+++ b/product_docs/docs/pem/8/images/pgagent_scheduleproperties.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:77cd10940d907b4ea56144124f2ff9ac265667cc14ecffbdbac14ea735a8235a
-size 48220
+oid sha256:230fc68d7afcf9ac9d4679bf6f17a5185440ee36077861b6caf837f22c949d4f
+size 118838
diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options.mdx
index 4dc552e6f28..15a88d5011b 100644
--- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options.mdx
@@ -131,6 +131,7 @@ Please note that this list is subject to change.
| smtp_server | 127.0.0.1 | Specifies the SMTP server host address to be used for sending email. |
| smtp_spool_retention_time | 7 days | Specifies the number of days to retain sent email messages in the spool table before they are discarded. |
| smtp_username | | Specifies the username to be used to connect to SMTP server. |
+| smtp_message_linebreak | LF | Specifies the linebreak to be used in the email message body. |
| snmp_community | public | Specifies the SNMP community used when sending traps. Used only with SNMPv1 and SNMPv2. |
| snmp_enabled | true | Specifies whether to enable/disable sending SNMP traps. |
| snmp_port | 162 | Specifies the SNMP server port to be used for sending SNMP traps. |
diff --git a/product_docs/docs/pem/8/pem_online_help/10_pgagent/03_pgagent_jobs.mdx b/product_docs/docs/pem/8/pem_online_help/10_pgagent/03_pgagent_jobs.mdx
index 51b2d30d375..9fc92011e39 100644
--- a/product_docs/docs/pem/8/pem_online_help/10_pgagent/03_pgagent_jobs.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/10_pgagent/03_pgagent_jobs.mdx
@@ -17,23 +17,23 @@ When the pgAgent dialog opens, use the tabs on the `pgAgent Job` dialog to defin
Use the fields on the `General` tab to provide general information about a job:
-> - Provide a name for the job in the `Name` field.
->
-> - Move the `Enabled` switch to the `Yes` position to enable a job, or `No` to disable a job.
->
-> - Use the `Job Class` drop-down to select a class (for job categorization).
->
-> - Use the `Host Agent` field to specify the name of a machine that is running pgAgent to indicate that only that machine may execute the job. Leave the field blank to specify that any machine may perform the job.
->
-> **Note:** It is not always obvious what value to specify for the Host Agent in order to target a job step to a specific machine. With pgAgent running on the required machines and connected to the scheduler database, you can use the following query to view the hostnames as reported by each agent:
->
-> ```
-> SELECT jagstation FROM pgagent.pga_jobagent
-> ```
->
-> Use the hostname exactly as reported by the query in the Host Agent field.
->
-> - Use the `Comment` field to store notes about the job.
+- Provide a name for the job in the `Name` field.
+
+- Move the `Enabled` switch to the `Yes` position to enable a job, or `No` to disable a job.
+
+- Use the `Job Class` drop-down to select a class (for job categorization).
+
+- Use the `Host Agent` field to specify the name of a machine that is running pgAgent to indicate that only that machine may execute the job. Leave the field blank to specify that any machine may perform the job.
+
+ **Note:** It is not always obvious what value to specify for the Host Agent in order to target a job step to a specific machine. With pgAgent running on the required machines and connected to the scheduler database, you can use the following query to view the hostnames as reported by each agent:
+
+ ```
+ SELECT jagstation FROM pgagent.pga_jobagent
+ ```
+
+ Use the hostname exactly as reported by the query in the Host Agent field.
+
+- Use the `Comment` field to store notes about the job.
![Create pgAgent Job dialog - Steps tab](../images/pgagent_steps.png)
@@ -43,12 +43,12 @@ Use the `Steps` tab to define and manage the steps that the job will perform. Cl
Use fields on the step definition dialog to define the step:
-> - Provide a name for the step in the `Name` field; please note that steps will be performed in alphanumeric order by name.
-> - Use the `Enabled` switch to include the step when executing the job (`True`) or to disable the step (`False`).
-> - Use the `Kind` switch to indicate if the job step invokes SQL code (`SQL`) or a batch script (`Batch`).
->
-> > - If you select `SQL`, use the `Code` tab to provide SQL code for the step.
-> > - If you select `Batch`, use the `Code` tab to provide the batch script that will be executed during the step.
+- Provide a name for the step in the `Name` field; please note that steps will be performed in alphanumeric order by name.
+- Use the `Enabled` switch to include the step when executing the job (`True`) or to disable the step (`False`).
+- Use the `Kind` switch to indicate if the job step invokes SQL code (`SQL`) or a batch script (`Batch`).
+
+ - If you select `SQL`, use the `Code` tab to provide SQL code for the step.
+ - If you select `Batch`, use the `Code` tab to provide the batch script that will be executed during the step.
!!! Note
The fields `Connection type`, `Database` and `Connection string` are only applicable when `SQL` is selected because `Batch` cannot be run on remote servers.
@@ -61,14 +61,14 @@ Use fields on the step definition dialog to define the step:
- `Success` - Mark the step as completing successfully, and continue.
- `Ignore` - Ignore the error, and continue.
-> - Use the `Comment` field to provide a comment about the step.
+- Use the `Comment` field to provide a comment about the step.
![Create pgAgent Job dialog - Steps tab - Code tab](../images/pgagent_step_definition_code.png)
Use the context-sensitive field on the step definition dialog's `Code` tab to provide the SQL code or batch script that will be executed during the step:
-> - If the step invokes SQL code, provide one or more SQL statements in the `SQL query` field.
-> - If the step performs a batch script, provide the script in the `Script` field. If you are running on a Windows server, standard batch file syntax must be used. When running on a Linux server, any shell script may be used, provided that a suitable interpreter is specified on the first line (e.g. `#!/bin/sh`).
+- If the step invokes SQL code, provide one or more SQL statements in the `SQL query` field.
+- If the step performs a batch script, provide the script in the `Script` field. If you are running on a Windows server, standard batch file syntax must be used. When running on a Linux server, any shell script may be used, provided that a suitable interpreter is specified on the first line (e.g. `#!/bin/sh`).
When you've provided all of the information required by the step, click the compose icon to close the step definition dialog. Click the add icon (+) to add each additional step, or select the `Schedules` tab to define the job schedule.
@@ -80,11 +80,11 @@ Click the Add icon (+) to add a schedule for the job; then click the compose ico
Use the fields on the schedule definition tab to specify the days and times at which the job will execute.
-> - Provide a name for the schedule in the `Name` field.
-> - Use the `Enabled` switch to indicate that pgAgent should use the schedule (`Yes`) or to disable the schedule (`No`).
-> - Use the calendar selector in the `Start` field to specify the starting date and time for the schedule.
-> - Use the calendar selector in the `End` field to specify the ending date and time for the schedule.
-> - Use the `Comment` field to provide a comment about the schedule.
+- Provide a name for the schedule in the `Name` field.
+- Use the `Enabled` switch to indicate that pgAgent should use the schedule (`Yes`) or to disable the schedule (`No`).
+- Use the calendar selector in the `Start` field to specify the starting date and time for the schedule.
+- Use the calendar selector in the `End` field to specify the ending date and time for the schedule.
+- Use the `Comment` field to provide a comment about the schedule.
Select the `Repeat` tab to define the days on which the schedule will execute.
@@ -96,14 +96,15 @@ Click within a field to open a list of valid values for that field; click on a s
Use the fields within the `Days` box to specify the days on which the job will execute:
-> - Use the `Week Days` field to select the days on which the job will execute.
-> - Use the `Month Days` field to select the numeric days on which the job will execute. Specify the `Last Day` to indicate that the job should be performed on the last day of the month, irregardless of the date.
-> - Use the `Months` field to select the months in which the job will execute.
+- Use the `Week Days` field to select the days on which the job will execute.
+- Use the `Month Days` field to select the numeric days on which the job will execute. Specify the `Last Day` to indicate that the job should be performed on the last day of the month, regardless of the date.
+- Use the `Months` field to select the months in which the job will execute.
Use the fields within the `Times` box to specify the times at which the job will execute:
-> - Use the `Hours` field to select the hour at which the job will execute.
-> - Use the `Minutes` field to select the minute at which the job will execute.
+- Use the `Hours` field to select the hour at which the job will execute.
+- Use the `Minutes` field to select the minute at which the job will execute.
+- Use the `Timezone` drop-down to select the timezone to be used for the next job run.
Select the `Exceptions` tab to specify any days on which the schedule will `not` execute.
From 2c52e30895b597bc60964b056f428e9ddd238264 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Wed, 30 Mar 2022 17:20:58 +0530
Subject: [PATCH 26/39] Added release notes
Added release notes
---
.../10_pgagent/05_pgagent-schedules.mdx | 2 +-
.../pem/8/pem_rel_notes/04_840_rel_notes.mdx | 23 +++++++++++++++++++
.../docs/pem/8/pem_rel_notes/index.mdx | 1 +
3 files changed, 25 insertions(+), 1 deletion(-)
create mode 100644 product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
diff --git a/product_docs/docs/pem/8/pem_online_help/10_pgagent/05_pgagent-schedules.mdx b/product_docs/docs/pem/8/pem_online_help/10_pgagent/05_pgagent-schedules.mdx
index 027faa14a12..f838dfd8589 100644
--- a/product_docs/docs/pem/8/pem_online_help/10_pgagent/05_pgagent-schedules.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/10_pgagent/05_pgagent-schedules.mdx
@@ -17,7 +17,7 @@ Each schedule consists of the basic details such as a name, whether or not it is
![pgAgent schedule - General tab](../images/pgagent_scheduledetails1.png)
-Schedules are specified using a cron-style format. For each selected time or date element, the schedule will execute. For example, to execute at 5 minutes past every hour, simply tick '5' in the `Minutes` list box. Values from more than one field may be specified in order to further control the schedule. For example, to execute at 12:05 and 14:05 every Monday and Thursday, you would tick minute 5, hours 12 and 14, and weekdays Monday and Thursday. For additional flexibility, the `Month Days` check list includes an extra `Last Day` option. This matches the last day of the month, whether it happens to be the 28th, 29th, 30th or 31st.
+Schedules are specified using a cron-style format. For each selected time or date element, the schedule will execute. For example, to execute at 5 minutes past every hour, simply tick '5' in the `Minutes` list box. Values from more than one field may be specified in order to further control the schedule. For example, to execute at 12:05 and 14:05 every Monday and Thursday, you would tick minute 5, hours 12 and 14, and weekdays Monday and Thursday. For additional flexibility, the `Month Days` check list includes an extra `Last Day` option. This matches the last day of the month, whether it happens to be the 28th, 29th, 30th or 31st. Use the `Timezone` drop-down to select the timezone to be used for the next job run.
![pgAgent schedule - Repeat tab](../images/pgagent_scheduledetails2.png)
diff --git a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
new file mode 100644
index 00000000000..bf11f0d0d54
--- /dev/null
+++ b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
@@ -0,0 +1,23 @@
+---
+title: "Version 8.4.0"
+---
+
+New features, enhancements, bug fixes, and other changes in PEM 8.4.0 include:
+
+| Type | Description | ID |
+| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------| --------- |
+| New Feature | Add built-in support for monitoring Barman backups. | PEM-4435 |
+| Security Fix | Harden against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
+| Enhancement | Add monitoring of transaction ID (TXID) wraparound ID. | PEM-3990 |
+| Enhancement | Remove unnecessary monitoring of virtual filesystems [Support Ticket 573096]. | PEM-806 |
+| Enhancement | Add sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
+| Enhancement | Add an option to disable the Query Tool for users in order to restrict viewing data [Support Ticket 74976]. | PEM-4315 |
+| Enhancement | Support Postgres extension based probes for version flexibility. | PEM-4391 |
+| Enhancement | Improve the guide for Installation on Linux with added details and steps. | PEM-4381 |
+| Bug Fix | During a new installation, the assignment of the postgres role name as a member of the pem_admin role failed [Support Ticket 79577]. | PEM-4433 |
+| Bug Fix | For a table displayed, sort numeric fields by numeric order, not alphabetical order [Support Ticket 1111704]. | PEM-3827 |
+| Bug Fix | Limit the decimal precision displayed for monitoring percentages. | PEM-4144 |
+| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey [Support Ticket RT75870]. | PEM-4333 |
+| Bug Fix | Probe error for Postgres Extended 14. | PEM-4356 |
+| Bug Fix | PEM agent not gathering data after upgrade [Support Ticket 78679]. | PEM-4430 |
+| Bug Fix | Alert body content missing in email notification [Support Ticket 833910]. | PEM-1832 |
diff --git a/product_docs/docs/pem/8/pem_rel_notes/index.mdx b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
index 9eeddcede8f..6016e4948e1 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/index.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
@@ -6,6 +6,7 @@ The Postgres Enterprise Manager (PEM) documentation describes the latest version
| Version | Release Date | Upstream Merges | Accessibility Conformance |
| ------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------- |
+| [8.4.0](04_840_rel_notes) | 04 Apr 2022 | NA | [Conformance Report]() |
| [8.3.0](05_830_rel_notes) | 24 Nov 2021 | pgAdmin [5.7](https://www.pgadmin.org/docs/pgadmin4/5.7/release_notes_5_7.html#bug-fixes) | NA |
| [8.2.0](06_820_rel_notes) | 09 Sep 2021 | pgAdmin [5.4](https://www.pgadmin.org/docs/pgadmin4/5.4/release_notes_5_4.html#bug-fixes), [5.5](https://www.pgadmin.org/docs/pgadmin4/5.5/release_notes_5_5.html#bug-fixes), and [5.6](https://www.pgadmin.org/docs/pgadmin4/5.6/release_notes_5_6.html#bug-fixes) | NA |
| [8.1.1](07_811_rel_notes) | 22 Jul 2021 | NA | NA |
From d9da4ae8c0f40b659973aeee550de4553079134d Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 13:46:26 +0530
Subject: [PATCH 27/39] updated the release notes
updated the release notes
---
.../docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
index bf11f0d0d54..5d6fc4cc973 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
@@ -6,18 +6,18 @@ New features, enhancements, bug fixes, and other changes in PEM 8.4.0 include:
| Type | Description | ID |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------| --------- |
-| New Feature | Add built-in support for monitoring Barman backups. | PEM-4435 |
+| New Feature | Add built-in support for monitoring Barman backups using the pg-backup-api. | PEM-4435 |
| Security Fix | Harden against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
| Enhancement | Add monitoring of transaction ID (TXID) wraparound ID. | PEM-3990 |
| Enhancement | Remove unnecessary monitoring of virtual filesystems [Support Ticket 573096]. | PEM-806 |
| Enhancement | Add sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
| Enhancement | Add an option to disable the Query Tool for users in order to restrict viewing data [Support Ticket 74976]. | PEM-4315 |
-| Enhancement | Support Postgres extension based probes for version flexibility. | PEM-4391 |
+| Enhancement | Support Postgres extension-based probes for multi-version flexibility; updating to newer versions is no longer required. | PEM-4391 |
| Enhancement | Improve the guide for Installation on Linux with added details and steps. | PEM-4381 |
-| Bug Fix | During a new installation, the assignment of the postgres role name as a member of the pem_admin role failed [Support Ticket 79577]. | PEM-4433 |
+| Bug Fix | During a new installation, grant the `pem_admin` role to the superuser [Support Ticket 79577]. | PEM-4433 |
| Bug Fix | For a table displayed, sort numeric fields by numeric order, not alphabetical order [Support Ticket 1111704]. | PEM-3827 |
| Bug Fix | Limit the decimal precision displayed for monitoring percentages. | PEM-4144 |
| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey [Support Ticket RT75870]. | PEM-4333 |
| Bug Fix | Probe error for Postgres Extended 14. | PEM-4356 |
| Bug Fix | PEM agent not gathering data after upgrade [Support Ticket 78679]. | PEM-4430 |
-| Bug Fix | Alert body content missing in email notification [Support Ticket 833910]. | PEM-1832 |
+| Bug Fix | Added an option in preferences to change the line ending of the email body content from LF (Line Feed) to CRLF (Carriage Return Line Feed) to fix missing alert body content in email notifications [Support Ticket 833910]. | PEM-1832 |
From 8673b1e8f42fe92b015994e198000e1199cbd823 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 18:03:23 +0530
Subject: [PATCH 28/39] Added the content as per PEM-4458 and moved the
troubleshooting to upper level
Added the content as per PEM-4458 and moved the troubleshooting to upper level
---
.../docs/pem/8/04_troubleshooting.mdx | 76 +++++++++++++++++++
product_docs/docs/pem/8/index.mdx | 1 +
.../index.mdx | 2 +-
.../pem/8/pem_upgrade/04_troubleshooting.mdx | 48 ------------
4 files changed, 78 insertions(+), 49 deletions(-)
create mode 100644 product_docs/docs/pem/8/04_troubleshooting.mdx
delete mode 100644 product_docs/docs/pem/8/pem_upgrade/04_troubleshooting.mdx
diff --git a/product_docs/docs/pem/8/04_troubleshooting.mdx b/product_docs/docs/pem/8/04_troubleshooting.mdx
new file mode 100644
index 00000000000..b2f0741bb24
--- /dev/null
+++ b/product_docs/docs/pem/8/04_troubleshooting.mdx
@@ -0,0 +1,76 @@
+---
+title: "Troubleshooting"
+legacyRedirectsGenerated:
+ # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
+ - "/edb-docs/d/edb-postgres-enterprise-manager/installation-getting-started/upgrade-migration-guide/8.0/troubleshooting.html"
+---
+
+
+
+## Troubleshooting the PEM server installation
+
+While installing the PEM server on RHEL 8, you might encounter an error such as the following:
+
+```text
+[root@etpgxlt firstuser]# dnf install edb-pem
+Updating Subscription Management repositories.
+Last metadata expiration check: 0:01:33 ago on Wed 30 Mar 2022 01:28:16 AM EDT.
+Error:
+ Problem: problem with installed package python3-mod_wsgi-4.6.4-4.el8.s390x
+ - package python39-mod_wsgi-4.7.1-4.module+el8.4.0+9822+20bf1249.s390x conflicts with python3-mod_wsgi provided by python3-mod_wsgi-4.6.4-4.el8.s390x
+ - package python39-mod_wsgi-4.7.1-4.module+el8.4.0+9822+20bf1249.s390x conflicts with python3-mod_wsgi provided by python3-mod_wsgi-4.6.4-3.el8.s390x
+ - package edb-pem-server-8.4.0-7.rhel8.s390x requires python39-mod_wsgi >= 4.7, but none of the providers can be installed
+ - package edb-pem-8.4.0-7.rhel8.s390x requires edb-pem-server = 8.4.0-7.rhel8, but none of the providers can be installed
+ - cannot install the best candidate for the job
+(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
+[root@etpgxlt firstuser]#
+```
+
+Remove the `python3-mod_wsgi` package first:
+
+```text
+dnf remove python3-mod_wsgi
+```
+
+Then try installing the PEM server again.
+
+
+## Reconfiguring the PEM server
+
+In some situations, you might need to uninstall the PEM server, install it again, and then reconfigure it. Use the following commands in this sequence:
+
+1. Use the following command to remove the PEM server configuration and uninstall:
+
+ ```text
+    /usr/edb/pem/bin/configure-pem-server.sh -un
+ ```
+
+2. Use the following command to remove the PEM packages:
+
+ ```text
+ yum erase edb-pem-server
+ ```
+
+3. Use the following command to drop the `pem` database:
+
+ ```text
+ DROP DATABASE pem
+ ```
+
+4. Move the certificates from `/root/.pem/` to another location:
+
+ ```text
+ mv /root/.pem/*
+ ```
+
+5. Move the `agent.cfg` file from `/usr/edb/pem/agent/etc/agent.cfg` to another location:
+
+ ```text
+ mv /usr/edb/pem/agent/etc/agent.cfg
+ ```
+
+6. Then, use the following command to configure the PEM server again:
+
+ ```text
+ /usr/edb/pem/bin/configure-pem-server.sh
+ ```
diff --git a/product_docs/docs/pem/8/index.mdx b/product_docs/docs/pem/8/index.mdx
index 7afe93697a6..4c6e6a4bc86 100644
--- a/product_docs/docs/pem/8/index.mdx
+++ b/product_docs/docs/pem/8/index.mdx
@@ -8,6 +8,7 @@ navigation:
- pem_inst_guide_linux
- pem_inst_guide_windows
- pem_upgrade
+ - 04_troubleshooting
- pem_pgbouncer
- "#Guides"
- pem_admin
diff --git a/product_docs/docs/pem/8/pem_inst_guide_linux/04_installing_postgres_enterprise_manager/index.mdx b/product_docs/docs/pem/8/pem_inst_guide_linux/04_installing_postgres_enterprise_manager/index.mdx
index a0299a1efc6..49f90bbaa4a 100644
--- a/product_docs/docs/pem/8/pem_inst_guide_linux/04_installing_postgres_enterprise_manager/index.mdx
+++ b/product_docs/docs/pem/8/pem_inst_guide_linux/04_installing_postgres_enterprise_manager/index.mdx
@@ -15,4 +15,4 @@ The PEM Agent that is installed with the PEM server is capable of monitoring mul
For detailed information about installing and configuring a PEM Agent, see [Installing the PEM agent on Linux](08_installing_pem_agent_using_edb_repository).
-
+For troubleshooting the installation or configuration of the PEM server, see [Troubleshooting](/pem/latest/04_troubleshooting).
diff --git a/product_docs/docs/pem/8/pem_upgrade/04_troubleshooting.mdx b/product_docs/docs/pem/8/pem_upgrade/04_troubleshooting.mdx
deleted file mode 100644
index 81e21176432..00000000000
--- a/product_docs/docs/pem/8/pem_upgrade/04_troubleshooting.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: "Troubleshooting"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-enterprise-manager/installation-getting-started/upgrade-migration-guide/8.0/troubleshooting.html"
----
-
-
-
-## Reconfiguring the PEM server
-
-In some situations you may need to uninstall the PEM server, install it again, and then reconfigure the server. Use the following commands in the given sequence:
-
-1. Use the following command to remove the PEM server configuration and uninstall:
-
- ```text
- /usr/edb/pem/bin/configure-pem-server.sh –un
- ```
-
-2. Use the following command to remove the PEM packages:
-
- ```text
- yum erase edb-pem-server
- ```
-
-3. Use the following command to drop the `pem` database:
-
- ```text
- DROP DATABASE pem
- ```
-
-4. Move the certificates from `/root/.pem/` to another location:
-
- ```text
- mv /root/.pem/*
- ```
-
-5. Move the `agent.cfg` file from `/usr/edb/pem/agent/etc/agent.cfg` to another location:
-
- ```text
- mv /usr/edb/pem/agent/etc/agent.cfg
- ```
-
-6. Then, use the following command to configure the PEM server again:
-
- ```text
- /usr/edb/pem/bin/configure-pem-server.sh
- ```
From cfee84d6cb3a65f5f02f1be6d3195461aad832b8 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 19:09:12 +0530
Subject: [PATCH 29/39] Added barman content to enterprise features guide
Added barman content to enterprise features guide
---
.../docs/pem/8/images/barman_dashboard.png | 3 +
.../pem/8/images/barman_dashboard_backups.png | 3 +
.../pem/8/images/barman_dashboard_servers.png | 3 +
.../pem/8/images/barman_server_properties.png | 3 +
...barman_server_properties_configuration.png | 3 +
.../barman_server_properties_general.png | 3 +
.../barman_server_properties_information.png | 3 +
.../barman_server_properties_pem_agent.png | 3 +
.../8/images/create_barman_server_general.png | 3 +
.../images/create_barman_server_pem_agent.png | 3 +
.../01_managing_barman_prerequisites.mdx | 10 ++
.../02_configuring_barman_server.mdx | 100 ++++++++++++++++++
.../03_viewing_barman_dashboard.mdx | 19 ++++
.../18_monitoring_barman/index.mdx | 21 ++++
.../docs/pem/8/pem_ent_feat/index.mdx | 1 +
.../02_configuring_barman_server.mdx | 14 +--
.../03_viewing_barman_dashboard.mdx | 6 +-
17 files changed, 191 insertions(+), 10 deletions(-)
create mode 100755 product_docs/docs/pem/8/images/barman_dashboard.png
create mode 100644 product_docs/docs/pem/8/images/barman_dashboard_backups.png
create mode 100755 product_docs/docs/pem/8/images/barman_dashboard_servers.png
create mode 100755 product_docs/docs/pem/8/images/barman_server_properties.png
create mode 100755 product_docs/docs/pem/8/images/barman_server_properties_configuration.png
create mode 100755 product_docs/docs/pem/8/images/barman_server_properties_general.png
create mode 100755 product_docs/docs/pem/8/images/barman_server_properties_information.png
create mode 100644 product_docs/docs/pem/8/images/barman_server_properties_pem_agent.png
create mode 100755 product_docs/docs/pem/8/images/create_barman_server_general.png
create mode 100755 product_docs/docs/pem/8/images/create_barman_server_pem_agent.png
create mode 100644 product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/01_managing_barman_prerequisites.mdx
create mode 100644 product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/02_configuring_barman_server.mdx
create mode 100644 product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/03_viewing_barman_dashboard.mdx
create mode 100644 product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
diff --git a/product_docs/docs/pem/8/images/barman_dashboard.png b/product_docs/docs/pem/8/images/barman_dashboard.png
new file mode 100755
index 00000000000..d398dda315a
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_dashboard.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9127b493fed7ed48616f832ef592402cd9f77cc52ea8fe01fab55a63a78db1ed
+size 211417
diff --git a/product_docs/docs/pem/8/images/barman_dashboard_backups.png b/product_docs/docs/pem/8/images/barman_dashboard_backups.png
new file mode 100644
index 00000000000..c67afd85a5e
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_dashboard_backups.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f86b8746af88af30e78aefac8ccb02debe0afe7e68e6d8e24cc937015203824
+size 319490
diff --git a/product_docs/docs/pem/8/images/barman_dashboard_servers.png b/product_docs/docs/pem/8/images/barman_dashboard_servers.png
new file mode 100755
index 00000000000..99e6f075708
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_dashboard_servers.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3853bb457758b029af2dd43e81004c3fdc236d8bf004fd50726eeec92aab331
+size 475238
diff --git a/product_docs/docs/pem/8/images/barman_server_properties.png b/product_docs/docs/pem/8/images/barman_server_properties.png
new file mode 100755
index 00000000000..395d9b4aedd
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_server_properties.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fe5dac1b080fda70cfc9c1816036436f62aad1d4b83c8987c78df3c9ff06ae9
+size 243886
diff --git a/product_docs/docs/pem/8/images/barman_server_properties_configuration.png b/product_docs/docs/pem/8/images/barman_server_properties_configuration.png
new file mode 100755
index 00000000000..49f07d5db52
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_server_properties_configuration.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2dfe59de0871a8cfe478be81e0e559354d58af151e0832268e123c03228ed4a6
+size 157838
diff --git a/product_docs/docs/pem/8/images/barman_server_properties_general.png b/product_docs/docs/pem/8/images/barman_server_properties_general.png
new file mode 100755
index 00000000000..481694af8fd
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_server_properties_general.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00510b343b0b074b70f3b7902af2f680f2cd6a16516f5ece0f823ab6ca84a642
+size 49371
diff --git a/product_docs/docs/pem/8/images/barman_server_properties_information.png b/product_docs/docs/pem/8/images/barman_server_properties_information.png
new file mode 100755
index 00000000000..84b80b5b28c
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_server_properties_information.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa31176949a4a6fcdfbcb897dad385ed0d51f0a470476154c8c913444d43b63a
+size 41748
diff --git a/product_docs/docs/pem/8/images/barman_server_properties_pem_agent.png b/product_docs/docs/pem/8/images/barman_server_properties_pem_agent.png
new file mode 100644
index 00000000000..9d9b67e5f1f
--- /dev/null
+++ b/product_docs/docs/pem/8/images/barman_server_properties_pem_agent.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db9d01acf4523b4d690fb661b53e2fd02c9b5eeb1f02a40cb64287a624489ec6
+size 70276
diff --git a/product_docs/docs/pem/8/images/create_barman_server_general.png b/product_docs/docs/pem/8/images/create_barman_server_general.png
new file mode 100755
index 00000000000..4ec880a3201
--- /dev/null
+++ b/product_docs/docs/pem/8/images/create_barman_server_general.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:762bf2cefc12770e65c75bde0e12b530c912623049f9da2666d1e07975a74a42
+size 53989
diff --git a/product_docs/docs/pem/8/images/create_barman_server_pem_agent.png b/product_docs/docs/pem/8/images/create_barman_server_pem_agent.png
new file mode 100755
index 00000000000..4a9cd94688b
--- /dev/null
+++ b/product_docs/docs/pem/8/images/create_barman_server_pem_agent.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e528edf63f86e17ed3d1513aa967dde7dde4f699790662ddfd079e17ca7323d
+size 70145
diff --git a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/01_managing_barman_prerequisites.mdx b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/01_managing_barman_prerequisites.mdx
new file mode 100644
index 00000000000..6b7bb67283f
--- /dev/null
+++ b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/01_managing_barman_prerequisites.mdx
@@ -0,0 +1,10 @@
+---
+title: "Prerequisites for monitoring Barman"
+---
+
+Before adding a Barman server to the PEM console:
+
+- You must manually install and configure Barman on the Barman host. For more information about installing and configuring Barman, see the [Barman documentation](https://www.enterprisedb.com/docs/supported-open-source/barman/).
+
+- Install the ``pg-backup-api`` tool on the Barman host.
diff --git a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/02_configuring_barman_server.mdx b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/02_configuring_barman_server.mdx
new file mode 100644
index 00000000000..a943c74e109
--- /dev/null
+++ b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/02_configuring_barman_server.mdx
@@ -0,0 +1,100 @@
+---
+title: "Configuring a Barman Server"
+---
+
+You can configure and edit your Barman server using:
+
+- PEM web client
+- `pemworker` command line
+
+## Using PEM web client
+
+### Configure
+
+You can use the `Create-BARMAN server` dialog to register an existing Barman server with the PEM server. To access the dialog, right-click on the `BARMAN Servers` node and select `Create-BARMAN Server`.
+
+![Create-BARMAN server dialog - General tab](../../images/create_barman_server_general.png)
+
+Use the fields on the `General` tab to describe the general properties of the Barman server:
+
+- Use the `Name` field to specify a user-friendly name for the server. The name identifies the server in the browser tree.
+
+- Use the `URL` field to specify the URL of the host where Barman is installed.
+
+- Use the `Team` field to specify a PostgreSQL role name. Only PEM users who are members of this role, who created the server initially, or who have superuser privileges on the PEM server can see this server when they log on to PEM. If this field is left blank, all PEM users see the server.
+
+![Create-BARMAN server dialog - PEM Agent tab](../../images/create_barman_server_pem_agent.png)
+
+Use the fields on the `PEM Agent` tab to specify connection details for the PEM Agent:
+
+- Use the `Bound Agent` field to select the agent that you want to configure as a Barman server. Only those PEM agents that are supported for Barman are listed in the drop-down list.
+
+- Use the `Probe Frequency` field to specify the interval, in seconds, at which the probes execute.
+
+- Use the `Heartbeat` field to specify the interval, in seconds, at which to check the availability of the PEM agent.
+
+!!! Note
+    After registering the ``Barman server``, you need to restart the PEM agent.
+
+### Editing
+
+To edit your Barman server, select it in the browser tree, then right-click and select `Properties`.
+
+![BARMAN server properties - General tab](../../images/barman_server_properties_general.png)
+
+- Use the fields on the PEM Agent tab to modify the `Bound Agent`, `Probe Frequency`, and `Heartbeat`. Only the owner of the Barman server can modify the fields on the PEM Agent tab.
+
+![BARMAN server properties - Information tab](../../images/barman_server_properties_information.png)
+
+- Use the fields on the Information tab to view detailed information about your Barman server. This tab is populated whenever the Barman-related probes are executed. You cannot modify any of the fields on the Information tab.
+
+![BARMAN server properties - Configuration tab](../../images/barman_server_properties_configuration.png)
+
+- Use the fields on the Configuration tab to view the configuration settings of your Barman server. This tab is populated whenever the Barman-related probes are executed. You cannot modify any of the fields on the Configuration tab.
+
+!!! Note
+    After registering the ``Barman server``, you need to restart the PEM agent.
+
+## Using `pemworker` command line
+
+You can configure the Barman server using the `pemworker` command line options.
+
+``` text
+ asheshvashi@pem:~/PEM/agent$ ./pemworker --update-barman --help
+ ./pemworker --update-barman [barman-update-options]
+
+ barman-update-options:
+ --id (ID for the existing BARMAN API 'pg-backup-api')
+ --api-url (URL of the BARMAN API 'pg-backup-api')
+ --probe-execution-frequency (Default: 30, Probe the BARMAN API 'pg-backup-api' at regular interval 'in seconds' and fetch the metrics.)
+ --heartbeat-interval (Default: 10, Ping the BARMAN API 'pg-backup-api' 'status' API at a regular interval 'in seconds' for checking its availability.)
+ --ssl-crt (SSL certificate file for the BARMAN API.)
+ --ssl-key (Private SSL key for the BARMAN API.)
+ --ssl-ca-cert (CA certificate to verify peer against the BARMAN API.)
+ --config-file/-c (Path to the agent configuration file.)
+
+ asheshvashi@pem:~/PEM/agent$ ./pemworker --unregister-barman --help
+ ./pemworker --unregister-barman [barman-unregistration-options]
+
+ barman-unregistration-options:
+ --id (ID for the existing BARMAN API, registered with the PEM Server.'pg-backup-api')
+ --config-file/-c (Path to the agent configuration file.)
+
+ asheshvashi@pem:~/PEM/agent$ ./pemworker --register-barman --help
+ ./pemworker --register-barman [barman-registration-options]
+
+ barman-registration-options:
+ --api-url (URL of the BARMAN API 'pg-backup-api')
+ --description (Description to show on the UI 'User interface' for the BARMAN API.)
+ --probe-execution-frequency (Default: 30, Probe the BARMAN API 'pg-backup-api' at regular interval 'in seconds' and fetch the metrics.)
+ --heartbeat-interval (Default: 10, Ping the BARMAN API 'pg-backup-api' 'status' API at a regular interval 'in seconds' for checking its availability.)
+ --ssl-crt (SSL certificate file for the BARMAN API.)
+ --ssl-key (Private SSL key for the BARMAN API.)
+ --ssl-ca-cert (CA certificate to verify peer against the BARMAN API.)
+ --team (Specify the name of the database group role, on the PEM backend database server, that should have access to this BARMAN API Server.)
+ --owner (Specify the name of the database user, on the PEM backend database server, who will own the BARMAN API Server.)
+ --config-file/-c (Path to the agent configuration file.)
+```
+
+!!! Note
+    After registering the ``Barman server``, you need to restart the PEM agent.
\ No newline at end of file
diff --git a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/03_viewing_barman_dashboard.mdx b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/03_viewing_barman_dashboard.mdx
new file mode 100644
index 00000000000..c35fe9dc090
--- /dev/null
+++ b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/03_viewing_barman_dashboard.mdx
@@ -0,0 +1,19 @@
+---
+title: "Viewing the Barman Server details on a PEM dashboard"
+---
+
+Once the Barman server is configured, you can see all the backup- and server-related details for that Barman server on the PEM dashboard.
+
+![BARMAN dashboard](../../images/barman_dashboard.png)
+
+When you select a monitored Barman server, details of all the associated database servers along with their activities are displayed as a chart on the Dashboard in the `Barman Activities` panel. You can filter the activities by any criteria that you specify in the filter boxes (database server, status, duration, or date).
+
+The `Servers` panel displays a list of all the database servers managed by that particular Barman server along with the active status.
+
+![BARMAN dashboard - Servers panel](../../images/barman_dashboard_servers.png)
+
+The `Backups` panel displays a list of all the backups of the database servers managed by that particular Barman server. You can filter the list to display the details of a particular database server. You can also filter the list by any criteria that you specify in the filter box. Typically, this filter works with any kind of string value (excluding date, time, and size) listed under the columns. For example, you can type `tar` to filter the list and display only those backups that are in tar format.
+
+Backup details include the `Backup ID`, `Server`, `Mode`, `Start time`, `End time`, `Size`, `Error`, and `Status` columns.
+
+![BARMAN dashboard - Backups panel](../../images/barman_dashboard_backups.png)
diff --git a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
new file mode 100644
index 00000000000..b17a544a736
--- /dev/null
+++ b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
@@ -0,0 +1,21 @@
+---
+title: "Monitoring Barman"
+---
+
+Postgres Enterprise Manager (PEM) is designed to assist database administrators, system architects, and performance analysts when administering, monitoring, and tuning PostgreSQL and Advanced Server database servers.
+
+Barman (Backup and Recovery Manager) is an open-source administration tool for remote backups and disaster recovery of PostgreSQL servers in business-critical environments. It relies on PostgreSQL’s Point In Time Recovery technology, allowing DBAs to remotely manage a complete catalogue of backups and the recovery phase of multiple remote servers – all from one location. For more information about Barman, see [Barman docs](https://www.enterprisedb.com/docs/supported-open-source/barman/).
+
+Starting with version 8.4, you can monitor a Barman server through the PEM console.
+
+Before you manage a Barman server through the PEM console, your system must meet certain requirements. See
+
+- [Prerequisites for monitoring Barman](01_managing_barman_prerequisites)
+
+To manage backups with Barman, you must add the Barman server to the PEM console. See
+
+- [Configuring a Barman Server](02_configuring_barman_server)
+
+After you configure the Barman server, you can view the backup details on the dashboard. See
+
+- [Viewing the Barman Server Details on a PEM Dashboard](03_viewing_barman_dashboard)
\ No newline at end of file
diff --git a/product_docs/docs/pem/8/pem_ent_feat/index.mdx b/product_docs/docs/pem/8/pem_ent_feat/index.mdx
index 852eeec60ab..cd65e9fada0 100644
--- a/product_docs/docs/pem/8/pem_ent_feat/index.mdx
+++ b/product_docs/docs/pem/8/pem_ent_feat/index.mdx
@@ -17,6 +17,7 @@ navigation:
- 10_tuning_wizard
- 11_postgres_expert
- 17_monitoring_BDR_nodes
+ - 18_monitoring_barman
- 13_monitoring_failover_manager
- 14_monitoring_xdb_replication_cluster
- 12_reports
diff --git a/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/02_configuring_barman_server.mdx b/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/02_configuring_barman_server.mdx
index 16e0466d995..a943c74e109 100644
--- a/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/02_configuring_barman_server.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/02_configuring_barman_server.mdx
@@ -9,11 +9,11 @@ You can configure and edit your Barman server using:
## Using PEM web client
-**Configure**
+### Configure
You can use the `Create–BARMAN server` dialog to register an existing Barman server with the PEM server. To access the dialog, right-click on the `BARMAN Servers` node and select `Create-BARMAN Server`.
-![Create-BARMAN server dialog - General tab](../images/create_barman_server_general.png)
+![Create-BARMAN server dialog - General tab](../../images/create_barman_server_general.png)
Use the fields on the `General` tab to describe the general properties of the Barman server:
@@ -23,7 +23,7 @@ Use the fields on the `General` tab to describe the general properties of the Ba
- Use the `Team` field to specify a PostgreSQL role name. Only PEM users who are members of this role, who created the server initially, or have superuser privileges on the PEM server will see this server when they log on to PEM. If this field is left blank, all PEM users see the server.
-![Create-BARMAN server dialog - PEM Agent tab](../images/create_barman_server_pem_agent.png)
+![Create-BARMAN server dialog - PEM Agent tab](../../images/create_barman_server_pem_agent.png)
Use the fields on the `PEM Agent` tab to specify connection details for the PEM Agent:
@@ -36,19 +36,19 @@ Use the fields on the `PEM Agent` tab to specify connection details for the PEM
!!! Note
After registering the ``Barman server`` you need to restart the PEM agent.
-**Editing**
+### Editing
To edit your Barman server, select your Barman server from the browser tree, right click and select `Properties`.
-![BARMAN server properties - General tab](../images/barman_server_properties_general.png)
+![BARMAN server properties - General tab](../../images/barman_server_properties_general.png)
- Use the fields on the PEM Agent tab to modify the `Bound Agent`, `Probe Frequency`, and `Heartbeat`. Only the owner of the Barman server can modify the fields on PEM Agent tab.
-![BARMAN server properties - Information tab](../images/barman_server_properties_information.png)
+![BARMAN server properties - Information tab](../../images/barman_server_properties_information.png)
- Use the fields on Information tab to view the detailed information about your Barman server. This tab gets populated whenever the Barman related probes are executed. You cannot modify any of the fields on the Information tab.
-![BARMAN server properties - Configuration tab](../images/barman_server_properties_configuration.png)
+![BARMAN server properties - Configuration tab](../../images/barman_server_properties_configuration.png)
- Use the fields on Configuration tab to view the configuration settings of your Barman server. This tab gets populated whenever the Barman related probes are executed. You cannot modify any of the fields on the Configuration tab.
diff --git a/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/03_viewing_barman_dashboard.mdx b/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/03_viewing_barman_dashboard.mdx
index 73d3cbdcaec..c35fe9dc090 100644
--- a/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/03_viewing_barman_dashboard.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/06a_toc_pem_barman/03_viewing_barman_dashboard.mdx
@@ -4,16 +4,16 @@ title: "Viewing the Barman Server details on a PEM dashboard"
Once the Barman server is configured, you can see the entire backup- and server-related details for that particular Barman server on the PEM Dashboard.
-![BARMAN dashboard](../images/barman_dashboard.png)
+![BARMAN dashboard](../../images/barman_dashboard.png)
When you select a monitored Barman server, details of all the associated database servers along with their activities are displayed as a chart on the Dashboard in the `Barman Activities` panel. You can select the activities on any criteria that you specify in the filter boxes (the database server, status, duration, or date).
The `Servers` panel displays a list of all the database servers managed by that particular Barman server along with the active status.
-![BARMAN dashboard - Servers panel](../images/barman_dashboard_servers.png)
+![BARMAN dashboard - Servers panel](../../images/barman_dashboard_servers.png)
The `Backups` panel displays a list of all the backups of the database servers managed by that particular Barman server. You can filter the list to display the details of a particular database server. You can also filter the list on any criteria that you specify in the filter box. Typically, this filter works with any kind of string value (excluding date, time, and size) listed under the columns. For example, you can type `tar` to filter the list and display only those backups that are in tar format.
Backup details include the `Backup ID`, `Server`, `Mode`, `Start time`, `End time`, `Size`, `Error`, and `Status` columns.
-![BARMAN dashboard - Backups panel](../images/barman_dashboard_backups.png)
+![BARMAN dashboard - Backups panel](../../images/barman_dashboard_backups.png)
From dc49ce0db9f733deab5acd0992d2e0ccae3f9074 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 21:11:09 +0530
Subject: [PATCH 30/39] Added the probes changes to Enterprise Features guide
Added the probes changes to Enterprise Features guide
---
..._performance_monitoring_and_management.mdx | 16 +++++++--
.../01_pem_custom_probes.mdx | 34 +++++++++----------
.../03_pem_probe_config/index.mdx | 10 +++---
3 files changed, 36 insertions(+), 24 deletions(-)
diff --git a/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx b/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
index 770ca34faca..b9b49c303e2 100644
--- a/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
+++ b/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
@@ -353,11 +353,17 @@ A `probe` is a scheduled task that retrieves information about the database obje
### System probes
-Unless otherwise noted, Postgres Enterprise Manager enables the probes listed in the table below:
+Unless otherwise noted, Postgres Enterprise Manager enables the following probes at the server, database, schema, extension (starting with version 8.4), or agent levels:
| Probe Name | Information Monitored by Probe | Level |
| ---------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- |
| Background Writer Statistics | This probe monitors information about the background writer. The information includes: the number of timed checkpoints, the number of requested checkpoints, the number of buffers written (by checkpoint), the number of buffers written (by background writer), the number of background writer cycles, the number of background buffers written, and the number of buffers allocated. | Server |
+| Barman Configuration | This probe returns information about the Barman tool global configuration. | Agent |
+| Barman Information | This probe returns information about the Barman tool. | Agent |
+| Barman Server | This probe returns information about the configuration of each database server monitored by Barman. | Agent |
+| Barman Server Status | This probe returns information about the status of each database server monitored by Barman. | Agent |
+| Barman Server Backup | This probe returns information about the backups of the respective database servers. | Agent |
+| Barman Server WAL Status | This probe returns information about the Barman server WAL files. | Agent |
| Blocked Session Information | This probe provides information about blocked sessions. | Server |
| CPU Usage | This probe monitors CPU Usage information. | Agent |
| Data and Log File Analysis | This probe monitors information about log files. The information includes: the name of the log file and the directory in which the log file resides. | Server |
@@ -412,7 +418,10 @@ Unless otherwise noted, Postgres Enterprise Manager enables the probes listed in
### BDR probes
-To monitor the BDR Group via BDR dashboards, the following probes must be enabled. All these probes are configured at server level.
+To monitor the BDR Group via BDR dashboards, the following probes must be enabled. All these probes are configured at the extension level.
+
+!!! Note
+ Prior to version 8.4, all these probes are available at the server level.
The user with `bdr_superuser` will be able to view information from all the following probes.
@@ -448,6 +457,9 @@ The `Manage Probes` tab provides a set of Quick Links that you can use to create
- Click the `Manage Custom Probes` icon to open the `Custom Probes` tab and create or modify a custom probe.
- Click the `Copy Probes` icon to open the Copy Probe dialog, and copy the probe configurations from the currently selected object to one or more monitored objects.
+!!! Note
+ Currently, `Copy Probe` is not supported for extension-level probes.
+
A probe monitors a unique set of metrics for each specific object type (server, database, database object, or agent); select the name of an object in the tree control to review the probes for that object.
To modify the properties associated with a probe, highlight the name of a probe, and customize the settings that are displayed in the Probes table:
diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/01_pem_custom_probes.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/01_pem_custom_probes.mdx
index dc42ad96cec..f2bd274ace0 100644
--- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/01_pem_custom_probes.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/01_pem_custom_probes.mdx
@@ -26,19 +26,19 @@ Use the fields on the `General` tab to modify the definition of an existing prob
- Use the `Probe name` field to provide a name for a new probe.
- Use the `Collection method` field to specify the probe type. Use the drop-down listbox to select from:
-> - `SQL` (the probe will gather information via a SQL statement)
->
-> - `WMI` (the probe will gather information via a Windows Management Instrumentation extension)
->
-> - `Batch/Shell Script` (the probe will use a command-script or shell-script to gather information).
->
-> Before creating a batch probe on a Linux system, you must modify the `agent.cfg` file, setting the `allow_batch_probes` parameter equal to `true` and restart the PEM agent. The `agent.cfg` file is located in `/opt/PEM/agent/etc`.
->
-> On 64-bit Windows systems, agent settings are stored in the registry. Before creating a batch probe, modify the registry entry for the `AllowBatchProbes` registry entry and restart the PEM agent. PEM registry entries are located in `HKEY_LOCAL_MACHINE\\Software\\Wow6432Node\\EnterpriseDB\\PEM\\agent`.
->
-> Please note that batch probes are platform-specific. If you specify a collection method of `Batch`, you must specify a platform type in the `Platform` field.
->
-> To invoke a script on a Linux system, you must modify the entry for `batch_script_user` parameter of agent.cfg file and specify the user that should be used to run the script. You can either specify a non-root user or root for this parameter. If you do not specify a user, or the specified user does not exist, then the script will not be executed. Restart the agent after modifying the file. If pemagent is being run by a non-root user then the value of `batch_script_user` will be ignored and the script will be executed by the same non-root user that is being used for running the pemagent.
+ - `SQL` (the probe will gather information via a SQL statement)
+
+ - `WMI` (the probe will gather information via a Windows Management Instrumentation extension)
+
+ - `Batch/Shell Script` (the probe will use a command-script or shell-script to gather information).
+
+ Before creating a batch probe on a Linux system, you must modify the `agent.cfg` file, setting the `allow_batch_probes` parameter equal to `true` and restart the PEM agent. The `agent.cfg` file is located in `/opt/PEM/agent/etc`.
+
+ On 64-bit Windows systems, agent settings are stored in the registry. Before creating a batch probe, modify the registry entry for the `AllowBatchProbes` registry entry and restart the PEM agent. PEM registry entries are located in `HKEY_LOCAL_MACHINE\\Software\\Wow6432Node\\EnterpriseDB\\PEM\\agent`.
+
+ Please note that batch probes are platform-specific. If you specify a collection method of `Batch`, you must specify a platform type in the `Platform` field.
+
+ To invoke a script on a Linux system, you must modify the entry for the `batch_script_user` parameter of the `agent.cfg` file and specify the user that should be used to run the script. You can specify either a non-root user or root for this parameter. If you do not specify a user, or the specified user does not exist, then the script will not be executed. Restart the agent after modifying the file. If pemagent is being run by a non-root user, then the value of `batch_script_user` is ignored and the script is executed by the same non-root user that runs pemagent.
- Use the `Target type` drop-down listbox to select the object type that the probe will monitor. `Target type` is disabled if `Collection method` is `WMI`.
- Use the `Minutes` and `Seconds` selectors to specify how often the probe will collect data.
@@ -78,10 +78,10 @@ Use the `Code` tab to specify the default code that will be executed by the prob
- If the probe is a SQL probe, you must specify the SQL SELECT statement invoked by the probe on the `Code` tab. The column names returned by the query must match the `Internal Name` specified on the `Column` tab. The number of columns returned by the query, as well as the column name, datatype, etc. must match the information specified on the `Columns` tab.
- If the probe is a Batch probe, you must specify the shell or .bat script that will be invoked when the probe runs. The output of the script should be as follows:
-> - The first line must contain the names of the columns provided on the `Columns` tab. Each column name should be separated by a tab (t) character.
-> - From the second line onwards, each line should contain the data for each column, separated by a tab character.
-> - If a specified column is defined as key column, make sure the script does not produce duplicate data for that column across lines of output.
-> - The number of columns specified in the `Columns` tab and their names, data type, etc. should match with the output of the script output.
+ - The first line must contain the names of the columns provided on the `Columns` tab. Each column name should be separated by a tab (`\t`) character.
+ - From the second line onwards, each line should contain the data for each column, separated by a tab character.
+ - If a specified column is defined as a key column, make sure the script does not produce duplicate data for that column across lines of output.
+ - The number of columns specified on the `Columns` tab, as well as their names, data types, and so on, should match the output of the script.
- If the probe is a WMI probe, you must specify the WMI query as a SELECT WMI query. The column name referenced in the SELECT statement should be same as the name of the corresponding column specified on the `Column` tab. The column names returned by the query must match the `Internal Name` specified on the `Column` tab. The number of columns returned by the query, as well as the column name, datatype, etc. must match the information specified on the `Columns` tab.
diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/index.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/index.mdx
index 2ed051a8cc8..67234caaa56 100644
--- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/index.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/index.mdx
@@ -13,16 +13,16 @@ A [probe](01_pem_probes/#pem_probes) is a scheduled task that returns a set of p
The `Manage Probes` tab provides a set of `Quick Links` that you can use to create and manage probes:
-> - Click the [Manage Custom Probes](../01_pem_custom_probes/#pem_custom_probes) icon to open the `Custom Probes` tab and create or modify a custom probe.
-> - Click the [Copy Probes](../02_copy_probe_config/#copy_probe_config) icon to open the `Copy Probe` dialog, and copy the probe configurations from the currently selected object to one or more monitored objects.
+- Click the [Manage Custom Probes](../01_pem_custom_probes/#pem_custom_probes) icon to open the `Custom Probes` tab and create or modify a custom probe.
+- Click the [Copy Probes](../02_copy_probe_config/#copy_probe_config) icon to open the `Copy Probe` dialog, and copy the probe configurations from the currently selected object to one or more monitored objects.
A probe monitors a unique set of metrics for each specific object type (server, database, database object, or agent); select the name of an object in the tree control to review the probes for that object.
To modify the properties associated with a probe, highlight the name of a probe, and customize the settings that are displayed in the `Probes` table:
-> - Move the `Default` switch in the `Execution Frequency` columns to `No` to enable the `Minutes` and `Seconds` selectors, and specify a non-default value for the length of time between executions of the probe.
-> - Move the `Default` switch in the `Enabled?` column to `No` to change the state of the probe, and indicate if the probe is active or not active. If data from a probe that is `Disabled` is used in a chart, the chart will display an information icon in the upper-left corner that allows you to enable the probe by clicking the provided link.
-> - Move the `Default` switch in the `Data Retention` column to `No` to enable the `Day(s)` field and specify the number of days that information gathered by the probe is stored on the PEM server.
+- Move the `Default` switch in the `Execution Frequency` columns to `No` to enable the `Minutes` and `Seconds` selectors, and specify a non-default value for the length of time between executions of the probe.
+- Move the `Default` switch in the `Enabled?` column to `No` to change the state of the probe, and indicate if the probe is active or not active. If data from a probe that is `Disabled` is used in a chart, the chart will display an information icon in the upper-left corner that allows you to enable the probe by clicking the provided link.
+- Move the `Default` switch in the `Data Retention` column to `No` to enable the `Day(s)` field and specify the number of days that information gathered by the probe is stored on the PEM server.
The `Manage Probes` tab may also display information about probes that cannot be modified from the current node; if a probe cannot be modified from the current dialog, the switches for that probe are disabled. Generally, a disabled probe can be modified from a node that is higher in the hierarchy of the PEM client tree control. Select another object in the tree control to change which probes are displayed or enabled on the `Manage Probes` tab.
From 20cae130d5b92788707187f445172a49f54172fc Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 21:16:34 +0530
Subject: [PATCH 31/39] Added conformance report links
Added conformance report links
---
product_docs/docs/pem/8/pem_rel_notes/index.mdx | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/product_docs/docs/pem/8/pem_rel_notes/index.mdx b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
index 6016e4948e1..7aa8c30e6bc 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/index.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
@@ -6,13 +6,13 @@ The Postgres Enterprise Manager (PEM) documentation describes the latest version
| Version | Release Date | Upstream Merges | Accessibility Conformance |
| ------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------- |
-| [8.4.0](04_840_rel_notes) | 04 Apr 2022 | NA | [Conformance Report]() |
-| [8.3.0](05_830_rel_notes) | 24 Nov 2021 | pgAdmin [5.7](https://www.pgadmin.org/docs/pgadmin4/5.7/release_notes_5_7.html#bug-fixes) | NA |
-| [8.2.0](06_820_rel_notes) | 09 Sep 2021 | pgAdmin [5.4](https://www.pgadmin.org/docs/pgadmin4/5.4/release_notes_5_4.html#bug-fixes), [5.5](https://www.pgadmin.org/docs/pgadmin4/5.5/release_notes_5_5.html#bug-fixes), and [5.6](https://www.pgadmin.org/docs/pgadmin4/5.6/release_notes_5_6.html#bug-fixes) | NA |
+| [8.4.0](04_840_rel_notes) | 04 Apr 2022 | NA | [Conformance Report](https://www.enterprisedb.com/accessibility) |
+| [8.3.0](05_830_rel_notes) | 24 Nov 2021 | pgAdmin [5.7](https://www.pgadmin.org/docs/pgadmin4/5.7/release_notes_5_7.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
+| [8.2.0](06_820_rel_notes) | 09 Sep 2021 | pgAdmin [5.4](https://www.pgadmin.org/docs/pgadmin4/5.4/release_notes_5_4.html#bug-fixes), [5.5](https://www.pgadmin.org/docs/pgadmin4/5.5/release_notes_5_5.html#bug-fixes), and [5.6](https://www.pgadmin.org/docs/pgadmin4/5.6/release_notes_5_6.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.1.1](07_811_rel_notes) | 22 Jul 2021 | NA | NA |
-| [8.1.0](08_810_rel_notes) | 16 Jun 2021 | pgAdmin [5.0](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_0.html#bug-fixes), [5.1](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_1.html#bug-fixes), [5.2](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_2.html#bug-fixes), and [5.3](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_3.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/sites/default/files/VPAT2.4WCAG--PEMJune2021.pdf) |
+| [8.1.0](08_810_rel_notes) | 16 Jun 2021 | pgAdmin [5.0](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_0.html#bug-fixes), [5.1](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_1.html#bug-fixes), [5.2](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_2.html#bug-fixes), and [5.3](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_3.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.0.1](09_801_rel_notes) | 3 Mar 2021 | pgAdmin [4.29](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_29.html#bug-fixes), [4.30](https://www.pgadmin.org/docs/pgadmin4/4.30/release_notes_4_30.html#bug-fixes), and [5.0](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_0.html#bug-fixes) | NA |
-| [8.0.0](10_800_rel_notes) | 9 Dec 2020 | pgAdmin [4.27](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_27.html#bug-fixes), [4.28](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_28.html#bug-fixes), and [4.29](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_29.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/sites/default/files/VPAT2.4WCAG--PEMDec2020.pdf) |
+| [8.0.0](10_800_rel_notes) | 9 Dec 2020 | pgAdmin [4.27](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_27.html#bug-fixes), [4.28](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_28.html#bug-fixes), and [4.29](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_29.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
Often only select issues are included in the upstream merges. The specific issues included in the merges are listed in the release note topics.
From 902f297d4ec41de4247845636f9e66b06f9b4989 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 31 Mar 2022 21:45:27 +0530
Subject: [PATCH 32/39] Replaced the bdr screenshots
Replaced the bdr screenshots
---
product_docs/docs/pem/8/images/bdr_admin_dashboard.png | 4 ++--
.../docs/pem/8/images/bdr_group_monitoring_dashboard.png | 4 ++--
.../docs/pem/8/images/bdr_node_monitoring_dashboard.png | 4 ++--
product_docs/docs/pem/8/images/bdr_probes.png | 3 +++
.../pem_ent_feat/05_performance_monitoring_and_management.mdx | 2 ++
.../03_pem_probe_config/01_pem_probes.mdx | 2 ++
6 files changed, 13 insertions(+), 6 deletions(-)
create mode 100644 product_docs/docs/pem/8/images/bdr_probes.png
diff --git a/product_docs/docs/pem/8/images/bdr_admin_dashboard.png b/product_docs/docs/pem/8/images/bdr_admin_dashboard.png
index 681e046aa60..6257593f885 100644
--- a/product_docs/docs/pem/8/images/bdr_admin_dashboard.png
+++ b/product_docs/docs/pem/8/images/bdr_admin_dashboard.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:d447095b158457fec3ac4718a2aa546cb01b32e4a91d56137e6ca12d788ca0a5
-size 268172
+oid sha256:5db428308cb0907bb75357dc56590b3793791343e333c8841545857504cf47c9
+size 300392
diff --git a/product_docs/docs/pem/8/images/bdr_group_monitoring_dashboard.png b/product_docs/docs/pem/8/images/bdr_group_monitoring_dashboard.png
index 8fe8e7db2fb..c58fc74e438 100644
--- a/product_docs/docs/pem/8/images/bdr_group_monitoring_dashboard.png
+++ b/product_docs/docs/pem/8/images/bdr_group_monitoring_dashboard.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:d94e33773b372cb5e897da87882091a80e2a664e1fef10dcffd5710ed97c1a0d
-size 232376
+oid sha256:27f5ac9c5fbac67e185799d7c840096f17d940e47b30bbbf2f42b6abc29c5a06
+size 227834
diff --git a/product_docs/docs/pem/8/images/bdr_node_monitoring_dashboard.png b/product_docs/docs/pem/8/images/bdr_node_monitoring_dashboard.png
index 87d7d0416ac..7c12a788038 100644
--- a/product_docs/docs/pem/8/images/bdr_node_monitoring_dashboard.png
+++ b/product_docs/docs/pem/8/images/bdr_node_monitoring_dashboard.png
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:c983dc14ba5865ad9b673ba170657f87a78f104089706fb7e77b794bc7381246
-size 170305
+oid sha256:dda99fbdd20de3150509a81fa75c3c8b6d2fe30dd1255a80faaf65c723ef5101
+size 322985
diff --git a/product_docs/docs/pem/8/images/bdr_probes.png b/product_docs/docs/pem/8/images/bdr_probes.png
new file mode 100644
index 00000000000..7e50baa4efa
--- /dev/null
+++ b/product_docs/docs/pem/8/images/bdr_probes.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fd3341eb593f837de6b76d9e5b53ec3046ae760e4f4a2dddc9339398a98952c
+size 413769
diff --git a/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx b/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
index b9b49c303e2..d63de601fd3 100644
--- a/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
+++ b/product_docs/docs/pem/8/pem_ent_feat/05_performance_monitoring_and_management.mdx
@@ -423,6 +423,8 @@ To monitor the BDR Group via BDR dashboards, the following probes must be enable
!!! Note
Prior to version 8.4, all these probes are available at the server level.
+![BDR Probes](../images/bdr_probes.png)
+
The user with `bdr_superuser` will be able to view information from all the following probes.
All the following probes work with `BDR Enterprise Edition`.
diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes.mdx
index 5fb6aac3742..0790c6a112d 100644
--- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes.mdx
@@ -79,6 +79,8 @@ To monitor the BDR Group via [BDR dashboards](../../01_dashboards/03_bdr_dashboa
!!! Note
Prior to version 8.4, all these probes are available at the server level.
+![BDR Probes](../images/bdr_probes.png)
+
The user with `bdr_superuser` will be able to view information from all the following probes.
All the following probes work with `BDR Enterprise Edition`.
From 7a2ad8a819b646e895f5caa427e4f018a6863a89 Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Thu, 31 Mar 2022 12:19:36 -0400
Subject: [PATCH 33/39] rel notes edits
---
.../18_monitoring_barman/index.mdx | 2 ++
.../docs/pem/8/pem_ent_feat/index.mdx | 2 +-
.../pem/8/pem_rel_notes/04_840_rel_notes.mdx | 28 +++++++++----------
3 files changed, 17 insertions(+), 15 deletions(-)
diff --git a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
index b17a544a736..b784cbd37b0 100644
--- a/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
+++ b/product_docs/docs/pem/8/pem_ent_feat/18_monitoring_barman/index.mdx
@@ -1,6 +1,8 @@
---
title: "Monitoring Barman"
---
+!!! Tip "New Feature "
+ Monitoring Barman is available in PEM 8.4.0 and later.
Postgres Enterprise Manager (PEM) is designed to assist database administrators, system architects, and performance analysts when administering, monitoring, and tuning PostgreSQL and Advanced Server database servers.
diff --git a/product_docs/docs/pem/8/pem_ent_feat/index.mdx b/product_docs/docs/pem/8/pem_ent_feat/index.mdx
index cd65e9fada0..b0201d78424 100644
--- a/product_docs/docs/pem/8/pem_ent_feat/index.mdx
+++ b/product_docs/docs/pem/8/pem_ent_feat/index.mdx
@@ -4,7 +4,6 @@ title: "PEM Enterprise Features Guide"
navigation:
- - 01_what's_new
- 02_pem_query_tool
- 03_pem_schema_diff_tool
- 04_pem_erd_tool
@@ -20,6 +19,7 @@ navigation:
- 18_monitoring_barman
- 13_monitoring_failover_manager
- 14_monitoring_xdb_replication_cluster
+ - 18_monitoring_barman
- 12_reports
- 16_reference
diff --git a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
index 5d6fc4cc973..6274bf292a5 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
@@ -6,18 +6,18 @@ New features, enhancements, bug fixes, and other changes in PEM 8.4.0 include:
| Type | Description | ID |
| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------| --------- |
-| New Feature | Add built-in support for monitoring Barman backups using the pg-backup-api. | PEM-4435 |
-| Security Fix | Harden against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
-| Enhancement | Add monitoring of transaction ID (TXID) wraparound ID. | PEM-3990 |
-| Enhancement | Remove unnecessary monitoring of virtual filesystems [Support Ticket 573096]. | PEM-806 |
-| Enhancement | Add sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
-| Enhancement | Add an option to disable the Query Tool for users in order to restrict viewing data [Support Ticket 74976]. | PEM-4315 |
-| Enhancement | Support for Postgres extension-based probes for multi-version flexibility updating to newer versions are no longer required. | PEM-4391 |
-| Enhancement | Improve the guide for Installation on Linux with added details and steps. | PEM-4381 |
-| Bug Fix | During a new installation, grant the `pem_admin` role to the superuser. [Support Ticket 79577]. | PEM-4433 |
-| Bug Fix | For a table displayed, sort numeric fields by numeric order, not alphabetical order [Support Ticket 1111704]. | PEM-3827 |
-| Bug Fix | Limit the decimal precision displayed for monitoring percentages. | PEM-4144 |
-| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey [Support Ticket RT75870]. | PEM-4333 |
+| New Feature | Built-in support for monitoring Barman backups. See [Monitoring Barman](../pem_ent_feat/18_monitoring_barman) for more information. | PEM-4435 |
+| Security Fix | Hardened against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
+| Enhancement | Monitoring of transaction ID (TXID) wraparound ID. | PEM-3990 |
+| Enhancement | Removed unnecessary monitoring of virtual file systems. [Support Ticket #573096] | PEM-806 |
+| Enhancement | Sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
+| Enhancement | Option to disable the Query Tool for users in order to restrict viewing data. [Support Ticket #74976] | PEM-4315 |
+| Enhancement | Support for Postgres extension-based probes for multi-version flexibility. Updating to newer versions is no longer required. | PEM-4391 |
+| Enhancement | Improved the Linux installation instructions with added details and steps. | PEM-4381 |
+| Bug Fix | A new installation grants the `pem_admin` role to the superuser. [Support Ticket #79577] | PEM-4433 |
+| Bug Fix | For display tables, numeric fields sorted by numeric order, not alphabetical order. [Support Ticket #1111704] | PEM-3827 |
+| Bug Fix | Limits the decimal precision displayed for monitoring percentages. | PEM-4144 |
+| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey. [Support Ticket #RT75870] | PEM-4333 |
| Bug Fix | Probe error for Postgres Extended 14. | PEM-4356 |
-| Bug Fix | PEM agent not gathering data after upgrade [Support Ticket 78679]. | PEM-4430 |
-| Bug Fix | Added an option in preferences to change the line ending of the email body content from LF(Line Feed) to CRLF (Carriage Return Line Feed) to fix the missing alert body content in email notification [Support Ticket 833910]. | PEM-1832 |
+| Bug Fix | PEM agent not gathering data after upgrade. [Support Ticket #78679] | PEM-4430 |
+| Bug Fix | Added an option in preferences to change the line ending of the email body content from LF (Line Feed) to CRLF (Carriage Return Line Feed). This fixes missing alert body content in email notifications. [Support Ticket #833910] | PEM-1832 |
From 0fe1094660bf1affaeaf1b54cd3de4950bd33bad Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 31 Mar 2022 14:32:44 -0400
Subject: [PATCH 34/39] Edits to security section
---
.../01_configuring_the_server.mdx | 8 +-
.../02_configuring_the_client.mdx | 26 +++---
.../03_testing_the_ssl_jdbc_connection.mdx | 32 +++----
...cate_authentication_without_a_password.mdx | 6 +-
.../01_using_ssl/index.mdx | 8 +-
.../02_scram_compatibility.mdx | 4 +-
...upport_for_gssapi_encrypted_connection.mdx | 85 +++++++++----------
.../09_security_and_encryption/index.mdx | 2 +-
8 files changed, 82 insertions(+), 89 deletions(-)
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx
index 508a541ea53..d960c867ee0 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/01_configuring_the_server.mdx
@@ -1,5 +1,5 @@
---
-title: "Configuring the Server"
+title: "Configuring the server"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,12 +11,10 @@ legacyRedirectsGenerated:
-For information about configuring PostgreSQL or Advanced Server for SSL, refer to:
-
-
+For information about configuring PostgreSQL or EDB Postgres Advanced Server for SSL, see the [PostgreSQL documentation](https://www.postgresql.org/docs/12.3/ssl-tcp.html).
!!! Note
- Before you access your SSL enabled server from Java, ensure that you can log in to your server via `edb-psql`. The sample output should look similar to the one shown below if you have established a SSL connection:
+ Before you access your SSL-enabled server from Java, ensure that you can log in to your server via `edb-psql`. If you've established an SSL connection, the output looks similar to this:
```text
$ ./bin/edb-psql -U enterprisedb -d edb
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx
index 30f29bf6873..38ee3d9b352 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/02_configuring_the_client.mdx
@@ -1,5 +1,5 @@
---
-title: "Configuring the Client"
+title: "Configuring the client"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,31 +11,31 @@ legacyRedirectsGenerated:
-There are a number of connection parameters for configuring the client for SSL. To know more about the SSL Connection parameters and Additional Connection Properties, refer to [Section 5.2](../../05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/#connecting_to_the_database).
+A number of connection parameters are available for configuring the client for SSL. To know more about the SSL connection parameters and additional connection properties, see [Connecting to the database](../../05_using_the_advanced_server_jdbc_connector_with_java_applications/02_connecting_to_the_database/#connecting_to_the_database).
-In this section, you will learn more about the behavior of ssl connection parameters when passed with different values. When you pass the connection parameter `ssl=true` into the driver, the driver validates the SSL certificate and verifies the hostname. On contrary to this behavior, using `libpq` defaults to a non-validating SSL connection.
+When passed different values, the behavior of SSL connection parameters differs. When you pass the connection parameter `ssl=true` into the driver, the driver validates the SSL certificate and verifies the hostname. Conversely, using `libpq` defaults to a nonvalidating SSL connection.
-You can get better control of the SSL connection using the `sslmode` connection parameter. This parameter is the same as the `libpq sslmode` parameter and the existing SSL implements the following sslmode connection parameters.
+You can get better control of the SSL connection using the `sslmode` connection parameter. This parameter is the same as the `libpq sslmode` parameter, and the existing SSL implements the following `sslmode` connection parameters.
-## sslmode Connection Parameters
+## sslmode connection parameters
-**sslmode=require**
+### sslmode=require
-This mode makes the encryption mandatory and also requires the connection to fail if it can’t be encrypted. The server is configured to accept SSL connections for this Host/IP address and that the server recognizes the client certificate.
+This mode makes encryption mandatory and requires the connection to fail if it can’t be encrypted. It also requires that the server is configured to accept SSL connections for this host/IP address and that the server recognizes the client certificate.
!!! Note
In this mode, the JDBC driver accepts all server certificates.
-**sslmode=verify-ca**
+### sslmode=verify-ca
If `sslmode=verify-ca`, the server is verified by checking the certificate chain up to the root certificate stored on the client.
-**sslmode=verify-full**
+### sslmode=verify-full
-If `sslmode=verify-full`, the server host name is verified to make sure it matches the name stored in the server certificate. The SSL connection fails if the server certificate cannot be verified. This mode is recommended in most security-sensitive environments.
+If `sslmode=verify-full`, the server hostname is verified to make sure it matches the name stored in the server certificate. The SSL connection fails if it can't verify the server certificate. This mode is recommended in most security-sensitive environments.
-In the case where the certificate validation is failing you can try `sslcert=` and `LibPQFactory` will not send the client certificate. If the server is not configured to authenticate using the certificate it should connect.
+If certificate validation fails, you can try setting `sslcert=` to an empty value so that `LibPQFactory` doesn't send the client certificate. If the server isn't configured to authenticate using the certificate, it should connect.
-The location of the client certificate, client key and root certificate can be overridden with the `sslcert`, `sslkey`, and `sslrootcert` settings respectively. These default to `/defaultdir/postgresql.crt`, `/defaultdir/postgresql.pk8`, and `/defaultdir/root.crt` respectively where defaultdir is ${user.home}/.postgresql/ in unix systems and %appdata%/postgresql/ on windows.
+You can override the location of the client certificate, client key, and root certificate with the `sslcert`, `sslkey`, and `sslrootcert` settings, respectively. These default to `/defaultdir/postgresql.crt`, `/defaultdir/postgresql.pk8`, and `/defaultdir/root.crt`, respectively, where `defaultdir` is `${user.home}/.postgresql/` in Unix systems and `%appdata%/postgresql/` on Windows.
-In this mode, when establishing a SSL connection the JDBC driver will validate the server's identity preventing "man in the middle" attacks. It does this by checking that the server certificate is signed by a trusted authority, and that the host you are connecting to, is the same as the hostname in the certificate.
+In this mode, when establishing an SSL connection, the JDBC driver validates the server's identity, preventing "man in the middle" attacks. It does this by checking that the server certificate is signed by a trusted authority and that the host you're connecting to is the same as the hostname in the certificate.
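To tie the parameters above together, the following is a minimal sketch of opening a `verify-full` connection from Java. The host, database, credentials, and certificate paths are hypothetical placeholders; only the property names (`sslmode`, `sslrootcert`, `sslcert`, `sslkey`) come from the description above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class VerifyFullExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.edb.Driver");

        // Hypothetical server, database, and credentials.
        String url = "jdbc:edb://db.example.com:5444/edb";

        Properties props = new Properties();
        props.setProperty("user", "enterprisedb");
        props.setProperty("password", "secret");
        // Encrypt the connection and verify both the certificate chain and the hostname.
        props.setProperty("sslmode", "verify-full");
        // Override the default certificate locations described above (hypothetical paths).
        props.setProperty("sslrootcert", "/path/to/root.crt");
        props.setProperty("sslcert", "/path/to/postgresql.crt");
        props.setProperty("sslkey", "/path/to/postgresql.pk8");

        try (Connection con = DriverManager.getConnection(url, props)) {
            System.out.println("Connected with sslmode=verify-full");
        }
    }
}
```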
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx
index bbd815519a9..290bbeaeeae 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/03_testing_the_ssl_jdbc_connection.mdx
@@ -1,5 +1,5 @@
---
-title: "Testing the SSL JDBC Connection"
+title: "Testing the SSL JDBC connection"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,15 +11,19 @@ legacyRedirectsGenerated:
-If you are using Java's default mechanism (not `LibPQFactory`) to create the SSL connection, you need to make the server certificate available to Java, which can be achieved by implementing steps given below:
+If you're using Java's default mechanism (not `LibPQFactory`) to create the SSL connection, you need to make the server certificate available to Java.
1. Set the following property in the Java program.
+ ```text
String url=“jdbc:edb://localhost/test?user=fred&password=secret&ssl=true”;
+ ```
2. Convert the server certificate to Java format:
- `$ openssl x509 -in server.crt -out server.crt.der -outform der`
+ ```text
+ $ openssl x509 -in server.crt -out server.crt.der -outform der
+ ```
3. Import this certificate into Java's system truststore.
@@ -27,7 +31,7 @@ If you are using Java's default mechanism (not `LibPQFactory`) to create the SSL
$ keytool -keystore $JAVA_HOME/lib/security/cacerts -alias postgresql-import -file server.crt.der
```
-4. If you do not have access to the system cacerts truststore, create your own truststore as below:
+4. If you don't have access to the system cacerts truststore, create your own truststore.
```text
$ keytool -keystore mystore -alias postgresql -import -file server.crt.der
@@ -39,20 +43,18 @@ If you are using Java's default mechanism (not `LibPQFactory`) to create the SSL
$ java -Djavax.net.ssl.trustStore=mystore com.mycompany.MyApp
```
-For example:
+ For example:
-```text
-$java -classpath .:/usr/edb/jdbc/edb-jdbc18.jar–
-Djavax.net.ssl.trustStore=mystore pg_test2 public
-```
+ ```text
+ $java -classpath .:/usr/edb/jdbc/edb-jdbc18.jar–
+ Djavax.net.ssl.trustStore=mystore pg_test2 public
+ ```
!!! Note
- To troubleshoot connection issues, add `-Djavax.net.debug=ssl` to the java command.
-
-## Using SSL without Certificate Validation
+ To troubleshoot connection issues, add `-Djavax.net.debug=ssl` to the Java command.
-By default the combination of `SSL=true` and setting the connection URL parameter `sslfactory=com.edb.ssl.NonValidatingFactory` encrypts the connection but does not validate the SSL certificate. To enforce certificate validation, you must use a `Custom SSLSocketFactory`.
+## Using SSL without certificate validation
-For more details about writing a `Custom SSLSocketFactory`, refer to:
+By default, the combination of `SSL=true` and setting the connection URL parameter `sslfactory=com.edb.ssl.NonValidatingFactory` encrypts the connection but doesn't validate the SSL certificate. To enforce certificate validation, you must use a `Custom SSLSocketFactory`.
-
+For more details about writing a `Custom SSLSocketFactory`, see the [PostgreSQL documentation](https://jdbc.postgresql.org/documentation/head/ssl-factory.html).
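For comparison with the validating modes covered earlier, here is a minimal sketch of the non-validating combination described in this section, using a hypothetical host and credentials. It encrypts the connection but skips certificate validation, so it's suitable only where that trade-off is acceptable.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class NonValidatingExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.edb.Driver");

        // Hypothetical host and credentials. ssl=true combined with NonValidatingFactory
        // encrypts the connection without validating the server certificate.
        String url = "jdbc:edb://localhost:5444/edb"
                + "?user=enterprisedb&password=secret"
                + "&ssl=true&sslfactory=com.edb.ssl.NonValidatingFactory";

        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Encrypted (non-validating) connection established");
        }
    }
}
```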
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx
index 4a17c96fb9c..348ca6060a6 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/04_using_certificate_authentication_without_a_password.mdx
@@ -1,5 +1,5 @@
---
-title: "Using Certificate Authentication Without a Password"
+title: "Using certificate authentication without a password"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,7 +11,7 @@ legacyRedirectsGenerated:
-To use certificate authentication without a password, you must:
+To use certificate authentication without a password:
1. Convert the client certificate to DER format.
@@ -27,7 +27,7 @@ $ openssl pkcs8 -topk8 -outform DER -in postgresql.key -out
postgresql.key.pk8 –nocrypt
```
-3. Copy the client files (`postgresql.crt.der`, `postgresql.key.pk8`) and root certificate to the client machine and use the following properties in your java program to test it:
+3. Copy the client files (`postgresql.crt.der`, `postgresql.key.pk8`) and root certificate to the client machine and use the following properties in your Java program to test it:
```text
String url = "jdbc:edb://snvm001:5444/edbstore";
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/index.mdx
index 17b6faefbac..d41dfdd27d8 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/01_using_ssl/index.mdx
@@ -11,13 +11,13 @@ legacyRedirectsGenerated:
-In this section, you will learn about:
+When using SSL, consider the following:
- Configuring the server
- Configuring the client
-- Testing the SSL JDBC Connection
-- Using SSL without Certificate Validation
-- Using Certificate Authentication (without a password)
+- Testing the SSL JDBC connection
+- Using SSL without certificate validation
+- Using certificate authentication without a password
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/02_scram_compatibility.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/02_scram_compatibility.mdx
index dad168aaf79..4e57e593dbd 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/02_scram_compatibility.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/02_scram_compatibility.mdx
@@ -1,5 +1,5 @@
---
-title: "Scram Compatibility"
+title: "Scram compatibility"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -10,4 +10,4 @@ legacyRedirectsGenerated:
-The EDB JDBC driver provides SCRAM-SHA-256 support for Advanced Server versions 10, 11, and 12. For JRE/JDK version 1.8, this support is available from EDB JDBC Connector release 42.2.2.1 onwards; for JRE/JDK version 1.7, this support is available from EDB JDBC Connector release 42.2.5 onwards.
+The EDB JDBC driver provides SCRAM-SHA-256 support for EDB Postgres Advanced Server versions 10, 11, and 12. For JRE/JDK version 1.8, this support is available from EDB JDBC Connector release 42.2.2.1 onwards. For JRE/JDK version 1.7, this support is available from EDB JDBC Connector release 42.2.5 onwards.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx
index 6429d08c480..f9f9e8ebbaa 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/03_support_for_gssapi_encrypted_connection.mdx
@@ -1,51 +1,44 @@
---
-title: "Support for GSSAPI Encrypted Connection"
+title: "Support for GSSAPI-encrypted connection"
---
!!! Tip "New Feature "
- Support for GSSAPI cncrypted connections is available in EDB JDBC Connector release 42.2.19.1 and later.
+ Support for GSSAPI-encrypted connections is available in EDB JDBC Connector release 42.2.19.1 and later.
-The EDB JDBC driver supports GSSAPI encrypted connections for EDB Postgres Advanced Server 12 onwards.
+The EDB JDBC driver supports GSSAPI-encrypted connections for EDB Postgres Advanced Server 12 onwards.
-`gssEncMode` parameter controls GSSAPI Encrypted Connection. The parameter can have any of the below four values:
+The `gssEncMode` parameter controls GSSAPI-encrypted connection. The parameter can have any of these values:
-- `Disable`
-Disables any attempt to connect using GSS encrypted mode.
+- `Disable`. Disables any attempt to connect using GSS-encrypted mode.
-- `Allow`
-Attempts to connect in plain text then if the server requests it will switch to encrypted mode.
+- `Allow`. Attempts to connect in plain text. Then, if the server requests it, it switches to encrypted mode.
-- `Prefer`
-Attempts to connect in encrypted mode and fall back to plain text if it fails to acquire an encrypted connection.
+- `Prefer`. Attempts to connect in encrypted mode and falls back to plain text if it fails to acquire an encrypted connection.
-- `Require`
-Attempts to connect in encrypted mode and will fail to connect if that is not possible.
+- `Require`. Attempts to connect in encrypted mode and fails to connect if that isn't possible.
-## GSSAPI/SSPI Authentication
+## GSSAPI/SSPI authentication
The default behavior of GSSAPI/SSPI authentication on Windows and Linux platforms is as follows:
-- By default, on Windows, the EDB JDBC driver tries to connect using SSPI.
-- By default, on Linux, the EDB JDBC Driver tries to connect using GSSAPI.
+- On Windows, the EDB JDBC driver tries to connect using SSPI.
+- On Linux, the EDB JDBC driver tries to connect using GSSAPI.
-This default behavior is controlled using the `gsslib` connection parameter that takes one of the following three values:
+This default behavior is controlled using the `gsslib` connection parameter that takes one of the following values:
-- `auto`
-The driver attempts for SSPI authentication when the server requests it, the EDB JDBC client is running on Windows, and the Waffle libraries required for SSPI are on the CLASSPATH. Otherwise it opts for Kerberos/GSSAPI authentication via JSSE. Note that unlike libpq, the EDB JDBC driver does not use Windows' SSPI libraries for Kerberos (GSSAPI) requests.
+- `auto`. The driver attempts SSPI authentication when the server requests it, the EDB JDBC client is running on Windows, and the waffle libraries required for SSPI are on the CLASSPATH. Otherwise, it opts for Kerberos/GSSAPI authentication via JSSE. Unlike libpq, the EDB JDBC driver doesn't use the Windows SSPI libraries for Kerberos (GSSAPI) requests.
-- `gssapi`
-This option forces JSSE's GSSAPI authentication even when SSPI is available.
+- `gssapi`. This option forces JSSE's GSSAPI authentication even when SSPI is available.
-- `sspi`
-This option forces SSPI authentication. This authentication fails on Linux or where SSPI is unavailable.
+- `sspi`. This option forces SSPI authentication. This authentication fails on Linux or where SSPI is unavailable.
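
For example, to force JSSE's GSSAPI path regardless of platform, the value can be passed in the URL. A hedged sketch (host and database are assumptions):

```text
jdbc:edb://localhost:5444/edb?gsslib=gssapi
```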
-## Using SSPI (Windows only environment)
+## Using SSPI (Windows-only environment)
-When the EDB Advanced Server and JDBC client both are on Windows, JDBC driver connects with SSPI authentication using one of the following connection strings:
+When the EDB Postgres Advanced Server and the JDBC client are both on Windows, the JDBC driver connects with SSPI authentication using one of the following connection strings:
```text
Class.forName("com.edb.Driver");
@@ -56,12 +49,12 @@ con = DriverManager.getConnection("jdbc:edb://localhost:5444/edb?gsslib=sspi");
!!! Note
- - gsslib=sspi is optional because the server requires SSPI authentication.
+ - `gsslib=sspi` is optional because the server requires SSPI authentication.
- There is no need to specify username and password. The logged-in user credentials are used to authenticate the user.
-**Example**:
+### Example
-The example assumes that SSPI authentication is configured on Windows machine. Suppose `edb-jdbc18.jar` path is `` and waffle libraries path is ``. Here is how to set `CLASSPATH` and run JEdb sample:
+The example assumes that SSPI authentication is configured on a Windows machine. Suppose the `edb-jdbc18.jar` path is `` and the Waffle libraries path is ``. Here's how to set `CLASSPATH` and run the JEdb sample:
```text
set CLASSPATH=\edb-jdbc18.jar;\*;
@@ -69,12 +62,11 @@ javac JEdb.java
java JEdb
```
-## Using GSSAPI (Linux only environment)
+## Using GSSAPI (Linux-only environment)
-When the EDB Advanced Server and JDBC client both are on Linux, JDBC driver connects with GSSAPI authentication using the following connection string:
+When the EDB Postgres Advanced Server and the JDBC client are both on Linux, the JDBC driver connects with GSSAPI authentication using the following connection string:
```text
-
Class.forName("com.edb.Driver");
Properties connectionProps = new Properties();
connectionProps.setProperty("user", "postgres/myrealm.example@MYREALM.EXAMPLE");
@@ -86,10 +78,11 @@ con = DriverManager.getConnection(databaseUrl, connectionProps);
`gsslib=gssapi` is optional because the server requires GSSAPI authentication.
-**Example**:
+### Example
+
+This example assumes that GSS authentication is configured on a Linux machine.
-This example assumes that GSS Authentication is configured on Linux machine.
-Create a file named pgjdbc.conf with the following contents.
+Create a file named `pgjdbc.conf` with the following contents:
```text
pgjdbc {
@@ -101,22 +94,22 @@ debug=true;
};
```
-Suppose pgjdbc.conf is placed at `/etc/pgjdbc.conf`. Here is how to run JEdb sample:
+Suppose `pgjdbc.conf` is placed at `/etc/pgjdbc.conf`. Here's how to run the JEdb sample:
```text
javac JEdb.java
java -Djava.security.auth.login.config=/etc/pgjdbc.conf -cp .:edb-jdbc18.jar JEdb
```
-## Using SSPI/GSSAPI (Windows + Linux environment)
+## Using SSPI/GSSAPI (Windows and Linux environment)
-When the EDB Advanced Server is on Linux with authentication configured as GSSAPI and JDBC client is on Windows, the EDB JDBC connects either using SSPI or GSSAPI authentication.
+When the EDB Postgres Advanced Server is on Linux with authentication configured as GSSAPI, and the JDBC client is on Windows, the EDB JDBC driver connects using either SSPI or GSSAPI authentication.
-For `gsslib=gssapi` or `gsslib=auto`, EDB JDBC uses SSPI while for `gsslib=gssapi` it uses GSSAPI authentication.
+For `gsslib=sspi` or `gsslib=auto`, EDB JDBC uses SSPI. For `gsslib=gssapi`, it uses GSSAPI authentication.
-**Example**:
+### Example
-This example assumes that GSS authentication is configured between Windows Active Directory and Linux machine.
+This example assumes that GSS authentication is configured between Windows Active Directory and a Linux machine.
**SSPI**
@@ -129,7 +122,7 @@ connectionProps.setProperty("user", "david@MYREALM.EXAMPLE");
String databaseUrl = "jdbc:edb://pg.myrealm.example:5444/edb?gsslib=sspi";
con = DriverManager.getConnection(databaseUrl, connectionProps);
```
-Running EDB JDBC based app in this case is the same as described in the Section: Using SSPI (Windows only environment).
+Running an EDB JDBC-based app in this case is the same as described in [Using SSPI (Windows-only environment)](#using-sspi-windows-only-environment).
**GSSAPI**
@@ -143,19 +136,19 @@ String databaseUrl = "jdbc:edb://pg.myrealm.example:5444/edb?gsslib=gssapi";
con = DriverManager.getConnection(databaseUrl, connectionProps);
```
-Set Up the Kerberos Credential Cache File and obtain a ticket.
+Set up the Kerberos credential cache file and obtain a ticket.
-Create a new directory say `c:\temp`, and a system environment variable `KRB5CCNAME`. In the variable value field, enter `c:\temp\krb5cache`.
+Create a new directory, say `c:\temp`, and a system environment variable `KRB5CCNAME`. In the variable value field, enter `c:\temp\krb5cache`.
!!! Note
- krb5cache is a file that is managed by the Kerberos software.
+ `krb5cache` is a file that's managed by the Kerberos software.
-Obtain a Ticket for a Kerberos Principal either using MIT Kerberos Ticket Manager or using a keytab file using ktpass utility.
+Obtain a ticket for a Kerberos principal either by using MIT Kerberos Ticket Manager or by using a keytab file generated with the `ktpass` utility.
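
As a rough illustration only, and assuming MIT Kerberos for Windows is installed, these steps might look like the following at a command prompt (the principal name is a placeholder):

```text
:: Hedged sketch - principal and paths are illustrative, not from the original doc
mkdir c:\temp
setx KRB5CCNAME c:\temp\krb5cache
kinit david@MYREALM.EXAMPLE
klist
```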
-Create `pgjdbc.conf` file with the same contents as described in Section: Using GSSAPI (Linux only environment).
+Create the `pgjdbc.conf` file with the same contents as described in [Using GSSAPI (Linux-only environment)](#using-gssapi-linux-only-environment).
-Suppose `pgjdbc.conf` is placed at `c:\pgjdbc.conf`. Here is how to run JEdb sample:
+Suppose `pgjdbc.conf` is placed at `c:\pgjdbc.conf`. Here's how to run the JEdb sample:
```text
set CLASSPATH=C:\Program Files\edb\jdbc\edb-jdbc18.jar;
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/index.mdx
index e1a33a1ef67..df67acd539c 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/09_security_and_encryption/index.mdx
@@ -1,5 +1,5 @@
---
-title: "Security and Encryption"
+title: "Security and encryption"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
From 4cddd4b2ae0612b7dd4acdfa64bf9bcad9763a0e Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 31 Mar 2022 15:34:41 -0400
Subject: [PATCH 35/39] Rest of the edit of JDBC Connector
---
.../42.3.2.1/02_requirements_overview.mdx | 4 +-
...dvanced_server_jdbc_connector_overview.mdx | 44 +++++++---------
.../42.3.2.1/05a_using_advanced_queueing.mdx | 52 +++++++++----------
...ting_sql_commands_with_executeUpdate().mdx | 47 +++++++++--------
..._graphical_interface_to_a_java_program.mdx | 20 +++----
...advanced_server_jdbc_connector_logging.mdx | 16 +++---
.../42.3.2.1/11_reference_jdbc_data_types.mdx | 8 +--
.../docs/jdbc_connector/42.3.2.1/index.mdx | 8 ++-
8 files changed, 97 insertions(+), 102 deletions(-)
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/02_requirements_overview.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/02_requirements_overview.mdx
index 33afe535cf0..1587bff12a4 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/02_requirements_overview.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/02_requirements_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Requirements Overview"
+title: "Requirements overview"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -22,6 +22,6 @@ This table lists the latest JDBC Connector versions and their supported correspo
| [42.2.9.1](01_jdbc_rel_notes/12_jdbc_42.2.9.1_rel_notes.mdx) | N | N | Y | Y | Y |
| [42.2.8.1](01_jdbc_rel_notes/12_jdbc_42.2.8.1_rel_notes.mdx) | N | N | Y | Y | Y |
-## Supported JDK Distribution
+## Supported JDK distribution
Java Virtual Machine (JVM): Java SE 8 or higher (LTS version), including Oracle JDK, OpenJDK, and IBM SDK (Java) distributions.
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/03_advanced_server_jdbc_connector_overview.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/03_advanced_server_jdbc_connector_overview.mdx
index 1d72a480462..4056d186223 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/03_advanced_server_jdbc_connector_overview.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/03_advanced_server_jdbc_connector_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Advanced Server JDBC Connector Overview"
+title: "EDB Postgres Advanced Server JDBC Connector overview"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,72 +11,68 @@ legacyRedirectsGenerated:
-Sun Microsystems created a standardized interface for connecting Java applications to databases known as Java Database Connectivity (JDBC). The EDB JDBC Connector connects a Java application to a Postgres database.
+Sun Microsystems created a standardized interface for connecting Java applications to databases, known as Java Database Connectivity (JDBC). The EDB JDBC Connector connects a Java application to a Postgres database.
-## JDBC Driver Types
+## JDBC driver types
-There are currently four different types of JDBC drivers, each with their own specific implementation, use and limitations. The EDB JDBC Connector is a Type 4 driver.
+There are currently four types of JDBC drivers, each with its own implementation, use, and limitations. The EDB JDBC Connector is a Type 4 driver.
-Type 1 Driver
+Type 1 driver
- This driver type is the JDBC-ODBC bridge.
-- It is limited to running locally.
+- It's limited to running locally.
- Must have ODBC installed on computer.
- Must have ODBC driver for specific database installed on computer.
- Generally can’t run inside an applet because of Native Method calls.
-Type 2 Driver
+Type 2 driver
- This is the native database library driver.
- Uses Native Database library on computer to access database.
- Generally can’t run inside an applet because of Native Method calls.
- Must have database library installed on client.
-Type 3 Driver
+Type 3 driver
- 100% Java Driver, no native methods.
-- Does not require pre-installation on client.
+- Doesn't require preinstallation on client.
- Can be downloaded and configured on-the-fly just like any Java class file.
- Uses a proprietary protocol for talking with a middleware server.
-- Middleware server converts from proprietary calls to DBMS specific calls
+- Middleware server converts from proprietary calls to DBMS specific calls.
-Type 4 Driver
+Type 4 driver
-- 100% Java Driver, no native methods.
-- Does not require pre-installation on client.
+- 100% Java driver, no native methods.
+- Doesn't require preinstallation on client.
- Can be downloaded and configured on-the-fly just like any Java class file.
-- Unlike Type III driver, talks directly with the DBMS server.
+- Unlike Type 3 driver, talks directly with the DBMS server.
- Converts JDBC calls directly to database specific calls.
-## The JDBC Interface
+## The JDBC interface
The following figure shows the core API interfaces in the JDBC specification and how they relate to each other. These interfaces are implemented in the `java.sql` package.
![JDBC Class Relationships](images/jdbc_class_relationships.png)
-JDBC Class Relationships
-## JDBC Classes and Interfaces
+## JDBC classes and interfaces
-The core API is composed of classes and interfaces; these classes and interfaces work together as shown below:
+The core API is composed of classes and interfaces. These classes and interfaces work together as shown in the figure:
![Core Classes and Interfaces](images/core_classes_and_interfaces.png)
-Core Classes and Interfaces
## The JDBC DriverManager
-The figure below depicts the role of the `DriverManager` class in a typical JDBC application. The `DriverManager` acts as the bridge between a Java application and the backend database and determines which JDBC driver to use for the target database.
+This figure depicts the role of the `DriverManager` class in a typical JDBC application. The `DriverManager` acts as the bridge between a Java application and the backend database and determines the JDBC driver to use for the target database.
![DriverManager/Drivers](images/drivermanager_drivers.png)
-DriverManager/Drivers
-
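
As a brief, hedged illustration of that flow (the connection details are placeholders, not part of the original figure):

```text
// Minimal sketch: DriverManager selects the EDB driver based on the jdbc:edb:// URL.
// Assumes java.sql.* imports; host, port, database, and credentials are assumed values.
Class.forName("com.edb.Driver");
Connection con = DriverManager.getConnection(
    "jdbc:edb://localhost:5444/edb", "enterprisedb", "password");
```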
-## Advanced Server JDBC Connector Compatibility
+## EDB Postgres Advanced Server JDBC Connector compatibility
-This is the current version of the driver. Unless you have unusual requirements (running old applications or JVMs), this is the driver you should be using. This driver supports PostgreSQL 9.6 or higher versions and requires Java 8 or higher versions. It contains support for SSL and the javax.sql package.
+This is the current version of the driver. Unless you have unusual requirements (running old applications or JVMs), this is the driver to use. This driver supports PostgreSQL 9.6 or higher versions and requires Java 8 or higher versions. It contains support for SSL and the `javax.sql` package.
!!! Note
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05a_using_advanced_queueing.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05a_using_advanced_queueing.mdx
index c62d7b3ee58..10cbfbf4c5f 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05a_using_advanced_queueing.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05a_using_advanced_queueing.mdx
@@ -1,37 +1,37 @@
---
-title: "Using Advanced Queueing"
+title: "Using advanced queueing"
---
!!! Tip "New Feature "
- Advanced Queueing is available in JDBC 42.3.2.1 and later.
+ Advanced queueing is available in JDBC 42.3.2.1 and later.
-EDB Postgres Advanced Server advanced queueing provides message queueing and message processing for the Advanced Server database. User-defined messages are stored in a queue and a collection of queues is stored in a queue table. You must first create a queue table before creating a queue that depends on it.
+EDB Postgres Advanced Server advanced queueing provides message queueing and message processing for the EDB Postgres Advanced Server database. User-defined messages are stored in a queue, and a collection of queues is stored in a queue table. You must first create a queue table before creating a queue that depends on it.
-On the server side, procedures in the `DBMS_AQADM` package create and manage message queues and queue tables. Use the `DBMS_AQ` package to add or remove messages from a queue, or register or unregister a PL/SQL callback procedure. For more information about `DBMS_AQ` and `DBMS_AQADM`, see the [EDB Postgres Advanced Server documentation](../epas/11/epas_compat_bip_guide/03_built-in_packages/02_dbms_aq/#pID0E01HG0HA).
+On the server side, procedures in the `DBMS_AQADM` package create and manage message queues and queue tables. Use the `DBMS_AQ` package to add or remove messages from a queue or register or unregister a PL/SQL callback procedure. For more information about `DBMS_AQ` and `DBMS_AQADM`, see the [EDB Postgres Advanced Server documentation](../epas/11/epas_compat_bip_guide/03_built-in_packages/02_dbms_aq/#pID0E01HG0HA).
-On the client side, application uses EDB-JDBC driver's JMS API to enqueue and dequeue message.
+On the client side, the application uses the EDB-JDBC driver's JMS API to enqueue and dequeue messages.
-## Enqueueing or Dequeueing a Message
+## Enqueueing or dequeueing a message
-For more information about using Advanced Server's advanced queueing functionality, see the [Database Compatibility for Oracle Developers Built-in Package Guide](/epas/latest/).
+For more information about using EDB Postgres Advanced Server's advanced queueing functionality, see the [Database Compatibility for Oracle Developers Built-in Package Guide](/epas/latest/).
-## Server-Side Setup
+## Server-side setup
-To use Advanced Queueing functionality on your JMS-based Java application, first create a user-defined type, queue table, and queue. Then start the queue on the database server. You can use either EDB-PSQL or EDB-JDBC JMS API in the Java application.
+To use advanced queueing functionality on your JMS-based Java application, first create a user-defined type, queue table, and queue. Then start the queue on the database server. You can use either EDB-PSQL or EDB-JDBC JMS API in the Java application.
### Using EDB-PSQL
Invoke EDB-PSQL and connect to the EDB Postgres Advanced Server host database. Use the following SPL commands at the command line.
-**Creating a User-defined Type**
+**Creating a user-defined type**
-To specify a RAW data type, you should create a user-defined type. This example creates a user-defined type named as `mytype`.
+To specify a RAW data type, create a user-defined type. This example creates a user-defined type named `mytype`.
```Text
CREATE TYPE mytype AS (code int, project TEXT);
```
-**Create the Queue Table**
+**Create the queue table**
A queue table can hold multiple queues with the same payload type. This example creates a table named `MSG_QUEUE_TABLE`.
@@ -43,15 +43,15 @@ EXEC DBMS_AQADM.CREATE_QUEUE_TABLE
END;
```
-**Create the Queue**
+**Create the queue**
-This example creates a queue named `MSG_QUEUE` within the table `MSG_QUEUE_TABLE`.
+This example creates a queue named `MSG_QUEUE` in the table `MSG_QUEUE_TABLE`.
```Text
EXEC DBMS_AQADM.CREATE_QUEUE ( queue_name => 'MSG_QUEUE', queue_table => 'MSG_QUEUE_TABLE', comment => 'This queue contains pending messages.');
```
-**Start the Queue**
+**Start the queue**
Once the queue is created, invoke the following SPL code at the command line to start a queue in the EDB database.
@@ -91,13 +91,13 @@ queue.setEdbQueueTbl(queueTable);
queue.start();
```
-## Client-Side Example
+## Client-side example
After you create a user-defined type followed by queue table and queue, start the queue. Then, you can enqueue or dequeue a message using EDB-JDBC driver's JMS API.
Create a Java project and add the `edb-jdbc18.jar` from the `edb-jdbc` installation directory to its libraries.
-Create a Java Bean corresponding to the type created above.
+Create a Java Bean corresponding to the type you created.
```Text
package mypackage;
@@ -154,11 +154,11 @@ public class MyType extends UDTType {
}
```
-### Enqueue and Dequeue a Message
+### Enqueue and dequeue a message
To enqueue and dequeue a message:
-1. Create a JMS Connection factory and create a queue connection.
+1. Create a JMS connection factory and create a queue connection.
2. Create a queue session.
```Text
@@ -171,13 +171,13 @@ session = (EDBJmsQueueSession) conn.createQueueSession(true, Session.CLIENT_ACKN
queue = new EDBJmsQueue("MSG_QUEUE");
```
-### Enqueue a Message
+### Enqueue a message
To enqueue a message:
1. Create `EDBJmsMessageProducer` from the session.
2. Create the enqueue message.
-3. Call `EDBJmsMessageProducer.send` method.
+3. Call the `EDBJmsMessageProducer.send` method.
```Text
messageProducer = (EDBJmsMessageProducer) session.createProducer(queue);
@@ -191,12 +191,12 @@ udtType1.setName("mytype");
messageProducer.send(udtType1);
```
-### Dequeue a Message
+### Dequeue a message
To dequeue a message:
1. Create `EDBJmsMessageConsumer` from the session.
-2. Call `EDBJmsMessageConsumer.Receive` method.
+2. Call the `EDBJmsMessageConsumer.Receive` method.
```Text
messageConsumer = (EDBJmsMessageConsumer) session.createConsumer(queue);
@@ -207,9 +207,9 @@ queue.setTypeName("mytype");
Message message = messageConsumer.receive();
```
-## A Complete Enqueue and Dequeue Program
+## A complete enqueue and dequeue program
-This example shows enqueue and dequeue. User-defined type, queue table, and queue are created using EDB-PSQL and the queue is started.
+This example shows enqueue and dequeue. User-defined type, queue table, and queue are created using EDB-PSQL, and the queue is started.
```Text
package mypackage;
@@ -283,7 +283,7 @@ public class JMSClient {
}
```
-This example shows enqueue, dequeue, and creating the user-defined type, queue table and queue. It also starts the queue.
+This example shows enqueue, dequeue, and creating the user-defined type, queue table, and queue. It also starts the queue.
```Text
package mypackage;
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/06_executing_sql_commands_with_executeUpdate().mdx b/product_docs/docs/jdbc_connector/42.3.2.1/06_executing_sql_commands_with_executeUpdate().mdx
index 2c9b2569fd0..f089cae8c01 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/06_executing_sql_commands_with_executeUpdate().mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/06_executing_sql_commands_with_executeUpdate().mdx
@@ -1,5 +1,5 @@
---
-title: "Executing SQL Commands with executeUpdate() or through PrepareStatement Objects"
+title: "Executing SQL commands with executeUpdate() or through PreparedStatement objects"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,7 +11,7 @@ legacyRedirectsGenerated:
-In the previous example `ListEmployees` executed a `SELECT` statement using the `Statement.executeQuery()` method. `executeQuery()` was designed to execute query statements so it returns a `ResultSet` that contains the data returned by the query. The `Statement` class offers a second method that you should use to execute other types of commands (`UPDATE`, `INSERT`, `DELETE`, and so forth). Instead of returning a collection of rows, the `executeUpdate()` method returns the number of rows affected by the SQL command it executes.
+In the previous example, `ListEmployees` executed a `SELECT` statement using the `Statement.executeQuery()` method. `executeQuery()` was designed to execute query statements so it returns a `ResultSet` that contains the data returned by the query. The `Statement` class offers a second method that you use to execute other types of commands (`UPDATE`, `INSERT`, `DELETE`, and so forth). Instead of returning a collection of rows, the `executeUpdate()` method returns the number of rows affected by the SQL command it executes.
The signature of the `executeUpdate()` method is:
@@ -19,16 +19,15 @@ The signature of the `executeUpdate()` method is:
int executeUpdate(String sqlStatement)
```
-Provide this method a single parameter of type `String`, containing the SQL command that you wish to execute.
+Provide this method a single parameter of type `String` containing the SQL command that you want to execute.
-## Using executeUpdate() to INSERT Data
+## Using executeUpdate() to INSERT data
-The example that follows demonstrates using the `executeUpdate()` method to add a row to the `emp` table.
+The example that follows shows using the `executeUpdate()` method to add a row to the `emp` table.
!!! Note
- The following example is not a complete application, only a method - the samples in the remainder of this document do not include the code required to set up and tear down a `Connection`. To experiment with the example, you must provide a class that invokes the sample code.
+ The following example isn't a complete application, only a method. These code samples don't include the code required to set up and tear down a `Connection`. To experiment with the example, you must provide a class that invokes the sample code.
-Listing 1.3
```text
public void updateEmployee(Connection con)
@@ -48,11 +47,12 @@ Listing 1.3
}
```
-The `updateEmployee()` method expects a single argument from the caller, a `Connection` object that must be connected to an Advanced Server database:
+The `updateEmployee()` method expects a single argument from the caller, a `Connection` object that must be connected to an EDB Postgres Advanced Server database:
```text
public void updateEmployee(Connection con);
```
+
The `executeUpdate()` method returns the number of rows affected by the SQL statement (an `INSERT` typically affects one row, but an `UPDATE` or `DELETE` statement can affect more).
```text
@@ -60,7 +60,7 @@ The `executeUpdate()` method returns the number of rows affected by the SQL stat
+" VALUES(6000,'Jones')");
```
-If `executeUpdate()` returns without throwing an error, the call to `System.out.println` displays a message to the user that shows the number of rows affected.
+If `executeUpdate()` returns without an error, the call to `System.out.println` displays a message to the user that shows the number of rows affected.
```text
System.out.println("");
@@ -77,13 +77,13 @@ The catch block displays an appropriate error message to the user if the program
}
```
-You can use `executeUpdate()` with any SQL command that does not return a result set. However, you probably want to use `PrepareStatements` when the queries can be parameterized.
+You can use `executeUpdate()` with any SQL command that doesn't return a result set. However, you probably want to use `PreparedStatement` objects when the queries can be parameterized.
-## Using PreparedStatements to Send SQL Commands
+## Using PreparedStatements to send SQL commands
-Many applications execute the same SQL statement over and over again, changing one or more of the data values in the statement between each iteration. If you use a `Statement` object to repeatedly execute a SQL statement, the server must parse, plan, and optimize the statement every time. JDBC offers another `Statement` derivative, the `PreparedStatement` to reduce the amount of work required in such a scenario.
+Many applications execute the same SQL statement over and over again, changing one or more of the data values in the statement between each iteration. If you use a `Statement` object to repeatedly execute a SQL statement, the server must parse, plan, and optimize the statement every time. JDBC offers another `Statement` derivative, the `PreparedStatement`, to reduce the amount of work required in such a scenario.
-Listing 1.4 demonstrates invoking a `PreparedStatement` that accepts an employee ID and employee name and inserts that employee information in the `emp` table:
+The following shows invoking a `PreparedStatement` that accepts an employee ID and employee name and inserts that employee information in the `emp` table:
```text
public void AddEmployee(Connection con)
@@ -103,22 +103,23 @@ Listing 1.4 demonstrates invoking a `PreparedStatement` that accepts an employee
}
}
```
-Instead of hard-coding data values in the SQL statement, you insert `placeholders` to represent the values that will change with each iteration. Listing 1.4 shows an `INSERT` statement that includes two placeholders (each represented by a question mark):
+
+Instead of hard coding data values in the SQL statement, you insert placeholders to represent the values to change with each iteration. The example shows an `INSERT` statement that includes two placeholders (each represented by a question mark):
```text
String command = "INSERT INTO emp(empno,ename) VALUES(?,?)";
```
+
With the parameterized SQL statement in hand, the `AddEmployee()` method can ask the `Connection` object to prepare that statement and return a `PreparedStatement` object:
```text
PreparedStatement stmt = con.prepareStatement(command);
```
-At this point, the `PreparedStatement` has parsed and planned the `INSERT` statement, but it does not know what values to add to the table. Before executing the `PreparedStatement`, you must supply a value for each placeholder by calling a `setter` method. `setObject()` expects two arguments:
-
- - A parameter number; parameter number one corresponds to the first question mark, parameter number two corresponds to the second question mark, etc.
+At this point, the `PreparedStatement` has parsed and planned the `INSERT` statement, but it doesn't know the values to add to the table. Before executing the `PreparedStatement`, you must supply a value for each placeholder by calling a `setter` method. `setObject()` expects two arguments:
- - The value to substitute for the placeholder.
+- A parameter number. Parameter number one corresponds to the first question mark, parameter number two corresponds to the second question mark, and so on.
+- The value to substitute for the placeholder.
The `AddEmployee()` method prompts the user for an employee ID and name and calls `setObject()` with the values supplied by the user:
@@ -127,7 +128,7 @@ The `AddEmployee()` method prompts the user for an employee ID and name and call
stmt.setObject(2, c.readLine("Name:"));
```
-And then asks the PreparedStatement object to execute the statement:
+It then asks the `PreparedStatement` object to execute the statement:
```text
stmt.execute();
@@ -137,7 +138,7 @@ If the SQL statement executes as expected, `AddEmployee()` displays a message th
Some simple syntax examples using `PreparedStatement` sending SQL commands follow:
-To use the UPDATE command to update a row:
+To use the `UPDATE` command to update a row:
```text
String command = " UPDATE emp SET ename=? WHERE empno=?";
@@ -147,7 +148,7 @@ To use the UPDATE command to update a row:
stmt.execute();
```
-To use the DROP TABLE command to delete a table from a database:
+To use the `DROP TABLE` command to delete a table from a database:
```text
String command = "DROP TABLE tableName";
@@ -155,7 +156,7 @@ To use the DROP TABLE command to delete a table from a database:
stmt.execute();
```
-To use the CREATE TABLE command to add a new table to a database:
+To use the `CREATE TABLE` command to add a new table to a database:
```text
 String command = "CREATE TABLE tablename (fieldname NUMBER(4,2), fieldname2 VARCHAR2(30))";
@@ -163,7 +164,7 @@ To use the CREATE TABLE command to add a new table to a database:
stmt.execute();
```
-To use the ALTER TABLE command to change the attributes of a table:
+To use the `ALTER TABLE` command to change the attributes of a table:
```text
String command ="ALTER TABLE tablename ADD COLUMN colname BOOLEAN ";
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/07_adding_a_graphical_interface_to_a_java_program.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/07_adding_a_graphical_interface_to_a_java_program.mdx
index 6e8b1720882..87dda3781df 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/07_adding_a_graphical_interface_to_a_java_program.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/07_adding_a_graphical_interface_to_a_java_program.mdx
@@ -1,5 +1,5 @@
---
-title: "Adding a Graphical Interface to a Java Program"
+title: "Adding a graphical interface to a Java program"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,12 +11,12 @@ legacyRedirectsGenerated:
-With a little extra work, you can add a graphical user interface to a program - the next example (Listing 1.4) demonstrates how to write a Java application that creates a `JTable` (a spreadsheet-like graphical object) and copies the data returned by a query into that `JTable`.
+With a little extra work, you can add a graphical user interface to a program. The next example shows how to write a Java application that creates a `JTable` (a spreadsheet-like graphical object) and copies the data returned by a query into that `JTable`.
!!! Note
- The following sample application is a method, not a complete application. To call this method, provide an appropriate main() function and wrapper class.
+ The following sample application is a method, not a complete application. To call this method, provide an appropriate `main()` function and wrapper class.
+
-Listing 1.5
```text
import java.sql.*;
@@ -73,7 +73,7 @@ import javax.swing.JScrollPane;
import javax.swing.JTable;
```
-The `showEmployees()` method expects a Connection object to be provided by the caller; the `Connection` object must be connected to the Advanced Server:
+The `showEmployees()` method expects a `Connection` object to be provided by the caller. The `Connection` object must be connected to the EDB Postgres Advanced Server:
```text
public void showEmployees(Connection con)
@@ -86,13 +86,13 @@ Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM emp");
```
-As you would expect, `executeQuery()` returns a `ResultSet` object. The `ResultSet` object contains the metadata that describes the `shape` of the result set (that is, the number of rows and columns in the result set, the data type for each column, the name of each column, and so forth). You can extract the metadata from the `ResultSet` by calling the `getMetaData()` method:
+As you'd expect, `executeQuery()` returns a `ResultSet` object. The `ResultSet` object contains the metadata that describes the shape of the result set (that is, the number of rows and columns in the result set, the data type for each column, the name of each column, and so forth). You can extract the metadata from the `ResultSet` by calling the `getMetaData()` method:
```text
ResultSetMetaData rsmd = rs.getMetaData();
```
-Next, `showEmployees()` creates a `Vector` (a one dimensional array) to hold the column headers and then copies each header from the `ResultMetaData` object into the vector:
+Next, `showEmployees()` creates a vector (a one-dimensional array) to hold the column headers and then copies each header from the `ResultMetaData` object into the vector:
```text
Vector labels = new Vector();
@@ -102,7 +102,7 @@ for(int column = 0; column < rsmd.getColumnCount(); column++)
}
```
-With the column headers in place, `showEmployees()` extracts each row from the `ResultSet` and copies it into a new vector (named `rows`). The `rows` vector is actually a vector of vectors: each entry in the `rows` vector contains a vector that contains the data values in that row. This combination forms the two-dimensional array that you will need to build a `JTable`. After creating the rows vector, the program reads through each row in the `ResultSet` (by calling `rs.next()`). For each column in each row, a `getter` method extracts the value at that row/column and adds the value to the `rowValues` vector. Finally, `showEmployee()` adds each `rowValues` vector to the `rows` vector:
+With the column headers in place, `showEmployees()` extracts each row from the `ResultSet` and copies it into a new vector (named `rows`). The `rows` vector is actually a vector of vectors: each entry in the `rows` vector contains a vector that contains the data values in that row. This combination forms the two-dimensional array that you need to build a `JTable`. After creating the `rows` vector, the program reads through each row in the `ResultSet` (by calling `rs.next()`). For each column in each row, a `getter` method extracts the value at that row/column and adds the value to the `rowValues` vector. Finally, `showEmployee()` adds each `rowValues` vector to the `rows` vector:
```text
Vector rows = new Vector();
@@ -127,7 +127,7 @@ jf.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
System.out.println("Command successfully executed");
```
-The `showEmployees()` method includes a `catch` block to intercept any errors that may occur and display an appropriate message to the user:
+The `showEmployees()` method includes a `catch` block to intercept any errors that occur and display an appropriate message to the user:
```text
catch(Exception err)
@@ -138,6 +138,6 @@ catch(Exception err)
}
```
-The result of calling the `showEmployees()` method is shown in below figure:
+The result of calling the `showEmployees()` method is shown in the figure:
![The showEmployees Window](images/the_showemployees_window.png)
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/10_advanced_server_jdbc_connector_logging.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/10_advanced_server_jdbc_connector_logging.mdx
index 3e9b02ec065..c7e3ca8186e 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/10_advanced_server_jdbc_connector_logging.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/10_advanced_server_jdbc_connector_logging.mdx
@@ -1,5 +1,5 @@
---
-title: "Advanced Server JDBC Connector Logging"
+title: "EDB Postgres Advanced Server JDBC Connector logging"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -9,18 +9,18 @@ legacyRedirectsGenerated:
- "/edb-docs/d/jdbc-connector/user-guides/jdbc-guide/42.2.12.1/advanced_server_jdbc_connector_logging.html"
---
-The Advanced Server JDBC Connector supports the use of logging to help resolve issues with the JDBC Connector when used in your application. The JDBC Connector uses the logging APIs of `java.util.logging` that was part of Java since JDK 1.4. For information on `java.util.logging`, see [The PostgreSQL JDBC Driver](https://jdbc.postgresql.org/documentation/head/logging.html).
+The EDB Postgres Advanced Server JDBC Connector supports the use of logging to help resolve issues with the JDBC Connector when used in your application. The JDBC Connector uses the logging APIs of `java.util.logging`, which has been part of Java since JDK 1.4. For information on `java.util.logging`, see [The PostgreSQL JDBC Driver](https://jdbc.postgresql.org/documentation/head/logging.html).
!!! Note
- Previous versions of the Advanced Server JDBC Connector used a custom mechanism to enable logging, which is now replaced by the use of `java.util.logging` in versions moving forward from community version 42.1.4.1. The older mechanism is no longer available.
+    Previous versions of the EDB Postgres Advanced Server JDBC Connector used a custom mechanism to enable logging. Starting with community version 42.1.4.1, it's replaced by the use of `java.util.logging`. The older mechanism is no longer available.
-## Enabling Logging with Connection Properties (static)
+## Enabling logging with connection properties (static)
You can directly configure logging within the connection properties of the JDBC Connector. The connection properties related to logging are `loggerLevel` and `loggerFile`.
`loggerLevel`
-Logger level of the driver. Allowed values are `OFF`, `DEBUG`, or `TRACE`. This option enables the `java.util.logging.Logger` level of the driver to correspond to the following Advanced Server JDBC levels:
+Logger level of the driver. Allowed values are `OFF`, `DEBUG`, or `TRACE`. This option enables the `java.util.logging.Logger` level of the driver to correspond to the following EDB Postgres Advanced Server JDBC levels:
| loggerLevel | java.util.logging |
| ----------- | ----------------- |
@@ -34,9 +34,9 @@ File name output of the logger. The default is `java.util.logging.ConsoleHandler
`jdbc:edb://myhost:5444/mydb?loggerLevel=TRACE&loggerFile=EDB-JDBC.LOG`
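
The same two properties can also be set programmatically. A hedged sketch (server address and credentials are assumptions):

```text
// Sketch only - host, database, and credentials are placeholders.
Properties props = new Properties();
props.setProperty("user", "enterprisedb");
props.setProperty("password", "password");
props.setProperty("loggerLevel", "DEBUG");
props.setProperty("loggerFile", "edb-jdbc.log");
Connection con = DriverManager.getConnection("jdbc:edb://localhost:5444/edb", props);
```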
-## Enabling Logging with logging.properties (dynamic)
+## Enabling logging with logging.properties (dynamic)
-You can use logging properties to configure the driver dynamically (for example, when using the JDBC Connector with an application server such as Tomcat, JBoss, WildFly, etc.), which makes it easier to enable/disable logging at runtime. The following example demonstrates configuring logging dynamically:
+You can use logging properties to configure the driver dynamically (for example, when using the JDBC Connector with an application server such as Tomcat, JBoss, or WildFly), which makes it easier to enable or disable logging at runtime. The following example shows configuring logging dynamically:
```text
handlers = java.util.logging.FileHandler
@@ -53,7 +53,7 @@ java.util.logging.FileHandler.count = 20
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.level = FINEST
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$s %2$s %5$s%6$s%n
```
-Use the following command to set the logging level for the JDBC Connector to FINEST (maps to `loggerLevel`):
+Use the following command to set the logging level for the JDBC Connector to `FINEST` (maps to `loggerLevel`):
`com.edb.level=FINEST`
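
To have the JVM pick up such a `logging.properties` file at startup, one common approach is to pass the standard `java.util.logging` system property (the file path and class name here are placeholders):

```text
java -Djava.util.logging.config.file=/path/to/logging.properties -cp .:edb-jdbc18.jar MyApp
```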
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/11_reference_jdbc_data_types.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/11_reference_jdbc_data_types.mdx
index ed720ee78a5..0a72bb3aa40 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/11_reference_jdbc_data_types.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/11_reference_jdbc_data_types.mdx
@@ -1,5 +1,5 @@
---
-title: "Reference - JDBC Data Types"
+title: "Reference - JDBC data types"
legacyRedirectsGenerated:
# This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -11,7 +11,7 @@ legacyRedirectsGenerated:
-The following table lists the JDBC data types supported by Advanced Server and the JDBC connector. If you are binding to an Advanced Server type (shown in the middle column) using the `setObject()` method, supply a JDBC value of the type shown in the left column. When you retrieve data, the `getObject()` method will return the object type listed in the right-most column:
+The following table lists the JDBC data types supported by EDB Postgres Advanced Server and the JDBC Connector. If you're binding to an EDB Postgres Advanced Server type (shown in the middle column) using the `setObject()` method, supply a JDBC value of the type shown in the left column. When you retrieve data, the `getObject()` method returns the object type listed in the right-most column:
| JDBC Type | Advanced Server Type | getObject() returns |
| -------------------- | ---------------------- | ------------------------------------------ |
@@ -35,6 +35,6 @@ The following table lists the JDBC data types supported by Advanced Server and t
| Types.SQLXML | XML | java.sql.SQLXML |
!!! Note
- `Types.REF_CURSOR` is only supported for JRE 4.2.
+    `Types.REF_CURSOR` is supported only for JDBC 4.2.
-`Types.OTHER` is not only used for UUID, but is also used if you do not specify any specific type and allow the server or the JDBC driver to determine the type. If the parameter is an instance of `java.util.UUID`, the driver determines the appropriate internal type and sends it to the server.
+`Types.OTHER` is not only used for UUID but is also used if you don't specify a type and allow the server or the JDBC driver to determine the type. If the parameter is an instance of `java.util.UUID`, the driver determines the appropriate internal type and sends it to the server.
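
For example, a hedged sketch of binding a `java.util.UUID` through `setObject()` (the table and column names are made up for illustration):

```text
// Sketch only - assumes a table created as: CREATE TABLE items (id UUID, label VARCHAR(64))
PreparedStatement stmt = con.prepareStatement("INSERT INTO items(id, label) VALUES(?, ?)");
stmt.setObject(1, java.util.UUID.randomUUID()); // driver resolves the internal UUID type
stmt.setObject(2, "sample row");
stmt.executeUpdate();
```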
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/index.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/index.mdx
index fae85d33257..ac1b24289c9 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/index.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/index.mdx
@@ -1,5 +1,5 @@
---
-title: "EDB Postgres Advanced Server JDBC Connector Guide"
+title: "EDB Postgres Advanced Server JDBC Connector"
directoryDefaults:
description: "EDB JDBC Connector Version 42.2.12.3 Documentation and release notes."
@@ -27,11 +27,9 @@ legacyRedirectsGenerated:
- "/edb-docs/d/jdbc-connector/user-guides/jdbc-guide/42.2.12.1/index.html"
---
-The EDB JDBC connector provides connectivity between a Java application and an Advanced Server database. This guide provides installation instructions, usage instructions, and examples that demonstrate the Advanced Server specific functionality of the JDBC Connector.
+The EDB JDBC Connector provides connectivity between a Java application and an EDB Postgres Advanced Server database. The EDB JDBC Connector is written in Java and conforms to Sun's JDK architecture. For more information, see [JDBC driver types](03_advanced_server_jdbc_connector_overview/#jdbc-driver-types).
-The EDB JDBC connector is written in Java and conforms to Sun's JDK architecture. For more information, see [JDBC Driver Types](03_advanced_server_jdbc_connector_overview/#jdbc-driver-types)
-
-The EDB JDBC connector is built on and supports all of the functionality of the PostgreSQL community driver. For more information about the features and functionality of the driver, please refer to the community [documentation](https://jdbc.postgresql.org/documentation/head/index.html).
+The EDB JDBC Connector is built on and supports all of the functionality of the PostgreSQL community driver. For more information about the features and functionality of the driver, see the community [documentation](https://jdbc.postgresql.org/documentation/head/index.html).
From ce5d339723620547410f3aaf45b8ceee8bcb94d0 Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Thu, 31 Mar 2022 15:50:23 -0400
Subject: [PATCH 36/39] Adding Migration Handbook to product search
---
advocacy_docs/migrating/oracle/index.mdx | 2 ++
1 file changed, 2 insertions(+)
diff --git a/advocacy_docs/migrating/oracle/index.mdx b/advocacy_docs/migrating/oracle/index.mdx
index 559b3dab93b..3ef6102e709 100644
--- a/advocacy_docs/migrating/oracle/index.mdx
+++ b/advocacy_docs/migrating/oracle/index.mdx
@@ -1,6 +1,8 @@
---
title: 'Oracle to EDB Postgres Advanced Server Migration Handbook'
indexCards: none
+directoryDefaults:
+product: "Migration Handbook"
navigation:
- "#Overview for migrating your Oracle database"
- factors_to_consider
From 9270030a55db29cac9b8ddc662c5a07945c369cf Mon Sep 17 00:00:00 2001
From: Josh Heyer <63653723+josh-heyer@users.noreply.github.com>
Date: Thu, 31 Mar 2022 13:49:03 -0700
Subject: [PATCH 37/39] Indentation
---
advocacy_docs/migrating/oracle/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/advocacy_docs/migrating/oracle/index.mdx b/advocacy_docs/migrating/oracle/index.mdx
index 3ef6102e709..7e4e9dd5a2f 100644
--- a/advocacy_docs/migrating/oracle/index.mdx
+++ b/advocacy_docs/migrating/oracle/index.mdx
@@ -2,7 +2,7 @@
title: 'Oracle to EDB Postgres Advanced Server Migration Handbook'
indexCards: none
directoryDefaults:
-product: "Migration Handbook"
+ product: "Migration Handbook"
navigation:
- "#Overview for migrating your Oracle database"
- factors_to_consider
From bdbfd1ad3c0eca2c43201cc8d6ce42e1b6609cbc Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 1 Apr 2022 11:36:44 +0530
Subject: [PATCH 38/39] edited the release notes
---
.../pem/8/pem_rel_notes/04_840_rel_notes.mdx | 32 +++++++++----------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
index 6274bf292a5..b76dbf8a7c5 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/04_840_rel_notes.mdx
@@ -4,20 +4,20 @@ title: "Version 8.4.0"
New features, enhancements, bug fixes, and other changes in PEM 8.4.0 include:
-| Type | Description | ID |
-| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------| --------- |
-| New Feature | Built-in support for monitoring Barman backups. See [Monitoring Barman](../pem_ent_feat/18_monitoring_barman) for more information. | PEM-4435 |
-| Security Fix | Hardened against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
-| Enhancement | Monitoring of transaction ID (TXID) wraparound ID. | PEM-3990 |
-| Enhancement | Removed unnecessary monitoring of virtual file systems. [Support Ticket #573096] | PEM-806 |
-| Enhancement | Sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
-| Enhancement | Option to disable the Query Tool for users in order to restrict viewing data. [Support Ticket #74976] | PEM-4315 |
-| Enhancement | Support for Postgres extension-based probes for multi-version flexibility. Updating to newer versions are no longer required. | PEM-4391 |
-| Enhancement | Improved the Linux installation instructions with added details and steps. | PEM-4381 |
-| Bug Fix | A new installation grants the `pem_admin` role to the superuser. [Support Ticket #79577] | PEM-4433 |
-| Bug Fix | For display tables, numeric fields sorted by numeric order, not alphabetical order. [Support Ticket #1111704] | PEM-3827 |
-| Bug Fix | Limits the decimal precision displayed for monitoring percentages. | PEM-4144 |
-| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey. [Support Ticket #RT75870] | PEM-4333 |
-| Bug Fix | Probe error for Postgres Extended 14. | PEM-4356 |
-| Bug Fix | PEM agent not gathering data after upgrade. [Support Ticket #78679] | PEM-4430 |
+| Type | Description | ID |
+| ------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| --------- |
+| New Feature | Built-in support for monitoring Barman backups. See [Monitoring Barman](../pem_ent_feat/18_monitoring_barman) for more information. | PEM-4435 |
+| Security Fix | Hardened against unrestricted file uploads as reported in the medium severity CVE-2022-0959. | PEM-4442 |
| Enhancement  | Monitoring of transaction ID (TXID) wraparound to detect exhaustion and prevent failure.                                                                                                                                             | PEM-3990  |
+| Enhancement | Removed unnecessary monitoring of virtual file systems. [Support Ticket #573096] | PEM-806 |
+| Enhancement | Sorting based on status for agent and server tables in the dashboard. | PEM-4152 |
+| Enhancement | Option to disable the Query Tool for users in order to restrict viewing data. [Support Ticket #74976] | PEM-4315 |
| Enhancement  | Support for Postgres extension-based probes for multi-version flexibility. Updating to newer versions is no longer required.                                                                                                         | PEM-4391  |
+| Enhancement | Improved the Linux installation instructions with added details and steps. | PEM-4381 |
+| Bug Fix | A new installation grants the `pem_admin` role to the superuser. [Support Ticket #79577] | PEM-4433 |
+| Bug Fix | For display tables, numeric fields sorted by numeric order, not alphabetical order. [Support Ticket #1111704] | PEM-3827 |
+| Bug Fix | Limits the decimal precision displayed for monitoring percentages. | PEM-4144 |
+| Bug Fix | Duplicate key value violates unique constraint blocked_session_info_pkey. [Support Ticket #RT75870] | PEM-4333 |
+| Bug Fix | Probe error for Postgres Extended 14. | PEM-4356 |
+| Bug Fix | PEM agent not gathering data after upgrade. [Support Ticket #78679] | PEM-4430 |
| Bug Fix | Added an option in preferences to change the line ending of the email body content from LF(Line Feed) to CRLF (Carriage Return Line Feed). This fixes missing alert body content in email notifications. [Support Ticket #833910] | PEM-1832 |
From 2d81156c0f4cd0b97ee0e543f005b8cad1f83c72 Mon Sep 17 00:00:00 2001
From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
Date: Fri, 1 Apr 2022 05:09:41 -0400
Subject: [PATCH 39/39] typo
---
.../06_handling_errors.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
index f38a1d76d26..3e9f8c02cea 100644
--- a/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
+++ b/product_docs/docs/jdbc_connector/42.3.2.1/05_using_the_advanced_server_jdbc_connector_with_java_applications/06_handling_errors.mdx
@@ -13,7 +13,7 @@ legacyRedirectsGenerated:
When connecting to an external resource (such as a database server), errors are bound to occur. Your code must include a way to handle these errors. Both JDBC and the EDB Postgres Advanced Server JDBC Connector provide various types of error handling. The [ListEmployees class example](./#using_the_advanced_server_jdbc_connector_with_java_applications) shows how to handle an error using `try/catch` blocks.
-When a JDBC object retrns an error (an object of type `SQLException` or of a type derived from `SQLException`), the `SQLException` object exposes three different pieces of error information:
+When a JDBC object returns an error (an object of type `SQLException` or of a type derived from `SQLException`), the `SQLException` object exposes three different pieces of error information:
- The error message
- The SQL state