From 15662d953c7d0af716ebf092e4c323e8e3f56ecc Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Mon, 14 Mar 2022 09:33:09 -0400
Subject: [PATCH 01/16] add explicit ssh notes about users
This is a fix for https://enterprisedb.atlassian.net/browse/EFM-1459
---
product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
index c5e5bc36534..92c3f4c081a 100644
--- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
+++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
@@ -65,7 +65,7 @@ to those instructions:
1. Create an integration script that connects to every (remote)
PgBouncer host and runs the redirect script. Locate the script at `/usr/edb/efm-4.2/bin/efm_pgbouncer_functions`. Make sure the user
-efm can execute the script, which has the following contents:
+efm can execute the script, which has the following contents (note that the 'efm' user is ssh'ing as 'enterprised' to run the script):
``` text
@@ -157,6 +157,7 @@ by root and that user/group/other (0755) has read and execute access. The script
For the PgBouncer integration, passwordless `ssh` access is required. There are multiple ways
to configure `ssh`. Follow your organization's recommended process to
configure the passwordless `ssh`. For a quick start, you can also follow this example for configuring passwordless `ssh`.
+The 'efm' user needs to be able to ssh as the user running PgBouncer, that is, the 'enterprisedb' user.
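+For example, you can verify the access with a command such as the following, run on an EFM agent node (the host name 'pgbouncer1' is only a placeholder for one of your PgBouncer hosts). When passwordless `ssh` is configured correctly, the command completes without prompting for a password:
+
+``` text
+sudo -u efm ssh enterprisedb@pgbouncer1 true
+```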
#### Configure on PgBouncer hosts
From ac3128c6aa8ba19b97f5a90a414f5c7b0ea29b47 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Mon, 14 Mar 2022 09:36:29 -0400
Subject: [PATCH 02/16] typo
---
product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
index 92c3f4c081a..1e2ef93e315 100644
--- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
+++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
@@ -65,7 +65,7 @@ to those instructions:
1. Create an integration script that connects to every (remote)
PgBouncer host and runs the redirect script. Locate the script at `/usr/edb/efm-4.2/bin/efm_pgbouncer_functions`. Make sure the user
-efm can execute the script, which has the following contents (note that the 'efm' user is ssh'ing as 'enterprised' to run the script):
+efm can execute the script, which has the following contents (note that the 'efm' user is ssh'ing as 'enterprisedb' to run the script):
``` text
From a8c9d6c2f0a20bf9612f73068bb6307e39a9be97 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Tue, 22 Mar 2022 16:18:15 -0400
Subject: [PATCH 03/16] first round of edits for hadoop foreign data wrapper
---
.../2/02_requirements_overview.mdx | 2 +-
.../2/03_architecture_overview.mdx | 4 +-
.../2/04_supported_authentication_methods.mdx | 20 +-
.../2/06_updating_the_hadoop_data_adapter.mdx | 2 +-
.../2/07_features_of_hdfs_fdw.mdx | 20 +-
...08_configuring_the_hadoop_data_adapter.mdx | 209 ++++++++----------
.../docs/hadoop_data_adapter/2/index.mdx | 6 +-
7 files changed, 122 insertions(+), 141 deletions(-)
diff --git a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
index 5545cf7ef92..15cf1d8588d 100644
--- a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Supported Database Versions"
+title: "Supported database versions"
---
This table lists the latest Hadoop Foreign Data Wrapper versions and their supported corresponding EDB Postgres Advanced Server (EPAS) versions. Hadoop Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server. See Product Compatibility for details.
diff --git a/product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx
index 87c8fb6d024..5b1ac0b2c46 100644
--- a/product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Architecture Overview"
+title: "Architecture overview"
---
@@ -10,4 +10,4 @@ The Hadoop data wrapper provides an interface between a Hadoop file system and a
![Using a Hadoop distributed file system with Postgres](images/hadoop_distributed_file_system_with_postgres.png)
-When possible, the Foreign Data Wrapper asks the Hive or Spark server to perform the actions associated with the `WHERE` clause of a `SELECT` statement. Pushing down the `WHERE` clause improves performance by decreasing the amount of data moving across the network.
+When possible, the foreign data wrapper asks the Hive or Spark server to perform the actions associated with the `WHERE` clause of a `SELECT` statement. Pushing down the `WHERE` clause improves performance by decreasing the amount of data moving across the network.
diff --git a/product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx b/product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx
index 24377cbadda..4baff8285fb 100644
--- a/product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx
@@ -1,14 +1,14 @@
---
-title: "Supported Authentication Methods"
+title: "Supported authentication methods"
---
-The Hadoop Foreign Data Wrapper supports `NOSASL` and `LDAP` authentication modes. To use `NOSASL`, do not specify any `OPTIONS` while creating user mapping. For `LDAP` authentication mode, specify `username` and `password` in `OPTIONS` while creating user mapping.
+The Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication modes. To use NOSASL, don't specify any `OPTIONS` values while creating user mapping. For LDAP authentication mode, specify `username` and `password` in `OPTIONS` while creating user mapping.
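+
+For example, a minimal sketch of the two user mappings might look like the following, using the same role and server names as the examples in [Create user mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping) (the LDAP credentials are placeholders):
+
+```text
+-- NOSASL mode: omit the OPTIONS clause
+CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server;
+
+-- LDAP mode: supply the LDAP user name and password in OPTIONS
+CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server
+  OPTIONS (username 'alice', password '1safepwd');
+```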
-## Using LDAP Authentication
+## Using LDAP authentication
-When using the Hadoop Foreign Data Wrapper with `LDAP` authentication, you must first configure the `Hive Server` or `Spark Server` to use LDAP authentication. The configured server must provide a `hive-site.xml` file that includes the connection details for the LDAP server. For example:
+When using the Hadoop Foreign Data Wrapper with `LDAP` authentication, first configure the Hive server or Spark server to use LDAP authentication. The configured server must provide a `hive-site.xml` file that includes the connection details for the LDAP server. For example:
```text
@@ -38,21 +38,21 @@ When using the Hadoop Foreign Data Wrapper with `LDAP` authentication, you must
```
-Then, when starting the hive server, include the path to the `hive-site.xml` file in the command. For example:
+Then, when starting the Hive server, include the path to the `hive-site.xml` file in the command. For example:
```text
-./hive --config path_to_hive-site.xml_file --service hiveServer2
+./hive --config <path_to_hive-site.xml_file> --service hiveServer2
```
-Where *path_to_hive-site.xml_file* specifies the complete path to the `hive‑site.xml` file.
+Where `<path_to_hive-site.xml_file>` specifies the complete path to the `hive-site.xml` file.
-When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create User Mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping).
+When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create user mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping).
-## Using NOSASL Authentication
+## Using NOSASL authentication
-When using `NOSASL` authentication with the Hadoop Foreign Data Wrapper, set the authorization to `None`, and the authentication method to `NOSASL` on the `Hive Server` or `Spark Server`. For example, if you start the `Hive Server` at the command line, include the `hive.server2.authentication` configuration parameter in the command:
+When using NOSASL authentication with the Hadoop Foreign Data Wrapper, set the authorization to `None` and the authentication method to `NOSASL` on the Hive server or Spark server. For example, if you start the Hive server at the command line, include the `hive.server2.authentication` configuration parameter in the command:
```text
hive --service hiveserver2 --hiveconf hive.server2.authentication=NOSASL
diff --git a/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx
index f036e321d51..51713b808de 100644
--- a/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Upgrading the Hadoop Foreign Data Wrapper"
-If you have an existing installation of MongoDB Foreign Data Wrapper that you installed using the EDB repository, you can use the `upgrade` command to update your repository configuration file and then upgrade to a more recent product version. To start the process, open a terminal window, assume superuser privileges, and enter the commands applicable to the operating system and package manager used for the installation:
+If you have an existing installation of Hadoop Foreign Data Wrapper that you installed using the EDB repository, you can use the `upgrade` command to update your repository configuration file and then upgrade to a more recent product version. To start the process, open a terminal window, assume superuser privileges, and enter the commands that apply to the operating system and package manager used for the installation:
## On RHEL or Rocky Linux or AlmaLinux or OL 8
diff --git a/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx b/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx
index 689c412aa81..1bdeeb9a038 100644
--- a/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx
@@ -4,27 +4,25 @@ title: "Features of the Hadoop Foreign Data Wrapper"
-The key features of the Hadoop Foreign Data Wrapper are listed below:
+These are the key features of the Hadoop Foreign Data Wrapper.
-## Where Clause Pushdown
+## WHERE clause pushdown
-Hadoop Foreign Data Wrappper allows the pushdown of `WHERE` clause to the foreign server for execution. This feature optimizes remote queries to reduce the number of rows transferred from foreign servers.
+Hadoop Foreign Data Wrapper allows the pushdown of `WHERE` clauses to the foreign server for execution. This feature optimizes remote queries to reduce the number of rows transferred from foreign servers.
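+
+For example, with a foreign table such as the `weblogs` table defined later in this guide, you can see whether the `WHERE` clause was pushed down by examining the plan that `EXPLAIN (VERBOSE, COSTS OFF)` prints for a query like the following (the filter value is only illustrative):
+
+```text
+EXPLAIN (VERBOSE, COSTS OFF)
+SELECT client_ip, full_request_date, uri
+FROM weblogs
+WHERE http_status_code = '404';
+```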
-## Column Pushdown
+## Column pushdown
Hadoop Foreign Data Wrapper supports column pushdown. As a result, the query brings back only those columns that are a part of the select target list.
-## Join Pushdown
+## Join pushdown
-Hadoop Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote HIVE/SPARK server to that remote HIVE/SPARK server, thereby enhancing the performance.
+Hadoop Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote Hive or Spark server to that remote Hive or Spark server, enhancing the performance.
-See also:
+For an example, see [Example: Join pushdown](10a_example_join_pushdown).
-[Example: Join Pushdown](10a_example_join_pushdown)
+## Automated cleanup
-## Automated Cleanup
-
-Hadoop Foreign Data Wrappper allows the cleanup of foreign tables in a single operation using `DROP EXTENSION` command. This feature is specifically useful when a foreign table is set for a temporary purpose. The syntax is:
+Hadoop Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is specifically useful when a foreign table is set for a temporary purpose. The syntax is:
`DROP EXTENSION hdfs_fdw CASCADE;`
diff --git a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
index 60c4b5010e0..413a2339728 100644
--- a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
@@ -4,11 +4,11 @@ title: "Configuring the Hadoop Foreign Data Wrapper"
Before creating the extension and the database objects that use the extension, you must modify the Postgres host, providing the location of the supporting libraries.
-After installing Postgres, modify `postgresql.conf` located in:
+After installing Postgres, modify `postgresql.conf`, located in:
`/var/lib/edb/as_version/data`
-Modify the configuration file with your editor of choice, adding the `hdfs_fdw.jvmpath` parameter to the end of the configuration file, and setting the value to specify the location of the Java virtual machine (`libjvm.so`). Set the value of `hdfs_fdw.classpath` to indicate the location of the java class files used by the adapter. Use a colon (:) as a delimiter between each path. For example:
+Modify the configuration file, adding the `hdfs_fdw.jvmpath` parameter to the end of the configuration file and setting the value to specify the location of the Java virtual machine (`libjvm.so`). Set the value of `hdfs_fdw.classpath` to indicate the location of the Java class files used by the adapter. Use a colon (:) as a delimiter between each path. For example:
``` Text
hdfs_fdw.classpath=
@@ -16,21 +16,18 @@ Modify the configuration file with your editor of choice, adding the `hdfs_fdw.j
```
!!! Note
-The jar files (hive-jdbc-1.0.1-standalone.jar and hadoop-common-2.6.4.jar) mentioned in the above example should be copied from the respective Hive and Hadoop sources or website to the PostgreSQL instance where Hadoop Foreign Data Wrapper is installed.
+ Copy the jar files (`hive-jdbc-1.0.1-standalone.jar` and `hadoop-common-2.6.4.jar`) from the respective Hive and Hadoop sources or website to the PostgreSQL instance where Hadoop Foreign Data Wrapper is installed.
-If you are using EDB Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
-!!!
+ If you're using EDB Postgres Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
-After setting the parameter values, restart the Postgres server. For detailed information about controlling the service on an Advanced Server host, see the [EDB Postgres Advanced Server documentation](../epas/latest).
+After setting the parameter values, restart the Postgres server. For detailed information about controlling the service on an EDB Postgres Advanced Server host, see the [EDB Postgres Advanced Server documentation](../epas/latest).
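+
+For example, on a RHEL-compatible host running EDB Postgres Advanced Server 14 under systemd, the restart might look like the following (an illustrative command; the service unit name depends on the installed version):
+
+```text
+sudo systemctl restart edb-as-14
+```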
-
-
-Before using the Hadoop Foreign Data Wrapper, you must:
+Before using the Hadoop Foreign Data Wrapper:
1. Use the [CREATE EXTENSION](#create-extension) command to create the extension on the Postgres host.
2. Use the [CREATE SERVER](#create-server) command to define a connection to the Hadoop file system.
3. Use the [CREATE USER MAPPING](#create-user-mapping) command to define a mapping that associates a Postgres role with the server.
- 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a table in the Advanced Server database that corresponds to a database that resides on the Hadoop cluster.
+ 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a table in the EDB Postgres Advanced Server database that corresponds to a database that resides on the Hadoop cluster.
@@ -42,25 +39,24 @@ Use the `CREATE EXTENSION` command to create the `hdfs_fdw` extension. To invoke
CREATE EXTENSION [IF NOT EXISTS] hdfs_fdw [WITH] [SCHEMA schema_name];
```
-**Parameters**
+### Parameters
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the same name already exists.
+ Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the same name already exists.
`schema_name`
Optionally specify the name of the schema in which to install the extension's objects.
-**Example**
+### Example
-The following command installs the `hdfs_fdw` hadoop foreign data wrapper:
+The following command installs the `hdfs_fdw` Hadoop Foreign Data Wrapper:
`CREATE EXTENSION hdfs_fdw;`
-For more information about using the foreign data wrapper `CREATE EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `CREATE EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createextension.html).
@@ -73,17 +69,17 @@ CREATE SERVER server_name FOREIGN DATA WRAPPER hdfs_fdw
[OPTIONS (option 'value' [, ...])]
```
-The role that defines the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
+The role that defines the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
-**Parameters**
+### Parameters
`server_name`
- Use `server_name` to specify a name for the foreign server. The server name must be unique within the database.
+ Use `server_name` to specify a name for the foreign server. The server name must be unique in the database.
`FOREIGN_DATA_WRAPPER`
- Include the `FOREIGN_DATA_WRAPPER` clause to specify that the server should use the `hdfs_fdw` foreign data wrapper when connecting to the cluster.
+ Include the `FOREIGN_DATA_WRAPPER` clause to specify for the server to use the `hdfs_fdw` foreign data wrapper when connecting to the cluster.
`OPTIONS`
@@ -91,18 +87,18 @@ The role that defines the server is the owner of the server; use the `ALTER SERV
| Option | Description |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| host | The address or hostname of the Hadoop cluster. The default value is \`localhost\`. |
-| port | The port number of the Hive Thrift Server or Spark Thrift Server. The default is \`10000\`. |
-| client_type | Specify hiveserver2 or spark as the client type. To use the ANALYZE statement on Spark, you must specify a value of spark; if you do not specify a value for client_type, the default value is hiveserver2. |
-| auth_type | The authentication type of the client; specify LDAP or NOSASL. If you do not specify an auth_type, the data wrapper will decide the auth_type value on the basis of the user mapping. If the user mapping includes a user name and password, the data wrapper will use LDAP authentication. |
+| auth_type | The authentication type of the client. Specify `LDAP` or `NOSASL`. If you don't specify an `auth_type`, the data wrapper decides the `auth_type` value on the basis of the user mapping. If the user mapping includes a user name and password, the data wrapper uses LDAP authentication. If the user mapping doesn't include a user name and password, the data wrapper uses NOSASL authentication. |
+| connect_timeout | The length of time before a connection attempt times out. The default value is 300 seconds. |
+| enable_join_pushdown | Similar to the table-level option but configured at the server level. If `true`, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. You can also set this option for an individual table and, if any of the tables involved in the join has set the option to `false`, then the join isn't pushed down. The table-level value of the option takes precedence over the server-level option value. Default is `true`. |
+| fetch_size | Provided as a parameter to the JDBC API `setFetchSize`. The default value is `10,000`. |
+| log_remote_sql | If `true`, logging includes SQL commands executed on the remote Hive server and the number of times that a scan is repeated. The default is `false`. |
+| query_timeout | Use `query_timeout` to provide the number of seconds after which a request times out if it isn't satisfied by the Hive server. Query timeout is not supported by the Hive JDBC driver. |
+| use_remote_estimate | Include `use_remote_estimate` to instruct the server to use `EXPLAIN` commands on the remote server when estimating processing costs. By default, `use_remote_estimate` is `false`, and remote tables are assumed to have 1000 rows. |
+
+### Example
The following command creates a foreign server named `hdfs_server` that uses the `hdfs_fdw` foreign data wrapper to connect to a host with an IP address of `170.11.2.148`:
@@ -110,11 +106,9 @@ The following command creates a foreign server named `hdfs_server` that uses the
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS (host '170.11.2.148', port '10000', client_type 'hiveserver2', auth_type 'LDAP', connect_timeout '10000', query_timeout '10000');
```
-The foreign server uses the default port (`10000`) for the connection to the client on the Hadoop cluster; the connection uses an LDAP server.
-
-For more information about using the `CREATE SERVER` command, see:
+The foreign server uses the default port (10000) for the connection to the client on the Hadoop cluster. The connection uses an LDAP server.
-
+For more information about using the `CREATE SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createserver.html).
@@ -129,13 +123,14 @@ CREATE USER MAPPING FOR role_name SERVER server_name
You must be the owner of the foreign server to create a user mapping for that server.
-Please note: the Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you are creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the username and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the OPTIONS clause when creating the user mapping.
+!!! Note
+ The Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you're creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the user name and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the `OPTIONS` clause when creating the user mapping.
-**Parameters**
+### Parameters
`role_name`
- Use `role_name` to specify the role that will be associated with the foreign server.
+ Use `role_name` to specify the role to associate with the foreign server.
`server_name`
@@ -143,17 +138,17 @@ Please note: the Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentica
`OPTIONS`
- Use the `OPTIONS` clause to specify connection information for the foreign server. If you are using LDAP authentication, provide a:
+ Use the `OPTIONS` clause to specify connection information for the foreign server. If you're using LDAP authentication, provide:
- `username`: the name of the user on the LDAP server.
+ `username` — The name of the user on the LDAP server.
- `password`: the password associated with the username.
+ `password` — The password associated with the username.
- If you do not provide a user name and password, the data wrapper will use NOSASL authentication.
+ If you don't provide a user name and password, the data wrapper uses NOSASL authentication.
-**Example**
+### Example
-The following command creates a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
+The following command creates a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `hdfs_server`:
`CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server;`
@@ -163,17 +158,15 @@ If the database host uses LDAP authentication, provide connection credentials wh
CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server OPTIONS (username 'alice', password '1safepwd');
```
-The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `hdfs_server`. When connecting to the LDAP server, the Hive or Spark server will authenticate as `alice`, and provide a password of `1safepwd`.
-
-For detailed information about the `CREATE USER MAPPING` command, see:
+The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `hdfs_server`. When connecting to the LDAP server, the Hive or Spark server authenticates as `alice`, and provides a password of `1safepwd`.
-
+For detailed information about the `CREATE USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createusermapping.html).
## CREATE FOREIGN TABLE
-A foreign table is a pointer to a table that resides on the Hadoop host. Before creating a foreign table definition on the Postgres server, connect to the Hive or Spark server and create a table; the columns in the table will map to to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the Hadoop host. The syntax is:
+A foreign table is a pointer to a table that resides on the Hadoop host. Before creating a foreign table definition on the Postgres server, connect to the Hive or Spark server and create a table. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the Hadoop host. The syntax is:
```text
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
@@ -185,40 +178,40 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ]
```
-where `column_constraint` is:
+`column_constraint` is:
```text
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
-and `table_constraint` is:
+`table_constraint` is:
```text
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
-**Parameters**
+### Parameters
`table_name`
- Specify the name of the foreign table; include a schema name to specify the schema in which the foreign table should reside.
+ Specify the name of the foreign table. Include a schema name to specify the schema in which the foreign table resides.
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to not throw an error if a table with the same name already exists; if a table with the same name exists, the server will issue a notice.
+ Include the `IF NOT EXISTS` clause to instruct the server not to return an error if a table with the same name already exists. If a table with the same name exists, the server issues a notice.
`column_name`
- Specifies the name of a column in the new table; each column should correspond to a column described on the Hive or Spark server.
+ Specifies the name of a column in the new table. Each column corresponds to a column described on the Hive or Spark server.
`data_type`
- Specify the data type of the column; when possible, specify the same data type for each column on the Postgres server and the Hive or Spark server. If a data type with the same name is not available, the Postgres server will attempt to cast the data type to a type compatible with the Hive or Spark server. If the server cannot identify a compatible data type, it will return an error.
+ Specify the data type of the column. When possible, specify the same data type for each column on the Postgres server and the Hive or Spark server. If a data type with the same name isn't available, the Postgres server attempts to cast the data type to a type compatible with the Hive or Spark server. If the server can't identify a compatible data type, it returns an error.
`COLLATE collation`
- Include the `COLLATE` clause to assign a collation to the column; if not specified, the column data type's default collation is used.
+ Include the `COLLATE` clause to assign a collation to the column. The column data type's default collation is used by default.
`INHERITS (parent_table [, ... ])`
@@ -226,11 +219,11 @@ and `table_constraint` is:
`CONSTRAINT constraint_name`
- Specify an optional name for a column or table constraint; if not specified, the server will generate a constraint name.
+ Specify an optional name for a column or table constraint. If not specified, the server generates a constraint name.
`NOT NULL`
- Include the `NOT NULL` keywords to indicate that the column is not allowed to contain null values.
+ Include the `NOT NULL` keywords to indicate that the column isn't allowed to contain null values.
`NULL`
@@ -238,29 +231,29 @@ and `table_constraint` is:
`CHECK (expr) [NO INHERIT]`
- Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
+ Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint can reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
- A `CHECK` expression cannot contain subqueries or refer to variables other than columns of the current row.
+ A `CHECK` expression can't contain subqueries or refer to variables other than columns of the current row.
- Include the `NO INHERIT` keywords to specify that a constraint should not propagate to child tables.
+ Include the `NO INHERIT` keywords to specify that a constraint can't propagate to child tables.
`DEFAULT default_expr`
- Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears within. The data type of the default expression must match the data type of the column.
+ Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears in. The data type of the default expression must match the data type of the column.
`SERVER server_name [OPTIONS (option 'value' [, ... ] ) ]`
- To create a foreign table that will allow you to query a table that resides on a Hadoop file system, include the `SERVER` clause and specify the `server_name` of the foreign server that uses the Hadoop data adapter.
+ To create a foreign table that allows you to query a table that resides on a Hadoop file system, include the `SERVER` clause and specify the `server_name` value of the foreign server that uses the Hadoop data adapter.
- Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
+ Use the `OPTIONS` clause to specify the following options and their corresponding values:
-| option | value |
+| Option | Value |
| ---------- | --------------------------------------------------------------------------------------- |
-| dbname | The name of the database on the Hive server; the database name is required. |
-| table_name | The name of the table on the Hive server; the default is the name of the foreign table. |
-| enable_join_pushdown | Similar to the server-level option, but configured at table-level. If true, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. This option can also be set for an individual table and if any of the tables involved in the join has set the option to false then the join is not pushed down. The table-level value of the option takes precedence over the server-level option value. Default is true. |
+| dbname | The name of the database on the Hive server. The database name is required. |
+| table_name | The name of the table on the Hive server. The default is the name of the foreign table. |
+| enable_join_pushdown | Similar to the server-level option but configured at table level. If `true`, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. You can also set this option for an individual table. If any of the tables involved in the join has set the option to `false`, then the join isn't pushed down. The table-level value of the option takes precedence over the server-level option value. Default is `true`. |
-**Example**
+### Example
To use data that is stored on a distributed file system, you must create a table on the Postgres host that maps the columns of a Hadoop table to the columns of a Postgres table. For example, for a Hadoop table with the following definition:
@@ -286,7 +279,7 @@ row format delimited
fields terminated by '\t';
```
-You should execute a command on the Postgres server that creates a comparable table on the Postgres server:
+Execute a command on the Postgres server that creates a comparable table on the Postgres server:
```text
CREATE FOREIGN TABLE weblogs
@@ -314,15 +307,13 @@ SERVER hdfs_server
Include the `SERVER` clause to specify the name of the database stored on the Hadoop file system (`webdata`) and the name of the table (`weblogs`) that corresponds to the table on the Postgres server.
-For more information about using the `CREATE FOREIGN TABLE` command, see:
+For more information about using the `CREATE FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createforeigntable.html).
-
+## Data type mappings
-### Data Type Mappings
+When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the Hive server. The Hadoop data wrapper automatically converts the following Hive data types to the target Postgres type:
-When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the Hive server. The Hadoop data wrapper will automatically convert the following Hive data types to the target Postgres type:
-
-| **Hive** | **Postgres** |
+| Hive | Postgres |
| ----------- | ---------------- |
| BIGINT | BIGINT/INT8 |
| BOOLEAN | BOOL/BOOLEAN |
@@ -340,21 +331,21 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
## DROP EXTENSION
-Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be dropping the Hadoop server, and run the command:
+Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the Hadoop server, and run the command:
```text
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the specified name doesn't exist.
`name`
- Specify the name of the installed extension. It is optional.
+ Optionally, specify the name of the installed extension.
`CASCADE`
@@ -362,17 +353,15 @@ DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
`RESTRICT`
- Do not allow to drop extension if any objects, other than its member objects and extensions listed in the same DROP command are dependent on it.
+ Don't allow dropping the extension if any objects, other than its member objects and extensions listed in the same `DROP` command, depend on it.
-**Example**
+### Example
The following command removes the extension from the existing database:
`DROP EXTENSION hdfs_fdw;`
-For more information about using the foreign data wrapper `DROP EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `DROP EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropextension.html).
## DROP SERVER
@@ -382,35 +371,33 @@ Use the `DROP SERVER` command to remove a connection to a foreign server. The sy
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-The role that drops the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
+The role that drops the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a server with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if a server with the specified name doesn't exist.
`name`
- Specify the name of the installed server. It is optional.
+ Optionally, specify the name of the installed server.
`CASCADE`
- Automatically drop objects that depend on the server. It should drop all the other dependent objects too.
+ Automatically drop objects that depend on the server. It drops all the other dependent objects too.
`RESTRICT`
- Do not allow to drop the server if any objects are dependent on it.
+ Don't allow dropping the server if any objects depend on it.
-**Example**
+### Example
The following command removes a foreign server named `hdfs_server`:
`DROP SERVER hdfs_server;`
-For more information about using the `DROP SERVER` command, see:
-
-
+For more information about using the `DROP SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropserver.html).
## DROP USER MAPPING
@@ -420,11 +407,11 @@ Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgr
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the user mapping doesn't exist.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if the user mapping doesn't exist.
`user_name`
@@ -434,15 +421,13 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC }
Specify the name of the server that defines a connection to the Hadoop cluster.
-**Example**
+### Example
-The following command drops a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
+The following command drops a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `hdfs_server`:
`DROP USER MAPPING FOR enterprisedb SERVER hdfs_server;`
-For detailed information about the `DROP USER MAPPING` command, see:
-
-
+For detailed information about the `DROP USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-dropusermapping.html).
## DROP FOREIGN TABLE
@@ -452,11 +437,11 @@ A foreign table is a pointer to a table that resides on the Hadoop host. Use the
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the foreign table with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if the foreign table with the specified name doesn't exist.
`name`
@@ -464,18 +449,16 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
`CASCADE`
- Automatically drop objects that depend on the foreign table. It should drop all the other dependent objects too.
+ Automatically drop objects that depend on the foreign table. It drops all the other dependent objects too.
`RESTRICT`
- Do not allow to drop foreign table if any objects are dependent on it.
+ Don't allow dropping the foreign table if any objects depend on it.
-**Example**
+### Example
```text
DROP FOREIGN TABLE warehouse;
```
-For more information about using the `DROP FOREIGN TABLE` command, see:
-
-
+For more information about using the `DROP FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropforeigntable.html).
diff --git a/product_docs/docs/hadoop_data_adapter/2/index.mdx b/product_docs/docs/hadoop_data_adapter/2/index.mdx
index 5a4fa872406..d581e63e407 100644
--- a/product_docs/docs/hadoop_data_adapter/2/index.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/index.mdx
@@ -15,9 +15,9 @@ navigation:
- 11_uninstalling_the_hadoop_data_adapter
---
-The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
+The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities or with other data that resides on a Postgres host.
-The Hadoop Foreign Data Wrapper can be installed with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
+You can install the Hadoop Foreign Data Wrapper with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
+The term Postgres refers to an instance of EDB Postgres Advanced Server.
From 268060cbfa7ebcf7313646ff0a14d1baa2cf0dc8 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Tue, 22 Mar 2022 16:20:11 -0400
Subject: [PATCH 04/16] added missing link to product compatibility
---
.../docs/hadoop_data_adapter/2/02_requirements_overview.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
index 15cf1d8588d..61beb2701c5 100644
--- a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
@@ -2,7 +2,7 @@
title: "Supported database versions"
---
-This table lists the latest Hadoop Foreign Data Wrapper versions and their supported corresponding EDB Postgres Advanced Server (EPAS) versions. Hadoop Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server. See Product Compatibility for details.
+This table lists the latest Hadoop Foreign Data Wrapper versions and their supported corresponding EDB Postgres Advanced Server (EPAS) versions. Hadoop Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) for details.
| Hadoop Foreign Data Wrapper | EPAS 14 | EPAS 13 | EPAS 12 | EPAS 11 | EPAS 10 |
| --------------------------- | ------- | ------- | ------- | ------- | ------- |
From 1bb1de7dc2b15b8d05db8300ab732f02ad7b3353 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Wed, 23 Mar 2022 16:11:18 -0400
Subject: [PATCH 05/16] clarify slot creation by efm
Clarify slot creation when the update.physical.slots.period property is used. The slots need to exist already on the primary, but efm will create them as needed on standbys when copying the information to them.
---
.../docs/efm/4/04_configuring_efm/01_cluster_properties.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx b/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
index 12f69e30396..79cf1a48990 100644
--- a/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
+++ b/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
@@ -576,7 +576,7 @@ To perform maintenance on the primary database when `primary.shutdown.as.failure
-Use the `update.physical.slots.period` property to define the slot advance frequency for database version 12 and above. When `update.physical.slots.period` is set to a positive integer value, the primary agent will read the current `restart_lsn` of the physical replication slots after every `update.physical.slots.period` seconds, and send this information with its `pg_current_wal_lsn` and `primary_slot_name` (if it is set in the postgresql.conf file) to the standbys. If physical slots do not already exist, setting this parameter to a positive integer value will create the slots and then update the `restart_lsn parameter` for these slots. A non-promotable standby will not create new slots but will update them if they exist.
+Use the `update.physical.slots.period` property to define the slot advance frequency for database version 12 and above. When `update.physical.slots.period` is set to a positive integer value, the primary agent will read the current `restart_lsn` of the physical replication slots after every `update.physical.slots.period` seconds, and send this information with its `pg_current_wal_lsn` and `primary_slot_name` (if it is set in the postgresql.conf file) to the standbys. The physical slots must already exist on the primary for the agent to find them. If physical slots do not already exist on the standbys, standby agents will create the slots and then update the `restart_lsn` parameter for these slots. A non-promotable standby will not create new slots but will update them if they exist.
Note: all slot names, including one set on the current primary if desired, must be unique.
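+
+For example, to have the primary agent send the slot information to the standbys every five minutes (an illustrative value), set the property in the cluster properties file as follows:
+
+``` text
+update.physical.slots.period=300
+```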
From 83327b1416469b95f6afe3cb219f5c02a19b21f1 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 24 Mar 2022 12:54:51 -0400
Subject: [PATCH 06/16] Rest of hadoop edits
---
.../2/09_using_the_hadoop_data_adapter.mdx | 316 +++++++++---------
.../2/10_identifying_data_adapter_version.mdx | 2 +-
.../2/10a_example_join_pushdown.mdx | 4 +-
...1_uninstalling_the_hadoop_data_adapter.mdx | 8 +-
4 files changed, 163 insertions(+), 167 deletions(-)
diff --git a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
index 3a97f15e7be..ae52b4fe284 100644
--- a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
@@ -6,78 +6,78 @@ title: "Using the Hadoop Foreign Data Wrapper"
You can use the Hadoop Foreign Data Wrapper either through the Apache Hive or the Apache Spark. Both Hive and Spark store metadata in the configured metastore, where databases and tables are created using HiveQL.
-## Using HDFS FDW with Apache Hive on Top of Hadoop
+## Using HDFS FDW with Apache Hive on top of Hadoop
-`Apache Hive` data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called `HiveQL`. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in `HiveQL`.
+Apache Hive data warehouse software helps with querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it's inconvenient or inefficient to express this logic in HiveQL.
-There are two versions of Hive - `HiveServer1` and `HiveServer2` which can be downloaded from the [Apache Hive website](https://hive.apache.org/downloads.html).
+You can download the two versions of Hive—HiveServer1 and HiveServer2—from the [Apache Hive website](https://hive.apache.org/downloads.html).
!!! Note
- The Hadoop Foreign Data Wrapper supports only `HiveServer2`.
+ The Hadoop Foreign Data Wrapper supports only HiveServer2.
To use HDFS FDW with Apache Hive on top of Hadoop:
-Step 1: Download [weblogs_parse](http://wiki.pentaho.com/download/attachments/23531451/weblogs_parse.zip?version=1&modificationDate=1327096242000/) and follow the instructions at the [Wiki Pentaho website](https://wiki.pentaho.com/display/BAD/Transforming+Data+within+Hive/).
+1. Download [weblogs_parse](http://wiki.pentaho.com/download/attachments/23531451/weblogs_parse.zip?version=1&modificationDate=1327096242000/) and follow the instructions at the [Wiki Pentaho website](https://wiki.pentaho.com/display/BAD/Transforming+Data+within+Hive/).
-Step 2: Upload `weblog_parse.txt` file using these commands:
+1. Upload the `weblog_parse.txt` file using these commands:
-```text
-hadoop fs -mkdir /weblogs
-hadoop fs -mkdir /weblogs/parse
-hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
-```
+ ```text
+ hadoop fs -mkdir /weblogs
+ hadoop fs -mkdir /weblogs/parse
+ hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
+ ```
-Step 3: Start `HiveServer`, if not already running, using following command:
+1. Start HiveServer, if not already running, using the following command:
-```text
-$HIVE_HOME/bin/hiveserver2
-```
+ ```text
+ $HIVE_HOME/bin/hiveserver2
+ ```
-or
+ or
-```text
-$HIVE_HOME/bin/hive --service hiveserver2
-```
+ ```text
+ $HIVE_HOME/bin/hive --service hiveserver2
+ ```
-Step 4: Connect to `HiveServer2` using the hive `beeline` client. For example:
+1. Connect to HiveServer2 using the Hive beeline client. For example:
-```text
-$ beeline
-Beeline version 1.0.1 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
-```
+ ```text
+ $ beeline
+ Beeline version 1.0.1 by Apache Hive
+ beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
+ ```
-Step 5: Create a table in Hive. The example creates a table named `weblogs`"
+1. Create a table in Hive. The example creates a table named `weblogs`:
-```text
-CREATE TABLE weblogs (
- client_ip STRING,
- full_request_date STRING,
- day STRING,
- month STRING,
- month_num INT,
- year STRING,
- hour STRING,
- minute STRING,
- second STRING,
- timezone STRING,
- http_verb STRING,
- uri STRING,
- http_status_code STRING,
- bytes_returned STRING,
- referrer STRING,
- user_agent STRING)
-row format delimited
-fields terminated by '\t';
-```
+ ```text
+ CREATE TABLE weblogs (
+ client_ip STRING,
+ full_request_date STRING,
+ day STRING,
+ month STRING,
+ month_num INT,
+ year STRING,
+ hour STRING,
+ minute STRING,
+ second STRING,
+ timezone STRING,
+ http_verb STRING,
+ uri STRING,
+ http_status_code STRING,
+ bytes_returned STRING,
+ referrer STRING,
+ user_agent STRING)
+ row format delimited
+ fields terminated by '\t';
+ ```
-Step 6: Load data into the table.
+1. Load data into the table.
-```text
-hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
-```
+ ```text
+ hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
+ ```
-Step 7: Access your data from Postgres; you can now use the `weblog` table. Once you are connected using psql, follow the below steps:
+1. Access your data from Postgres. You can now use the `weblog` table. Once you're connected using psql, follow these steps:
```text
-- set the GUC variables appropriately, e.g. :
@@ -145,115 +145,115 @@ EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, full_request_date, uri FROM weblo
(3 rows)
```
-## Using HDFS FDW with Apache Spark on Top of Hadoop
+## Using HDFS FDW with Apache Spark on top of Hadoop
-Apache Spark is a general purpose distributed computing framework which supports a wide variety of use cases. It provides real time streaming as well as batch processing with speed, ease of use, and sophisticated analytics. Spark does not provide a storage layer as it relies on third party storage providers like Hadoop, HBASE, Cassandra, S3, and so on. Spark integrates seamlessly with Hadoop and can process existing data. Spark SQL is 100% compatible with `HiveQL` and can be used as a replacement of `Hiveserver2`, using `Spark Thrift Server`.
+Apache Spark is a general-purpose distributed computing framework that supports a wide variety of use cases. It provides real-time streaming as well as batch processing with speed, ease of use, and sophisticated analytics. Spark doesn't provide a storage layer, as it relies on third-party storage providers like Hadoop, HBASE, Cassandra, S3, and so on. Spark integrates seamlessly with Hadoop and can process existing data. Spark SQL is 100% compatible with HiveQL. You can use it to replace HiveServer2, using Spark Thrift Server.
To use HDFS FDW with Apache Spark on top of Hadoop:
-Step 1: Download and install Apache Spark in local mode.
-
-Step 2: In the folder `$SPARK_HOME/conf` create a file `spark-defaults.conf` containing the following line:
-
-```text
-spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
-```
-
-By default, Spark uses `derby` for both the meta data and the data itself (called a warehouse in Spark). To have Spark use Hadoop as a warehouse, you should add this property.
-
-Step 3: Start the Spark Thrift Server.
-
-```text
-./start-thriftserver.sh
-```
-
-Step 4: Make sure the Spark Thrift server is running and writing to a log file.
-
-Step 5: Create a local file (`names.txt`) that contains the following entries:
-
-```text
-$ cat /tmp/names.txt
-1,abcd
-2,pqrs
-3,wxyz
-4,a_b_c
-5,p_q_r
-,
-```
-
-Step 6: Connect to Spark Thrift Server2 using the Spark `beeline` client. For example:
-
-```text
-$ beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-```
-
-Step 7: Prepare the sample data on Spark. Run the following commands in the `beeline` command line tool:
-
-```text
-./beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
-Enter password for jdbc:hive2://localhost:10000/default;auth=noSasl:
-Connected to: Spark SQL (version 2.1.1)
-Driver: Hive JDBC (version 1.2.1.spark2)
-Transaction isolation: TRANSACTION_REPEATABLE_READ
-0: jdbc:hive2://localhost:10000> create database my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.379 seconds)
-0: jdbc:hive2://localhost:10000> use my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.03 seconds)
-0: jdbc:hive2://localhost:10000> create table my_names_tab(a int, name string)
- row format delimited fields terminated by ' ';
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.11 seconds)
-0: jdbc:hive2://localhost:10000>
-
-0: jdbc:hive2://localhost:10000> load data local inpath '/tmp/names.txt'
- into table my_names_tab;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.33 seconds)
-0: jdbc:hive2://localhost:10000> select * from my_names_tab;
-+-------+---------+--+
-| a | name |
-+-------+---------+--+
-| 1 | abcd |
-| 2 | pqrs |
-| 3 | wxyz |
-| 4 | a_b_c |
-| 5 | p_q_r |
-| NULL | NULL |
-+-------+---------+--+
-```
-
-The following commands list the corresponding files in Hadoop:
-
-```text
-$ hadoop fs -ls /user/hive/warehouse/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db
-
-$ hadoop fs -ls /user/hive/warehouse/my_test_db.db/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db/my_names_tab
-```
-
-Step 8: Access your data from Postgres using psql:
+1. Download and install Apache Spark in local mode.
+
+1. In the folder `$SPARK_HOME/conf`, create a file `spark-defaults.conf` containing the following line:
+
+ ```text
+ spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
+ ```
+
+   By default, Spark uses `derby` for both the metadata and the data itself (called a warehouse in Spark). To have Spark use Hadoop as a warehouse, add this property.
+
+1. Start Spark Thrift Server.
+
+ ```text
+ ./start-thriftserver.sh
+ ```
+
+1. Make sure Spark Thrift Server is running and writing to a log file.
+
+1. Create a local file (`names.txt`) that contains the following entries:
+
+ ```text
+ $ cat /tmp/names.txt
+ 1,abcd
+ 2,pqrs
+ 3,wxyz
+ 4,a_b_c
+ 5,p_q_r
+ ,
+ ```
+
+1. Connect to Spark Thrift Server2 using the Spark beeline client. For example:
+
+ ```text
+ $ beeline
+ Beeline version 1.2.1.spark2 by Apache Hive
+ beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
+ ```
+
+1. Prepare the sample data on Spark. Run the following commands in the beeline command line tool:
+
+ ```text
+ ./beeline
+ Beeline version 1.2.1.spark2 by Apache Hive
+ beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
+ Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
+ Enter password for jdbc:hive2://localhost:10000/default;auth=noSasl:
+ Connected to: Spark SQL (version 2.1.1)
+ Driver: Hive JDBC (version 1.2.1.spark2)
+ Transaction isolation: TRANSACTION_REPEATABLE_READ
+ 0: jdbc:hive2://localhost:10000> create database my_test_db;
+ +---------+--+
+ | Result |
+ +---------+--+
+ +---------+--+
+ No rows selected (0.379 seconds)
+ 0: jdbc:hive2://localhost:10000> use my_test_db;
+ +---------+--+
+ | Result |
+ +---------+--+
+ +---------+--+
+ No rows selected (0.03 seconds)
+ 0: jdbc:hive2://localhost:10000> create table my_names_tab(a int, name string)
+ row format delimited fields terminated by ' ';
+ +---------+--+
+ | Result |
+ +---------+--+
+ +---------+--+
+ No rows selected (0.11 seconds)
+ 0: jdbc:hive2://localhost:10000>
+
+ 0: jdbc:hive2://localhost:10000> load data local inpath '/tmp/names.txt'
+ into table my_names_tab;
+ +---------+--+
+ | Result |
+ +---------+--+
+ +---------+--+
+ No rows selected (0.33 seconds)
+ 0: jdbc:hive2://localhost:10000> select * from my_names_tab;
+ +-------+---------+--+
+ | a | name |
+ +-------+---------+--+
+ | 1 | abcd |
+ | 2 | pqrs |
+ | 3 | wxyz |
+ | 4 | a_b_c |
+ | 5 | p_q_r |
+ | NULL | NULL |
+ +-------+---------+--+
+ ```
+
+ The following commands list the corresponding files in Hadoop:
+
+ ```text
+ $ hadoop fs -ls /user/hive/warehouse/
+ Found 1 items
+ drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db
+
+ $ hadoop fs -ls /user/hive/warehouse/my_test_db.db/
+ Found 1 items
+ drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db/my_names_tab
+ ```
+
+1. Access your data from Postgres using psql:
```text
-- set the GUC variables appropriately, e.g. :
@@ -299,4 +299,4 @@ EXPLAIN (verbose, costs off) SELECT name FROM f_names_tab WHERE a > 3;
```
!!! Note
- This example uses the same port while creating foreign server because the Spark Thrift Server is compatible with the Hive Thrift Server. Applications using Hiveserver2 would work with Spark except for the behaviour of the `ANALYZE` command and the connection string in the case of `NOSASL`. We recommend using `ALTER SERVER` and changing the `client_type` option if Hive is to be replaced with Spark.
+ This example uses the same port while creating the foreign server because Spark Thrift Server is compatible with Hive Thrift Server. Applications using Hiveserver2 work with Spark except for the behavior of the `ANALYZE` command and the connection string in the case of `NOSASL`. We recommend using `ALTER SERVER` and changing the `client_type` option if you replace Hive with Spark.
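+
+    For example, a minimal sketch, assuming a foreign server named `hdfs_server` (illustrative) and that `spark` is the value that selects the Spark client:
+
+    `ALTER SERVER hdfs_server OPTIONS (SET client_type 'spark');`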
diff --git a/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx b/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
index fa6e51f1d5c..0be13704a60 100644
--- a/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
@@ -1,5 +1,5 @@
---
-title: "Identifying the Hadoop Foreign Data Wrapper Version"
+title: "Identifying the Hadoop Foreign Data Wrapper version"
---
diff --git a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
index e727be1bd90..bcad81896a9 100644
--- a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
@@ -1,8 +1,8 @@
---
-title: "Example: Join Pushdown"
+title: "Example: Join pushdown"
---
-The following example shows join pushdown between the foreign tables of the same remote HIVE/SPARK server to that remote HIVE/SPARK server,:
+This example shows a join between two foreign tables on the same remote HIVE/SPARK server being pushed down to that server:
Tables on HIVE/SPARK server:
diff --git a/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
index 3e54f5fa327..89367a3dab4 100644
--- a/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Uninstalling the Hadoop Foreign Data Wrapper"
-You use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command applicable to the operating system and package manager used for the installation:
+You use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command that applies to the operating system and package manager used for the installation:
- On RHEL or CentOS 7:
@@ -18,10 +18,6 @@ You use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages.
`zypper remove edb-as-hdfs_fdw`
-- On Debian or Ubuntu
+- On Debian or Ubuntu
`apt-get remove edb-as-hdfs_fdw`
-
-
-
-
From 6d113af0c667c98f722c8db08f0fc0e119456410 Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Thu, 24 Mar 2022 13:22:14 -0400
Subject: [PATCH 07/16] tweaks
---
.../hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx | 2 +-
.../2/11_uninstalling_the_hadoop_data_adapter.mdx | 2 +-
product_docs/docs/hadoop_data_adapter/2/index.mdx | 3 ---
3 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
index ae52b4fe284..d7c13519756 100644
--- a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
@@ -8,7 +8,7 @@ You can use the Hadoop Foreign Data Wrapper either through the Apache Hive or th
## Using HDFS FDW with Apache Hive on top of Hadoop
-Apache Hive data warehouse software helps withg querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it's inconvenient or inefficient to express this logic in HiveQL.
+Apache Hive data warehouse software helps with querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called HiveQL. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it's inconvenient or inefficient to express this logic in HiveQL.
You can download the two versions of Hive—HiveServer1 and HiveServer2—from the [Apache Hive website](https://hive.apache.org/downloads.html).
diff --git a/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
index 89367a3dab4..2526eaa97dc 100644
--- a/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/11_uninstalling_the_hadoop_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Uninstalling the Hadoop Foreign Data Wrapper"
-You use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command that applies to the operating system and package manager used for the installation:
+You use the `remove` command to uninstall Hadoop Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command that applies to the operating system and package manager used for the installation:
- On RHEL or CentOS 7:
diff --git a/product_docs/docs/hadoop_data_adapter/2/index.mdx b/product_docs/docs/hadoop_data_adapter/2/index.mdx
index d581e63e407..decda8bb9ec 100644
--- a/product_docs/docs/hadoop_data_adapter/2/index.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/index.mdx
@@ -17,7 +17,4 @@ navigation:
The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities or with other data that resides on a Postgres host.
-You can install the Hadoop Foreign Data Wrapper with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-
-The term Postgres refers to an instance of EDB Postgres Advanced Server.
From 7cd1945a66b12bcaf5e7108f9ba555607152409d Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 24 Mar 2022 14:42:05 -0400
Subject: [PATCH 08/16] Edits to Mongodb fdw doc
---
.../5/02_requirements_overview.mdx | 2 +-
.../5/03_architecture_overview.mdx | 4 +-
.../5/05_updating_the_mongo_data_adapter.mdx | 6 +-
.../5/06_features_of_mongo_fdw.mdx | 36 ++--
.../07_configuring_the_mongo_data_adapter.mdx | 180 ++++++++----------
...8_example_using_the_mongo_data_adapter.mdx | 2 +-
.../5/08a_example_join_pushdown.mdx | 2 +-
.../5/09_identifying_data_adapter_version.mdx | 2 +-
.../mongo_data_adapter/5/10_limitations.mdx | 6 +-
...11_uninstalling_the_mongo_data_adapter.mdx | 3 +-
.../docs/mongo_data_adapter/5/index.mdx | 6 +-
.../5/mongo_rel_notes/index.mdx | 4 +-
.../mongo_rel_notes/mongo5.2.8_rel_notes.mdx | 8 +-
.../mongo_rel_notes/mongo5.2.9_rel_notes.mdx | 6 +-
.../mongo_rel_notes/mongo5.3.0_rel_notes.mdx | 4 +-
15 files changed, 125 insertions(+), 146 deletions(-)
diff --git a/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx b/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx
index 9ab11614aba..37c5f86c79e 100644
--- a/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Supported Database Versions"
+title: "Supported database versions"
---
This table lists the latest MongoDB Foreign Data Wrapper versions and their supported corresponding EDB Postgres Advanced Server (EPAS) versions. MongoDB Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) for details.
diff --git a/product_docs/docs/mongo_data_adapter/5/03_architecture_overview.mdx b/product_docs/docs/mongo_data_adapter/5/03_architecture_overview.mdx
index c2dc149ee79..948e44b015d 100644
--- a/product_docs/docs/mongo_data_adapter/5/03_architecture_overview.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/03_architecture_overview.mdx
@@ -1,8 +1,8 @@
---
-title: "Architecture Overview"
+title: "Architecture overview"
---
-The MongoDB data wrapper provides an interface between a MongoDB server and a Postgres database. It transforms a Postgres statement (`SELECT`/`INSERT`/`DELETE`/`UPDATE`) into a query that is understood by the MongoDB database.
+The MongoDB data wrapper provides an interface between a MongoDB server and a Postgres database. It transforms a Postgres statement (`SELECT`/`INSERT`/`DELETE`/`UPDATE`) into a query that's understood by the MongoDB database.
![Using MongoDB FDW with Postgres](images/mongo_server_with_postgres.png)
diff --git a/product_docs/docs/mongo_data_adapter/5/05_updating_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/05_updating_the_mongo_data_adapter.mdx
index 069a3a602f3..eb958648d02 100644
--- a/product_docs/docs/mongo_data_adapter/5/05_updating_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/05_updating_the_mongo_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Upgrading the MongoDB Foreign Data Wrapper"
-If you have an existing installation of MongoDB Foreign Data Wrapper that you installed using the EDB repository, you can use the `upgrade` command to update your repository configuration file and then upgrade to a more recent product version. To start the process, open a terminal window, assume superuser privileges, and enter the commands applicable to the operating system and package manager used for the installation:
+If you have an existing installation of MongoDB Foreign Data Wrapper that you installed using the EDB repository, you can use the `upgrade` command to update your repository configuration file and then upgrade to a more recent product version. To start the process, open a terminal window, assume superuser privileges, and enter the commands applicable to the operating system and package manager used for the installation.
## On RHEL or Rocky Linux or AlmaLinux or OL 8
@@ -16,7 +16,7 @@ dnf upgrade edb-repo
dnf upgrade edb-as-mongo_fdw
# where is the EDB Postgres Advanced Server version number
```
-## On RHEL or CentOS or OL 7:
+## On RHEL or CentOS or OL 7
```shell
# Update your edb.repo file to access the current EDB repository
@@ -64,5 +64,3 @@ yum upgrade edb-as-mongo_fdw edb-libmongoc-at-libs
# Advanced Server versions 10 to 11, must be 10 and for
# EDB Postgres Advanced Server version 12 and later, must be 11.
```
-
-
diff --git a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
index fa62e8eb347..94474b30381 100644
--- a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
@@ -4,49 +4,47 @@ title: "Features of the MongoDB Foreign Data Wrapper"
-The key features of the MongoDB Foreign Data Wrapper are listed below:
+These are the key features of the MongoDB Foreign Data Wrapper.
## Writable FDW
-The MongoDB Foreign Data Wrapper allows you to modify data on a MongoDB server. Users can `INSERT`, `UPDATE` and `DELETE` data in the remote MongoDB collections by inserting, updating and deleting data locally in foreign tables.
+The MongoDB Foreign Data Wrapper lets you modify data on a MongoDB server. You can insert, update, and delete data in the remote MongoDB collections by inserting, updating, and deleting data locally in foreign tables.
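+
+A minimal sketch of these operations, assuming a foreign table `warehouse` with illustrative columns `_id` (name), `warehouse_id` (int), and `warehouse_name` (text):
+
+```text
+-- the _id value is shown as a placeholder
+INSERT INTO warehouse VALUES ('0', 1, 'UPS');
+UPDATE warehouse SET warehouse_name = 'UPS_NEW' WHERE warehouse_id = 1;
+DELETE FROM warehouse WHERE warehouse_id = 1;
+```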
See also:
- [Example: Using the MongoDB Foreign Data Wrapper](08_example_using_the_mongo_data_adapter/#example_using_the_mongo_data_adapter)
-- [Data Type Mappings](07_configuring_the_mongo_data_adapter/#data-type-mappings)
+- [Data type mappings](07_configuring_the_mongo_data_adapter/#data-type-mappings)
-## WHERE Clause Pushdown
+## WHERE clause pushdown
-MongoDB Foreign Data Wrapper allows the pushdown of the `WHERE` clause only when clauses include the comparison expressions that have a column and a constant as arguments. `WHERE` clause pushdown is not supported where the constant is an array.
+MongoDB Foreign Data Wrapper allows the pushdown of the `WHERE` clause only when the clauses include comparison expressions that have a column and a constant as arguments. `WHERE` clause pushdown isn't supported where the constant is an array.
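+
+For example, a minimal sketch, assuming a foreign table `warehouse` with an integer column `warehouse_id` (illustrative):
+
+```text
+-- a comparison between a column and a constant can be pushed down
+SELECT * FROM warehouse WHERE warehouse_id = 1;
+
+-- a constant array isn't pushed down; this condition is evaluated locally
+SELECT * FROM warehouse WHERE warehouse_id = ANY (ARRAY[1, 2, 3]);
+```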
-## Join Pushdown
+## Join pushdown
-MongoDB Foreign Data Wrapper supports pushdown for inner joins, left joins, and right joins. Currently, joins involving only relational and arithmetic operators in join-clauses are pushed down to avoid any potential join failures.
+MongoDB Foreign Data Wrapper supports pushdown for inner joins, left joins, and right joins. Currently, joins involving only relational and arithmetic operators in join clauses are pushed down to avoid any potential join failures.
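+
+A minimal sketch of a join that's eligible for pushdown, assuming foreign tables `emp` and `dept` (illustrative) defined against the same foreign server:
+
+```text
+SELECT e.ename, d.dname
+FROM emp e INNER JOIN dept d ON (e.deptno = d.deptno);
+```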
-See also:
-
-- [Example: Join Pushdown](08a_example_join_pushdown)
+For more information, see [Example: Join pushdown](08a_example_join_pushdown).
-## Connection Pooling
+## Connection pooling
The MongoDB Foreign Data Wrapper establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. This connection is kept and reused for subsequent queries in the same session.
-## Automated Cleanup
+## Automated cleanup
-The MongoDB Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is especially useful when a foreign table has been created for a temporary purpose. The syntax of a `DROP EXTENSION` command is:
+The MongoDB Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is especially useful when a foreign table was created for a temporary purpose. The syntax of a `DROP EXTENSION` command is:
`DROP EXTENSION mongo_fdw CASCADE;`
For more information, see [DROP EXTENSION](https://www.postgresql.org/docs/current/sql-dropextension.html).
-## Full Document Retrieval
+## Full-document retrieval
-This feature allows you to retrieve documents along with all their fields from collection without any knowledge of the fields in the BSON document available in MongoDB's collection. Those retrieved documents are in JSON format.
+This feature lets you retrieve documents along with all their fields from a collection without any prior knowledge of the fields in the BSON documents in MongoDB's collection. The retrieved documents are in JSON format.
-You can retrieve all available fields in a collection residing in MongoDB Foreign Data Wrapper as explained in the following example:
+You can retrieve all available fields in a collection residing on the MongoDB server, as the following example shows.
-**Example**:
+### Example
```text
> db.warehouse.find();
@@ -56,7 +54,7 @@ You can retrieve all available fields in a collection residing in MongoDB Foreig
Steps for retrieving the document:
-1. Create foreign table with a column name `__doc`. The type of the column could be json, jsonb, text, or varchar.
+1. Create a foreign table with a column name `__doc`. The type of the column can be json, jsonb, text, or varchar.
```text
CREATE FOREIGN TABLE test_json(__doc json) SERVER mongo_server OPTIONS (database 'testdb', collection 'warehouse');
@@ -68,7 +66,7 @@ CREATE FOREIGN TABLE test_json(__doc json) SERVER mongo_server OPTIONS (database
SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
```
-The output:
+ The output:
```text
edb=#SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
diff --git a/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
index f293aa691e2..e44832b763e 100644
--- a/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Configuring the MongoDB Foreign Data Wrapper"
-Before using the MongoDB Foreign Data Wrapper, you must:
+Before using the MongoDB Foreign Data Wrapper:
1. Use the [CREATE EXTENSION](#create-extension) command to create the MongoDB Foreign Data Wrapper extension on the Postgres host.
2. Use the [CREATE SERVER](#create-server) command to define a connection to the MongoDB server.
@@ -15,31 +15,29 @@ Before using the MongoDB Foreign Data Wrapper, you must:
## CREATE EXTENSION
-Use the `CREATE EXTENSION` command to create the `mongo_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be querying the MongoDB server, and invoke the command:
+Use the `CREATE EXTENSION` command to create the `mongo_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you want to query the MongoDB server, and invoke the command:
```text
CREATE EXTENSION [IF NOT EXISTS] mongo_fdw [WITH] [SCHEMA schema_name];
```
-**Parameters**
+### Parameters
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the same name already exists.
+ Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the same name already exists.
`schema_name`
Optionally specify the name of the schema in which to install the extension's objects.
-**Example**
+### Example
The following command installs the MongoDB foreign data wrapper:
`CREATE EXTENSION mongo_fdw;`
-For more information about using the foreign data wrapper `CREATE EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `CREATE EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createextension.html).
@@ -52,37 +50,37 @@ CREATE SERVER server_name FOREIGN DATA WRAPPER mongo_fdw
[OPTIONS (option 'value' [, ...])]
```
-The role that defines the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
+The role that defines the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
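+
+For example, a minimal sketch, assuming a foreign server named `mongo_server` and an existing role named `new_owner`:
+
+`ALTER SERVER mongo_server OWNER TO new_owner;`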
-**Parameters**
+### Parameters
`server_name`
- Use `server_name` to specify a name for the foreign server. The server name must be unique within the database.
+ Use `server_name` to specify a name for the foreign server. The server name must be unique in the database.
`FOREIGN_DATA_WRAPPER`
- Include the `FOREIGN_DATA_WRAPPER` clause to specify that the server should use the `mongo_fdw` foreign data wrapper when connecting to the cluster.
+  Include the `FOREIGN_DATA_WRAPPER` clause to specify that the server uses the `mongo_fdw` foreign data wrapper when connecting to the cluster.
`OPTIONS`
- Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server object. You can include:
+ Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server object. You can include these options.
-| **Option** | **Description** |
+| Option | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| address | The address or hostname of the Mongo server. The default value is `127.0.0.1`. |
-| port | The port number of the Mongo Server. Valid range is 0 to 65535. The default value is `27017`. |
-| authentication_database | The database against which user will be authenticated. This option is only valid with password based authentication. |
-| ssl | Requests an authenticated, encrypted SSL connection. By default, the value is set to `false`. Set the value to `true` to enable ssl. See to understand the options. |
+| address | The address or host name of the Mongo server. The default value is `127.0.0.1`. |
+| port | The port number of the Mongo server. Valid range is 0 to 65535. The default value is `27017`. |
+| authentication_database | The database against which the user is authenticated. This option is valid only with password-based authentication. |
+| ssl | Requests an authenticated, encrypted SSL connection. By default, the value is set to `false`. Set the value to `true` to enable SSL. See [mongoc_ssl_opt_t](http://mongoc.org/libmongoc/current/mongoc_ssl_opt_t.html) to understand the options. |
| pem_file | SSL option. |
| pem_pwd | SSL option. |
| ca_file | SSL option. |
| ca_dir | SSL option. |
| crl_file | SSL option. |
| weak_cert_validation | SSL option. The default value is `false`. |
-| enable_join_pushdown | Similar to the table-level option, but configured at the server-level. If true, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. This option can also be set for an individual table and if any of the tables involved in the join has set the option to false then the join is not pushed down. The table-level value of the option takes precedence over the server-level option value. Default is true.|
+| enable_join_pushdown | Similar to the table-level option but configured at the server level. If `true`, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. You can also set this option for an individual table. In this case, if any of the tables involved in the join has the option set to `false`, then the join isn't pushed down. The table-level value of the option takes precedence over the server-level option value. Default is `true`.|
-**Example**
+### Example
The following command creates a foreign server named `mongo_server` that uses the `mongo_fdw` foreign data wrapper to connect to a host with an IP address of `127.0.0.1`:
@@ -92,9 +90,7 @@ CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw OPTIONS (host '127.0.0
The foreign server uses the default port (`27017`) for the connection to the client on the MongoDB cluster.
-For more information about using the `CREATE SERVER` command, see:
-
-
+For more information about using the `CREATE SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createserver.html).
@@ -109,11 +105,11 @@ CREATE USER MAPPING FOR role_name SERVER server_name
You must be the owner of the foreign server to create a user mapping for that server.
-**Parameters**
+### Parameters
`role_name`
- Use `role_name` to specify the role that will be associated with the foreign server.
+ Use `role_name` to specify the role to associate with the foreign server.
`server_name`
@@ -123,13 +119,13 @@ You must be the owner of the foreign server to create a user mapping for that se
Use the `OPTIONS` clause to specify connection information for the foreign server.
- `username`: the name of the user on the MongoDB server.
+ `username` is the name of the user on the MongoDB server.
- `password`: the password associated with the username.
+ `password` is the password associated with the username.
-**Example**
+### Example
-The following command creates a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `mongo_server`:
+The following command creates a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `mongo_server`.
`CREATE USER MAPPING FOR enterprisedb SERVER mongo_server;`
@@ -139,17 +135,15 @@ If the database host uses secure authentication, provide connection credentials
CREATE USER MAPPING FOR enterprisedb SERVER mongo_server OPTIONS (username 'mongo_user', password 'mongo_pass');
```
-The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `mongo_server`. When connecting to the MongoDB server, the server will authenticate as `mongo_user`, and provide a password of `mongo_pass`.
+The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `mongo_server`. When connecting to the MongoDB server, the server authenticates as `mongo_user` and provides a password of `mongo_pass`.
-For detailed information about the `CREATE USER MAPPING` command, see:
-
-
+For detailed information about the `CREATE USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createusermapping.html).
## CREATE FOREIGN TABLE
-A foreign table is a pointer to a table that resides on the MongoDB host. Before creating a foreign table definition on the Postgres server, connect to the MongoDB server and create a collection; the columns in the table will map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the collection that resides on the MongoDB host. The syntax is:
+A foreign table is a pointer to a table that resides on the MongoDB host. Before creating a foreign table definition on the Postgres server, connect to the MongoDB server and create a collection. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the collection that resides on the MongoDB host. The syntax is:
```text
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
@@ -161,52 +155,52 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ]
```
-where `column_constraint` is:
+`column_constraint` is:
```text
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
-and `table_constraint` is:
+`table_constraint` is:
```text
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
-**Parameters**
+### Parameters
`table_name`
- Specify the name of the foreign table; include a schema name to specify the schema in which the foreign table should reside.
+ Specify the name of the foreign table. Include a schema name to specify the schema in which the foreign table resides.
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to not throw an error if a table with the same name already exists; if a table with the same name exists, the server will issue a notice.
+ Include the `IF NOT EXISTS` clause to instruct the server to not return an error if a table with the same name already exists. If a table with the same name exists, the server issues a notice.
`column_name`
- Specify the name of a column in the new table; each column should correspond to a column described on the MongoDB server.
+ Specify the name of a column in the new table. Each column must correspond to a column described on the MongoDB server.
`data_type`
- Specify the data type of the column; when possible, specify the same data type for each column on the Postgres server and the MongoDB server. If a data type with the same name is not available, the Postgres server will attempt to cast the data type to a type compatible with the MongoDB server. If the server cannot identify a compatible data type, it will return an error.
+ Specify the data type of the column. When possible, specify the same data type for each column on the Postgres server and the MongoDB server. If a data type with the same name isn't available, the Postgres server attempts to cast the data type to a type compatible with the MongoDB server. If the server can't identify a compatible data type, it returns an error.
`COLLATE collation`
- Include the `COLLATE` clause to assign a collation to the column; if not specified, the column data type's default collation is used.
+ Include the `COLLATE` clause to assign a collation to the column. If not specified, the column data type's default collation is used.
`INHERITS (parent_table [, ... ])`
- Include the `INHERITS` clause to specify a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
+ Include the `INHERITS` clause to specify a list of tables from which the new foreign table inherits all columns. Parent tables can be plain tables or foreign tables.
`CONSTRAINT constraint_name`
- Specify an optional name for a column or table constraint; if not specified, the server will generate a constraint name.
+ Specify an optional name for a column or table constraint. If not specified, the server generates a constraint name.
`NOT NULL`
- Include the `NOT NULL` keywords to indicate that the column is not allowed to contain null values.
+ Include the `NOT NULL` keywords to indicate that the column isn't allowed to contain null values.
`NULL`
@@ -214,31 +208,31 @@ and `table_constraint` is:
`CHECK (expr) [NO INHERIT]`
- Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
+ Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint must reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
- A `CHECK` expression cannot contain subqueries or refer to variables other than columns of the current row.
+ A `CHECK` expression can't contain subqueries or refer to variables other than columns of the current row.
- Include the `NO INHERIT` keywords to specify that a constraint should not propagate to child tables.
+ Include the `NO INHERIT` keywords to specify that a constraint can't propagate to child tables.
`DEFAULT default_expr`
- Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears within. The data type of the default expression must match the data type of the column.
+ Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears in. The data type of the default expression must match the data type of the column.
`SERVER server_name [OPTIONS (option 'value' [, ... ] ) ]`
- To create a foreign table that will allow you to query a table that resides on a MongoDB file system, include the `SERVER` clause and specify the `server_name` of the foreign server that uses the MongoDB data adapter.
+  To create a foreign table that allows you to query a table that resides on the MongoDB host, include the `SERVER` clause and specify `server_name` for the foreign server that uses the MongoDB data adapter.
- Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
+ Use the `OPTIONS` clause to specify the following options and their corresponding values:
-| option | value |
+| Option | Value |
| ---------- | --------------------------------------------------------------------------------- |
| database | The name of the database to query. The default value is `test`. |
| collection | The name of the collection to query. The default value is the foreign table name. |
-| enable_join_pushdown | Similar to the server-level option, but configured at table-level. If true, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. This option can also be set for an individual table and if any of the tables involved in the join has set the option to false then the join is not pushed down. The table-level value of the option takes precedence over the server-level option value. Default is true. |
+| enable_join_pushdown | Similar to the server-level option but configured at table level. If `true`, pushes the join between two foreign tables from the same foreign server instead of fetching all the rows for both the tables and performing a join locally. You can also set this option for an individual table. In this case, if any of the tables involved in the join has set the option to `false`, then the join isn't pushed down. The table-level value of the option takes precedence over the server-level option value. Default is `true`. |
-**Example**
+### Example
-To use data that is stored on MongoDB server, you must create a table on the Postgres host that maps the columns of a MongoDB collection to the columns of a Postgres table. For example, for a MongoDB collection with the following definition:
+To use data that's stored on a MongoDB server, you must create a table on the Postgres host that maps the columns of a MongoDB collection to the columns of a Postgres table. For example, for a MongoDB collection with the following definition:
```text
db.warehouse.find
@@ -255,7 +249,7 @@ db.warehouse.find
}
```
-You should execute a command on the Postgres server that creates a comparable table on the Postgres server:
+Execute a command on the Postgres server that creates a comparable table on the Postgres server:
```text
CREATE FOREIGN TABLE warehouse
@@ -273,20 +267,18 @@ The first column of the table must be `_id` of the type `name`.
Include the `SERVER` clause to specify the name of the database stored on the MongoDB server and the name of the table (`warehouse`) that corresponds to the table on the Postgres server.
-For more information about using the `CREATE FOREIGN TABLE` command, see:
-
-
+For more information about using the `CREATE FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createforeigntable.html).
!!! Note
- MongoDB foreign data wrapper supports the write capability feature.
+ MongoDB Foreign Data Wrapper supports the write capability feature.
-### Data Type Mappings
+### Data type mappings
-When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the MongoDB server. The MongoDB data wrapper will automatically convert the following MongoDB data types to the target Postgres type:
+When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the MongoDB server. The MongoDB data wrapper converts the following MongoDB data types to the target Postgres type:
-| **MongoDB (BSON Type)** | **Postgres** |
+| MongoDB (BSON Type) | Postgres |
| ---------------------------- | ---------------------------------------- |
| ARRAY | JSON |
| BOOL | DOUBLE |
@@ -301,39 +293,37 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
## DROP EXTENSION
-Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be dropping the MongoDB server, and run the command:
+Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the MongoDB server, and run the command:
```text
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the specified name doesn't exist.
`name`
- Specify the name of the installed extension. It is optional.
+ Optionally, specify the name of the installed extension.
`CASCADE`
- Automatically drop objects that depend on the extension. It drops all the other dependent objects too.
+ Drop objects that depend on the extension. It drops all the other dependent objects too.
`RESTRICT`
- Do not allow to drop extension if any objects, other than its member objects and extensions listed in the same DROP command are dependent on it.
+  Don't allow dropping the extension if any objects, other than its member objects and extensions listed in the same DROP command, depend on it.
-**Example**
+### Example
The following command removes the extension from the existing database:
`DROP EXTENSION mongo_fdw;`
-For more information about using the foreign data wrapper `DROP EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `DROP EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropextension.html).
## DROP SERVER
@@ -343,35 +333,33 @@ Use the `DROP SERVER` command to remove a connection to a foreign server. The sy
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-The role that drops the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
+The role that drops the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a server with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if a server with the specified name doesn't exist.
`name`
- Specify the name of the installed server. It is optional.
+ Optionally, specify the name of the installed server.
`CASCADE`
- Automatically drop objects that depend on the server. It should drop all the other dependent objects too.
+ Drop objects that depend on the server. It drops all the other dependent objects too.
`RESTRICT`
- Do not allow to drop the server if any objects are dependent on it.
+  Don't allow dropping the server if any objects depend on it.
-**Example**
+### Example
The following command removes a foreign server named `mongo_server`:
`DROP SERVER mongo_server;`
-For more information about using the `DROP SERVER` command, see:
-
-
+For more information about using the `DROP SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropserver.html).
## DROP USER MAPPING
@@ -381,11 +369,11 @@ Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgr
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the user mapping doesn't exist.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if the user mapping doesn't exist.
`user_name`
@@ -395,15 +383,13 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC }
Specify the name of the server that defines a connection to the MongoDB cluster.
-**Example**
+### Example
-The following command drops a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `mongo_server`:
+The following command drops a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `mongo_server`.
`DROP USER MAPPING FOR enterprisedb SERVER mongo_server;`
-For detailed information about the `DROP USER MAPPING` command, see:
-
-
+For detailed information about the `DROP USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-dropusermapping.html).
## DROP FOREIGN TABLE
@@ -413,11 +399,11 @@ A foreign table is a pointer to a table that resides on the MongoDB host. Use th
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the foreign table with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if the foreign table with the specified name doesn't exist.
`name`
@@ -425,18 +411,16 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
`CASCADE`
- Automatically drop objects that depend on the foreign table. It should drop all the other dependent objects too.
+ Drop objects that depend on the foreign table. It drops all the other dependent objects too.
`RESTRICT`
- Do not allow to drop foreign table if any objects are dependent on it.
+  Don't allow dropping the foreign table if any objects depend on it.
-**Example**
+### Example
```text
DROP FOREIGN TABLE warehouse;
```
-For more information about using the `DROP FOREIGN TABLE` command, see:
-
-
+For more information about using the `DROP FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropforeigntable.html).
diff --git a/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
index 38f2f35b122..31c3932cb50 100644
--- a/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Example: Using the MongoDB Foreign Data Wrapper"
-Before using the MongoDB foreign data wrapper, you must connect to your database with a client application. The following examples demonstrate using the wrapper with the psql client. After connecting to psql, you can follow the steps in the example below:
+Before using the MongoDB foreign data wrapper, you must connect to your database with a client application. The following example uses the wrapper with the psql client. After connecting to psql, you can follow the steps in the example:
```text
-- load extension first time after install
diff --git a/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx b/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
index 45dc400403d..4c18c05e9fa 100644
--- a/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
@@ -1,5 +1,5 @@
---
-title: "Example: Join Pushdown"
+title: "Example: Join pushdown"
---
MongoDB Foreign Data Wrapper supports pushdown for inner joins, left joins, and right joins. For example:
diff --git a/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx b/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
index b1d0564acc4..e406a2c0423 100644
--- a/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
@@ -1,5 +1,5 @@
---
-title: "Identifying the MongoDB Foreign Data Wrapper Version"
+title: "Identifying the MongoDB Foreign Data Wrapper version"
---
diff --git a/product_docs/docs/mongo_data_adapter/5/10_limitations.mdx b/product_docs/docs/mongo_data_adapter/5/10_limitations.mdx
index acdd2f2383c..417730e21ec 100644
--- a/product_docs/docs/mongo_data_adapter/5/10_limitations.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/10_limitations.mdx
@@ -6,6 +6,6 @@ title: "Limitations"
The following limitations apply to MongoDB Foreign Data Wrapper:
-- If the BSON document key contains uppercase letters or occurs within a nested document, MongoDB Foreign Data Wrapper requires the corresponding column names to be declared in double quotes.
-- PostgreSQL limits column names to 63 characters by default. You can increase the `NAMEDATALEN` constant in `src/include/pg_config_manual.h`, compile, and re-install when column names extend beyond 63 characters.
-- MongoDB Foreign Data Wrapper errors out on BSON field which is not listed in the known types (For example: byte, arrays). It throws an error: `Cannot convert BSON type to column type`.
+- If the BSON document key contains uppercase letters or occurs in a nested document, MongoDB Foreign Data Wrapper requires the corresponding column names to be declared in double quotes (see the sketch after this list).
+- PostgreSQL limits column names to 63 characters by default. You can increase the `NAMEDATALEN` constant in `src/include/pg_config_manual.h`, compile, and reinstall when column names exceed 63 characters.
+- MongoDB Foreign Data Wrapper returns an error on a BSON field that isn't one of the known types (for example, byte or arrays). It returns this error: `Cannot convert BSON type to column type`.
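+
+For example, a minimal sketch of quoting such a column name, assuming a collection whose documents contain a key named `warehouseId` (illustrative):
+
+```text
+CREATE FOREIGN TABLE warehouse_quoted
+(
+ _id name,
+ "warehouseId" int
+)
+SERVER mongo_server OPTIONS (database 'testdb', collection 'warehouse');
+```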
diff --git a/product_docs/docs/mongo_data_adapter/5/11_uninstalling_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/11_uninstalling_the_mongo_data_adapter.mdx
index 8fecb569438..3fe37ecea7e 100644
--- a/product_docs/docs/mongo_data_adapter/5/11_uninstalling_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/11_uninstalling_the_mongo_data_adapter.mdx
@@ -5,7 +5,7 @@ title: "Uninstalling the MongoDB Foreign Data Wrapper"
-You can use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command applicable to the operating system and package manager used for the installation:
+You can use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command that applies to the operating system and package manager used for the installation. `xx` is the EDB Postgres Advanced Server version number.
- On RHEL or CentOS 7:
@@ -23,6 +23,5 @@ You can use the `remove` command to uninstall MongoDB Foreign Data Wrapper packa
`apt-get remove edb-as-mongo-fdw`
-Where `xx` is the EDB Postgres Advanced Server version number.
diff --git a/product_docs/docs/mongo_data_adapter/5/index.mdx b/product_docs/docs/mongo_data_adapter/5/index.mdx
index 1698a8cd92d..bf9a2d53c1f 100644
--- a/product_docs/docs/mongo_data_adapter/5/index.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/index.mdx
@@ -15,10 +15,10 @@ navigation:
- 11_uninstalling_the_mongo_data_adapter
---
-The MongoDB Foreign Data Wrapper (`mongo_fdw`) is a Postgres extension that allows you to access data that resides on a MongoDB database from EDB Postgres Advanced Server. It is a writable foreign data wrapper that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
+The MongoDB Foreign Data Wrapper (`mongo_fdw`) is a Postgres extension that lets you access data that resides on a MongoDB database from EDB Postgres Advanced Server. It's a writable foreign data wrapper that you can use with Postgres functions and utilities or with other data that resides on a Postgres host.
-The MongoDB Foreign Data Wrapper can be installed with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
+You can install the MongoDB Foreign Data Wrapper with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
+The term Postgres refers to an instance of EDB Postgres Advanced Server.
diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx
index 4ccdc33f023..13ec9074592 100644
--- a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx
@@ -10,9 +10,9 @@ navigation:
- mongo5.2.3_rel_notes
---
-The Mongo Foreign Data Wrapper documentation describes the latest version of Mongo Foreign Data Wrapper 5 including minor releases and patches. The release notes in this section provide information on what was new in each release. For new functionality introduced in a minor or patch release, there are also indicators within the content about what release introduced the feature.
+The Mongo Foreign Data Wrapper documentation describes the latest version of Mongo Foreign Data Wrapper 5, including minor releases and patches. The release notes provide information on what was new in each release. For new functionality introduced in a minor or patch release, the content also indicates the release that introduced the feature.
-| Version | Release Date |
+| Version | Release date |
| ----------------------------- | ------------ |
| [5.3.0](mongo5.3.0_rel_notes) | 2021 Dec 02 |
| [5.2.9](mongo5.2.9_rel_notes) | 2021 Jun 24 |
diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.8_rel_notes.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.8_rel_notes.mdx
index cc69d8943c4..d8613fc8c8a 100644
--- a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.8_rel_notes.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.8_rel_notes.mdx
@@ -9,10 +9,10 @@ New features, enhancements, bug fixes, and other changes in Mongo Foreign Data W
| Enhancement | Support for EDB Postgres Advanced Server 13. |
| Enhancement | Support for Ubuntu 20.04 LTS platform. |
| Enhancement | Updated LICENSE file. |
-| Bug Fix | Fixed crash with COPY FROM and/or foreign partition routing operations. The crash was caused by Mongo Foreign Data Wrapper not supporting routable foreign-table partitions and/or executing COPY FROM on foreign tables. Instead of crashing, Mongo Foreign Data Wrapper now throws an error. |
-| Bug Fix | Fixed issue where casting target list produces 'NULL'. Correct results are returned not only for have an explicit casts, but also for function calls or operators in the target list. |
-| Bug Fix | Fixed ReScanForeignScan API to make the parameterized query work correctly. Sub-select or correlated queries now use a parameterized plan. |
-| Bug Fix | Changed the server port option's type from int32 to int16 to resolve compilation warnings. Meta driver APIs expect port value in unsigned short type, which resulted in a compilation warning on some gcc versions. |
+| Bug fix | Fixed crash with COPY FROM and/or foreign partition routing operations. The crash was caused by Mongo Foreign Data Wrapper not supporting routable foreign-table partitions and/or executing COPY FROM on foreign tables. Instead of crashing, Mongo Foreign Data Wrapper now throws an error. |
+| Bug fix | Fixed issue where casting the target list produces 'NULL'. Correct results are returned not only for explicit casts but also for function calls or operators in the target list. |
+| Bug fix | Fixed ReScanForeignScan API to make the parameterized query work correctly. Sub-select or correlated queries now use a parameterized plan. |
+| Bug fix | Changed the server port option's type from int32 to int16 to resolve compilation warnings. Meta driver APIs expect the port value in unsigned short type, which resulted in a compilation warning on some gcc versions. |
diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.9_rel_notes.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.9_rel_notes.mdx
index 48bde0205e0..7b3baf98ec1 100644
--- a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.9_rel_notes.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.2.9_rel_notes.mdx
@@ -9,9 +9,9 @@ New features, enhancements, bug fixes, and other changes in Mongo Foreign Data W
| Enhancement | Updated mongo-c-driver to 1.17.3. |
| Enhancement | Updated json-c to 0.15. |
| Enhancement | Updated LICENSE file. |
-| Bug Fix | Fixed crash with the queries involving LEFT JOIN LATERAL. |
-| Bug Fix | Restrict fetching PostgreSQL-specific system attributes from the remote relation to avoid a server crash. |
-| Bug Fix | Improved WHERE pushdown so that more conditions can be sent to the remote server. |
+| Bug fix | Fixed crash with the queries involving LEFT JOIN LATERAL. |
+| Bug fix | Restrict fetching PostgreSQL-specific system attributes from the remote relation to avoid a server crash. |
+| Bug fix | Improved WHERE pushdown so that more conditions can be sent to the remote server. |
diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.3.0_rel_notes.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.3.0_rel_notes.mdx
index 53ed4091569..6ffa1b667b3 100644
--- a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.3.0_rel_notes.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.3.0_rel_notes.mdx
@@ -11,7 +11,7 @@ include:
| ----------- |------------ |
| Enhancement | Support for EDB Postgres Advanced Server 14. |
| Enhancement | Join pushdown: If a query has a join between two foreign tables from the same remote server, you can now push that join down to the remote server instead of fetching all the rows for both the tables and performing a join locally. |
-| Bug Fix | Improve API performance. |
-| Bug Fix | Need support for the whole-row reference. |
+| Bug fix | Improve API performance. |
+| Bug fix | Need support for the whole-row reference. |
From 352ed2b8e8ecc203e339e9317191fd5b3d511a7a Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 24 Mar 2022 14:47:54 -0400
Subject: [PATCH 09/16] removed sentence about postgres
---
product_docs/docs/mongo_data_adapter/5/index.mdx | 2 --
1 file changed, 2 deletions(-)
diff --git a/product_docs/docs/mongo_data_adapter/5/index.mdx b/product_docs/docs/mongo_data_adapter/5/index.mdx
index bf9a2d53c1f..85ea17899e0 100644
--- a/product_docs/docs/mongo_data_adapter/5/index.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/index.mdx
@@ -19,6 +19,4 @@ The MongoDB Foreign Data Wrapper (`mongo_fdw`) is a Postgres extension that lets
You can install the MongoDB Foreign Data Wrapper with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-The term Postgres refers to an instance of EDB Postgres Advanced Server.
-
From fc6ecbf699a44baa9f54719c7e6b6048f8ba63b4 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Thu, 24 Mar 2022 15:19:33 -0400
Subject: [PATCH 10/16] Update 05_efm_pgbouncer.mdx
---
product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
index 1e2ef93e315..a455364aec2 100644
--- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
+++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
@@ -157,7 +157,7 @@ by root and that user/group/other (0755) has read and execute access. The script
For the PgBouncer integration, passwordless `ssh` access is required. There are multiple ways
to configure `ssh`. Follow your organization's recommended process to
configure the passwordless `ssh`. For a quick start, you can also follow this example for configuring passwordless `ssh`.
-The 'efm' user will need to be able to ssh as the user running pgbouncer, i.e. the 'enterprisedb' user.
+The 'efm' user will need to be able to ssh as the user running PgBouncer, i.e. the 'enterprisedb' user.
#### Configure on PgBouncer hosts
From 6761b73974a7c67c28b83da73b22ce2b20b75904 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Thu, 24 Mar 2022 15:21:04 -0400
Subject: [PATCH 11/16] Update
product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
Co-authored-by: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com>
---
.../docs/efm/4/04_configuring_efm/01_cluster_properties.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx b/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
index 79cf1a48990..e0f4c217a0c 100644
--- a/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
+++ b/product_docs/docs/efm/4/04_configuring_efm/01_cluster_properties.mdx
@@ -576,7 +576,7 @@ To perform maintenance on the primary database when `primary.shutdown.as.failure
-Use the `update.physical.slots.period` property to define the slot advance frequency for database version 12 and above. When `update.physical.slots.period` is set to a positive integer value, the primary agent will read the current `restart_lsn` of the physical replication slots after every `update.physical.slots.period` seconds, and send this information with its `pg_current_wal_lsn` and `primary_slot_name` (if it is set in the postgresql.conf file) to the standbys. The physical slots must already exist on the primary for the agent to find them. If physical slots do not already exist on the standbys, standby agents will create the slots and then update the `restart_lsn parameter` for these slots. A non-promotable standby will not create new slots but will update them if they exist.
+Use the `update.physical.slots.period` property to define the slot advance frequency for database version 12 and later. When `update.physical.slots.period` is set to a positive integer value, the primary agent reads the current `restart_lsn` of the physical replication slots after every `update.physical.slots.period` seconds and sends this information with its `pg_current_wal_lsn` and `primary_slot_name` (if it is set in the postgresql.conf file) to the standbys. The physical slots must already exist on the primary for the agent to find them. If physical slots do not already exist on the standbys, standby agents create the slots and then update the `restart_lsn` parameter for these slots. A non-promotable standby doesn't create new slots but updates them if they exist.
Note: all slot names, including one set on the current primary if desired, must be unique.
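For illustration only, a minimal sketch of how this property might appear in the cluster properties file (the file name and the 10-second value are assumptions, not recommendations from this patch):

```text
# Hypothetical excerpt from the cluster properties file (for example, efm.properties):
# advance the physical replication slots every 10 seconds
update.physical.slots.period=10
```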
From df2181bd648f45fee5c9453356afa7ab5aec50e2 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Thu, 24 Mar 2022 15:30:41 -0400
Subject: [PATCH 12/16] Update 05_efm_pgbouncer.mdx
---
product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
index a455364aec2..dd97746f831 100644
--- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
+++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
@@ -157,7 +157,7 @@ by root and that user/group/other (0755) has read and execute access. The script
For the PgBouncer integration, passwordless `ssh` access is required. There are multiple ways
to configure `ssh`. Follow your organization's recommended process to
configure the passwordless `ssh`. For a quick start, you can also follow this example for configuring passwordless `ssh`.
-The 'efm' user will need to be able to ssh as the user running PgBouncer, i.e. the 'enterprisedb' user.
+The user efm must be able to ssh as the user running PgBouncer; for example, enterprisedb.
#### Configure on PgBouncer hosts
From 46ce68177bbbf23b4cf07d1e8c6ce1efe3d36069 Mon Sep 17 00:00:00 2001
From: Bobby Bissett <70302203+EFM-Bobby@users.noreply.github.com>
Date: Thu, 24 Mar 2022 15:33:47 -0400
Subject: [PATCH 13/16] Update 05_efm_pgbouncer.mdx
---
product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
index dd97746f831..7155aaff891 100644
--- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
+++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx
@@ -65,7 +65,7 @@ to those instructions:
1. Create an integration script that connects to every (remote)
PgBouncer host and runs the redirect script. Locate the script at `/usr/edb/efm-4.2/bin/efm_pgbouncer_functions`. Make sure the user
-efm can execute the script, which has the following contents (note that the 'efm' user is ssh'ing as 'enterprisedb' to run the script):
+efm can execute the script, which has the following contents. The user efm runs ssh as enterprisedb to run the script.
``` text
From 8e1a5b364f54b9871f40acab86bc68672faba16b Mon Sep 17 00:00:00 2001
From: drothery-edb
Date: Thu, 24 Mar 2022 15:59:54 -0400
Subject: [PATCH 14/16] fixed some formatting issues
---
...08_configuring_the_hadoop_data_adapter.mdx | 7 +-
.../2/09_using_the_hadoop_data_adapter.mdx | 189 +++++++++---------
2 files changed, 98 insertions(+), 98 deletions(-)
diff --git a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
index 413a2339728..053a554ee7d 100644
--- a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
@@ -16,9 +16,10 @@ Modify the configuration file, adding the `hdfs_fdw.jvmpath` parameter to the en
```
!!! Note
- Copy the jar files (`hive-jdbc-1.0.1-standalone.jar` and `hadoop-common-2.6.4.jar`) from the respective Hive and Hadoop sources or website to the PostgreSQL instance where Hadoop Foreign Data Wrapper is installed.
+ Copy the jar files (`hive-jdbc-1.0.1-standalone.jar` and `hadoop-common-2.6.4.jar`) from the respective Hive and Hadoop sources or website to the PostgreSQL instance where Hadoop Foreign Data Wrapper is installed.
+
+ If you're using EDB Postgres Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
- If you're using EDB Postgres Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
After setting the parameter values, restart the Postgres server. For detailed information about controlling the service on an EDB Postgres Advanced Server host, see the [EDB Postgres Advanced Server documentation](../epas/latest).
@@ -124,7 +125,7 @@ CREATE USER MAPPING FOR role_name SERVER server_name
You must be the owner of the foreign server to create a user mapping for that server.
!!! Note
- The Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you're creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the user name and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the `OPTIONS` clause when creating the user mapping.
+ The Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you're creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the user name and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the `OPTIONS` clause when creating the user mapping.
### Parameters
diff --git a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
index d7c13519756..b42fa8b6f9f 100644
--- a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
@@ -79,71 +79,70 @@ To use HDFS FDW with Apache Hive on top of Hadoop:
1. Access your data from Postgres. You can now use the `weblog` table. Once you're connected using psql, follow these steps:
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
+ ```text
+ -- set the GUC variables appropriately, e.g. :
+ hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
+ hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
+ -- load extension first time after install
+ CREATE EXTENSION hdfs_fdw;
--- create server object
-CREATE SERVER hdfs_server
+ -- create server object
+ CREATE SERVER hdfs_server
FOREIGN DATA WRAPPER hdfs_fdw
OPTIONS (host '127.0.0.1');
--- create user mapping
-CREATE USER MAPPING FOR postgres
+ -- create user mapping
+ CREATE USER MAPPING FOR postgres
SERVER hdfs_server OPTIONS (username 'hive_username', password 'hive_password');
--- create foreign table
-CREATE FOREIGN TABLE weblogs
-(
- client_ip TEXT,
- full_request_date TEXT,
- day TEXT,
- Month TEXT,
- month_num INTEGER,
- year TEXT,
- hour TEXT,
- minute TEXT,
- second TEXT,
- timezone TEXT,
- http_verb TEXT,
- uri TEXT,
- http_status_code TEXT,
- bytes_returned TEXT,
- referrer TEXT,
- user_agent TEXT
-)
-SERVER hdfs_server
+ -- create foreign table
+ CREATE FOREIGN TABLE weblogs
+ (
+ client_ip TEXT,
+ full_request_date TEXT,
+ day TEXT,
+ Month TEXT,
+ month_num INTEGER,
+ year TEXT,
+ hour TEXT,
+ minute TEXT,
+ second TEXT,
+ timezone TEXT,
+ http_verb TEXT,
+ uri TEXT,
+ http_status_code TEXT,
+ bytes_returned TEXT,
+ referrer TEXT,
+ user_agent TEXT
+ )
+ SERVER hdfs_server
OPTIONS (dbname 'default', table_name 'weblogs');
-
--- select from table
-postgres=# SELECT DISTINCT client_ip IP, count(*)
+ -- select from table
+ postgres=# SELECT DISTINCT client_ip IP, count(*)
FROM weblogs GROUP BY IP HAVING count(*) > 5000 ORDER BY 1;
- ip | count
------------------+-------
- 13.53.52.13 | 5494
- 14.323.74.653 | 16194
- 322.6.648.325 | 13242
- 325.87.75.336 | 6500
- 325.87.75.36 | 6498
- 361.631.17.30 | 64979
- 363.652.18.65 | 10561
- 683.615.622.618 | 13505
-(8 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, full_request_date, uri FROM weblogs WHERE http_status_code = 200;
+ ip | count
+ -----------------+-------
+ 13.53.52.13 | 5494
+ 14.323.74.653 | 16194
+ 322.6.648.325 | 13242
+ 325.87.75.336 | 6500
+ 325.87.75.36 | 6498
+ 361.631.17.30 | 64979
+ 363.652.18.65 | 10561
+ 683.615.622.618 | 13505
+ (8 rows)
+
+ -- EXPLAIN output showing WHERE clause being pushed down to remote server.
+ EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, full_request_date, uri FROM weblogs WHERE http_status_code = 200;
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
- Foreign Scan on public.weblogs
- Output: client_ip, full_request_date, uri
- Remote SQL: SELECT client_ip, full_request_date, uri FROM default.weblogs WHERE ((http_status_code = '200'))
-(3 rows)
-```
+ ----------------------------------------------------------------------------------------------------------------
+ Foreign Scan on public.weblogs
+ Output: client_ip, full_request_date, uri
+ Remote SQL: SELECT client_ip, full_request_date, uri FROM default.weblogs WHERE ((http_status_code = '200'))
+ (3 rows)
+ ```
## Using HDFS FDW with Apache Spark on top of Hadoop
@@ -255,48 +254,48 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Access your data from Postgres using psql:
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
-
--- create server object
-CREATE SERVER hdfs_server
- FOREIGN DATA WRAPPER hdfs_fdw
- OPTIONS (host '127.0.0.1', port '10000', client_type 'spark', auth_type 'NOSASL');
-
--- create user mapping
-CREATE USER MAPPING FOR postgres
- SERVER hdfs_server OPTIONS (username 'spark_username', password 'spark_password');
-
--- create foreign table
-CREATE FOREIGN TABLE f_names_tab( a int, name varchar(255)) SERVER hdfs_svr
- OPTIONS (dbname 'testdb', table_name 'my_names_tab');
-
--- select the data from foreign server
-select * from f_names_tab;
- a | name
----+--------
- 1 | abcd
- 2 | pqrs
- 3 | wxyz
- 4 | a_b_c
- 5 | p_q_r
- 0 |
-(6 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (verbose, costs off) SELECT name FROM f_names_tab WHERE a > 3;
+ ```text
+ -- set the GUC variables appropriately, e.g. :
+ hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
+ hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
+
+ -- load extension first time after install
+ CREATE EXTENSION hdfs_fdw;
+
+ -- create server object
+ CREATE SERVER hdfs_server
+ FOREIGN DATA WRAPPER hdfs_fdw
+ OPTIONS (host '127.0.0.1', port '10000', client_type 'spark', auth_type 'NOSASL');
+
+ -- create user mapping
+ CREATE USER MAPPING FOR postgres
+ SERVER hdfs_server OPTIONS (username 'spark_username', password 'spark_password');
+
+ -- create foreign table
+ CREATE FOREIGN TABLE f_names_tab( a int, name varchar(255)) SERVER hdfs_svr
+ OPTIONS (dbname 'testdb', table_name 'my_names_tab');
+
+ -- select the data from foreign server
+ select * from f_names_tab;
+ a | name
+ ---+--------
+ 1 | abcd
+ 2 | pqrs
+ 3 | wxyz
+ 4 | a_b_c
+ 5 | p_q_r
+ 0 |
+ (6 rows)
+
+ -- EXPLAIN output showing WHERE clause being pushed down to remote server.
+ EXPLAIN (verbose, costs off) SELECT name FROM f_names_tab WHERE a > 3;
QUERY PLAN
---------------------------------------------------------------------------
- Foreign Scan on public.f_names_tab
- Output: name
- Remote SQL: SELECT name FROM my_test_db.my_names_tab WHERE ((a > '3'))
-(3 rows)
-```
+ --------------------------------------------------------------------------
+ Foreign Scan on public.f_names_tab
+ Output: name
+ Remote SQL: SELECT name FROM my_test_db.my_names_tab WHERE ((a > '3'))
+ (3 rows)
+ ```
!!! Note
This example uses the same port while creating the foreign server because Spark Thrift Server is compatible with Hive Thrift Server. Applications using Hiveserver2 work with Spark except for the behavior of the `ANALYZE` command and the connection string in the case of `NOSASL`. We recommend using `ALTER SERVER` and changing the `client_type` option if you replace Hive with Spark.
From 2b1ec465f57718814ed059aa0e19e0463dfcb718 Mon Sep 17 00:00:00 2001
From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com>
Date: Thu, 24 Mar 2022 16:14:38 -0400
Subject: [PATCH 15/16] editing of MySQL foreign data wrapper
---
.../2/02_requirements_overview.mdx | 10 +-
.../2/03_architecture_overview.mdx | 2 +-
.../01_mysql_rhel8_x86.mdx | 2 +-
.../02_mysql_other_linux8_x86.mdx | 2 +-
.../03_mysql_rhel7_x86.mdx | 2 +-
.../04_mysql_centos7_x86.mdx | 2 +-
.../05_mysql_sles15_x86.mdx | 4 +-
.../07_mysql_sles12_x86.mdx | 4 +-
.../13_mysql_rhel8_ppcle.mdx | 4 +-
.../17_mysql_sles15_ppcle.mdx | 4 +-
.../19_mysql_sles12_ppcle.mdx | 4 +-
.../2/05_updating_the_mysql_data_adapter.mdx | 4 +-
.../2/06_features_of_mysql_fdw.mdx | 53 +++--
.../07_configuring_the_mysql_data_adapter.mdx | 198 ++++++++----------
...8_example_using_the_mysql_data_adapter.mdx | 2 +-
.../2/09_example_import_foreign_schema.mdx | 4 +-
.../2/10_example_join_push_down.mdx | 4 +-
.../10a_example_aggregate_func_push_down.mdx | 12 +-
.../2/11_identifying_data_adapter_version.mdx | 2 +-
...12_uninstalling_the_mysql_data_adapter.mdx | 14 +-
.../2/13_troubleshooting.mdx | 10 +-
.../docs/mysql_data_adapter/2/index.mdx | 6 +-
.../2/mysql_rel_notes/index.mdx | 2 +-
.../mysql_rel_notes/mysql2.7.0_rel_notes.mdx | 4 +-
24 files changed, 163 insertions(+), 192 deletions(-)
diff --git a/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx
index c918e1b8868..3b13bd15b2b 100644
--- a/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx
@@ -1,8 +1,8 @@
---
-title: "Supported Database and MySQL Versions"
+title: "Supported database and MySQL versions"
---
-## Supported Database Versions
+## Supported database versions
This table lists the latest MySQL Foreign Data Wrapper versions and their supported corresponding EDB Postgres Advanced Server (EPAS) versions. MySQL Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server. See [Product Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) for details.
@@ -14,9 +14,9 @@ This table lists the latest MySQL Foreign Data Wrapper versions and their suppor
| 2.5.3 | N | N | Y | N | N |
| 2.5.1 | N | N | N | Y | N |
-## Supported MySQL Versions
+## Supported MySQL versions
-### The MySQL Foreign Data Wrapper is supported for MySQL 8 on the following platforms:
+The MySQL Foreign Data Wrapper is supported for MySQL 8 on the following platforms:
**Linux x86-64**
@@ -36,7 +36,7 @@ This table lists the latest MySQL Foreign Data Wrapper versions and their suppor
- SLES 12
-### The MySQL Foreign Data Wrapper is supported for MySQL 5 on the following platforms:
+The MySQL Foreign Data Wrapper is supported for MySQL 5 on the following platforms:
**Linux x86-64**
diff --git a/product_docs/docs/mysql_data_adapter/2/03_architecture_overview.mdx b/product_docs/docs/mysql_data_adapter/2/03_architecture_overview.mdx
index fba2349b118..f7356969f6d 100644
--- a/product_docs/docs/mysql_data_adapter/2/03_architecture_overview.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/03_architecture_overview.mdx
@@ -1,5 +1,5 @@
---
-title: "Architecture Overview"
+title: "Architecture overview"
---
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/01_mysql_rhel8_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/01_mysql_rhel8_x86.mdx
index 8584b53f4b8..fb0bad6d4d2 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/01_mysql_rhel8_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/01_mysql_rhel8_x86.mdx
@@ -25,7 +25,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install the MySQL Foreign Data Wrapper.
-## Creating a Repository Configuration File
+## Creating a repository configuration file
To create the repository configuration file, assume superuser privileges, and invoke the following command:
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/02_mysql_other_linux8_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/02_mysql_other_linux8_x86.mdx
index 7526cd0f295..9833ab60ee2 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/02_mysql_other_linux8_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/02_mysql_other_linux8_x86.mdx
@@ -24,7 +24,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install the MySQL Foreign Data Wrapper.
-## Creating a Repository Configuration File
+## Creating a repository configuration file
To create the repository configuration file, assume superuser privileges, and invoke the following command:
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/03_mysql_rhel7_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/03_mysql_rhel7_x86.mdx
index e0065569951..a87a3118f31 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/03_mysql_rhel7_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/03_mysql_rhel7_x86.mdx
@@ -24,7 +24,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install the MySQL Foreign Data Wrapper.
-## Creating a Repository Configuration File
+## Creating a repository configuration file
To create the repository configuration file, assume superuser privileges, and invoke the following command:
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/04_mysql_centos7_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/04_mysql_centos7_x86.mdx
index 37778de2905..c4cef261778 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/04_mysql_centos7_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/04_mysql_centos7_x86.mdx
@@ -20,7 +20,7 @@ After receiving your repository credentials you can:
2. Modify the file, providing your user name and password.
3. Install the MySQL Foreign Data Wrapper.
-## Creating a Repository Configuration File
+## Creating a repository configuration file
To create the repository configuration file, assume superuser privileges, and invoke the following command:
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/05_mysql_sles15_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/05_mysql_sles15_x86.mdx
index 765c3083dd2..e20a2a010e2 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/05_mysql_sles15_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/05_mysql_sles15_x86.mdx
@@ -16,7 +16,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-## Setting up the Repository
+## Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -45,7 +45,7 @@ rpm --import /etc/RPM-GPG-KEY-mysql-2022
zypper refresh
```
-## Installing the Package
+## Installing the package
```shell
zypper -n install edb-as13-mysql8_fdw
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/07_mysql_sles12_x86.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/07_mysql_sles12_x86.mdx
index 17cd9f9a5aa..2bdb5c07cac 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/07_mysql_sles12_x86.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/07_mysql_sles12_x86.mdx
@@ -16,7 +16,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-## Setting up the Repository
+## Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -50,7 +50,7 @@ SUSEConnect -p sle-sdk/12.5/x86_64
zypper refresh
```
-## Installing the Package
+## Installing the package
```shell
zypper -n install edb-as14-mysql8_fdw
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/13_mysql_rhel8_ppcle.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/13_mysql_rhel8_ppcle.mdx
index 1f4215746c9..300d3bb9b50 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/13_mysql_rhel8_ppcle.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/13_mysql_rhel8_ppcle.mdx
@@ -15,7 +15,7 @@ To log in as a superuser:
sudo su -
```
-## Setting up the Repository
+## Setting up the repository
1. To register with EDB to receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
@@ -55,7 +55,7 @@ sudo su -
dnf -qy module disable postgresql
```
-## Installing the Package
+## Installing the package
```shell
dnf -y install edb-as-mysql8_fdw
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/17_mysql_sles15_ppcle.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/17_mysql_sles15_ppcle.mdx
index 38b91dc962d..afdebfc1861 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/17_mysql_sles15_ppcle.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/17_mysql_sles15_ppcle.mdx
@@ -16,7 +16,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-## Setting up the Repository
+## Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -45,7 +45,7 @@ rpm --import /etc/RPM-GPG-KEY-mysql-2022
zypper refresh
```
-## Installing the Package
+## Installing the package
```shell
zypper -n install edb-as13-mysql8_fdw
diff --git a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/19_mysql_sles12_ppcle.mdx b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/19_mysql_sles12_ppcle.mdx
index b62656d70ee..1f000909cdc 100644
--- a/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/19_mysql_sles12_ppcle.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/04_installing_the_mysql_data_adapter/19_mysql_sles12_ppcle.mdx
@@ -16,7 +16,7 @@ sudo su -
Before setting up the repository, you need to register with EDB. To receive credentials for the EDB repository, visit: [Repository Access Request](https://www.enterprisedb.com/repository-access-request).
-#### Setting up the Repository
+## Setting up the repository
Setting up the repository is a one time task. If you have already set up your repository, you do not need to perform these steps.
@@ -42,7 +42,7 @@ SUSEConnect -p sle-sdk/12.5/ppc64le
zypper refresh
```
-#### Installing the Package
+## Installing the package
```shell
zypper -n install edb-as14-mysql8_fdw
diff --git a/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
index c55733f3c5e..bff8a573aa7 100644
--- a/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Updating the MySQL Foreign Data Wrapper"
-**Updating an RPM Installation**
+## Updating an RPM installation
If you have an existing RPM installation of MySQL Foreign Data Wrapper, you can use yum or dnf to upgrade your repository configuration file and update to a more recent product version. yum or dnf will update the `edb.repo` file to enable access to the current EDB repository, configured to connect with the credentials specified in your `edb.repo` file. Then, you can use yum or dnf to upgrade any installed packages:
@@ -23,7 +23,7 @@ If you have an existing RPM installation of MySQL Foreign Data Wrapper, you can
sudo yum -y upgrade edb-as-mysql5_fdw* mysql-community-devel
```
-**Updating MySQL Foreign Data Wrapper on a Debian or Ubuntu Host**
+## Updating MySQL Foreign Data Wrapper on a Debian or Ubuntu host
In the previously released version of MySQL FDW, the package name for MySQL FDW was `edb-as*-mysql-fdw`. In the current release, two separate packages have been made available for MySQL 5 and MySQL 8 i.e `edb-as*-mysql5-fdw` and `edb-as*-mysql8-fdw` Respectively.
diff --git a/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx b/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx
index 0dd7182a6c7..5c806bbdfe3 100644
--- a/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx
@@ -3,11 +3,11 @@ title: "Features of the MySQL Foreign Data Wrapper"
---
-The key features of the MySQL Foreign Data Wrapper are:
+These are the key features of the MySQL Foreign Data Wrapper.
-## Writable Foreign Data Wrapper
+## Writable foreign data wrapper
-MySQL Foreign Data Wrapper provides the write capability. Users can insert, update, and delete data in the remote MySQL tables by inserting, updating, and deleting the data locally in the foreign tables. MySQL foreign data wrapper uses the Postgres type casting mechanism to provide opposite type casting between MySQL and Postgres data types.
+MySQL Foreign Data Wrapper provides the write capability. You can insert, update, and delete data in the remote MySQL tables by inserting, updating, and deleting the data locally in the foreign tables. MySQL foreign data wrapper uses the Postgres type casting mechanism to provide opposite type casting between MySQL and Postgres data types.
!!! Note
The first column of MySQL table must have unique/primary key for DML to work.
@@ -15,62 +15,59 @@ MySQL Foreign Data Wrapper provides the write capability. Users can insert, upda
See also:
- [Example: Using the MySQL Foreign Data Wrapper](08_example_using_the_mysql_data_adapter/#example_using_the_mysql_data_adapter)
-- [Data Type Mappings](07_configuring_the_mysql_data_adapter/#data-type-mappings)
+- [Data type mappings](07_configuring_the_mysql_data_adapter/#data-type-mappings)
-## Connection Pooling
+## Connection pooling
-MySQL_FDW establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. This connection is kept and reused for subsequent queries in the same session.
+MySQL Foreign Data Wrapper establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. This connection is kept and reused for subsequent queries in the same session.
-## Where Clause Pushdown
+## WHERE clause pushdown
-MySQL Foreign Data Wrapper allows the pushdown of `WHERE` clause to the foreign server for execution. This feature optimizes remote queries to reduce the number of rows transferred from foreign servers.
+MySQL Foreign Data Wrapper allows the pushdown of a `WHERE` clause to the foreign server for execution. This feature optimizes remote queries to reduce the number of rows transferred from foreign servers.
-## Column Pushdown
+## Column pushdown
MySQL Foreign Data Wrapper supports column pushdown. As a result, the query brings back only those columns that are a part of the select target list.
-## Join Pushdown
+## Join pushdown
-MySQL Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote MySQL server to that remote MySQL server, thereby enhancing the performance.
+MySQL Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote MySQL server to that remote MySQL server, enhancing the performance.
!!! Note
- - Currently, joins involving only relational and arithmetic operators in join-clauses are pushed down to avoid any potential join failure.
+ - Currently, joins involving only relational and arithmetic operators in join clauses are pushed down to avoid any potential join failure.
- Only the INNER and LEFT/RIGHT OUTER joins are supported.
See also:
-- [Example: Join Pushdown](10_example_join_push_down/#example_join_push_down)
+- [Example: Join pushdown](10_example_join_push_down/#example_join_push_down)
- [Blog: Join Pushdown](https://www.enterprisedb.com/blog/how-enhance-efficiency-your-mysqlfdw-operations-join-push-down) - covers performance improvements and partition-wise join pushdowns
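As a hedged sketch of the join pushdown behavior described above (the foreign tables t1 and t2 are assumptions, not part of this patch), EXPLAIN can be used to check whether a join between two foreign tables on the same MySQL server is sent to the remote server:

```text
-- t1 and t2 are hypothetical foreign tables defined on the same MySQL server.
-- When the join is pushed down, the plan shows a single Foreign Scan whose
-- Remote SQL contains the join.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT t1.c1, t2.c2
FROM t1 INNER JOIN t2 ON (t1.c1 = t2.c1);
```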
-## Prepared Statement
-
-## Aggregate Function Pushdown
+## Aggregate function pushdown
MySQL Foreign Data Wrapper supports aggregate pushdown for min, max,
-sum, avg, and count aggregate functions allowing you to push aggregates to the remote MySQL server instead of fetching all
-of the rows and aggregating them locally. Aggregate filters and aggregate orders are not pushed down as MySQL does not support them.
+sum, avg, and count aggregate functions, allowing you to push aggregates to the remote MySQL server instead of fetching all
+of the rows and aggregating them locally. Aggregate filters and aggregate orders aren't pushed down as MySQL doesn't support them.
See also:
-- [Example: Aggregate Function Pushdown](10a_example_aggregate_func_push_down)
+- [Example: Aggregate function pushdown](10a_example_aggregate_func_push_down)
- [Blog: Aggregate Pushdown](https://www.enterprisedb.com/blog/aggregate-push-down-mysqlfdw) - covers performance improvements, using join and aggregate pushdowns together, and pushing down aggregates to the partition table
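A similar hedged sketch for aggregate pushdown (the table name is an assumption):

```text
-- count and max on a hypothetical foreign table; when pushed down, the
-- aggregate appears in the Remote SQL of the Foreign Scan node.
EXPLAIN (VERBOSE, COSTS OFF)
SELECT count(*), max(c1) FROM sales_fdw_tab;
```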
-## Prepared Statement
-MySQL Foreign Data Wrapper supports Prepared Statement. The select queries uses prepared statements instead of simple query protocol.
+## Prepared statement
-## Import Foreign Schema
+MySQL Foreign Data Wrapper supports prepared statements. Select queries use prepared statements instead of the simple query protocol.
-MySQL Foreign Data Wrapper supports Import Foreign Schema which enables the local host to import table definitions on the EDB Postgres Advanced Server from the MySQL server. The new foreign tables are created with the corresponding column types and same table name as that of remote tables in the existing local schema.
+## Import foreign schema
-See also:
+MySQL Foreign Data Wrapper supports import foreign schema, which enables the local host to import table definitions from the MySQL server into EDB Postgres Advanced Server. The new foreign tables are created in the existing local schema with the same table names and corresponding column types as the remote tables.
-[Example: Import Foreign Schema](09_example_import_foreign_schema/#example_import_foreign_schema)
+See [Example: Import foreign schema](09_example_import_foreign_schema/#example_import_foreign_schema) for an example.
-## Automated Cleanup
+## Automated cleanup
-MySQL Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using `DROP EXTENSION` command. This feature is specifically useful when a foreign table is set for a temporary purpose. The syntax:
+MySQL Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is useful when a foreign table is set for a temporary purpose. The syntax is:
-> `DROP EXTENSION mysql_fdw CASCADE;`
+ `DROP EXTENSION mysql_fdw CASCADE;`
For more information, see [DROP EXTENSION](https://www.postgresql.org/docs/current/sql-dropextension.html).
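As a hedged illustration of the import foreign schema feature described above (the server, remote database, and local schema names are assumptions; the linked example page remains the reference):

```text
-- Import all table definitions from the remote MySQL database "edb"
-- into the local schema "public", using the assumed server mysql_server:
IMPORT FOREIGN SCHEMA edb FROM SERVER mysql_server INTO public;
```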
diff --git a/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
index 5e0c1010285..e00cc053ee1 100644
--- a/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
@@ -4,42 +4,40 @@ title: "Configuring the MySQL Foreign Data Wrapper"
-Before using the MySQL Foreign Data Wrapper, you must:
+Before using the MySQL Foreign Data Wrapper:
1. Use the [CREATE EXTENSION](#create-extension) command to create the MySQL Foreign Data Wrapper extension on the Postgres host.
2. Use the [CREATE SERVER](#create-server) command to define a connection to the MySQL server.
3. Use the [CREATE USER MAPPING](#create-user-mapping) command to define a mapping that associates a Postgres role with the server.
- 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a single table in the Postgres database that corresponds to a table that resides on the MySQL server or use the [IMPORT FOREIGN SCHEMA](#import-foreign-schema) command to import multiple remote tables in the local schema.
+ 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a single table in the Postgres database that corresponds to a table that resides on the MySQL server, or use the [IMPORT FOREIGN SCHEMA](#import-foreign-schema) command to import multiple remote tables in the local schema.
## CREATE EXTENSION
-Use the `CREATE EXTENSION` command to create the `mysql_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be querying the MySQL server, and invoke the command:
+Use the `CREATE EXTENSION` command to create the `mysql_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're querying the MySQL server, and invoke the command:
```text
CREATE EXTENSION [IF NOT EXISTS] mysql_fdw [WITH] [SCHEMA schema_name];
```
-**Parameters**
+### Parameters
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the same name already exists.
+ Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the same name already exists.
`schema_name`
Optionally specify the name of the schema in which to install the extension's objects.
-**Example**
+### Example
The following command installs the MySQL foreign data wrapper:
`CREATE EXTENSION mysql_fdw;`
-For more information about using the foreign data wrapper `CREATE EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `CREATE EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createextension.html).
@@ -52,36 +50,36 @@ CREATE SERVER server_name FOREIGN DATA WRAPPER mysql_fdw
[OPTIONS (option 'value' [, ...])]
```
-The role that defines the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
+The role that defines the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign data wrapper specified in the `CREATE SERVER` command.
-**Parameters**
+### Parameters
`server_name`
- Use `server_name` to specify a name for the foreign server. The server name must be unique within the database.
+ Use `server_name` to specify a name for the foreign server. The server name must be unique in the database.
`FOREIGN_DATA_WRAPPER`
- Include the `FOREIGN_DATA_WRAPPER` clause to specify that the server should use the `mysql_fdw` foreign data wrapper when connecting to the cluster.
+ Include the `FOREIGN_DATA_WRAPPER` clause to specify for the server to use the `mysql_fdw` foreign data wrapper when connecting to the cluster.
`OPTIONS`
- Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server. You can include:
+ Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server. You can include these options.
| Option | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| host | The address or hostname of the MySQL server. The default value is `127.0.0.1`. |
| port | The port number of the MySQL Server. The default is `3306`. |
-| secure_auth | Use to enable or disable secure authentication. The default value is `true`. |
+| secure_auth | Use to enable or disable secure authentication. The default is `true`. |
| init_command | The SQL statement to execute when connecting to the MySQL server. |
| ssl_key | The path name of the client private key file. |
| ssl_cert | The path name of the client public key certificate file. |
| ssl_ca | The path name of the Certificate Authority (CA) certificate file. This option, if used, must specify the same certificate used by the server. |
| ssl_capath | The path name of the directory that contains trusted SSL CA certificate files. |
| ssl_cipher | The list of permissible ciphers for SSL encryption. |
-| use_remote_estimate | Include the use_remote_estimate to instruct the server to use EXPLAIN commands on the remote server when estimating processing costs. By default, use_remote_estimate is false. |
+| use_remote_estimate | Include `use_remote_estimate` to instruct the server to use `EXPLAIN` commands on the remote server when estimating processing costs. By default, `use_remote_estimate` is `false`. |
-**Example**
+### Example
The following command creates a foreign server named `mysql_server` that uses the `mysql_fdw` foreign data wrapper to connect to a host with an IP address of `127.0.0.1`:
@@ -91,9 +89,7 @@ CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0
The foreign server uses the default port (`3306`) for the connection to the client on the MySQL cluster.
-For more information about using the `CREATE SERVER` command, see:
-
-
+For more information about using the `CREATE SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createserver.html).
@@ -108,11 +104,11 @@ CREATE USER MAPPING FOR role_name SERVER server_name
You must be the owner of the foreign server to create a user mapping for that server.
-**Parameters**
+### Parameters
`role_name`
- Use `role_name` to specify the role that will be associated with the foreign server.
+ Use `role_name` to specify the role to associate with the foreign server.
`server_name`
@@ -122,13 +118,13 @@ You must be the owner of the foreign server to create a user mapping for that se
Use the `OPTIONS` clause to specify connection information for the foreign server.
- `username`: the name of the user on the MySQL server.
+ `username` is the name of the user on the MySQL server.
- `password`: the password associated with the username.
+ `password` is the password associated with the username.
-**Example**
+### Example
-The following command creates a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `mysql_server`:
+The following command creates a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `mysql_server`.
`CREATE USER MAPPING FOR enterprisedb SERVER mysql_server;`
@@ -138,17 +134,15 @@ If the database host uses secure authentication, provide connection credentials
CREATE USER MAPPING FOR public SERVER mysql_server OPTIONS (username 'foo', password 'bar');
```
-The command creates a user mapping for a role named `public` that is associated with a server named `mysql_server`. When connecting to the MySQL server, the server will authenticate as `foo`, and provide a password of `bar`.
-
-For detailed information about the `CREATE USER MAPPING` command, see:
+The command creates a user mapping for a role named `public` that's associated with a server named `mysql_server`. When connecting to the MySQL server, the server authenticates as `foo` and provides a password of `bar`.
-
+For detailed information about the `CREATE USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createusermapping.html).
## CREATE FOREIGN TABLE
-A foreign table is a pointer to a table that resides on the MySQL host. Before creating a foreign table definition on the Postgres server, connect to the MySQL server and create a table; the columns in the table will map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the MySQL host. The syntax is:
+A foreign table is a pointer to a table that resides on the MySQL host. Before creating a foreign table definition on the Postgres server, connect to the MySQL server and create a table. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the MySQL host. The syntax is:
```text
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
@@ -160,52 +154,52 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ]
```
-where `column_constraint` is:
+`column_constraint` is:
```text
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
-and `table_constraint` is:
+`table_constraint` is:
```text
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
-**Parameters**
+### Parameters
`table_name`
- Specifies the name of the foreign table; include a schema name to specify the schema in which the foreign table should reside.
+ Specifies the name of the foreign table. Include a schema name to specify the schema in which the foreign table resides.
`IF NOT EXISTS`
- Include the `IF NOT EXISTS` clause to instruct the server to not throw an error if a table with the same name already exists; if a table with the same name exists, the server will issue a notice.
+ Include the `IF NOT EXISTS` clause to instruct the server to not return an error if a table with the same name already exists. If a table with the same name exists, the server issues a notice.
`column_name`
- Specifies the name of a column in the new table; each column should correspond to a column described on the MySQL server.
+ Specifies the name of a column in the new table. Each column must correspond to a column described on the MySQL server.
`data_type`
- Specifies the data type of the column; when possible, specify the same data type for each column on the Postgres server and the MySQL server. If a data type with the same name is not available, the Postgres server will attempt to cast the data type to a type compatible with the MySQL server. If the server cannot identify a compatible data type, it will return an error.
+ Specifies the data type of the column. When possible, specify the same data type for each column on the Postgres server and the MySQL server. If a data type with the same name isn't available, the Postgres server attempts to cast the data type to a type compatible with the MySQL server. If the server can't identify a compatible data type, it returns an error.
`COLLATE collation`
- Include the `COLLATE` clause to assign a collation to the column; if not specified, the column data type's default collation is used.
+ Include the `COLLATE` clause to assign a collation to the column. If not specified, the column data type's default collation is used.
`INHERITS (parent_table [, ... ])`
- Include the `INHERITS` clause to specify a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
+ Include the `INHERITS` clause to specify a list of tables from which the new foreign table inherits all columns. Parent tables can be plain tables or foreign tables.
`CONSTRAINT constraint_name`
- Specify an optional name for a column or table constraint; if not specified, the server will generate a constraint name.
+ Specify an optional name for a column or table constraint. If not specified, the server generates a constraint name.
`NOT NULL`
- Include the `NOT NULL` keywords to indicate that the column is not allowed to contain null values.
+ Include the `NOT NULL` keywords to indicate that the column isn't allowed to contain null values.
`NULL`
@@ -213,31 +207,31 @@ and `table_constraint` is:
`CHECK (expr) [NO INHERIT]`
- Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
+ Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint references that column's value only, while an expression appearing in a table constraint can reference multiple columns.
- A `CHECK` expression cannot contain subqueries or refer to variables other than columns of the current row.
+ A `CHECK` expression can't contain subqueries or refer to variables other than columns of the current row.
- Include the `NO INHERIT` keywords to specify that a constraint should not propagate to child tables.
+ Include the `NO INHERIT` keywords to specify that a constraint must not propagate to child tables.
`DEFAULT default_expr`
- Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears within. The data type of the default expression must match the data type of the column.
+ Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears in. The data type of the default expression must match the data type of the column.
`SERVER server_name [OPTIONS (option 'value' [, ... ] ) ]`
- To create a foreign table that will allow you to query a table that resides on a MySQL file system, include the `SERVER` clause and specify the `server_name` of the foreign server that uses the MySQL data adapter.
+ To create a foreign table that allows you to query a table that resides on a MySQL file system, include the `SERVER` clause and specify `server_name` for the foreign server that uses the MySQL data adapter.
- Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
+ Use the `OPTIONS` clause to specify the following options and their corresponding values:
-| option | value |
+| Option | Value |
| ------------- | ---------------------------------------------------------------------------------------- |
-| dbname | The name of the database on the MySQL server; the database name is required. |
-| table_name | The name of the table on the MySQL server; the default is the name of the foreign table. |
-| max_blob_size | The maximum blob size to read without truncation. |
+| dbname | The name of the database on the MySQL server. The database name is required. |
+| table_name | The name of the table on the MySQL server. The default is the name of the foreign table. |
+| max_blob_size | The maximum BLOB size to read without truncation. |
-**Example**
+### Example
-To use data that is stored on MySQL server, you must create a table on the Postgres host that maps the columns of a MySQL table to the columns of a Postgres table. For example, for a MySQL table with the following definition:
+To use data that's stored on a MySQL server, you must create a table on the Postgres host that maps the columns of a MySQL table to the columns of a Postgres table. For example, for a MySQL table with the following definition:
```text
CREATE TABLE warehouse (
@@ -246,7 +240,7 @@ CREATE TABLE warehouse (
warehouse_created TIMESTAMP);
```
-You should execute a command on the Postgres server that creates a comparable table on the Postgres server:
+Execute a command on the Postgres server that creates a comparable table on the Postgres server:
```text
CREATE FOREIGN TABLE warehouse
@@ -261,20 +255,18 @@ SERVER mysql_server
Include the `SERVER` clause to specify the name of the database stored on the MySQL server and the name of the table (`warehouse`) that corresponds to the table on the Postgres server.
-For more information about using the `CREATE FOREIGN TABLE` command, see:
-
->
+For more information about using the `CREATE FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-createforeigntable.html).
!!! Note
MySQL foreign data wrapper supports the write capability feature.
-### Data Type Mappings
+### Data type mappings
-When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the MySQL server. The MySQL data wrapper will automatically convert the following MySQL data types to the target Postgres type:
+When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the MySQL server. The MySQL data wrapper converts the following MySQL data types to the target Postgres type:
-| **MySQL** | **Postgres** |
+| MySQL | Postgres |
| ----------- | ---------------------------- |
| BIGINT | BIGINT/INT8 |
| BOOLEAN | SMALLINT |
@@ -292,9 +284,9 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
!!! Note
For `ENUM` data type:
- MySQL accepts `enum` value in string form. You must create exactly same `enum` listing on Advanced Server as that is present on MySQL server. Any sort of inconsistency will result in an error while fetching rows with values not known on the local server.
+ MySQL accepts an `enum` value in string form. You must create exactly the same `enum` listing on EDB Postgres Advanced Server as is present on the MySQL server. Any sort of inconsistency causes an error while fetching rows with values not known on the local server.
- Also, when the given `enum` value is not present at MySQL side but present at Postgres/Advanced Server side, an empty string (`''`) is inserted as a value at MySQL side for the `enum` column. To select from such a table having enum value as `''`, create an `enum` type at Postgres side with all valid values and `''`.
+ Also, when the given `enum` value isn't present at the MySQL side but is present at the EDB Postgres Advanced Server side, an empty string (`''`) is inserted as a value at the MySQL side for the `enum` column. To select from a table having the `enum` value as `''`, create an `enum` type at the Postgres side with all valid values and `''`.
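+
+For example, for a hypothetical MySQL column declared as `size ENUM('S','M','L')`, a minimal sketch of a matching definition on the Postgres side, also covering the empty-string case described in the note above, might look like this (the `items` table and `size_enum` type are illustrative, and `mysql_server` is the foreign server defined earlier):
+
+```text
+CREATE TYPE size_enum AS ENUM ('S', 'M', 'L', '');
+
+CREATE FOREIGN TABLE items
+  (item_id INT,
+   size    size_enum)
+SERVER mysql_server
+  OPTIONS (dbname 'edb', table_name 'items');
+```
+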
@@ -310,7 +302,7 @@ IMPORT FOREIGN SCHEMA remote_schema
[ OPTIONS ( option 'value' [, ... ] ) ]
```
-**Parameters**
+### Parameters
`remote_schema`
@@ -330,20 +322,20 @@ IMPORT FOREIGN SCHEMA remote_schema
`local_schema`
- Specify the name of local schema where the imported foreign tables must be created.
+ Specify the name of the local schema where you want to create the imported foreign tables.
`OPTIONS`
- Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
+ Use the `OPTIONS` clause to specify the following options and their corresponding values:
- | **Option** | **Description** |
+ | Option | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
| import_default | Controls whether column `DEFAULT` expressions are included in the definitions of foreign tables imported from a foreign server. The default is `false`. |
| import_not_null | Controls whether column `NOT NULL` constraints are included in the definitions of foreign tables imported from a foreign server. The default is `true`. |
-**Example**
+### Example
-For a MySQL table created in the `edb` database with the following definition:
+For a MySQL table created in the `edb` database with the following definition:
```text
CREATE TABLE color(cid INT PRIMARY KEY, cname TEXT);
@@ -356,7 +348,7 @@ INSERT INTO fruit VALUES (1, 'Orange');
INSERT INTO fruit VALUES (2, 'Mango');
```
-You should execute a command on the Postgres server that imports a comparable table on the Postgres server:
+Execute a command on the Postgres server that imports a comparable table on the Postgres server:
```text
IMPORT FOREIGN SCHEMA edb FROM SERVER mysql_server INTO public;
@@ -381,45 +373,41 @@ SELECT * FROM fruit;
The command imports table definitions from a remote schema `edb` on server `mysql_server` and then creates the foreign tables in local schema `public`.
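+
+For example, a variant of the import above that also brings in column `DEFAULT` expressions uses the `import_default` option described earlier. This is a sketch only, using the same `mysql_server` server and `public` schema:
+
+```text
+IMPORT FOREIGN SCHEMA edb FROM SERVER mysql_server INTO public OPTIONS (import_default 'true');
+```
+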
-For more information about using the `IMPORT FOREIGN SCHEMA` command, see:
-
-
+For more information about using the `IMPORT FOREIGN SCHEMA` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-importforeignschema.html).
## DROP EXTENSION
-Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be dropping the MySQL server, and run the command:
+Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the MySQL server, and run the command:
```text
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if an extension with the specified name doesn't exist.
`name`
- Specify the name of the installed extension. It is optional.
+ Specify the name of the installed extension.
`CASCADE`
- Automatically drop objects that depend on the extension. It drops all the other dependent objects too.
+ Automatically drop objects that depend on the extension, and in turn all objects that depend on those objects.
`RESTRICT`
- Do not allow to drop extension if any objects, other than its member objects and extensions listed in the same DROP command are dependent on it.
+ Don't allow dropping the extension if any objects, other than its member objects and extensions listed in the same `DROP` command, depend on it.
-**Example**
+### Example
The following command removes the extension from the existing database:
`DROP EXTENSION mysql_fdw;`
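+
+To also drop objects that depend on the extension (for example, foreign servers and user mappings created for the wrapper), you can add the `CASCADE` clause. A hedged variant of the same command:
+
+`DROP EXTENSION mysql_fdw CASCADE;`
+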
-For more information about using the foreign data wrapper `DROP EXTENSION` command, see:
-
- .
+For more information about using the foreign data wrapper `DROP EXTENSION` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropextension.html).
## DROP SERVER
@@ -429,37 +417,33 @@ Use the `DROP SERVER` command to remove a connection to a foreign server. The sy
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-The role that drops the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
+The role that drops the server is the owner of the server. Use the `ALTER SERVER` command to reassign ownership of a foreign server. To drop a foreign server, you must have `USAGE` privilege on the foreign data wrapper specified in the `DROP SERVER` command.
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a server with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if a server with the specified name doesn't exist.
`name`
- Specify the name of the installed server. It is optional.
+ Specify the name of the installed server.
`CASCADE`
- Automatically drop objects that depend on the server. It should drop all the other dependent objects too.
+ Automatically drop objects that depend on the server, and in turn all objects that depend on those objects.
`RESTRICT`
- Do not allow to drop the server if any objects are dependent on it.
+ Don't allow dropping the server if any objects depend on it.
-**Example**
+### Example
The following command removes a foreign server named `mysql_server`:
`DROP SERVER mysql_server;`
-For more information about using the `DROP SERVER` command, see:
-
-
-
-## DROP USER MAPPING
+For more information about using the `DROP SERVER` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropserver.html).
+
+## DROP USER MAPPING
+
Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
@@ -467,7 +451,7 @@ Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgr
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
-**Parameters**
+### Parameters
`IF EXISTS`
@@ -481,15 +465,13 @@ DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC }
Specify the name of the server that defines a connection to the MySQL cluster.
-**Example**
+### Example
-The following command drops a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `mysql_server`:
+The following command drops a user mapping for a role named `enterprisedb`. The mapping is associated with a server named `mysql_server`.
`DROP USER MAPPING FOR enterprisedb SERVER mysql_server;`
-For detailed information about the `DROP USER MAPPING` command, see:
-
-
+For detailed information about the `DROP USER MAPPING` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/sql-dropusermapping.html).
## DROP FOREIGN TABLE
@@ -499,11 +481,11 @@ A foreign table is a pointer to a table that resides on the MySQL host. Use the
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
-**Parameters**
+### Parameters
`IF EXISTS`
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the foreign table with the specified name doesn't exists.
+ Include the `IF EXISTS` clause to instruct the server to issue a notice instead of returning an error if the foreign table with the specified name doesn't exist.
`name`
@@ -511,18 +493,16 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
`CASCADE`
- Automatically drop objects that depend on the foreign table. It should drop all the other dependent objects too.
+ Automatically drop objects that depend on the foreign table, and in turn all objects that depend on those objects.
`RESTRICT`
- Do not allow to drop foreign table if any objects are dependent on it.
+ Don't allow dropping the foreign table if any objects depend on it.
-**Example**
+### Example
```text
DROP FOREIGN TABLE warehouse;
```
-For more information about using the `DROP FOREIGN TABLE` command, see:
-
-
+For more information about using the `DROP FOREIGN TABLE` command, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-dropforeigntable.html).
diff --git a/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
index 6c802b4334a..662deffda65 100644
--- a/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Example: Using the MySQL Foreign Data Wrapper"
-Access data from Advanced Server and connect to psql. Once you are connected to psql, follow the below steps:
+Access data from EDB Postgres Advanced Server and connect to psql. Once you're connected to psql, follow these steps:
```text
-- load extension first time after install
diff --git a/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx b/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
index 04384d09b1a..745f911e00a 100644
--- a/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
@@ -1,10 +1,10 @@
---
-title: "Example: Import Foreign Schema"
+title: "Example: Import foreign schema"
---
-Access data from Advanced Server and connect to psql. Once you are connected to psql, follow the below steps:
+Access data from EDB Postgres Advanced Server and connect to psql. Once you're connected to psql, follow these steps:
```text
-- load extension first time after install
diff --git a/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
index 09b71a31101..b36d2231904 100644
--- a/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
@@ -1,9 +1,9 @@
---
-title: "Example: Join Pushdown"
+title: "Example: Join pushdown"
---
-The following example shows join pushdown between two foreign tables 'warehouse' and 'sales_records':
+This example shows join pushdown between two foreign tables: `warehouse` and `sales_records`.
Table on MySQL server:
diff --git a/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
index 09ec1f19627..3189610b954 100644
--- a/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
@@ -1,14 +1,14 @@
---
-title: "Example: Aggregate Function Pushdown"
+title: "Example: Aggregate function pushdown"
---
MySQL Foreign Data Wrapper supports pushdown for the following aggregate functions:
-- AVG – calculates the average of a set of values
-- COUNT – counts rows in a specified table or view
-- MIN – gets the minimum value in a set of values
-- MAX – gets the maximum value in a set of values
-- SUM – calculates the sum of values
+- AVG — Calculates the average of a set of values.
+- COUNT — Counts rows in a specified table or view.
+- MIN — Gets the minimum value in a set of values.
+- MAX — Gets the maximum value in a set of values.
+- SUM — Calculates the sum of values.
Table on MySQL server:
diff --git a/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx b/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
index 668f438a89f..aa2d40f5b8a 100644
--- a/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
@@ -1,5 +1,5 @@
---
-title: "Identifying the MySQL Foreign Data Wrapper Version"
+title: "Identifying the MySQL Foreign Data Wrapper version"
---
diff --git a/product_docs/docs/mysql_data_adapter/2/12_uninstalling_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/12_uninstalling_the_mysql_data_adapter.mdx
index 350212dd5ac..1a3d8bf56cb 100644
--- a/product_docs/docs/mysql_data_adapter/2/12_uninstalling_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/12_uninstalling_the_mysql_data_adapter.mdx
@@ -4,15 +4,15 @@ title: "Uninstalling the MySQL Foreign Data Wrapper"
-You can use the `remove` command to uninstall MongoDB Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command applicable to the operating system and package manager used for the installation:
+You can use the `remove` command to uninstall MySQL Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command that applies to the operating system and package manager used for the installation:
- On RHEL or CentOS 7:
`yum remove edb-as-mysql_fdw`
Where:
- - `xx` is the EDB Postgres Advanced Server version.
- - `x` is the supported release version number of MySQL.
+ - `` is the EDB Postgres Advanced Server version.
+ - `` is the supported release version number of MySQL.
- On RHEL or Rocky Linux or AlmaLinux 8:
`dnf remove edb-as-mysql8_fdw`
@@ -23,14 +23,10 @@ You can use the `remove` command to uninstall MongoDB Foreign Data Wrapper packa
`zypper remove edb-as-mysql_fdw`
Where:
- - `xx` is the EDB Postgres Advanced Server version.
- - `x` is the supported release version number of MySQL.
+ - `` is the EDB Postgres Advanced Server version.
+ - `` is the supported release version number of MySQL.
- On Debian or Ubuntu
`apt-get remove edb-as-mysql-fdw`
Where `` is the EDB Postgres Advanced Server version.
-
-
-
-
diff --git a/product_docs/docs/mysql_data_adapter/2/13_troubleshooting.mdx b/product_docs/docs/mysql_data_adapter/2/13_troubleshooting.mdx
index 2b4ba0109ea..b439515c6ca 100644
--- a/product_docs/docs/mysql_data_adapter/2/13_troubleshooting.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/13_troubleshooting.mdx
@@ -2,17 +2,17 @@
title: "Troubleshooting"
---
-In case you are experiencing issues with using MySQL 8 and MySQL_FDW, below is a list of solutions to some frequently seen issues:
+If you're experiencing issues using MySQL 8 and MySQL_FDW, these are solutions to some frequently seen issues:
-**Authentication plugin ‘caching_sha2_password’ Error**
+## Authentication plugin ‘caching_sha2_password’ error
```text
ERROR: failed to connect to MySQL: Authentication plugin ‘caching_sha2_password’ cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory
```
-Specify the authentication plugin as `mysql_native_password` and set a cleartext password value. The syntax:
+Specify the authentication plugin as `mysql_native_password` and set a cleartext password value. The syntax is:
-> `ALTER USER 'username'@'host' IDENTIFIED WITH mysql_native_password BY '';`
+`ALTER USER 'username'@'host' IDENTIFIED WITH mysql_native_password BY '';`
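+
+For example, for a hypothetical MySQL user `foo` that connects from any host (the user name and password shown here are placeholders):
+
+`ALTER USER 'foo'@'%' IDENTIFIED WITH mysql_native_password BY 'some_password';`
+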
!!! Note
- Refer to [MySQL 8 documentation](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) for more details on the above error.
+ See the [MySQL 8 documentation](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) for more details on this error.
diff --git a/product_docs/docs/mysql_data_adapter/2/index.mdx b/product_docs/docs/mysql_data_adapter/2/index.mdx
index 946b97e0933..29aefbf1251 100644
--- a/product_docs/docs/mysql_data_adapter/2/index.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/index.mdx
@@ -18,11 +18,9 @@ navigation:
---
-The MySQL Foreign Data Wrapper (`mysql_fdw`) is a Postgres extension that allows you to access data that resides on a MySQL database from EDB Postgres Advanced Server. It is a writable foreign data wrapper that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
+The MySQL Foreign Data Wrapper (`mysql_fdw`) is a Postgres extension that lets you access data that resides on a MySQL database from EDB Postgres Advanced Server. It's a writable foreign data wrapper that you can use with Postgres functions and utilities or with other data that resides on a Postgres host.
-The MySQL Foreign Data Wrapper can be installed with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-
-This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
+You can install the MySQL Foreign Data Wrapper with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
diff --git a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx
index 8f33cef04f6..d46f34e9352 100644
--- a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx
@@ -10,7 +10,7 @@ navigation:
- mysql2.5.1_rel_notes
---
-The MySQL Foreign Data Wrapper documentation describes the latest version of MySQL Foreign Data Wrapper 5 including minor releases and patches. The release notes in this section provide information on what was new in each release. For new functionality introduced in a minor or patch release, there are also indicators within the content about what release introduced the feature.
+The MySQL Foreign Data Wrapper documentation describes the latest version of MySQL Foreign Data Wrapper 5, including minor releases and patches. The release notes provide information on what was new in each release. For new functionality introduced in a minor or patch release, the content also includes indicators about the release that introduced the feature.
| Version | Release Date |
| ----------------------------- | ------------ |
diff --git a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.7.0_rel_notes.mdx b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.7.0_rel_notes.mdx
index 6bd8a54b52e..4c18eb867dd 100644
--- a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.7.0_rel_notes.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.7.0_rel_notes.mdx
@@ -11,6 +11,6 @@ Enhancements, bug fixes, and other changes in MySQL Foreign Data Wrapper 2.7.0 i
| ---- |------------ |
| Enhancement | Support for EDB Postgres Advanced Server 14. |
| Enhancement | MySQL Foreign Data Wrapper now supports min, max, sum, avg, and count aggregate function pushdown. You can now push aggregates to the remote MySQL server instead of fetching all of the rows and aggregating them locally. This improves performance for the cases where aggregates can be pushed down. Aggregate filters and orders are not pushed down as MySQL does not support them. |
-| Bug Fix | Function expression deparsing implicit/explicit coercion sending wrong query to the MySQL server. |
-| Bug Fix | Import foreign schema failure due to SET type. |
+| Bug fix | Function expression deparsing implicit/explicit coercion sending wrong query to the MySQL server. |
+| Bug fix | Import foreign schema failure due to SET type. |
From d32a7899745245cc18c22c52d6d5744b6ae7ed2e Mon Sep 17 00:00:00 2001
From: cnp-autobot
Date: Fri, 25 Mar 2022 18:10:16 +0000
Subject: [PATCH 16/16] [create-pull-request] automated change
---
.../cloud_native_postgresql/api_reference.mdx | 95 +++++++++++--------
.../cloud_native_postgresql/architecture.mdx | 2 +-
.../backup_recovery.mdx | 68 ++++++++++++-
.../cloud_native_postgresql/bootstrap.mdx | 14 +--
.../cloud_native_postgresql/cnp-plugin.mdx | 39 +++++++-
.../connection_pooling.mdx | 45 +++++++++
.../container_images.mdx | 2 +-
.../cloud_native_postgresql/evaluation.mdx | 2 +-
.../cloud_native_postgresql/failover.mdx | 79 +++++++++++++++
.../cloud_native_postgresql/failure_modes.mdx | 8 +-
.../cloud_native_postgresql/index.mdx | 13 +--
.../installation_upgrade.mdx | 4 +-
.../instance_manager.mdx | 7 +-
.../cloud_native_postgresql/license_keys.mdx | 6 +-
.../cloud_native_postgresql/logging.mdx | 2 +-
.../cloud_native_postgresql/monitoring.mdx | 2 +-
.../operator_capability_levels.mdx | 13 ++-
.../postgresql_conf.mdx | 53 ++++++++---
.../cloud_native_postgresql/release_notes.mdx | 53 ++++++++++-
.../samples/cluster-backup-aws-inherit.yaml | 15 +++
.../samples/cluster-example-full.yaml | 2 +-
.../cloud_native_postgresql/scheduling.mdx | 2 +-
.../ssl_connections.mdx | 2 +-
.../troubleshooting.mdx | 24 ++++-
24 files changed, 452 insertions(+), 100 deletions(-)
create mode 100644 advocacy_docs/kubernetes/cloud_native_postgresql/failover.mdx
create mode 100644 advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-backup-aws-inherit.yaml
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
index b7bfebb6db8..ea0909e5a26 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/api_reference.mdx
@@ -48,6 +48,7 @@ Below you will find a description of the defined resources:
- [EPASConfiguration](#EPASConfiguration)
- [EmbeddedObjectMetadata](#EmbeddedObjectMetadata)
- [ExternalCluster](#ExternalCluster)
+- [GoogleCredentials](#GoogleCredentials)
- [InstanceID](#InstanceID)
- [LocalObjectReference](#LocalObjectReference)
- [MonitoringConfiguration](#MonitoringConfiguration)
@@ -169,27 +170,28 @@ BackupSpec defines the desired state of Backup
BackupStatus defines the observed state of Backup
-| Name | Description | Type |
-| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
-| `s3Credentials ` | The credentials to be used to upload data to S3 | [\*S3Credentials](#S3Credentials) |
-| `azureCredentials` | The credentials to be used to upload data to Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
-| `endpointCA ` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive. | [\*SecretKeySelector](#SecretKeySelector) |
-| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
-| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
-| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
-| `encryption ` | Encryption method required to S3 API | string |
-| `backupId ` | The ID of the Barman backup | string |
-| `phase ` | The last backup status | BackupPhase |
-| `startedAt ` | When the backup was started | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
-| `stoppedAt ` | When the backup was terminated | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
-| `beginWal ` | The starting WAL | string |
-| `endWal ` | The ending WAL | string |
-| `beginLSN ` | The starting xlog | string |
-| `endLSN ` | The ending xlog | string |
-| `error ` | The detected error | string |
-| `commandOutput ` | Unused. Retained for compatibility with old versions. | string |
-| `commandError ` | The backup command output in case of error | string |
-| `instanceID ` | Information to identify the instance where the backup has been taken from | [\*InstanceID](#InstanceID) |
+| Name | Description | Type |
+| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
+| `s3Credentials ` | The credentials to be used to upload data to S3 | [\*S3Credentials](#S3Credentials) |
+| `azureCredentials ` | The credentials to be used to upload data to Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
+| `googleCredentials` | The credentials to use to upload data to Google Cloud Storage | [\*GoogleCredentials](#GoogleCredentials) |
+| `endpointCA ` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive. | [\*SecretKeySelector](#SecretKeySelector) |
+| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
+| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
+| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
+| `encryption ` | Encryption method required to S3 API | string |
+| `backupId ` | The ID of the Barman backup | string |
+| `phase ` | The last backup status | BackupPhase |
+| `startedAt ` | When the backup was started | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
+| `stoppedAt ` | When the backup was terminated | [\*metav1.Time](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#time-v1-meta) |
+| `beginWal ` | The starting WAL | string |
+| `endWal ` | The ending WAL | string |
+| `beginLSN ` | The starting xlog | string |
+| `endLSN ` | The ending xlog | string |
+| `error ` | The detected error | string |
+| `commandOutput ` | Unused. Retained for compatibility with old versions. | string |
+| `commandError ` | The backup command output in case of error | string |
+| `instanceID ` | Information to identify the instance where the backup has been taken from | [\*InstanceID](#InstanceID) |
@@ -197,18 +199,19 @@ BackupStatus defines the observed state of Backup
BarmanObjectStoreConfiguration contains the backup configuration using Barman against an S3-compatible object storage
-| Name | Description | Type |
-| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
-| `s3Credentials ` | The credentials to use to upload data to S3 | [\*S3Credentials](#S3Credentials) |
-| `azureCredentials` | The credentials to use to upload data in Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
-| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
-| `endpointCA ` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive | [\*SecretKeySelector](#SecretKeySelector) |
-| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
-| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
-| `wal ` | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*WalBackupConfiguration](#WalBackupConfiguration) |
-| `data ` | The configuration to be used to backup the data files When not defined, base backups files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*DataBackupConfiguration](#DataBackupConfiguration) |
-| `tags ` | Tags is a list of key value pairs that will be passed to the Barman --tags option. | map[string]string |
-| `historyTags ` | HistoryTags is a list of key value pairs that will be passed to the Barman --history-tags option. | map[string]string |
+| Name | Description | Type |
+| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------- |
+| `s3Credentials ` | The credentials to use to upload data to S3 | [\*S3Credentials](#S3Credentials) |
+| `azureCredentials ` | The credentials to use to upload data to Azure Blob Storage | [\*AzureCredentials](#AzureCredentials) |
+| `googleCredentials` | The credentials to use to upload data to Google Cloud Storage | [\*GoogleCredentials](#GoogleCredentials) |
+| `endpointURL ` | Endpoint to be used to upload data to the cloud, overriding the automatic endpoint discovery | string |
+| `endpointCA ` | EndpointCA store the CA bundle of the barman endpoint. Useful when using self-signed certificates to avoid errors with certificate issuer and barman-cloud-wal-archive | [\*SecretKeySelector](#SecretKeySelector) |
+| `destinationPath ` | The path where to store the backup (i.e. s3://bucket/path/to/folder) this path, with different destination folders, will be used for WALs and for data - *mandatory* | string |
+| `serverName ` | The server name on S3, the cluster name is used if this parameter is omitted | string |
+| `wal ` | The configuration for the backup of the WAL stream. When not defined, WAL files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*WalBackupConfiguration](#WalBackupConfiguration) |
+| `data ` | The configuration to be used to backup the data files When not defined, base backups files will be stored uncompressed and may be unencrypted in the object store, according to the bucket default policy. | [\*DataBackupConfiguration](#DataBackupConfiguration) |
+| `tags ` | Tags is a list of key value pairs that will be passed to the Barman --tags option. | map[string]string |
+| `historyTags ` | HistoryTags is a list of key value pairs that will be passed to the Barman --history-tags option. | map[string]string |
@@ -474,6 +477,17 @@ ExternalCluster represents the connection parameters to an external cluster whic
| `password ` | The reference to the password to be used to connect to the server | [\*corev1.SecretKeySelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#secretkeyselector-v1-core) |
| `barmanObjectStore ` | The configuration for the barman-cloud tool suite | [\*BarmanObjectStoreConfiguration](#BarmanObjectStoreConfiguration) |
+
+
+## GoogleCredentials
+
+GoogleCredentials is the type for the Google Cloud Storage credentials. This needs to be specified even when running inside a GKE environment.
+
+| Name | Description | Type |
+| ------------------------ | -------------------------------------------------------------------------------------------------------- | ----------------------------------------- |
+| `gkeEnvironment ` | If set to true, the operator presumes that it's running inside a GKE environment. Defaults to false. - *mandatory* | bool |
+| `applicationCredentials` | The secret containing the Google Cloud Storage JSON file with the credentials | [\*SecretKeySelector](#SecretKeySelector) |
+
## InstanceID
@@ -708,12 +722,17 @@ RollingUpdateStatus contains the information about an instance which is being up
## S3Credentials
-S3Credentials is the type for the credentials to be used to upload files to S3
+S3Credentials is the type for the credentials to be used to upload files to S3. It can be provided in two alternative ways:
+
+- explicitly passing accessKeyId and secretAccessKey
+
+- inheriting the role from the pod environment by setting inheritFromIAMRole to true
-| Name | Description | Type |
-| ----------------- | ---------------------------------------------------- | --------------------------------------- |
-| `accessKeyId ` | The reference to the access key id - *mandatory* | [SecretKeySelector](#SecretKeySelector) |
-| `secretAccessKey` | The reference to the secret access key - *mandatory* | [SecretKeySelector](#SecretKeySelector) |
+| Name | Description | Type |
+| -------------------- | -------------------------------------------------------------------------------------- | ----------------------------------------- |
+| `accessKeyId ` | The reference to the access key id | [\*SecretKeySelector](#SecretKeySelector) |
+| `secretAccessKey ` | The reference to the secret access key | [\*SecretKeySelector](#SecretKeySelector) |
+| `inheritFromIAMRole` | Use role-based authentication without explicitly providing the keys. - *mandatory* | bool |
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
index f59184d9258..67a6ef38cd3 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/architecture.mdx
@@ -92,7 +92,7 @@ only write inside a single Kubernetes cluster, at any time.
!!! Tip
If you are interested in a PostgreSQL architecture where all instances accept writes,
please take a look at [BDR (Bi-Directional Replication) by EDB](https://www.enterprisedb.com/docs/bdr/latest/).
- For Kubernetes, BDR will have its own Operator, expected late in 2021.
+ For Kubernetes, BDR will have its own Operator, expected later in 2022.
However, for business continuity objectives it is fundamental to:
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/backup_recovery.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/backup_recovery.mdx
index a1e4d7cb6ae..a7837bb2e37 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/backup_recovery.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/backup_recovery.mdx
@@ -35,7 +35,8 @@ You can archive the backup files in any service that is supported
by the Barman Cloud infrastructure. That is:
- [AWS S3](https://aws.amazon.com/s3/)
-- [Microsoft Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/).
+- [Microsoft Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
+- [Google Cloud Storage](https://cloud.google.com/storage/)
You can also use any compatible implementation of the
supported services.
@@ -318,6 +319,71 @@ In that case, `` is the first component of the path.
This is required if you are testing the Azure support via the Azure Storage
Emulator or [Azurite](https://github.com/Azure/Azurite).
+### Google Cloud Storage
+
+Currently, the operator supports two authentication methods for Google Cloud Storage:
+one assumes the pod is running inside a Google Kubernetes Engine cluster, while the other one leverages
+the environment variable `GOOGLE_APPLICATION_CREDENTIALS`.
+
+#### Running inside Google Kubernetes Engine
+
+This could be one of the easiest ways to create a backup, and it only requires
+the following configuration:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: "gs://"
+ googleCredentials:
+ gkeEnvironment: true
+```
+
+This tells the operator that the cluster is running inside a Google Kubernetes
+Engine cluster, meaning that no credentials are needed to upload the files.
+
+!!! Important
+    This method requires carefully configured permissions for the cluster
+    and pods, which have to be defined by a cluster administrator.
+
+#### Using authentication
+
+Following the [instructions from Google](https://cloud.google.com/docs/authentication/getting-started),
+you will get a JSON file that contains all the required information to authenticate.
+
+The content of the JSON file must be provided using a `Secret` that can be created
+with the following command:
+
+```shell
+kubectl create secret generic backup-creds --from-file=gcsCredentials=gcs_credentials_file.json
+```
+
+This will create the `Secret` named `backup-creds`, which you can then reference in the YAML file like this:
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+ backup:
+ barmanObjectStore:
+ destinationPath: "gs://"
+ googleCredentials:
+ applicationCredentials:
+ name: backup-creds
+ key: gcsCredentials
+```
+
+Now the operator will use the credentials to authenticate against Google Cloud Storage.
+
+!!! Important
+    This authentication method creates a JSON file inside the container with all the information needed
+    to access your Google Cloud Storage bucket, meaning that anyone who gains access to the pod
+    will also have write permissions to the bucket.
+
## On-demand backups
To request a new backup, you need to create a new Backup resource
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
index eac2c48ba8e..14f941ae369 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/bootstrap.mdx
@@ -77,8 +77,8 @@ method or the `recovery` one. An external cluster needs to have:
cluster - that is, base backups and WAL archives.
!!! Note
- A recovery object store is normally an AWS S3 or an Azure Blob Storage
- compatible source that is managed by Barman Cloud.
+ A recovery object store is normally an AWS S3, or an Azure Blob Storage,
+ or a Google Cloud Storage source that is managed by Barman Cloud.
When only the streaming connection is defined, the source can be used for the
`pg_basebackup` method. When only the recovery object store is defined, the
@@ -678,7 +678,7 @@ file on the source PostgreSQL instance:
host replication streaming_replica all md5
```
-The following manifest creates a new PostgreSQL 14.1 cluster,
+The following manifest creates a new PostgreSQL 14.2 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
@@ -693,7 +693,7 @@ metadata:
name: target-db
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:14.1
+ imageName: quay.io/enterprisedb/postgresql:14.2
bootstrap:
pg_basebackup:
@@ -713,7 +713,7 @@ spec:
```
All the requirements must be met for the clone operation to work, including
-the same PostgreSQL version (in our case 14.1).
+the same PostgreSQL version (in our case 14.2).
#### TLS certificate authentication
@@ -728,7 +728,7 @@ in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.
-The manifest defines a new PostgreSQL 14.1 cluster called `cluster-clone-tls`,
+The manifest defines a new PostgreSQL 14.2 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
@@ -743,7 +743,7 @@ metadata:
name: cluster-clone-tls
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:14.1
+ imageName: quay.io/enterprisedb/postgresql:14.2
bootstrap:
pg_basebackup:
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx
index 26d43d758be..2b53336109b 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/cnp-plugin.mdx
@@ -77,7 +77,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
+PostgreSQL Image: quay.io/enterprisedb/postgresql:14.2
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -121,7 +121,7 @@ Cluster in healthy state
Name: sandbox
Namespace: default
System ID: 7039966298120953877
-PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1
+PostgreSQL Image: quay.io/enterprisedb/postgresql:14.2
Primary instance: sandbox-2
Instances: 3
Ready instances: 3
@@ -277,4 +277,39 @@ The following command will reload all configurations for a given cluster:
```shell
kubectl cnp reload [cluster_name]
+```
+
+### Maintenance
+
+The `kubectl cnp maintenance` command helps you modify one or more clusters across namespaces
+and set the maintenance window values. It changes the following fields:
+
+- .spec.nodeMaintenanceWindow.inProgress
+- .spec.nodeMaintenanceWindow.reusePVC
+
+The command accepts `set` or `unset` as an argument, setting `inProgress` to `true` for `set`
+and to `false` for `unset`.
+
+By default, `reusePVC` is set to `false` unless the `--reusePVC` flag is passed.
+
+The plugin asks for confirmation, showing the list of clusters to modify and their new values.
+If you accept, the change is applied to all the clusters in the list.
+
+If you want to put all the PostgreSQL clusters in your Kubernetes cluster into maintenance, you just need to
+run the following command:
+
+```shell
+kubectl cnp maintenance set --all-namespaces
+```
+
+You'll then see the list of all the clusters to update:
+
+```shell
+The following are the new values for the clusters
+Namespace Cluster Name Maintenance reusePVC
+--------- ------------ ----------- --------
+default cluster-example true false
+default pg-backup true false
+test cluster-example true false
+Do you want to proceed? [y/n]: y
```
\ No newline at end of file
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/connection_pooling.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/connection_pooling.mdx
index 911457f44eb..aefd4baee1e 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/connection_pooling.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/connection_pooling.mdx
@@ -104,6 +104,24 @@ authentication (see the ["Authentication" section](#authentication) below).
Containers run as the `pgbouncer` system user, and access to the `pgbouncer`
database is only allowed via local connections, through `peer` authentication.
+### Certificates
+
+By default, the PgBouncer pooler uses the same certificates that are used by the
+cluster itself. However, if you provide your own certificates, the pooler accepts
+secrets in the following formats:
+
+1. Basic Auth
+2. TLS
+3. Opaque
+
+In the Opaque case, the pooler looks for the following specific keys:
+
+- tls.crt
+- tls.key
+
+So we can treat this secret as a TLS secret, and start from there.
+
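+As an illustration, a TLS secret with exactly those keys (`tls.crt` and `tls.key`) can be created with `kubectl`. This is a minimal sketch where the secret name `pooler-cert` and the certificate and key file names are placeholders:
+
+```shell
+kubectl create secret tls pooler-cert \
+  --cert=server.crt \
+  --key=server.key
+```
+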
## Authentication
**Password based authentication** is the only supported method for clients of
@@ -121,6 +139,33 @@ Internally, our implementation relies on PgBouncer's `auth_user` and `auth_query
- removes all the above when it detects that a cluster does not have
any pooler associated to it
+!!! Important
+    If you specify your own secrets, the operator will not automatically integrate the Pooler.
+
+If you have specified your own secrets, you must run the following queries from inside your cluster to manually integrate the Pooler.
+
+1. Create the role:
+
+```sql
+CREATE ROLE cnp_pooler_pgbouncer WITH LOGIN;
+```
+
+2. For each application database, grant the permission for `cnp_pooler_pgbouncer` to connect to it:
+
+ ```sql
+ GRANT CONNECT ON DATABASE { database name here } TO cnp_pooler_pgbouncer;
+ ```
+
+3. Connect in each application database, then create the authentication function inside each of the application databases:
+
+ ```sql
+ CREATE OR REPLACE FUNCTION user_search(uname TEXT) RETURNS TABLE (usename name, passwd text) as 'SELECT usename, passwd FROM pg_shadow WHERE usename=$1;' LANGUAGE sql SECURITY DEFINER;
+
+ REVOKE ALL ON FUNCTION user_search(text) FROM public;
+
+ GRANT EXECUTE ON FUNCTION user_search(text) TO cnp_pooler_pgbouncer;
+ ```
+
## PodTemplates
You can take advantage of pod templates specification in the `template`
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/container_images.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/container_images.mdx
index 0a02044ecf7..34b8ba00b4a 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/container_images.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/container_images.mdx
@@ -33,7 +33,7 @@ Native PostgreSQL overrides it with its instance manager.
in a **Primary with multiple/optional Hot Standby Servers Architecture**
only.
-EnterpriseDB provides and supports public container images for Cloud Native
+EDB provides and supports public container images for Cloud Native
PostgreSQL and publishes them on
[Quay.io](https://quay.io/repository/enterprisedb/postgresql).
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/evaluation.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/evaluation.mdx
index 7b8dd831405..3d02b16a7ba 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/evaluation.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/evaluation.mdx
@@ -27,7 +27,7 @@ PostgreSQL container images are available at
You can use Cloud Native PostgreSQL with EDB Postgres Advanced
too. You need to request a trial license key from the
-[EnterpriseDB website](https://cloud-native.enterprisedb.com).
+[EDB website](https://cloud-native.enterprisedb.com).
EDB Postgres Advanced container images are available at
[quay.io/enterprisedb/edb-postgres-advanced](https://quay.io/repository/enterprisedb/edb-postgres-advanced).
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/failover.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/failover.mdx
new file mode 100644
index 00000000000..4e6dc7d599f
--- /dev/null
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/failover.mdx
@@ -0,0 +1,79 @@
+---
+title: 'Automated failover'
+originalFilePath: 'src/failover.md'
+product: 'Cloud Native Operator'
+---
+
+In the case of unexpected errors on the primary, the cluster will go into
+**failover mode**. This may happen, for example, when:
+
+- The primary pod has a disk failure
+- The primary pod is deleted
+- The `postgres` container on the primary has any kind of sustained failure
+
+In the failover scenario, the primary cannot be assumed to be working properly.
+
+After cases like the ones above, the readiness probe for the primary pod will start
+failing. This will be picked up in the controller's reconciliation loop. The
+controller will initiate the failover process, in two steps:
+
+1. First, it will mark the `TargetPrimary` as `pending`. This change of state will
+   force the primary pod to shut down, to ensure the WAL receivers on the replicas
+ will stop. The cluster will be marked in failover phase ("Failing over").
+2. Once all WAL receivers are stopped, there will be a leader election, and a
+ new primary will be named. The chosen instance will initiate promotion to
+ primary, and, after this is completed, the cluster will resume normal operations.
+ Meanwhile, the former primary pod will restart, detect that it is no longer
+ the primary, and become a replica node.
+
+!!! Important
+ The two-phase procedure helps ensure the WAL receivers can stop in an orderly
+ fashion, and that the failing primary will not start streaming WALs again upon
+ restart. These safeguards prevent timeline discrepancies between the new primary
+ and the replicas.
+
+During the time the failing primary is being shut down:
+
+1. It will first try a PostgreSQL *fast shutdown* with
+ `.spec.switchoverDelay` seconds as timeout. This graceful shutdown will attempt
+ to archive pending WALs.
+2. If the fast shutdown fails, or its timeout is exceeded, a PostgreSQL
+ *immediate shutdown* is initiated.
+
+!!! Info
+ "Fast" mode does not wait for PostgreSQL clients to disconnect and will
+ terminate an online backup in progress. All active transactions are rolled back
+ and clients are forcibly disconnected, then the server is shut down.
+ "Immediate" mode will abort all PostgreSQL server processes immediately,
+ without a clean shutdown.
+
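+As a reference, here is a minimal sketch showing where `.spec.switchoverDelay` is set in a `Cluster` manifest (the value of 300 seconds is purely illustrative):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+[...]
+spec:
+  instances: 3
+  # Seconds allowed for the fast shutdown attempt before an immediate shutdown is issued
+  switchoverDelay: 300
+```
+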
+## RTO and RPO impact
+
+Failover may result in the service being impacted and/or data being lost:
+
+1. During the time when the primary has started to fail, and before the controller
+ starts failover procedures, queries in transit, WAL writes, checkpoints and
+ similar operations, may fail.
+2. Once the fast shutdown command has been issued, the cluster will no longer
+ accept connections, so service will be impacted but no data
+ will be lost.
+3. If the fast shutdown fails, the immediate shutdown will stop any pending
+ processes, including WAL writing. Data may be lost.
+4. During the time the primary is shutting down and a new primary hasn't yet
+ started, the cluster will operate without a primary and thus be impaired - but
+ with no data loss.
+
+!!! Note
+ The timeout that controls fast shutdown is set by `.spec.switchoverDelay`,
+ as in the case of a switchover. Increasing the time for fast shutdown is safer
+ from an RPO point of view, but possibly delays the return to normal operation -
+ negatively affecting RTO.
+
+!!! Warning
+ As already mentioned in the ["Instance Manager" section](instance_manager.md)
+ when explaining the switchover process, the `.spec.switchoverDelay` option
+    affects the RPO and RTO of your PostgreSQL database. Setting it to a low value
+    might favor RTO over RPO but lead to data loss at cluster level and/or backup
+    level. On the contrary, setting it to a high value might remove the risk of
+ data loss while leaving the cluster without an active primary for a longer time
+ during the switchover.
\ No newline at end of file
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/failure_modes.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/failure_modes.mdx
index 72a584f508c..e483421152f 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/failure_modes.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/failure_modes.mdx
@@ -9,7 +9,7 @@ PostgreSQL can face on a Kubernetes cluster during its lifetime.
!!! Important
In case the failure scenario you are experiencing is not covered by this
- section, please immediately contact EnterpriseDB for support and assistance.
+ section, please immediately contact EDB for support and assistance.
!!! Seealso "Postgres instance manager"
Please refer to the ["Postgres instance manager" section](instance_manager.md)
@@ -125,9 +125,9 @@ pod will be created from a backup of the current primary. The pod
will be added again to the `-r` service and to the `-ro` service when ready.
If the failed pod is the primary, the operator will promote the active pod
-with status ready and the lowest replication lag, then point the `-rw`service
+with status ready and the lowest replication lag, then point the `-rw` service
to it. The failed pod will be removed from the `-r` service and from the
-`-ro` service.
+`-rw` service.
Other standbys will start replicating from the new primary. The former
primary will use `pg_rewind` to synchronize itself with the new one if its
PVC is available; otherwise, a new standby will be created from a backup of the
@@ -140,7 +140,7 @@ to solve the problem manually.
!!! Important
In such cases, please do not perform any manual operation without the
- support and assistance of EnterpriseDB engineering team.
+ support and assistance of EDB engineering team.
From version 1.11.0 of the operator, you can use the
`k8s.enterprisedb.io/reconciliationLoop` annotation to temporarily disable the
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx
index 89fec9c2bab..e7c6c7434fe 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/index.mdx
@@ -40,6 +40,7 @@ navigation:
- expose_pg_services
- cnp-plugin
- openshift
+ - failover
- troubleshooting
- e2e
- license_keys
@@ -50,7 +51,7 @@ navigation:
---
**Cloud Native PostgreSQL** is an [operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/)
-designed by [EnterpriseDB](https://www.enterprisedb.com)
+designed by [EDB](https://www.enterprisedb.com)
to manage [PostgreSQL](https://www.postgresql.org/) workloads on any supported [Kubernetes](https://kubernetes.io)
cluster running in private, public, hybrid, or multi-cloud environments.
Cloud Native PostgreSQL adheres to DevOps principles and concepts
@@ -65,16 +66,16 @@ Applications that reside in the same Kubernetes cluster can access the
PostgreSQL database using a service which is solely managed by the operator,
without having to worry about changes of the primary role following a failover
or a switchover. Applications that reside outside the Kubernetes cluster, need
-to configure an Ingress object to expose the service via TCP.
+to configure a Service or Ingress object to expose Postgres via TCP.
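+For example, a plain `LoadBalancer` Service that targets the primary instance
+can be enough. The following is only a sketch: the selector labels are
+placeholders and must match the labels of the primary pod in your cluster.
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: cluster-example-external-rw
+spec:
+  type: LoadBalancer
+  selector:
+    # placeholder labels: adjust them to the pods of your cluster
+    postgresql: cluster-example
+    role: primary
+  ports:
+    - protocol: TCP
+      port: 5432
+      targetPort: 5432
+```
+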
Web applications can take advantage of the native connection pooler based on PgBouncer.
Cloud Native PostgreSQL works with PostgreSQL and [EDB Postgres Advanced](https://www.enterprisedb.com/products/edb-postgres-advanced-server-secure-ha-oracle-compatible)
-and is available under the [EnterpriseDB Limited Use License](https://www.enterprisedb.com/limited-use-license).
+and is available under the [EDB Limited Use License](https://www.enterprisedb.com/limited-use-license).
You can [evaluate Cloud Native PostgreSQL for free](evaluation.md).
You need a valid license key to use Cloud Native PostgreSQL in production.
-!!! Important
+!!! Note
Based on the [Operator Capability Levels model](operator_capability_levels.md),
users can expect a **"Level V - Auto Pilot"** set of capabilities from the
Cloud Native PostgreSQL Operator.
@@ -141,9 +142,9 @@ on OpenShift only.
- In-place or rolling updates for operator upgrades
- TLS connections and client certificate authentication
- Support for custom TLS certificates (including integration with cert-manager)
-- Continuous backup to an S3 compatible object store
+- Continuous backup to an object store (AWS S3 and S3-compatible, Azure Blob Storage, and Google Cloud Storage)
- Backup retention policies (based on recovery window)
-- Full recovery and Point-In-Time recovery from an S3 compatible object store backup
+- Full recovery and Point-In-Time recovery from an existing backup in an object store
- Replica clusters for PostgreSQL deployments across multiple Kubernetes
clusters, enabling private, public, hybrid, and multi-cloud architectures
- Support for Synchronous Replicas
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/installation_upgrade.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/installation_upgrade.mdx
index 58e7ae0ae09..eb69b489f5c 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/installation_upgrade.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/installation_upgrade.mdx
@@ -16,12 +16,12 @@ product: 'Cloud Native Operator'
The operator can be installed like any other resource in Kubernetes,
through a YAML manifest applied via `kubectl`.
-You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.13.0.yaml)
+You can install the [latest operator manifest](https://get.enterprisedb.io/cnp/postgresql-operator-1.14.0.yaml)
as follows:
```sh
kubectl apply -f \
- https://get.enterprisedb.io/cnp/postgresql-operator-1.13.0.yaml
+ https://get.enterprisedb.io/cnp/postgresql-operator-1.14.0.yaml
```
Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/instance_manager.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/instance_manager.mdx
index 0b35bb94fdd..9fdd9d31416 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/instance_manager.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/instance_manager.mdx
@@ -91,4 +91,9 @@ an infinite delay and therefore preserve data durability.
PostgreSQL database. Setting it to a low value, might favor RTO over RPO
but lead to data loss at cluster level and/or backup level. On the contrary,
setting it to a high value, might remove the risk of data loss while leaving
- the cluster without an active primary for a longer time during the switchover.
\ No newline at end of file
+ the cluster without an active primary for a longer time during the switchover.
+
+## Failover
+
+In case of primary pod failure, the cluster will go into failover mode.
+Please refer to the ["Failover" section](failover.md) for details.
\ No newline at end of file
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx
index d551138389a..ac95512fb33 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/license_keys.mdx
@@ -113,16 +113,16 @@ the expiration date or move the cluster to a production license.
## License key secret at cluster level
Each `Cluster` resource can also have a `licenseKeySecret` parameter, which contains
-the name and key of a secret. That secret contains the license key provided by EnterpriseDB.
+the name and key of a secret. That secret contains the license key provided by EDB.
This field will take precedence over `licenseKey`: it will be refreshed
when you change the secret, in order to extend the expiration date, or switching from a trial
license to a production license.
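+
+For example, a minimal sketch of a cluster definition referencing such a secret
+(the secret name and key below are placeholders):
+
+```yaml
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+  name: cluster-example
+spec:
+  instances: 3
+  licenseKeySecret:
+    # placeholder values: use the name of your secret and the key holding the license
+    name: license-key-secret
+    key: licenseKey
+  storage:
+    size: 1Gi
+```
+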
-Cloud Native PostgreSQL is distributed under the EnterpriseDB Limited Usage License
+Cloud Native PostgreSQL is distributed under the EDB Limited Usage License
Agreement, available at [enterprisedb.com/limited-use-license](https://www.enterprisedb.com/limited-use-license).
-Cloud Native PostgreSQL: Copyright (C) 2019-2021 EnterpriseDB.
+Cloud Native PostgreSQL: Copyright (C) 2019-2022 EnterpriseDB.
## What happens when a license expires
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/logging.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/logging.mdx
index 2cbfee740a9..0f097a52130 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/logging.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/logging.mdx
@@ -169,7 +169,7 @@ for more details about each field in a record.
## EDB Audit logs
-Clusters that are running on EnterpriseDB Postgres Advanced Server (EPAS)
+Clusters that are running on EDB Postgres Advanced Server (EPAS)
can enable [EDB Audit](https://www.enterprisedb.com/edb-docs/d/edb-postgres-advanced-server/user-guides/user-guide/12/EDB_Postgres_Advanced_Server_Guide.1.43.html) as follows:
```yaml
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx
index 8b6bc5974ef..93e7d608a6c 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/monitoring.mdx
@@ -465,7 +465,7 @@ Here is a short description of all the available fields:
- `primary`: whether to run the query only on the primary instance
- `master`: same as `primary` (for compatibility with the Prometheus PostgreSQL exporter's syntax - deprecated)
- `runonserver`: a semantic version range to limit the versions of PostgreSQL the query should run on
- (e.g. `">=10.0.0"` or `">=12.0.0 <=14.1.0"`)
+ (e.g. `">=10.0.0"` or `">=12.0.0 <=14.2.0"`)
- `target_databases`: a list of databases to run the `query` against,
or a [shell-like pattern](#example-of-a-user-defined-metric-running-on-multiple-databases)
to enable auto discovery. Overwrites the default database if provided.
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
index 6a161d2a51e..6b1b5328a54 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/operator_capability_levels.mdx
@@ -59,7 +59,7 @@ The operator is designed to support any operand container image with
PostgreSQL inside.
By default, the operator uses the latest available minor
version of the latest stable major version supported by the PostgreSQL
-Community and published on Quay.io by EnterpriseDB.
+Community and published on Quay.io by EDB.
You can use any compatible image of PostgreSQL supporting the
primary/standby architecture directly by setting the `imageName`
attribute in the CR. The operator also supports `imagePullSecrets`
@@ -213,10 +213,8 @@ replication connections from the standby servers, instead of relying on a passwo
The operator enables you to apply changes to the `Cluster` resource YAML
section of the PostgreSQL configuration and makes sure that all instances
are properly reloaded or restarted, depending on the configuration option.
-*Current limitations:* changes with `ALTER SYSTEM` are not detected, meaning
-that the cluster state is not enforced; proper restart order is not implemented
-with [hot standby sensitive parameters](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN)
-such as `max_connections` and `max_wal_senders`.
+*Current limitation:* changes with `ALTER SYSTEM` are not detected, meaning
+that the cluster state is not enforced.
### Multiple installation methods
@@ -297,8 +295,9 @@ failover and switchover operations. This area includes enhancements in:
The operator has been designed to provide application-level backups using
PostgreSQL’s native continuous backup technology based on
physical base backups and continuous WAL archiving. Specifically,
-the operator currently supports only backups on AWS S3 or S3-compatible
-object stores and gateways like MinIO.
+the operator currently supports only backups on object stores (AWS S3 and
+S3-compatible, Azure Blob Storage, Google Cloud Storage, and gateways like
+MinIO).
WAL archiving and base backups are defined at the cluster level, declaratively,
through the `backup` parameter in the cluster definition, by specifying
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
index 737f9d09e7c..a5ad6ac7d5a 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/postgresql_conf.mdx
@@ -186,6 +186,7 @@ As anticipated in the previous section, Cloud Native PostgreSQL automatically
manages the content in `shared_preload_libraries` for some well-known and
supported extensions. The current list includes:
+- `auto_explain`
- `pg_stat_statements`
- `pgaudit`
@@ -205,6 +206,29 @@ SELECT datname FROM pg_database WHERE datallowconn
!!! Note
The above query also includes template databases like `template1`.
+#### Enabling `auto_explain`
+
+The [`auto_explain`](https://www.postgresql.org/docs/current/auto-explain.html)
+extension provides a means for logging execution plans of slow statements
+automatically, without having to manually run `EXPLAIN` (helpful for tracking
+down un-optimized queries).
+
+You can enable `auto_explain` by adding to the configuration a parameter
+that starts with `auto_explain.` as in the following example excerpt (which
+automatically logs execution plans of queries that take longer than 10 seconds
+to complete):
+
+```yaml
+ # ...
+ postgresql:
+ parameters:
+ auto_explain.log_min_duration: "10s"
+ # ...
+```
+
+!!! Note
+    Enabling `auto_explain` can lead to performance issues. For more information, see the [`auto_explain` documentation](https://www.postgresql.org/docs/current/auto-explain.html).
+
#### Enabling `pg_stat_statements`
The [`pg_stat_statements`](https://www.postgresql.org/docs/current/pgstatstatements.html)
@@ -228,24 +252,25 @@ As explained previously, the operator will automatically add
NOT EXISTS pg_stat_statements` on each database, enabling you to run queries
against the `pg_stat_statements` view.
-#### Enabling `auto_explain`
+#### Enabling `pgaudit`
-The [`auto_explain`](https://www.postgresql.org/docs/current/auto-explain.html)
-extension provides a means for logging execution plans of slow statements
-automatically, without having to manually run `EXPLAIN` (helpful for tracking
-down un-optimized queries).
+The `pgaudit` extension provides detailed session and/or object audit logging via the standard PostgreSQL logging facility.
-You can enable `auto_explain` by adding to the configuration a parameter
-that starts with `auto_explain.` as in the following example excerpt (which
-automatically logs execution plans of queries that take longer than 10 seconds
-to complete):
+Cloud Native PostgreSQL has transparent and native support for
+[PGAudit](https://www.pgaudit.org/) on PostgreSQL clusters. For further information, please refer to the ["PGAudit logs" section](logging.md#pgaudit-logs).
+
+You can enable `pgaudit` by adding to the configuration a parameter
+that starts with `pgaudit.` as in the following example excerpt:
```yaml
- # ...
- postgresql:
- parameters:
- auto_explain.log_min_duration: "10s"
- # ...
+#
+postgresql:
+ parameters:
+ pgaudit.log: "all, -misc"
+ pgaudit.log_catalog: "off"
+ pgaudit.log_parameter: "on"
+ pgaudit.log_relation: "on"
+#
```
## The `pg_hba` section
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
index 281911951d0..3240e81c722 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/release_notes.mdx
@@ -6,6 +6,55 @@ product: 'Cloud Native Operator'
History of user-visible changes for Cloud Native PostgreSQL.
+## Version 1.14.0
+
+**Release date:** 25 March 2022
+
+Features:
+
+- Natively support Google Cloud Storage for backup and recovery, by taking
+ advantage of the features introduced in Barman Cloud 2.19
+- Improved observability of backups through the introduction of the
+ `LastBackupSucceeded` condition for the `Cluster` object
+- Support update of Hot Standby sensitive parameters: `max_connections`,
+ `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`,
+ `max_worker_processes`
+- Add the `Online upgrade in progress` phase in the `Cluster` object to show
+ when an online upgrade of the operator is in progress
+- Ability to inherit an AWS IAM Role as an alternative way to provide
+ credentials for the S3 object storage
+- Support for Opaque secrets for Pooler’s authQuerySecret and certificates
+- Updated default PostgreSQL version to 14.2
+- Add a new `maintenance` command to the `kubectl cnp` plugin to set a
+  maintenance window for cluster(s) in one or all namespaces across the
+  Kubernetes cluster
+
+Container Images:
+
+- Latest PostgreSQL and EPAS containers include Barman Cloud 2.19
+
+Security Enhancements:
+
+- Stronger RBAC enforcement for namespaced operator installations with Operator
+  Lifecycle Manager, including OpenShift. OpenShift users should update to this
+  version.
+
+Fixes:
+
+- Allow the instance manager to retry an interrupted `pg_rewind` by preserving a
+ copy of the original `pg_control` file
+- Clean up stale PID files before running `pg_rewind`
+- Force sorting by key in `primary_conninfo` to avoid random restarts with
+ PostgreSQL versions prior to 13
+- Preserve `ServiceAccount` changes (e.g., labels, annotations) upon
+ reconciliation
+- Disable enforcement of the imagePullPolicy default value
+- Improve `initdb` validation for WAL segment size
+- Properly handle the `targetLSN` option when recovering a cluster with the LSN
+ specified
+- Fix custom TLS certificates validation by allowing a certificates chain both
+ in the server and CA certificates
+
## Version 1.13.0
**Release date:** 17 February 2022
@@ -158,7 +207,7 @@ Features:
bootstrap method to specify a list of SQL queries to be executed on the main
application database as a superuser immediately after the cluster has been
created
-- Support for EDB Postgres Advanced 14.1
+- Support for EDB Postgres Advanced 14.2
Fixes:
@@ -231,7 +280,7 @@ Features:
- Drop support for deprecated API version
`postgresql.k8s.enterprisedb.io/v1alpha1` on the `Cluster`, `Backup`, and
`ScheduledBackup` kinds
-- Set default operand image to PostgreSQL 14.1
+- Set default operand image to PostgreSQL 14.2
Security:
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-backup-aws-inherit.yaml b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-backup-aws-inherit.yaml
new file mode 100644
index 00000000000..e807fb0a28a
--- /dev/null
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-backup-aws-inherit.yaml
@@ -0,0 +1,15 @@
+apiVersion: postgresql.k8s.enterprisedb.io/v1
+kind: Cluster
+metadata:
+ name: pg-backup-aws-inherit
+spec:
+ instances: 3
+ storage:
+ storageClass: standard
+ size: 1Gi
+ backup:
+ barmanObjectStore:
+ destinationPath: s3://BUCKET_NAME/path/to/folder
+ s3Credentials:
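+        # inherit the AWS IAM Role as an alternative to explicit S3 credentials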
+ inheritFromIAMRole: true
+ retentionPolicy: "30d"
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
index bbf916f74fb..8c7548da025 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/samples/cluster-example-full.yaml
@@ -35,7 +35,7 @@ metadata:
name: cluster-example-full
spec:
description: "Example of cluster"
- imageName: quay.io/enterprisedb/postgresql:14.1
+ imageName: quay.io/enterprisedb/postgresql:14.2
# imagePullSecret is only required if the images are located in a private registry
# imagePullSecrets:
# - name: private_registry_access
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/scheduling.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/scheduling.mdx
index 7d658b920a7..cf32c1d2877 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/scheduling.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/scheduling.mdx
@@ -62,7 +62,7 @@ metadata:
name: cluster-example
spec:
instances: 3
- imageName: quay.io/enterprisedb/postgresql:14.1
+ imageName: quay.io/enterprisedb/postgresql:14.2
affinity:
enablePodAntiAffinity: true #default value
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
index 582752b97c4..c4ae291d1c0 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/ssl_connections.mdx
@@ -167,7 +167,7 @@ Output :
version
--------------------------------------------------------------------------------------
------------------
-PostgreSQL 14.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
+PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.3.1 20191121 (Red Hat
8.3.1-5), 64-bit
(1 row)
```
\ No newline at end of file
diff --git a/advocacy_docs/kubernetes/cloud_native_postgresql/troubleshooting.mdx b/advocacy_docs/kubernetes/cloud_native_postgresql/troubleshooting.mdx
index 7591d7754fb..4c1d0e01017 100644
--- a/advocacy_docs/kubernetes/cloud_native_postgresql/troubleshooting.mdx
+++ b/advocacy_docs/kubernetes/cloud_native_postgresql/troubleshooting.mdx
@@ -92,7 +92,7 @@ kubectl describe pod -n postgresql-operator-system
Then get the logs from the same pod by running:
```shell
-kubectl get logs -n postgresql-operator-system
+kubectl logs -n postgresql-operator-system
```
### Gather more information about the operator
@@ -129,7 +129,7 @@ Cluster in healthy state
Name: cluster-example
Namespace: default
System ID: 7044925089871458324
-PostgreSQL Image: quay.io/enterprisedb/postgresql:14.1-3
+PostgreSQL Image: quay.io/enterprisedb/postgresql:14.2-3
Primary instance: cluster-example-1
Instances: 3
Ready instances: 3
@@ -205,7 +205,7 @@ kubectl describe cluster -n | grep "Image Name"
Output:
```shell
- Image Name: quay.io/enterprisedb/postgresql:14.1-3
+ Image Name: quay.io/enterprisedb/postgresql:14.2-3
```
!!! Note
@@ -405,12 +405,21 @@ objects [like here](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifec
Cluster exposes `status.conditions` as well. This allows one to 'wait' for a particular
event to occur instead of relying on the overall cluster health state. Available conditions as of now are:
+- LastBackupSucceeded
- ContinuousArchiving
### How to wait for a particular condition
+- Backup:
+
```bash
-$ kubectl wait --for=condition=ContinuousArchiving cluster/
+$ kubectl wait --for=condition=LastBackupSucceeded cluster/ -n
+```
+
+- ContinuousArchiving:
+
+```bash
+$ kubectl wait --for=condition=ContinuousArchiving cluster/ -n
```
Below is a snippet of a `cluster.status` that contains a failing condition.
@@ -423,10 +432,15 @@ $ kubectl get cluster/ -o yaml
status:
conditions:
- message: 'unexpected failure invoking barman-cloud-wal-archive: exit status
- 4'
+ 2'
reason: Continuous Archiving is Failing
status: "False"
type: ContinuousArchiving
+
+ - message: exit status 2
+ reason: Backup is failed
+ status: "False"
+ type: LastBackupSucceeded
```
## Some common issues