diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/01_whats_new.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/01_whats_new.mdx
deleted file mode 100644
index 77d51fa9b73..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/01_whats_new.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: "What’s New"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/whats_new.html"
----
-
-
-
-The following features are added in Hadoop Foreign Data Wrapper `2.0.7`:
-
-- Support for EDB Postgres Advanced Server 13.
-- Support for Ubuntu 20.04 LTS platform.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx
deleted file mode 100644
index 80570da0e29..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/02_requirements_overview.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: "Requirements Overview"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/requirements_overview.html"
----
-
-## Supported Versions
-
-The Hadoop Foreign Data Wrapper is certified with EDB Postgres Advanced Server 10 and above.
-
-## Supported Platforms
-
-The Hadoop Foreign Data Wrapper is supported on the following platforms:
-
-**Linux x86-64**
-
- - RHEL 8.x and 7.x
- - Rocky Linux/AlmaLinux 8.x
- - CentOS 7.x
- - OL 8.x and 7.x
- - Ubuntu 20.04 and 18.04 LTS
- - Debian 10.x and 9.x
-
-**Linux on IBM Power8/9 (LE)**
-
- - RHEL 7.x
-
-The Hadoop Foreign Data Wrapper supports access to the Hadoop file system through a HiveServer2 interface or through Apache Spark using the Spark Thrift Server.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/03_architecture_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/03_architecture_overview.mdx
deleted file mode 100644
index 775c8490a16..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/03_architecture_overview.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: "Architecture Overview"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/architecture_overview.html"
----
-
-
-
-Hadoop is a framework that allows you to store a large data set in a distributed file system.
-
-The Hadoop data wrapper provides an interface between a Hadoop file system and a Postgres database. The Hadoop data wrapper transforms a Postgres `SELECT` statement into a query that is understood by the HiveQL or Spark SQL interface.
-
-![Using a Hadoop distributed file system with Postgres](images/hadoop_distributed_file_system_with_postgres.png)
-
-When possible, the Foreign Data Wrapper asks the Hive or Spark server to perform the actions associated with the `WHERE` clause of a `SELECT` statement. Pushing down the `WHERE` clause improves performance by decreasing the amount of data moving across the network.
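-
-For example, in a query such as the following against a foreign table (the `weblogs` table here is illustrative), the filter on `http_status_code` can be evaluated by the Hive or Spark server, so only the matching rows travel back over the network:
-
-```text
-SELECT client_ip, uri
-  FROM weblogs
- WHERE http_status_code = '200';
-```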
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/04_supported_authentication_methods.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/04_supported_authentication_methods.mdx
deleted file mode 100644
index 2e9a00806f3..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/04_supported_authentication_methods.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: "Supported Authentication Methods"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/supported_authentication_methods.html"
----
-
-
-
-The Hadoop Foreign Data Wrapper supports `NOSASL` and `LDAP` authentication modes. To use `NOSASL`, do not specify any `OPTIONS` while creating the user mapping. For `LDAP` authentication mode, specify `username` and `password` in `OPTIONS` while creating the user mapping.
-
-## Using LDAP Authentication
-
-When using the Hadoop Foreign Data Wrapper with `LDAP` authentication, you must first configure the `Hive Server` or `Spark Server` to use LDAP authentication. The configured server must provide a `hive-site.xml` file that includes the connection details for the LDAP server. For example:
-
-```text
-<property>
-  <name>hive.server2.authentication</name>
-  <value>LDAP</value>
-  <description>
-    Expects one of [nosasl, none, ldap, kerberos, pam, custom].
-    Client authentication types.
-      NONE: no authentication check
-      LDAP: LDAP/AD based authentication
-      KERBEROS: Kerberos/GSSAPI authentication
-      CUSTOM: Custom authentication provider
-              (Use with property hive.server2.custom.authentication.class)
-      PAM: Pluggable authentication module
-      NOSASL: Raw transport
-  </description>
-</property>
-<property>
-  <name>hive.server2.authentication.ldap.url</name>
-  <value>ldap://localhost</value>
-  <description>LDAP connection URL</description>
-</property>
-<property>
-  <name>hive.server2.authentication.ldap.baseDN</name>
-  <value>ou=People,dc=itzgeek,dc=local</value>
-  <description>LDAP base DN</description>
-</property>
-```
-
-Then, when starting the hive server, include the path to the `hive-site.xml` file in the command. For example:
-
-```text
-./hive --config path_to_hive-site.xml_file --service hiveServer2
-```
-
-Where *path_to_hive-site.xml_file* specifies the complete path to the `hive-site.xml` file.
-
-When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create User Mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping).
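-
-For example, a user mapping for a server that uses LDAP authentication might look like the following sketch (the role name, server name, and credentials are placeholders):
-
-```text
-CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server
-  OPTIONS (username 'ldap_user', password 'ldap_password');
-```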
-
-
-
-## Using NOSASL Authentication
-
-When using `NOSASL` authentication with the Hadoop Foreign Data Wrapper, set the authorization to `None`, and the authentication method to `NOSASL` on the `Hive Server` or `Spark Server`. For example, if you start the `Hive Server` at the command line, include the `hive.server2.authentication` configuration parameter in the command:
-
-```text
-hive --service hiveserver2 --hiveconf hive.server2.authentication=NOSASL
-```
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/05_installing_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/05_installing_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 7db147644b0..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/05_installing_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,342 +0,0 @@
----
-title: "Installing the Hadoop Foreign Data Wrapper"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/installing_the_hadoop_data_adapter.html"
----
-
-
-
-The Hadoop Foreign Data Wrapper can be installed with an RPM package. During the installation process, the installer will satisfy software prerequisites.
-
-
-
-## Installing the Hadoop Foreign Data Wrapper using an RPM Package
-
-You can install the Hadoop Foreign Data Wrapper using an RPM package on the following platforms:
-
-- [RHEL 7](#rhel7)
-- [RHEL 8](#rhel8)
-- [CentOS 7](#centos7)
-- [Rocky Linux/AlmaLinux 8](#centos8)
-
-
-
-### On RHEL 7
-
-Before installing the Hadoop Foreign Data Wrapper, you must install the following prerequisite packages, and request credentials from EDB:
-
-Install the `epel-release` package:
-
- ```text
- yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- ```
-
-Enable the optional, extras, and HA repositories:
-
- ```text
- subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms" --enable "rhel-ha-for-rhel-*-server-rpms"
- ```
-
-You must also have credentials that allow access to the EDB repository. For information about requesting credentials, visit:
-
-<https://www.enterprisedb.com/repository-access-request/>
-
-After receiving your repository credentials you can:
-
-1. Create the repository configuration file.
-2. Modify the file, providing your user name and password.
-3. Install `edb-as<xx>-hdfs_fdw`.
-
-**Creating a Repository Configuration File**
-
-To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- yum -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-The repository configuration file is named `edb.repo`. The file resides in `/etc/yum.repos.d`.
-
-**Modifying the file, providing your user name and password**
-
-After creating the `edb.repo` file, use your choice of editor to ensure that the value of the `enabled` parameter is `1`, and replace the `username` and `password` placeholders in the `baseurl` specification with the name and password of a registered EDB user.
-
- ```text
- [edb]
- name=EnterpriseDB RPMs $releasever - $basearch
- baseurl=https://<username>:<password>@yum.enterprisedb.com/edb/redhat/rhel-$releasever-$basearch
- enabled=1
- gpgcheck=1
- repo_gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/ENTERPRISEDB-GPG-KEY
- ```
-
-**Installing Hadoop Foreign Data Wrapper**
-
-After saving your changes to the configuration file, use the following commands to install the Hadoop Foreign Data Wrapper:
-
- ```
- yum install edb-as<xx>-hdfs_fdw
- ```
-
-where `xx` is the server version number.
-
-When you install an RPM package that is signed by a source that is not recognized by your system, yum may ask for your permission to import the key to your local server. If prompted, and you are satisfied that the packages come from a trustworthy source, enter `y`, and press `Return` to continue.
-
-During the installation, yum may encounter a dependency that it cannot resolve. If it does, it will provide a list of the required dependencies that you must manually resolve.
-
-
-
-### On RHEL 8
-
-Before installing the Hadoop Foreign Data Wrapper, you must install the following prerequisite packages, and request credentials from EDB:
-
-Install the `epel-release` package:
-
- ```text
- dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- ```
-
-Enable the `codeready-builder-for-rhel-8-*-rpms` repository:
-
- ```text
- ARCH=$( /bin/arch )
- subscription-manager repos --enable "codeready-builder-for-rhel-8-${ARCH}-rpms"
- ```
-
-You must also have credentials that allow access to the EDB repository. For information about requesting credentials, visit:
-
-<https://www.enterprisedb.com/repository-access-request/>
-
-After receiving your repository credentials you can:
-
-1. Create the repository configuration file.
-2. Modify the file, providing your user name and password.
-3. Install `edb-as<xx>-hdfs_fdw`.
-
-**Creating a Repository Configuration File**
-
-To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- dnf -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-The repository configuration file is named `edb.repo`. The file resides in `/etc/yum.repos.d`.
-
-**Modifying the file, providing your user name and password**
-
-After creating the `edb.repo` file, use your choice of editor to ensure that the value of the `enabled` parameter is `1`, and replace the `username` and `password` placeholders in the `baseurl` specification with the name and password of a registered EDB user.
-
- ```text
- [edb]
- name=EnterpriseDB RPMs $releasever - $basearch
- baseurl=https://<username>:<password>@yum.enterprisedb.com/edb/redhat/rhel-$releasever-$basearch
- enabled=1
- gpgcheck=1
- repo_gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/ENTERPRISEDB-GPG-KEY
- ```
-
-**Installing Hadoop Foreign Data Wrapper**
-
-After saving your changes to the configuration file, use the following command to install the Hadoop Foreign Data Wrapper:
-
- ```text
- dnf install edb-as<xx>-hdfs_fdw
- ```
-
-When you install an RPM package that is signed by a source that is not recognized by your system, yum may ask for your permission to import the key to your local server. If prompted, and you are satisfied that the packages come from a trustworthy source, enter `y`, and press `Return` to continue.
-
-During the installation, yum may encounter a dependency that it cannot resolve. If it does, it will provide a list of the required dependencies that you must manually resolve.
-
-
-
-### On CentOS 7
-
-Before installing the Hadoop Foreign Data Wrapper, you must install the following prerequisite packages, and request credentials from EDB:
-
-Install the `epel-release` package:
-
- ```text
- yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- ```
-
-!!! Note
- You may need to enable the `[extras]` repository definition in the `CentOS-Base.repo` file (located in `/etc/yum.repos.d`).
-
-You must also have credentials that allow access to the EDB repository. For information about requesting credentials, visit:
-
-<https://www.enterprisedb.com/repository-access-request/>
-
-After receiving your repository credentials you can:
-
-1. Create the repository configuration file.
-2. Modify the file, providing your user name and password.
-3. Install `edb-as<xx>-hdfs_fdw`.
-
-**Creating a Repository Configuration File**
-
-To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- yum -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-The repository configuration file is named `edb.repo`. The file resides in `/etc/yum.repos.d`.
-
-**Modifying the file, providing your user name and password**
-
-After creating the `edb.repo` file, use your choice of editor to ensure that the value of the `enabled` parameter is `1`, and replace the `username` and `password` placeholders in the `baseurl` specification with the name and password of a registered EDB user.
-
- ```text
- [edb]
- name=EnterpriseDB RPMs $releasever - $basearch
- baseurl=https://<username>:<password>@yum.enterprisedb.com/edb/redhat/rhel-$releasever-$basearch
- enabled=1
- gpgcheck=1
- repo_gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/ENTERPRISEDB-GPG-KEY
- ```
-
-**Installing Hadoop Foreign Data Wrapper**
-
-After saving your changes to the configuration file, use the following command to install the Hadoop Foreign Data Wrapper:
-
- ```text
- yum install edb-as<xx>-hdfs_fdw
- ```
-
-where `xx` is the server version number.
-
-When you install an RPM package that is signed by a source that is not recognized by your system, yum may ask for your permission to import the key to your local server. If prompted, and you are satisfied that the packages come from a trustworthy source, enter `y`, and press `Return` to continue.
-
-During the installation, yum may encounter a dependency that it cannot resolve. If it does, it will provide a list of the required dependencies that you must manually resolve.
-
-
-
-### On Rocky Linux/AlmaLinux 8
-
-Before installing the Hadoop Foreign Data Wrapper, you must install the following prerequisite packages, and request credentials from EDB:
-
-Install the `epel-release` package:
-
- ```text
- dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- ```
-
-Enable the `PowerTools` repository:
-
- ```text
- dnf config-manager --set-enabled PowerTools
- ```
-
-You must also have credentials that allow access to the EDB repository. For information about requesting credentials, visit:
-
-<https://www.enterprisedb.com/repository-access-request/>
-
-After receiving your repository credentials you can:
-
-1. Create the repository configuration file.
-2. Modify the file, providing your user name and password.
-3. Install `edb-as<xx>-hdfs_fdw`.
-
-**Creating a Repository Configuration File**
-
-To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- dnf -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-The repository configuration file is named `edb.repo`. The file resides in `/etc/yum.repos.d`.
-
-**Modifying the file, providing your user name and password**
-
-After creating the `edb.repo` file, use your choice of editor to ensure that the value of the `enabled` parameter is `1`, and replace the `username` and `password` placeholders in the `baseurl` specification with the name and password of a registered EDB user.
-
- ```text
- [edb]
- name=EnterpriseDB RPMs $releasever - $basearch
- baseurl=https://<username>:<password>@yum.enterprisedb.com/edb/redhat/rhel-$releasever-$basearch
- enabled=1
- gpgcheck=1
- repo_gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/ENTERPRISEDB-GPG-KEY
- ```
-
-**Installing Hadoop Foreign Data Wrapper**
-
-After saving your changes to the configuration file, use the following command to install the Hadoop Foreign Data Wrapper:
-
- ```text
- dnf install edb-as<xx>-hdfs_fdw
- ```
-
-where `xx` is the server version number.
-
-When you install an RPM package that is signed by a source that is not recognized by your system, yum may ask for your permission to import the key to your local server. If prompted, and you are satisfied that the packages come from a trustworthy source, enter `y`, and press `Return` to continue.
-
-During the installation, yum may encounter a dependency that it cannot resolve. If it does, it will provide a list of the required dependencies that you must manually resolve.
-
-## Installing the Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-To install the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, you must have credentials that allow access to the EDB repository. To request credentials for the repository, visit the [EDB website](https://www.enterprisedb.com/repository-access-request/).
-
-The following steps walk you through using the EDB apt repository to install a Debian package. When using the commands, replace the `username` and `password` with the credentials provided by EDB.
-
-1. Assume superuser privileges:
-
- ```text
- sudo su -
- ```
-
-2. Configure the EnterpriseDB repository:
-
- On Debian 9 and Ubuntu:
-
- ```text
- sh -c 'echo "deb https://username:password@apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
- ```
-
- On Debian 10:
-
- 1. Set up the EDB repository:
-
- ```text
- sh -c 'echo "deb [arch=amd64] https://apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
- ```
-
- 1. Substitute your EDB credentials for the `username` and `password` in the following command:
-
- ```text
- sh -c 'echo "machine apt.enterprisedb.com login password " > /etc/apt/auth.conf.d/edb.conf'
- ```
-
-3. Add support to your system for secure APT repositories:
-
- ```text
- apt-get install apt-transport-https
- ```
-
-4. Add the EDB signing key:
-
- ```text
- wget -q -O - https://username:password@apt.enterprisedb.com/edb-deb.gpg.key | apt-key add -
- ```
-
-5. Update the repository metadata:
-
- ```text
- apt-get update
- ```
-
-6. Install the package:
-
- ```text
- apt-get install edb-as<xx>-hdfs-fdw
- ```
-
-where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/06_updating_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/06_updating_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 1366440ad50..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/06_updating_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "Updating the Hadoop Foreign Data Wrapper"
----
-
-
-
-**Updating an RPM Installation**
-
-If you have an existing RPM installation of Hadoop Foreign Data Wrapper, you can use yum or dnf to upgrade your repository configuration file and update to a more recent product version. To update the `edb.repo` file, assume superuser privileges and enter:
-
-- On RHEL or CentOS 7:
-
- > `yum upgrade edb-repo`
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- > `dnf upgrade edb-repo`
-
-yum or dnf will update the `edb.repo` file to enable access to the current EDB repository, configured to connect with the credentials specified in your `edb.repo` file. Then, you can use yum or dnf to upgrade any installed packages:
-
-- On RHEL or CentOS 7:
-
- > `yum upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- > `dnf upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-**Updating the Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host**
-
-To update the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, use the following command:
-
-> `apt-get --only-upgrade install edb-as<xx>-hdfs-fdw`
->
-> where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/07_features_of_hdfs_fdw.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/07_features_of_hdfs_fdw.mdx
deleted file mode 100644
index 8066191ad8d..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/07_features_of_hdfs_fdw.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: "Features of the Hadoop Foreign Data Wrapper"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/features_of_hdfs_fdw.html"
----
-
-
-
-The key features of the Hadoop Foreign Data Wrapper are listed below:
-
-## Where Clause Push-down
-
-The Hadoop Foreign Data Wrapper allows the push-down of the `WHERE` clause to the foreign server for execution. This feature optimizes remote queries to reduce the number of rows transferred from foreign servers.
-
-## Column Push-down
-
-The Hadoop Foreign Data Wrapper supports column push-down. As a result, the query brings back only those columns that are part of the select target list.
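-
-For example, with the `weblogs` foreign table used elsewhere in this guide, `EXPLAIN (VERBOSE)` shows a `Remote SQL` line that requests only the two selected columns from the Hive or Spark server:
-
-```text
-EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, uri FROM weblogs;
-```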
-
-## Automated Cleanup
-
-The Hadoop Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is especially useful when a foreign table is created for a temporary purpose. The syntax is:
-
-> `DROP EXTENSION hdfs_fdw CASCADE;`
-
-For more information, see [DROP EXTENSION](https://www.postgresql.org/docs/current/sql-dropextension.html).
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/08_configuring_the_hadoop_data_adapter.mdx
deleted file mode 100644
index c1a9de56120..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/08_configuring_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,483 +0,0 @@
----
-title: "Configuring the Hadoop Foreign Data Wrapper"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/configuring_the_hadoop_data_adapter.html"
----
-
-
-
-Before creating the extension and the database objects that use the extension, you must modify the Postgres host, providing the location of the supporting libraries.
-
-After installing Postgres, modify the `postgresql.conf` located in:
-
-> `/var/lib/edb/as_version/data`
-
-Modify the configuration file with your editor of choice, adding the `hdfs_fdw.jvmpath` parameter to the end of the configuration file, and setting the value to specify the location of the Java virtual machine (`libjvm.so`). Set the value of `hdfs_fdw.classpath` to indicate the location of the java class files used by the adapter; use a colon (:) as a delimiter between each path. For example:
-
-> ```text
-> hdfs_fdw.classpath=
-> '/usr/edb/as12/lib/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-> ```
->
-> !!! Note
-> The jar files (`hive-jdbc-1.0.1-standalone.jar` and `hadoop-common-2.6.4.jar`) mentioned in the above example should be copied from the respective Hive and Hadoop sources or websites to the PostgreSQL instance where the Hadoop Foreign Data Wrapper is installed.
->
-> If you are using EDB Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
-
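-A sketch showing both parameters together in `postgresql.conf` might look like the following; the paths are illustrative and must match the JVM and jar locations on your own host:
-
-```text
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/edb/as12/lib/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-```
-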
-After setting the parameter values, restart the Postgres server. For detailed information about controlling the service on an Advanced Server host, see the EDB Postgres Advanced Server Installation Guide, available at:
-
->
-
-Before using the Hadoop Foreign Data Wrapper, you must:
-
-> 1. Use the [CREATE EXTENSION](#create-extension) command to create the extension on the Postgres host.
-> 2. Use the [CREATE SERVER](#create-server) command to define a connection to the Hadoop file system.
-> 3. Use the [CREATE USER MAPPING](#create-user-mapping) command to define a mapping that associates a Postgres role with the server.
-> 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a table in the Advanced Server database that corresponds to a table that resides on the Hadoop cluster.
-
-
-
-## CREATE EXTENSION
-
-Use the `CREATE EXTENSION` command to create the `hdfs_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be querying the Hive or Spark server, and invoke the command:
-
-```text
-CREATE EXTENSION [IF NOT EXISTS] hdfs_fdw [WITH] [SCHEMA schema_name];
-```
-
-**Parameters**
-
-`IF NOT EXISTS`
-
-> Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the same name already exists.
-
-`schema_name`
-
-> Optionally specify the name of the schema in which to install the extension's objects.
-
-**Example**
-
-The following command installs the `hdfs_fdw` Hadoop Foreign Data Wrapper:
-
-> `CREATE EXTENSION hdfs_fdw;`
-
-For more information about using the foreign data wrapper `CREATE EXTENSION` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-createextension.html>.
-
-
-
-## CREATE SERVER
-
-Use the `CREATE SERVER` command to define a connection to a foreign server. The syntax is:
-
-```text
-CREATE SERVER server_name FOREIGN DATA WRAPPER hdfs_fdw
- [OPTIONS (option 'value' [, ...])]
-```
-
-The role that defines the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
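-
-For example, a superuser could allow another role to define servers that use the wrapper with a statement such as the following (the `analyst` role is illustrative):
-
-```text
-GRANT USAGE ON FOREIGN DATA WRAPPER hdfs_fdw TO analyst;
-```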
-
-**Parameters**
-
-`server_name`
-
-> Use `server_name` to specify a name for the foreign server. The server name must be unique within the database.
-
-`FOREIGN DATA WRAPPER`
-
-> Include the `FOREIGN DATA WRAPPER` clause to specify that the server should use the `hdfs_fdw` foreign data wrapper when connecting to the cluster.
-
-`OPTIONS`
-
-> Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server. You can include:
-
-| Option              | Description                                                                                                                                                                                                                                                                                                     |
-| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| host                | The address or hostname of the Hadoop cluster. The default value is `localhost`.                                                                                                                                                                                                                                |
-| port                | The port number of the Hive Thrift Server or Spark Thrift Server. The default is `10000`.                                                                                                                                                                                                                       |
-| client_type         | Specify `hiveserver2` or `spark` as the client type. To use the `ANALYZE` statement on Spark, you must specify a value of `spark`; if you do not specify a value for `client_type`, the default value is `hiveserver2`.                                                                                         |
-| auth_type           | The authentication type of the client; specify `LDAP` or `NOSASL`. If you do not specify an `auth_type`, the data wrapper decides the `auth_type` value on the basis of the user mapping: if the user mapping includes a user name and password, the data wrapper uses `LDAP` authentication; if it does not, the data wrapper uses `NOSASL` authentication. |
-| connect_timeout     | The length of time before a connection attempt times out. The default value is `300` seconds.                                                                                                                                                                                                                   |
-| fetch_size          | A user-specified value that is provided as a parameter to the JDBC API `setFetchSize`. The default value is `10,000`.                                                                                                                                                                                           |
-| log_remote_sql      | If `true`, logging includes SQL commands executed on the remote Hive server and the number of times that a scan is repeated. The default is `false`.                                                                                                                                                            |
-| query_timeout       | Use `query_timeout` to provide the number of seconds after which a request times out if it is not satisfied by the Hive server. Query timeout is not supported by the Hive JDBC driver.                                                                                                                         |
-| use_remote_estimate | Include `use_remote_estimate` to instruct the server to use `EXPLAIN` commands on the remote server when estimating processing costs. By default, `use_remote_estimate` is `false`, and remote tables are assumed to have `1000` rows.                                                                          |
-
-**Example**
-
-The following command creates a foreign server named `hdfs_server` that uses the `hdfs_fdw` foreign data wrapper to connect to a host with an IP address of `170.11.2.148`:
-
-```text
-CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS (host '170.11.2.148', port '10000', client_type 'hiveserver2', auth_type 'LDAP', connect_timeout '10000', query_timeout '10000');
-```
-
-The foreign server uses the default port (`10000`) for the connection to the client on the Hadoop cluster; the connection uses an LDAP server.
-
-For more information about using the `CREATE SERVER` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-createserver.html>
-
-
-
-## CREATE USER MAPPING
-
-Use the `CREATE USER MAPPING` command to define a mapping that associates a Postgres role with a foreign server:
-
-```text
-CREATE USER MAPPING FOR role_name SERVER server_name
- [OPTIONS (option 'value' [, ...])];
-```
-
-You must be the owner of the foreign server to create a user mapping for that server.
-
-Please note: the Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you are creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the username and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the OPTIONS clause when creating the user mapping.
-
-**Parameters**
-
-`role_name`
-
-> Use `role_name` to specify the role that will be associated with the foreign server.
-
-`server_name`
-
-> Use `server_name` to specify the name of the server that defines a connection to the Hadoop cluster.
-
-`OPTIONS`
-
-> Use the `OPTIONS` clause to specify connection information for the foreign server. If you are using LDAP authentication, provide a:
->
-> `username`: the name of the user on the LDAP server.
->
-> `password`: the password associated with the username.
->
-> If you do not provide a user name and password, the data wrapper will use NOSASL authentication.
-
-**Example**
-
-The following command creates a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
-
-> `CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server;`
-
-If the database host uses LDAP authentication, provide connection credentials when creating the user mapping:
-
-```text
-CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server OPTIONS (username 'alice', password '1safepwd');
-```
-
-The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `hdfs_server`. When connecting to the LDAP server, the Hive or Spark server will authenticate as `alice`, and provide a password of `1safepwd`.
-
-For detailed information about the `CREATE USER MAPPING` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-createusermapping.html>
-
-
-
-## CREATE FOREIGN TABLE
-
-A foreign table is a pointer to a table that resides on the Hadoop host. Before creating a foreign table definition on the Postgres server, connect to the Hive or Spark server and create a table; the columns in the table will map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the Hadoop host. The syntax is:
-
-```text
-CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
- { column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
- | table_constraint }
- [, ... ]
-] )
-[ INHERITS ( parent_table [, ... ] ) ]
- SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ]
-```
-
-where `column_constraint` is:
-
-```text
-[ CONSTRAINT constraint_name ]
-{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
-```
-
-and `table_constraint` is:
-
-```text
-[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
-```
-
-**Parameters**
-
-`table_name`
-
-> Specifies the name of the foreign table; include a schema name to specify the schema in which the foreign table should reside.
-
-`IF NOT EXISTS`
-
-> Include the `IF NOT EXISTS` clause to instruct the server to not throw an error if a table with the same name already exists; if a table with the same name exists, the server will issue a notice.
-
-`column_name`
-
-> Specifies the name of a column in the new table; each column should correspond to a column described on the Hive or Spark server.
-
-`data_type`
-
-> Specifies the data type of the column; when possible, specify the same data type for each column on the Postgres server and the Hive or Spark server. If a data type with the same name is not available, the Postgres server will attempt to cast the data type to a type compatible with the Hive or Spark server. If the server cannot identify a compatible data type, it will return an error.
-
-`COLLATE collation`
-
-> Include the `COLLATE` clause to assign a collation to the column; if not specified, the column data type's default collation is used.
-
-`INHERITS (parent_table [, ... ])`
-
-> Include the `INHERITS` clause to specify a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
-
-`CONSTRAINT constraint_name`
-
-> Specify an optional name for a column or table constraint; if not specified, the server will generate a constraint name.
-
-`NOT NULL`
-
-> Include the `NOT NULL` keywords to indicate that the column is not allowed to contain null values.
-
-`NULL`
-
-> Include the `NULL` keywords to indicate that the column is allowed to contain null values. This is the default.
-
-`CHECK (expr) [NO INHERIT]`
-
-> Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
->
-> A `CHECK` expression cannot contain subqueries or refer to variables other than columns of the current row.
->
-> Include the `NO INHERIT` keywords to specify that a constraint should not propagate to child tables.
-
-`DEFAULT default_expr`
-
-> Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears within. The data type of the default expression must match the data type of the column.
-
-`SERVER server_name [OPTIONS (option 'value' [, ... ] ) ]`
-
-> To create a foreign table that will allow you to query a table that resides on a Hadoop file system, include the `SERVER` clause and specify the `server_name` of the foreign server that uses the Hadoop data adapter.
->
-> Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
-
-| option | value |
-| ---------- | --------------------------------------------------------------------------------------- |
-| dbname | The name of the database on the Hive server; the database name is required. |
-| table_name | The name of the table on the Hive server; the default is the name of the foreign table. |
-
-**Example**
-
-To use data that is stored on a distributed file system, you must create a table on the Postgres host that maps the columns of a Hadoop table to the columns of a Postgres table. For example, for a Hadoop table with the following definition:
-
-```text
-CREATE TABLE weblogs (
- client_ip STRING,
- full_request_date STRING,
- day STRING,
- month STRING,
- month_num INT,
- year STRING,
- hour STRING,
- minute STRING,
- second STRING,
- timezone STRING,
- http_verb STRING,
- uri STRING,
- http_status_code STRING,
- bytes_returned STRING,
- referrer STRING,
- user_agent STRING)
-row format delimited
-fields terminated by '\t';
-```
-
-Execute a command on the Postgres server that creates a comparable foreign table:
-
-```text
-CREATE FOREIGN TABLE weblogs
-(
- client_ip TEXT,
- full_request_date TEXT,
- day TEXT,
- Month TEXT,
- month_num INTEGER,
- year TEXT,
- hour TEXT,
- minute TEXT,
- second TEXT,
- timezone TEXT,
- http_verb TEXT,
- uri TEXT,
- http_status_code TEXT,
- bytes_returned TEXT,
- referrer TEXT,
- user_agent TEXT
-)
-SERVER hdfs_server
- OPTIONS (dbname 'webdata', table_name 'weblogs');
-```
-
-Include the `SERVER` clause to specify the foreign server, and use the `OPTIONS` clause to specify the name of the database stored on the Hadoop file system (`webdata`) and the name of the table (`weblogs`) that corresponds to the foreign table on the Postgres server.
-
-For more information about using the `CREATE FOREIGN TABLE` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-createforeigntable.html>
-
-### Data Type Mappings
-
-When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the Hive server. The Hadoop data wrapper will automatically convert the following Hive data types to the target Postgres type:
-
-| **Hive** | **Postgres** |
-| ----------- | ---------------- |
-| BIGINT | BIGINT/INT8 |
-| BOOLEAN | BOOL/BOOLEAN |
-| BINARY | BYTEA |
-| CHAR | CHAR |
-| DATE | DATE |
-| DOUBLE | FLOAT8 |
-| FLOAT | FLOAT/FLOAT4 |
-| INT/INTEGER | INT/INTEGER/INT4 |
-| SMALLINT | SMALLINT/INT2 |
-| STRING | TEXT |
-| TIMESTAMP | TIMESTAMP |
-| TINYINT | INT2 |
-| VARCHAR | VARCHAR |
-
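-For example, a Hive table with `INT`, `STRING`, and `TIMESTAMP` columns could be exposed through a foreign table that declares the corresponding Postgres types; the table and column names below are illustrative:
-
-```text
-CREATE FOREIGN TABLE events
-(
-    event_id    INTEGER,
-    event_name  TEXT,
-    event_time  TIMESTAMP
-)
-SERVER hdfs_server
-    OPTIONS (dbname 'webdata', table_name 'events');
-```
-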
-## DROP EXTENSION
-
-Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be dropping the extension, and run the command:
-
-```text
-DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
-```
-
-**Parameters**
-
-`IF EXISTS`
-
-> Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the specified name doesn't exist.
-
-`name`
-
-> Specify the name of the installed extension.
-
-`CASCADE`
-
-> Automatically drop objects that depend on the extension, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
-> Refuse to drop the extension if any objects, other than its member objects and other extensions listed in the same `DROP` command, depend on it.
-
-**Example**
-
-The following command removes the extension from the existing database:
-
-> `DROP EXTENSION hdfs_fdw;`
-
-For more information about using the foreign data wrapper `DROP EXTENSION` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-dropextension.html>.
-
-## DROP SERVER
-
-Use the `DROP SERVER` command to remove a connection to a foreign server. The syntax is:
-
-```text
-DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
-```
-
-To drop a foreign server, you must be its owner; use the `ALTER SERVER` command to reassign ownership of a foreign server. You must also have `USAGE` privilege on the foreign-data wrapper specified in the `DROP SERVER` command.
-
-**Parameters**
-
-`IF EXISTS`
-
-> Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a server with the specified name doesn't exist.
-
-`name`
-
-> Specify the name of the installed server.
-
-`CASCADE`
-
-> Automatically drop objects that depend on the server, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
-> Refuse to drop the server if any objects depend on it.
-
-**Example**
-
-The following command removes a foreign server named `hdfs_server`:
-
-> `DROP SERVER hdfs_server;`
-
-For more information about using the `DROP SERVER` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-dropserver.html>
-
-## DROP USER MAPPING
-
-Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
-
-```text
-DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
-```
-
-**Parameters**
-
-`IF EXISTS`
-
-> Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the user mapping doesn't exist.
-
-`user_name`
-
-> Specify the user name of the mapping.
-
-`server_name`
-
-> Specify the name of the server that defines a connection to the Hadoop cluster.
-
-**Example**
-
-The following command drops a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
-
-> `DROP USER MAPPING FOR enterprisedb SERVER hdfs_server;`
-
-For detailed information about the `DROP USER MAPPING` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-dropusermapping.html>
-
-## DROP FOREIGN TABLE
-
-A foreign table is a pointer to a table that resides on the Hadoop host. Use the `DROP FOREIGN TABLE` command to remove a foreign table. Only the owner of the foreign table can drop it.
-
-```text
-DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
-```
-
-**Parameters**
-
-`IF EXISTS`
-
-> Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a foreign table with the specified name doesn't exist.
-
-`name`
-
-> Specify the name of the foreign table.
-
-`CASCADE`
-
-> Automatically drop objects that depend on the foreign table, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
-> Refuse to drop the foreign table if any objects depend on it.
-
-**Example**
-
-```text
-DROP FOREIGN TABLE warehouse;
-```
-
-For more information about using the `DROP FOREIGN TABLE` command, see:
-
-> <https://www.postgresql.org/docs/current/sql-dropforeigntable.html>
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/09_using_the_hadoop_data_adapter.mdx
deleted file mode 100644
index c669a808a37..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/09_using_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,305 +0,0 @@
----
-title: "Using the Hadoop Foreign Data Wrapper"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/using_the_hadoop_data_adapter.html"
----
-
-
-
-You can use the Hadoop Foreign Data Wrapper either through Apache Hive or Apache Spark. Both Hive and Spark store metadata in the configured metastore, where databases and tables are created using HiveQL.
-
-## Using HDFS FDW with Apache Hive on Top of Hadoop
-
-`Apache Hive` data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called `HiveQL`. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in `HiveQL`.
-
-There are two versions of the Hive server, `HiveServer1` and `HiveServer2`, which can be downloaded from the [Apache Hive website](https://hive.apache.org/downloads.html).
-
-!!! Note
- The Hadoop Foreign Data Wrapper supports only `HiveServer2`.
-
-To use HDFS FDW with Apache Hive on top of Hadoop:
-
-Step 1: Download [weblogs_parse](http://wiki.pentaho.com/download/attachments/23531451/weblogs_parse.zip?version=1&modificationDate=1327096242000/) and follow the instructions at the [Wiki Pentaho website](https://wiki.pentaho.com/display/BAD/Transforming+Data+within+Hive/).
-
-Step 2: Upload the `weblogs_parse.txt` file using these commands:
-
-```text
-hadoop fs -mkdir /weblogs
-hadoop fs -mkdir /weblogs/parse
-hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
-```
-
-Step 3: Start `HiveServer2`, if it is not already running, using one of the following commands:
-
-```text
-$HIVE_HOME/bin/hiveserver2
-```
-
-or
-
-```text
-$HIVE_HOME/bin/hive --service hiveserver2
-```
-
-Step 4: Connect to `HiveServer2` using the hive `beeline` client. For example:
-
-```text
-$ beeline
-Beeline version 1.0.1 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
-```
-
-Step 5: Create a table in Hive. The example creates a table named `weblogs`:
-
-```text
-CREATE TABLE weblogs (
- client_ip STRING,
- full_request_date STRING,
- day STRING,
- month STRING,
- month_num INT,
- year STRING,
- hour STRING,
- minute STRING,
- second STRING,
- timezone STRING,
- http_verb STRING,
- uri STRING,
- http_status_code STRING,
- bytes_returned STRING,
- referrer STRING,
- user_agent STRING)
-row format delimited
-fields terminated by '\t';
-```
-
-Step 6: Load data into the table.
-
-```text
-hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
-```
-
-Step 7: Access your data from Postgres; you can now use the `weblogs` table. Once you are connected using psql, follow these steps:
-
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
-
--- create server object
-CREATE SERVER hdfs_server
- FOREIGN DATA WRAPPER hdfs_fdw
- OPTIONS (host '127.0.0.1');
-
--- create user mapping
-CREATE USER MAPPING FOR postgres
- SERVER hdfs_server OPTIONS (username 'hive_username', password 'hive_password');
-
--- create foreign table
-CREATE FOREIGN TABLE weblogs
-(
- client_ip TEXT,
- full_request_date TEXT,
- day TEXT,
- Month TEXT,
- month_num INTEGER,
- year TEXT,
- hour TEXT,
- minute TEXT,
- second TEXT,
- timezone TEXT,
- http_verb TEXT,
- uri TEXT,
- http_status_code TEXT,
- bytes_returned TEXT,
- referrer TEXT,
- user_agent TEXT
-)
-SERVER hdfs_server
- OPTIONS (dbname 'default', table_name 'weblogs');
-
-
--- select from table
-postgres=# SELECT DISTINCT client_ip IP, count(*)
- FROM weblogs GROUP BY IP HAVING count(*) > 5000 ORDER BY 1;
- ip | count
------------------+-------
- 13.53.52.13 | 5494
- 14.323.74.653 | 16194
- 322.6.648.325 | 13242
- 325.87.75.336 | 6500
- 325.87.75.36 | 6498
- 361.631.17.30 | 64979
- 363.652.18.65 | 10561
- 683.615.622.618 | 13505
-(8 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, full_request_date, uri FROM weblogs WHERE http_status_code = 200;
- QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
- Foreign Scan on public.weblogs
- Output: client_ip, full_request_date, uri
- Remote SQL: SELECT client_ip, full_request_date, uri FROM default.weblogs WHERE ((http_status_code = '200'))
-(3 rows)
-```
-
-## Using HDFS FDW with Apache Spark on Top of Hadoop
-
-Apache Spark is a general-purpose distributed computing framework that supports a wide variety of use cases. It provides real-time streaming as well as batch processing with speed, ease of use, and sophisticated analytics. Spark does not provide a storage layer, as it relies on third-party storage providers such as Hadoop, HBase, Cassandra, and S3. Spark integrates seamlessly with Hadoop and can process existing data. Spark SQL is 100% compatible with `HiveQL` and can be used as a replacement for `HiveServer2`, using the `Spark Thrift Server`.
-
-To use HDFS FDW with Apache Spark on top of Hadoop:
-
-Step 1: Download and install Apache Spark in local mode.
-
-Step 2: In the folder `$SPARK_HOME/conf` create a file `spark-defaults.conf` containing the following line:
-
-```text
-spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
-```
-
-By default, Spark uses Derby for both the metadata and the data itself (called a warehouse in Spark). To have Spark use Hadoop as a warehouse, add this property.
-
-Step 3: Start the Spark Thrift Server.
-
-```text
-./start-thriftserver.sh
-```
-
-Step 4: Make sure the Spark Thrift server is running and writing to a log file.
-
-Step 5: Create a local file (`names.txt`) that contains the following entries:
-
-```text
-$ cat /tmp/names.txt
-1,abcd
-2,pqrs
-3,wxyz
-4,a_b_c
-5,p_q_r
-,
-```
-
-Step 6: Connect to the Spark Thrift Server using the Spark `beeline` client. For example:
-
-```text
-$ beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-```
-
-Step 7: Prepare the sample data on Spark. Run the following commands in the `beeline` command line tool:
-
-```text
-./beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
-Enter password for jdbc:hive2://localhost:10000/default;auth=noSasl:
-Connected to: Spark SQL (version 2.1.1)
-Driver: Hive JDBC (version 1.2.1.spark2)
-Transaction isolation: TRANSACTION_REPEATABLE_READ
-0: jdbc:hive2://localhost:10000> create database my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.379 seconds)
-0: jdbc:hive2://localhost:10000> use my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.03 seconds)
-0: jdbc:hive2://localhost:10000> create table my_names_tab(a int, name string)
- row format delimited fields terminated by ' ';
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.11 seconds)
-0: jdbc:hive2://localhost:10000>
-
-0: jdbc:hive2://localhost:10000> load data local inpath '/tmp/names.txt'
- into table my_names_tab;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.33 seconds)
-0: jdbc:hive2://localhost:10000> select * from my_names_tab;
-+-------+---------+--+
-| a | name |
-+-------+---------+--+
-| 1 | abcd |
-| 2 | pqrs |
-| 3 | wxyz |
-| 4 | a_b_c |
-| 5 | p_q_r |
-| NULL | NULL |
-+-------+---------+--+
-```
-
-The following commands list the corresponding files in Hadoop:
-
-```text
-$ hadoop fs -ls /user/hive/warehouse/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db
-
-$ hadoop fs -ls /user/hive/warehouse/my_test_db.db/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db/my_names_tab
-```
-
-Step 8: Access your data from Postgres using psql:
-
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
-
--- create server object
-CREATE SERVER hdfs_server
- FOREIGN DATA WRAPPER hdfs_fdw
- OPTIONS (host '127.0.0.1', port '10000', client_type 'spark', auth_type 'NOSASL');
-
--- create user mapping
-CREATE USER MAPPING FOR postgres
- SERVER hdfs_server OPTIONS (username 'spark_username', password 'spark_password');
-
--- create foreign table
-CREATE FOREIGN TABLE f_names_tab( a int, name varchar(255)) SERVER hdfs_server
- OPTIONS (dbname 'my_test_db', table_name 'my_names_tab');
-
--- select the data from foreign server
-select * from f_names_tab;
- a | name
----+--------
- 1 | abcd
- 2 | pqrs
- 3 | wxyz
- 4 | a_b_c
- 5 | p_q_r
- 0 |
-(6 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (verbose, costs off) SELECT name FROM f_names_tab WHERE a > 3;
- QUERY PLAN
---------------------------------------------------------------------------
- Foreign Scan on public.f_names_tab
- Output: name
- Remote SQL: SELECT name FROM my_test_db.my_names_tab WHERE ((a > '3'))
-(3 rows)
-```
-
-!!! Note
- The same port is used when creating the foreign server because the Spark Thrift Server is compatible with the Hive Thrift Server. Applications that work with HiveServer2 also work with Spark, except for the behavior of the `ANALYZE` command and the connection string in the case of `NOSASL`. If you replace Hive with Spark, we recommend using `ALTER SERVER` to change the `client_type` option.
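-
-For example, an existing foreign server definition created for HiveServer2 could be repointed at Spark with a statement along these lines (the server name is illustrative):
-
-```text
-ALTER SERVER hdfs_server OPTIONS (SET client_type 'spark');
-```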
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/10_identifying_data_adapter_version.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/10_identifying_data_adapter_version.mdx
deleted file mode 100644
index b643ca82769..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/10_identifying_data_adapter_version.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Identifying the Hadoop Foreign Data Wrapper Version"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/identifying_data_adapter_version.html"
----
-
-
-
-The Hadoop Foreign Data Wrapper includes a function that you can use to identify the currently installed version of the `.so` file for the data wrapper. To use the function, connect to the Postgres server, and enter:
-
-```text
-SELECT hdfs_fdw_version();
-```
-
-The function returns the version number:
-
-```text
-hdfs_fdw_version
------------------
-
-```
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/11_uninstalling_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/11_uninstalling_the_hadoop_data_adapter.mdx
deleted file mode 100644
index d30391031a0..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/11_uninstalling_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: "Uninstalling the Hadoop Foreign Data Wrapper"
----
-
-
-
-**Uninstalling an RPM Package**
-
-You can use the `yum remove` or `dnf remove` command to remove a package installed by `yum` or `dnf`. To remove a package, open a terminal window, assume superuser privileges, and enter the command:
-
-- On RHEL or CentOS 7:
-
- `yum remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-**Uninstalling Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host**
-
-- To uninstall the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, invoke the following command:
-
- `apt-get remove edb-as<xx>-hdfs-fdw`
-
-> where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/index.mdx b/product_docs/docs/hadoop_data_adapter/2.0.7/index.mdx
deleted file mode 100644
index 49ee76de2a8..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.7/index.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: "Hadoop Foreign Data Wrapper Guide"
-legacyRedirectsGenerated:
- # This list is generated by a script. If you need to add entries, use the `legacyRedirects` key.
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/index.html"
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/conclusion.html"
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/genindex.html"
- - "/edb-docs/p/edb-postgres-hadoop-data-adapter/2.0.7"
- - "/edb-docs/d/edb-postgres-hadoop-data-adapter/user-guides/user-guide/2.0.7/whats_new.html"
----
-
-The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
-
-The Hadoop Foreign Data Wrapper can be installed with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-
-This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
-
-
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/01_whats_new.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/01_whats_new.mdx
deleted file mode 100644
index 30138f9d121..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/01_whats_new.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: "What’s New"
----
-
-
-
-The following features are added in Hadoop Foreign Data Wrapper `2.0.8`:
-
-- Support for Hadoop version 3.2.x
-- Support for Hive version 3.1.x
-- Support for Spark version 3.0.x
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/02_requirements_overview.mdx
deleted file mode 100644
index 631871cf013..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/02_requirements_overview.mdx
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: "Requirements Overview"
----
-
-## Supported Versions
-
-The Hadoop Foreign Data Wrapper is certified with EDB Postgres Advanced Server 10 and above.
-
-## Supported Platforms
-
-The Hadoop Foreign Data Wrapper is supported on the following platforms:
-
-**Linux x86-64**
-
- - RHEL 8.x and 7.x
- - Rocky Linux/AlmaLinux 8.x
- - CentOS 7.x
- - OL 8.x and 7.x
- - Ubuntu 20.04 and 18.04 LTS
- - Debian 10.x and 9.x
-
-**Linux on IBM Power8/9 (LE)**
-
- - RHEL 7.x
-
-The Hadoop Foreign Data Wrapper supports use of the Hadoop file system using a HiveServer2 interface or Apache Spark using the Spark Thrift Server.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/05_installing_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/05_installing_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 51b34673964..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/05_installing_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,294 +0,0 @@
----
-title: "Installing the Hadoop Foreign Data Wrapper"
----
-
-
-
-The Hadoop Foreign Data Wrapper can be installed with an RPM package. During the installation process, the installer will satisfy software prerequisites. If yum encounters a dependency that it cannot resolve, it will provide a list of the required dependencies that you must manually resolve.
-
-
-
-## Installing the Hadoop Foreign Data Wrapper using an RPM Package
-
-You can install the Hadoop Foreign Data Wrapper using an RPM package on the following platforms:
-
-- [RHEL or CentOS 7 PPCLE](#rhel_centos7_PPCLE)
-- [RHEL 7](#rhel7)
-- [RHEL 8](#rhel8)
-- [CentOS 7](#centos7)
-- [Rocky Linux/AlmaLinux 8](#centos8)
-
-
-
-### On RHEL or CentOS 7 PPCLE
-
-1. Use the following commands to create an Advance Toolchain repository configuration file and import its GPG key:
-
- ```text
- rpm --import https://public.dhe.ibm.com/software/server/POWER/Linux/toolchain/at/redhat/RHEL7/gpg-pubkey-6976a827-5164221b
-
- cat > /etc/yum.repos.d/advance-toolchain.repo <<EOF
- # Advance Toolchain repository (adjust the URLs if your RHEL release differs)
- [advance-toolchain]
- name=Advance Toolchain IBM FTP
- baseurl=https://public.dhe.ibm.com/software/server/POWER/Linux/toolchain/at/redhat/RHEL7
- enabled=1
- gpgcheck=1
- gpgkey=https://public.dhe.ibm.com/software/server/POWER/Linux/toolchain/at/redhat/RHEL7/gpg-pubkey-6976a827-5164221b
- EOF
- ```
-
-2. To create the EDB repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- yum -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-3. Replace ‘USERNAME:PASSWORD’ below with your username and password for the EDB repositories:
-
- ```text
- sed -i "s@<username>:<password>@USERNAME:PASSWORD@" /etc/yum.repos.d/edb.repo
- ```
-
-4. Install the EPEL repository:
-
- ```text
- yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- ```
-
-5. On RHEL 7 PPCLE, enable the additional repositories to resolve EPEL dependencies:
-
- ```text
- subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms" --enable "rhel-ha-for-rhel-*-server-rpms"
- ```
-
-6. Install the selected package:
-
- ```text
- yum install edb-as<xx>-hdfs_fdw
- ```
-
- where `xx` is the server version number.
-
-
-
-
-### On RHEL 7
-
-1. To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- yum -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-2. Replace ‘USERNAME:PASSWORD’ below with your username and password for the EDB repositories:
-
- ```text
- sed -i "s@<username>:<password>@USERNAME:PASSWORD@" /etc/yum.repos.d/edb.repo
- ```
-
-3. Install the EPEL repository:
-
- ```text
- yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- ```
-
-4. Enable the additional repositories to resolve dependencies:
-
- ```text
- subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms" --enable "rhel-ha-for-rhel-*-server-rpms"
- ```
-
-5. Install the selected package:
-
- ```text
- yum install edb-as<xx>-hdfs_fdw
- ```
-
- where `xx` is the server version number.
-
-
-
-
-
-
-### On RHEL 8
-
-1. To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- dnf -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-2. Replace ‘USERNAME:PASSWORD’ below with your username and password for the EDB repositories:
-
- ```text
- sed -i "s@<username>:<password>@USERNAME:PASSWORD@" /etc/yum.repos.d/edb.repo
- ```
-
-3. Install the EPEL repository:
-
- ```text
- dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
- ```
-
-4. Enable the additional repositories to resolve dependencies:
-
- ```text
- ARCH=$( /bin/arch )
- subscription-manager repos --enable "codeready-builder-for-rhel-8-${ARCH}-rpms"
- ```
-
-5. Disable the built-in PostgreSQL module:
-
- ```text
- dnf -qy module disable postgresql
- ```
-6. Install the selected package:
-
- ```text
- dnf install edb-as<xx>-hdfs_fdw
- ```
-
- where `xx` is the server version number.
-
-
-
-
-
-### On CentOS 7
-
-1. To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- yum -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-2. Replace ‘USERNAME:PASSWORD’ below with your username and password for the EDB repositories:
-
- ```text
- sed -i "s@<username>:<password>@USERNAME:PASSWORD@" /etc/yum.repos.d/edb.repo
- ```
-
-3. Install the EPEL repository:
-
- ```text
- yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
- ```
-
-4. Install the selected package:
-
- ```text
- yum install edb-as<xx>-hdfs_fdw
- ```
-
- where `xx` is the server version number.
-
-
-
-
-
-
-### On Rocky Linux/AlmaLinux 8
-
-
-1. To create the repository configuration file, assume superuser privileges, and invoke the following command:
-
- ```text
- dnf -y install https://yum.enterprisedb.com/edbrepos/edb-repo-latest.noarch.rpm
- ```
-
-2. Replace ‘USERNAME:PASSWORD’ below with your username and password for the EDB repositories:
-
- ```text
- sed -i "s@<username>:<password>@USERNAME:PASSWORD@" /etc/yum.repos.d/edb.repo
- ```
-
-3. Install the EPEL repository:
-
- ```text
- dnf -y install epel-release
- ```
-
-4. Enable the additional repositories to resolve dependencies:
-
- ```text
- dnf config-manager --set-enabled PowerTools
- ```
-
-5. Disable the built-in PostgreSQL module:
-
- ```text
- dnf -qy module disable postgresql
- ```
-6. Install the selected package:
-
- ```text
- dnf install edb-as<xx>-hdfs_fdw
- ```
-
- where `xx` is the server version number.
-
-
-
-## Installing the Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-To install the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, you must have credentials that allow access to the EDB repository. To request credentials for the repository, visit the [EDB website](https://www.enterprisedb.com/repository-access-request/).
-
-The following steps walk you through using the EDB apt repository to install a Debian package. When using the commands, replace the `username` and `password` with the credentials provided by EDB.
-
-1. Assume superuser privileges:
-
- ```text
- sudo su -
- ```
-
-2. Configure the EnterpriseDB repository:
-
- On Debian 9 and Ubuntu:
-
- > ```text
- > sh -c 'echo "deb https://username:password@apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
- > ```
-
- On Debian 10:
-
- 1. Set up the EDB repository:
-
- > ```text
- > sh -c 'echo "deb [arch=amd64] https://apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
- > ```
-
- 1. Substitute your EDB credentials for the `username` and `password` in the following command:
-
- > ```text
- > sh -c 'echo "machine apt.enterprisedb.com login password " > /etc/apt/auth.conf.d/edb.conf'
- > ```
-
-3. Add support to your system for secure APT repositories:
-
- ```text
- apt-get install apt-transport-https
- ```
-
-4. Add the EDB signing key:
-
- ```text
- wget -q -O - https://username:password@apt.enterprisedb.com/edb-deb.gpg.key | apt-key add -
- ```
-
-5. Update the repository metadata:
-
- ```text
- apt-get update
- ```
-
-6. Install the package:
-
- ```text
- apt-get install edb-as<xx>-hdfs-fdw
- ```
-
- where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/06_updating_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/06_updating_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 0c309f095c3..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/06_updating_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "Updating the Hadoop Foreign Data Wrapper"
----
-
-
-
-## Updating an RPM Installation
-
-If you have an existing RPM installation of Hadoop Foreign Data Wrapper, you can use yum or dnf to upgrade your repository configuration file and update to a more recent product version. To update the `edb.repo` file, assume superuser privileges and enter:
-
-- On RHEL or CentOS 7:
-
- `yum upgrade edb-repo`
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf upgrade edb-repo`
-
-yum or dnf will update the `edb.repo` file to enable access to the current EDB repository, configured to connect with the credentials specified in your `edb.repo` file. Then, you can use yum or dnf to upgrade any installed packages:
-
-- On RHEL or CentOS 7:
-
- `yum upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-## Updating the Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-To update the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, use the following command:
-
- `apt-get --only-upgrade install edb-as<xx>-hdfs-fdw`
-
- where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/07_features_of_hdfs_fdw.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/07_features_of_hdfs_fdw.mdx
deleted file mode 100644
index 66cacb62851..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/07_features_of_hdfs_fdw.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Features of the Hadoop Foreign Data Wrapper"
----
-
-
-
-The key features of the Hadoop Foreign Data Wrapper are listed below:
-
-## Where Clause Push-down
-
-The Hadoop Foreign Data Wrapper allows push-down of the `WHERE` clause to the foreign server for execution. This feature optimizes remote queries and reduces the number of rows transferred from the foreign server.
-
-## Column Push-down
-
-Hadoop Foreign Data Wrapper supports column push-down. As a result, the query brings back only those columns that are a part of the select target list.
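-
-The following sketch illustrates both behaviors, assuming the `weblogs` foreign table defined later in this guide already exists; the `Output` and `Remote SQL` lines of the plan show what is sent to the Hive or Spark server:
-
-```text
-EXPLAIN (VERBOSE, COSTS OFF)
-SELECT client_ip, uri FROM weblogs WHERE http_status_code = '200';
--- The plan's "Output:" line lists only the selected columns, and its
--- "Remote SQL:" line shows the WHERE clause pushed to the remote query.
-```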
-
-## Automated Cleanup
-
-The Hadoop Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is specifically useful when a foreign table is created for a temporary purpose. The syntax is:
-
-> `DROP EXTENSION hdfs_fdw CASCADE;`
-
-For more information, see [DROP EXTENSION](https://www.postgresql.org/docs/current/sql-dropextension.html).
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/08_configuring_the_hadoop_data_adapter.mdx
deleted file mode 100644
index b706ac482fa..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/08_configuring_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,481 +0,0 @@
----
-title: "Configuring the Hadoop Foreign Data Wrapper"
----
-
-
-
-Before creating the extension and the database objects that use the extension, you must modify the Postgres host, providing the location of the supporting libraries.
-
-After installing Postgres, modify the `postgresql.conf` located in:
-
- `/var/lib/edb/as_version/data`
-
-Modify the configuration file with your editor of choice, adding the `hdfs_fdw.jvmpath` parameter to the end of the configuration file, and setting the value to specify the location of the Java virtual machine (`libjvm.so`). Set the value of `hdfs_fdw.classpath` to indicate the location of the java class files used by the adapter; use a colon (:) as a delimiter between each path. For example:
-
- ```text
- hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
- hdfs_fdw.classpath=
- '/usr/edb/as12/lib/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
- ```
-
- !!! Note
- The jar files (hive-jdbc-1.0.1-standalone.jar and hadoop-common-2.6.4.jar) mentioned in the above example must be copied from the respective Hive and Hadoop sources or websites to the PostgreSQL instance where the Hadoop Foreign Data Wrapper is installed.
-
- If you are using EDB Advanced Server and have a `DATE` column in your database, you must set `edb_redwood_date = OFF` in the `postgresql.conf` file.
-
-After setting the parameter values, restart the Postgres server. For detailed information about controlling the service on an Advanced Server host, see the EDB Postgres Advanced Server Installation Guide, available on the [EDB website](https://www.enterprisedb.com/docs).
-
-
-
-Before using the Hadoop Foreign Data Wrapper, you must:
-
- 1. Use the [CREATE EXTENSION](#create-extension) command to create the extension on the Postgres host.
- 2. Use the [CREATE SERVER](#create-server) command to define a connection to the Hadoop file system.
- 3. Use the [CREATE USER MAPPING](#create-user-mapping) command to define a mapping that associates a Postgres role with the server.
- 4. Use the [CREATE FOREIGN TABLE](#create-foreign-table) command to define a table in the Advanced Server database that corresponds to a table that resides on the Hadoop cluster.
-
-
-
-## CREATE EXTENSION
-
-Use the `CREATE EXTENSION` command to create the `hdfs_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be querying the Hive or Spark server, and invoke the command:
-
-```text
-CREATE EXTENSION [IF NOT EXISTS] hdfs_fdw [WITH] [SCHEMA schema_name];
-```
-
-**Parameters**
-
-`IF NOT EXISTS`
-
- Include the `IF NOT EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the same name already exists.
-
-`schema_name`
-
- Optionally specify the name of the schema in which to install the extension's objects.
-
-**Example**
-
-The following command installs the `hdfs_fdw` Hadoop Foreign Data Wrapper:
-
- `CREATE EXTENSION hdfs_fdw;`
-
-For more information about using the foreign data wrapper `CREATE EXTENSION` command, see:
-
- <https://www.postgresql.org/docs/current/sql-createextension.html>
-
-
-
-## CREATE SERVER
-
-Use the `CREATE SERVER` command to define a connection to a foreign server. The syntax is:
-
-```text
-CREATE SERVER server_name FOREIGN DATA WRAPPER hdfs_fdw
- [OPTIONS (option 'value' [, ...])]
-```
-
-The role that defines the server is the owner of the server; use the `ALTER SERVER` command to reassign ownership of a foreign server. To create a foreign server, you must have `USAGE` privilege on the foreign-data wrapper specified in the `CREATE SERVER` command.
-
-**Parameters**
-
-`server_name`
-
- Use `server_name` to specify a name for the foreign server. The server name must be unique within the database.
-
-`FOREIGN DATA WRAPPER`
-
- Include the `FOREIGN DATA WRAPPER` clause to specify that the server should use the `hdfs_fdw` foreign data wrapper when connecting to the cluster.
-
-`OPTIONS`
-
- Use the `OPTIONS` clause of the `CREATE SERVER` command to specify connection information for the foreign server. You can include:
-
-| Option | Description |
-| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| host | The address or hostname of the Hadoop cluster. The default value is \`localhost\`. |
-| port | The port number of the Hive Thrift Server or Spark Thrift Server. The default is \`10000\`. |
-| client_type | Specify hiveserver2 or spark as the client type. To use the ANALYZE statement on Spark, you must specify a value of spark; if you do not specify a value for client_type, the default value is hiveserver2. |
-| auth_type | The authentication type of the client; specify LDAP or NOSASL. If you do not specify an auth_type, the data wrapper will decide the auth_type value on the basis of the user mapping. If the user mapping includes a user name and password, the data wrapper will use LDAP authentication. If the user mapping does not include a user name and password, the data wrapper will use NOSASL authentication. |
-| connect_timeout | The length of time before a connection attempt times out. The default value is \`300\` seconds. |
-| fetch_size | A user-specified value that is provided as a parameter to the JDBC API setFetchSize. The default value is \`10,000\`. |
-| log_remote_sql | If true, logging will include SQL commands executed on the remote hive server and the number of times that a scan is repeated. The default is \`false\`. |
-| query_timeout | Use query_timeout to provide the number of seconds after which a request will timeout if it is not satisfied by the Hive server. Query timeout is not supported by the Hive JDBC driver. |
-| use_remote_estimate | Include the use_remote_estimate to instruct the server to use EXPLAIN commands on the remote server when estimating processing costs. By default, use_remote_estimate is false, and remote tables are assumed to have \`1000\` rows. |
-
-**Example**
-
-The following command creates a foreign server named `hdfs_server` that uses the `hdfs_fdw` foreign data wrapper to connect to a host with an IP address of `170.11.2.148`:
-
-```text
-CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS (host '170.11.2.148', port '10000', client_type 'hiveserver2', auth_type 'LDAP', connect_timeout '10000', query_timeout '10000');
-```
-
-The foreign server uses the default port (`10000`) for the connection to the client on the Hadoop cluster; the connection uses an LDAP server.
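-
-The connection options in the table above can also tune how the data wrapper talks to the remote server. The following definition is a sketch only; the server name and option values are illustrative, not recommendations:
-
-```text
-CREATE SERVER hdfs_server_spark FOREIGN DATA WRAPPER hdfs_fdw
-    OPTIONS (host '170.11.2.148', port '10000', client_type 'spark',
-             fetch_size '20000', log_remote_sql 'true', use_remote_estimate 'true');
-```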
-
-For more information about using the `CREATE SERVER` command, see:
-
- <https://www.postgresql.org/docs/current/sql-createserver.html>
-
-
-
-## CREATE USER MAPPING
-
-Use the `CREATE USER MAPPING` command to define a mapping that associates a Postgres role with a foreign server:
-
-```text
-CREATE USER MAPPING FOR role_name SERVER server_name
- [OPTIONS (option 'value' [, ...])];
-```
-
-You must be the owner of the foreign server to create a user mapping for that server.
-
-Please note: the Hadoop Foreign Data Wrapper supports NOSASL and LDAP authentication. If you are creating a user mapping for a server that uses LDAP authentication, use the `OPTIONS` clause to provide the connection credentials (the username and password) for an existing LDAP user. If the server uses NOSASL authentication, omit the OPTIONS clause when creating the user mapping.
-
-**Parameters**
-
-`role_name`
-
- Use `role_name` to specify the role that will be associated with the foreign server.
-
-`server_name`
-
- Use `server_name` to specify the name of the server that defines a connection to the Hadoop cluster.
-
-`OPTIONS`
-
- Use the `OPTIONS` clause to specify connection information for the foreign server. If you are using LDAP authentication, provide a:
-
- `username`: the name of the user on the LDAP server.
-
- `password`: the password associated with the username.
-
- If you do not provide a user name and password, the data wrapper will use NOSASL authentication.
-
-**Example**
-
-The following command creates a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
-
- `CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server;`
-
-If the database host uses LDAP authentication, provide connection credentials when creating the user mapping:
-
-```text
-CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server OPTIONS (username 'alice', password '1safepwd');
-```
-
-The command creates a user mapping for a role named `enterprisedb` that is associated with a server named `hdfs_server`. When connecting to the LDAP server, the Hive or Spark server will authenticate as `alice`, and provide a password of `1safepwd`.
-
-For detailed information about the `CREATE USER MAPPING` command, see:
-
- <https://www.postgresql.org/docs/current/sql-createusermapping.html>
-
-
-
-## CREATE FOREIGN TABLE
-
-A foreign table is a pointer to a table that resides on the Hadoop host. Before creating a foreign table definition on the Postgres server, connect to the Hive or Spark server and create a table; the columns in the table will map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the Hadoop host. The syntax is:
-
-```text
-CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
- { column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
- | table_constraint }
- [, ... ]
-] )
-[ INHERITS ( parent_table [, ... ] ) ]
- SERVER server_name [ OPTIONS ( option 'value' [, ... ] ) ]
-```
-
-where `column_constraint` is:
-
-```text
-[ CONSTRAINT constraint_name ]
-{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
-```
-
-and `table_constraint` is:
-
-```text
-[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
-```
-
-**Parameters**
-
-`table_name`
-
- Specify the name of the foreign table; include a schema name to specify the schema in which the foreign table should reside.
-
-`IF NOT EXISTS`
-
- Include the `IF NOT EXISTS` clause to instruct the server to not throw an error if a table with the same name already exists; if a table with the same name exists, the server will issue a notice.
-
-`column_name`
-
- Specifies the name of a column in the new table; each column should correspond to a column described on the Hive or Spark server.
-
-`data_type`
-
- Specify the data type of the column; when possible, specify the same data type for each column on the Postgres server and the Hive or Spark server. If a data type with the same name is not available, the Postgres server will attempt to cast the data type to a type compatible with the Hive or Spark server. If the server cannot identify a compatible data type, it will return an error.
-
-`COLLATE collation`
-
- Include the `COLLATE` clause to assign a collation to the column; if not specified, the column data type's default collation is used.
-
-`INHERITS (parent_table [, ... ])`
-
- Include the `INHERITS` clause to specify a list of tables from which the new foreign table automatically inherits all columns. Parent tables can be plain tables or foreign tables.
-
-`CONSTRAINT constraint_name`
-
- Specify an optional name for a column or table constraint; if not specified, the server will generate a constraint name.
-
-`NOT NULL`
-
- Include the `NOT NULL` keywords to indicate that the column is not allowed to contain null values.
-
-`NULL`
-
- Include the `NULL` keywords to indicate that the column is allowed to contain null values. This is the default.
-
-`CHECK (expr) [NO INHERIT]`
-
- Use the `CHECK` clause to specify an expression that produces a Boolean result that each row in the table must satisfy. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.
-
- A `CHECK` expression cannot contain subqueries or refer to variables other than columns of the current row.
-
- Include the `NO INHERIT` keywords to specify that a constraint should not propagate to child tables.
-
-`DEFAULT default_expr`
-
- Include the `DEFAULT` clause to specify a default data value for the column whose column definition it appears within. The data type of the default expression must match the data type of the column.
-
-`SERVER server_name [OPTIONS (option 'value' [, ... ] ) ]`
-
- To create a foreign table that will allow you to query a table that resides on a Hadoop file system, include the `SERVER` clause and specify the `server_name` of the foreign server that uses the Hadoop data adapter.
-
- Use the `OPTIONS` clause to specify the following `options` and their corresponding values:
-
-| option | value |
-| ---------- | --------------------------------------------------------------------------------------- |
-| dbname | The name of the database on the Hive server; the database name is required. |
-| table_name | The name of the table on the Hive server; the default is the name of the foreign table. |
-
-**Example**
-
-To use data that is stored on a distributed file system, you must create a table on the Postgres host that maps the columns of a Hadoop table to the columns of a Postgres table. For example, for a Hadoop table with the following definition:
-
-```text
-CREATE TABLE weblogs (
- client_ip STRING,
- full_request_date STRING,
- day STRING,
- month STRING,
- month_num INT,
- year STRING,
- hour STRING,
- minute STRING,
- second STRING,
- timezone STRING,
- http_verb STRING,
- uri STRING,
- http_status_code STRING,
- bytes_returned STRING,
- referrer STRING,
- user_agent STRING)
-row format delimited
-fields terminated by '\t';
-```
-
-Execute a command on the Postgres server that creates a comparable foreign table:
-
-```text
-CREATE FOREIGN TABLE weblogs
-(
- client_ip TEXT,
- full_request_date TEXT,
- day TEXT,
- Month TEXT,
- month_num INTEGER,
- year TEXT,
- hour TEXT,
- minute TEXT,
- second TEXT,
- timezone TEXT,
- http_verb TEXT,
- uri TEXT,
- http_status_code TEXT,
- bytes_returned TEXT,
- referrer TEXT,
- user_agent TEXT
-)
-SERVER hdfs_server
- OPTIONS (dbname 'webdata', table_name 'weblogs');
-```
-
-Include the `SERVER` clause to specify the name of the foreign server (`hdfs_server`), and use the `OPTIONS` clause to specify the name of the database stored on the Hadoop file system (`webdata`) and the name of the table (`weblogs`) that corresponds to the foreign table on the Postgres server.
-
-For more information about using the `CREATE FOREIGN TABLE` command, see:
-
- <https://www.postgresql.org/docs/current/sql-createforeigntable.html>
-
-### Data Type Mappings
-
-When using the foreign data wrapper, you must create a table on the Postgres server that mirrors the table that resides on the Hive server. The Hadoop data wrapper will automatically convert the following Hive data types to the target Postgres type:
-
-| **Hive** | **Postgres** |
-| ----------- | ---------------- |
-| BIGINT | BIGINT/INT8 |
-| BOOLEAN | BOOL/BOOLEAN |
-| BINARY | BYTEA |
-| CHAR | CHAR |
-| DATE | DATE |
-| DOUBLE | FLOAT8 |
-| FLOAT | FLOAT/FLOAT4 |
-| INT/INTEGER | INT/INTEGER/INT4 |
-| SMALLINT | SMALLINT/INT2 |
-| STRING | TEXT |
-| TIMESTAMP | TIMESTAMP |
-| TINYINT | INT2 |
-| VARCHAR | VARCHAR |
-
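-For example, a Hive table that uses several of these types could be mapped to a Postgres foreign table as sketched below; the table and column names are illustrative, while the server name (`hdfs_server`) and database name (`webdata`) reuse the earlier example:
-
-```text
--- On the Hive or Spark server:
-CREATE TABLE sensor_readings (
-    reading_id BIGINT,
-    device STRING,
-    reading DOUBLE,
-    is_valid BOOLEAN,
-    taken_at TIMESTAMP);
-
--- On the Postgres server:
-CREATE FOREIGN TABLE sensor_readings (
-    reading_id BIGINT,
-    device TEXT,
-    reading FLOAT8,
-    is_valid BOOLEAN,
-    taken_at TIMESTAMP)
-SERVER hdfs_server
-    OPTIONS (dbname 'webdata', table_name 'sensor_readings');
-```
-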
-## DROP EXTENSION
-
-Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database in which the extension is installed, and run the command:
-
-```text
-DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
-```
-
-**Parameters**
-
-`IF EXISTS`
-
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if an extension with the specified name doesn't exist.
-
-`name`
-
- Specify the name of the installed extension.
-
-`CASCADE`
-
- Automatically drop objects that depend on the extension, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
- Refuse to drop the extension if any objects (other than its own member objects and other extensions listed in the same `DROP` command) depend on it.
-
-**Example**
-
-The following command removes the extension from the existing database:
-
- `DROP EXTENSION hdfs_fdw;`
-
-For more information about using the foreign data wrapper `DROP EXTENSION` command, see:
-
- <https://www.postgresql.org/docs/current/sql-dropextension.html>
-
-## DROP SERVER
-
-Use the `DROP SERVER` command to remove a connection to a foreign server. The syntax is:
-
-```text
-DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
-```
-
-Only the owner of a foreign server (or a superuser) can drop it; use the `ALTER SERVER` command to reassign ownership of a foreign server if needed.
-
-**Parameters**
-
-`IF EXISTS`
-
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if a server with the specified name doesn't exist.
-
-`name`
-
- Specify the name of the foreign server.
-
-`CASCADE`
-
- Automatically drop objects that depend on the server, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
- Refuse to drop the server if any objects depend on it.
-
-**Example**
-
-The following command removes a foreign server named `hdfs_server`:
-
- `DROP SERVER hdfs_server;`
-
-For more information about using the `DROP SERVER` command, see:
-
- <https://www.postgresql.org/docs/current/sql-dropserver.html>
-
-## DROP USER MAPPING
-
-Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
-
-```text
-DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
-```
-
-**Parameters**
-
-`IF EXISTS`
-
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the user mapping doesn't exist.
-
-`user_name`
-
- Specify the user name of the mapping.
-
-`server_name`
-
- Specify the name of the server that defines a connection to the Hadoop cluster.
-
-**Example**
-
-The following command drops a user mapping for a role named `enterprisedb`; the mapping is associated with a server named `hdfs_server`:
-
- `DROP USER MAPPING FOR enterprisedb SERVER hdfs_server;`
-
-For detailed information about the `DROP USER MAPPING` command, see:
-
- <https://www.postgresql.org/docs/current/sql-dropusermapping.html>
-
-## DROP FOREIGN TABLE
-
-A foreign table is a pointer to a table that resides on the Hadoop host. Use the `DROP FOREIGN TABLE` command to remove a foreign table. Only the owner of the foreign table can drop it.
-
-```text
-DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
-```
-
-**Parameters**
-
-`IF EXISTS`
-
- Include the `IF EXISTS` clause to instruct the server to issue a notice instead of throwing an error if the foreign table with the specified name doesn't exist.
-
-`name`
-
- Specify the name of the foreign table.
-
-`CASCADE`
-
- Automatically drop objects that depend on the foreign table, and in turn all objects that depend on those objects.
-
-`RESTRICT`
-
- Refuse to drop the foreign table if any objects depend on it.
-
-**Example**
-
-```text
-DROP FOREIGN TABLE warehouse;
-```
-
-For more information about using the `DROP FOREIGN TABLE` command, see:
-
- <https://www.postgresql.org/docs/current/sql-dropforeigntable.html>
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/11_uninstalling_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.0.8/11_uninstalling_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 026a761c828..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/11_uninstalling_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: "Uninstalling the Hadoop Foreign Data Wrapper"
----
-
-
-
-## Uninstalling an RPM Package
-
-You can use the `yum remove` or `dnf remove` command to remove a package installed by `yum` or `dnf`. To remove a package, open a terminal window, assume superuser privileges, and enter the command:
-
-- On RHEL or CentOS 7:
-
- `yum remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-## Uninstalling Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-- To uninstall Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, invoke the following command:
-
- `apt-get remove edb-as<xx>-hdfs-fdw`
-
-> where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/EDB_logo.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/EDB_logo.png
deleted file mode 100644
index f4a93cf57f5..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/EDB_logo.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:07423b012a855204780fe5a2a5a1e33607304a5c3020ae4acbf3d575691dedd6
-size 12136
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/ambari_administrative_interface.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/ambari_administrative_interface.png
deleted file mode 100755
index d44e42a740e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/ambari_administrative_interface.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b4acb08665b6a1df9494f91f9ab64a8f4d0979f61947e19162f419d134e351ea
-size 150222
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/edb_logo.svg b/product_docs/docs/hadoop_data_adapter/2.0.8/images/edb_logo.svg
deleted file mode 100644
index 74babf2f8da..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/edb_logo.svg
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/hadoop_distributed_file_system_with_postgres.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/hadoop_distributed_file_system_with_postgres.png
deleted file mode 100755
index ff6e32d8e94..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/hadoop_distributed_file_system_with_postgres.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fda731e9f3b5018bda72c52b85737198530d8864d7ed5d57e02bcd2a58b537bc
-size 70002
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_complete.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_complete.png
deleted file mode 100755
index 311d632a71e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_complete.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e52a4437577b7a64d7f36c4f837b9a0fab90b163b201055bd817f0e3cbaf112a
-size 39463
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_wizard_welcome_screen.png
deleted file mode 100755
index aaf582bc781..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/installation_wizard_welcome_screen.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:85ea24919ac97d6f8ebb882da665c22e4d5c0942b8491faa5e07be8b93007b60
-size 38341
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/progress_as_the_servers_restart.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/progress_as_the_servers_restart.png
deleted file mode 100755
index 43523c7d1ad..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/progress_as_the_servers_restart.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:46a0feaf37642c3aa87fe8267259687dfa9c9571f1c2663297159ef98356e2fd
-size 85080
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/restart_the_server.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/restart_the_server.png
deleted file mode 100755
index 2518b46d46d..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/restart_the_server.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9e612201379d56b4dffcfb4222ceb765532ca5d097504c1dbabdc6a812afaba9
-size 33996
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/setup_wizard_ready.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/setup_wizard_ready.png
deleted file mode 100755
index 922e318868d..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/setup_wizard_ready.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3ba6a1a88fe8a91b94571b57a36077fce7b3346e850a38f9bf015166ace93e36
-size 16833
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/specify_an_installation_directory.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/specify_an_installation_directory.png
deleted file mode 100755
index 208c85c46af..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/specify_an_installation_directory.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:dae28ab7f567617da49816514a3fa5eb6161e611c416295cfe2f829cd941f98e
-size 20596
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/images/the_installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2.0.8/images/the_installation_wizard_welcome_screen.png
deleted file mode 100755
index 2da19033b0e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/images/the_installation_wizard_welcome_screen.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7fd52b490dd37c86dca15975a7dbc9bdd47c7ae4ab0912d1bf570d785c521f79
-size 33097
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/02_requirements_overview.mdx
deleted file mode 100644
index 699bfc3991b..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/02_requirements_overview.mdx
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: "Requirements Overview"
----
-
-## Supported Versions
-
-The Hadoop Foreign Data Wrapper is certified with EDB Postgres Advanced Server 10 and above.
-
-## Supported Platforms
-
-The Hadoop Foreign Data Wrapper is supported on the following platforms:
-
-**Linux x86-64**
-
- - RHEL 8/OL 8
- - RHEL 7/OL 7
- - Rocky Linux 8/AlmaLinux 8
- - CentOS 7
- - SLES 15
- - SLES 12
- - Ubuntu 20.04 and 18.04 LTS
- - Debian 10.x and 9.x
-
-**Linux on IBM Power (ppc64le)**
- - RHEL 8
- - RHEL 7
- - SLES 15
- - SLES 12
-
-The Hadoop Foreign Data Wrapper supports use of the Hadoop file system using a HiveServer2 interface or Apache Spark using the Spark Thrift Server.
-
-
-
-
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/03_architecture_overview.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/03_architecture_overview.mdx
deleted file mode 100644
index 87c8fb6d024..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/03_architecture_overview.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: "Architecture Overview"
----
-
-
-
-Hadoop is a framework that allows you to store a large data set in a distributed file system.
-
-The Hadoop data wrapper provides an interface between a Hadoop file system and a Postgres database. The Hadoop data wrapper transforms a Postgres `SELECT` statement into a query that is understood by the HiveQL or Spark SQL interface.
-
-![Using a Hadoop distributed file system with Postgres](images/hadoop_distributed_file_system_with_postgres.png)
-
-When possible, the Foreign Data Wrapper asks the Hive or Spark server to perform the actions associated with the `WHERE` clause of a `SELECT` statement. Pushing down the `WHERE` clause improves performance by decreasing the amount of data moving across the network.
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/04_supported_authentication_methods.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/04_supported_authentication_methods.mdx
deleted file mode 100644
index 24377cbadda..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/04_supported_authentication_methods.mdx
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: "Supported Authentication Methods"
----
-
-
-
-The Hadoop Foreign Data Wrapper supports `NOSASL` and `LDAP` authentication modes. To use `NOSASL`, do not specify any `OPTIONS` while creating user mapping. For `LDAP` authentication mode, specify `username` and `password` in `OPTIONS` while creating user mapping.
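-
-For example, the two user mappings look like the following sketch; the server name (`hdfs_server`), role, and credentials are illustrative:
-
-```text
--- NOSASL: omit the OPTIONS clause
-CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server;
-
--- LDAP: supply the LDAP user name and password
-CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server
-    OPTIONS (username 'alice', password '1safepwd');
-```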
-
-## Using LDAP Authentication
-
-When using the Hadoop Foreign Data Wrapper with `LDAP` authentication, you must first configure the `Hive Server` or `Spark Server` to use LDAP authentication. The configured server must provide a `hive-site.xml` file that includes the connection details for the LDAP server. For example:
-
-```text
-<property>
-  <name>hive.server2.authentication</name>
-  <value>LDAP</value>
-  <description>
-    Expects one of [nosasl, none, ldap, kerberos, pam, custom].
-    Client authentication types.
-    NONE: no authentication check
-    LDAP: LDAP/AD based authentication
-    KERBEROS: Kerberos/GSSAPI authentication
-    CUSTOM: Custom authentication provider
-    (Use with property hive.server2.custom.authentication.class)
-    PAM: Pluggable authentication module
-    NOSASL: Raw transport
-  </description>
-</property>
-
-<property>
-  <name>hive.server2.authentication.ldap.url</name>
-  <value>ldap://localhost</value>
-  <description>LDAP connection URL</description>
-</property>
-
-<property>
-  <name>hive.server2.authentication.ldap.baseDN</name>
-  <value>ou=People,dc=itzgeek,dc=local</value>
-  <description>LDAP base DN</description>
-</property>
-```
-
-Then, when starting the hive server, include the path to the `hive-site.xml` file in the command. For example:
-
-```text
-./hive --config path_to_hive-site.xml_file --service hiveServer2
-```
-
-Where *path_to_hive-site.xml_file* specifies the complete path to the `hive-site.xml` file.
-
-When creating the user mapping, you must provide the name of a registered LDAP user and the corresponding password as options. For details, see [Create User Mapping](08_configuring_the_hadoop_data_adapter/#create-user-mapping).
-
-
-
-## Using NOSASL Authentication
-
-When using `NOSASL` authentication with the Hadoop Foreign Data Wrapper, set the authorization to `None`, and the authentication method to `NOSASL` on the `Hive Server` or `Spark Server`. For example, if you start the `Hive Server` at the command line, include the `hive.server2.authentication` configuration parameter in the command:
-
-```text
-hive --service hiveserver2 --hiveconf hive.server2.authentication=NOSASL
-```
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/06_updating_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/06_updating_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 0c309f095c3..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/06_updating_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: "Updating the Hadoop Foreign Data Wrapper"
----
-
-
-
-## Updating an RPM Installation
-
-If you have an existing RPM installation of Hadoop Foreign Data Wrapper, you can use yum or dnf to upgrade your repository configuration file and update to a more recent product version. To update the `edb.repo` file, assume superuser privileges and enter:
-
-- On RHEL or CentOS 7:
-
- `yum upgrade edb-repo`
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf upgrade edb-repo`
-
-yum or dnf will update the `edb.repo` file to enable access to the current EDB repository, configured to connect with the credentials specified in your `edb.repo` file. Then, you can use yum or dnf to upgrade any installed packages:
-
-- On RHEL or CentOS 7:
-
- `yum upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf upgrade edb-as<xx>-hdfs_fdw`
-
- where `xx` is the server version number.
-
-## Updating the Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-To update the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, use the following command:
-
- `apt-get --only-upgrade install edb-as<xx>-hdfs-fdw`
-
- where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/09_using_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 3a97f15e7be..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/09_using_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,302 +0,0 @@
----
-title: "Using the Hadoop Foreign Data Wrapper"
----
-
-
-
-You can use the Hadoop Foreign Data Wrapper with either Apache Hive or Apache Spark. Both Hive and Spark store metadata in the configured metastore, where databases and tables are created using HiveQL.
-
-## Using HDFS FDW with Apache Hive on Top of Hadoop
-
-`Apache Hive` data warehouse software facilitates querying and managing large datasets residing in distributed storage. Hive provides a mechanism to project structure onto this data and query the data using a SQL-like language called `HiveQL`. At the same time, this language allows traditional map/reduce programmers to plug in their custom mappers and reducers when it is inconvenient or inefficient to express this logic in `HiveQL`.
-
-There are two versions of the Hive server, `HiveServer1` and `HiveServer2`, which can be downloaded from the [Apache Hive website](https://hive.apache.org/downloads.html).
-
-!!! Note
- The Hadoop Foreign Data Wrapper supports only `HiveServer2`.
-
-To use HDFS FDW with Apache Hive on top of Hadoop:
-
-Step 1: Download [weblogs_parse](http://wiki.pentaho.com/download/attachments/23531451/weblogs_parse.zip?version=1&modificationDate=1327096242000/) and follow the instructions at the [Wiki Pentaho website](https://wiki.pentaho.com/display/BAD/Transforming+Data+within+Hive/).
-
-Step 2: Upload the `weblogs_parse.txt` file using the following commands:
-
-```text
-hadoop fs -mkdir /weblogs
-hadoop fs -mkdir /weblogs/parse
-hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
-```
-
-Step 3: Start `HiveServer2`, if it is not already running, using the following command:
-
-```text
-$HIVE_HOME/bin/hiveserver2
-```
-
-or
-
-```text
-$HIVE_HOME/bin/hive --service hiveserver2
-```
-
-Step 4: Connect to `HiveServer2` using the hive `beeline` client. For example:
-
-```text
-$ beeline
-Beeline version 1.0.1 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
-```
-
-Step 5: Create a table in Hive. The example creates a table named `weblogs`:
-
-```text
-CREATE TABLE weblogs (
- client_ip STRING,
- full_request_date STRING,
- day STRING,
- month STRING,
- month_num INT,
- year STRING,
- hour STRING,
- minute STRING,
- second STRING,
- timezone STRING,
- http_verb STRING,
- uri STRING,
- http_status_code STRING,
- bytes_returned STRING,
- referrer STRING,
- user_agent STRING)
-row format delimited
-fields terminated by '\t';
-```
-
-Step 6: Load data into the table.
-
-```text
-hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
-```
-
-Step 7: Access your data from Postgres; you can now use the `weblogs` table. Once you are connected using psql, follow these steps:
-
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
-
--- create server object
-CREATE SERVER hdfs_server
- FOREIGN DATA WRAPPER hdfs_fdw
- OPTIONS (host '127.0.0.1');
-
--- create user mapping
-CREATE USER MAPPING FOR postgres
- SERVER hdfs_server OPTIONS (username 'hive_username', password 'hive_password');
-
--- create foreign table
-CREATE FOREIGN TABLE weblogs
-(
- client_ip TEXT,
- full_request_date TEXT,
- day TEXT,
- Month TEXT,
- month_num INTEGER,
- year TEXT,
- hour TEXT,
- minute TEXT,
- second TEXT,
- timezone TEXT,
- http_verb TEXT,
- uri TEXT,
- http_status_code TEXT,
- bytes_returned TEXT,
- referrer TEXT,
- user_agent TEXT
-)
-SERVER hdfs_server
- OPTIONS (dbname 'default', table_name 'weblogs');
-
-
--- select from table
-postgres=# SELECT DISTINCT client_ip IP, count(*)
- FROM weblogs GROUP BY IP HAVING count(*) > 5000 ORDER BY 1;
- ip | count
------------------+-------
- 13.53.52.13 | 5494
- 14.323.74.653 | 16194
- 322.6.648.325 | 13242
- 325.87.75.336 | 6500
- 325.87.75.36 | 6498
- 361.631.17.30 | 64979
- 363.652.18.65 | 10561
- 683.615.622.618 | 13505
-(8 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (VERBOSE, COSTS OFF) SELECT client_ip, full_request_date, uri FROM weblogs WHERE http_status_code = 200;
- QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
- Foreign Scan on public.weblogs
- Output: client_ip, full_request_date, uri
- Remote SQL: SELECT client_ip, full_request_date, uri FROM default.weblogs WHERE ((http_status_code = '200'))
-(3 rows)
-```
-
-## Using HDFS FDW with Apache Spark on Top of Hadoop
-
-Apache Spark is a general-purpose distributed computing framework that supports a wide variety of use cases. It provides real-time streaming as well as batch processing with speed, ease of use, and sophisticated analytics. Spark does not provide a storage layer; it relies on third-party storage providers like Hadoop, HBase, Cassandra, S3, and so on. Spark integrates seamlessly with Hadoop and can process existing data. Spark SQL is 100% compatible with `HiveQL` and can be used as a replacement for `HiveServer2`, using the `Spark Thrift Server`.
-
-To use HDFS FDW with Apache Spark on top of Hadoop:
-
-Step 1: Download and install Apache Spark in local mode.
-
-Step 2: In the folder `$SPARK_HOME/conf` create a file `spark-defaults.conf` containing the following line:
-
-```text
-spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
-```
-
-By default, Spark uses `derby` for both the metadata and the data itself (called the warehouse in Spark). To have Spark use Hadoop as its warehouse, add this property.
-
-Step 3: Start the Spark Thrift Server.
-
-```text
-./start-thriftserver.sh
-```
-
-Step 4: Make sure the Spark Thrift server is running and writing to a log file.
-
-Step 5: Create a local file (`names.txt`) that contains the following entries:
-
-```text
-$ cat /tmp/names.txt
-1,abcd
-2,pqrs
-3,wxyz
-4,a_b_c
-5,p_q_r
-,
-```
-
-Step 6: Connect to Spark Thrift Server2 using the Spark `beeline` client. For example:
-
-```text
-$ beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-```
-
-Step 7: Prepare the sample data on Spark. Run the following commands in the `beeline` command line tool:
-
-```text
-./beeline
-Beeline version 1.2.1.spark2 by Apache Hive
-beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
-Connecting to jdbc:hive2://localhost:10000/default;auth=noSasl
-Enter password for jdbc:hive2://localhost:10000/default;auth=noSasl:
-Connected to: Spark SQL (version 2.1.1)
-Driver: Hive JDBC (version 1.2.1.spark2)
-Transaction isolation: TRANSACTION_REPEATABLE_READ
-0: jdbc:hive2://localhost:10000> create database my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.379 seconds)
-0: jdbc:hive2://localhost:10000> use my_test_db;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.03 seconds)
-0: jdbc:hive2://localhost:10000> create table my_names_tab(a int, name string)
- row format delimited fields terminated by ' ';
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.11 seconds)
-0: jdbc:hive2://localhost:10000>
-
-0: jdbc:hive2://localhost:10000> load data local inpath '/tmp/names.txt'
- into table my_names_tab;
-+---------+--+
-| Result |
-+---------+--+
-+---------+--+
-No rows selected (0.33 seconds)
-0: jdbc:hive2://localhost:10000> select * from my_names_tab;
-+-------+---------+--+
-| a | name |
-+-------+---------+--+
-| 1 | abcd |
-| 2 | pqrs |
-| 3 | wxyz |
-| 4 | a_b_c |
-| 5 | p_q_r |
-| NULL | NULL |
-+-------+---------+--+
-```
-
-The following commands list the corresponding files in Hadoop:
-
-```text
-$ hadoop fs -ls /user/hive/warehouse/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db
-
-$ hadoop fs -ls /user/hive/warehouse/my_test_db.db/
-Found 1 items
-drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db/my_names_tab
-```
-
-Step 8: Access your data from Postgres using psql:
-
-```text
--- set the GUC variables appropriately, e.g. :
-hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
-hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
-
--- load extension first time after install
-CREATE EXTENSION hdfs_fdw;
-
--- create server object
-CREATE SERVER hdfs_server
- FOREIGN DATA WRAPPER hdfs_fdw
- OPTIONS (host '127.0.0.1', port '10000', client_type 'spark', auth_type 'NOSASL');
-
--- create user mapping
-CREATE USER MAPPING FOR postgres
- SERVER hdfs_server OPTIONS (username 'spark_username', password 'spark_password');
-
--- create foreign table
-CREATE FOREIGN TABLE f_names_tab( a int, name varchar(255)) SERVER hdfs_server
- OPTIONS (dbname 'my_test_db', table_name 'my_names_tab');
-
--- select the data from foreign server
-select * from f_names_tab;
- a | name
----+--------
- 1 | abcd
- 2 | pqrs
- 3 | wxyz
- 4 | a_b_c
- 5 | p_q_r
- 0 |
-(6 rows)
-
--- EXPLAIN output showing WHERE clause being pushed down to remote server.
-EXPLAIN (verbose, costs off) SELECT name FROM f_names_tab WHERE a > 3;
- QUERY PLAN
---------------------------------------------------------------------------
- Foreign Scan on public.f_names_tab
- Output: name
- Remote SQL: SELECT name FROM my_test_db.my_names_tab WHERE ((a > '3'))
-(3 rows)
-```
-
-!!! Note
- This example uses the same port as the Hive Thrift Server because the Spark Thrift Server is compatible with it. Applications written for HiveServer2 work with Spark, except for the behavior of the `ANALYZE` command and the connection string in the case of `NOSASL`. We recommend using `ALTER SERVER` to change the `client_type` option if Hive is replaced with Spark.
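-
-A sketch of that change, assuming the `hdfs_server` definition from the Hive example earlier in this chapter:
-
-```text
--- Add client_type if the server definition did not specify it;
--- use SET instead of ADD if it did.
-ALTER SERVER hdfs_server OPTIONS (ADD client_type 'spark');
-```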
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/10_identifying_data_adapter_version.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/10_identifying_data_adapter_version.mdx
deleted file mode 100644
index fa6e51f1d5c..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/10_identifying_data_adapter_version.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: "Identifying the Hadoop Foreign Data Wrapper Version"
----
-
-
-
-The Hadoop Foreign Data Wrapper includes a function that you can use to identify the currently installed version of the `.so` file for the data wrapper. To use the function, connect to the Postgres server, and enter:
-
-```text
-SELECT hdfs_fdw_version();
-```
-
-The function returns the version number:
-
-```text
-hdfs_fdw_version
------------------
-
-```
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/11_uninstalling_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/11_uninstalling_the_hadoop_data_adapter.mdx
deleted file mode 100644
index 026a761c828..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/11_uninstalling_the_hadoop_data_adapter.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: "Uninstalling the Hadoop Foreign Data Wrapper"
----
-
-
-
-## Uninstalling an RPM Package
-
-You can use the `yum remove` or `dnf remove` command to remove a package installed by `yum` or `dnf`. To remove a package, open a terminal window, assume superuser privileges, and enter the command:
-
-- On RHEL or CentOS 7:
-
- `yum remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-- On RHEL or Rocky Linux or AlmaLinux 8:
-
- `dnf remove edb-as<xx>-hdfs_fdw`
-
-> where `xx` is the server version number.
-
-## Uninstalling Hadoop Foreign Data Wrapper on a Debian or Ubuntu Host
-
-- To uninstall Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, invoke the following command:
-
- `apt-get remove edb-as<xx>-hdfs-fdw`
-
-> where `xx` is the server version number.
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/EDB_logo.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/EDB_logo.png
deleted file mode 100644
index f4a93cf57f5..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/EDB_logo.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:07423b012a855204780fe5a2a5a1e33607304a5c3020ae4acbf3d575691dedd6
-size 12136
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/ambari_administrative_interface.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/ambari_administrative_interface.png
deleted file mode 100755
index d44e42a740e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/ambari_administrative_interface.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b4acb08665b6a1df9494f91f9ab64a8f4d0979f61947e19162f419d134e351ea
-size 150222
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/edb_logo.svg b/product_docs/docs/hadoop_data_adapter/2.1.0/images/edb_logo.svg
deleted file mode 100644
index 74babf2f8da..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/edb_logo.svg
+++ /dev/null
@@ -1,56 +0,0 @@
-
-
-
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/hadoop_distributed_file_system_with_postgres.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/hadoop_distributed_file_system_with_postgres.png
deleted file mode 100755
index ff6e32d8e94..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/hadoop_distributed_file_system_with_postgres.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:fda731e9f3b5018bda72c52b85737198530d8864d7ed5d57e02bcd2a58b537bc
-size 70002
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_complete.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_complete.png
deleted file mode 100755
index 311d632a71e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_complete.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:e52a4437577b7a64d7f36c4f837b9a0fab90b163b201055bd817f0e3cbaf112a
-size 39463
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_wizard_welcome_screen.png
deleted file mode 100755
index aaf582bc781..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/installation_wizard_welcome_screen.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:85ea24919ac97d6f8ebb882da665c22e4d5c0942b8491faa5e07be8b93007b60
-size 38341
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/progress_as_the_servers_restart.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/progress_as_the_servers_restart.png
deleted file mode 100755
index 43523c7d1ad..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/progress_as_the_servers_restart.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:46a0feaf37642c3aa87fe8267259687dfa9c9571f1c2663297159ef98356e2fd
-size 85080
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/restart_the_server.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/restart_the_server.png
deleted file mode 100755
index 2518b46d46d..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/restart_the_server.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9e612201379d56b4dffcfb4222ceb765532ca5d097504c1dbabdc6a812afaba9
-size 33996
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/setup_wizard_ready.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/setup_wizard_ready.png
deleted file mode 100755
index 922e318868d..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/setup_wizard_ready.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:3ba6a1a88fe8a91b94571b57a36077fce7b3346e850a38f9bf015166ace93e36
-size 16833
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/specify_an_installation_directory.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/specify_an_installation_directory.png
deleted file mode 100755
index 208c85c46af..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/specify_an_installation_directory.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:dae28ab7f567617da49816514a3fa5eb6161e611c416295cfe2f829cd941f98e
-size 20596
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/images/the_installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2.1.0/images/the_installation_wizard_welcome_screen.png
deleted file mode 100755
index 2da19033b0e..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/images/the_installation_wizard_welcome_screen.png
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7fd52b490dd37c86dca15975a7dbc9bdd47c7ae4ab0912d1bf570d785c521f79
-size 33097
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/index.mdx b/product_docs/docs/hadoop_data_adapter/2.1.0/index.mdx
deleted file mode 100644
index bb911fb9abe..00000000000
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/index.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: "Hadoop Foreign Data Wrapper Guide"
----
-
-The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
-
-The Hadoop Foreign Data Wrapper can be installed with an RPM package. You can download an installer from the [EDB website](https://www.enterprisedb.com/software-downloads-postgres/).
-
-This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
-
-
diff --git a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
new file mode 100644
index 00000000000..5545cf7ef92
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx
@@ -0,0 +1,16 @@
+---
+title: "Supported Database Versions"
+---
+
+This table lists the latest Hadoop Foreign Data Wrapper versions and the EDB Postgres Advanced Server (EPAS) versions they support. The Hadoop Foreign Data Wrapper is supported on the same platforms as EDB Postgres Advanced Server; see Product Compatibility for details. A quick way to check an existing installation against this table is shown below it.
+
+| Hadoop Foreign Data Wrapper | EPAS 14 | EPAS 13 | EPAS 12 | EPAS 11 | EPAS 10 |
+| --------------------------- | ------- | ------- | ------- | ------- | ------- |
+| 2.1.0 | Y | Y | Y | Y | Y |
+| 2.0.8 | N | Y | Y | Y | Y |
+| 2.0.7 | N | Y | Y | N | N |
+| 2.0.5 | N | N | Y | N | N |
+| 2.0.4 | N | N | N | Y | Y |
+
+
+
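One quick way to check an existing installation against this matrix is to query both version numbers from `psql`; `hdfs_fdw_version()` is the data wrapper's version function described later in this documentation:

```sql
-- Report the EDB Postgres Advanced Server version and the installed
-- Hadoop Foreign Data Wrapper version
SELECT version();
SELECT hdfs_fdw_version();
```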
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/03_architecture_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.8/03_architecture_overview.mdx
rename to product_docs/docs/hadoop_data_adapter/2/03_architecture_overview.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/04_supported_authentication_methods.mdx b/product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.8/04_supported_authentication_methods.mdx
rename to product_docs/docs/hadoop_data_adapter/2/04_supported_authentication_methods.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/01_hadoop_rhel8_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/01_hadoop_rhel8_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/01_hadoop_rhel8_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/01_hadoop_rhel8_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/02_hadoop_other_linux8_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/02_hadoop_other_linux8_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/02_hadoop_other_linux8_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/02_hadoop_other_linux8_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/03_hadoop_rhel7_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/03_hadoop_rhel7_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/03_hadoop_rhel7_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/03_hadoop_rhel7_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/04_hadoop_centos7_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/04_hadoop_centos7_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/04_hadoop_centos7_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/04_hadoop_centos7_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/05_hadoop_sles15_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/05_hadoop_sles15_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/05_hadoop_sles15_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/05_hadoop_sles15_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/07_hadoop_sles12_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/07_hadoop_sles12_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/07_hadoop_sles12_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/07_hadoop_sles12_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/09_hadoop__ubuntu20_deb10_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/09_hadoop__ubuntu20_deb10_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/09_hadoop__ubuntu20_deb10_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/09_hadoop__ubuntu20_deb10_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/11_hadoop__ubuntu18_deb9_x86.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/11_hadoop__ubuntu18_deb9_x86.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/11_hadoop__ubuntu18_deb9_x86.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/11_hadoop__ubuntu18_deb9_x86.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/13_hadoop_rhel8_ppcle.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/13_hadoop_rhel8_ppcle.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/13_hadoop_rhel8_ppcle.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/13_hadoop_rhel8_ppcle.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/15_hadoop_rhel7_ppcle.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/15_hadoop_rhel7_ppcle.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/15_hadoop_rhel7_ppcle.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/15_hadoop_rhel7_ppcle.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/17_hadoop_sles15_ppcle.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/17_hadoop_sles15_ppcle.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/17_hadoop_sles15_ppcle.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/17_hadoop_sles15_ppcle.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/19_hadoop_sles12_ppcle.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/19_hadoop_sles12_ppcle.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/19_hadoop_sles12_ppcle.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/19_hadoop_sles12_ppcle.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/index.mdx b/product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/index.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/05_installing_the_hadoop_data_adapter/index.mdx
rename to product_docs/docs/hadoop_data_adapter/2/05_installing_the_hadoop_data_adapter/index.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx
new file mode 100644
index 00000000000..f036e321d51
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/06_updating_the_hadoop_data_adapter.mdx
@@ -0,0 +1,66 @@
+---
+title: "Upgrading the Hadoop Foreign Data Wrapper"
+---
+
+
+
+If you have an existing installation of the Hadoop Foreign Data Wrapper that you installed using the EDB repository, you can use the `upgrade` command to update your repository configuration file and then upgrade to a more recent product version. To start the process, open a terminal window, assume superuser privileges, and enter the commands applicable to the operating system and package manager used for the installation:
+
+## On RHEL or Rocky Linux or AlmaLinux or OL 8
+
+```shell
+# Update your edb.repo file to access the current EDB repository
+dnf upgrade edb-repo
+
+# Upgrade to the latest product version
+dnf upgrade edb-as<xx>-hdfs_fdw
+# where <xx> is the EDB Postgres Advanced Server version number
+```
+## On RHEL or CentOS or OL 7
+
+```shell
+# Update your edb.repo file to access the current EDB repository
+yum upgrade edb-repo
+
+# Upgrade to the latest product version
+yum upgrade edb-as<xx>-hdfs_fdw
+# where <xx> is the EDB Postgres Advanced Server version number
+```
+
+## On SLES
+
+```shell
+# Update your edb.repo file to access the current EDB repository
+zypper upgrade edb-repo
+
+# Upgrade to the latest product version
+zypper upgrade edb-as<xx>-hdfs_fdw
+# where <xx> is the EDB Postgres Advanced Server version number
+```
+
+## On Debian or Ubuntu
+
+```shell
+# Update the package lists from the EDB repository
+apt-get update
+
+# Upgrade to the latest product version
+apt-get --only-upgrade install edb-as<xx>-hdfs-fdw
+# where <xx> is the EDB Postgres Advanced Server version number
+```
+
+## On RHEL or CentOS 7 on PPCLE
+
+```shell
+# Update your edb.repo file to access the current EDB repository
+yum upgrade edb-repo
+
+# Upgrade to the latest product version
+yum upgrade edb-as<xx>-hdfs_fdw
+# where <xx> is the EDB Postgres Advanced Server version number
+```
+
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/07_features_of_hdfs_fdw.mdx b/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/07_features_of_hdfs_fdw.mdx
rename to product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
similarity index 99%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/08_configuring_the_hadoop_data_adapter.mdx
rename to product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
index d15b2ec3477..60c4b5010e0 100644
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/08_configuring_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
@@ -97,8 +97,7 @@ The role that defines the server is the owner of the server; use the `ALTER SERV
| auth_type | The authentication type of the client; specify LDAP or NOSASL. If you do not specify an auth_type, the data wrapper will decide the auth_type value on the basis of the user mapping. If the user mapping includes a user name and password, the data wrapper will use LDAP authentication.
+
+You use the `remove` command to uninstall Hadoop Foreign Data Wrapper packages. To uninstall, open a terminal window, assume superuser privileges, and enter the command applicable to the operating system and package manager used for the installation:
+
+- On RHEL or CentOS 7:
+
+ `yum remove edb-as-hdfs_fdw`
+
+- On RHEL or Rocky Linux or AlmaLinux 8:
+
+ `dnf remove edb-as-hdfs_fdw`
+
+- On SLES:
+
+ `zypper remove edb-as-hdfs_fdw`
+
+- On Debian or Ubuntu:
+
+ `apt-get remove edb-as-hdfs-fdw`
+
+
+
+
diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.4.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.4.mdx
new file mode 100644
index 00000000000..f82b3072512
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.4.mdx
@@ -0,0 +1,12 @@
+---
+title: "Version 2.0.4"
+---
+
+Enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.0.4 include:
+
+| Type | Description |
+| ----------- |------------ |
+| Enhancement | Support for EDB Postgres Advanced Server 11. |
+
+
+
diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.5.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.5.mdx
new file mode 100644
index 00000000000..1d1c34c5db3
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.5.mdx
@@ -0,0 +1,12 @@
+---
+title: "Version 2.0.5"
+---
+
+Enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.0.5 include:
+
+| Type | Description |
+| ----------- |------------ |
+| Enhancement | Support for EDB Postgres Advanced Server 12. |
+
+
+
diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.7.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.7.mdx
new file mode 100644
index 00000000000..71142180a98
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.7.mdx
@@ -0,0 +1,12 @@
+---
+title: "Version 2.0.7"
+---
+
+
+New features, enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.0.7 include the following:
+
+| Type | Description |
+| ----------- |--------------------------------------------- |
+| Enhancement | Support for EDB Postgres Advanced Server 13. |
+| Enhancement | Support for Ubuntu 20.04 LTS platform. |
+| Enhancement | Updated LICENSE file. |
diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.8.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.8.mdx
new file mode 100644
index 00000000000..5b3c48a7617
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.0.8.mdx
@@ -0,0 +1,18 @@
+---
+title: "Version 2.0.8"
+---
+
+
+New features, enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.0.8 include the following:
+
+| Type | Description |
+| ----------- |--------------------------------- |
+| Enhancement | Support for Hadoop version 3.2.x |
+| Enhancement | Support for Hive version 3.1.x |
+| Enhancement | Support for Spark version 3.0.x |
+| Bug Fix | Fixed an error when building a SELECT query that contains a whole-row reference. |
+| Bug Fix | Fixed a crash in queries involving LEFT JOIN LATERAL. |
+| Bug Fix | Use proper Hive SQL quoting for table or column names containing special characters. |
+
+
+
diff --git a/product_docs/docs/hadoop_data_adapter/2.1.0/01_rel_notes.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.1.0.mdx
similarity index 70%
rename from product_docs/docs/hadoop_data_adapter/2.1.0/01_rel_notes.mdx
rename to product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.1.0.mdx
index 481fa815104..996c8f4e589 100644
--- a/product_docs/docs/hadoop_data_adapter/2.1.0/01_rel_notes.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.1.0.mdx
@@ -1,11 +1,12 @@
---
-title: "Release Notes"
+title: "Version 2.1.0"
---
Enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.1.0 include:
-| Type | Description |
-| ---- |------------ |
+| Type | Description |
+| ----------- |------------ |
+| Enhancement | Support for EDB Postgres Advanced Server 14. |
| Enhancement | Join Pushdown: If a query has a join between two foreign tables from the same remote server, you can now push that join down to the remote server instead of fetching all the rows for both the tables and performing a join locally. |
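For illustration, assuming two foreign tables `f_emp` and `f_dept` defined against the same `hdfs_fdw` foreign server (the table and column names here are only examples), a pushed-down join is visible in the query plan:

```sql
EXPLAIN (VERBOSE, COSTS OFF)
SELECT e.name, d.dept_name
FROM f_emp e
     JOIN f_dept d ON e.dept_id = d.dept_id;
-- When the join qualifies for pushdown, the plan shows a single Foreign Scan
-- whose Remote SQL contains the JOIN, instead of two separate Foreign Scans
-- joined locally.
```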
diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx
new file mode 100644
index 00000000000..badd229eebd
--- /dev/null
+++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx
@@ -0,0 +1,22 @@
+---
+title: "Release Notes"
+redirects:
+ - ../01_whats_new/
+navigation:
+- hadoop_rel_notes_2.1.0
+- hadoop_rel_notes_2.0.8
+- hadoop_rel_notes_2.0.7
+- hadoop_rel_notes_2.0.5
+- hadoop_rel_notes_2.0.4
+---
+
+
+The Hadoop Foreign Data Wrapper documentation describes the latest version including minor releases and patches. The release notes in this section provide information on what was new in each release. For new functionality introduced in a minor or patch release, there are also indicators within the content about what release introduced the feature.
+
+| Version | Release Date |
+| --------------------------------| ------------ |
+| [2.1.0](hadoop_rel_notes_2.1.0) | 2021 Dec 02 |
+| [2.0.8](hadoop_rel_notes_2.0.8) | 2021 Jun 24 |
+| [2.0.7](hadoop_rel_notes_2.0.7) | 2020 Nov 23 |
+| [2.0.5](hadoop_rel_notes_2.0.5) | 2019 Dec 10 |
+| [2.0.4](hadoop_rel_notes_2.0.4) | 2018 Nov 28 |
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/EDB_logo.png b/product_docs/docs/hadoop_data_adapter/2/images/EDB_logo.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/EDB_logo.png
rename to product_docs/docs/hadoop_data_adapter/2/images/EDB_logo.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/ambari_administrative_interface.png b/product_docs/docs/hadoop_data_adapter/2/images/ambari_administrative_interface.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/ambari_administrative_interface.png
rename to product_docs/docs/hadoop_data_adapter/2/images/ambari_administrative_interface.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/edb_logo.svg b/product_docs/docs/hadoop_data_adapter/2/images/edb_logo.svg
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/edb_logo.svg
rename to product_docs/docs/hadoop_data_adapter/2/images/edb_logo.svg
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/hadoop_distributed_file_system_with_postgres.png b/product_docs/docs/hadoop_data_adapter/2/images/hadoop_distributed_file_system_with_postgres.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/hadoop_distributed_file_system_with_postgres.png
rename to product_docs/docs/hadoop_data_adapter/2/images/hadoop_distributed_file_system_with_postgres.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/installation_complete.png b/product_docs/docs/hadoop_data_adapter/2/images/installation_complete.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/installation_complete.png
rename to product_docs/docs/hadoop_data_adapter/2/images/installation_complete.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2/images/installation_wizard_welcome_screen.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/installation_wizard_welcome_screen.png
rename to product_docs/docs/hadoop_data_adapter/2/images/installation_wizard_welcome_screen.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/progress_as_the_servers_restart.png b/product_docs/docs/hadoop_data_adapter/2/images/progress_as_the_servers_restart.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/progress_as_the_servers_restart.png
rename to product_docs/docs/hadoop_data_adapter/2/images/progress_as_the_servers_restart.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/restart_the_server.png b/product_docs/docs/hadoop_data_adapter/2/images/restart_the_server.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/restart_the_server.png
rename to product_docs/docs/hadoop_data_adapter/2/images/restart_the_server.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/setup_wizard_ready.png b/product_docs/docs/hadoop_data_adapter/2/images/setup_wizard_ready.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/setup_wizard_ready.png
rename to product_docs/docs/hadoop_data_adapter/2/images/setup_wizard_ready.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/specify_an_installation_directory.png b/product_docs/docs/hadoop_data_adapter/2/images/specify_an_installation_directory.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/specify_an_installation_directory.png
rename to product_docs/docs/hadoop_data_adapter/2/images/specify_an_installation_directory.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.7/images/the_installation_wizard_welcome_screen.png b/product_docs/docs/hadoop_data_adapter/2/images/the_installation_wizard_welcome_screen.png
similarity index 100%
rename from product_docs/docs/hadoop_data_adapter/2.0.7/images/the_installation_wizard_welcome_screen.png
rename to product_docs/docs/hadoop_data_adapter/2/images/the_installation_wizard_welcome_screen.png
diff --git a/product_docs/docs/hadoop_data_adapter/2.0.8/index.mdx b/product_docs/docs/hadoop_data_adapter/2/index.mdx
similarity index 59%
rename from product_docs/docs/hadoop_data_adapter/2.0.8/index.mdx
rename to product_docs/docs/hadoop_data_adapter/2/index.mdx
index bb911fb9abe..5a4fa872406 100644
--- a/product_docs/docs/hadoop_data_adapter/2.0.8/index.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/index.mdx
@@ -1,5 +1,18 @@
---
-title: "Hadoop Foreign Data Wrapper Guide"
+title: "Hadoop Foreign Data Wrapper"
+navigation:
+- hadoop_rel_notes
+- 02_requirements_overview
+- 03_architecture_overview
+- 04_supported_authentication_methods
+- 05_installing_the_hadoop_data_adapter
+- 08_configuring_the_hadoop_data_adapter
+- 06_updating_the_hadoop_data_adapter
+- 07_features_of_hdfs_fdw
+- 09_using_the_hadoop_data_adapter
+- 10_identifying_data_adapter_version
+- 10a_example_join_pushdown
+- 11_uninstalling_the_hadoop_data_adapter
---
The Hadoop Foreign Data Wrapper (`hdfs_fdw`) is a Postgres extension that allows you to access data that resides on a Hadoop file system from EDB Postgres Advanced Server. The foreign data wrapper makes the Hadoop file system a read-only data source that you can use with Postgres functions and utilities, or in conjunction with other data that resides on a Postgres host.
@@ -8,8 +21,3 @@ The Hadoop Foreign Data Wrapper can be installed with an RPM package. You can do
This guide uses the term `Postgres` to refer to an instance of EDB Postgres Advanced Server.
-
diff --git a/product_docs/docs/mongo_data_adapter/mongo5.3.0_rel_notes.mdx b/product_docs/docs/mongo_data_adapter/mongo5.3.0_rel_notes.mdx
new file mode 100644
index 00000000000..53ed4091569
--- /dev/null
+++ b/product_docs/docs/mongo_data_adapter/mongo5.3.0_rel_notes.mdx
@@ -0,0 +1,17 @@
+---
+title: "Version 5.3.0"
+redirects:
+- 01_5.3.0_rel_notes
+---
+
+Enhancements, bug fixes, and other changes in MongoDB Foreign Data Wrapper 5.3.0
+include:
+
+| Type | Description |
+| ----------- |------------ |
+| Enhancement | Support for EDB Postgres Advanced Server 14. |
+| Enhancement | Join pushdown: If a query has a join between two foreign tables from the same remote server, you can now push that join down to the remote server instead of fetching all the rows for both the tables and performing a join locally. |
+| Bug Fix | Improved API performance. |
+| Bug Fix | Added support for whole-row references. |
+
+
diff --git a/static/_redirects b/static/_redirects
index dd10cee7cfe..645d9f37bf0 100644
--- a/static/_redirects
+++ b/static/_redirects
@@ -135,6 +135,12 @@
/docs/mongo_data_adapter/5.2.7/* /docs/mongo_data_adapter/5/:splat 301
/docs/mongo_data_adapter/5.2.8/* /docs/mongo_data_adapter/5/:splat 301
/docs/mongo_data_adapter/5.3.0/* /docs/mongo_data_adapter/5/:splat 301
+/docs/hadoop_data_adapter/2.0.7/* /docs/hadoop_data_adapter/2/:splat 301
+/docs/hadoop_data_adapter/2.0.8/* /docs/hadoop_data_adapter/2/:splat 301
+/docs/hadoop_data_adapter/2.1.0/* /docs/hadoop_data_adapter/2/:splat 301
+/docs/mysql_data_adapter/2.7.0/* /docs/mysql_data_adapter/2/:splat 301
+/docs/mysql_data_adapter/2.6.0/* /docs/mysql_data_adapter/2/:splat 301
+/docs/mysql_data_adapter/2.5.5/* /docs/mysql_data_adapter/2/:splat 301
# BigAnimal
/docs/edbcloud/* /docs/biganimal/:splat 301