diff --git a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
index d8542bd2eb5..4cdb335c9a2 100644
--- a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
+++ b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
@@ -45,7 +45,7 @@ The table shows a breakdown of management costs for Microsoft Azure.
 | Azure blob storage | BigAnimal uses blob storage to store metadata about your account. |
 | Key vault | BigAnimal uses key vault to securely store credentials for managing your infrastructure. |
 
-At list price, estimated overall monthly management costs are $400–$700 for a single cluster. Check with your Microsoft Azure account manager for specifics that apply to your account.
+At list price, estimated overall monthly management costs are $400–$700 for a single region. Check with your Microsoft Azure account manager for specifics that apply to your account.
 
 To get a better sense of your Microsoft Azure costs, check out the Microsoft Azure pricing [calculator](https://azure.microsoft.com/en-us/pricing/calculator/) and reach out to [BigAnimal Support](../overview/support).
@@ -60,7 +60,7 @@ The table shows a breakdown of management costs for AWS.
 | Simple Storage Service (S3) | BigAnimal uses S3 to store metadata about your account. |
 | Key Management Service | BigAnimal uses the key management service to securely store credentials for managing your infrastructure. |
 
-At list price, estimated overall monthly management costs are $400–$600 for a single cluster. Check with your AWS account manager for specifics that apply to your account.
+At list price, estimated overall monthly management costs are $400–$600 for a single region. Check with your AWS account manager for specifics that apply to your account.
 To get a better sense of your AWS costs, check out the AWS pricing [calculator](https://calculator.aws/#/) and reach out to [BigAnimal Support](../overview/support).
diff --git a/product_docs/docs/eprs/6.2/03_installation/01_installing_with_stackbuilder.mdx b/product_docs/docs/eprs/6.2/03_installation/01_installing_with_stackbuilder.mdx
index dabdfcc30d5..f776142c753 100644
--- a/product_docs/docs/eprs/6.2/03_installation/01_installing_with_stackbuilder.mdx
+++ b/product_docs/docs/eprs/6.2/03_installation/01_installing_with_stackbuilder.mdx
@@ -43,7 +43,7 @@ Follow the directions for your host operating system to install Java runtime.
 **Step 5 (For Advanced Server):** Expand the EnterpriseDB Tools node and check the box for Replication Server. Click the Next button.
 
 !!! Note
-    Though the following ../images show Replication Server v6.0, use the same process for Replication Server v6.2.
+    Though the following images show Replication Server v6.0, use the same process for Replication Server v6.2.
 
 ![StackBuilder Plus applications](../images/image29.png)
diff --git a/product_docs/docs/eprs/6.2/03_installation/02_installing_from_cli.mdx b/product_docs/docs/eprs/6.2/03_installation/02_installing_from_cli.mdx
index 06ce8a05775..01eba3022ba 100644
--- a/product_docs/docs/eprs/6.2/03_installation/02_installing_from_cli.mdx
+++ b/product_docs/docs/eprs/6.2/03_installation/02_installing_from_cli.mdx
@@ -28,7 +28,7 @@ Follow the directions for your host operating system to install Java runtime.
 The following example shows how to start the xDB Replication Server installation in text mode.
 ```text
-$ ./xdbreplicationserver-6.1.0-alpha-1-linux-x64.run --mode text
+$ ./xdbreplicationserver-6.2.0-alpha-1-linux-x64.run --mode text
 
 Language Selection
 
 Please select the installation language
@@ -47,7 +47,7 @@ The following example shows how to start the installation in unattended mode wit
 ```text
 $ su root
 Password:
-$ ./xdbreplicationserver-6.1.0-alpha-1-linux-x64.run --optionfile /home/user/xdb_config
+$ ./xdbreplicationserver-6.2.0-alpha-1-linux-x64.run --optionfile /home/user/xdb_config
 ```
 
 The following is the content of the options file, `xdb_config`.
diff --git a/product_docs/docs/eprs/6.2/03_installation/03_installing_rpm_package.mdx b/product_docs/docs/eprs/6.2/03_installation/03_installing_rpm_package.mdx
index 57728215144..175ee4dc2c2 100644
--- a/product_docs/docs/eprs/6.2/03_installation/03_installing_rpm_package.mdx
+++ b/product_docs/docs/eprs/6.2/03_installation/03_installing_rpm_package.mdx
@@ -116,7 +116,7 @@ For example, to access xDB Replication Server version 6.2, enable the following
 ```text
 [enterprisedb-xdb60]
-name=EnterpriseDB XDB 6.0 $releasever - $basearch
+name=EnterpriseDB XDB 6.2 $releasever - $basearch
 baseurl=http://:@yum.enterprisedb.com/xdb60/redhat/rhel-$releasever-$basearch
 enabled=0
 gpgcheck=1
diff --git a/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/02_print_version.mdx b/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/02_print_version.mdx
index eff9aac5f14..f86f3a67fb5 100644
--- a/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/02_print_version.mdx
+++ b/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/02_print_version.mdx
@@ -14,7 +14,7 @@ Examples
 ```text
 $ java -jar edb-repcli.jar -version
-Version: 6.1.0-alpha
+Version: 6.2.0-alpha
 .
 ```
diff --git a/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/03_print_xdb_server_version.mdx b/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/03_print_xdb_server_version.mdx
index 6647b737c23..eee37b98238 100644
--- a/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/03_print_xdb_server_version.mdx
+++ b/product_docs/docs/eprs/6.2/08_xdb_cli/03_xdb_cli_commands/03_print_xdb_server_version.mdx
@@ -20,7 +20,7 @@ Examples
 ```text
 $ java -jar edb-repcli.jar -repversion -repsvrfile ~/pubsvrfile.prop
-6.1.0-alpha
+6.2.0-alpha
 .
 ```
diff --git a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/index.mdx b/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/index.mdx
deleted file mode 100644
index ff8666e0c36..00000000000
--- a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/index.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
----
-title: "Upgrading to Replication Server 7.0"
-redirects:
-- /docs/eprs/latest/10_appendix/02_upgrading_to_xdb6_2/
----
-
-
-
-You can install Replication Server 7.0 when you have existing single-master or multi-master replication systems that are running under Replication Server version 7.0.
-
-It is assumed that you are installing Replication Server 7.0 on the same host machine that is currently running Replication Server 6.2.x and that you will then manage the existing replication systems using Replication Server 7.0.
-
-A direct upgrade is supported only from Replication Server versions 6.2.x.
-
-After upgrading to version 6.2, you can then upgrade to 7.0.
-
-If you are using a version of Replication Server older than 6.2.x, first upgrade to 6.2.x, and then upgrade to version 7.0.
-
-upgrading_with_gui_installer
-upgrading_with_xdb_rpm_package
-updating_sub_and_pub_ports
-
-
diff --git a/product_docs/docs/eprs/7/installing/index.mdx b/product_docs/docs/eprs/7/installing/index.mdx
index 901be6f588d..260432c8aa1 100644
--- a/product_docs/docs/eprs/7/installing/index.mdx
+++ b/product_docs/docs/eprs/7/installing/index.mdx
@@ -10,7 +10,7 @@ navigation:
 - windows
 - installing_jdbc_driver
 - installation_details
-- upgrading
+- upgrading_replication_server
 - uninstalling
 ---
diff --git a/product_docs/docs/eprs/7/installing/upgrading_replication_server/index.mdx b/product_docs/docs/eprs/7/installing/upgrading_replication_server/index.mdx
new file mode 100644
index 00000000000..73172ee46ed
--- /dev/null
+++ b/product_docs/docs/eprs/7/installing/upgrading_replication_server/index.mdx
@@ -0,0 +1,32 @@
+---
+title: "Upgrading Replication Server"
+navTitle: Upgrading
+redirects:
+- /eprs/latest/10_appendix/02_upgrading_to_xdb6_2/
+- /eprs/latest/10_appendix/01_upgrading_to_xdb6_2/
+---
+
+
+
+You can install Replication Server 7 when you have existing single-master or multi-master replication systems that are running under Replication Server version 6.2.x.
+
+It is assumed that you are installing Replication Server 7 on the same host machine that is currently running Replication Server 6.2.x and that you will then manage the existing replication systems using Replication Server 7.
+
+A direct upgrade is supported only from Replication Server versions 6.2.x.
+
+After upgrading to version 6.2, you can then upgrade to 7.
+
+If you are using a version of Replication Server older than 6.2.x, first upgrade to 6.2.x, and then upgrade to version 7.
+
+For more details on upgrading Replication Server, see:
+
+- [Updating the publication and subscription server ports](updating_sub_and_pub_ports)
+- [Upgrading a Linux installation](upgrading_linux)
+- [Upgrading with the graphical user interface installer](upgrading_with_gui_installer)
+- [Upgrading from a Replication Server RPM package installation](upgrading_with_xdb_rpm_package)
+
+
+upgrading_with_gui_installer
+upgrading_with_xdb_rpm_package
+updating_sub_and_pub_ports
+
+
diff --git a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports.mdx b/product_docs/docs/eprs/7/installing/upgrading_replication_server/updating_sub_and_pub_ports.mdx
similarity index 51%
rename from product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports.mdx
rename to product_docs/docs/eprs/7/installing/upgrading_replication_server/updating_sub_and_pub_ports.mdx
index 29222bb2b3f..fc805f82cae 100644
--- a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports.mdx
+++ b/product_docs/docs/eprs/7/installing/upgrading_replication_server/updating_sub_and_pub_ports.mdx
@@ -1,14 +1,15 @@
 ---
 title: "Updating the publication and subscription server ports"
 redirects:
-- /docs/eprs/latest/10_appendix/02_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports
+- /eprs/latest/10_appendix/02_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports/
+- /eprs/latest/10_appendix/01_upgrading_to_xdb6_2/04_updating_sub_and_pub_ports/
 ---
 
 
 
-The newly installed publication server and subscription server of Replication Server 7.0 are configured to use the default port numbers 9051 and 9052, respectively. These port numbers are set in the Replication Server startup configuration file as described [Replication Server configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_replication_conf_file).
+The newly installed publication server and subscription server of Replication Server 7 are configured to use the default port numbers 9051 and 9052, respectively. These port numbers are set in the Replication Server startup configuration file as described in [Replication Server configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_replication_conf_file).
-If your Replication Server 6.2.x replication systems were running under port numbers other than 9051 and 9052, you must adjust some of your settings in Replication Server 7.0 to continue to use these existing replication systems.
+If your Replication Server 6.2.x replication systems were running under port numbers other than 9051 and 9052, you must adjust some of your settings in Replication Server 7 to continue to use these existing replication systems.
 
 !!! Note
     The following changes regarding port 9052 and the subscription server are needed only if you're running a single-master replication system. If you're using only a multi-master replication system, then only the changes involving port 9051 and the publication server are needed.
 
@@ -16,6 +17,6 @@ If your Replication Server 6.2.x replication sys
 You can use either of two methods can correct this:
 
 - To continue to use the old port numbers (other than 9051 and 9052) that were in use for Replication Server 6.2.x, stop the publication and subscription servers. Change the settings of the `PUBPORT` and `SUBPORT` parameters in the Replication Server Startup Configuration file from `9051` and `9052` to the old port numbers used by Replication Server 6.2.x. Restart the publication and subscription servers. Register the publication server and the subscription server with the old Replication Server 6.2.x port numbers along with the admin user and password as described in [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server).
 
-- To use the default port numbers 9051 and 9052 with the Replication Server 7.0 replication systems, you must replace the old port numbers with the default port numbers 9051 and 9052. Register the publication server and the subscription server with port numbers 9051 and 9052 along with the admin user and password as described in [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server). For single-master replication systems only, you then need to change the port numbers stored in the control schema from the old port numbers to 9051 and 9052. First, perform the procedure described in [Subscription server network location](../../07_common_operations/06_managing_publication/01_updating_publication_server/#sub_server_network_loc), and then perform the procedure described in [Updating a subscription](../../05_smr_operation/05_managing_subscription/03_updating_subscription/#updating_subscription).
+- To use the default port numbers 9051 and 9052 with the Replication Server 7 replication systems, you must replace the old port numbers with the default port numbers 9051 and 9052. Register the publication server and the subscription server with port numbers 9051 and 9052 along with the admin user and password as described in [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server). For single-master replication systems only, you then need to change the port numbers stored in the control schema from the old port numbers to 9051 and 9052. First, perform the procedure described in [Subscription server network location](../../07_common_operations/06_managing_publication/01_updating_publication_server/#sub_server_network_loc), and then perform the procedure described in [Updating a subscription](../../05_smr_operation/05_managing_subscription/03_updating_subscription/#updating_subscription).
 
 After making the changes, select **Refresh** in the Replication Server console. The replication tree of the Replication Server console displays the complete set of nodes for the replication systems.
diff --git a/product_docs/docs/eprs/7/installing/upgrading.mdx b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_linux.mdx
similarity index 86%
rename from product_docs/docs/eprs/7/installing/upgrading.mdx
rename to product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_linux.mdx
index e7d2e8e83e3..c3f8db5cf2e 100644
--- a/product_docs/docs/eprs/7/installing/upgrading.mdx
+++ b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_linux.mdx
@@ -1,8 +1,8 @@
 ---
 title: "Upgrading a Linux installation"
-navTitle: "Upgrading"
 redirects:
 - /eprs/latest/03_installation/03a_updating_linux_installation/
+- /eprs/latest/installing/upgrading/
 ---
 
 If you have an existing Replication Server installed on Linux, you can use yum to upgrade your repository configuration file and update to a more recent product version. To update the `edb.repo` file, assume superuser privileges and enter:
@@ -13,4 +13,4 @@ If you have an existing Replication Server installed on Linux, you can use yum t
 
 `yum upgrade ppas-xdb*`
 
-See [Upgrading from a Replication Server RPM Package Installation](../10_appendix/01_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package/) for details.
+See [Upgrading from a Replication Server RPM Package Installation](upgrading_with_xdb_rpm_package) for details.
diff --git a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/02_upgrading_with_gui_installer.mdx b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_gui_installer.mdx
similarity index 69%
rename from product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/02_upgrading_with_gui_installer.mdx
rename to product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_gui_installer.mdx
index 3018fd6617d..378f0397268 100644
--- a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/02_upgrading_with_gui_installer.mdx
+++ b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_gui_installer.mdx
@@ -1,12 +1,13 @@
 ---
 title: "Upgrading with the graphical user interface installer"
 redirects:
-- /docs/eprs/latest/10_appendix/02_upgrading_to_xdb6_2/02_upgrading_with_gui_installer
+- /eprs/latest/10_appendix/02_upgrading_to_xdb6_2/02_upgrading_with_gui_installer/
+- /eprs/latest/10_appendix/01_upgrading_to_xdb6_2/02_upgrading_with_gui_installer/
 ---
 
 
 
-You can upgrade to Replication Server 7.0 using the graphical user interface installer.
+You can upgrade to Replication Server 7 using the graphical user interface installer.
 
 1. Before starting the upgrade process, replicate any pending backlog of transactions on the publication tables.
 
@@ -14,7 +15,7 @@ You can upgrade to Replication Server 7.0 using the graphical user interface ins
 1. Ensure the installation user has administrative permissions on the `XDB_HOME/xdata` folder. On Windows, this can be done by opening the Replication Server installation directory in Windows Explorer and selecting the xdata folder. When prompted, select **Continue** to enable the required permission.
 
-1. Install Replication Server 7.0. See [Installation and uninstallation](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server) for instructions on installing Replication Server, but note the differences described in the following steps.
+1. Install Replication Server 7. See [Installation and uninstallation](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server) for instructions on installing Replication Server, but note the differences described in the following steps.
 
 1. Following the acceptance of the license agreement, the Select Components screen appears but with the entries grayed out. The old Replication Server components are replaced with the new ones in the old Replication Server’s directory location. Select **Next**.
 
@@ -28,13 +29,13 @@ You can upgrade to Replication Server 7.0 using the graphical user interface ins
 1. Complete the publication server and subscription server configuration file setup.
 
-    In the `XDB_HOME/etc` directory, a new set of configuration files for Replication Server version 7.0 are created. These files are named `xdb_pubserver.conf.new` and `xdb_subserver.conf.new`. The new configuration files contain any new configuration options added for Replication Server 7.0.
+    In the `XDB_HOME/etc` directory, a new set of configuration files for Replication Server version 7 are created. These files are named `xdb_pubserver.conf.new` and `xdb_subserver.conf.new`. The new configuration files contain any new configuration options added for Replication Server 7.
 
     The old configuration files used by Replication Server version 6.2.x remain unchanged as `xdb_pubserver.conf` and `xdb_subserver.conf`.
-    Merge the old and new configuration files so that the resulting, active configuration files contain any new Replication Server 7.0 configuration options as well as any nondefault settings you used with the Replication Server 6.2.x and want to continue to use with Replication Server 7.0. The final set of active configuration files must be named `xdb_pubserver.conf` and `xdb_subserver.conf`.
+    Merge the old and new configuration files so that the resulting, active configuration files contain any new Replication Server 7 configuration options as well as any nondefault settings you used with the Replication Server 6.2.x and want to continue to use with Replication Server 7. The final set of active configuration files must be named `xdb_pubserver.conf` and `xdb_subserver.conf`.
 
-    In the `XDB_HOME/etc/sysconfig` directory, make sure the Replication Server startup configuration file `xdbReplicationServer-62.config` contains the parameter settings you want to use with Replication Server 7.0. See [Replication Server startup configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_startup_conf_file) for information on this file.
+    In the `XDB_HOME/etc/sysconfig` directory, make sure the Replication Server startup configuration file `xdbReplicationServer-62.config` contains the parameter settings you want to use with Replication Server 7. See [Replication Server startup configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_startup_conf_file) for information on this file.
 
 1. Restart the publication server and the subscription server. See [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server)).
@@ -42,11 +43,11 @@ You can upgrade to Replication Server 7.0 using the graphical user interface ins
 1. Adjust the publication server and subscription server port numbers if necessary.
 
-    The Replication Server 7.0 publication and subscription servers are installed to use the default port numbers `9051` and `9052`, respectively. If the Replication Server 6.2.x replication systems used port numbers other than `9051` and `9052`, then make the changes to correct this inconsistency as described in [Updating the publication and subscription server ports](04_updating_sub_and_pub_ports/#updating_sub_and_pub_ports).
+    The Replication Server 7 publication and subscription servers are installed to use the default port numbers `9051` and `9052`, respectively. If the Replication Server 6.2.x replication systems used port numbers other than `9051` and `9052`, then make the changes to correct this inconsistency as described in [Updating the publication and subscription server ports](updating_sub_and_pub_ports).
 
 If you don't need to adjust the port numbers, register the publication server and subscription server with the Replication Server console as described in [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server). The existing replication systems appear in the replication tree of the Replication Server Console.
 
-You are now ready to use Replication Server 7.0 to create new replication systems and manage existing ones.
+You are now ready to use Replication Server 7 to create new replication systems and manage existing ones.
 
 !!! Note
     **For Windows:** If you give a new admin password during an upgrade, it is ignored. After the upgrade, Replication Server picks the old admin user name and password (which is saved in `edb-replconf`).
diff --git a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package.mdx b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_xdb_rpm_package.mdx
similarity index 84%
rename from product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package.mdx
rename to product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_xdb_rpm_package.mdx
index 9eb814019c2..3314d1e1a36 100644
--- a/product_docs/docs/eprs/7/10_appendix/01_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package.mdx
+++ b/product_docs/docs/eprs/7/installing/upgrading_replication_server/upgrading_with_xdb_rpm_package.mdx
@@ -1,15 +1,16 @@
 ---
-title: "Upgrading from a Replication Server RPM package installation"
+title: "Upgrading from an RPM package installation"
 redirects:
-- /docs/eprs/latest/10_appendix/02_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package
+- /eprs/latest/10_appendix/02_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package/
+- /eprs/latest/10_appendix/01_upgrading_to_xdb6_2/03_upgrading_with_xdb_rpm_package/
 ---
 
 
 
-If you're using Replication Server 6.2.x that was installed using the Replication Server RPM package, you can upgrade to Replication Server 7.0 from an RPM package.
+If you're using Replication Server 6.2.x that was installed using the Replication Server RPM package, you can upgrade to Replication Server 7 from an RPM package.
 
 !!! Note
-    Be sure the repository configuration file `edb.repo` for Replication Server 7.0 is set up in the `/etc/yum.repos.d` directory.
+    Be sure the repository configuration file `edb.repo` for Replication Server 7 is set up in the `/etc/yum.repos.d` directory.
 
 1. Before starting the upgrade process, replicate any pending backlog of transactions on the publication tables.
@@ -24,16 +25,16 @@ If you're using Replication Server 6.2.x that was installed using the Replicatio Copies of these files are typically saved by the upgrade process if the files were modified since their original installation. However, it is safest to save copies in case the upgrade process doesn't. Use the saved files as your Replication Server 6.2.x configuration files for the updates described in Step 7. -1. If any Oracle publication or subscription databases are used in existing single-master replication systems, make sure a copy of the Oracle JDBC driver, version ojdbc5 or later, is accessible by the publication server and subscription server where Replication Server 7.0 will be installed. See [Enabling access to Oracle](../../05_smr_operation/01_prerequisites/03_enable_access_to_database/#enable_access_to_oracle) for information. +1. If any Oracle publication or subscription databases are used in existing single-master replication systems, make sure a copy of the Oracle JDBC driver, version ojdbc5 or later, is accessible by the publication server and subscription server where Replication Server 7 will be installed. See [Enabling access to Oracle](../../05_smr_operation/01_prerequisites/03_enable_access_to_database/#enable_access_to_oracle) for information. !!! Note Two options are available: 1) Copy the Oracle JDBC driver to the `jre/lib/ext` subdirectory of your Java runtime environment. 2) Copy the Oracle JDBC driver to the `lib/jdbc` subdirectory of the Replication Server installation directory. - We recommend that you copy the Oracle JDBC driver to the `jre/lib/ext` subdirectory of your Java runtime environment. If you instead copy it to `lib/jdbc`, you must copy the Oracle JDBC driver to the `/usr/edb/xdb/lib/jdbc` directory after you install Replication Server 7.0. + We recommend that you copy the Oracle JDBC driver to the `jre/lib/ext` subdirectory of your Java runtime environment. 
If you instead copy it to `lib/jdbc`, you must copy the Oracle JDBC driver to the `/usr/edb/xdb/lib/jdbc` directory after you install Replication Server 7. 1. Make sure that the controller database is up and running. The other publication and subscription databases of existing SMR and MMR systems don't need to be up and running. -1. As the root account, invoke the yum update command to begin the upgrade from Replication Server 6.2.x to Replication Server 7.0 as shown by the following: +1. As the root account, invoke the yum update command to begin the upgrade from Replication Server 6.2.x to Replication Server 7 as shown by the following: `yum update edb-xdb*` @@ -152,9 +153,9 @@ If you're using Replication Server 6.2.x that was installed using the Replicatio Complete! ``` - At this point, the publication server and the subscription server for Replication Server 7.0 aren't running. The directories now contain the following: + At this point, the publication server and the subscription server for Replication Server 7 aren't running. The directories now contain the following: - - Replication Server 7.0 is installed in directory location `/usr/edb/xdb`. + - Replication Server 7 is installed in directory location `/usr/edb/xdb`. - Replication Server 6.2.x remains in directory location `/usr/ppas-xdb-6.2` but with the files removed from the subdirectories such as `bin` and `lib`. - In the `etc` subdirectory, there might be the configuration files renamed as `xdb_pubserver.conf.rpmsave` and `xdb_subserver.conf.rpmsave`. - In the `etc/sysconfig subdirectory`, there might be the configuration file renamed as `xdbReplicationServer-62.config.rpmsave`. @@ -162,14 +163,14 @@ If you're using Replication Server 6.2.x that was installed using the Replicatio 1. Complete the publication server and subscription server configuration file setup. - In the `/usr/edb/xdb/etc` directory, a new set of configuration files for Replication Server version 7.0 was created. 
These files are named `xdb_pubserver.conf` and `xdb_subserver.conf`. The new configuration files contain any new configuration options added for Replication Server 7.0. The old configuration files used by Replication Server version 6.2.x can be found in the `/usr/ppas-xdb-6.2/etc` directory renamed as `xdb_pubserver.conf.rpmsave` and `xdb_subserver.conf.rpmsave`. + In the `/usr/edb/xdb/etc` directory, a new set of configuration files for Replication Server version 7 was created. These files are named `xdb_pubserver.conf` and `xdb_subserver.conf`. The new configuration files contain any new configuration options added for Replication Server 7. The old configuration files used by Replication Server version 6.2.x can be found in the `/usr/ppas-xdb-6.2/etc` directory renamed as `xdb_pubserver.conf.rpmsave` and `xdb_subserver.conf.rpmsave`. !!! Note If these files don't exist, use the ones you saved in Step 3. - Merge the old and new configuration files so that the resulting, active configuration files contain any new Replication Server 7.0 configuration options as well as any nondefault settings you used with Replication Server 6.2.x and want to continue to use with Replication Server 7.0. + Merge the old and new configuration files so that the resulting, active configuration files contain any new Replication Server 7 configuration options as well as any nondefault settings you used with Replication Server 6.2.x and want to continue to use with Replication Server 7. - The final set of active configuration files must be contained in directory `/usr/edb/xdb/etc` named `xdb_pubserver.conf` and `xdb_subserver.conf`. In the `/usr/edb/xdb/etc/sysconfig directory`, make sure the Replication Server startup configuration file `xdbReplicationServer-70.config` contains the parameter settings you want to use with Replication Server 7.0. 
See [Replication Server configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_replication_conf_file) for information on this file. + The final set of active configuration files must be contained in directory `/usr/edb/xdb/etc` named `xdb_pubserver.conf` and `xdb_subserver.conf`. In the `/usr/edb/xdb/etc/sysconfig directory`, make sure the Replication Server startup configuration file `xdbReplicationServer-70.config` contains the parameter settings you want to use with Replication Server 7. See [Replication Server configuration file](../../02_overview/03_replication_server_components_and_architecture/01_physical_components/#xdb_replication_conf_file) for information on this file. 8. Restart the publication server and the subscription server (see sections [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server)). @@ -177,8 +178,8 @@ If you're using Replication Server 6.2.x that was installed using the Replicatio 10. Adjust the publication server and subscription server port numbers if necessary. - The Replication Server 7.0 publication and subscription servers are installed to use the default port numbers 9051 and 9052, respectively. If the Replication Server 6.2.x replication systems used port numbers other than 9051 and 9052 for the publication and subscription servers, then make the changes to correct this inconsistency as described in [Updating the publication and subscription server ports](04_updating_sub_and_pub_ports/#updating_sub_and_pub_ports). + The Replication Server 7 publication and subscription servers are installed to use the default port numbers 9051 and 9052, respectively. 
If the Replication Server 6.2.x replication systems used port numbers other than 9051 and 9052 for the publication and subscription servers, then make the changes to correct this inconsistency as described in [Updating the publication and subscription server ports](updating_sub_and_pub_ports). If you don't need to adjust the port numbers, register the publication server and subscription server with the Replication Server console as described in [Registering a publication server](../../05_smr_operation/02_creating_publication/01_registering_publication_server/#registering_publication_server) and [Registering a subscription server](../../05_smr_operation/03_creating_subscription/01_registering_subscription_server/#registering_subscription_server). The existing replication systems appear in the replication tree of the Replication Server console. -You are now ready to use Replication Server 7.0 to create new replication systems and manage existing ones. +You are now ready to use Replication Server 7 to create new replication systems and manage existing ones.
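The configuration-file merge and port check described in the steps above can be sketched programmatically. The following runnable stand-in uses in-memory file contents with invented option names (`logging.level`, `new.option` are illustrations, not real xDB parameters); on an actual upgrade you'd read the files named in this procedure, such as `/usr/ppas-xdb-6.2/etc/xdb_pubserver.conf.rpmsave` and `/usr/edb/xdb/etc/xdb_pubserver.conf`:

```python
# Sketch: compare a saved 6.2.x configuration file with the new 7.x defaults
# so nondefault settings can be carried into the active file by hand.
# The option names below are illustrative only, not real xDB parameters.
import difflib

old_conf = ["logging.level=WARNING\n"]                  # nondefault 6.2.x setting to keep
new_conf = ["logging.level=INFO\n", "new.option=on\n"]  # 7.x defaults plus a new option

diff = difflib.unified_diff(old_conf, new_conf,
                            fromfile="xdb_pubserver.conf.rpmsave",
                            tofile="xdb_pubserver.conf")
# Lines starting with "-" are old nondefault values to merge into the new file;
# lines starting with "+" are options new in 7.x.
print("".join(diff))
```

The same side-by-side comparison can of course be done with the `diff` command directly against the `.rpmsave` and active files.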
diff --git a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx index 24b35bd0257..0698bf2bab7 100644 --- a/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx +++ b/product_docs/docs/hadoop_data_adapter/2/02_requirements_overview.mdx @@ -6,6 +6,8 @@ This table lists the latest Hadoop Foreign Data Wrapper versions and their suppo | Hadoop Foreign Data Wrapper | EPAS 14 | EPAS 13 | EPAS 12 | EPAS 11 | | --------------------------- | ------- | ------- | ------- | ------- | +| 2.3.0 | Y | Y | Y | Y | +| 2.2.0 | Y | Y | Y | Y | | 2.1.0 | Y | Y | Y | Y | | 2.0.8 | N | Y | Y | Y | | 2.0.7 | N | Y | Y | N | diff --git a/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx b/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx index e51e1a802d3..c1e33c86956 100644 --- a/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx +++ b/product_docs/docs/hadoop_data_adapter/2/07_features_of_hdfs_fdw.mdx @@ -14,16 +14,30 @@ Hadoop Foreign Data Wrapper supports column pushdown. As a result, the query bri ## Join pushdown -Hadoop Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote Hive or Spark server to that remote Hive or Spark server, enhancing the performance. +Hadoop Foreign Data Wrapper supports join pushdown. It pushes the joins between the foreign tables of the same remote Hive or Spark server to the remote Hive or Spark server, enhancing performance. -For an example, see [Example: Join pushdown](10a_example_join_pushdown). +In version 2.3.0 and later, you can enable join pushdown at the session and query level using the `enable_join_pushdown` GUC variable. + +For more information, see [Example: Join pushdown](10a_example_join_pushdown). ## Aggregate pushdown -Hadoop Foreign Data Wrapper supports aggregate pushdown.
It pushes the aggregates to the remote Hive or Spark server instead of fetching all of the rows and aggregating them locally. This gives a very good performance boost for the cases where aggregates can be pushed down. The push-down is currently limited to aggregate functions min, max, sum, avg, and count, to avoid pushing down the functions that are not present on the Hadoop server. Also, aggregate filters and orders are not pushed down. +Hadoop Foreign Data Wrapper supports aggregate pushdown. It pushes the aggregates to the remote Hive or Spark server instead of fetching all of the rows and aggregating them locally. This gives a significant performance boost for the cases where aggregates can be pushed down. The pushdown is currently limited to the aggregate functions min, max, sum, avg, and count, to avoid pushing down functions that aren't present on the Hadoop server. Also, aggregate filters and orders aren't pushed down. For more information, see [Example: Aggregate pushdown](10b_example_aggregate_pushdown). +## ORDER BY pushdown + +Hadoop Foreign Data Wrapper supports `ORDER BY` pushdown. If possible, it pushes the `ORDER BY` clause to the remote server. This approach provides the ordered result set from the foreign server, which can enable an efficient merge join. + +For more information, see [Example: ORDER BY pushdown](10c_example_order_by_pushdown). + +## LIMIT OFFSET pushdown + +Hadoop Foreign Data Wrapper supports `LIMIT`/`OFFSET` pushdown. Wherever possible, it performs `LIMIT` and `OFFSET` operations on the remote server. This reduces network traffic between the local Postgres server and remote HDFS/Hive servers. The `ALL` and `NULL` options aren't supported on the Hive server, so they aren't pushed down. Also, `OFFSET` without `LIMIT` isn't supported on the remote server, so queries containing that construct aren't pushed down.
+ +For more information, see [Example: LIMIT OFFSET pushdown](10d_example_limit_offset_pushdown). + ## Automated cleanup Hadoop Foreign Data Wrapper allows the cleanup of foreign tables in a single operation using the `DROP EXTENSION` command. This feature is specifically useful when a foreign table is set for a temporary purpose. The syntax is: diff --git a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx index 95cea2ac13a..4cb8e731600 100644 --- a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx +++ b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx @@ -6,7 +6,7 @@ This example shows join pushdown between the foreign tables of the same remote H Tables on HIVE/SPARK server: -```text +```sql 0: jdbc:hive2://localhost:10000> describe emp; +-----------+------------+----------+--+ | col_name | data_type | comment | @@ -30,16 +30,21 @@ Tables on HIVE/SPARK server: | loc | string | NULL | +-----------+------------+----------+--+ 3 rows selected (0.067 seconds) - ``` Tables on Postgres server: ```sql +-- load extension first time after install CREATE EXTENSION hdfs_fdw; + +-- create server object CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS(host 'localhost', port '10000', client_type 'spark', auth_type 'LDAP'); + +-- create user mapping CREATE USER MAPPING FOR public SERVER hdfs_server OPTIONS (username 'user1', password 'pwd123'); +-- create foreign table CREATE FOREIGN TABLE dept ( deptno INTEGER, dname VARCHAR(14), @@ -47,6 +52,7 @@ CREATE FOREIGN TABLE dept ( ) SERVER hdfs_server OPTIONS (dbname 'fdw_db', table_name 'dept'); +-- create foreign table CREATE FOREIGN TABLE emp ( empno INTEGER, ename VARCHAR(10), @@ -62,19 +68,23 @@ SERVER hdfs_server OPTIONS (dbname 'fdw_db', table_name 'emp'); Queries with join pushdown: -```text +```sql --inner join edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 INNER
JOIN dept t2 ON ( t1.deptno = t2.deptno ); - QUERY PLAN +__OUTPUT__ + QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan (cost=15.00..35.00 rows=5000 width=84) Output: t1.ename, t2.dname Relations: (fdw_db.emp t1) INNER JOIN (fdw_db.dept t2) Remote SQL: SELECT r1.`ename`, r2.`dname` FROM (`fdw_db`.`emp` r1 INNER JOIN `fdw_db`.`dept` r2 ON (((r1.`deptno` = r2.`deptno`)))) -(4 rows) + (4 rows) +``` +```sql --left join edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 LEFT JOIN dept t2 ON ( t1.deptno = t2.deptno ); +__OUTPUT__ QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan (cost=15.00..35.00 rows=5000 width=84) @@ -82,9 +92,12 @@ edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 LEFT JOIN dept t2 ON Relations: (fdw_db.emp t1) LEFT JOIN (fdw_db.dept t2) Remote SQL: SELECT r1.`ename`, r2.`dname` FROM (`fdw_db`.`emp` r1 LEFT JOIN `fdw_db`.`dept` r2 ON (((r1.`deptno` = r2.`deptno`)))) (4 rows) +``` +```sql --right join edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 RIGHT JOIN dept t2 ON ( t1.deptno = t2.deptno ); +__OUTPUT__ QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan (cost=15.00..35.00 rows=5000 width=84) @@ -92,9 +105,12 @@ edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 RIGHT JOIN dept t2 O Relations: (fdw_db.dept t2) LEFT JOIN (fdw_db.emp t1) Remote SQL: SELECT r1.`ename`, r2.`dname` FROM (`fdw_db`.`dept` r2 LEFT JOIN `fdw_db`.`emp` r1 ON (((r1.`deptno` = r2.`deptno`)))) (4 rows) +``` +```sql --full join edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 FULL JOIN dept t2 ON ( t1.deptno = t2.deptno ); +__OUTPUT__ QUERY PLAN 
-------------------------------------------------------------------------------------------------------------------------------------- Foreign Scan (cost=15.00..35.00 rows=5000 width=84) @@ -102,9 +118,12 @@ edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 FULL JOIN dept t2 ON Relations: (fdw_db.emp t1) FULL JOIN (fdw_db.dept t2) Remote SQL: SELECT r1.`ename`, r2.`dname` FROM (`fdw_db`.`emp` r1 FULL JOIN `fdw_db`.`dept` r2 ON (((r1.`deptno` = r2.`deptno`)))) (4 rows) +``` +```sql --cross join edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 CROSS JOIN dept t2; +__OUTPUT__ QUERY PLAN -------------------------------------------------------------------------------------------------------------- Foreign Scan (cost=15.00..35.00 rows=1000000 width=84) @@ -113,3 +132,80 @@ edb=# EXPLAIN VERBOSE SELECT t1.ename, t2.dname FROM emp t1 CROSS JOIN dept t2; Remote SQL: SELECT r1.`ename`, r2.`dname` FROM (`fdw_db`.`emp` r1 INNER JOIN `fdw_db`.`dept` r2 ON (TRUE)) (4 rows) ``` + +Enable/disable GUC for join pushdown queries at table level: + +```sql +-- enable join pushdown at the table level +ALTER FOREIGN TABLE emp OPTIONS (SET enable_join_pushdown 'true'); +EXPLAIN (VERBOSE, COSTS OFF) +SELECT e.empno, e.ename, d.dname + FROM emp e JOIN dept d ON (e.deptno = d.deptno) + ORDER BY e.empno; +__OUTPUT__ + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------------------------- + Sort + Output: e.empno, e.ename, d.dname + Sort Key: e.empno + -> Foreign Scan + Output: e.empno, e.ename, d.dname + Relations: (fdw_db.emp e) INNER JOIN (fdw_db.dept d) + Remote SQL: SELECT r1.`empno`, r1.`ename`, r2.`dname` FROM (`fdw_db`.`emp` r1 INNER JOIN `fdw_db`.`dept` r2 ON (((r1.`deptno` = r2.`deptno`)))) +(7 rows) +``` + +```sql +--Disable the GUC enable_join_pushdown. +SET hdfs_fdw.enable_join_pushdown to false; +-- Pushdown shouldn't happen as enable_join_pushdown is false. 
+EXPLAIN (VERBOSE, COSTS OFF) +SELECT e.empno, e.ename, d.dname + FROM emp e JOIN dept d ON (e.deptno = d.deptno) + ORDER BY e.empno; +__OUTPUT__ + QUERY PLAN +------------------------------------------------------------------------------------------- + Sort + Output: e.empno, e.ename, d.dname + Sort Key: e.empno + -> Nested Loop + Output: e.empno, e.ename, d.dname + Join Filter: (e.deptno = d.deptno) + -> Foreign Scan on public.emp e + Output: e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno + Remote SQL: SELECT `empno`, `ename`, `deptno` FROM `fdw_db`.`emp` + -> Materialize + Output: d.dname, d.deptno + -> Foreign Scan on public.dept d + Output: d.dname, d.deptno + Remote SQL: SELECT `deptno`, `dname` FROM `fdw_db`.`dept` +``` + +Enable/disable GUC for join pushdown queries at the session level: + +```sql +SET hdfs_fdw.enable_join_pushdown to true; +EXPLAIN (VERBOSE, COSTS OFF) +SELECT e.empno, e.ename, d.dname + FROM emp e JOIN dept d ON (e.deptno = d.deptno) + ORDER BY e.empno; +__OUTPUT__ + QUERY PLAN +------------------------------------------------------------------------------------------- + Sort + Output: e.empno, e.ename, d.dname + Sort Key: e.empno + -> Nested Loop + Output: e.empno, e.ename, d.dname + Join Filter: (e.deptno = d.deptno) + -> Foreign Scan on public.emp e + Output: e.empno, e.ename, e.job, e.mgr, e.hiredate, e.sal, e.comm, e.deptno + Remote SQL: SELECT `empno`, `ename`, `deptno` FROM `fdw_db`.`emp` + -> Materialize + Output: d.dname, d.deptno + -> Foreign Scan on public.dept d + Output: d.dname, d.deptno + Remote SQL: SELECT `deptno`, `dname` FROM `fdw_db`.`dept` +(14 rows) +``` diff --git a/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx index 1f66148f4b9..e3fdc8f6a05 100644 --- a/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx +++ 
b/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx @@ -6,7 +6,7 @@ This example shows aggregate pushdown between the foreign tables of the same rem Tables on HIVE/SPARK server: -```text +```sql 0: jdbc:hive2://localhost:10000> describe emp; +-----------+------------+----------+--+ | col_name | data_type | comment | @@ -30,7 +30,6 @@ Tables on HIVE/SPARK server: | loc | string | NULL | +-----------+------------+----------+--+ 3 rows selected (0.067 seconds) - ``` Tables on Postgres server: @@ -75,7 +74,7 @@ SELECT deptno, COUNT(*),SUM(sal),MAX(sal),MIN(sal),AVG(sal) FROM emp GROUP BY deptno HAVING deptno IN (10,20) ORDER BY deptno; - +__OUTPUT__ QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------ Sort diff --git a/product_docs/docs/hadoop_data_adapter/2/10c_example_order_by_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10c_example_order_by_pushdown.mdx new file mode 100644 index 00000000000..3159e371446 --- /dev/null +++ b/product_docs/docs/hadoop_data_adapter/2/10c_example_order_by_pushdown.mdx @@ -0,0 +1,83 @@ +--- +title: "Example: ORDER BY pushdown" +--- + +This example shows ORDER BY pushdown on the foreign tables of a remote HIVE/SPARK server: + +Tables on HIVE/SPARK server: + +```sql +0: jdbc:hive2://localhost:10000> describe emp; ++-----------+------------+----------+--+ +| col_name | data_type | comment |
++-----------+------------+----------+--+ +| deptno | int | NULL | +| dname | string | NULL | +| loc | string | NULL | ++-----------+------------+----------+--+ +3 rows selected (0.067 seconds) +``` + +Tables on Postgres server: + +```sql +-- load extension first time after install +CREATE EXTENSION hdfs_fdw; + +-- create server object +CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS(host 'localhost', port '10000', client_type 'spark', auth_type 'LDAP'); + +-- create user mapping +CREATE USER MAPPING FOR public SERVER hdfs_server OPTIONS (username 'user1', password 'pwd123'); + +-- create foreign table +CREATE FOREIGN TABLE emp ( + empno INTEGER, + ename VARCHAR(10), + job VARCHAR(9), + mgr INTEGER, + hiredate DATE, + sal INTEGER, + comm INTEGER, + deptno INTEGER +) +SERVER hdfs_server OPTIONS (dbname 'fdw_db', table_name 'emp'); +``` + +Query with ORDER BY pushdown: + +```sql +edb=# SET hdfs_fdw.enable_order_by_pushdown TO ON; +SET +edb=# EXPLAIN (COSTS OFF) SELECT * FROM emp order by deptno; +__OUTPUT__ + QUERY PLAN +--------------------- + Foreign Scan on emp +(1 row) + +edb=# SET hdfs_fdw.enable_order_by_pushdown TO OFF; +SET +edb=# EXPLAIN (COSTS OFF) SELECT * FROM emp order by deptno; +__OUTPUT__ + QUERY PLAN +--------------------------- + Sort + Sort Key: deptno + -> Foreign Scan on emp +(3 rows) +``` diff --git a/product_docs/docs/hadoop_data_adapter/2/10d_example_limit_offset_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10d_example_limit_offset_pushdown.mdx new file mode 100644 index 00000000000..2d032666cb4 --- /dev/null +++ b/product_docs/docs/hadoop_data_adapter/2/10d_example_limit_offset_pushdown.mdx @@ -0,0 +1,85 @@ +--- +title: "Example: LIMIT OFFSET pushdown" +--- + +This example shows LIMIT OFFSET pushdown on the foreign tables of a remote HIVE/SPARK server: + +Tables on HIVE/SPARK server: + +```sql +0: jdbc:hive2://localhost:10000> describe emp;
++-----------+------------+----------+--+ +| col_name | data_type | comment | ++-----------+------------+----------+--+ +| empno | int | NULL | +| ename | string | NULL | +| job | string | NULL | +| mgr | int | NULL | +| hiredate | date | NULL | +| sal | int | NULL | +| comm | int | NULL | +| deptno | int | NULL | ++-----------+------------+----------+--+ +8 rows selected (0.747 seconds) +0: jdbc:hive2://localhost:10000> describe dept; ++-----------+------------+----------+--+ +| col_name | data_type | comment | ++-----------+------------+----------+--+ +| deptno | int | NULL | +| dname | string | NULL | +| loc | string | NULL | ++-----------+------------+----------+--+ +3 rows selected (0.067 seconds) +``` + +Tables on Postgres server: + +```sql +-- load extension first time after install +CREATE EXTENSION hdfs_fdw; + +-- create server object +CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS(host 'localhost', port '10000', client_type 'spark', auth_type 'LDAP'); + +-- create user mapping +CREATE USER MAPPING FOR public SERVER hdfs_server OPTIONS (username 'user1', password 'pwd123'); + +-- create foreign table +CREATE FOREIGN TABLE emp ( + empno INTEGER, + ename VARCHAR(10), + job VARCHAR(9), + mgr INTEGER, + hiredate DATE, + sal INTEGER, + comm INTEGER, + deptno INTEGER +) +SERVER hdfs_server OPTIONS (dbname 'fdw_db', table_name 'emp'); +``` + +Query with LIMIT OFFSET pushdown: + +```sql +-- LIMIT OFFSET +EXPLAIN (VERBOSE, COSTS OFF) +SELECT empno FROM emp e ORDER BY empno LIMIT 5 OFFSET 2; +__OUTPUT__ + QUERY PLAN +--------------------------------------------------------------------------------------------- + Foreign Scan on public.emp e + Output: empno + Remote SQL: SELECT `empno` FROM `fdw_db`.`emp` ORDER BY `empno` ASC NULLS LAST LIMIT 2, 5 +(3 rows) + +SELECT empno FROM emp e ORDER BY empno LIMIT 5 OFFSET 2; +__OUTPUT__ + empno +------- + 7521 + 7566 + 7654 + 7698 + 7782 +(5 rows) +``` diff --git 
a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.3.0.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.3.0.mdx new file mode 100644 index 00000000000..3401bc99c37 --- /dev/null +++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/hadoop_rel_notes_2.3.0.mdx @@ -0,0 +1,13 @@ +--- +title: "Version 2.3.0" +--- + +Enhancements, bug fixes, and other changes in Hadoop Foreign Data Wrapper 2.3.0 include: + +| Type | Description | +| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Feature | When possible, we push the `ORDER BY` clause to the remote server. This approach provides the ordered result set from the foreign server, which can enable an efficient merge join. | +| Feature | Push down `LIMIT`/`OFFSET` to remote HDFS servers: Where possible, perform `LIMIT` and `OFFSET` operations on the remote server. This reduces network traffic between local Postgres and remote HDFS/Hive servers.
| + + + diff --git a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx index b2ac0a935fa..492c56b03a2 100644 --- a/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx +++ b/product_docs/docs/hadoop_data_adapter/2/hadoop_rel_notes/index.mdx @@ -3,6 +3,7 @@ title: "Release Notes" redirects: - ../01_whats_new/ navigation: +- hadoop_rel_notes_2.3.0 - hadoop_rel_notes_2.2.0 - hadoop_rel_notes_2.1.0 - hadoop_rel_notes_2.0.8 @@ -16,6 +17,7 @@ The Hadoop Foreign Data Wrapper documentation describes the latest version inclu | Version | Release Date | | --------------------------------| ------------ | +| [2.3.0](hadoop_rel_notes_2.3.0) | 2023 Jan 06 | | [2.2.0](hadoop_rel_notes_2.2.0) | 2022 May 26 | | [2.1.0](hadoop_rel_notes_2.1.0) | 2021 Dec 02 | | [2.0.8](hadoop_rel_notes_2.0.8) | 2021 Jun 24 | diff --git a/product_docs/docs/hadoop_data_adapter/2/index.mdx b/product_docs/docs/hadoop_data_adapter/2/index.mdx index 845c95f332f..1cc11132edf 100644 --- a/product_docs/docs/hadoop_data_adapter/2/index.mdx +++ b/product_docs/docs/hadoop_data_adapter/2/index.mdx @@ -17,6 +17,8 @@ navigation: - 09_using_the_hadoop_data_adapter - 10a_example_join_pushdown - 10b_example_aggregate_pushdown +- 10c_example_order_by_pushdown +- 10d_example_limit_offset_pushdown - "#Troubleshooting" - 10_identifying_data_adapter_version diff --git a/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx b/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx index c46c0c95d5b..6e72a219361 100644 --- a/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx +++ b/product_docs/docs/mongo_data_adapter/5/02_requirements_overview.mdx @@ -6,6 +6,8 @@ title: "Supported database versions" | MongoDB Foreign Data Wrapper | EPAS 14 | EPAS 13 | EPAS 12 | EPAS 11 | | --------- | ------- | ------- | ------- | ------- | +| 5.5.0 | Y | Y | Y | Y | +| 5.4.0 | Y | Y | Y | Y | |
5.3.0 | Y | Y | Y | Y | | 5.2.9 | N | Y | Y | Y | | 5.2.8 | N | Y | N | N | diff --git a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx index d335fe91334..7bd716a8264 100644 --- a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx +++ b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx @@ -10,7 +10,7 @@ These are the key features of the MongoDB Foreign Data Wrapper. The MongoDB Foreign Data Wrapper lets you modify data on a MongoDB server. You can insert, update, and delete data in the remote MongoDB collections by inserting, updating and deleting data locally in foreign tables. -See also: +For more information, see: - [Example: Using the MongoDB Foreign Data Wrapper](08_example_using_the_mongo_data_adapter) @@ -18,7 +18,9 @@ See also: ## WHERE clause pushdown -MongoDB Foreign Data Wrapper allows the pushdown of the `WHERE` clause only when clauses include the comparison expressions that have a column and a constant as arguments. `WHERE` clause pushdown isn't supported where the constant is an array. +MongoDB Foreign Data Wrapper allows the pushdown of the `WHERE` clause only when clauses include the comparison expressions that have a column and a constant as arguments. `WHERE` clause pushdown isn't supported where the constant is an array. In version 5.5.0 and later, MongoDB Foreign Data Wrapper supports recursive operator expressions, Boolean expressions, relabel types, and vars on both sides of an operator. + +For more information, see [Example: WHERE clause pushdown](08c_example_where_pushdown). ## Join pushdown @@ -28,10 +30,22 @@ For more information, see [Example: Join pushdown](08a_example_join_pushdown). ## Aggregate pushdown -MongoDB Foreign Data Wrapper supports aggregate pushdown. It pushes the aggregates to the remote MongoDB server instead of fetching all of the rows and aggregating them locally.
This gives a very good performance boost for the cases where aggregates can be pushed down. The push-down is currently limited to aggregate functions min, max, sum, avg, and count, to avoid pushing down the functions that are not present on the MongoDB server. The aggregate filters, orders, variadic and distinct are not pushed down. +MongoDB Foreign Data Wrapper supports aggregate pushdown. It pushes the aggregates to the remote MongoDB server instead of fetching all of the rows and aggregating them locally. This gives a significant performance boost for the cases where aggregates can be pushed down. The pushdown is currently limited to the aggregate functions min, max, sum, avg, and count, to avoid pushing down functions that aren't present on the MongoDB server. Aggregate filters, orders, variadic arguments, and DISTINCT aren't pushed down. For more information, see [Example: Aggregate pushdown](08b_example_aggregate_pushdown). +## ORDER BY pushdown + +MongoDB Foreign Data Wrapper supports `ORDER BY` pushdown. If possible, it pushes the `ORDER BY` clause to the remote server. This approach provides the ordered result set from the foreign server, which can enable an efficient merge join. NULL ordering on the MongoDB server is the opposite of the Postgres default. To get an equivalent result, the `ORDER BY` clause is pushed down only with `ASC NULLS FIRST` or `DESC NULLS LAST`. As MongoDB sorts only on fields, only column names in `ORDER BY` expressions are pushed down. + +For more information, see [Example: ORDER BY pushdown](example_order_by_push_down). + +## LIMIT OFFSET pushdown + +MongoDB Foreign Data Wrapper supports `LIMIT`/`OFFSET` pushdown. Wherever possible, it performs `LIMIT` and `OFFSET` operations on the remote server. This reduces network traffic between the local Postgres server and the remote MongoDB server. + +For more information, see [Example: LIMIT OFFSET pushdown](example_limit_offset_push_down).
+ ## Connection pooling The MongoDB Foreign Data Wrapper establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. This connection is kept and reused for subsequent queries in the same session. diff --git a/product_docs/docs/mongo_data_adapter/5/08c_example_where_pushdown.mdx b/product_docs/docs/mongo_data_adapter/5/08c_example_where_pushdown.mdx new file mode 100644 index 00000000000..8d377957c67 --- /dev/null +++ b/product_docs/docs/mongo_data_adapter/5/08c_example_where_pushdown.mdx @@ -0,0 +1,48 @@ +--- +title: "Example: WHERE clause pushdown" +--- + +MongoDB Foreign Data Wrapper supports pushdown for the WHERE clause. For example: + +Postgres data set: + +```sql +-- load extension first time after install +CREATE EXTENSION mongo_fdw; + +-- create server object +CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw OPTIONS (address 'localhost', port '27017'); + +-- create user mapping +CREATE USER MAPPING FOR public SERVER mongo_server OPTIONS (username 'edb', password 'edb'); + +-- create foreign table +CREATE FOREIGN TABLE emp (_id NAME, eid INTEGER, ename TEXT, deptid INTEGER) SERVER mongo_server OPTIONS (database 'edb', collection 'emp'); + +-- insert into table +INSERT INTO emp VALUES (0, 100, 'John', 10); +INSERT INTO emp VALUES (0, 110, 'Mark', 10); +INSERT INTO emp VALUES (0, 120, 'Smith', 20); +INSERT INTO emp VALUES (0, 130, 'Ed', 30); +``` + +The output: +```sql +edb=# EXPLAIN (VERBOSE, COSTS FALSE) select eid from emp where deptid>20 order by eid; + QUERY PLAN +------------------------------------ + Sort + Output: eid + Sort Key: emp.eid + -> Foreign Scan on public.emp + Output: eid + Foreign Namespace: edb.emp +(6 rows) + +edb=# +edb=# select eid from emp where deptid>20 order by eid; + eid +----- + 130 +(1 row) +``` \ No newline at end of file diff --git a/product_docs/docs/mongo_data_adapter/5/example_limit_offset_push_down.mdx 
b/product_docs/docs/mongo_data_adapter/5/example_limit_offset_push_down.mdx new file mode 100644 index 00000000000..e18e8db4569 --- /dev/null +++ b/product_docs/docs/mongo_data_adapter/5/example_limit_offset_push_down.mdx @@ -0,0 +1,93 @@ +--- +title: "Example: LIMIT OFFSET pushdown" +--- + +This example shows LIMIT OFFSET pushdown on the EMP table. + +Postgres data set: + +```sql +-- load extension first time after install +CREATE EXTENSION mongo_fdw; + +-- create server object +CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw OPTIONS (address 'localhost', port '27017'); + +-- create user mapping +CREATE USER MAPPING FOR public SERVER mongo_server OPTIONS (username 'edb', password 'edb'); + +-- create foreign table +CREATE FOREIGN TABLE emp (_id NAME, eid INTEGER, ename TEXT, deptid INTEGER) SERVER mongo_server OPTIONS (database 'edb', collection 'emp'); + +-- insert into table +INSERT INTO emp VALUES (0, 100, 'John', 10); +INSERT INTO emp VALUES (0, 110, 'Mark', 10); +INSERT INTO emp VALUES (0, 120, 'Smith', 20); +INSERT INTO emp VALUES (0, 130, 'Ed', 30); +``` + +```sql +-- LIMIT and OFFSET +edb=# SELECT min(eid), eid FROM emp GROUP BY eid ORDER BY eid ASC NULLS FIRST LIMIT 0 OFFSET 0; + min | eid +-----+----- +(0 rows) + +-- LIMIT and OFFSET +edb=# EXPLAIN (VERBOSE, COSTS OFF) +edb-# SELECT min(eid), eid FROM emp GROUP BY eid ORDER BY eid ASC NULLS FIRST LIMIT NULL OFFSET 2; + QUERY PLAN +--------------------------------------------------- + Foreign Scan + Output: (min(eid)), eid + Foreign Namespace: Aggregate on ("FDW_134".emp) +(3 rows) + +-- LIMIT and OFFSET +edb=# SELECT min(eid), eid FROM emp GROUP BY eid ORDER BY eid ASC NULLS FIRST LIMIT NULL OFFSET 2; + min | eid +-----+----- + 120 | 120 + 130 | 130 + 140 | 140 +(3 rows) + +-- LIMIT and OFFSET +edb=# EXPLAIN (VERBOSE, COSTS OFF) +edb-# SELECT min(eid), eid FROM emp GROUP BY eid ORDER BY eid ASC NULLS FIRST LIMIT ALL OFFSET 2; + QUERY PLAN +--------------------------------------------------- + 
Foreign Scan + Output: (min(eid)), eid + Foreign Namespace: Aggregate on ("FDW_134".emp) +(3 rows) + +-- LIMIT only +edb=# EXPLAIN (VERBOSE, COSTS FALSE) +edb-# SELECT eid, max(eid) FROM emp GROUP BY eid ORDER BY 1 ASC NULLS FIRST LIMIT -1; + QUERY PLAN +---------------------------------------------- + Limit + Output: eid, (max(eid)) + -> GroupAggregate + Output: eid, max(eid) + Group Key: emp.eid + -> Foreign Scan on public.emp + Output: _id, eid, deptid + Foreign Namespace: FDW_134.emp +(8 rows) + +-- OFFSET only +edb=# EXPLAIN (VERBOSE, COSTS FALSE) +edb-# SELECT eid, max(eid) FROM emp GROUP BY eid ORDER BY 1 ASC NULLS FIRST OFFSET -2; + QUERY PLAN +--------------------------------------------------------- + Limit + Output: eid, (max(eid)) + -> Foreign Scan + Output: eid, (max(eid)) + Foreign Namespace: Aggregate on ("FDW_134".emp) +(5 rows) + +edb=# +``` \ No newline at end of file diff --git a/product_docs/docs/mongo_data_adapter/5/example_order_by_push_down.mdx b/product_docs/docs/mongo_data_adapter/5/example_order_by_push_down.mdx new file mode 100644 index 00000000000..902d128641d --- /dev/null +++ b/product_docs/docs/mongo_data_adapter/5/example_order_by_push_down.mdx @@ -0,0 +1,69 @@ +--- +title: "Example: ORDER BY pushdown" +--- + +This example shows ORDER BY pushdown on the EMP table. 
+ +Postgres data set: + +```sql +-- load extension first time after install +CREATE EXTENSION mongo_fdw; + +-- create server object +CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw OPTIONS (address 'localhost', port '27017'); + +-- create user mapping +CREATE USER MAPPING FOR public SERVER mongo_server OPTIONS (username 'edb', password 'edb'); + +-- create foreign table +CREATE FOREIGN TABLE emp (_id NAME, eid INTEGER, ename TEXT, deptid INTEGER) SERVER mongo_server OPTIONS (database 'edb', collection 'emp'); + +-- insert into table +INSERT INTO emp VALUES (0, 100, 'John', 10); +INSERT INTO emp VALUES (0, 110, 'Mark', 10); +INSERT INTO emp VALUES (0, 120, 'Smith', 20); +INSERT INTO emp VALUES (0, 130, 'Ed', 30); +``` + +```sql +edb=# SELECT eid, sum(eid), count(*) FROM emp GROUP BY eid HAVING min(eid) > 100 ORDER +edb-# BY eid ASC NULLS FIRST; + eid | sum | count +-----+-----+------- + 110 | 110 | 1 + 120 | 120 | 1 + 130 | 130 | 1 + 140 | 140 | 1 +(4 rows) + +edb=# +edb=# EXPLAIN (VERBOSE, COSTS FALSE) +edb-# SELECT eid, sum(eid), count(*) FROM emp GROUP BY eid HAVING min(eid) > 100 ORDER +edb-# BY eid ASC NULLS FIRST; + QUERY PLAN +--------------------------------------------------- + Foreign Scan + Output: eid, (sum(eid)), (count(*)) + Foreign Namespace: Aggregate on ("FDW_134".emp) +(3 rows) + +edb=# +edb=# SELECT deptid, min(eid) FROM emp WHERE deptid > 20 GROUP BY deptid HAVING min(deptid) = +edb-# 30 ORDER BY deptid ASC NULLS FIRST; + deptid | min +--------+----- + 30 | 120 +(1 row) + +edb=# +edb=# EXPLAIN (VERBOSE, COSTS FALSE) +edb-# SELECT deptid, min(eid) FROM emp WHERE deptid > 20 GROUP BY deptid HAVING min(deptid) = +edb-# 30 ORDER BY deptid ASC NULLS FIRST; + QUERY PLAN +--------------------------------------------------- + Foreign Scan + Output: deptid, (min(eid)) + Foreign Namespace: Aggregate on ("FDW_134".emp) +(3 rows) +``` \ No newline at end of file diff --git a/product_docs/docs/mongo_data_adapter/5/index.mdx 
b/product_docs/docs/mongo_data_adapter/5/index.mdx index cb81864469a..202a5c93e72 100644 --- a/product_docs/docs/mongo_data_adapter/5/index.mdx +++ b/product_docs/docs/mongo_data_adapter/5/index.mdx @@ -16,6 +16,9 @@ navigation: - 08_example_using_the_mongo_data_adapter - 08a_example_join_pushdown - 08b_example_aggregate_pushdown +- example_limit_offset_push_down +- example_order_by_push_down +- 08c_example_where_pushdown - "#Troubleshooting" - 09_identifying_data_adapter_version --- diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx index 3625e03f890..ef5e01247a4 100644 --- a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx +++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/index.mdx @@ -3,6 +3,7 @@ title: "Release notes" redirects: - ../01_whats_new/ navigation: +- mongo5.5.0_rel_notes - mongo5.4.0_rel_notes - mongo5.3.0_rel_notes - mongo5.2.9_rel_notes @@ -15,6 +16,7 @@ The Mongo Foreign Data Wrapper documentation describes the latest version of Mon | Version | Release date | | ----------------------------- | ------------ | +| [5.5.0](mongo5.5.0_rel_notes) | 2023 Jan 06 | | [5.4.0](mongo5.4.0_rel_notes) | 2022 May 26 | | [5.3.0](mongo5.3.0_rel_notes) | 2021 Dec 02 | | [5.2.9](mongo5.2.9_rel_notes) | 2021 Jun 24 | diff --git a/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.5.0_rel_notes.mdx b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.5.0_rel_notes.mdx new file mode 100644 index 00000000000..b7d33fe1d96 --- /dev/null +++ b/product_docs/docs/mongo_data_adapter/5/mongo_rel_notes/mongo5.5.0_rel_notes.mdx @@ -0,0 +1,15 @@ +--- +title: "Version 5.5.0" +--- + +Enhancements, bug fixes, and other changes in MongoDB Foreign Data Wrapper 5.5.0 +include: + +| Type | Description | +| ----------- | 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Feature | When possible, push the `ORDER BY` clause down to the remote MongoDB server. This provides an ordered result set from the foreign server, which can enable an efficient merge join. | +| Feature | When possible, perform `LIMIT` and `OFFSET` operations on the remote server. This reduces network traffic between the local Postgres server and the remote MongoDB server. | +| Enhancement | Improved the WHERE clause pushdown to remote MongoDB servers. The pushdown now supports recursive operator expressions, Boolean expressions, relabel types, and vars on both sides of an operator. | +| Bug Fix | For nested join queries, save the status of the `enable_aggregate_pushdown` option to access the aggregation path later. | +| Bug Fix | Fix a server crash due to missing `Param` node handling. | +| Bug Fix | Fix typos in `autogen.sh` and `README.md` files. 
| diff --git a/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx b/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx index 0ee1f128de9..913aac11a7e 100644 --- a/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx +++ b/product_docs/docs/mysql_data_adapter/2/02_requirements_overview.mdx @@ -8,6 +8,7 @@ This table lists the latest MySQL Foreign Data Wrapper versions and their suppor | MySQL Foreign Data Wrapper | EPAS 14 | EPAS 13 | EPAS 12 | EPAS 11 | | -------------------------- | ------- | ------- | ------- | ------- | +| 2.9.0 | Y | Y | Y | Y | | 2.8.0 | Y | Y | Y | Y | | 2.7.0 | Y | Y | Y | Y | | 2.6.0 | N | Y | Y | Y | diff --git a/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx b/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx index a4c599b3c37..cda3d5c07f5 100644 --- a/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx +++ b/product_docs/docs/mysql_data_adapter/2/06_features_of_mysql_fdw.mdx @@ -50,20 +50,63 @@ of the rows and aggregating them locally. Aggregate filters and aggregate orders See also: -- [Example: Aggregate pushdown](10a_example_aggregate_func_push_down) -- [Blog: Aggregate Pushdown](https://www.enterprisedb.com/blog/aggregate-push-down-mysqlfdw) - covers performance improvements, using join and aggregate pushdowns together, and pushing down aggregates to the partition table +For more information, see [Example: Aggregate pushdown](10a_example_aggregate_func_push_down). +Also see the blog [Aggregate Pushdown](https://www.enterprisedb.com/blog/aggregate-push-down-mysqlfdw), which covers performance improvements, using join and aggregate pushdowns together, and pushing down aggregates to the partition table. ## ORDER BY pushdown -MySQL Foreign Data Wrapper supports order by push-down. If possible, push order by clause to the remote server so that we get the ordered result set from the foreign server itself. 
It might help us to have an efficient merge join. NULLs behavior is opposite on the MySQL server. Thus to get an equivalent result, we add the "expression IS NULL" clause at the beginning of each of the ORDER BY expressions. +MySQL Foreign Data Wrapper supports ORDER BY pushdown. When possible, it pushes the ORDER BY clause down to the remote server so that the ordered result set comes from the foreign server itself, which can enable an efficient merge join. NULL ordering behavior is the opposite on the MySQL server. To get an equivalent result, the wrapper adds an "expression IS NULL" clause at the beginning of each ORDER BY expression. -- [Example: ORDER BY pushdown](10b_example_order_by_push_down) +For more information, see [Example: ORDER BY pushdown](10b_example_order_by_push_down). ## LIMIT OFFSET pushdown -MySQL Foreign Data Wrapper supports limit offset push-down. Wherever possible, perform LIMIT and OFFSET operations on the remote server. This reduces network traffic between local PostgreSQL and remote MySQL servers. ALL/NULL options are not supported on the MySQL server, and thus they are not pushed down. Also, OFFSET without LIMIT is not supported on the MySQL server hence queries having that construct are not pushed. +MySQL Foreign Data Wrapper supports LIMIT OFFSET pushdown. Wherever possible, it performs `LIMIT` and `OFFSET` operations on the remote server. This reduces network traffic between the local PostgreSQL server and the remote MySQL server. The `ALL/NULL` options aren't supported on the MySQL server, so they aren't pushed down. Also, `OFFSET` without `LIMIT` isn't supported on the MySQL server, so queries with that construct aren't pushed down. -- [Example: LIMIT OFFSET pushdown](10c_example_limit_offset_push_down) +For more information, see [Example: LIMIT OFFSET pushdown](10c_example_limit_offset_push_down). + +## Configuration file to restrict pushdowns + +MySQL 2.9.0 and later provides the `mysql_fdw_pushdown.config` configuration file to restrict pushdowns. 
In this file, you can define the list of functions and operators that can be pushed down to the remote server. You can add to or modify the list as needed. + +This file lists the aggregates, functions, and operators that are allowed to be pushed down to the remote server. Each entry must be on a single line and must have two columns: + +- The object type, which can be ROUTINE (functions, aggregates, and procedures) or OPERATOR +- The schema-qualified object name with its arguments + +You can generate the exact form of the second column using the following queries: + +For ROUTINE: + +```sql +SELECT pronamespace::regnamespace || '.' || oid::regprocedure FROM pg_proc +WHERE proname = '' +``` + +For OPERATOR: + +```sql +SELECT oprnamespace::regnamespace || '.' || oid::regoperator FROM pg_operator +WHERE oprname = '' +``` + +Example of a `mysql_fdw_pushdown.config` file: + +```shell +ROUTINE pg_catalog.sum(bigint) +ROUTINE pg_catalog.sum(smallint) +ROUTINE pg_catalog.to_number(text) +ROUTINE pg_catalog.to_number(text,text) +OPERATOR pg_catalog.=(integer,integer) +OPERATOR pg_catalog.=(text,text) +OPERATOR pg_catalog.=(smallint,integer) +OPERATOR pg_catalog.=(bigint,integer) +OPERATOR pg_catalog.=(numeric,numeric) +``` + +## IS [NOT] DISTINCT FROM operator + +MySQL 2.9.0 and later supports the `IS [NOT] DISTINCT FROM` operator. MySQL uses the `<=>` operator corresponding to the `IS NOT DISTINCT FROM` operator and the `NOT <=>` operator corresponding to the `IS DISTINCT FROM` operator. ## Prepared Statement @@ -71,9 +114,9 @@ MySQL Foreign Data Wrapper supports Prepared Statement. The select queries use p ## Import foreign schema -MySQL Foreign Data Wrapper supports import foreign schema, which enables the local host to import table definitions on EDB Postgres Advanced Server from the MySQL server. 
The new foreign tables are created with the corresponding column types and same table name as that of remote tables in the existing local schema. +MySQL Foreign Data Wrapper supports import foreign schema, which enables the local host to import table definitions to EDB Postgres Advanced Server from the MySQL server. The new foreign tables are created with the corresponding column types and the same table names as the remote tables in the existing local schema. In version 2.9.0 and later, the `import_generated` option lets you include columns generated from expressions in the definitions of foreign tables imported from a foreign server. -See [Example: Import foreign schema](09_example_import_foreign_schema/#example_import_foreign_schema) for an example. +For more information, see [Example: Import foreign schema](09_example_import_foreign_schema/#example_import_foreign_schema). ## Automated cleanup @@ -83,4 +126,6 @@ MySQL Foreign Data Wrapper allows the cleanup of foreign tables in a single oper For more information, see [DROP EXTENSION](https://www.postgresql.org/docs/current/sql-dropextension.html). +## Truncate table +In version 2.9.0 and later, MySQL Foreign Data Wrapper supports the `TRUNCATE TABLE` command on foreign tables. However, the `CASCADE` option isn't supported with `TRUNCATE TABLE`. 
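+ +As an illustrative sketch only (the foreign table name `emp_ft` is hypothetical, not from this document): + +```sql +-- Remove all rows from the remote MySQL table through a mysql_fdw foreign table. +-- Supported in MySQL Foreign Data Wrapper 2.9.0 and later. +TRUNCATE TABLE emp_ft; + +-- Not supported: TRUNCATE TABLE emp_ft CASCADE; +-- The CASCADE option is rejected for foreign tables. +``` 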
\ No newline at end of file diff --git a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx index 08fa1cbd9d4..e24ccd46581 100644 --- a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx +++ b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/index.mdx @@ -3,6 +3,7 @@ title: "Release notes" redirects: - ../01_whats_new/ navigation: +- mysql2.9.0_rel_notes - mysql2.8.0_rel_notes - mysql2.7.0_rel_notes - mysql2.6.0_rel_notes @@ -15,6 +16,7 @@ The MySQL Foreign Data Wrapper documentation describes the latest version of MyS | Version | Release Date | | ----------------------------- | ------------ | +| [2.9.0](mysql2.9.0_rel_notes) | 2023 Jan 06 | | [2.8.0](mysql2.8.0_rel_notes) | 2022 May 26 | | [2.7.0](mysql2.7.0_rel_notes) | 2021 Dec 02 | | [2.6.0](mysql2.6.0_rel_notes) | 2021 May 18 | diff --git a/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.9.0_rel_notes.mdx b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.9.0_rel_notes.mdx new file mode 100644 index 00000000000..147c8a6f519 --- /dev/null +++ b/product_docs/docs/mysql_data_adapter/2/mysql_rel_notes/mysql2.9.0_rel_notes.mdx @@ -0,0 +1,19 @@ +--- +title: "Version 2.9.0" +--- + + +Enhancements, bug fixes, and other changes in MySQL Foreign Data Wrapper 2.9.0 include: + +| Type | Description | +| ----------- |---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Feature | Add a configuration file approach to control pushdowns. You can now provide the list of operators and functions that can be safely pushed down to the remote server. 
| +| Feature | Add support for the `import_generated` option in `IMPORT FOREIGN SCHEMA` to include columns generated from expressions in the definitions of foreign tables imported from a foreign server. | +| Feature | Add support for the `TRUNCATE` command to truncate foreign tables. | +| Feature | Add support for `ON CONFLICT DO NOTHING` in `INSERT`. It maps to `INSERT IGNORE`. | +| Feature | Add support for the `IS [ NOT ] DISTINCT FROM` operator. | +| Bug Fix | Fix an oversight in assessing the pushdown `ORDER BY` clause. Don't push down the clause when the underlying `query_pathkeys` isn't safe to push down. | +| Bug Fix | Fix unstable ordering in the `SQL/select` test. | +| Bug Fix | Push down the `LIMIT n` clause when `OFFSET` is `NULL`. | +
