diff --git a/.github/workflows/deploy-draft.yml b/.github/workflows/deploy-draft.yml
index 963b7ee7ead..c5d7f2184a2 100644
--- a/.github/workflows/deploy-draft.yml
+++ b/.github/workflows/deploy-draft.yml
@@ -90,6 +90,7 @@ jobs:
deploy-message: ${{ github.event.pull_request.title }}
enable-commit-comment: false
github-deployment-environment: ${{ env.BUILD_ENV_NAME }}
+ alias: deploy-preview-${{ github.event.number }}
env:
NETLIFY_AUTH_TOKEN: ${{ secrets.NETLIFY_AUTH_TOKEN }}
NETLIFY_SITE_ID: ${{ secrets.NETLIFY_DEVELOP_SITE_ID }}
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/CommVaultGuide/02-PartnerInformation.mdx
new file mode 100644
index 00000000000..106ccfaa3e8
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/02-PartnerInformation.mdx
@@ -0,0 +1,12 @@
+---
+title: 'Partner Information'
+description: 'Details for Commvault'
+
+---
+| | |
+| ----------- | ----------- |
+| **Partner Name** | Commvault |
+| **Partner Product** | Commvault Backup & Recovery |
+| **Web Site** | https://www.commvault.com/ |
+| **Version** | Commvault Backup & Recovery 11.24 |
+| **Product Description** | Wherever your data resides, Commvault Backup & Recovery ensures availability via a single interface. It covers workloads, files, apps, and databases, including EDB Postgres Advanced Server and EDB Postgres Extended Server, from a single extensible platform and user interface. Commvault Backup & Recovery provides a comprehensive backup and archiving solution for trusted recovery, ransomware protection, and security. |
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/CommVaultGuide/03-SolutionSummary.mdx
new file mode 100644
index 00000000000..e2399ae117f
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/03-SolutionSummary.mdx
@@ -0,0 +1,11 @@
+---
+title: 'Solution Summary'
+description: 'Brief explanation of the solution and its purpose'
+---
+Commvault enables your business to streamline management of its continuously evolving data environment, whether the data is on-premises or in the cloud.
+
+Commvault PostgreSQL iDataAgent provides the flexibility to back up PostgreSQL, EDB Postgres Advanced Server, and EDB Postgres Extended Server databases in different modes and restore them when needed. You can perform a full or log backup, or restore database servers, individual databases, or archive logs, at any time and have full control over the process.
+
+Managing your data means knowing that it's protected and being able to report effectively on success or failure. Through an easy-to-use interface, you can quickly check the progress of your jobs to ensure things are moving as expected. You can also run pre-built reports on demand or on a schedule to stay informed.
+
+ ![Commvault Architecture](Images/Final-SolutionSummaryImage.png)
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/04-ConfiguringCommvaultBackupandRecovery.mdx b/advocacy_docs/partner_docs/CommVaultGuide/04-ConfiguringCommvaultBackupandRecovery.mdx
new file mode 100644
index 00000000000..12eafe85cb0
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/04-ConfiguringCommvaultBackupandRecovery.mdx
@@ -0,0 +1,86 @@
+---
+title: 'Configuring Commvault Backup & Recovery'
+description: 'Walkthrough on configuring Commvault Backup & Recovery'
+---
+
+Implementing Commvault Backup & Recovery with an EDB database requires the following components:
+
+- EDB Postgres Advanced Server or EDB Postgres Extended Server
+- Commvault Backup & Recovery software
+
+## Prerequisites
+
+- A running EDB Postgres Advanced Server or EDB Postgres Extended Server instance.
+- Commvault Backup & Recovery installed.
+- EDB Postgres Advanced Server or EDB Postgres Extended Server application path and library directory path (for example, `c:\Program Files\edb\as13\bin` and `c:\Program Files\edb\as13\lib`).
+- The login credentials used to access the EDB Postgres Advanced Server or EDB Postgres Extended Server database.
+- EDB Postgres Advanced Server or EDB Postgres Extended Server archive log directory configured.
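+
+Before configuring Commvault, you can optionally confirm most of these values from the database side. The following is a minimal sketch run with `psql`/`edb-psql`; the parameters shown are standard PostgreSQL/EDB settings, and the values returned are environment specific.
+
+```sql
+-- Optional pre-checks, run as the database user that Commvault will use.
+SHOW server_version;    -- version of the EDB server to be protected
+SHOW data_directory;    -- data directory; on a default install it sits alongside the bin and lib directories
+SHOW archive_mode;      -- expected to be 'on' when WAL archiving is configured
+SHOW archive_command;   -- indicates where archive (WAL) logs are written
+```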
+
+## Configure Commvault Backup & Recovery for EDB Postgres Advanced Server or EDB Postgres Extended Server
+
+### Set up a Disk Storage Pool and Database Server Backup Plan
+1. Run the Core Setup Wizard from the Commvault Backup & Recovery Command Center on the machine where Commvault Backup & Recovery is installed. The wizard helps you set up a disk storage pool and modify the server backup plan according to your requirements.
+2. Set up storage pool/disk storage: From the Welcome page, click `Let's get started`. On the `Disk` tab, in the `Name` box, enter a name for the storage pool.
+3. In the `MediaAgent` box, accept the default value.
+4. For `Type`, select the `Local` radio button.
+5. In the `Backup location` box, click `Browse` to assign a path where backups will be stored.
+6. To enable deduplication on the storage, move the `Use deduplication` toggle to the right, and in the `Deduplication DB location` box, browse to select the path to the deduplication database.
+7. To move to the next setup option, click `Save`.
+
+ ![Setup Storage Pool/Disk Storage](Images/DiskStorageConf.png)
+
+8. Creating a server backup plan in Core Setup: To create a server backup plan in Core Setup, define where the data is stored and how often the data must be backed up.
+9. In the `Plan name` box, type the name of the plan, for example, `EPAS Server Plan`.
+10. In the `Backup destinations` section, select the storage and retention period.
+11. In the `RPO` section, select the backup frequency and the start time for backups to run at that frequency.
+
+ ![Creating a Server Backup Plan in Core Setup](Images/BackupPlanConf.png)
+
+### Install a client on an EDB database server
+1. From the navigation pane of the Commvault Backup & Recovery Command Center, go to `Protect > Databases`.
+2. Click `Add server`.
+3. Select the database type for the EDB database, which in this case is `PostgreSQL`.
+
+ ![Adding a database server](Images/ServerAdd1.png)
+
+4. In the `Database server name` box, enter the server name.
+5. In the `Username` and `Password` boxes, enter the credentials to connect to the server.
+6. From the `Plan` list, select the server plan created in step 8 of Set up a Disk Storage Pool and Database Server Backup Plan for use with your EDB database.
+
+ ![Adding a database server](Images/ServerAdd2.png)
+
+7. Once the server is added, a job runs to install a client on the server. The following screenshots show the process from job creation to software installation.
+
+ ![Adding a database server](Images/ServerAdd3.png)
+
+ ![Adding a database server](Images/ServerAdd4.png)
+
+ ![Adding a database server](Images/ServerAdd5.png)
+
+ ![Adding a database server](Images/ServerAdd6.png)
+
+### Configure the EDB Database Instances to back up and protect
+1. From the navigation pane, go to `Protect > Databases > DB Instances`.
+2. Click `Add instance`, and then select `PostgreSQL`.
+
+ ![Creating a database instance](Images/CreateInstance1.png)
+
+3. From the `Server name` list, select the server where you want to create the new instance.
+4. In the `Instance Name` box, type the EDB database instance name.
+5. From the `Plan` list, select the server plan created in step 8 of Set up a Disk Storage Pool and Database Server Backup Plan for use with your EDB database.
+6. Under `Connection details`, enter the following information. A quick way to confirm these values from the database itself is shown after this procedure.
+    - In the `Database user` box, type the user name used to access the EDB database instance.
+    - In the `Password` box, type the EDB database user account password.
+    - In the `Port` box, type the port used for communication between the EDB database and its clients.
+    - In the `Maintenance DB` box, type the name of a system database that's used as the maintenance database.
+    - In the `PostgreSQL` section, enter the paths for `Binary Directory`, `Lib Directory`, and `Archive Log Directory`.
+
+
+ ![Creating a database instance](Images/CreateInstance2.png)
+
+
+7. Your database instance to back up is now created. You can now view its configuration.
+
+ ![Creating a database instance](Images/CreateInstance3.png)
+
+ ![Creating a database instance](Images/CreateInstance4.png)
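+
+If you want to double-check the connection details entered in step 6 from the database itself, the following is a minimal sketch. The database names listed are examples; the maintenance database is typically `postgres` or, for EDB Postgres Advanced Server, `edb`.
+
+```sql
+-- Optional verification of the values supplied to Commvault.
+SHOW port;                -- must match the value entered in the Port box
+SELECT current_user;      -- the database user supplied to Commvault
+SELECT datname            -- candidate maintenance databases
+  FROM pg_database
+ WHERE datname IN ('postgres', 'edb');
+```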
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/05-UsingCommvaultBackupandRecovery.mdx b/advocacy_docs/partner_docs/CommVaultGuide/05-UsingCommvaultBackupandRecovery.mdx
new file mode 100644
index 00000000000..2bae8718e3e
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/05-UsingCommvaultBackupandRecovery.mdx
@@ -0,0 +1,161 @@
+---
+title: 'Using Commvault Backup & Recovery'
+description: 'Walkthroughs of multiple Commvault Backup & Recovery usage scenarios'
+---
+
+This section describes how to back up and restore an EDB database using Commvault Backup & Recovery.
+
+## Using Commvault Backup & Recovery
+
+Commvault provides two methods for backing up an EDB database and restoring it:
+
+1. DumpBasedBackupSet Backup and Restore
+2. FSBasedBackupSet Backup and Restore
+
+!!! Note
+    At this time, there's a known issue with FSBasedBackupSet restore that prevents proper restoration of the database. See the Known Issues section of this guide for more information.
+
+### DumpBasedBackupSet Backup and Restore
+
+Dump-based backup uses the `pg_dump` utility to take the backup.
+
+#### Taking DumpBasedBackupSet Backup
+
+1. Open the Commvault Backup & Recovery Command Center and, from the navigation pane, go to `Protect > Databases`.
+
+ ![Instances Page](Images/Dumpbackup1.png)
+
+2. Click the required instance.
+3. In the `Backup sets` section, click the `DumpBasedBackupSet` backup set.
+
+ ![Instances Page](Images/Dumpbackup2.png)
+
+4. In the `Database groups` section, click the database group that you want to back up. In this case it is `default`.
+
+ ![Select Database Group](Images/Dumpbackup3.png)
+
+5. In the `Backup` section, click `Back up now`.
+
+ ![Backup Section](Images/Dumpbackup4.png)
+
+6. On the `Select Backup Level` screen, select `Full`.
+
+ ![Backup Level Screen](Images/Dumpbackup5.png)
+
+7. A job is created to take the backup.
+
+ ![Backup Job](Images/Dumpbackup6.png)
+
+8. Once the backup job completes, its status changes to `Completed`.
+
+ ![Backup Job](Images/Dumpbackup7.png)
+
+#### Restoring DumpBasedBackupSet Backup
+
+A DumpBasedBackupSet backup can be used to restore individual databases.
+
+1. From the navigation pane, go to `Protect > Databases`.
+2. Click the instance that you want to restore.
+3. In the `Recovery points` calendar, select `DumpBasedBackupSet`.
+
+ ![Recovery Points Calendar](Images/Dumprestore1.png)
+
+4. Select a date from the calendar, and then click `Restore`.
+5. The `Backup Content` screen displays the databases available to restore. Select the required database, or select all of them to restore everything.
+
+ ![Backup Content Screen](Images/Dumprestore2.png)
+
+6. Click `Restore`.
+7. From the `Restore Options` screen, select the `Destination Server` and `Destination Instance`, and then click `Submit`.
+
+ ![Restore Options Screen](Images/Dumprestore3.png)
+
+8. A job is created to restore the backup.
+
+ ![Restore Job](Images/Dumprestore4.png)
+
+ ![Restore Job](Images/Dumprestore5.png)
+
+9. Once the restore completes successfully, log in to the EDB database and check that the restore operation recovered the data. In the example below, we connected to an EDB Postgres Advanced Server instance.
+
+```bash
+edb=#
+edb=# \l
+ List of databases
+ Name | Owner | Encoding | Collate | Ctype | ICU | Access privileges
+-------------+--------------+----------+----------------------------+----------------------------+-----+-------------------------------
+ edb | enterprisedb | UTF8 | English_United States.1252 | English_United States.1252 | |
+ epas13_test | enterprisedb | UTF8 | English_United States.1252 | English_United States.1252 | |
+ postgres | enterprisedb | UTF8 | English_United States.1252 | English_United States.1252 | |
+ template0 | enterprisedb | UTF8 | English_United States.1252 | English_United States.1252 | | =c/enterprisedb +
+ | | | | | | enterprisedb=CTc/enterprisedb
+ template1 | enterprisedb | UTF8 | English_United States.1252 | English_United States.1252 | | =c/enterprisedb +
+ | | | | | | enterprisedb=CTc/enterprisedb
+(5 rows)
+
+
+edb=#
+edb=# \c epas13_test
+You are now connected to database "epas13_test" as user "enterprisedb".
+epas13_test=# \dt
+ List of relations
+ Schema | Name | Type | Owner
+--------+------------------+-------+--------------
+ public | tp_department_db | table | enterprisedb
+ public | tp_sales_db | table | enterprisedb
+(2 rows)
+
+
+epas13_test=# select * from tp_department_db;
+ deptno | dname | location
+--------+-------------+----------
+ 10 | Development | Pakistan
+ 20 | Testing | Pakistan
+ 30 | CM | Pakistan
+ 40 | Marketing | India
+(4 rows)
+
+
+epas13_test=# select * from tp_sales_db;
+ salesman_id | salesman_name | sales_region | sales_amount | deptno
+-------------+---------------+--------------+--------------+--------
+ 100 | Person 1 | CITY 1 | 1 | 10
+ 110 | Person 2 | CITY 2 | 2 | 20
+ 120 | Person 3 | CITY 3 | 3 | 30
+ 130 | Person 4 | CITY 4 | 10000 | 40
+(4 rows)
+
+
+epas13_test=# select * from v1;
+ dept_no | dept_name | sales_no | sales_name | sales_salary | sales_dept_no
+---------+-------------+----------+------------+--------------+---------------
+ 10 | Development | 100 | Person 1 | 1 | 10
+ 20 | Testing | 110 | Person 2 | 2 | 20
+ 30 | CM | 120 | Person 3 | 3 | 30
+ 40 | Marketing | 130 | Person 4 | 10000 | 40
+(4 rows)
+
+
+epas13_test=# desc tp_sales_db;
+ Table "public.tp_sales_db"
+ Column | Type | Collation | Nullable | Default
+---------------+-----------------------+-----------+----------+------------------------------
+ salesman_id | integer | | |
+ salesman_name | character varying(30) | | |
+ sales_region | character varying(30) | | |
+ sales_amount | integer | | | nextval('sal_seq'::regclass)
+ deptno | integer | | |
+Indexes:
+ "lower_reg_idx" btree (lower(sales_region::text))
+ "reg1_idx" btree (salesman_id)
+Foreign-key constraints:
+ "department_employee_fk" FOREIGN KEY (deptno) REFERENCES tp_department_db(deptno)
+
+
+epas13_test=#
+epas13_test=#
+
+```
+
+## Known Issues
+FSBasedBackupSet restore has an issue if the default `edb` directory (for example, `*:\Program files\edb`) has been lost or deleted. If this occurs, then after a restore is performed, the permissions on the restored directories aren't recovered. Instead, the directory inherits the permissions from the parent directory, which doesn't allow EDB Postgres Advanced Server services to start on the restored directory. We're working with Commvault to resolve the issue.
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/CommVaultGuide/06-CertificationEnvironment.mdx
new file mode 100644
index 00000000000..c5558f699a7
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/06-CertificationEnvironment.mdx
@@ -0,0 +1,11 @@
+---
+title: 'Certification Environment'
+description: 'Overview of the certification environment used in the certification of Commvault Backup & Recovery'
+---
+
+| | |
+| ----------- | ----------- |
+| **Certification Test Date** | June 16, 2022 |
+| **EDB Postgres Advanced Server** | 11, 12, 13, 14 |
+| **EDB Postgres Extended Server** | 11, 12, 13 |
+| **Commvault Backup & Recovery** | 11.24 |
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/BackupPlanConf.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/BackupPlanConf.png
new file mode 100644
index 00000000000..c56feb824cf
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/BackupPlanConf.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9463101302ce5e069b33d82d448207ca6c20c3b6f5c85942419d887c5a62c566
+size 344536
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance1.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance1.png
new file mode 100644
index 00000000000..594d040d937
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7697b21ab5c92762382472f732386c45b421dcd35343a61de8c9546afa4d061b
+size 450573
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance2.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance2.png
new file mode 100644
index 00000000000..b72946d0689
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3cbbff07d843e7e580e6a786bcd12f62e205ab37986e4820d13fcf84e3135eb6
+size 467612
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance3.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance3.png
new file mode 100644
index 00000000000..d6224cb7956
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6acaca4eb44f29bf34081666f345613fd730d1ab258a17dff2df43501e44cc5
+size 436576
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance4.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance4.png
new file mode 100644
index 00000000000..af99d98bc32
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/CreateInstance4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f2363a4bf2b1b981870fc53a65a90079166aa05ecfe92bf8113357e28c7b515
+size 337029
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/DiskStorageConf.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/DiskStorageConf.png
new file mode 100644
index 00000000000..45a5ccfdb51
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/DiskStorageConf.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1028e447f682f5ad7325cd991a5727dba73165a5b177e6d77ce4931766aa06a3
+size 321491
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup1.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup1.png
new file mode 100644
index 00000000000..358723fbac0
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:802842e22184cbeee0e20eee39d414493ce2a7c88ab72467760f95ffb92a1deb
+size 301793
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup2.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup2.png
new file mode 100644
index 00000000000..27fa618af5d
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d873f73c562518df180400af2080f38df49fac5c102b2025a125cb26ab81fd2
+size 530067
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup3.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup3.png
new file mode 100644
index 00000000000..526da04153a
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19edc3af934be9a0bb617833764ca32438937831a20fe7825abed90c2d1b2e41
+size 479921
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup4.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup4.png
new file mode 100644
index 00000000000..4d78586a916
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44041863121daf7b671740ce0a787fc892a2edb0b99373940c6bbb030e6f5606
+size 465905
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup5.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup5.png
new file mode 100644
index 00000000000..6b3da909db9
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89920505248afc94d2ec9654f8ef8233cdfec200e9949354829895d5d586ff6b
+size 507563
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup6.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup6.png
new file mode 100644
index 00000000000..c8f37d4e76e
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9b97837018e5f287cf36df082e72e9db330fecba08bc36584c307d6180d55f1
+size 544061
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup7.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup7.png
new file mode 100644
index 00000000000..10ba5d8b15f
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumpbackup7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0a822aaf87fdbdb32b7317b0e8ac059f2a3a42ffacbc9045651096e2cbd9982
+size 352907
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore1.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore1.png
new file mode 100644
index 00000000000..8677f46778f
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65f4454151a2e72b686fb12de551b5abf18932e6ad451c87299f5638206a55bd
+size 535213
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore2.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore2.png
new file mode 100644
index 00000000000..f0c45897974
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f27122d61a441bded0064d5932fe6b310e105ce5ef719d864be5b044108d683
+size 306560
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore3.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore3.png
new file mode 100644
index 00000000000..7289dbf65f4
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:47f199083bd9b242a7641feac3e0b2593cdd33412f0f9b6ed114480adfc6aab1
+size 460653
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore4.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore4.png
new file mode 100644
index 00000000000..ec9b2150e37
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbdfbd13a8406a50cfbab49b90f6a508a420b7a90a8f9c55f4780e4d6f16e783
+size 266532
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore5.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore5.png
new file mode 100644
index 00000000000..b863425a8ff
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Dumprestore5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f096c72af45c35f659f2bc597c0bd746f57fd46d2e2230115579aa3108bd3c00
+size 529765
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Final-SolutionSummaryImage.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Final-SolutionSummaryImage.png
new file mode 100644
index 00000000000..d8a0701b3c2
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Final-SolutionSummaryImage.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3ee165b979166c9f89530986a43ac52105af94314df878716a58f82cfda38fa
+size 144891
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup1.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup1.png
new file mode 100644
index 00000000000..0b77017077b
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:327611545bfd828e26805c2223f5d221b87c4c5a0204420dfd5389fdca2e2c24
+size 562618
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup2.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup2.png
new file mode 100644
index 00000000000..54f116aa8b1
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62e30933af18eb1ec9a62405fff60fcf4fb55d1b423c9224a2c209eeac6d9223
+size 536288
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup3.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup3.png
new file mode 100644
index 00000000000..08b0ddf3e1e
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b20df76025cdbbcae961a580d7b1abb4d89715c1f1e6247828282a38458b170
+size 381865
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup4.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup4.png
new file mode 100644
index 00000000000..ecab7af0484
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d97d9e9da66eb5f532844e18bf2199abb3a050f2f6286ccd6e24826ded7c1b55
+size 479195
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup5.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup5.png
new file mode 100644
index 00000000000..9fe8cdca72f
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e5d120becf5fe5b9387713c2e16c0320d5311f4eb3f9479f67729696c85043e
+size 512830
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup6.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup6.png
new file mode 100644
index 00000000000..3f990f05441
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6e4abe7118135cfc09ed79e93b0496ca19a39627d5a76def225061c1305dd69
+size 476978
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup7.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup7.png
new file mode 100644
index 00000000000..a6fe0607641
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/Fsbackup7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0e95972db3c924ae138c7ab03bf8892d5bef1b019c6595a3908d00c239a714c
+size 257436
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/PartnerProgram.jpg.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/PartnerProgram.jpg.png
new file mode 100644
index 00000000000..a51f268a007
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/PartnerProgram.jpg.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6dddb2403778294d50b9c500a3b961fc5ed0aa764d4c425cd44c1c90193915e5
+size 9855
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd1.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd1.png
new file mode 100644
index 00000000000..3eea060e5db
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4384daa1209549d5457a3428881ca2ea4666b87c5c215c69d243e965bfeef723
+size 393950
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd2.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd2.png
new file mode 100644
index 00000000000..956dc7e3c71
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de882b0aded5dbb8485a556271c2af376e7c4e9988903596fbf916d4619c6db3
+size 416405
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd3.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd3.png
new file mode 100644
index 00000000000..7a14fe1b6bf
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92772f57dd2e211dc42eda0cc0d71b874a2754529acbdc29291f02ae60dd06d2
+size 476115
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd4.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd4.png
new file mode 100644
index 00000000000..69c9ec9fe8a
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8931a808f67d975ca9787988b7c29a2bc5aef8120ba7ec6c81b9bfe6d79fa72c
+size 341005
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd5.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd5.png
new file mode 100644
index 00000000000..0aef5efe6af
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f502b510b190d3dd1f6014012b98dddca58fadf4b9e8312f9c770ca58acb3e3e
+size 322179
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd6.png b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd6.png
new file mode 100644
index 00000000000..fb4f769e4b9
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/Images/ServerAdd6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:710cba0453db5ec043577068388ff21fd5447745d95608ccecd306383eb1e786
+size 292631
diff --git a/advocacy_docs/partner_docs/CommVaultGuide/index.mdx b/advocacy_docs/partner_docs/CommVaultGuide/index.mdx
new file mode 100644
index 00000000000..e2fe88d612a
--- /dev/null
+++ b/advocacy_docs/partner_docs/CommVaultGuide/index.mdx
@@ -0,0 +1,12 @@
+---
+title: 'Commvault Backup & Recovery Implementation Guide'
+indexCards: simple
+directoryDefaults:
+ iconName: handshake
+---
+
+
+
+
+EDB GlobalConnect Technology Partner Implementation Guide
+Commvault Backup & Recovery
diff --git a/product_docs/docs/bdr/4/security.mdx b/product_docs/docs/bdr/4/security.mdx
index fff196de662..ec785949a37 100644
--- a/product_docs/docs/bdr/4/security.mdx
+++ b/product_docs/docs/bdr/4/security.mdx
@@ -4,7 +4,7 @@ title: Security and roles
---
-The BDR extension can be created only by superusers. However, if you want, you can set up the `pgextwlist` extension and configure it to allow BDR to be created by a non-superuser.
+Only superusers can create the BDR extension. However, if you want, you can set up the `pgextwlist` extension and configure it to allow a non-superuser to create the BDR extension.
Configuring and managing BDR doesn't require superuser access, nor is that recommended.
The privileges required by BDR are split across the following default/predefined roles, named
diff --git a/product_docs/docs/bdr/4/sequences.mdx b/product_docs/docs/bdr/4/sequences.mdx
index a309731534c..776cd500eae 100644
--- a/product_docs/docs/bdr/4/sequences.mdx
+++ b/product_docs/docs/bdr/4/sequences.mdx
@@ -6,219 +6,219 @@ title: Sequences
Many applications require that unique surrogate ids be assigned to database entries.
Often the database `SEQUENCE` object is used to produce these. In
-PostgreSQL these can be either a manually created sequence using the
-`CREATE SEQUENCE` command and retrieved by calling `nextval()` function,
-or `serial` and `bigserial` columns or alternatively
-`GENERATED BY DEFAULT AS IDENTITY` columns.
+PostgreSQL, these can be either:
+- A manually created sequence using the
+`CREATE SEQUENCE` command and retrieved by calling the `nextval()` function
+- `serial` and `bigserial` columns or, alternatively,
+`GENERATED BY DEFAULT AS IDENTITY` columns
-However, standard sequences in PostgreSQL are not multi-node aware, and only
-produce values that are unique on the local node. This is important because
-unique ids generated by such sequences will cause conflict and data loss (by
-means of discarded `INSERTs`) in multi-master replication.
+However, standard sequences in PostgreSQL aren't multi-node aware and
+produce values that are unique only on the local node. This is important because
+unique ids generated by such sequences cause conflict and data loss (by
+means of discarded `INSERT` actions) in multi-master replication.
-## BDR Global Sequences
+## BDR global sequences
For this reason, BDR provides an application-transparent way to generate unique
ids using sequences on bigint or bigserial datatypes across the whole BDR group,
-called **global sequences**.
+called *global sequences*.
BDR global sequences provide an easy way for applications to use the
database to generate unique synthetic keys in an asynchronous distributed
-system that works for most - but not necessarily all - cases.
+system that works for most—but not necessarily all—cases.
Using BDR global sequences allows you to avoid the problems with insert
conflicts. If you define a `PRIMARY KEY` or `UNIQUE` constraint on a column
-which is using a global sequence, it is not possible for any node to ever get
+that's using a global sequence, no node can ever get
the same value as any other node. When BDR synchronizes inserts between the
nodes, they can never conflict.
-BDR global sequences extend PostgreSQL sequences, so are crash-safe. To use
-them, you must have been granted the `bdr_application` role.
+BDR global sequences extend PostgreSQL sequences, so they are crash-safe. To use
+them, you must be granted the `bdr_application` role.
There are various possible algorithms for global sequences:
- SnowflakeId sequences
-- Globally-allocated range sequences
+- Globally allocated range sequences
-SnowflakeId sequences generate values using an algorithm that does not require
-inter-node communication at any point, so is faster and more robust, as well
-as having the useful property of recording the timestamp at which they were
+SnowflakeId sequences generate values using an algorithm that doesn't require
+inter-node communication at any point. It's faster and more robust and has the
+useful property of recording the timestamp at which the values were
created.
SnowflakeId sequences have the restriction that they work only for 64-bit BIGINT
-datatypes and produce values 19 digits long, which may be too long for
+datatypes and produce values 19 digits long, which might be too long for
use in some host language datatypes such as Javascript Integer types.
-Globally-allocated sequences allocate a local range of values which can
-be replenished as-needed by inter-node consensus, making them suitable for
+Globally allocated sequences allocate a local range of values that can
+be replenished as needed by inter-node consensus, making them suitable for
either BIGINT or INTEGER sequences.
-A global sequence can be created using the `bdr.alter_sequence_set_kind()`
+You can create a global sequence using the `bdr.alter_sequence_set_kind()`
function. This function takes a standard PostgreSQL sequence and marks it as
a BDR global sequence. It can also convert the sequence back to the standard
-PostgreSQL sequence (see below).
+PostgreSQL sequence.
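+
+A minimal sketch of the conversion (the sequence name here is made up for illustration; the arguments follow the `galloc` example shown later in this page):
+
+```sql
+-- Hypothetical sequence used only to illustrate the call.
+CREATE SEQUENCE public.orders_id_seq;
+
+-- Mark it as a SnowflakeId global sequence.
+SELECT bdr.alter_sequence_set_kind('public.orders_id_seq'::regclass, 'snowflakeid');
+
+-- Convert it back to a standard (local) PostgreSQL sequence.
+SELECT bdr.alter_sequence_set_kind('public.orders_id_seq'::regclass, 'local');
+```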
BDR also provides the configuration variable `bdr.default_sequence_kind`, which
-determines what kind of sequence will be created when the `CREATE SEQUENCE`
-command is executed or when a `serial`, `bigserial` or
+determines the kind of sequence to create when the `CREATE SEQUENCE`
+command is executed or when a `serial`, `bigserial`, or
`GENERATED BY DEFAULT AS IDENTITY` column is created. Valid settings are:
-- `local` (the default) meaning that newly created
+- `local` (the default), meaning that newly created
sequences are the standard PostgreSQL (local) sequences.
-- `galloc` which always creates globally-allocated range sequences.
-- `snowflakeid` which creates global sequences for BIGINT sequences which
- consist of time, nodeid and counter components, cannot be used with
- INTEGER sequences (so it can be used for `bigserial` but not for `serial`).
-- `timeshard` older version of SnowflakeId sequence which is provided for
- backwards compatibility only, the SnowflakeId is preferred
-- `distributed` special value which can only be used for
- `bdr.default_sequence_kind` and will select `snowflakeid` for `int8`
- sequences (i.e. `bigserial`) and `galloc` sequence for `int4`
- (i.e. `serial`) and `int2` sequences.
+- `galloc`, which always creates globally allocated range sequences.
+- `snowflakeid`, which creates global sequences for BIGINT sequences that
+ consist of time, nodeid, and counter components. You can't use it with
+ INTEGER sequences (so you can use it for `bigserial` but not for `serial`).
+- `timeshard`, which is the older version of SnowflakeId sequence and is provided for
+ backward compatibility only. The SnowflakeId is preferred.
+- `distributed`, which is a special value that you can use only for
+ `bdr.default_sequence_kind`. It selects `snowflakeid` for `int8`
+ sequences (i.e., `bigserial`) and `galloc` sequence for `int4`
+ (i.e., `serial`) and `int2` sequences.
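+
+A short sketch of how this setting is typically used (the table name is made up, and this assumes the variable can be set at the session level):
+
+```sql
+-- Make sequences created in this session SnowflakeId global sequences by default.
+SET bdr.default_sequence_kind = 'snowflakeid';
+
+-- The backing sequence for id is created as a SnowflakeId global sequence.
+CREATE TABLE events (id bigserial PRIMARY KEY, payload text);
+
+-- Inspect the sequence kinds known to BDR.
+SELECT * FROM bdr.sequences;
+```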
The `bdr.sequences` view shows information about individual sequence kinds.
`currval()` and `lastval()` work correctly for all types of global sequence.
-### SnowflakeId Sequences
+### SnowflakeId sequences
-The ids generated by SnowflakeId sequences are loosely time-ordered so they can
-be used to get the approximate order of data insertion, like standard PostgreSQL
+The ids generated by SnowflakeId sequences are loosely time ordered so you can
+use them to get the approximate order of data insertion, like standard PostgreSQL
sequences. Values generated within the same millisecond might be out of order,
-even on one node. The property of loose time-ordering means they are suitable
+even on one node. The property of loose time ordering means they are suitable
for use as range partition keys.
-SnowflakeId sequences work on one or more nodes, and do not require any inter-node
-communication after the node join process completes. So they may continue to
-be used even if there's the risk of extended network partitions, and are not
+SnowflakeId sequences work on one or more nodes and don't require any inter-node
+communication after the node join process completes. So you can continue to
+use them even if there's the risk of extended network partitions. They aren't
affected by replication lag or inter-node latency.
SnowflakeId sequences generate unique ids in a different
-way to standard sequences. The algorithm uses 3 components for a
+way from standard sequences. The algorithm uses three components for a
sequence number. The first component of the sequence is a timestamp
at the time of sequence number generation. The second component of
the sequence number is the unique id assigned to each BDR node,
-which ensures that the ids from different nodes will always be
-different. Finally, the third component is the number generated by
-the local sequence itself.
+which ensures that the ids from different nodes are always different.
+The third component is the number generated by
+the local sequence.
-While adding a unique node id to the sequence number would be enough
+While adding a unique node id to the sequence number is enough
to ensure there are no conflicts, we also want to keep another useful
-property of sequences, which is that the ordering of the sequence
+property of sequences. The ordering of the sequence
numbers roughly corresponds to the order in which data was inserted
into the table. Putting the timestamp first ensures this.
A few limitations and caveats apply to SnowflakeId sequences.
-SnowflakeId sequences are 64-bits wide and need a `bigint` or `bigserial`.
-Values generated will be at least 19 digits long.
-There is no practical 32-bit `integer` version, so cannot be used with `serial`
-sequences - use globally-allocated range sequences instead.
+SnowflakeId sequences are 64 bits wide and need a `bigint` or `bigserial`.
+Values generated are at least 19 digits long.
+There's no practical 32-bit `integer` version, so you can't use it with `serial`
+sequences. Use globally allocated range sequences instead.
-For SnowflakeId there is a limit of 4096 sequence values generated per
-millisecond on any given node (this means about 4 million sequence values per
-second). In case the sequence value generation wraps around within given
-millisecond, the SnowflakeId sequence will wait until next millisecond and get
+For SnowflakeId there's a limit of 4096 sequence values generated per
+millisecond on any given node (about 4 million sequence values per
+second). In case the sequence value generation wraps around within a given
+millisecond, the SnowflakeId sequence waits until the next millisecond and gets a
fresh value for that millisecond.
-Since SnowflakeId sequences encode timestamp into sequence value, new sequence
-values can only be generated within given time frame (depending on system clock).
-The oldest timestamp which can be used 2016-10-07 which is the epoch time for
-the SnowflakeId. The values will wrap to negative values in year 2086 and
+Since SnowflakeId sequences encode timestamps into the sequence value, you can generate new sequence
+values only within the given time frame (depending on the system clock).
+The oldest timestamp that you can use is 2016-10-07, which is the epoch time for
+the SnowflakeId. The values wrap to negative values in the year 2086 and
completely run out of numbers by 2156.
-Since timestamp is important part of SnowflakeId sequence, there is additional
-protection from generating sequences with older timestamp than the latest one
-used within the lifetime of postgres process (but not between postgres restarts).
+Since timestamp is an important part of a SnowflakeId sequence, there's additional
+protection from generating sequences with a timestamp older than the latest one
+used in the lifetime of a postgres process (but not between postgres restarts).
The `INCREMENT` option on a sequence used as input for SnowflakeId sequences is
-effectively ignored. This could be relevant for applications that do sequence
+effectively ignored. This might be relevant for applications that do sequence
ID caching, like many object-relational mapper (ORM) tools, notably Hibernate.
-Because the sequence is time-based, this has little practical effect since the
-sequence will have advanced to a new non-colliding value by the time the
+Because the sequence is time based, this has little practical effect since the
+sequence advances to a new noncolliding value by the time the
application can do anything with the cached values.
-Similarly, the `START`, `MINVALUE`, `MAXVALUE` and `CACHE` settings may
-be changed on the underlying sequence, but there is no benefit to doing
+Similarly, you might change the `START`, `MINVALUE`, `MAXVALUE`, and `CACHE` settings
+on the underlying sequence, but there's no benefit to doing
so. The sequence's low 14 bits are used and the rest is discarded, so
-the value range limits do not affect the function's result. For the same
-reason, `setval()` is not useful for SnowflakeId sequences.
+the value range limits don't affect the function's result. For the same
+reason, `setval()` isn't useful for SnowflakeId sequences.
#### Timeshard sequences
-Timeshard sequences are provided for backwards compatibility with existing
-installations but are not recommended for new application use. It's recommended
-to use the SnowflakeId sequence instead.
+Timeshard sequences are provided for backward compatibility with existing
+installations but aren't recommended for new application use. We recommend
+using the SnowflakeId sequence instead.
-Timeshard is very similar to SnowflakeId, but has different limits and fewer
-protections and worse performance.
+Timeshard is very similar to SnowflakeId but has different limits and fewer
+protections and slower performance.
-The differences between timeshard and SnowflakeId are as following:
+The differences between timeshard and SnowflakeId are as follows:
- Timeshard can generate up to 16384 per millisecond (about 16 million per
- second) which is more than SnowflakeId, however there is no protection
- against wraparound within given millisecond so schemas using the timeshard
- sequence should protect use `UNIQUE` constraint when using timeshard values
+ second), which is more than SnowflakeId. However, there's no protection
+ against wraparound within a given millisecond. Schemas using the timeshard
+   sequence must use a `UNIQUE` constraint to guard against duplicate values
-   for given column.
+   for a given column.
- - The timestamp component of timeshard sequence will run out of values in
- the year 2050, and if used in combination with bigint, the values will wrap
+ - The timestamp component of timeshard sequence runs out of values in
+ the year 2050 and, if used in combination with bigint, the values wrap
to negative numbers in the year 2033. This means that sequences generated
- after 2033 will have negative values. This is considerably shorter time
+ after 2033 have negative values. This is a considerably shorter time
span than SnowflakeId and is the main reason why SnowflakeId is preferred.
- Timeshard sequences require occasional disk writes (similar to standard local
- sequences), while SnowflakeId are calculated in memory so the SnowflakeId
+ sequences). SnowflakeIds are calculated in memory so the SnowflakeId
sequences are in general a little faster than timeshard sequences.
-### Globally-allocated range Sequences
+### Globally allocated range sequences
-The globally-allocated range (or `galloc`) sequences allocate ranges (chunks)
+The globally allocated range (or `galloc`) sequences allocate ranges (chunks)
of values to each node. When the local range is used up, a new range is
allocated globally by consensus amongst the other nodes. This uses the key
-space efficiently, but requires that the local node be connected to a majority
-of the nodes in the cluster for the sequence generator to progress, when the
-currently assigned local range has been used up.
-
-Unlike SnowflakeId sequences, galloc sequences support all sequence data types
-provided by PostgreSQL - `smallint`, `integer` and `bigint`. This means that
-galloc sequences can be used in environments where 64-bit sequences are
-problematic, such as using integers in javascript, since that supports only
+space efficiently but requires that the local node be connected to a majority
+of the nodes in the cluster for the sequence generator to progress when the
+currently assigned local range is used up.
+
+Unlike SnowflakeId sequences, `galloc` sequences support all sequence data types
+provided by PostgreSQL: `smallint`, `integer`, and `bigint`. This means that
+you can use `galloc` sequences in environments where 64-bit sequences are
+problematic. Examples include using integers in javascript, since that supports only
53-bit values, or when the sequence is displayed on output with limited space.
The range assigned by each voting is currently predetermined based on the
datatype the sequence is using:
-- smallint - 1 000 numbers
-- integer - 1 000 000 numbers
-- bigint - 1 000 000 000 numbers
+- smallint — 1 000 numbers
+- integer — 1 000 000 numbers
+- bigint — 1 000 000 000 numbers
-Each node will allocate two chunks of seq_chunk_size, one for the current use
+Each node allocates two chunks of seq_chunk_size, one for the current use
plus a reserved chunk for future usage, so the values generated from any one
-node will increase monotonically. However, viewed globally, the values
-generated will not be ordered at all. This could cause a loss of performance
-due to the effects on b-tree indexes, and will typically mean that generated
-values will not be useful as range partition keys.
+node increase monotonically. However, viewed globally, the values
+generated aren't ordered at all. This might cause a loss of performance
+due to the effects on b-tree indexes and typically means that generated
+values aren't useful as range partition keys.
-The main downside of the galloc sequences is that once the assigned range is
+The main downside of the `galloc` sequences is that once the assigned range is
used up, the sequence generator has to ask for consensus about the next range
-for the local node that requires inter-node communication, which could
-lead to delays or operational issues if the majority of the BDR group is not
-accessible. This may be avoided in later releases.
-
-The `CACHE`, `START`, `MINVALUE` and `MAXVALUE` options work correctly
-with galloc sequences, however you need to set them before transforming
-the sequence to galloc kind. The `INCREMENT BY` option also works
-correctly, however, you cannot assign an increment value which is equal
+for the local node that requires inter-node communication. This could
+lead to delays or operational issues if the majority of the BDR group isn't
+accessible. This might be avoided in later releases.
+
+The `CACHE`, `START`, `MINVALUE`, and `MAXVALUE` options work correctly
+with `galloc` sequences. However, you need to set them before transforming
+the sequence to the `galloc` kind. The `INCREMENT BY` option also works
+correctly. However, you can't assign an increment value that's equal
to or more than the above ranges assigned for each sequence datatype.
-`setval()` does not reset the global state for galloc sequences and
-should not be used.
+`setval()` doesn't reset the global state for `galloc` sequences; don't use it.
-A few limitations apply to galloc sequences. BDR tracks galloc sequences in a
-special BDR catalog [bdr.sequence_alloc](catalogs#bdrsequence_alloc). This
-catalog is required to track the currently allocated chunks for the galloc
+A few limitations apply to `galloc` sequences. BDR tracks `galloc` sequences in a
+special BDR catalog [bdr.sequence_alloc](catalogs#bdrsequence_alloc). This
+catalog is required to track the currently allocated chunks for the `galloc`
sequences. The sequence name and namespace is stored in this catalog. Since the
-sequence chunk allocation is managed via Raft whereas any changes to the
-sequence name/namespace is managed via replication stream, BDR currently does
-not support renaming galloc sequences, or moving them to another namespace or
-renaming the namespace that contains a galloc sequence. The user should be
+sequence chunk allocation is managed by Raft, whereas any changes to the
+sequence name/namespace is managed by the replication stream, BDR currently doesn't
+support renaming `galloc` sequences or moving them to another namespace or
+renaming the namespace that contains a `galloc` sequence. Be
mindful of this limitation while designing application schema.
#### Converting a local sequence to a galloc sequence
@@ -229,13 +229,13 @@ prerequisites.
##### 1. Verify that sequence and column data type match
Check that the sequence's data type matches the data type of the column with
-which it will be used. For example, it is possible to create a `bigint` sequence
+which it will be used. For example, you can create a `bigint` sequence
and assign an `integer` column's default to the `nextval()` returned by that
sequence. With galloc sequences, which for `bigint` are allocated in blocks of
-1 000 000 000, this will quickly result in the values returned by `nextval()`
+1 000 000 000, this quickly results in the values returned by `nextval()`
exceeding the `int4` range if more than two nodes are in use.
-The following example demonstrates what can happen:
+The following example shows what can happen:
```sql
CREATE SEQUENCE int8_seq;
@@ -259,8 +259,8 @@ SELECT bdr.alter_sequence_set_kind('public.int8_seq'::regclass, 'galloc', 1);
ALTER TABLE seqtest ALTER COLUMN id SET DEFAULT nextval('int8_seq'::regclass);
```
-After executing `INSERT INTO seqtest VALUES(DEFAULT)` on two nodes, the table will
-contain the following values:
+After executing `INSERT INTO seqtest VALUES(DEFAULT)` on two nodes, the table
+contains the following values:
```sql
SELECT * FROM seqtest;
@@ -271,26 +271,25 @@ SELECT * FROM seqtest;
(2 rows)
```
-However, attempting the same operation on a third node will fail with an
-`integer out of range` error, as the sequence will have generated the value
-`4000000002`. The next section contains more details on how chunks of sequences
-are allocated.
+However, attempting the same operation on a third node fails with an
+`integer out of range` error, as the sequence generated the value
+`4000000002`.
!!! Tip
- The current data type of a sequence can be retrieved from the PostgreSQL
+ You can retrieve the current data type of a sequence from the PostgreSQL
[pg_sequences](https://www.postgresql.org/docs/current/view-pg-sequences.html)
- view. The data type of a sequence can be modified with `ALTER SEQUENCE ... AS ...`,
- e.g.: `ALTER SEQUENCE public.sequence AS integer`, as long as its current
- value has not exceeded the maximum value of the new data type.
+ view. You can modify the data type of a sequence with `ALTER SEQUENCE ... AS ...`,
+ for example, `ALTER SEQUENCE public.sequence AS integer`, as long as its current
+ value doesn't exceed the maximum value of the new data type.
##### 2. Set a new start value for the sequence
-When the sequence kind is altered to `galloc`, it will be rewritten and restart from
+When the sequence kind is altered to `galloc`, it's rewritten and restarts from
the defined start value of the local sequence. If this happens on an existing
-sequence in a production database you, will need to query the current value
+sequence in a production database, you need to query the current value and
then set the start value appropriately. To assist with this use case, BDR
allows users to pass a starting value with the function `bdr.alter_sequence_set_kind()`.
-If you are already using offset and you have writes from multiple nodes, you
+If you're already using offset and you have writes from multiple nodes, you
need to check what is the greatest used value and restart the sequence at least
to the next value.
@@ -306,14 +305,14 @@ SELECT max((x->'response'->0->>'nextval')::bigint)
SELECT bdr.alter_sequence_set_kind('public.sequence'::regclass, 'galloc', $MAX + $MARGIN);
```
-Since users cannot lock a sequence, you must leave a `$MARGIN` value to allow
+Since users can't lock a sequence, you must leave a `$MARGIN` value to allow
operations to continue while the `max()` value is queried.
-The `bdr.sequence_alloc` table will give information on the chunk size and what
-ranges are allocated around the whole cluster.
-In this example we started our sequence from `333,` and we have two nodes in the
-cluster, we can see that we have a number of allocation 4, that is 2 per node
-and the chunk size is 1000000 that is related to an integer sequence.
+The `bdr.sequence_alloc` table gives information on the chunk size and the
+ranges allocated around the whole cluster.
+In this example, we started our sequence from `333`, and we have two nodes in the
+cluster. We can see that the number of allocations is 4, that is, 2 per node,
+and that the chunk size is 1000000, which corresponds to an integer sequence.
```sql
SELECT * FROM bdr.sequence_alloc
@@ -362,54 +361,54 @@ SELECT last_value AS range_start, log_cnt AS range_end
3000334 | 4000333
```
-**NOTE** You can't combine it to single query (like WHERE ctid IN ('(0,2)', '(0,3)'))
-as that will still only show the first range.
+!!! Note
+    You can't combine this into a single query (like `WHERE ctid IN ('(0,2)', '(0,3)')`)
+    because that still shows only the first range.
-When a node finishes a chunk, it will ask a consensus for a new one and get the
-first available; in our case, it will be from 4000334 to 5000333. This will be
-the new reserved chunk and it will start to consume the old reserved chunk.
+When a node finishes a chunk, it asks for consensus on a new one and gets the
+first available. In this example, that's the range from 4000334 to 5000333. This becomes
+the new reserved chunk, and the node starts to consume the old reserved chunk.
-## UUIDs, KSUUIDs and Other Approaches
+## UUIDs, KSUUIDs, and other approaches
There are other ways to generate globally unique ids without using the global
sequences that can be used with BDR. For example:
-- UUIDs, and their BDR variant, KSUUIDs
-- Local sequences with a different offset per node (i.e. manual)
-- An externally co-ordinated natural key
+- UUIDs and their BDR variant, KSUUIDs
+- Local sequences with a different offset per node (i.e., manual)
+- An externally coordinated natural key
-Please note that BDR applications **cannot** use other methods safely:
-counter-table
-based approaches relying on `SELECT ... FOR UPDATE`, `UPDATE ... RETURNING ...`
-or similar for sequence generation will not work correctly in BDR, because BDR
-does not take row locks between nodes. The same values will be generated on
+BDR applications can't use other methods safely:
+counter-table-based approaches relying on `SELECT ... FOR UPDATE`, `UPDATE ... RETURNING ...`
+or similar for sequence generation doesn't work correctly in BDR because BDR
+doesn't take row locks between nodes. The same values are generated on
more than one node. For the same reason, the usual strategies for "gapless"
-sequence generation do not work with BDR. In most cases the application should
-coordinate generation of sequences that must be gapless from some external
-source using two-phase commit, or it should only generate them on one node in
+sequence generation don't work with BDR. In most cases, the application
+coordinates generation of sequences that must be gapless from some external
+source using two-phase commit. Or it generates them only on one node in
the BDR group.
### UUIDs and KSUUIDs
`UUID` keys instead avoid sequences entirely and
use 128-bit universal unique identifiers. These are random
-or pseudorandom values that are so large that it is nearly
-impossible for the same value to be generated twice. There is
+or pseudorandom values that are so large that it's nearly
+impossible for the same value to be generated twice. There's
no need for nodes to have continuous communication when using `UUID` keys.
-In the incredibly unlikely event of a collision, conflict detection will
-choose the newer of the two inserted records to retain. Conflict logging,
-if enabled, will record such an event, but it is
-*exceptionally* unlikely to ever occur, since collisions
-only become practically likely after about `2^64` keys have been generated.
+In the unlikely event of a collision, conflict detection
+chooses the newer of the two inserted records to retain. Conflict logging,
+if enabled, records such an event. However, it's
+exceptionally unlikely to ever occur, since collisions
+become practically likely only after about `2^64` keys are generated.
-The main downside of `UUID` keys is that they're somewhat space- and
-network inefficient, consuming more space not only as a primary key, but
+The main downside of `UUID` keys is that they're somewhat inefficient in terms of space and
+the network. They consume more space not only as a primary key but
also where referenced in foreign keys and when transmitted on the wire.
-Additionally, not all applications cope well with `UUID` keys.
+Also, not all applications cope well with `UUID` keys.
BDR provides functions for working with a K-Sortable variant of `UUID` data,
-known as KSUUID, which generates values that can be stored using PostgreSQL's
+known as KSUUID, which generates values that can be stored using the PostgreSQL
standard `UUID` data type. A `KSUUID` value is similar to `UUIDv1` in that
it stores both timestamp and random data, following the `UUID` standard.
The difference is that `KSUUID` is K-Sortable, meaning that it's weakly
@@ -417,32 +416,32 @@ sortable by timestamp. This makes it more useful as a database key as it
produces more compact `btree` indexes, which improves
the effectiveness of search, and allows natural time-sorting of result data.
Unlike `UUIDv1`,
-`KSUUID` values do not include the MAC of the computer on which they were
-generated, so there should be no security concerns from using `KSUUID`s.
+`KSUUID` values don't include the MAC of the computer on which they were
+generated, so there are no security concerns from using them.
-`KSUUID` v2 is now recommended in all cases. Values generated are directly
-sortable with regular comparison operators.
+`KSUUID` v2 is now recommended in all cases. You can directly sort values generated
+with regular comparison operators.
-There are two versions of `KSUUID` in BDR, v1 and v2.
+There are two versions of `KSUUID` in BDR: v1 and v2.
The legacy `KSUUID` v1 is
-now deprecated but is kept in order to support existing installations and should
-not be used for new installations.
-The internal contents of the v1 and v2 are not compatible, and as such the
-functions to manipulate them are also not compatible. The v2 of `KSUUID` also
+deprecated but is kept in order to support existing installations. Don't
+use it for new installations.
+The internal contents of v1 and v2 aren't compatible. As such, the
+functions to manipulate them also aren't compatible. The v2 of `KSUUID` also
no longer stores the `UUID` version number.
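+
+For example, a minimal sketch of a table keyed by `KSUUID` v2 values (the table and column names are illustrative):
+
+```sql
+CREATE TABLE ksuuid_keyed_example (
+    id uuid DEFAULT bdr.gen_ksuuid_v2(NULL) PRIMARY KEY,  -- K-Sortable, stored as a standard UUID
+    payload text
+);
+-- Values sort roughly by creation time, which keeps the btree index on id compact.
+```
+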
-### Step & Offset Sequences
+### Step and offset sequences
In offset-step sequences, a normal PostgreSQL sequence is used on each node.
Each sequence increments by the same amount and starts at differing offsets.
-For example with step 1000, node1's sequence generates 1001, 2001, 3001, and
-so on, node2's generates 1002, 2002, 3002, etc. This scheme works well
-even if the nodes cannot communicate for extended periods, but the designer
+For example, with step 1000, node1's sequence generates 1001, 2001, 3001, and
+so on. node2's sequence generates 1002, 2002, 3002, and so on. This scheme works well
+even if the nodes can't communicate for extended periods. However, the designer
must specify a maximum number of nodes when establishing the
-schema, and it requires per-node configuration. However, mistakes can easily lead to
+schema, and it requires per-node configuration. Mistakes can easily lead to
overlapping sequences.
-It is relatively simple to configure this approach with BDR by creating the
+It's relatively simple to configure this approach with BDR by creating the
desired sequence on one node, like this:
```
@@ -455,8 +454,8 @@ CREATE SEQUENCE some_seq INCREMENT 1000 OWNED BY some_table.generated_value;
ALTER TABLE some_table ALTER COLUMN generated_value SET DEFAULT nextval('some_seq');
```
-... then on each node calling `setval()` to give each node a different offset
-starting value, e.g.:
+Then, on each node, call `setval()` to give each node a different offset
+starting value, for example:
```
-- On node 1
@@ -468,63 +467,65 @@ SELECT setval('some_seq', 2);
-- ... etc
```
-You should be sure to allow a large enough `INCREMENT` to leave room for all
-the nodes you may ever want to add, since changing it in future is difficult
+Be sure to allow a large enough `INCREMENT` to leave room for all
+the nodes you might ever want to add, since changing it in future is difficult
and disruptive.
-If you use `bigint` values, there is no practical concern about key exhaustion,
-even if you use offsets of 10000 or more. You'll need hundreds of years,
+If you use `bigint` values, there's no practical concern about key exhaustion,
+even if you use offsets of 10000 or more. It would take hundreds of years,
with hundreds of machines, doing millions of inserts per second, to have any
chance of approaching exhaustion.
-BDR does not currently offer any automation for configuration of the
+BDR doesn't currently offer any automation for configuration of the
per-node offsets on such step/offset sequences.
-#### Composite Keys
+#### Composite keys
A variant on step/offset sequences is to use a composite key composed of
`PRIMARY KEY (node_number, generated_value)`, where the
node number is usually obtained from a function that returns a different
-number on each node. Such a function may be created by temporarily
-disabling DDL replication and creating a constant SQL function, or by using
-a one-row table that is not part of a replication set to store a different
+number on each node. You can create such a function by temporarily
+disabling DDL replication and creating a constant SQL function. Alternatively, you can use
+a one-row table that isn't part of a replication set to store a different
value in each node.
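+
+For example, a minimal sketch of the constant-function variant (the function, sequence, table, and column names are illustrative, and the constant must differ on each node):
+
+```sql
+-- Create the function on each node with DDL replication disabled so that
+-- the constant can be different on every node.
+CREATE FUNCTION this_node_number() RETURNS integer
+LANGUAGE sql IMMUTABLE AS 'SELECT 1';   -- use 2, 3, ... on the other nodes
+
+CREATE SEQUENCE composite_local_seq;
+
+CREATE TABLE composite_keyed (
+    node_number integer NOT NULL DEFAULT this_node_number(),
+    generated_value bigint NOT NULL DEFAULT nextval('composite_local_seq'),
+    payload text,
+    PRIMARY KEY (node_number, generated_value)
+);
+```
+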
-## Global Sequence Management Interfaces
+## Global sequence management interfaces
BDR provides an interface for converting between a standard PostgreSQL sequence
and the BDR global sequence.
-Note that the following functions are considered to be `DDL`, so DDL replication
+The following functions are considered to be `DDL`, so DDL replication
+and global locking apply to them.
### bdr.alter_sequence_set_kind
Allows the owner of a sequence to set the kind of a sequence.
-Once set, `seqkind` is only visible via the `bdr.sequences` view;
-in all other ways the sequence will appear as a normal sequence.
+Once set, `seqkind` is visible only by way of the `bdr.sequences` view.
+In all other ways, the sequence appears as a normal sequence.
BDR treats this function as `DDL`, so DDL replication and global locking applies,
-if that is currently active. See [DDL Replication](ddl).
+if it's currently active. See [DDL Replication](ddl).
#### Synopsis
+
```sql
bdr.alter_sequence_set_kind(seqoid regclass, seqkind text)
```
#### Parameters
-- `seqoid` - name or Oid of the sequence to be altered
-- `seqkind` - `local` for a standard PostgreSQL sequence, `snowflakeid` or
+
+- `seqoid` — Name or Oid of the sequence to alter.
+- `seqkind` — `local` for a standard PostgreSQL sequence, `snowflakeid` or
`galloc` for globally unique BDR sequences, or `timeshard` for legacy
- globally unique sequence
+ globally unique sequence.
#### Notes
When changing the sequence kind to `galloc`, the first allocated range for that
-sequence will use the sequence start value as the starting point. When there are
-already existing values used by the sequence before it was changed to `galloc`,
-it is recommended to move the starting point so that the newly generated
-values will not conflict with the existing ones using the following command:
+sequence uses the sequence start value as the starting point. When there are
+existing values that were used by the sequence before it was changed to `galloc`,
+we recommend using the following command to move the starting point so that the newly
+generated values don't conflict with the existing ones:
```sql
ALTER SEQUENCE seq_name START starting_value RESTART
@@ -534,20 +535,20 @@ This function uses the same replication mechanism as `DDL` statements. This mean
that the replication is affected by the [ddl filters](repsets#ddl-replication-filtering)
configuration.
-The function will take a global `DDL` lock. It will also lock the sequence locally.
+The function takes a global `DDL` lock. It also locks the sequence locally.
-This function is transactional - the effects can be rolled back with the
-`ROLLBACK` of the transaction, and the changes are visible to the current
+This function is transactional. You can roll back the effects with the
+`ROLLBACK` of the transaction. The changes are visible to the current
transaction.
-The `bdr.alter_sequence_set_kind` function can be only executed by
-the owner of the sequence, unless `bdr.backwards_compatibility` is
-set is set to 30618 or below.
+Only the owner of the sequence can execute the `bdr.alter_sequence_set_kind` function
+unless `bdr.backwards_compatibility` is
+set to 30618 or lower.
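+
+For example, to convert an existing local sequence to a globally allocated one (the sequence name is illustrative):
+
+```sql
+SELECT bdr.alter_sequence_set_kind('public.some_seq'::regclass, 'galloc');
+```
+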
### bdr.extract_timestamp_from_snowflakeid
This function extracts the timestamp component of the `snowflakeid` sequence.
-The return value is of type "timestamptz".
+The return value is of type timestamptz.
#### Synopsis
```sql
@@ -555,11 +556,11 @@ bdr.extract_timestamp_from_snowflakeid(snowflakeid bigint)
```
#### Parameters
- - `snowflakeid` - value of a snowflakeid sequence
+ - `snowflakeid` — Value of a snowflakeid sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
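+
+For example, assuming a table `foo` whose `id` column uses a `snowflakeid` sequence, you can list recent rows together with the time they were generated:
+
+```sql
+SELECT id, bdr.extract_timestamp_from_snowflakeid(id) AS generated_at
+FROM foo
+ORDER BY id DESC
+LIMIT 10;
+```
+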
### bdr.extract_nodeid_from_snowflakeid
@@ -571,11 +572,11 @@ bdr.extract_nodeid_from_snowflakeid(snowflakeid bigint)
```
#### Parameters
- - `snowflakeid` - value of a snowflakeid sequence
+ - `snowflakeid` — Value of a snowflakeid sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.extract_localseqid_from_snowflakeid
@@ -587,11 +588,11 @@ bdr.extract_localseqid_from_snowflakeid(snowflakeid bigint)
```
#### Parameters
- - `snowflakeid` - value of a snowflakeid sequence
+ - `snowflakeid` — Value of a snowflakeid sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.timestamp_to_snowflakeid
@@ -600,14 +601,14 @@ This function converts a timestamp value to a dummy snowflakeid sequence value.
This is useful for doing indexed searches or comparisons of values in the
snowflakeid column and for a specific timestamp.
-For example, given a table `foo` with a column `id` which is using a `snowflakeid`
+For example, given a table `foo` with a column `id` that's using a `snowflakeid`
sequence, we can get the number of changes since yesterday midnight like this:
```
SELECT count(1) FROM foo WHERE id > bdr.timestamp_to_snowflakeid('yesterday')
```
-A query formulated this way will use an index scan on the column `id`.
+A query formulated this way uses an index scan on the column `id`.
#### Synopsis
```sql
@@ -615,16 +616,16 @@ bdr.timestamp_to_snowflakeid(ts timestamptz)
```
#### Parameters
- - `ts` - timestamp to be used for the snowflakeid sequence generation
+ - `ts` — Timestamp to use for the snowflakeid sequence generation.
#### Notes
-This function is only executed on local node.
+This function executes only on the local node.
### bdr.extract_timestamp_from_timeshard
This function extracts the timestamp component of the `timeshard` sequence.
-The return value is of type "timestamptz".
+The return value is of type timestamptz.
#### Synopsis
@@ -634,11 +635,11 @@ bdr.extract_timestamp_from_timeshard(timeshard_seq bigint)
#### Parameters
-- `timeshard_seq` - value of a timeshard sequence
+- `timeshard_seq` — Value of a timeshard sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.extract_nodeid_from_timeshard
@@ -652,11 +653,11 @@ bdr.extract_nodeid_from_timeshard(timeshard_seq bigint)
#### Parameters
-- `timeshard_seq` - value of a timeshard sequence
+- `timeshard_seq` — Value of a timeshard sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.extract_localseqid_from_timeshard
@@ -670,11 +671,11 @@ bdr.extract_localseqid_from_timeshard(timeshard_seq bigint)
#### Parameters
-- `timeshard_seq` - value of a timeshard sequence
+- `timeshard_seq` — Value of a timeshard sequence.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.timestamp_to_timeshard
@@ -683,14 +684,14 @@ This function converts a timestamp value to a dummy timeshard sequence value.
This is useful for doing indexed searches or comparisons of values in the
timeshard column and for a specific timestamp.
-For example, given a table `foo` with a column `id` which is using a `timeshard`
+For example, given a table `foo` with a column `id` that's using a `timeshard`
sequence, we can get the number of changes since yesterday midnight like this:
```
SELECT count(1) FROM foo WHERE id > bdr.timestamp_to_timeshard('yesterday')
```
-A query formulated this way will use an index scan on the column `id`.
+A query formulated this way uses an index scan on the column `id`.
#### Synopsis
@@ -700,11 +701,11 @@ bdr.timestamp_to_timeshard(ts timestamptz)
#### Parameters
-- `ts` - timestamp to be used for the timeshard sequence generation
+- `ts` — Timestamp to use for the timeshard sequence generation.
#### Notes
-This function is only executed on local node.
+This function executes only on the local node.
## KSUUID v2 Functions
@@ -712,11 +713,11 @@ Functions for working with `KSUUID` v2 data, K-Sortable UUID data.
### bdr.gen_ksuuid_v2
-This function generates a new `KSUUID` v2 value, using the value of timestamp passed as an
+This function generates a new `KSUUID` v2 value using the timestamp passed as an
argument or current system time if NULL is passed.
-If you want to generate KSUUID automatically using system time, pass NULL argument.
+If you want to generate KSUUID automatically using the system time, pass a NULL argument.
-The return value is of type "UUID".
+The return value is of type UUID.
#### Synopsis
@@ -726,13 +727,13 @@ bdr.gen_ksuuid_v2(timestamptz)
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
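+
+For example, to generate a `KSUUID` v2 value for the current system time and for an explicit timestamp:
+
+```sql
+SELECT bdr.gen_ksuuid_v2(NULL);                       -- uses the current system time
+SELECT bdr.gen_ksuuid_v2('2022-01-01 00:00:00+00');   -- uses the given timestamp
+```
+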
### bdr.ksuuid_v2_cmp
This function compares the `KSUUID` v2 values.
-It returns 1 if first value is newer, -1 if second value is lower, or zero if they
+It returns 1 if the first value is newer, -1 if the second value is newer, or zero if they
are equal.
#### Synopsis
@@ -743,16 +744,16 @@ bdr.ksuuid_v2_cmp(uuid, uuid)
#### Parameters
-- `UUID` - `KSUUID` v2 to compare
+- `UUID` — `KSUUID` v2 to compare.
#### Notes
-This function is only executed on local node.
+This function executes only on the local node.
### bdr.extract_timestamp_from_ksuuid_v2
This function extracts the timestamp component of `KSUUID` v2.
-The return value is of type "timestamptz".
+The return value is of type timestamptz.
#### Synopsis
@@ -762,20 +763,20 @@ bdr.extract_timestamp_from_ksuuid_v2(uuid)
#### Parameters
-- `UUID` - `KSUUID` v2 value to extract timestamp from
+- `UUID` — `KSUUID` v2 value to extract timestamp from.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
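+
+For example, to extract the timestamp from a freshly generated value:
+
+```sql
+SELECT bdr.extract_timestamp_from_ksuuid_v2(bdr.gen_ksuuid_v2(NULL));
+```
+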
-## KSUUID v1 Functions
+## KSUUID v1 functions
Functions for working with `KSUUID` v1 data, K-Sortable UUID data(v1).
### bdr.gen_ksuuid
This function generates a new `KSUUID` v1 value, using the current system time.
-The return value is of type "UUID".
+The return value is of type UUID.
#### Synopsis
@@ -785,13 +786,13 @@ bdr.gen_ksuuid()
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
### bdr.uuid_v1_cmp
This function compares the `KSUUID` v1 values.
-It returns 1 if first value is newer, -1 if second value is lower, or zero if they
+It returns 1 if the first value is newer, -1 if the second value is newer, or zero if they
are equal.
#### Synopsis
@@ -802,16 +803,16 @@ bdr.uuid_v1_cmp(uuid, uuid)
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
#### Parameters
-- `UUID` - `KSUUID` v1 to compare
+- `UUID` — `KSUUID` v1 to compare.
### bdr.extract_timestamp_from_ksuuid
This function extracts the timestamp component of `KSUUID` v1 or `UUIDv1` values.
-The return value is of type "timestamptz".
+The return value is of type timestamptz.
#### Synopsis
@@ -821,8 +822,8 @@ bdr.extract_timestamp_from_ksuuid(uuid)
#### Parameters
-- `UUID` - `KSUUID` v1 value to extract timestamp from
+- `UUID` — `KSUUID` v1 value to extract timestamp from.
#### Notes
-This function is only executed on the local node.
+This function executes only on the local node.
diff --git a/product_docs/docs/bdr/4/striggers.mdx b/product_docs/docs/bdr/4/striggers.mdx
index 840de32c83b..77e99008f49 100644
--- a/product_docs/docs/bdr/4/striggers.mdx
+++ b/product_docs/docs/bdr/4/striggers.mdx
@@ -1,63 +1,63 @@
---
-title: Stream Triggers
+title: Stream triggers
---
-BDR introduces new types of triggers which can be used for additional
+BDR introduces new types of triggers that you can use for additional
data processing on the downstream/target node.
-- Conflict Triggers
-- Transform Triggers
+- Conflict triggers
+- Transform triggers
-Together, these types of triggers are known as Stream Triggers.
+Together, these types of triggers are known as *stream triggers*.
-Stream Triggers are designed to be trigger-like in syntax, they leverage the
-PostgreSQL BEFORE trigger architecture, and are likely to have similar
+Stream triggers are designed to be trigger-like in syntax. They leverage the
+PostgreSQL BEFORE trigger architecture and are likely to have similar
performance characteristics as PostgreSQL BEFORE Triggers.
-One trigger function can be used by multiple trigger definitions, just as with
+Multiple trigger definitions can use one trigger function, just as with
normal PostgreSQL triggers.
-A trigger function is simply a program defined in this form:
-`CREATE FUNCTION ... RETURNS TRIGGER`. Creating the actual trigger does not
-require use of the CREATE TRIGGER command. Instead, stream triggers
-are created using the special BDR functions
+A trigger function is a program defined in this form:
+`CREATE FUNCTION ... RETURNS TRIGGER`. Creating the trigger doesn't
+require use of the `CREATE TRIGGER` command. Instead, create stream triggers
+using the special BDR functions
`bdr.create_conflict_trigger()` and `bdr.create_transform_trigger()`.
-Once created, the trigger will be visible in the catalog table `pg_trigger`.
-The stream triggers will be marked as `tgisinternal = true` and
-`tgenabled = 'D'` and will have name suffix '\_bdrc' or '\_bdrt'. The view
+Once created, the trigger is visible in the catalog table `pg_trigger`.
+The stream triggers are marked as `tgisinternal = true` and
+`tgenabled = 'D'` and have the name suffix '\_bdrc' or '\_bdrt'. The view
`bdr.triggers` provides information on the triggers in relation to the table,
the name of the procedure that is being executed, the event that triggers it,
and the trigger type.
-Note that stream triggers are NOT therefore enabled for normal SQL processing.
-Because of this the `ALTER TABLE ... ENABLE TRIGGER` is blocked for stream
-triggers in both its specific name variant and the ALL variant, to prevent
+Stream triggers aren't enabled for normal SQL processing.
+Because of this, the `ALTER TABLE ... ENABLE TRIGGER` is blocked for stream
+triggers in both its specific name variant and the ALL variant. This mechanism prevents
the trigger from executing as a normal SQL trigger.
-Note that these triggers execute on the downstream or target node. There is no
-option for them to execute on the origin node, though one may wish to consider
+These triggers execute on the downstream or target node. There's no
+option for them to execute on the origin node. However, you might want to consider
the use of `row_filter` expressions on the origin.
-Also, any DML which is applied during the execution of a stream
-trigger will not be replicated to other BDR nodes, and will not
-trigger the execution of standard local triggers. This is intentional,
-and can be used for instance to log changes or conflicts captured by a
+Also, any DML that is applied while executing a stream
+trigger isn't replicated to other BDR nodes and doesn't
+trigger the execution of standard local triggers. This is intentional. You can use it, for example,
+to log changes or conflicts captured by a
stream trigger into a table that is crash-safe and specific of that
-node; a working example is provided at the end of this chapter.
+node. See [Stream triggers examples](#stream-triggers-examples) for a working example.
## Trigger execution during Apply
-Transform triggers execute first, once for each incoming change in the
-triggering table. These triggers fire before we have even attempted to locate a
+Transform triggers execute first—once for each incoming change in the
+triggering table. These triggers fire before we attempt to locate a
matching target row, allowing a very wide range of transforms to be applied
efficiently and consistently.
-Next, for UPDATE and DELETE changes we locate the target row. If there is no
-target row, then there is no further processing for those change types.
+Next, for UPDATE and DELETE changes, we locate the target row. If there's no
+target row, then no further processing occurs for those change types.
-We then execute any normal triggers that previously have been explicitly enabled
+We then execute any normal triggers that previously were explicitly enabled
as replica triggers at table-level:
```sql
@@ -65,23 +65,23 @@ ALTER TABLE tablename
ENABLE REPLICA TRIGGER trigger_name;
```
-We then decide whether a potential conflict exists and if so, we then call any
+We then decide whether a potential conflict exists. If so, we then call any
conflict trigger that exists for that table.
-### Missing Column Conflict Resolution
+### Missing column conflict resolution
Before transform triggers are executed, PostgreSQL tries to match the
-incoming tuple against the rowtype of the target table.
+incoming tuple against the row-type of the target table.
-Any column that exists on the input row but not on the target table
-will trigger a conflict of type `target_column_missing`; conversely, a
+Any column that exists on the input row but not on the target table
+triggers a conflict of type `target_column_missing`. Conversely, a
column existing on the target table but not in the incoming row
triggers a `source_column_missing` conflict. The default resolutions
for those two conflict types are respectively `ignore_if_null` and
`use_default_value`.
-This is relevant in the context of rolling schema upgrades; for
-instance, if the new version of the schema introduces a new
+This is relevant in the context of rolling schema upgrades, for
+example, if the new version of the schema introduces a new
column. When replicating from an old version of the schema to a new
one, the source column is missing, and the `use_default_value`
strategy is appropriate, as it populates the newly introduced column
@@ -89,230 +89,227 @@ with the default value.
However, when replicating from a node having the new schema version to
a node having the old one, the column is missing from the target
-table, and the `ignore_if_null` resolver is not appropriate for a
-rolling upgrade, because it will break replication as soon as the user
-inserts, in any of the upgraded nodes, a tuple with a non-NULL value
-in the new column.
+table. The `ignore_if_null` resolver isn't appropriate for a
+rolling upgrade because it breaks replication as soon as the user
+inserts a tuple with a non-NULL value
+in the new column in any of the upgraded nodes.
In view of this example, the appropriate setting for rolling schema
upgrades is to configure each node to apply the `ignore` resolver in
case of a `target_column_missing` conflict.
-This is done with the following query, that must be **executed
-separately on each node**, after replacing `node1` with the actual
-node name:
+You can do this with the following query, which you must execute
+separately on each node. Replace `node1` with the actual
+node name.
```sql
SELECT bdr.alter_node_set_conflict_resolver('node1',
'target_column_missing', 'ignore');
```
-#### Data Loss and Divergence Risk
+#### Data loss and divergence risk
-In this section, we show how setting the conflict resolver to `ignore`
+Setting the conflict resolver to `ignore`
can lead to data loss and cluster divergence.
Consider the following example: table `t` exists on nodes 1 and 2, but
-its column `col` only exists on node 1.
+its column `col` exists only on node 1.
If the conflict resolver is set to `ignore`, then there can be rows on
-node 1 where `c` is not null, e.g. `(pk=1, col=100)`. That row will be
-replicated to node 2, and the value in column `c` will be discarded,
-e.g. `(pk=1)`.
+node 1 where `col` isn't null, for example, `(pk=1, col=100)`. That row is
+replicated to node 2, and the value in column `col` is discarded,
+for example, `(pk=1)`.
-If column `c` is then added to the table on node 2, it will initially
-be set to NULL on all existing rows, and the row considered above
-becomes `(pk=1, col=NULL)`: the row having `pk=1` is no longer
+If column `col` is then added to the table on node 2, it's initially
+set to NULL on all existing rows, and the row considered above
+becomes `(pk=1, col=NULL)`. The row having `pk=1` is no longer
identical on all nodes, and the cluster is therefore divergent.
-Note that the default `ignore_if_null` resolver is not affected by
-this risk, because any row that is replicated to node 2 will have
+The default `ignore_if_null` resolver isn't affected by
+this risk because any row replicated to node 2 has
`col=NULL`.
Based on this example, we recommend running LiveCompare against the
whole cluster at the end of a rolling schema upgrade where the
-`ignore` resolver was used, to make sure that any divergence is
-detected and fixed.
+`ignore` resolver was used. This practice helps to ensure that you detect and fix any divergence.
## Terminology of row-types
-This document uses these row-types:
+We use these row-types:
-- `SOURCE_OLD` is the row before update, i.e. the key.
+- `SOURCE_OLD` is the row before update, that is, the key.
- `SOURCE_NEW` is the new row coming from another node.
-- `TARGET` is row that exists on the node already, i.e. conflicting row.
+- `TARGET` is the row that exists on the node already, that is, the conflicting row.
-## Conflict Triggers
+## Conflict triggers
-Conflict triggers are executed when a conflict is detected by BDR, and
-are used to decide what happens when the conflict has occurred.
+Conflict triggers execute when a conflict is detected by BDR.
+They decide what happens when the conflict has occurred.
-- If the trigger function returns a row, the action will be applied to the target.
-- If the trigger function returns NULL row, the action will be skipped.
+- If the trigger function returns a row, the action is applied to the target.
+- If the trigger function returns a NULL row, the action is skipped.
-To clarify, if the trigger is called for a `DELETE`, the trigger should
-return NULL if it wants to skip the `DELETE`. If you wish the DELETE to proceed,
-then return a row value - either `SOURCE_OLD` or `TARGET` will work.
+For example, if the trigger is called for a `DELETE`, the trigger
+returns NULL if it wants to skip the `DELETE`. If you want the `DELETE` to proceed,
+then return a row value: either `SOURCE_OLD` or `TARGET` works.
When the conflicting operation is either `INSERT` or `UPDATE`, and the
chosen resolution is the deletion of the conflicting row, the trigger
must explicitly perform the deletion and return NULL.
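+
+The following minimal sketch shows that pattern. It assumes a table `mytable` with primary key column `pk`; the names are illustrative only.
+
+```sql
+CREATE OR REPLACE FUNCTION delete_conflicting_row()
+RETURNS TRIGGER
+LANGUAGE plpgsql
+AS $$
+DECLARE
+    tgt RECORD;
+BEGIN
+    tgt := bdr.trigger_get_row('TARGET');
+    IF tgt IS NULL THEN
+        RETURN NULL;                          -- no local row exists, just skip the change
+    END IF;
+    DELETE FROM mytable WHERE pk = tgt.pk;    -- explicitly delete the conflicting row
+    RETURN NULL;                              -- skip applying the incoming change
+END;
+$$;
+```
+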
-The trigger function may perform other SQL actions as it chooses, but
-those actions will only be applied locally, not replicated.
+The trigger function can perform other SQL actions as it chooses, but
+those actions are only applied locally, not replicated.
-When a real data conflict occurs between two or more nodes, there will be
-two or more concurrent changes occurring. When we apply those changes, the
+When a real data conflict occurs between two or more nodes,
+two or more concurrent changes are occurring. When we apply those changes, the
conflict resolution occurs independently on each node. This means the conflict
-resolution will occur once on each node, and can occur with a
-significant time difference between then. As a result, there is no
-possibility of communication between the multiple executions of the conflict
-trigger. It is the responsibility of the author of the conflict trigger to
-ensure that the trigger gives exactly the same result for all related events,
-otherwise data divergence will occur. Technical Support recommends that all conflict
-triggers are formally tested using the isolationtester tool supplied with
+resolution occurs once on each node and can occur with a
+significant time difference between them. As a result, communication between the multiple executions of the conflict
+trigger isn't possible. It is the responsibility of the author of the conflict trigger to
+ensure that the trigger gives exactly the same result for all related events.
+Otherwise, data divergence occurs. Technical Support recommends that you formally test all conflict
+triggers using the isolationtester tool supplied with
BDR.
!!! Warning
- - Multiple conflict triggers can be specified on a single table, but
- they should match distinct event, i.e. each conflict should only
- match a single conflict trigger.
- - Multiple triggers matching the same event on the same table are
- not recommended; they might result in inconsistent behaviour, and
- will be forbidden in a future release.
-
-If the same conflict trigger matches more than one event, the `TG_OP`
-variable can be used within the trigger to identify the operation that
+ - You can specify multiple conflict triggers on a single table, but
+ they must match a distinct event. That is, each conflict must
+ match only a single conflict trigger.
+ - We don't recommend multiple triggers matching the same event on the same table.
+ They might result in inconsistent behavior and
+ won't be allowed in a future release.
+
+If the same conflict trigger matches more than one event, you can use the `TG_OP`
+variable in the trigger to identify the operation that
produced the conflict.
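+
+For example, a minimal sketch of a single trigger function that branches on `TG_OP` (the resolution choices shown are illustrative):
+
+```sql
+CREATE OR REPLACE FUNCTION resolve_by_operation()
+RETURNS TRIGGER
+LANGUAGE plpgsql
+AS $$
+DECLARE
+    src RECORD;
+BEGIN
+    IF TG_OP = 'DELETE' THEN
+        RETURN NULL;                              -- skip incoming deletes
+    END IF;
+    src := bdr.trigger_get_row('SOURCE_NEW');
+    RETURN src;                                   -- apply incoming inserts and updates
+END;
+$$;
+```
+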
By default, BDR detects conflicts by observing a change of replication origin
-for a row, hence it is possible for a conflict trigger to be called even
-when there is only one change occurring. Since in this case there is no
-real conflict, we say that this conflict detection mechanism can generate
-false positive conflicts. The conflict trigger must handle all of those
-identically, as mentioned above.
-
-Note that in some cases, timestamp conflict detection will not detect a
-conflict at all. For example, in a concurrent UPDATE/DELETE where the
-DELETE occurs just after the UPDATE, any nodes that see first the UPDATE
-and then the DELETE will not see any conflict. If no conflict is seen,
-the conflict trigger will never be called. The same situation, but using
-row version conflict detection, *will* see a conflict, which can then be
-handled by a conflict trigger.
+for a row. Hence, a conflict trigger might be called even when
+only one change is occurring. Since, in this case, there's no
+real conflict, this conflict detection mechanism can generate
+false-positive conflicts. The conflict trigger must handle all of those
+identically.
+
+In some cases, timestamp conflict detection doesn't detect a
+conflict at all. For example, in a concurrent `UPDATE`/`DELETE` where the
+`DELETE` occurs just after the `UPDATE`, any nodes that see first the `UPDATE`
+and then the `DELETE` don't see any conflict. If no conflict is seen,
+the conflict trigger is never called. In the same situation but using
+row version conflict detection, a conflict is seen, which a conflict trigger can then
+handle.
The trigger function has access to additional state information as well as
-the data row involved in the conflict, depending upon the operation type:
+the data row involved in the conflict, depending on the operation type:
-- On `INSERT`, conflict triggers would be able to access `SOURCE_NEW` row from
- source and `TARGET` row
-- On `UPDATE`, conflict triggers would be able to access `SOURCE_OLD` and
- `SOURCE_NEW` row from source and `TARGET` row
-- On `DELETE`, conflict triggers would be able to access `SOURCE_OLD` row from
- source and `TARGET` row
+- On `INSERT`, conflict triggers can access the `SOURCE_NEW` row from
+ the source and `TARGET` row.
+- On `UPDATE`, conflict triggers can access the `SOURCE_OLD` and
+ `SOURCE_NEW` row from the source and `TARGET` row.
+- On `DELETE`, conflict triggers can access the `SOURCE_OLD` row from
+ the source and `TARGET` row.
-The function `bdr.trigger_get_row()` can be used to retrieve `SOURCE_OLD`, `SOURCE_NEW`
+You can use the function `bdr.trigger_get_row()` to retrieve `SOURCE_OLD`, `SOURCE_NEW`,
or `TARGET` rows, if a value exists for that operation.
Changes to conflict triggers happen transactionally and are protected by
-Global DML Locks during replication of the configuration change, similarly
+global DML locks during replication of the configuration change, similarly
to how some variants of `ALTER TABLE` are handled.
If primary keys are updated inside a conflict trigger, it can
-sometimes leads to unique constraint violations error due to a difference
+sometimes lead to unique constraint violation errors due to a difference
in timing of execution.
-Hence, users should avoid updating primary keys within conflict triggers.
+Hence, avoid updating primary keys in conflict triggers.
-## Transform Triggers
+## Transform triggers
-These triggers are similar to the Conflict Triggers, except they are executed
-for every row on the data stream against the specific table. The behaviour of
-return values and the exposed variables are similar, but transform triggers
+These triggers are similar to conflict triggers, except they are executed
+for every row on the data stream against the specific table. The behavior of
+return values and the exposed variables is similar, but transform triggers
execute before a target row is identified, so there is no `TARGET` row.
-Specify multiple Transform Triggers on each table in BDR, if desired.
+You can specify multiple transform triggers on each table in BDR.
Transform triggers execute in alphabetical order.
A transform trigger can filter away rows, and it can do additional operations
-as needed. It can alter the values of any column, or set them to `NULL`. The
-return value decides what further action is taken:
+as needed. It can alter the values of any column or set them to `NULL`. The
+return value decides the further action taken:
-- If the trigger function returns a row, it will be applied to the target.
-- If the trigger function returns a `NULL` row, there is no further action to be
- performed and as-yet unexecuted triggers will never execute.
-- The trigger function may perform other actions as it chooses.
+- If the trigger function returns a row, it's applied to the target.
+- If the trigger function returns a `NULL` row, there's no further action to
+ perform, and any as-yet unexecuted triggers never execute.
+- The trigger function can perform other actions as it chooses.
The trigger function has access to additional state information as well as
rows involved in the conflict:
-- On `INSERT`, transform triggers would be able to access `SOURCE_NEW` row from source.
-- On `UPDATE`, transform triggers would be able to access `SOURCE_OLD` and `SOURCE_NEW` row from source.
-- On `DELETE`, transform triggers would be able to access `SOURCE_OLD` row from source.
+- On `INSERT`, transform triggers can access the `SOURCE_NEW` row from the source.
+- On `UPDATE`, transform triggers can access the `SOURCE_OLD` and `SOURCE_NEW` row from the source.
+- On `DELETE`, transform triggers can access the `SOURCE_OLD` row from the source.
-The function `bdr.trigger_get_row()` can be used to retrieve `SOURCE_OLD` or `SOURCE_NEW`
-rows; `TARGET` row is not available, since this type of trigger executes before such
+You can use the function `bdr.trigger_get_row()` to retrieve `SOURCE_OLD` or `SOURCE_NEW`
+rows. `TARGET` row isn't available, since this type of trigger executes before such
a target row is identified, if any.
-Transform Triggers look very similar to normal BEFORE row triggers, but have these
+Transform triggers look very similar to normal BEFORE row triggers but have these
important differences:
-- Transform trigger gets called for every incoming change.
- BEFORE triggers will not be called at all for UPDATE and DELETE changes
- if we don't find a matching row in a table.
+- A transform trigger gets called for every incoming change.
+ BEFORE triggers aren't called at all for `UPDATE` and `DELETE` changes
+ if a matching row in a table isn't found.
- Transform triggers are called before partition table routing occurs.
-- Transform triggers have access to the lookup key via SOURCE_OLD,
- which is not available to normal SQL triggers.
+- Transform triggers have access to the lookup key via `SOURCE_OLD`,
+ which isn't available to normal SQL triggers.
-## Stream Triggers Variables
+## Stream triggers variables
-Both Conflict Trigger and Transform Triggers have access to information about
-rows and metadata via the predefined variables provided by trigger API and
+Both conflict triggers and transform triggers have access to information about
+rows and metadata by way of the predefined variables provided by the trigger API and
additional information functions provided by BDR.
-In PL/pgSQL, the following predefined variables exist:
+In PL/pgSQL, you can use the predefined variables that follow.
### TG_NAME
-Data type name; variable that contains the name of the trigger actually fired.
-Note that the actual trigger name has a '\_bdrt' or '\_bdrc' suffix
+Data type name. This variable contains the name of the trigger actually fired.
+The actual trigger name has a '\_bdrt' or '\_bdrc' suffix
(depending on trigger type) compared to the name provided during trigger creation.
### TG_WHEN
-Data type text; this will say `BEFORE` for both Conflict and Transform triggers.
-The stream trigger type can be obtained by calling the `bdr.trigger_get_type()`
-information function (see below).
+Data type text. This variable says `BEFORE` for both conflict and transform triggers.
+You can get the stream trigger type by calling the `bdr.trigger_get_type()`
+information function. See [bdr.trigger_get_type](#bdrtrigger_get_type).
### TG_LEVEL
-Data type text; a string of `ROW`.
+Data type text: a string of `ROW`.
### TG_OP
-Data type text; a string of `INSERT`, `UPDATE` or `DELETE`
- telling for which operation the trigger was fired.
+Data type text: a string of `INSERT`, `UPDATE`, or `DELETE` identifying the operation for which the trigger was fired.
### TG_RELID
-Data type oid; the object ID of the table that caused the trigger invocation.
+Data type oid: the object ID of the table that caused the trigger invocation.
### TG_TABLE_NAME
-Data type name; the name of the table that caused the trigger invocation.
+Data type name: the name of the table that caused the trigger invocation.
### TG_TABLE_SCHEMA
-Data type name; the name of the schema of the table that caused the trigger
+Data type name: the name of the schema of the table that caused the trigger
invocation. For partitioned tables, this is the name of the root table.
### TG_NARGS
-Data type integer; the number of arguments given to the trigger function in
+Data type integer: the number of arguments given to the trigger function in
the `bdr.create_conflict_trigger()` or `bdr.create_transform_trigger()`
statement.
### TG_ARGV\[]
-Data type array of text; the arguments from the `bdr.create_conflict_trigger()`
+Data type array of text: the arguments from the `bdr.create_conflict_trigger()`
or `bdr.create_transform_trigger()` statement. The index counts from 0.
Invalid indexes (less than 0 or greater than or equal to `TG_NARGS`) result in
a `NULL` value.
@@ -322,8 +319,8 @@ a `NULL` value.
### bdr.trigger_get_row
This function returns the contents of a trigger row specified by an identifier
-as a `RECORD`. This function returns NULL if called inappropriately, i.e.
-called with SOURCE_NEW when the operation type (TG_OP) is DELETE.
+as a `RECORD`. This function returns `NULL` if called inappropriately, that is,
+called with `SOURCE_NEW` when the operation type (TG_OP) is `DELETE`.
#### Synopsis
@@ -333,15 +330,15 @@ bdr.trigger_get_row(row_id text)
#### Parameters
-- `row_id` - identifier of the row; can be any of SOURCE_NEW, SOURCE_OLD and
- TARGET, depending on the trigger type and operation (see documentation of
+- `row_id` — Identifier of the row. Can be any of `SOURCE_NEW`, `SOURCE_OLD`, and
+ `TARGET`, depending on the trigger type and operation (see description of
individual trigger types).
### bdr.trigger_get_committs
This function returns the commit timestamp of a trigger row specified by an
-identifier. If not available because a row is frozen or row is not available,
-this will return NULL. Always returns NULL for row identifier SOURCE_OLD.
+identifier. If the timestamp isn't available because the row is frozen, or if the row isn't available,
+this function returns `NULL`. It always returns `NULL` for row identifier `SOURCE_OLD`.
#### Synopsis
@@ -351,18 +348,18 @@ bdr.trigger_get_committs(row_id text)
#### Parameters
-- `row_id` - identifier of the row; can be any of SOURCE_NEW, SOURCE_OLD and
- TARGET, depending on trigger type and operation (see documentation of
+- `row_id` — Identifier of the row. Can be any of `SOURCE_NEW`, `SOURCE_OLD`, and
+ `TARGET`, depending on trigger type and operation (see description of
individual trigger types).
### bdr.trigger_get_xid
This function returns the local transaction id of a TARGET row specified by an
-identifier. If not available because a row is frozen or row is not available,
-this will return NULL. Always returns NULL for SOURCE_OLD and SOURCE_NEW row
+identifier. If the transaction id isn't available because the row is frozen, or if the row isn't available,
+this function returns `NULL`. It always returns `NULL` for `SOURCE_OLD` and `SOURCE_NEW` row
identifiers.
-This is only available for conflict triggers.
+Available only for conflict triggers.
#### Synopsis
@@ -372,14 +369,14 @@ bdr.trigger_get_xid(row_id text)
#### Parameters
-- `row_id` - identifier of the row; can be any of SOURCE_NEW, SOURCE_OLD and
- TARGET, depending on trigger type and operation (see documentation of
+- `row_id` — Identifier of the row. Can be any of `SOURCE_NEW`, `SOURCE_OLD`, and
+ `TARGET`, depending on trigger type and operation (see description of
individual trigger types).
### bdr.trigger_get_type
-This function returns the current trigger type, which can be either `CONFLICT`
-or `TRANSFORM`. Returns null if called outside a Stream Trigger.
+This function returns the current trigger type, which can be `CONFLICT`
+or `TRANSFORM`. Returns null if called outside a stream trigger.
#### Synopsis
@@ -390,9 +387,9 @@ bdr.trigger_get_type()
### bdr.trigger_get_conflict_type
This function returns the current conflict type if called inside a conflict
-trigger, or `NULL` otherwise.
+trigger. Otherwise, returns `NULL`.
-See [Conflict Types](conflicts#list-of-conflict-types)
+See [Conflict types](conflicts#list-of-conflict-types)
for possible return values of this function.
#### Synopsis
@@ -404,11 +401,11 @@ bdr.trigger_get_conflict_type()
### bdr.trigger_get_origin_node_id
This function returns the node id corresponding to the origin for the trigger
-row_id passed in as argument. If the origin is not valid (which means the row
-has originated locally), return the node id of the source or target node,
-depending on the trigger row argument. Always returns NULL for row identifier
-SOURCE_OLD. This can be used to define conflict triggers to always favour a
-trusted source node. See the example given below.
+`row_id` passed in as an argument. If the origin isn't valid (which means the row
+originated locally), returns the node id of the source or target node,
+depending on the trigger row argument. Always returns `NULL` for row identifier
+`SOURCE_OLD`. You can use this function to define conflict triggers to always favor a
+trusted source node. See [Stream triggers examples](#stream-triggers-examples) for a working example.
#### Synopsis
@@ -418,13 +415,13 @@ bdr.trigger_get_origin_node_id(row_id text)
#### Parameters
-- `row_id` - identifier of the row; can be any of SOURCE_NEW, SOURCE_OLD and
- TARGET, depending on trigger type and operation (see documentation of
+- `row_id` — Identifier of the row. Can be any of `SOURCE_NEW`, `SOURCE_OLD`, and
+ `TARGET`, depending on trigger type and operation (see description of
individual trigger types).
### bdr.ri_fkey_on_del_trigger
-When called as a BEFORE trigger, this function will use FOREIGN KEY information
+When called as a BEFORE trigger, this function uses FOREIGN KEY information
to avoid FK anomalies.
#### Synopsis
@@ -433,27 +430,27 @@ to avoid FK anomalies.
bdr.ri_fkey_on_del_trigger()
```
-## Row Contents
+## Row contents
-The SOURCE_NEW, SOURCE_OLD and TARGET contents depend on the operation, REPLICA
+The `SOURCE_NEW`, `SOURCE_OLD`, and `TARGET` contents depend on the operation, REPLICA
IDENTITY setting of a table, and the contents of the target table.
-The TARGET row is only available in conflict triggers. The TARGET row only
-contains data if a row was found when applying UPDATE or DELETE in the target
-table; if the row is not found, the TARGET will be NULL.
+The `TARGET` row is available only in conflict triggers. The `TARGET` row
+contains data only if a row was found when applying `UPDATE` or `DELETE` in the target
+table. If the row isn't found, the `TARGET` row is `NULL`.
-## Triggers Notes
+## Triggers notes
Execution order for triggers:
-- Transform triggers - execute once for each incoming row on the target
-- Normal triggers - execute once per row
-- Conflict triggers - execute once per row where a conflict exists
+- Transform triggers — Execute once for each incoming row on the target.
+- Normal triggers — Execute once per row.
+- Conflict triggers — Execute once per row where a conflict exists.
-## Stream Triggers Manipulation Interfaces
+## Stream triggers manipulation interfaces
-Stream Triggers can only be created on tables with `REPLICA IDENTITY FULL`
-or tables without any `TOAST`able columns.
+You can create stream triggers only on tables with `REPLICA IDENTITY FULL`
+or tables without any columns to which `TOAST` applies.
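+
+For example, to prepare a table for stream triggers (the table name is illustrative), you can set its replica identity first:
+
+```sql
+ALTER TABLE mytable REPLICA IDENTITY FULL;
+```
+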
### bdr.create_conflict_trigger
@@ -471,13 +468,13 @@ bdr.create_conflict_trigger(trigger_name text,
#### Parameters
-- `trigger_name` - name of the new trigger
-- `events` - array of events on which to fire this trigger; valid values are
- '`INSERT`', '`UPDATE`' and '`DELETE`'
-- `relation` - for which relation to fire this trigger
-- `function` - which function to execute
-- `args` - optional; specifies the array of parameters the trigger function will
- receive upon execution (contents of `TG_ARGV` variable)
+- `trigger_name` — Name of the new trigger.
+- `events` — Array of events on which to fire this trigger. Valid values are
+ '`INSERT`', '`UPDATE`', and '`DELETE`'.
+- `relation` — Relation to fire this trigger for.
+- `function` — The function to execute.
+- `args` — Optional. Specifies the array of parameters the trigger function
+ receives on execution (contents of `TG_ARGV` variable).
#### Notes
@@ -485,23 +482,23 @@ This function uses the same replication mechanism as `DDL` statements. This
means that the replication is affected by the
[ddl filters](repsets#ddl-replication-filtering) configuration.
-The function will take a global DML lock on the relation on which the trigger
+The function takes a global DML lock on the relation on which the trigger
is being created.
-This function is transactional - the effects can be rolled back with the
-`ROLLBACK` of the transaction, and the changes are visible to the current
+This function is transactional. You can roll back the effects with the
+`ROLLBACK` of the transaction. The changes are visible to the current
transaction.
-Similarly to normal PostgreSQL triggers, the `bdr.create_conflict_trigger`
+Similar to normal PostgreSQL triggers, the `bdr.create_conflict_trigger`
function requires `TRIGGER` privilege on the `relation` and `EXECUTE`
privilege on the function. This applies with a
`bdr.backwards_compatibility` of 30619 or above. Additional
security rules apply in BDR to all triggers including conflict
-triggers; see the [security chapter on triggers](security#triggers).
+triggers. See [Security and roles](security#triggers).
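+
+For example, a minimal sketch that registers an existing trigger function for `UPDATE` conflicts (the table and function names are illustrative):
+
+```sql
+SELECT bdr.create_conflict_trigger(
+    'mytable_conflict_trigger',    -- trigger_name
+    ARRAY['UPDATE'],               -- events
+    'mytable',                     -- relation
+    'my_conflict_handler'          -- trigger function to execute
+);
+```
+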
### bdr.create_transform_trigger
-This function creates a new transform trigger.
+This function creates a transform trigger.
#### Synopsis
@@ -515,13 +512,13 @@ bdr.create_transform_trigger(trigger_name text,
#### Parameters
-- `trigger_name` - name of the new trigger
-- `events` - array of events on which to fire this trigger, valid values are
- '`INSERT`', '`UPDATE`' and '`DELETE`'
-- `relation` - for which relation to fire this trigger
-- `function` - which function to execute
-- `args` - optional, specify array of parameters the trigger function will
- receive upon execution (contents of `TG_ARGV` variable)
+- `trigger_name` — Name of the new trigger.
+- `events` — Array of events on which to fire this trigger. Valid values are
+ '`INSERT`', '`UPDATE`', and '`DELETE`'.
+- `relation` — Relation to fire this trigger for.
+- `function` — The function to execute.
+- `args` — Optional. Specifies the array of parameters the trigger function
+ receives on execution (contents of `TG_ARGV` variable).
#### Notes
@@ -529,18 +526,18 @@ This function uses the same replication mechanism as `DDL` statements. This
means that the replication is affected by the
[ddl filters](repsets#ddl-replication-filtering) configuration.
-The function will take a global DML lock on the relation on which the trigger
+The function takes a global DML lock on the relation on which the trigger
is being created.
-This function is transactional - the effects can be rolled back with the
-`ROLLBACK` of the transaction, and the changes are visible to the current
+This function is transactional. You can roll back the effects with the
+`ROLLBACK` of the transaction. The changes are visible to the current
transaction.
Similarly to normal PostgreSQL triggers, the `bdr.create_transform_trigger`
function requires the `TRIGGER` privilege on the `relation` and `EXECUTE`
privilege on the function. Additional security rules apply in BDR to all
-triggers including transform triggers; see the
-[security chapter on triggers](security#triggers).
+triggers including transform triggers. See
+[Security and roles](security#triggers).
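+
+For example, a minimal sketch that registers a transform trigger for all incoming changes to a table (the table and function names are illustrative):
+
+```sql
+SELECT bdr.create_transform_trigger(
+    'mytable_transform_trigger',            -- trigger_name
+    ARRAY['INSERT', 'UPDATE', 'DELETE'],    -- events
+    'mytable',                              -- relation
+    'log_change'                            -- trigger function to execute
+);
+```
+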
### bdr.drop_trigger
@@ -556,10 +553,10 @@ bdr.drop_trigger(trigger_name text,
#### Parameters
-- `trigger_name` - name of an existing trigger
-- `relation` - which relation is the trigger defined for
-- `ifexists` - when set to true `true`, this command will ignore missing
- triggers
+- `trigger_name` — Name of an existing trigger.
+- `relation` — The relation the trigger is defined for.
+- `ifexists` — When set to `true`, this function ignores missing
+ triggers.
#### Notes
@@ -567,19 +564,18 @@ This function uses the same replication mechanism as `DDL` statements. This
means that the replication is affected by the
[ddl filters](repsets#ddl-replication-filtering) configuration.
-The function will take a global DML lock on the relation on which the trigger
+The function takes a global DML lock on the relation on which the trigger
is being created.
-This function is transactional - the effects can be rolled back with the
-`ROLLBACK` of the transaction, and the changes are visible to the current
+This function is transactional. You can roll back the effects with the
+`ROLLBACK` of the transaction. The changes are visible to the current
transaction.
-The `bdr.drop_trigger` function can be only executed by the owner of
-the `relation`.
+Only the owner of the `relation` can execute the `bdr.drop_trigger` function.
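+
+For example, to drop the conflict trigger registered in the earlier sketch:
+
+```sql
+SELECT bdr.drop_trigger('mytable_conflict_trigger', 'mytable', false);
+```
+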
-## Stream Triggers Examples
+## Stream triggers examples
-A conflict trigger which provides similar behaviour as the update_if_newer
+A conflict trigger that provides behavior similar to that of the `update_if_newer`
conflict resolver:
```sql
@@ -598,7 +594,7 @@ END;
$$;
```
-A conflict trigger which applies a delta change on a counter column and uses
+A conflict trigger that applies a delta change on a counter column and uses
SOURCE_NEW for all other columns:
```sql
@@ -624,7 +620,7 @@ END;
$$;
```
-A transform trigger which logs all changes to a log table instead of applying them:
+A transform trigger that logs all changes to a log table instead of applying them:
```sql
CREATE OR REPLACE FUNCTION log_change
@@ -653,10 +649,10 @@ END;
$$;
```
-The example below shows a conflict trigger that implements Trusted Source
-conflict detection, also known as trusted site, preferred node or Always Wins
-resolution. This uses the bdr.trigger_get_origin_node_id() function to provide
-a solution that works with 3 or more nodes.
+This example shows a conflict trigger that implements trusted source
+conflict detection, also known as trusted site, preferred node, or Always Wins
+resolution. This uses the `bdr.trigger_get_origin_node_id()` function to provide
+a solution that works with three or more nodes.
```sql
CREATE OR REPLACE FUNCTION test_conflict_trigger()
diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/01_preparing_azure/index.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/01_preparing_azure/index.mdx
index 572e71ead90..8f6a9734151 100644
--- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/01_preparing_azure/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/01_preparing_azure/index.mdx
@@ -16,20 +16,20 @@ BigAnimal requires you to check the readiness of your Azure subscription before
## Check for readiness
-We recommend using the `biganimal-preflight-azure` script to check whether all requirements and resource limits are met in your subscription. However, you can also manually check the requirements using the Azure CLI or the Azure Portal.
+We recommend using the `biganimal-csp-preflight` script to check whether all requirements and resource limits are met in your subscription. However, you can also manually check the requirements using the Azure CLI or the Azure Portal.
- [Method 1: Use EDB's shell script](#method-1-use-edbs-shell-script) (recommended)
- [Method 2: Manually check requirements](#method-2-manually-check-requirements)
### Method 1: Use EDB's shell script
-EDB provides a shell script, called [`biganimal-preflight-azure`](https://github.com/EnterpriseDB/cloud-utilities/blob/main/azure/biganimal-preflight-azure), which checks whether requirements and resource limits are met in your Azure subscription based on the clusters you plan to deploy.
+EDB provides a shell script, called [`biganimal-csp-preflight`](https://github.com/EnterpriseDB/cloud-utilities/blob/main/azure/biganimal-csp-preflight), which checks whether requirements and resource limits are met in your Azure subscription based on the clusters you plan to deploy.
1. Open the [Azure Cloud Shell](https://shell.azure.com/) in your browser.
-2. From the Azure Cloud Shell, run the following command:
+2. From the Azure Cloud Shell, run the following command:
```shell
- curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s [options]
+ curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-csp-preflight | bash -s [options]
```
The required arguments are:
@@ -37,27 +37,28 @@ EDB provides a shell script, called [`biganimal-preflight-azure`](https://github
| -------- | ----------- |
| <target-subscription> | Azure subscription ID of your BigAnimal deployment. |
| <region> | Azure region where your clusters are being deployed. See [Supported regions](/biganimal/release/overview/03a_region_support) for a list of possible regions. |
-
+
Possible options are:
| Options | Description |
| ------- | ----------- |
| `-h` or `--help`| Displays the command help. |
| `-i` or `--instance-type` | Azure VM instance type for the BigAnimal cluster. The `help` command provides a list of possible VM instance types. Choose the instance type that best suits your application and workload. Choose an instance type in the memory optimized ESv3 or ESv4 series for large data sets. Choose from the compute optimized FSv2 series for compute-bound applications. Choose from the general purpose DSv3 or DSv4 series if you don't require memory or compute optimization. See [Sizes for virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes) for information to help you choose the appropriate instance type. |
- | `-a` or `--high-availability` | Enables high availability for the cluster. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.|
+ | `-a` or `--high-availability` | *DEPRECATED* - Enables high availability for the cluster. Replaced with `-x` or `--cluster-architecture` command.|
+ | `-x` or `--cluster-architecture` | Defines the cluster architecture and can be [ single \| ha \| eha ]. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.|
| `-e` or `--endpoint` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking) for more information. |
| `-r` or `--activate-region` | Specifies region activation if no clusters currently exist in the region. |
- | `--onboard` | Checks if the user and subscription are correctly configured.
-
+ | `--onboard` | Checks if the user and subscription are correctly configured.
+
The behavior of the script defaults to `--onboard` if you provide no other options.
For example, if you want to deploy a cluster in an Azure subscription having an ID of `12412ab3d-1515-2217-96f5-0338184fcc04`, with an instance type of `e2s_v3`, in the `eastus2` region, in a `public` network, and with no existing cluster deployed, run the following command:
```shell
- curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s \
+ curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-csp-preflight | bash -s \
12412ab3d-1515-2217-96f5-0338184fcc04 eastus2 \
--instance-type e2s_v3 \
- --high-availability \
+ --cluster-architecture ha \
--endpoint public \
--activate-region
```
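Because the script defaults to `--onboard` when you provide no other options, a minimal sketch that runs only the onboarding checks for the same subscription and region is:

```shell
curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-csp-preflight | bash -s \
  12412ab3d-1515-2217-96f5-0338184fcc04 eastus2
```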
@@ -112,13 +113,12 @@ The script displays the following output:
Total Regional vCPUs 130 27 103 89 OK
Standard Dv4 Family vCPUs 20 14 6 0 Need Increase
Standard ESv3 Family vCPUs 20 4 16 8 OK
- Public IP Addresses — Basic 20 11 9 8 OK
Public IP Addresses — Standard 20 3 17 16 OK
```
### Method 2: Manually check readiness
-You can manually check the requirements instead of using the `biganimal-preflight-azure` script.
+You can manually check the requirements instead of using the `biganimal-csp-preflight` script.
#### Check Azure resource provider registrations using Azure Cloud Shell
diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/02_preparing_aws.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/02_preparing_aws.mdx
index 7cca53dc7c1..6576964c25b 100644
--- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/02_preparing_aws.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/02_preparing_aws.mdx
@@ -12,7 +12,7 @@ BigAnimal requires you to check the readiness of your AWS account before you dep
EDB provides a shell script, called [biganimal-csp-preflight](https://github.com/EnterpriseDB/cloud-utilities/blob/main/aws/biganimal-csp-preflight), which checks whether requirements and resource limits are met in your AWS account based on the clusters you plan to deploy.
1. Open the [AWS Cloud Shell](https://console.aws.amazon.com/cloudshell) in your browser.
-2. From the AWS Cloud Shell, run the following command:
+2. From the AWS Cloud Shell, run the following command:
```shell
curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/aws/biganimal-csp-preflight | bash -s [options]
@@ -21,20 +21,21 @@ EDB provides a shell script, called [biganimal-csp-preflight](https://github.com
| Argument | Description |
| -------- | ----------- |
- | <target-subscription> | AWS account ID of your BigAnimal deployment. |
+ | <account-id> | AWS account ID of your BigAnimal deployment. |
| <region> | AWS region where your clusters are being deployed. See [Supported regions](../../overview/03a_region_support) for a list of possible regions. |
-
+
Possible options are:
| Options | Description |
| ------- | ----------- |
| `-h` or `--help`| Displays the command help. |
| `-i` or `--instance-type` | AWS instance type for the BigAnimal cluster. The help command provides a list of possible VM instance types. Choose the instance type that best suits your application and workload. Choose an instance type in the memory optimized R5, R5B, or R6I series for large data sets. Choose from the compute-optimized C5 or C6I series for compute-bound applications. Choose from the general purpose M5 or M6I series if you don't require memory or compute optimization.|
- | `-a` or `--high-availability` | Enables high availability for the cluster. See [Supported cluster types(../../overview/02_high_availability) for more information.|
+ | `-a` or `--high-availability` | *DEPRECATED* - Enables high availability for the cluster. Replaced with `-x` or `--cluster-architecture`. See [Supported cluster types](../../overview/02_high_availability) for more information.|
+ | `-x` or `--cluster-architecture` | Defines the cluster architecture and can be [ single \| ha \| eha ]. See [Supported cluster types](/biganimal/release/overview/02_high_availability) for more information.|
| `-e` or `--endpoint` | Type of network endpoint for the BigAnimal cluster, either `public` or `private`. See [Cluster networking architecture](../creating_a_cluster/01_cluster_networking) for more information. |
| `-r` or `--activate-region` | Specifies region activation if no clusters currently exist in the region. |
- | `--onboard` | Checks if the user and subscription are correctly configured.
-
+ | `--onboard` | Checks if the user and subscription are correctly configured.
+
The behavior of the script defaults to `--onboard` if you provide no other options.
For example, if you want to deploy a cluster in an AWS account having an ID of `1234-5678-9012`, with an instance type of `r5.24xlarge`, in the `us-east-1` region, in a `public` endpoint, and with no existing cluster deployed, run the following command:
diff --git a/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx
index 27ab98c61c4..0063c4c6f31 100644
--- a/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx
@@ -32,13 +32,12 @@ You can modify your cluster by modifying:
| Instance type \* | [Cluster Settings](../../getting_started/03_create_cluster/#cluster-settings-tab) |
| Networking type (public or private) \**| [Cluster Settings](../../getting_started/03_create_cluster/#cluster-settings-tab)|
| Database configuration parameters | [DB Configuration](05_db_configuration_parameters) |
-
\* Changing the instance type could incur higher cloud infrastructure charges.
\** If you are using Azure and previously set up a private link and want to change to a public network, you must remove the private link resources before making the change.
- !!! Note
- Saving changes might require a database restart.
+ !!!Note
+ Saving changes might require a database restart.
5. Save your changes.
diff --git a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
index d04d6c2e049..a45200ea013 100644
--- a/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/08_configuring_the_hadoop_data_adapter.mdx
@@ -10,7 +10,7 @@ After installing Postgres, modify `postgresql.conf`, located in:
Modify the configuration file, adding the `hdfs_fdw.jvmpath` parameter to the end of the configuration file and setting the value to specify the location of the Java virtual machine (`libjvm.so`). Set the value of `hdfs_fdw.classpath` to indicate the location of the Java class files used by the adapter. Use a colon (:) as a delimiter between each path. For example:
- ``` Text
+ ```ini
hdfs_fdw.classpath=
'/usr/edb/as12/lib/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
```
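For reference, a minimal sketch that sets both parameters together. The paths are illustrative and depend on where the JVM and the JAR files are installed on your host:

```ini
# Location of the directory containing libjvm.so
hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
# Colon-separated list of JAR files used by the adapter
hdfs_fdw.classpath='/usr/edb/as12/lib/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
```

A restart of the Postgres server is typically needed for changes in `postgresql.conf` to take effect.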
@@ -36,7 +36,7 @@ Before using the Hadoop Foreign Data Wrapper:
Use the `CREATE EXTENSION` command to create the `hdfs_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you will be querying the Hive or Spark server, and invoke the command:
-```text
+```sql
CREATE EXTENSION [IF NOT EXISTS] hdfs_fdw [WITH] [SCHEMA schema_name];
```
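For instance, a minimal sketch of the command as you might run it from psql, letting the extension objects be created in the default schema:

```sql
-- Create the extension only if it isn't already present
CREATE EXTENSION IF NOT EXISTS hdfs_fdw;
```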
@@ -65,7 +65,7 @@ For more information about using the foreign data wrapper `CREATE EXTENSION` com
Use the `CREATE SERVER` command to define a connection to a foreign server. The syntax is:
-```text
+```sql
CREATE SERVER server_name FOREIGN DATA WRAPPER hdfs_fdw
[OPTIONS (option 'value' [, ...])]
```
@@ -103,7 +103,7 @@ The role that defines the server is the owner of the server. Use the `ALTER SERV
The following command creates a foreign server named `hdfs_server` that uses the `hdfs_fdw` foreign data wrapper to connect to a host with an IP address of `170.11.2.148`:
-```text
+```sql
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS (host '170.11.2.148', port '10000', client_type 'hiveserver2', auth_type 'LDAP', connect_timeout '10000', query_timeout '10000');
```
@@ -117,7 +117,7 @@ For more information about using the `CREATE SERVER` command, see the [PostgreSQ
Use the `CREATE USER MAPPING` command to define a mapping that associates a Postgres role with a foreign server:
-```text
+```sql
CREATE USER MAPPING FOR role_name SERVER server_name
[OPTIONS (option 'value' [, ...])];
```
@@ -155,7 +155,7 @@ The following command creates a user mapping for a role named `enterprisedb`. Th
If the database host uses LDAP authentication, provide connection credentials when creating the user mapping:
-```text
+```sql
CREATE USER MAPPING FOR enterprisedb SERVER hdfs_server OPTIONS (username 'alice', password '1safepwd');
```
@@ -169,7 +169,7 @@ For detailed information about the `CREATE USER MAPPING` command, see the [Postg
A foreign table is a pointer to a table that resides on the Hadoop host. Before creating a foreign table definition on the Postgres server, connect to the Hive or Spark server and create a table. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the Hadoop host. The syntax is:
-```text
+```sql
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint }
@@ -181,14 +181,14 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
`column_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
`table_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
@@ -258,7 +258,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
To use data that is stored on a distributed file system, you must create a table on the Postgres host that maps the columns of a Hadoop table to the columns of a Postgres table. For example, for a Hadoop table with the following definition:
-```text
+```sql
CREATE TABLE weblogs (
client_ip STRING,
full_request_date STRING,
@@ -282,7 +282,7 @@ fields terminated by '\t';
Execute a command on the Postgres server that creates a comparable table on the Postgres server:
-```text
+```sql
CREATE FOREIGN TABLE weblogs
(
client_ip TEXT,
@@ -334,7 +334,7 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the Hadoop server, and run the command:
-```text
+```sql
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
@@ -368,7 +368,7 @@ For more information about using the foreign data wrapper `DROP EXTENSION` comma
Use the `DROP SERVER` command to remove a connection to a foreign server. The syntax is:
-```text
+```sql
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
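As a sketch, the following removes the `hdfs_server` definition created earlier. Adding `CASCADE` also drops objects that depend on the server, such as user mappings and foreign tables:

```sql
DROP SERVER IF EXISTS hdfs_server CASCADE;
```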
@@ -404,7 +404,7 @@ For more information about using the `DROP SERVER` command, see the [PostgreSQL
Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
-```text
+```sql
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
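For example, a sketch that removes the mapping created earlier for the `enterprisedb` role on `hdfs_server`:

```sql
DROP USER MAPPING IF EXISTS FOR enterprisedb SERVER hdfs_server;
```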
@@ -434,7 +434,7 @@ For detailed information about the `DROP USER MAPPING` command, see the [Postgre
A foreign table is a pointer to a table that resides on the Hadoop host. Use the `DROP FOREIGN TABLE` command to remove a foreign table. Only the owner of the foreign table can drop it.
-```text
+```sql
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
@@ -458,7 +458,7 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
### Example
-```text
+```sql
DROP FOREIGN TABLE warehouse;
```
diff --git a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
index 03dd2d61f55..4697ec66570 100644
--- a/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/09_using_the_hadoop_data_adapter.mdx
@@ -21,7 +21,7 @@ To use HDFS FDW with Apache Hive on top of Hadoop:
1. Upload the `weblog_parse.txt` file using these commands:
- ```text
+ ```shell
hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/parse
hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
@@ -29,13 +29,13 @@ To use HDFS FDW with Apache Hive on top of Hadoop:
1. Start HiveServer, if not already running, using the following command:
- ```text
+ ```shell
$HIVE_HOME/bin/hiveserver2
```
or
- ```text
+ ```shell
$HIVE_HOME/bin/hive --service hiveserver2
```
@@ -47,9 +47,9 @@ To use HDFS FDW with Apache Hive on top of Hadoop:
beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl
```
-1. Create a table in Hive. The example creates a table named `weblogs`"
+1. Create a table in Hive. The example creates a table named `weblogs`:
- ```text
+ ```sql
CREATE TABLE weblogs (
client_ip STRING,
full_request_date STRING,
@@ -73,13 +73,13 @@ To use HDFS FDW with Apache Hive on top of Hadoop:
1. Load data into the table.
- ```text
+ ```shell
hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
```
1. Access your data from Postgres. You can now use the `weblog` table. Once you're connected using psql, follow these steps:
- ```text
+ ```sql
-- set the GUC variables appropriately, e.g. :
hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
@@ -154,7 +154,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. In the folder `$SPARK_HOME/conf`, create a file `spark-defaults.conf` containing the following line:
- ```text
+ ```shell
spark.sql.warehouse.dir hdfs://localhost:9000/user/hive/warehouse
```
@@ -162,7 +162,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Start Spark Thrift Server.
- ```text
+ ```shell
./start-thriftserver.sh
```
@@ -170,7 +170,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Create a local file (`names.txt`) that contains the following entries:
- ```text
+ ```shell
$ cat /tmp/names.txt
1,abcd
2,pqrs
@@ -182,7 +182,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Connect to Spark Thrift Server2 using the Spark beeline client. For example:
- ```text
+ ```shell
$ beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
@@ -190,7 +190,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Prepare the sample data on Spark. Run the following commands in the beeline command line tool:
- ```text
+ ```shell
./beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/default;auth=noSasl org.apache.hive.jdbc.HiveDriver
@@ -242,7 +242,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
The following commands list the corresponding files in Hadoop:
- ```text
+ ```shell
$ hadoop fs -ls /user/hive/warehouse/
Found 1 items
drwxrwxrwx - org.apache.hive.jdbc.HiveDriver supergroup 0 2020-06-12 17:03 /user/hive/warehouse/my_test_db.db
@@ -254,7 +254,7 @@ To use HDFS FDW with Apache Spark on top of Hadoop:
1. Access your data from Postgres using psql:
- ```text
+ ```sql
-- set the GUC variables appropriately, e.g. :
hdfs_fdw.jvmpath='/home/edb/Projects/hadoop_fdw/jdk1.8.0_111/jre/lib/amd64/server/'
hdfs_fdw.classpath='/usr/local/edbas/lib/postgresql/HiveJdbcClient-1.0.jar:/home/edb/Projects/hadoop_fdw/hadoop/share/hadoop/common/hadoop-common-2.6.4.jar:/home/edb/Projects/hadoop_fdw/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar'
diff --git a/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx b/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
index 8e6f401f8f5..9d8f5050ed1 100644
--- a/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/10_identifying_data_adapter_version.mdx
@@ -6,7 +6,7 @@ title: "Identifying the version"
The Hadoop Foreign Data Wrapper includes a function that you can use to identify the currently installed version of the `.so` file for the data wrapper. To use the function, connect to the Postgres server, and enter:
-```text
+```sql
SELECT hdfs_fdw_version();
```
diff --git a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
index bcad81896a9..95cea2ac13a 100644
--- a/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/10a_example_join_pushdown.mdx
@@ -35,7 +35,7 @@ Tables on HIVE/SPARK server:
Tables on Postgres server:
-```text
+```sql
CREATE EXTENSION hdfs_fdw;
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER hdfs_fdw OPTIONS(host 'localhost', port '10000', client_type 'spark', auth_type 'LDAP');
CREATE USER MAPPING FOR public SERVER hdfs_server OPTIONS (username 'user1', password 'pwd123');
diff --git a/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx b/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx
index 6511f027321..1f66148f4b9 100644
--- a/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx
+++ b/product_docs/docs/hadoop_data_adapter/2/10b_example_aggregate_pushdown.mdx
@@ -35,7 +35,7 @@ Tables on HIVE/SPARK server:
Tables on Postgres server:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION hdfs_fdw;
@@ -68,7 +68,7 @@ SERVER hdfs_server OPTIONS (dbname 'fdw_db', table_name 'emp');
Query with aggregate pushdown:
-```text
+```sql
-- aggregate functions
EXPLAIN (VERBOSE, COSTS OFF)
SELECT deptno, COUNT(*),SUM(sal),MAX(sal),MIN(sal),AVG(sal) FROM emp
diff --git a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
index 1dbfbfc7b40..6591bca6504 100644
--- a/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/06_features_of_mongo_fdw.mdx
@@ -62,22 +62,23 @@ Steps for retrieving the document:
1. Create a foreign table with a column named `__doc`. The type of the column can be json, jsonb, text, or varchar.
-```text
-CREATE FOREIGN TABLE test_json(__doc json) SERVER mongo_server OPTIONS (database 'testdb', collection 'warehouse');
-```
+ ```sql
+ CREATE FOREIGN TABLE test_json(__doc json) SERVER mongo_server OPTIONS (database 'testdb', collection 'warehouse');
+ ```
2. Retrieve the document.
-```text
-SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
-```
+ ```sql
+ SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
+ ```
The output:
-```text
-edb=#SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
- __doc ---------------------------------------------------------------------------------------------------------------------------------------------------------
-{ "_id" : { "$oid" : "58a1ebbaf543ec0b90545859" }, "warehouse_id" : 1, "warehouse_name" : "UPS", "warehouse_created" : { "$date" : 1418368330000 } }
-{ "_id" : { "$oid" : "58a1ebbaf543ec0b9054585a" }, "warehouse_id" : 2, "warehouse_name" : "Laptop", "warehouse_created" : { "$date" : 1447229590000 } }
-(2 rows)
-```
+ ```sql
+ edb=#SELECT * FROM test_json ORDER BY __doc::text COLLATE "C";
+ __OUTPUT__
+   __doc
+   ---------------------------------------------------------------------------------------------------------------------------------------------------------
+ { "_id" : { "$oid" : "58a1ebbaf543ec0b90545859" }, "warehouse_id" : 1, "warehouse_name" : "UPS", "warehouse_created" : { "$date" : 1418368330000 } }
+ { "_id" : { "$oid" : "58a1ebbaf543ec0b9054585a" }, "warehouse_id" : 2, "warehouse_name" : "Laptop", "warehouse_created" : { "$date" : 1447229590000 } }
+ (2 rows)
+ ```
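Because the whole document is returned as a single `json` value, you can also extract individual fields from `__doc` with the ordinary Postgres JSON operators. A sketch, assuming the `test_json` table defined above:

```sql
-- Return only the warehouse_name field of each retrieved document, as text
SELECT __doc->>'warehouse_name' AS warehouse_name FROM test_json;
```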
diff --git a/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
index f7980360cbc..1bf60d94e6f 100644
--- a/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/07_configuring_the_mongo_data_adapter.mdx
@@ -17,7 +17,7 @@ Before using the MongoDB Foreign Data Wrapper:
Use the `CREATE EXTENSION` command to create the `mongo_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you want to query the MongoDB server, and invoke the command:
-```text
+```sql
CREATE EXTENSION [IF NOT EXISTS] mongo_fdw [WITH] [SCHEMA schema_name];
```
@@ -45,7 +45,7 @@ For more information about using the foreign data wrapper `CREATE EXTENSION` com
Use the `CREATE SERVER` command to define a connection to a foreign server. The syntax is:
-```text
+```sql
CREATE SERVER server_name FOREIGN DATA WRAPPER mongo_fdw
[OPTIONS (option 'value' [, ...])]
```
@@ -84,7 +84,7 @@ The role that defines the server is the owner of the server. Use the `ALTER SERV
The following command creates a foreign server named `mongo_server` that uses the `mongo_fdw` foreign data wrapper to connect to a host with an IP address of `127.0.0.1`:
-```text
+```sql
CREATE SERVER mongo_server FOREIGN DATA WRAPPER mongo_fdw OPTIONS (host '127.0.0.1', port '27017');
```
@@ -98,7 +98,7 @@ For more information about using the `CREATE SERVER` command, see the [PostgreSQ
Use the `CREATE USER MAPPING` command to define a mapping that associates a Postgres role with a foreign server:
-```text
+```sql
CREATE USER MAPPING FOR role_name SERVER server_name
[OPTIONS (option 'value' [, ...])];
```
@@ -131,7 +131,7 @@ The following command creates a user mapping for a role named `enterprisedb`. Th
If the database host uses secure authentication, provide connection credentials when creating the user mapping:
-```text
+```sql
CREATE USER MAPPING FOR enterprisedb SERVER mongo_server OPTIONS (username 'mongo_user', password 'mongo_pass');
```
@@ -145,7 +145,7 @@ For detailed information about the `CREATE USER MAPPING` command, see the [Postg
A foreign table is a pointer to a table that resides on the MongoDB host. Before creating a foreign table definition on the Postgres server, connect to the MongoDB server and create a collection. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the collection that resides on the MongoDB host. The syntax is:
-```text
+```sql
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint }
@@ -157,14 +157,14 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
`column_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
`table_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
@@ -234,7 +234,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
To use data that's stored on a MongoDB server, you must create a table on the Postgres host that maps the columns of a MongoDB collection to the columns of a Postgres table. For example, for a MongoDB collection with the following definition:
-```text
+```sql
db.warehouse.find
(
{
@@ -251,7 +251,7 @@ db.warehouse.find
Execute a command on the Postgres server that creates a comparable table on the Postgres server:
-```text
+```sql
CREATE FOREIGN TABLE warehouse
(
_id NAME,
@@ -295,7 +295,7 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the MongoDB server, and run the command:
-```text
+```sql
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
@@ -329,7 +329,7 @@ For more information about using the foreign data wrapper `DROP EXTENSION` comma
Use the `DROP SERVER` command to remove a connection to a foreign server. The syntax is:
-```text
+```sql
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
@@ -365,7 +365,7 @@ For more information about using the `DROP SERVER` command, see the [PostgreSQL
Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
-```text
+```sql
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
@@ -395,7 +395,7 @@ For detailed information about the `DROP USER MAPPING` command, see the [Postgre
A foreign table is a pointer to a table that resides on the MongoDB host. Use the `DROP FOREIGN TABLE` command to remove a foreign table. Only the owner of the foreign table can drop it.
-```text
+```sql
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
@@ -419,7 +419,7 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
### Example
-```text
+```sql
DROP FOREIGN TABLE warehouse;
```
diff --git a/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx b/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
index 47591e59fd5..c8ab696970e 100644
--- a/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/08_example_using_the_mongo_data_adapter.mdx
@@ -4,7 +4,7 @@ title: "Example: End-to-end"
Before using the MongoDB foreign data wrapper, you must connect to your database with a client application. The following example uses the wrapper with the psql client. After connecting to psql, you can follow the steps in the example:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mongo_fdw;
diff --git a/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx b/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
index 8e533fa9417..25c22914f19 100644
--- a/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/08a_example_join_pushdown.mdx
@@ -6,7 +6,7 @@ MongoDB Foreign Data Wrapper supports pushdown for inner joins, left joins, and
Postgres data set:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mongo_fdw;
@@ -35,32 +35,36 @@ INSERT INTO dept VALUES (0, 30, 'IT');
```
The output:
-```
+```sql
--inner join
edb=# EXPLAIN VERBOSE SELECT e.ename, d.dname FROM emp e INNER JOIN dept d ON (e.deptid = d.deptid);
+__OUTPUT__
QUERY PLAN
----------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=64)
Output: e.ename, d.dname
Foreign Namespace: (edb.emp e) INNER JOIN (edb.dept d)
(3 rows)
-
+```
+```sql
--left join
edb=# EXPLAIN VERBOSE SELECT e.ename, d.dname FROM emp e LEFT JOIN dept d ON (e.deptid = d.deptid);
+__OUTPUT__
QUERY PLAN
---------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=64)
Output: e.ename, d.dname
Foreign Namespace: (edb.emp e) LEFT JOIN (edb.dept d)
(3 rows)
-
+```
+```sql
--right join
edb=# EXPLAIN VERBOSE SELECT e.ename, d.dname FROM emp e RIGHT JOIN dept d ON (e.deptid = d.deptid);
+__OUTPUT__
QUERY PLAN
---------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=64)
Output: e.ename, d.dname
Foreign Namespace: (edb.dept d) LEFT JOIN (edb.emp e)
(3 rows)
-
- ```
\ No newline at end of file
+```
\ No newline at end of file
diff --git a/product_docs/docs/mongo_data_adapter/5/08b_example_aggregate_pushdown.mdx b/product_docs/docs/mongo_data_adapter/5/08b_example_aggregate_pushdown.mdx
index 4e5a4f72156..997794f6284 100644
--- a/product_docs/docs/mongo_data_adapter/5/08b_example_aggregate_pushdown.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/08b_example_aggregate_pushdown.mdx
@@ -12,7 +12,7 @@ MongoDB Foreign Data Wrapper supports pushdown for the following aggregate funct
Postgres data set:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mongo_fdw;
@@ -34,54 +34,65 @@ INSERT INTO emp VALUES (0, 130, 30);
The output:
-```text
+```sql
-- COUNT function
edb# EXPLAIN VERBOSE SELECT COUNT(*) FROM emp;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=8)
Output: (count(*))
Foreign Namespace: Aggregate on (db1.emp)
(3 rows)
-
+```
+```sql
-- SUM function
edb# EXPLAIN VERBOSE SELECT SUM(deptid) FROM emp;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=8)
Output: (sum(deptid))
Foreign Namespace: Aggregate on (db1.emp)
(3 rows)
-
+```
+```sql
-- AVG function
edb# EXPLAIN VERBOSE SELECT AVG(deptid) FROM emp;
+__OUTPUT__
QUERY PLAN
---------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=32)
Output: (avg(deptid))
Foreign Namespace: Aggregate on (db1.emp)
(3 rows)
-
+```
+```sql
-- MAX function
edb# EXPLAIN VERBOSE SELECT MAX(eid) FROM emp;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=4)
Output: (max(eid))
Foreign Namespace: Aggregate on (db1.emp)
(3 rows)
-
+```
+```sql
-- MIN function
edb# EXPLAIN VERBOSE SELECT MIN(eid) FROM emp;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=4)
Output: (min(eid))
Foreign Namespace: Aggregate on (db1.emp)
(3 rows)
-
+```
+```sql
-- MIN and SUM functions with GROUPBY
edb# EXPLAIN VERBOSE SELECT MIN(deptid), SUM(eid) FROM emp GROUP BY deptid HAVING MAX(eid) > 120;
+__OUTPUT__
QUERY PLAN
-----------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=200 width=16)
diff --git a/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx b/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
index 1c2ba36f186..4d3222d244c 100644
--- a/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/mongo_data_adapter/5/09_identifying_data_adapter_version.mdx
@@ -6,7 +6,7 @@ title: "Identifying the version"
The MongoDB Foreign Data Wrapper includes a function that you can use to identify the currently installed version of the `.so` file for the data wrapper. To use the function, connect to the Postgres server, and enter:
-```text
+```sql
SELECT mongo_fdw_version();
```
diff --git a/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
index e2dd9bb6c95..c01846710aa 100644
--- a/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/05_updating_the_mysql_data_adapter.mdx
@@ -11,15 +11,15 @@ If you have an existing RPM installation of MySQL Foreign Data Wrapper, you can
- For MySQL 8:
- On RHEL or CentOS 7:
- ```text
+ ```shell
sudo yum -y upgrade edb-as-mysql8_fdw* mysql-community-devel
```
- On RHEL or Rocky Linux or AlmaLinux 8:
- ```text
+ ```shell
sudo dnf -y upgrade edb-as-mysql8_fdw* mysql-community-devel
```
- For MySQL 5 on RHEL or CentOS 7:
- ```text
+ ```shell
sudo yum -y upgrade edb-as-mysql5_fdw* mysql-community-devel
```
@@ -35,11 +35,11 @@ If you are upgrading from MySQL FDW 2.5.5 to 2.6.0 for MySQL 8, then before inst
Use the following command to upgrade the FDW installation:
- For MySQL 8:
- ```text
+ ```shell
apt-get upgrade edb-as-mysql8-fdw*
```
- For MySQL 5:
- ```text
+ ```shell
apt-get upgrade edb-as-mysql5-fdw*
```
where `xx` is the server version number.
diff --git a/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
index 7c8ef01e17c..07de8655c6d 100644
--- a/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/07_configuring_the_mysql_data_adapter.mdx
@@ -14,7 +14,7 @@ Before using the MySQL Foreign Data Wrapper:
Use the `CREATE EXTENSION` command to create the `mysql_fdw` extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're querying the MySQL server, and invoke the command:
-```text
+```sql
CREATE EXTENSION [IF NOT EXISTS] mysql_fdw [WITH] [SCHEMA schema_name];
```
@@ -42,7 +42,7 @@ For more information about using the foreign data wrapper `CREATE EXTENSION` com
Use the `CREATE SERVER` command to define a connection to a foreign server. The syntax is:
-```text
+```sql
CREATE SERVER server_name FOREIGN DATA WRAPPER mysql_fdw
[OPTIONS (option 'value' [, ...])]
```
@@ -84,7 +84,7 @@ The role that defines the server is the owner of the server. Use the `ALTER SERV
The following command creates a foreign server named `mysql_server` that uses the `mysql_fdw` foreign data wrapper to connect to a host with an IP address of `127.0.0.1`:
-```text
+```sql
CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3306');
```
@@ -97,7 +97,7 @@ For more information about using the `CREATE SERVER` command, see the [PostgreSQ
Use the `CREATE USER MAPPING` command to define a mapping that associates a Postgres role with a foreign server:
-```text
+```sql
CREATE USER MAPPING FOR role_name SERVER server_name
[OPTIONS (option 'value' [, ...])];
```
@@ -130,7 +130,7 @@ The following command creates a user mapping for a role named `enterprisedb`. Th
If the database host uses secure authentication, provide connection credentials when creating the user mapping:
-```text
+```sql
CREATE USER MAPPING FOR public SERVER mysql_server OPTIONS (username 'foo', password 'bar');
```
@@ -143,7 +143,7 @@ For detailed information about the `CREATE USER MAPPING` command, see the [Postg
A foreign table is a pointer to a table that resides on the MySQL host. Before creating a foreign table definition on the Postgres server, connect to the MySQL server and create a table. The columns in the table map to columns in a table on the Postgres server. Then, use the `CREATE FOREIGN TABLE` command to define a table on the Postgres server with columns that correspond to the table that resides on the MySQL host. The syntax is:
-```text
+```sql
CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
{ column_name data_type [ OPTIONS ( option 'value' [, ... ] ) ] [ COLLATE collation ] [ column_constraint [ ... ] ]
| table_constraint }
@@ -155,14 +155,14 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
`column_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ]
{ NOT NULL | NULL | CHECK (expr) [ NO INHERIT ] | DEFAULT default_expr }
```
`table_constraint` is:
-```text
+```sql
[ CONSTRAINT constraint_name ] CHECK (expr) [ NO INHERIT ]
```
@@ -234,7 +234,7 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] table_name ( [
To use data that's stored on a MySQL server, you must create a table on the Postgres host that maps the columns of a MySQL table to the columns of a Postgres table. For example, for a MySQL table with the following definition:
-```text
+```sql
CREATE TABLE warehouse (
warehouse_id INT PRIMARY KEY,
warehouse_name TEXT,
@@ -243,7 +243,7 @@ CREATE TABLE warehouse (
Execute a command on the Postgres server that creates a comparable table on the Postgres server:
-```text
+```sql
CREATE FOREIGN TABLE warehouse
(
warehouse_id INT,
@@ -293,7 +293,7 @@ When using the foreign data wrapper, you must create a table on the Postgres ser
Use the `IMPORT FOREIGN SCHEMA` command to import table definitions on the Postgres server from the MySQL server. The new foreign tables are created with the same column definitions as those of the remote tables in the existing local schema. The syntax is:
-```text
+```sql
IMPORT FOREIGN SCHEMA remote_schema
[ { LIMIT TO | EXCEPT } ( table_name [, ...] ) ]
FROM SERVER server_name
@@ -337,7 +337,7 @@ IMPORT FOREIGN SCHEMA remote_schema
For a MySQL table created in the edb database with the following definition:
-```text
+```sql
CREATE TABLE color(cid INT PRIMARY KEY, cname TEXT);
INSERT INTO color VALUES (1, 'Red');
INSERT INTO color VALUES (2, 'Green');
@@ -350,7 +350,7 @@ INSERT INTO fruit VALUES (2, 'Mango');
Execute a command on the Postgres server that imports a comparable table on the Postgres server:
-```text
+```sql
IMPORT FOREIGN SCHEMA edb FROM SERVER mysql_server INTO public;
SELECT * FROM color;
@@ -379,7 +379,7 @@ For more information about using the `IMPORT FOREIGN SCHEMA` command, see the [P
Use the `DROP EXTENSION` command to remove an extension. To invoke the command, use your client of choice (for example, psql) to connect to the Postgres database from which you're dropping the MySQL server, and run the command:
-```text
+```sql
DROP EXTENSION [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ];
```
@@ -413,7 +413,7 @@ For more information about using the foreign data wrapper `DROP EXTENSION` comma
Use the `DROP SERVER` command to remove a connection to a foreign server. The syntax is:
-```text
+```sql
DROP SERVER [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
@@ -447,7 +447,7 @@ For more information about using the `DROP SERVER` command, see the [PostgreSQL
Use the `DROP USER MAPPING` command to remove a mapping that associates a Postgres role with a foreign server. You must be the owner of the foreign server to remove a user mapping for that server.
-```text
+```sql
DROP USER MAPPING [ IF EXISTS ] FOR { user_name | USER | CURRENT_USER | PUBLIC } SERVER server_name;
```
@@ -477,7 +477,7 @@ For detailed information about the `DROP USER MAPPING` command, see the [Postgre
A foreign table is a pointer to a table that resides on the MySQL host. Use the `DROP FOREIGN TABLE` command to remove a foreign table. Only the owner of the foreign table can drop it.
-```text
+```sql
DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
```
@@ -501,7 +501,7 @@ DROP FOREIGN TABLE [ IF EXISTS ] name [, ...] [ CASCADE | RESTRICT ]
### Example
-```text
+```sql
DROP FOREIGN TABLE warehouse;
```
diff --git a/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx b/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
index 9066be68d2d..d2116bb3226 100644
--- a/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/08_example_using_the_mysql_data_adapter.mdx
@@ -6,7 +6,7 @@ title: "Example: End-to-end"
Access data from EDB Postgres Advanced Server and connect to psql. Once you're connected to psql, follow these steps:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mysql_fdw;
diff --git a/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx b/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
index 56b7aa85c3b..8c942c6c981 100644
--- a/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/09_example_import_foreign_schema.mdx
@@ -6,7 +6,7 @@ title: "Example: Import foreign schema"
Access data from EDB Postgres Advanced Server and connect to psql. Once you're connected to psql, follow these steps:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mysql_fdw;
diff --git a/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
index b36d2231904..7cee1df03ef 100644
--- a/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10_example_join_push_down.mdx
@@ -7,7 +7,7 @@ This example shows join pushdown between two foreign tables: `warehouse` and `sa
Table on MySQL server:
-```text
+```sql
CREATE TABLE warehouse
(
warehouse_id INT PRIMARY KEY,
@@ -24,7 +24,7 @@ qty INT
Table on Postgres server:
-```text
+```sql
CREATE EXTENSION mysql_fdw;
CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3306');
CREATE USER MAPPING FOR public SERVER mysql_server OPTIONS (username 'edb', password 'edb');
@@ -53,9 +53,10 @@ INSERT INTO sales_records values (3, 200);
The output:
-```text
+```sql
--inner join
edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 INNER JOIN sales_records t2 ON (t1.warehouse_id = t2.warehouse_id);
+__OUTPUT__
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=36)
@@ -64,9 +65,11 @@ edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 INNER J
Local server startup cost: 10
Remote query: SELECT r1.`warehouse_name`, r2.`qty` FROM (`edb`.`warehouse` r1 INNER JOIN `edb`.`sales_records` r2 ON (((r1.`warehouse_id` = r2.`warehouse_id`))))
(5 rows)
-
+```
+```sql
--left join
edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 LEFT JOIN sales_records t2 ON (t1.warehouse_id = t2.warehouse_id);
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=36)
@@ -75,9 +78,11 @@ edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 LEFT JO
Local server startup cost: 10
Remote query: SELECT r1.`warehouse_name`, r2.`qty` FROM (`edb`.`warehouse` r1 LEFT JOIN `edb`.`sales_records` r2 ON (((r1.`warehouse_id` = r2.`warehouse_id`))))
(5 rows)
-
+```
+```sql
--right join
edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 RIGHT JOIN sales_records t2 ON (t1.warehouse_id = t2.warehouse_id);
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=5000 width=36)
@@ -86,10 +91,12 @@ edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 RIGHT J
Local server startup cost: 10
Remote query: SELECT r1.`warehouse_name`, r2.`qty` FROM (`edb`.`sales_records` r2 LEFT JOIN `edb`.`warehouse` r1 ON (((r1.`warehouse_id` = r2.`warehouse_id`))))
(5 rows)
-
+```
+```sql
--cross join
edb=# EXPLAIN VERBOSE SELECT t1.warehouse_name, t2.qty FROM warehouse t1 CROSS JOIN
sales_records t2;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..35.00 rows=1000000 width=36)
diff --git a/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
index f6f39d0c424..ef3c4089aa0 100644
--- a/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10a_example_aggregate_func_push_down.mdx
@@ -12,7 +12,7 @@ MySQL Foreign Data Wrapper supports pushdown for the following aggregate functio
Table on MySQL server:
-```text
+```sql
CREATE TABLE sales_records
(
warehouse_id INT PRIMARY KEY,
@@ -21,7 +21,7 @@ qty INT
```
Table on Postgres server:
-```
+```sql
CREATE EXTENSION mysql_fdw;
CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3306');
CREATE USER MAPPING FOR public SERVER mysql_server OPTIONS (username 'edb', password 'edb');
@@ -39,8 +39,9 @@ INSERT INTO sales_records values (3, 200);
The output:
-```text
+```sql
edb=# EXPLAIN VERBOSE SELECT avg(qty) FROM sales_records;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=32)
@@ -49,8 +50,10 @@ edb=# EXPLAIN VERBOSE SELECT avg(qty) FROM sales_records;
Local server startup cost: 10
Remote query: SELECT avg(`qty`) FROM `edb`.`sales_records`
(5 rows)
-
+```
+```sql
edb=# EXPLAIN VERBOSE SELECT COUNT(qty) FROM sales_records;
+__OUTPUT__
QUERY PLAN
----------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=8)
@@ -59,8 +62,10 @@ edb=# EXPLAIN VERBOSE SELECT COUNT(qty) FROM sales_records;
Local server startup cost: 10
Remote query: SELECT count(`qty`) FROM `edb`.`sales_records`
(5 rows)
-
+```
+```sql
edb=# EXPLAIN VERBOSE SELECT MIN(qty) FROM sales_records;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=4)
@@ -69,8 +74,10 @@ edb=# EXPLAIN VERBOSE SELECT MIN(qty) FROM sales_records;
Local server startup cost: 10
Remote query: SELECT min(`qty`) FROM `edb`.`sales_records`
(5 rows)
-
+```
+```sql
edb=# EXPLAIN VERBOSE SELECT MAX(qty) FROM sales_records;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=4)
@@ -79,8 +86,10 @@ edb=# EXPLAIN VERBOSE SELECT MAX(qty) FROM sales_records;
Local server startup cost: 10
Remote query: SELECT max(`qty`) FROM `edb`.`sales_records`
(5 rows)
-
+```
+```sql
edb=# EXPLAIN VERBOSE SELECT SUM(qty) FROM sales_records;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=8)
@@ -89,8 +98,10 @@ edb=# EXPLAIN VERBOSE SELECT SUM(qty) FROM sales_records;
Local server startup cost: 10
Remote query: SELECT sum(`qty`) FROM `edb`.`sales_records`
(5 rows)
-
+```
+```sql
edb=# EXPLAIN VERBOSE SELECT SUM(qty) FROM sales_records GROUP BY warehouse_id HAVING SUM(qty) = 75;
+__OUTPUT__
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=200 width=12)
diff --git a/product_docs/docs/mysql_data_adapter/2/10b_example_order_by_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10b_example_order_by_push_down.mdx
index 1ebd5ce3339..335f707be8d 100644
--- a/product_docs/docs/mysql_data_adapter/2/10b_example_order_by_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10b_example_order_by_push_down.mdx
@@ -6,7 +6,7 @@ This example shows ORDER BY pushdown on foreign table: `sales_records`.
Table on MySQL server:
-```text
+```sql
CREATE TABLE sales_records(
warehouse_id INT PRIMARY KEY,
qty INT);
@@ -14,7 +14,7 @@ qty INT);
Table on Postgres server:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mysql_fdw;
@@ -39,9 +39,10 @@ INSERT INTO sales_records values (3, 200);
The output:
-```text
+```sql
-- ORDER BY ASC
edb=#EXPLAIN VERBOSE SELECT * FROM sales_records WHERE qty > 80 ORDER BY warehouse_id;
+__OUTPUT__
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan on public.sales_records (cost=10.00..1010.00 rows=1000 width=8)
@@ -49,9 +50,11 @@ edb=#EXPLAIN VERBOSE SELECT * FROM sales_records WHERE qty > 80 ORDER BY warehou
Local server startup cost: 10
Remote query: SELECT `warehouse_id`, `qty` FROM `edb`.`sales_records` WHERE ((`qty` > 80)) ORDER BY `warehouse_id` IS NULL, `warehouse_id` ASC
(4 rows)
-
+```
+```sql
-- ORDER BY DESC
edb=#EXPLAIN VERBOSE SELECT * FROM sales_records ORDER BY warehouse_id DESC;
+__OUTPUT__
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan on public.sales_records (cost=10.00..1010.00 rows=1000 width=8)
@@ -59,9 +62,11 @@ edb=#EXPLAIN VERBOSE SELECT * FROM sales_records ORDER BY warehouse_id DESC;
Local server startup cost: 10
Remote query: SELECT `warehouse_id`, `qty` FROM `edb`.`sales_records` ORDER BY `warehouse_id` IS NOT NULL, `warehouse_id` DESC
(4 rows)
-
+```
+```sql
-- ORDER BY with AGGREGATES
edb@91975=#EXPLAIN VERBOSE SELECT count(warehouse_id) FROM sales_records WHERE qty > 80 group by warehouse_id ORDER BY warehouse_id;
+__OUTPUT__
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=10 width=12)
diff --git a/product_docs/docs/mysql_data_adapter/2/10c_example_limit_offset_push_down.mdx b/product_docs/docs/mysql_data_adapter/2/10c_example_limit_offset_push_down.mdx
index 45652eac7ad..c9b998c7f0c 100644
--- a/product_docs/docs/mysql_data_adapter/2/10c_example_limit_offset_push_down.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/10c_example_limit_offset_push_down.mdx
@@ -6,7 +6,7 @@ This example shows LIMIT OFFSET pushdown on foreign table: `sales_records`.
Table on MySQL server:
-```text
+```sql
CREATE TABLE sales_records(
warehouse_id INT PRIMARY KEY,
qty INT);
@@ -14,7 +14,7 @@ qty INT);
Table on Postgres server:
-```text
+```sql
-- load extension first time after install
CREATE EXTENSION mysql_fdw;
@@ -38,9 +38,10 @@ INSERT INTO sales_records values (3, 200);
The output:
-```text
+```sql
-- LIMIT only
edb@91975=#EXPLAIN VERBOSE SELECT * FROM sales_records WHERE qty > 80 ORDER BY warehouse_id LIMIT 5;
+__OUTPUT__
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan on public.sales_records (cost=1.00..2.00 rows=1 width=8)
@@ -48,9 +49,11 @@ edb@91975=#EXPLAIN VERBOSE SELECT * FROM sales_records WHERE qty > 80 ORDER BY w
Local server startup cost: 10
Remote query: SELECT `warehouse_id`, `qty` FROM `edb`.`sales_records` WHERE ((`qty` > 80)) ORDER BY `warehouse_id` IS NULL, `warehouse_id` ASC LIMIT 5
(4 rows)
-
+```
+```sql
-- LIMIT and OFFSET
edb@91975=#EXPLAIN VERBOSE SELECT * FROM sales_records WHERE qty > 80 ORDER BY warehouse_id LIMIT 5 OFFSET 5;
+__OUTPUT__
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Foreign Scan on public.sales_records (cost=1.00..2.00 rows=1 width=8)
diff --git a/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx b/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
index 51196fec529..1a4ddbd8f76 100644
--- a/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
+++ b/product_docs/docs/mysql_data_adapter/2/11_identifying_data_adapter_version.mdx
@@ -6,7 +6,7 @@ title: "Identifying the version"
The MySQL Foreign Data Wrapper includes a function that you can use to identify the currently installed version of the `.so` file for the data wrapper. To use the function, connect to the Postgres server, and enter:
-```text
+```sql
SELECT mysql_fdw_version();
```
diff --git a/product_docs/docs/pem/8/pem_rel_notes/index.mdx b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
index c90fae8d3c5..a17127cbc0d 100644
--- a/product_docs/docs/pem/8/pem_rel_notes/index.mdx
+++ b/product_docs/docs/pem/8/pem_rel_notes/index.mdx
@@ -6,11 +6,11 @@ The Postgres Enterprise Manager (PEM) documentation describes the latest version
| Version | Release Date | Upstream Merges | Accessibility Conformance |
| ------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------- |
-| [8.5.0](03_850_rel_notes) | 23 Jun 2022 | NA | NA |
+| [8.5.0](03_850_rel_notes) | 23 Jun 2022 | NA | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.4.0](04_840_rel_notes) | 01 Apr 2022 | NA | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.3.0](05_830_rel_notes) | 24 Nov 2021 | pgAdmin [5.7](https://www.pgadmin.org/docs/pgadmin4/5.7/release_notes_5_7.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.2.0](06_820_rel_notes) | 09 Sep 2021 | pgAdmin [5.4](https://www.pgadmin.org/docs/pgadmin4/5.4/release_notes_5_4.html#bug-fixes), [5.5](https://www.pgadmin.org/docs/pgadmin4/5.5/release_notes_5_5.html#bug-fixes), and [5.6](https://www.pgadmin.org/docs/pgadmin4/5.6/release_notes_5_6.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
-| [8.1.1](07_811_rel_notes) | 22 Jul 2021 | NA | NA |
+| [8.1.1](07_811_rel_notes) | 22 Jul 2021 | NA | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.1.0](08_810_rel_notes) | 16 Jun 2021 | pgAdmin [5.0](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_0.html#bug-fixes), [5.1](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_1.html#bug-fixes), [5.2](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_2.html#bug-fixes), and [5.3](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_3.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
| [8.0.1](09_801_rel_notes) | 3 Mar 2021 | pgAdmin [4.29](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_29.html#bug-fixes), [4.30](https://www.pgadmin.org/docs/pgadmin4/4.30/release_notes_4_30.html#bug-fixes), and [5.0](https://www.pgadmin.org/docs/pgadmin4/5.3/release_notes_5_0.html#bug-fixes) | NA |
| [8.0.0](10_800_rel_notes) | 9 Dec 2020 | pgAdmin [4.27](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_27.html#bug-fixes), [4.28](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_28.html#bug-fixes), and [4.29](https://www.pgadmin.org/docs/pgadmin4/4.29/release_notes_4_29.html#bug-fixes) | [Conformance Report](https://www.enterprisedb.com/accessibility) |
diff --git a/product_docs/docs/pem/8/registering_agent.mdx b/product_docs/docs/pem/8/registering_agent.mdx
index fecd8bf2622..e5a27c7295c 100644
--- a/product_docs/docs/pem/8/registering_agent.mdx
+++ b/product_docs/docs/pem/8/registering_agent.mdx
@@ -97,43 +97,87 @@ For information about using the pemworker utility to register a server, see [Reg
To use a non-root user account to register a PEM agent, you must first install the PEM agent as a root user. After installation, assume the identity of a non-root user (for example, `edb`) and perform the following steps:
-1. Create the `.pem` directory and `logs` directory and assign read, write, and execute permissions to the file:
+1. Log in as the `edb` user. Create the `pem` and `logs` directories and assign read, write, and execute permissions:
```shell
- mkdir /home//.pem
- mkdir /home//.pem/logs
- chmod 700 /home//.pem
- chmod 700 /home//.pem/logs
+ $ mkdir /home/edb/pem
+ $ mkdir /home/edb/pem/logs
+ $ chmod 700 /home/edb/pem
+ $ chmod 700 /home/edb/pem/logs
```
2. Register the agent with PEM server:
```shell
- ./pemworker --register-agent --pem-server <172.19.11.230> --pem-user --pem-port <5432> --display-name --cert-path /home/ --config-dir /home/
-
- The above command creates agent certificates and an agent configuration file (``agent.cfg``) in the ``/home/edb/.pem`` directory. Use the following command to assign read and write permissions to these files:
-
- ``chmod -R 600 /home/edb/.pem/agent*``
+ $ export PEM_SERVER_PASSWORD=edb
+
+ # Use the following command to create agent certificates and an agent
+ # configuration file (`agent.cfg`) in the `/home/edb/pem` directory.
+ $ /usr/edb/pem/bin/pemworker --register-agent --pem-server <172.19.11.230> --pem-user postgres --pem-port 5432 --display-name non_root_pem_agent --cert-path /home/edb/pem --config-dir /home/edb/pem
+
+ # Use the following command to assign read and write permissions to
+ # these files:
+ $ chmod -R 600 /home/edb/pem/agent*
```
3. Change the parameters of the `agent.cfg` file:
```ini
- agent_ssl_key=/home/edb/.pem/agent.key
- agent_ssl_crt=/home/edb/.pem/agent.crt
- log_location=/home/edb/.pem/worker.log
- agent_log_location=/home/edb/.pem/agent.log
+ $ vi /home/edb/pem/agent.cfg
+ agent_ssl_key=/home/edb/pem/agent.key
+ agent_ssl_crt=/home/edb/pem/agent.crt
+ log_location=/home/edb/pem/worker.log
+ agent_log_location=/home/edb/pem/agent.log
```
-4. Update the values for the configuration file path and the user in the `pemagent` service file:
+ Where `` is the assigned PEM agent ID.
+
+4. Create a `tmp` directory, set the environment variable, and start the agent:
+
```ini
- User=edb
- ExecStart=/usr/edb/pem/agent/bin/pemagent -c /home/edb/.pem/agent.cfg
+ $ mkdir /home/edb/pem/tmp
+
+ # Create a script file, add the environment variable, give permissions, and execute:
+ $ vi /home/edb/pem/run_pemagent.sh
+ #!/bin/bash
+ export TEMP=/home/edb/pem/tmp
+ /usr/edb/pem/agent/bin/pemagent -c /home/edb/pem/agent.cfg
+ $ chmod a+x /home/edb/pem/run_pemagent.sh
+ $ cd /home/edb/pem
+ $ ./run_pemagent.sh
```
-5. Stop the agent process, and then restart the agent service using the non-root user:
- ```shell
- sudo systemctl start/stop/restart pemagent
- ```
+ Your PEM agent is now registered and started as the `edb` user. If your machine restarts, this agent doesn't restart automatically; you must start it manually using the previous command.
+
+5. Optionally, as the root user, you can create a service for this PEM agent so that it starts automatically when the machine restarts:
+
+ a. Update the values for the configuration file path and the user in the `pemagent` service file as superuser:
+
+ ```ini
+ $ sudo vi /usr/lib/systemd/system/pemagent.service
+ User=edb
+ Group=edb
+ WorkingDirectory=/home/edb/pem/agent
+ Environment=LD_LIBRARY_PATH=/usr/edb/pem/agent/lib
+ Environment=TEMP=/home/edb/pem/agent/tmp
+ ExecStart=/usr/edb/pem/agent/bin/pemagent -c /home/edb/pem/agent.cfg
+ ```
+
+ b. Stop the running agent and worker processes, and then start the agent service:
+
+ ```shell
+ # Find the process IDs of the running pemagent and pemworker processes and kill them
+ $ ps -ax | grep pemagent
+ $ kill -9
+ $ ps -ax | grep pemworker
+ $ kill -9
+ # Enable and start the pemagent service as superuser
+ $ sudo systemctl enable pemagent
+ $ sudo systemctl start pemagent
+ $ sudo systemctl status pemagent
+ ```
6. Check the agent status on PEM dashboard.
+
+!!! Note
+ Any probes or jobs that require root permission or access to files owned by another user (for example, enterprisedb) will fail.
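
For reference, the keys changed in step 5a live inside the packaged `pemagent.service` unit. A sketch of how such a unit might look after those edits follows; only the keys shown in step 5a come from the document, while the remaining fields (description, targets, install section) are illustrative assumptions, and the shipped unit file may differ.

```ini
# Sketch of /usr/lib/systemd/system/pemagent.service after the step 5a edits.
# [Unit] and [Install] contents are assumptions; the [Service] keys shown in
# step 5a are taken from the document.
[Unit]
Description=PEM agent running as a non-root user
After=network.target

[Service]
User=edb
Group=edb
WorkingDirectory=/home/edb/pem/agent
Environment=LD_LIBRARY_PATH=/usr/edb/pem/agent/lib
Environment=TEMP=/home/edb/pem/agent/tmp
ExecStart=/usr/edb/pem/agent/bin/pemagent -c /home/edb/pem/agent.cfg

[Install]
WantedBy=multi-user.target
```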
diff --git a/product_docs/docs/pgd/4/cli/command_ref/pgd_check-health.mdx b/product_docs/docs/pgd/4/cli/command_ref/pgd_check-health.mdx
index 139e358b7f3..071197cbc3b 100644
--- a/product_docs/docs/pgd/4/cli/command_ref/pgd_check-health.mdx
+++ b/product_docs/docs/pgd/4/cli/command_ref/pgd_check-health.mdx
@@ -9,6 +9,10 @@ Checks the health of the EDB Postgres Distributed cluster.
Performs various checks such as if all nodes are accessible, all replication
slots are working, and CAMO pairs are connected.
+The current clock skew implementation might return an inaccurate skew value if
+the cluster is under high load while running this command or has a large
+number of nodes in it.
+
```sh
pgd check-health [flags]
```
diff --git a/product_docs/docs/pgd/4/cli/command_ref/pgd_show-clockskew.mdx b/product_docs/docs/pgd/4/cli/command_ref/pgd_show-clockskew.mdx
index 008aba51571..a535f1fbc4b 100644
--- a/product_docs/docs/pgd/4/cli/command_ref/pgd_show-clockskew.mdx
+++ b/product_docs/docs/pgd/4/cli/command_ref/pgd_show-clockskew.mdx
@@ -8,6 +8,10 @@ Shows the status of clock skew between each BDR node pair.
Shows the status of clock skew between each BDR node pair in the cluster.
+The current clock skew implementation might return an inaccurate skew value if
+the cluster is under high load while running this command or has a large
+number of nodes in it.
+
Symbol Meaning
------- --------
* ok
diff --git a/src/pages/index.js b/src/pages/index.js
index cb9d0b8106b..44004b4e048 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -291,6 +291,9 @@ const Page = () => (
iconName={iconNames.HANDSHAKE}
headingText="Third Party Integrations"
>
+
+ Commvault Backup & Recovery
+
DBeaver PRO