diff --git a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
index ffc41414e80..29b6ab62547 100644
--- a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
+++ b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx
@@ -30,8 +30,8 @@ This table shows the cost breakdown.
 
 | Database type                | Hourly price   | Monthly price\* |
 | ---------------------------- | -------------- | --------------- |
-| EDB Postgres Extended Server | $0.2511 / vCPU | $188.33 / vCPU  |
-| EDB Postgres Advanced Server | $0.3424 / vCPU | $256.80 / vCPU  |
+| EDB Postgres Extended Server | $0.2511 / vCPU | $183.30 / vCPU  |
+| EDB Postgres Advanced Server | $0.3424 / vCPU | $249.95 / vCPU  |
 
 \* The monthly cost is approximate and assumes 730 hours in a month.
 
diff --git a/product_docs/docs/efm/4/05_using_efm.mdx b/product_docs/docs/efm/4/05_using_efm.mdx
index fe797d2ee82..6a7c7601131 100644
--- a/product_docs/docs/efm/4/05_using_efm.mdx
+++ b/product_docs/docs/efm/4/05_using_efm.mdx
@@ -265,27 +265,39 @@ After creating the `acctg.properties` and `sales.properties` files, create a ser
 
 ### RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x
 
-If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, copy the `edb-efm-4.` unit file to a new file with a name that is unique for each cluster. For example, if you have two clusters named acctg and sales, the unit file names might be:
+If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, copy the service file `/usr/lib/systemd/system/edb-efm-4..service` to `/etc/systemd/system` with a new name that is unique for each cluster.
 
-```text
-/usr/lib/systemd/system/efm-acctg.service
+For example, if you have two clusters named `acctg` and `sales` managed by Failover Manager 4.7, the unit file names might be `efm-acctg.service` and `efm-sales.service`, and you can create them with:
 
-/usr/lib/systemd/system/efm-sales.service
+```shell
+cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-acctg.service
+cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-sales.service
 ```
 
-Then, edit the `CLUSTER` variable in each unit file, changing the specified cluster name from `efm` to the new cluster name. For example, for a cluster named `acctg`, the value specifies:
+Then use `systemctl edit` to edit the `CLUSTER` variable in each unit file, changing the specified cluster name from `efm` to the new cluster name.
+Also update the value of the `PIDFile` parameter to match the new cluster name.
 
-```text
+In this example, edit the `acctg` cluster by running `systemctl edit efm-acctg.service` and adding:
+
+```ini
+[Service]
 Environment=CLUSTER=acctg
+PIDFile=/run/efm-4.7/acctg.pid
 ```
 
-Also update the value of the `PIDfile` parameter to specify the new cluster name. For example:
+Then edit the `sales` cluster by running `systemctl edit efm-sales.service` and adding:
 
 ```ini
-PIDFile=/var/run/efm-4.7/acctg.pid
+[Service]
+Environment=CLUSTER=sales
+PIDFile=/run/efm-4.7/sales.pid
 ```
 
-After copying the service scripts, enable the services:
+!!! Note
+You can also edit the files in `/etc/systemd/system` directly, but then you have to run `systemctl daemon-reload`, which isn't necessary when using `systemctl edit` to change the override files.
+!!!
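+
+To confirm that an override took effect, you can print a unit together with any drop-in files that `systemctl edit` created. For example, for the `efm-acctg` service from the example above:
+
+```shell
+# Prints the copied unit file plus the override drop-in containing the CLUSTER and PIDFile settings
+systemctl cat efm-acctg.service
+```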
+
+After saving the changes, enable the services:
 
 ```text
 # systemctl enable efm-acctg.service
@@ -296,7 +308,7 @@ After copying the service scripts, enable the services:
 Then, use the new service scripts to start the agents. For example, to start the `acctg` agent:
 
 ```text
-# systemctl start efm-acctg`
+# systemctl start efm-acctg
 ```
 
 For information about customizing a unit file, see [Understanding and administering systemd](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html).
diff --git a/product_docs/docs/efm/4/13_troubleshooting.mdx b/product_docs/docs/efm/4/13_troubleshooting.mdx
index 62749d37f2e..1652ac10269 100644
--- a/product_docs/docs/efm/4/13_troubleshooting.mdx
+++ b/product_docs/docs/efm/4/13_troubleshooting.mdx
@@ -1,6 +1,6 @@
 ---
 title: "Troubleshooting"
-redirects: 
+redirects:
  - ../efm_user/13_troubleshooting
 legacyRedirectsGenerated:
   # This list is generated by a script. If you need add entries, use the `legacyRedirects` key.
@@ -47,3 +47,9 @@ openjdk version "1.8.0_191"
 OpenJDK Runtime Environment (build 1.8.0_191-b12)
 OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
 ```
+!!! Note
+    There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you might see an error like the following:
+
+    `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)`
+
+    If so, the workaround is to manually install the missing package using the command `sudo dnf install tzdata-java`.
diff --git a/product_docs/docs/efm/4/installing/prerequisites.mdx b/product_docs/docs/efm/4/installing/prerequisites.mdx
index 67ecb464465..8b74021e55c 100644
--- a/product_docs/docs/efm/4/installing/prerequisites.mdx
+++ b/product_docs/docs/efm/4/installing/prerequisites.mdx
@@ -17,6 +17,13 @@ Before configuring a Failover Manager cluster, you must satisfy the prerequisite
 
 Before using Failover Manager, you must first install Java (version 1.8 or later). Failover Manager is tested with OpenJDK, and we strongly recommend installing that version of Java. [Installation instructions for Java](https://openjdk.java.net/install/) are platform specific.
 
+!!! Note
+    There is a temporary issue with OpenJDK version 11 on RHEL and its derivatives. When starting Failover Manager, you might see an error like the following:
+
+    `java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-2.el8.x86_64/lib/tzdb.dat (No such file or directory)`
+
+    If so, the workaround is to manually install the missing package using the command `sudo dnf install tzdata-java`.
+
 ## Provide an SMTP server
 
 You can receive notifications from Failover Manager as specified by a user-defined notification script, by email, or both.
diff --git a/product_docs/docs/pgd/5/architectures.mdx b/product_docs/docs/pgd/5/architectures.mdx
index aee575d6b11..210ac0bcf2f 100644
--- a/product_docs/docs/pgd/5/architectures.mdx
+++ b/product_docs/docs/pgd/5/architectures.mdx
@@ -94,12 +94,14 @@ they aren't part of the standard Always On architectures.
 * Can be 3 data nodes (recommended)
 * Can be 2 data nodes and 1 witness that doesn't hold data (not depicted)
 * A PGD Proxy for each data node with affinity to the applications
-  * Can be colocated with data node
+  * Can be colocated with data node (recommended)
+  * Can be located on a separate node
+  * Configuration and infrastructure symmetry of data nodes is expected, to ensure that proper resources are available to handle the application workload when it's rerouted
 * Barman for backup and recovery (not depicted)
   * Offsite is optional but recommended
-  * Can be shared by multiple clusters
+  * Can be shared by multiple PGD clusters
 * Postgres Enterprise Manager (PEM) for monitoring (not depicted)
-  * Can be shared by multiple clusters
+  * Can be shared by multiple PGD clusters
 
 ### Always On multi-location
 
@@ -112,14 +112,17 @@
 * Can be 3 data nodes (recommended)
 * Can be 2 data nodes and 1 witness which does not hold data (not depicted)
 * A PGD-Proxy for each data node with affinity to the applications
-  * can be co-located with data node
+  * Can be co-located with data node (recommended)
+  * Can be located on a separate node
+  * Configuration and infrastructure symmetry of data nodes and locations is expected, to ensure that proper resources are available to handle the application workload when it's rerouted
 * Barman for backup and recovery (not depicted).
-  * Can be shared by multiple clusters
+  * Can be shared by multiple PGD clusters
 * Postgres Enterprise Manager (PEM) for monitoring (not depicted).
-  * Can be shared by multiple clusters
+  * Can be shared by multiple PGD clusters
 * An optional witness node must be placed in a third region to increase tolerance for location failure.
   * Otherwise, when a location fails, actions requiring global consensus are blocked, such as adding new nodes and distributed DDL.
+
 
 ## Choosing your architecture
 
 All architectures provide the following:
 
diff --git a/product_docs/docs/pgd/5/cli/discover_connections.mdx b/product_docs/docs/pgd/5/cli/discover_connections.mdx
index 61bde0b5ed4..5a24b8a63e9 100644
--- a/product_docs/docs/pgd/5/cli/discover_connections.mdx
+++ b/product_docs/docs/pgd/5/cli/discover_connections.mdx
@@ -1,23 +1,27 @@
 ---
-title: "Discovering Connection Strings"
-navTitle: "Discovering Connection Strings"
+title: "Discovering connection strings"
+navTitle: "Discovering connection strings"
 indexdepth: 2
 deepToC: true
 ---
 
-PGD CLI can be installed on any system which is able to connect to the PGD cluster. You will require a user with PGD superuser privileges - the [bdr_superuser role](../security) - or equivalent (e.g. edb_admin on BigAnimal distributed high-availability) to use PGD CLI.
+You can install PGD CLI on any system that can connect to the PGD cluster. To use PGD CLI, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is `edb_admin` on an EDB BigAnimal Distributed High Availability cluster.
 
 ## PGD CLI and database connection strings
 
-You may not need a database connection string. For example, when Trusted Postgres Architect installs the PGD CLI on a system, it also configures the connection to the PGD cluster. This means that PGD CLI will automatically connect when run.
+You might not need a database connection string. For example, when Trusted Postgres Architect installs the PGD CLI on a system, it also configures the connection to the PGD cluster. This means that PGD CLI can connect to the cluster when run.
 
 ## Getting your database connection string
 
-Every deployment method has a different way of deriving a connection string for it. This is because of the range of different configurations that PGD supports. Generally, you can obtain the required information from the configuration of your deployment; this section provides a guide of how to assemble that information into connection strings.
+Because of the range of different configurations that PGD supports, every deployment method has a different way of deriving a connection string. Generally, you can obtain the required information from the configuration of your deployment. You can then assemble that information into connection strings.
 
 ### For a TPA-deployed PGD cluster
 
-Because TPA is so flexible, you will have to derive your connection string from your cluster configuration file (config.yml). You will need the name or IP address of a host with the role pgd-proxy listed for it. This host will have a proxy you can connect to. Usually the proxy will be listening on port 6432 (check the setting for `default_pgd_proxy_options` and `listen_port` in the config to confirm). The default database name is `bdrdb` (check the setting `bdr_database` in the config to confirm) and the default PGD superuser will be `enterprisedb` for EPAS and `postgres` for Postgres and Postgres Extended.
+Because TPA is so flexible, you have to derive your connection string from your cluster configuration file (`config.yml`).
+
+- You need the name or IP address of a host with the role pgd-proxy listed for it. This host has a proxy you can connect to. Usually the proxy listens on port 6432. (Check the setting for `default_pgd_proxy_options` and `listen_port` in the config to confirm.)
+- The default database name is `bdrdb`. (Check the setting `bdr_database` in the config to confirm.)
+- The default PGD superuser is `enterprisedb` for EDB Postgres Advanced Server and `postgres` for Postgres and Postgres Extended.
 
 You can then assemble a connection string based on that information:
 
@@ -25,7 +29,7 @@ You can then assemble a connection string based on that information:
 "host= port= dbname= user= sslmode=require"
 ```
 
-To illustrate this, here's some excerpts of a config.yml file for a cluster:
+To illustrate this, here are some excerpts of a `config.yml` file for a cluster:
 
 ```yaml
 ...
@@ -51,26 +55,35 @@ instances:
 ...
 ```
 
-The connection string for this cluster would be:
+The connection string for this cluster is:
 
 ```
 "host=192.168.100.2 port=6432 dbname=bdrdb user=enterprisedb sslmode=require"
 ```
 
 !!! Note Host name versus IP address
-In our example, we use the IP address because the configuration is from a Docker TPA install with no name resolution available. Generally, you should be able to use the host name as configured.
+The example uses the IP address because the configuration is from a Docker TPA install with no name resolution available. Generally, you can use the host name as configured.
 !!!
 
-### For a BigAnimal distributed high-availability cluster
+### For an EDB BigAnimal Distributed High Availability cluster
 
-1. Log into the [BigAnimal Clusters](https://portal.biganimal.com/clusters) view.
+1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view.
+1. In the filter, set **Cluster Type** to "Distributed High Availability" to show only clusters that work with PGD CLI.
 1. Select your cluster.
-1. In the view of your cluster, select the Connect tab.
-1. Copy the Read/Write URI from the connection info. This is your connection string.
+1. In the view of your cluster, select the **Connect** tab.
+1. Copy the read/write URI from the connection info. This is your connection string.
+
+### For a cluster deployed with EDB PGD for Kubernetes
+
+As with TPA, EDB PGD for Kubernetes is very flexible, and there are multiple ways to obtain a connection string. It depends, in large part, on how the [services](/postgres_distributed_for_kubernetes/latest/connectivity/#services) were configured for the deployment:
+
+- If you use the Node Service Template, direct connectivity to each node and proxy service is available.
+- If you use the Group Service Template, there's a gateway service to each group.
+- If you use the Proxy Service Template, a single proxy provides an entry point to the cluster for all applications.
 
-### For an EDB PGD for Kubernetes deployed cluster
+Consult your configuration file to determine this information.
 
-As with TPA, EDB PGD for Kubernetes is very flexible and there is no one way to obtain a connection string. It depends, in large part, on how the [Services](https://www.enterprisedb.com/docs/postgres_distributed_for_kubernetes/latest/connectivity/#services) have been configured for the deployment. If the Node Service Template is used, there should be direct connectivity to each node and proxy service available. If the Group Service Template, there will be a gateway service to each group. Finally, if the Proxy Service Template has been used, there should be a single proxy providing an entry point to the cluster for all applications. Consult your configuration file to determine this information. You should be able to establish a host name or IP address, port, database name (default: `bdrdb`) and username (`enterprisedb` for EPAS and `postgres` for Postgres and Postgres Extended.).
+Establish a host name or IP address, port, database name, and username. The default database name is `bdrdb`, and the default username is `enterprisedb` for EDB Postgres Advanced Server and `postgres` for Postgres and Postgres Extended.
 
 You can then assemble a connection string based on that information:
 
@@ -78,6 +91,6 @@ You can then assemble a connection string based on that information:
 "host= port= dbname= user="
 ```
 
-You may need to add `sslmode=` if the deployment's configuration requires it.
+If the deployment's configuration requires it, add `sslmode=`.
 
diff --git a/product_docs/docs/pgd/5/cli/index.mdx b/product_docs/docs/pgd/5/cli/index.mdx
index 443ec13d74e..1b87d879919 100644
--- a/product_docs/docs/pgd/5/cli/index.mdx
+++ b/product_docs/docs/pgd/5/cli/index.mdx
@@ -13,9 +13,9 @@ directoryDefaults:
   description: "The PGD Command Line Interface (CLI) is a tool to manage your EDB Postgres Distributed cluster"
 ---
 
-The EDB Postgres Distributed Command Line Interface (PGD CLI) is a tool for managing your EDB Postgres Distributed cluster. It allows you to run commands against EDB Postgres Distributed clusters. It may be installed automatically on systems within a TPA-deployed PGD cluster or it can be installed manually on systems that can connect to any PGD cluster, including BigAnimal Distributed High Availability PGD clusters or PGD clusters deployed using the EDB PGD for Kubernetes operator.
+The EDB Postgres Distributed Command Line Interface (PGD CLI) is a tool for managing your EDB Postgres Distributed cluster. It allows you to run commands against EDB Postgres Distributed clusters. It's installed automatically on systems in a TPA-deployed PGD cluster, or you can install it manually on systems that can connect to any PGD cluster, such as EDB BigAnimal Distributed High Availability clusters or PGD clusters deployed using the EDB PGD for Kubernetes operator.
 
-See [Installing PGD CLI](installing_cli) for information about how to install PGD CLI, both automatically with Trusted Postgres Architect and manually.
+See [Installing PGD CLI](installing_cli) for information about how to install PGD CLI manually.
 
 See [Using PGD CLI](using_cli) for an introduction to using the PGD CLI and connecting to your PGD cluster.
 
@@ -23,5 +23,5 @@ See [Configuring PGD CLI](configuring_cli) for details on creating persistent co
 
 See the [Command reference](command_ref) for the available commands to inspect, manage, and get information about cluster resources.
 
-There is also a guide to [discovering connection strings](discover_connections). It shows how to obtain the correct connection strings for your PGD-powered deployment.
+See [Discovering connection strings](discover_connections) to learn how to obtain the correct connection strings for your PGD-powered deployment.
 
diff --git a/product_docs/docs/pgd/5/cli/installing_cli.mdx b/product_docs/docs/pgd/5/cli/installing_cli.mdx
index 2ec709cf331..c72e436f58b 100644
--- a/product_docs/docs/pgd/5/cli/installing_cli.mdx
+++ b/product_docs/docs/pgd/5/cli/installing_cli.mdx
@@ -3,14 +3,15 @@ title: "Installing PGD CLI"
 navTitle: "Installing PGD CLI"
 ---
 
-PGD CLI can be installed on any system which is able to connect to the PGD cluster. You will require a user with PGD superuser privileges - the [bdr_superuser role](../security) - or equivalent (e.g. edb_admin on BigAnimal distributed high-availability) to use PGD CLI.
+You can install PGD CLI on any system that can connect to the PGD cluster. To use PGD CLI, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is `edb_admin` on an EDB BigAnimal Distributed High Availability cluster.
 
 ## Installing automatically with Trusted Postgres Architect (TPA)
+
 By default, Trusted Postgres Architect installs and configures PGD CLI on each PGD node. If you want to install PGD CLI on any non-PGD instance in the cluster, attach the pgdcli role to that instance in Trusted Postgres Architect's configuration file before deploying. See [Trusted Postgres Architect](/tpa/latest/) for more information.
 
 ## Installing manually on Linux
 
-PGD CLI is installable from the EDB Repositories. These repositories require a token to enable downloads from them. You will need to login to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) to obtain your token. Then execute the following command, substituting
+PGD CLI is installable from the EDB repositories. These repositories require a token to enable downloads from them. Log in to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) to obtain your token. Then execute the command shown for your operating system, substituting your token for ``.
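+
+For example, on Debian or Ubuntu you can keep the token in a shell variable and substitute it into the setup command shown in the next section. This is only a sketch: `EDB_SUBSCRIPTION_TOKEN` is an illustrative variable name, and `<your-token>` stands for the token you copied from EDB Repos 2.0.
+
+```bash
+# Illustrative only: export the token once, then reference it in the repository setup command
+export EDB_SUBSCRIPTION_TOKEN=<your-token>
+curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash
+```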
 
 ### Add repository and install PGD CLI on Debian or Ubuntu
 
 ```bash
 curl -1sLf 'https://downloads.enterprisedb.com//postgres_distributed/setup.deb.sh' | sudo -E bash
 sudo apt-get install edb-pgd5-cli
 ```
 
-### Add repository and install PGD CLI on RHEL, Rocky, AlmaLinux or Oracle Linux
+### Add repository and install PGD CLI on RHEL, Rocky, AlmaLinux, or Oracle Linux
 
 ```bash
 curl -1sLf 'https://downloads.enterprisedb.com//postgres_distributed/setup.rpm.sh' | sudo -E bash
 sudo yum install edb-pgd5-cli
 ```
 
 [Next: Using PGD CLI](using_cli)
-
-
diff --git a/product_docs/docs/pgd/5/cli/using_cli.mdx b/product_docs/docs/pgd/5/cli/using_cli.mdx
index 3ea54d77ce3..ee992a9c828 100644
--- a/product_docs/docs/pgd/5/cli/using_cli.mdx
+++ b/product_docs/docs/pgd/5/cli/using_cli.mdx
@@ -3,27 +3,27 @@ title: "Using PGD CLI"
 navTitle: "Using PGD CLI"
 ---
 
-## What is the PGD CLI
+## What is the PGD CLI?
 
-The PGD CLI is a convenient way to connect to and manage your PGD cluster. You will need the credentials of a Postgres users with PGD superuser privileges - the [bdr_superuser role](../security) - or equivalent (e.g. edb_admin on BigAnimal distributed high availability) to use it.
+The PGD CLI is a convenient way to connect to and manage your PGD cluster. To use it, you need a user with PGD superuser privileges or equivalent. The PGD user with superuser privileges is the [bdr_superuser role](../security). An example of an equivalent user is `edb_admin` on an EDB BigAnimal Distributed High Availability cluster.
 
 !!! Important Setting passwords
-PGD CLI does not interactively prompt for your user's password. You must pass your password using one of the following methods:
+PGD CLI doesn't interactively prompt for your password. You must pass your password using one of the following methods:
 
- 1. Adding an entry to your [`.pgpass` password file](https://www.postgresql.org/docs/current/libpq-pgpass.html) which includes the host, port, database name, user name, and password.
- 1. Setting the password in the `PGPASSWORD` environment variable.
- 1. Including the password in the connection string.
+ - Adding an entry to your [`.pgpass` password file](https://www.postgresql.org/docs/current/libpq-pgpass.html), which includes the host, port, database name, user name, and password.
+ - Setting the password in the `PGPASSWORD` environment variable.
+ - Including the password in the connection string.
 
-We recommend the first option, as the other options don't scale well with multiple databases or compromise password confidentiality.
+We recommend the first option, as the other options don't scale well with multiple databases, or they compromise password confidentiality.
 !!!
 
 ## Running the PGD CLI
 
-Once you have [installed pgd-cli](installing_cli), run the `pgd` command to access the PGD command line interface. The `pgd` command will need details of which host, port, and database to connect to, along with your username and password.
+Once you have [installed pgd-cli](installing_cli), run the `pgd` command to access the PGD command line interface. The `pgd` command needs details about the host, port, and database to connect to, along with your username and password.
 
 ## Passing a database connection string
 
-Use the `--dsn` flag to pass a database connection string to the `pgd` command. You don't need a configuration file when you pass the connection string with the `--dsn` flag. The flag takes precedence even if a configuration file is present. For example:
+Use the `--dsn` flag to pass a database connection string to the `pgd` command. When you pass the connection string with the `--dsn` flag, you don't need a configuration file. The flag takes precedence even if a configuration file is present. For example:
 
 ```sh
 pgd show-nodes --dsn "host=bdr-a1 port=5432 dbname=bdrdb user=enterprisedb"
 ```
 
@@ -33,7 +33,7 @@ See [pgd](command_ref) in the command reference for a description of the command
 
 ## Specifying a configuration file
 
-If a `pgd-cli-config.yml` file is in `/etc/edb/pgd-cli` or `$HOME/.edb/pgd-cli`, `pgd` will automatically use it. You can override
+If a `pgd-cli-config.yml` file is in `/etc/edb/pgd-cli` or `$HOME/.edb/pgd-cli`, `pgd` uses it. You can override
 this behavior using the optional `-f` or `--config-file` flag. For example:
 
 ```sh
@@ -75,13 +75,13 @@ pgd show-nodes -o json
 ]
 ```
 
-The PGD CLI supports the following output formats:
+The PGD CLI supports the following output formats.
 
 | Setting | Format | Considerations |
 | ------- | ------ | --------- |
 | none | Tabular | Default format. This setting presents the data in tabular form.|
 | `json` | JSON | Presents the raw data with no formatting. For some commands, the JSON output might show more data than the tabular output, such as extra fields and more detailed messages. |
-| `yaml` | YAML |Similar to the JSON output, but as YAML and with the fields ordered alphabetically. Experimental and may not be fully supported in future versions. |
+| `yaml` | YAML | Similar to the JSON output but as YAML and with the fields ordered alphabetically. Experimental and might not be fully supported in future versions. |
 
 ## Accessing the command line help
 
diff --git a/product_docs/docs/pgd/5/limitations.mdx b/product_docs/docs/pgd/5/limitations.mdx
index 87c3db87415..2ea6a437a25 100644
--- a/product_docs/docs/pgd/5/limitations.mdx
+++ b/product_docs/docs/pgd/5/limitations.mdx
@@ -108,28 +108,26 @@ Be sure to disable transaction streaming when planning to use CAMO. You can configure
 this option globally or in the PGD node group. See
 [Transaction streaming configuration](../transaction-streaming#configuration).
 
-- Not all DDL can run when CAMO is used. If unsupported DDL is used a warning is logged
-and the transactions commit scope is set to local only. The only supported DDL operations are:
-  - non-concurrent CREATE INDEX
-  - non-concurrent DROP INDEX
-  - non-concurrent REINDEX of an individual table or index
-  - CLUSTER (of a single relation or index only)
-  - ANALYZE
-  - TRUNCATE
+- Not all DDL can run when you use CAMO. If you use unsupported DDL, a warning is logged, and the transaction's commit scope is set to local only. The only supported DDL operations are:
+  - non-concurrent `CREATE INDEX`
+  - non-concurrent `DROP INDEX`
+  - non-concurrent `REINDEX` of an individual table or index
+  - `CLUSTER` (of a single relation or index only)
+  - `ANALYZE`
+  - `TRUNCATE`
 
 ## Group Commit
 
 [Group Commit](durability/group-commit) is a feature which enables configurable synchronous commits over
 nodes in a group. If you use this feature, take the following limitations into account:
 
-- Not all DDL can run when Group Commit is used. If unsupported DDL is used a warning is logged
-and the transactions commit scope is set to local only. The only supported DDL operations are:
-  - non-concurrent CREATE INDEX
-  - non-concurrent DROP INDEX
-  - non-concurrent REINDEX of an individual table or index
-  - CLUSTER (of a single relation or index only)
-  - ANALYZE
-  - TRUNCATE
+- Not all DDL can run when you use Group Commit. If you use unsupported DDL, a warning is logged, and the transaction's commit scope is set to local only. The only supported DDL operations are:
+  - non-concurrent `CREATE INDEX`
+  - non-concurrent `DROP INDEX`
+  - non-concurrent `REINDEX` of an individual table or index
+  - `CLUSTER` (of a single relation or index only)
+  - `ANALYZE`
+  - `TRUNCATE`
 
 ## Eager