From 5ae0ebb4cef1d8810f7594801856b3b07d3fc24f Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Fri, 8 Sep 2023 10:20:26 +0100
Subject: [PATCH] Applying @kelpoole suggestions and correcting pgbench/pgd_bench confusion

---
 .../docs/pgd/5/reference/testingandtuning.mdx | 16 ++++++++--------
 product_docs/docs/pgd/5/testingandtuning.mdx  |  8 ++++----
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/product_docs/docs/pgd/5/reference/testingandtuning.mdx b/product_docs/docs/pgd/5/reference/testingandtuning.mdx
index 34fcd63291d..60575bc2949 100644
--- a/product_docs/docs/pgd/5/reference/testingandtuning.mdx
+++ b/product_docs/docs/pgd/5/reference/testingandtuning.mdx
@@ -7,11 +7,11 @@ indexdepth: 2
 EDB Postgres Distributed has tools that help with testing and tuning your PGD
 clusters. For background, see [Testing and tuning](../testingandtuning).
 
-## pgd_bench
+## `pgd_bench`
 
 ### Synopsis
 
-A benchmarking tool for PGD-enhanced PostgreSQL.
+A benchmarking tool for EDB Postgres Distributed deployments.
 
 ```shell
 pgd_bench [OPTION]... [DBNAME] [DBNAME2]
@@ -39,26 +39,26 @@ The mode can be set to `regular`, `camo`, or `failover`. The default is `regular
 When using `-m failover`, an additional option `--retry` is available. This
 option instructs pgd_bench to retry transactions when there's a failover. The `--retry`
-option is enabled with `-m camo`.
+option is automatically enabled when `-m camo` is used.
 
 #### Setting GUC variables
 
 `-o` or `--set-option`
 
 This option is followed by `NAME=VALUE` entries, which are applied using the
-PostgreSQL [`SET`](https://www.postgresql.org/docs/current/sql-set.html) command on each server that pgd_bench connects to, and only those servers.
+Postgres [`SET`](https://www.postgresql.org/docs/current/sql-set.html) command on each server that pgd_bench connects to, and only those servers.
 
-The other options are identical to the PostgreSQL pgd_bench command. For
+The other options are identical to the Postgres pgbench command. For
 details, see the PostgreSQL
-[pgd_bench](https://www.postgresql.org/docs/current/pgbench.html) documentation.
+[pgbench](https://www.postgresql.org/docs/current/pgbench.html) documentation.
 
 The complete list of options (pgd_bench and pgbench) follow.
 
 #### Initialization options
 
 - `-i, --initialize` — Invoke initialization mode.
 - `-I, --init-steps=[dtgGvpf]+` (default `"dtgvp"`) — Run selected initialization steps.
-  - `d` — Drop any existing pgd_bench tables.
-  - `t` — Create the tables used by the standard pgd_bench scenario.
+  - `d` — Drop any existing pgbench tables.
+  - `t` — Create the tables used by the standard pgbench scenario.
   - `g` — Generate data client-side and load it into the standard tables, replacing any data already present.
   - `G` — Generate data server-side and load it into the standard tables, replacing any data already present.
   - `v` — Invoke `VACUUM` on the standard tables.
diff --git a/product_docs/docs/pgd/5/testingandtuning.mdx b/product_docs/docs/pgd/5/testingandtuning.mdx
index 6cd35efc806..712d56c1d89 100644
--- a/product_docs/docs/pgd/5/testingandtuning.mdx
+++ b/product_docs/docs/pgd/5/testingandtuning.mdx
@@ -45,7 +45,7 @@ Key differences include:
   pgbench scenario to prevent global lock timeouts in certain cases.
 - `VACUUM` command in the standard scenario is executed on all nodes.
 - pgd_bench releases are tied to the releases of the BDR extension
-  and are built against the corresponding PostgreSQL flavor. This is
+  and are built against the corresponding Postgres distribution. This is
   reflected in the output of the `--version` flag.
 
 The current version allows you to run failover tests while using CAMO or
@@ -105,12 +105,12 @@ responsibility to suppress them by setting appropriate variables, such as `clien
 
 ## Performance testing and tuning
 
-PGD allows you to issue write transactions onto multiple master nodes. Bringing
+PGD allows you to issue write transactions onto multiple leader nodes. Bringing
 those writes back together onto each node has a performance cost.
 
-First, replaying changes from another node has a CPU cost an an I/O cost,
+First, replaying changes from another node has a CPU cost, an I/O cost,
 and it generates WAL records. The resource use is usually less
-than in the original transaction since CPU overheads are lower as a result
+than in the original transaction since CPU overhead is lower as a result
 of not needing to reexecute SQL. In the case of UPDATE and DELETE
 transactions, there might be I/O costs on replay if data isn't cached.
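As an illustrative sketch of the options the patched documentation describes (not part of the patch itself), a pgd_bench session might look like the example below. The host names `node1` and `node2`, the database name `bdrdb`, and the `work_mem` setting are placeholder assumptions, not values required by the documentation.

```shell
# Initialization mode: run the default "dtgvp" steps (drop, create tables,
# client-side data generation, vacuum, primary keys) against one node.
pgd_bench -i -I dtgvp "host=node1 dbname=bdrdb"

# Benchmark in CAMO mode against two nodes; --retry is enabled automatically
# with -m camo, and -o applies SET only on the servers pgd_bench connects to.
pgd_bench -m camo -o "work_mem=64MB" \
    "host=node1 dbname=bdrdb" "host=node2 dbname=bdrdb"
```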