
Commit

Applying @kelpoole suggestions and correcting pgbench/pgd_bench confusion
djw-m authored Sep 8, 2023
1 parent f1cd898 commit b7ae79e
Showing 2 changed files with 12 additions and 12 deletions.
16 changes: 8 additions & 8 deletions product_docs/docs/pgd/5/reference/testingandtuning.mdx
@@ -7,11 +7,11 @@ indexdepth: 2
EDB Postgres Distributed has tools that help with testing and tuning your PGD clusters. For background, see [Testing and tuning](../testingandtuning).


- ## pgd_bench
+ ## `pgd_bench`

### Synopsis

- A benchmarking tool for PGD-enhanced PostgreSQL.
+ A benchmarking tool for EDB Postgres Distributed deployments.

```shell
pgd_bench [OPTION]... [DBNAME] [DBNAME2]
@@ -39,26 +39,26 @@ The mode can be set to `regular`, `camo`, or `failover`. The default is `regular`.

When using `-m failover`, an additional option `--retry` is available. This option
instructs pgd_bench to retry transactions when there's a failover. The `--retry`
- option is enabled with `-m camo`.
+ option is automatically enabled when `-m camo` is used.
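
A minimal sketch of a failover-mode run, assuming a two-node cluster; the hostnames and database name here are illustrative placeholders, not taken from the document:

```shell
# Benchmark in failover mode, retrying transactions that are
# interrupted by a failover (--retry). The connection strings
# are hypothetical placeholders.
pgd_bench -m failover --retry \
  "host=node1 dbname=bdrdb" "host=node2 dbname=bdrdb"
```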

#### Setting GUC variables

`-o` or `--set-option`

This option is followed by `NAME=VALUE` entries, which are applied using the
- PostgreSQL [`SET`](https://www.postgresql.org/docs/current/sql-set.html) command on each server that pgd_bench connects to, and only those servers.
+ Postgres [`SET`](https://www.postgresql.org/docs/current/sql-set.html) command on each server that pgd_bench connects to, and only those servers.
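
A hedged example of `--set-option`; the GUC value, run duration, and connection string are assumptions for illustration:

```shell
# Apply a NAME=VALUE setting via SET on each server that
# pgd_bench connects to, then run a 60-second benchmark.
# The GUC, duration, and connection string are hypothetical.
pgd_bench -o 'synchronous_commit=off' -T 60 "host=node1 dbname=bdrdb"
```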

- The other options are identical to the PostgreSQL pgd_bench command. For
+ The other options are identical to the Postgres pgbench command. For
details, see the PostgreSQL
- [pgd_bench](https://www.postgresql.org/docs/current/pgbench.html) documentation.
+ [pgbench](https://www.postgresql.org/docs/current/pgbench.html) documentation.

The complete list of options (pgd_bench and pgbench) follows.

#### Initialization options
- `-i, --initialize` — Invoke initialization mode.
- `-I, --init-steps=[dtgGvpf]+` (default `"dtgvp"`) — Run selected initialization steps.
- - `d` — Drop any existing pgd_bench tables.
- - `t` — Create the tables used by the standard pgd_bench scenario.
+ - `d` — Drop any existing pgbench tables.
+ - `t` — Create the tables used by the standard pgbench scenario.
- `g` — Generate data client-side and load it into the standard tables, replacing any data already present.
- `G` — Generate data server-side and load it into the standard tables, replacing any data already present.
- `v` — Invoke `VACUUM` on the standard tables.
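As a sketch of the initialization steps above, assuming the default step order and a placeholder database name:

```shell
# Initialization mode with the default steps "dtgvp":
# drop old tables (d), create tables (t), generate data
# client-side (g), VACUUM (v), create primary keys (p).
# The database name is a hypothetical placeholder.
pgd_bench -i -I dtgvp bdrdb
```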
8 changes: 4 additions & 4 deletions product_docs/docs/pgd/5/testingandtuning.mdx
@@ -45,7 +45,7 @@ Key differences include:
pgbench scenario to prevent global lock timeouts in certain cases.
- `VACUUM` command in the standard scenario is executed on all nodes.
- pgd_bench releases are tied to the releases of the BDR extension
- and are built against the corresponding PostgreSQL flavor. This is
+ and are built against the corresponding Postgres distribution. This is
reflected in the output of the `--version` flag.
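
Since the build information is reported by `--version`, one quick check (any output shown by the tool is not reproduced here):

```shell
# Print the pgd_bench version, which reflects the BDR extension
# release and the Postgres distribution it was built against.
pgd_bench --version
```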

The current version allows you to run failover tests while using CAMO or
@@ -105,12 +105,12 @@ responsibility to suppress them by setting appropriate variables, such as `clien

## Performance testing and tuning

- PGD allows you to issue write transactions onto multiple master nodes. Bringing
+ PGD allows you to issue write transactions onto multiple leader nodes. Bringing
those writes back together onto each node has a performance cost.

- First, replaying changes from another node has a CPU cost an an I/O cost,
+ First, replaying changes from another node has a CPU cost, an I/O cost,
and it generates WAL records. The resource use is usually less
- than in the original transaction since CPU overheads are lower as a result
+ than in the original transaction since CPU overhead is lower as a result
of not needing to reexecute SQL. In the case of UPDATE and DELETE
transactions, there might be I/O costs on replay if data isn't cached.

