From c3467c82c4361b3a2f6fe86a6d57a1ecb3d13b35 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 13 Feb 2024 10:40:11 +0000 Subject: [PATCH 1/4] Import for 23.29 release Signed-off-by: Dj Walker-Morgan --- product_docs/docs/tpa/23/INSTALL-repo.mdx | 56 +++--- product_docs/docs/tpa/23/INSTALL.mdx | 174 +++++++++--------- .../docs/tpa/23/configure-instance.mdx | 85 ++++----- product_docs/docs/tpa/23/configure-source.mdx | 72 ++++---- .../docs/tpa/23/firstclusterdeployment.mdx | 84 +++++---- product_docs/docs/tpa/23/index.mdx | 144 +++++++-------- .../docs/tpa/23/reference/INSTALL-docker.mdx | 34 ++-- .../docs/tpa/23/reference/distributions.mdx | 45 ++--- .../tpa/23/reference/edb_repositories.mdx | 21 +-- product_docs/docs/tpa/23/reference/efm.mdx | 39 ++-- .../docs/tpa/23/reference/git-credentials.mdx | 45 +++-- .../docs/tpa/23/reference/haproxy.mdx | 59 +++--- product_docs/docs/tpa/23/reference/harp.mdx | 118 ++++++------ product_docs/docs/tpa/23/reference/hosts.mdx | 20 +- product_docs/docs/tpa/23/reference/initdb.mdx | 30 +-- .../tpa/23/reference/install_from_source.mdx | 22 +-- .../docs/tpa/23/reference/local-repo.mdx | 94 +++++----- product_docs/docs/tpa/23/reference/locale.mdx | 8 +- .../docs/tpa/23/tpaexec-configure.mdx | 30 ++- 19 files changed, 600 insertions(+), 580 deletions(-) diff --git a/product_docs/docs/tpa/23/INSTALL-repo.mdx b/product_docs/docs/tpa/23/INSTALL-repo.mdx index c9cb0d7646b..e9f688ebadb 100644 --- a/product_docs/docs/tpa/23/INSTALL-repo.mdx +++ b/product_docs/docs/tpa/23/INSTALL-repo.mdx @@ -5,23 +5,23 @@ originalFilePath: INSTALL-repo.md --- -This document explains how to use TPA from a copy of the source code +You can use TPA from a copy of the source code repository. !!! Note - EDB customers must [install TPA from packages](INSTALL/) in - order to receive EDB support for the software. + To receive EDB support for the software, + EDB customers must [install TPA from packages](INSTALL/). To run TPA from source, you must install all of the dependencies -(e.g., Python 3.9+) that the packages would handle for you, or download +(for example, Python 3.9+) that the packages would handle for you. Or, download the source and [run TPA in a Docker container](reference/INSTALL-docker/). -(Either way will work fine on Linux and macOS.) +(Either way works fine on Linux and macOS.) ## Quickstart -First, you must install the various dependencies Python 3, Python -venv, git, openvpn and patch. Installing from EDB repositories would -would install these automatically along with the TPA +First, you must install the various dependencies: Python 3, Python +venv, git, openvpn, and patch. Installing from EDB repositories +installs these for you along with the TPA packages. Before you install TPA, you must install the required packages: @@ -32,35 +32,35 @@ Before you install TPA, you must install the required packages: ## Clone and setup -With prerequisites installed, you can now clone the repository. +After the prerequisites are installed, you can clone the repository: ``` git clone https://github.com/enterprisedb/tpa.git ~/tpa ``` -This creates a `tpa` directory in your home directory. +Cloning creates a `tpa` directory in your home directory. -If you prefer to checkout with ssh use:
+If you prefer to check out with SSH, use: ``` git clone ssh://git@github.com/EnterpriseDB/tpa.git ~/tpa ``` -Add the bin directory, found within in your newly created clone, to your path with: +Add the bin directory to your path. You can find the bin directory in your newly created clone. -`export PATH=$PATH:$HOME/tpa/bin` +Add this line to your `.bashrc` file (or other profile file for your preferred shell): -Add this line to your `.bashrc` file (or other profile file for your preferred shell). +`export PATH=$PATH:$HOME/tpa/bin` -You can now create a working tpa environment by running: +You can now create a working TPA environment by running: `tpaexec setup` -This will create the Python virtual environment that TPA will use in future. All needed packages are installed in this environment. To test this configured correctly, run the following: +This command creates the Python virtual environment that TPA will use in future. All needed packages are installed in this environment. To test whether this was configured correctly, run: `tpaexec selftest` -You now have tpaexec installed. +tpaexec is now installed. ## Dependencies @@ -69,7 +69,9 @@ You now have tpaexec installed. TPA requires Python 3.9 or later, available on most modern distributions. If you don't have it, you can use [pyenv](https://github.com/pyenv/pyenv) to install any version of Python -you like without affecting the system packages. +you like without affecting the system packages. (If you weren't already using pyenv, add `pyenv` to +your PATH in `.bashrc`, and call `eval "$(pyenv init -)"` as described in +the [pyenv documentation](https://github.com/pyenv/pyenv#installation).) ```bash # First, install pyenv and activate it in ~/.bashrc @@ -92,23 +94,19 @@ $ python3 --version 3.9.0 ``` -If you were not already using pyenv, please remember to add `pyenv` to -your PATH in .bashrc and call `eval "$(pyenv init -)"` as described in -the [pyenv documentation](https://github.com/pyenv/pyenv#installation). - ### Virtual environment options -By default, `tpaexec setup` will use the builtin Python 3 `-m venv` -to create a venv under `$TPA_DIR/tpa-venv`, and activate it -automatically whenever `tpaexec` is invoked. +By default, `tpaexec setup` uses the builtin Python 3 `-m venv` +to create a venv under `$TPA_DIR/tpa-venv` and activate it +whenever `tpaexec` is invoked. You can run `tpaexec setup --venv /other/location` to specify a different location for the new venv. -We strongly suggest sticking to the default venv location. If you use a -different location, you must also set the environment variable TPA_VENV -to its location, for example by adding the following line to your -.bashrc (or other shell startup scripts): +However, we strongly suggest leaving the default venv location. If you use a +different location, you must also set the environment variable `TPA_VENV` +to that location. For example, add the following line to your +`.bashrc` or other shell startup scripts: ```bash export TPA_VENV="/other/location" diff --git a/product_docs/docs/tpa/23/INSTALL.mdx b/product_docs/docs/tpa/23/INSTALL.mdx index 2e262af72be..3c3457de6f1 100644 --- a/product_docs/docs/tpa/23/INSTALL.mdx +++ b/product_docs/docs/tpa/23/INSTALL.mdx @@ -6,28 +6,28 @@ originalFilePath: INSTALL.md --- To use TPA, you need to install from packages or source and run the -`tpaexec setup` command. This document explains how to install TPA -packages. 
If you have an EDB subscription plan, and therefore have
-access to the EDB repositories, you should follow these instructions. To
-install TPA from source, please refer to
-[Installing TPA from Source](INSTALL-repo/).
+`tpaexec setup` command. If you have an EDB subscription plan, and therefore have
+access to the EDB repositories, follow these instructions to install TPA packages.
+
+To install TPA from source, see
+[Installing from source](INSTALL-repo/).
 
 See [Distribution support](reference/distributions/) for information
-on what platforms are supported.
+about the platforms that are supported.
 
 !!! Info
-    Please make absolutely sure that your system has the correct
-    date and time set, because various things will fail otherwise. We
-    recommend you use a network time, for example `sudo ntpdate
-    pool.ntp.org`
+    Make absolutely sure that your system has the correct
+    date and time set. Various operations will fail otherwise. We
+    recommend you use a network time service, for example, `sudo ntpdate
+    pool.ntp.org`.
 
-## Quickstart
+## Quick start
 
-Login to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads)
-to obtain your token. Then execute the following command, substituting
-your token for `<your-token>` and replacing `<your-plan>` with
-one of the following according to which EDB plan you are subscribed:
-`enterprise`, `standard`, `community360`, `postgres_distributed`.
+To obtain your token, log in to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads).
+Then execute the following command, substituting
+your token for `<your-token>`. Replace `<your-plan>` with
+one of the following according to the EDB plan you're subscribed to:
+`enterprise`, `standard`, `community360`, or `postgres_distributed`.
 
 #### Add repository and install TPA on Debian or Ubuntu
 
```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/<your-plan>/setup.deb.sh' | sudo -E bash
sudo apt-get install tpaexec
```
 
-#### Add repository and install TPA on RHEL, Rocky, AlmaLinux or Oracle Linux
+#### Add repository and install TPA on RHEL, Rocky, AlmaLinux, or Oracle Linux
 
```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/<your-plan>/setup.rpm.sh' | sudo -E bash
sudo yum install tpaexec
```
 
#### Python environment setup and verification
 
```bash
sudo /opt/EDB/TPA/bin/tpaexec setup
/opt/EDB/TPA/bin/tpaexec selftest
```
 
-More detailed explanations of each step are given below.
+More detailed explanations of each step follow.
 
 ## Where to install TPA
 
-As long as you are using a supported platform, TPA can be installed and
-run from your workstation. This is fine for learning, local testing or
-demonstration purposes. TPA supports [deploying to Docker containers](platform-docker/)
-should you wish to perform a complete deployment on your own workstation.
+As long as you're using a supported platform, you can install and run TPA
+from your workstation. This approach is fine for learning, local testing, or
+demonstration purposes. If you want to perform a complete deployment on your
+own workstation, TPA supports [deploying to Docker containers](platform-docker/).
 
-For production use, we recommend running TPA on a dedicated, persistent
+For production use, we recommend running TPA on a dedicated persistent
 virtual machine. We recommend this because it ensures that the cluster
 directories are retained and available to your team for future cluster
-management or update. It also means you only have to update one copy of
-TPA and you only need to provide network access from a single TPA host
+management or update. It also means you have to update only one copy of
+TPA and you need to provide network access only from a single TPA host
 to the target instances.
 
 ## Installing TPA packages
 
 To install TPA, you must first subscribe to an EDB repository that
 provides it. The preferred source for repositories is EDB Repos 2.0.
 
-Login to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads)
-to obtain your token. Then execute the following command, substituting
-your token for `<your-token>` and replacing `<your-plan>` with
-one of the following according to which EDB plan you are subscribed:
-`enterprise`, `standard`, `community360`, `postgres_distributed`.
+To obtain your token, log in to [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads).
+Then execute the following command, substituting
+your token for `<your-token>`. Replace `<your-plan>` with
+one of the following according to the EDB plan you're subscribed to:
+`enterprise`, `standard`, `community360`, or `postgres_distributed`.
 
 #### Add repository on Debian or Ubuntu
 
```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/<your-plan>/setup.deb.sh' | sudo -E bash
```
 
#### Add repository on RHEL, Rocky, AlmaLinux, or Oracle Linux
 
```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/<your-plan>/setup.rpm.sh' | sudo -E bash
```
 
-Alternatively, you may obtain TPA from the legacy 2ndQuadrant
-repository. To do so, login to the EDB Customer Support Portal and
-subscribe to the ["products/tpa/release" repository](https://techsupport.enterprisedb.com/software_subscriptions/add/products/tpa/)
-by adding a subscription under Support/Software/Subscriptions,
-and following the instructions to enable the repository on your system.
+Alternatively, you can obtain TPA from the legacy 2ndQuadrant
+repository. To do so, log in to the EDB Customer Support Portal and
+subscribe to the [products/tpa/release repository](https://techsupport.enterprisedb.com/software_subscriptions/add/products/tpa/)
+by adding a subscription under **Support/Software/Subscriptions**.
+Then follow the instructions to enable the repository on your system.
 
-Once you have enabled one of these repositories, you may install TPA
-as follows:
+Once you have enabled one of these repositories, you can install TPA
+as follows.
 
 #### Install on Debian or Ubuntu
 
```bash
sudo apt-get install tpaexec
```
 
-#### Install on RHEL, Rocky, AlmaLinux or Oracle Linux
+#### Install on RHEL, Rocky, AlmaLinux, or Oracle Linux
 
```bash
sudo yum install tpaexec
```
 
-This will install TPA into `/opt/EDB/TPA`. It will also
-ensure that other required packages (e.g., Python 3.9 or later) are
+This command installs TPA into `/opt/EDB/TPA`. It also
+ensures that other required packages (such as Python 3.9 or later) are
 installed.
 
-We mention `sudo` here only to indicate which commands need root
-privileges. You may use any other means to run the commands as root.
+We mention `sudo` here only to indicate the commands that need root
+privileges. You can use any other means to run the commands as root.
 
 ## Setting up the TPA Python environment
 
 Next, run `tpaexec setup` to create an isolated Python environment and
 install the correct versions of all required modules.
 
 !!! Note
-    On Ubuntu versions prior to 20.04, please use `sudo -H tpaexec setup`
-    to avoid subsequent permission errors during `tpaexec configure`
+    On Ubuntu versions prior to 20.04, use `sudo -H tpaexec setup`
+    to avoid subsequent permission errors during `tpaexec configure`.
 
```bash
sudo /opt/EDB/TPA/bin/tpaexec setup
```
 
-You must run this as root because it writes to `/opt/EDB/TPA`,
-but the process will not affect any system-wide Python modules you may
+You must run this command as root because it writes to `/opt/EDB/TPA`,
+but the process doesn't affect any system-wide Python modules you have
 installed (including Ansible).
 
-Add `/opt/EDB/TPA/bin` to the `PATH` of the user who will
-normally run `tpaexec` commands. For example, you could add this to
-your .bashrc or equivalent shell configuration file:
+Add `/opt/EDB/TPA/bin` to the `PATH` of the user who
+normally runs `tpaexec` commands. For example, you can add this to
+your `.bashrc` or equivalent shell configuration file:
 
```bash
export PATH=$PATH:/opt/EDB/TPA/bin
```
 
## Installing TPA without internet or network access (air-gapped)
 
-This section describes how to install TPA onto a server which cannot
+You can install TPA onto a server that can't
 access either the EDB repositories, a Python package index, or both.
-For information on how to use TPA in such an environment, please see
+For information on how to use TPA in such an environment, see
 [Managing clusters in a disconnected or air-gapped
-environment](reference/air-gapped/)
+environment](reference/air-gapped/).
 
### Downloading TPA packages
 
-If you cannot access the EDB repositories directly from the server on
+If you can't access the EDB repositories directly from the server on
 which you need to install TPA, you can download the packages from an
 internet-connected machine and transfer them. There are several ways to
 achieve this.
 
If your internet-connected machine uses the same operating system as
the target, we recommend using `yumdownloader` (RHEL-like) or `apt
download` (Debian-like) to download the packages.
 
-If this is not possible, please contact EDB support and we will provide
+If this approach isn't possible, contact EDB Support, which can provide
 you with a download link or instructions appropriate to your
 subscription.
 
### Installing without access to a Python package index
 
-When you run `tpaexec setup`, it will ordinarily download the Python
+When you run `tpaexec setup`, it ordinarily downloads the Python
 packages from a Python package index. Unless your environment provides a
-different index the default is the official [PyPI](https://pypi.org). If
-no package index is available, you should install the `tpaexec-deps`
-package in the same way your installed `tpaexec`. The `tpaexec-deps`
+different index, the default is the official [PyPI](https://pypi.org). If
+no package index is available, install the `tpaexec-deps`
+package in the same way you installed tpaexec. The `tpaexec-deps`
 package (available from the same repository as tpaexec) bundles
-everything that would have been downloaded, so that they can be
-installed without network access. Just install the package before you
-run `tpaexec setup` and the bundled copies will be used automatically.
+everything that you would have downloaded, so that it can all be
+installed without network access. Install the package before you
+run `tpaexec setup`, and the bundled copies are used automatically.
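+
+For example, on a Debian or Ubuntu system, the air-gapped sequence
+might look like this sketch. (The package file names are illustrative;
+use the package files you actually downloaded and transferred.)
+
+```bash
+# Install tpaexec and its bundled Python dependencies from local files
+sudo apt-get install ./tpaexec_*.deb ./tpaexec-deps_*.deb
+
+# tpaexec setup now uses the bundled copies instead of a package index
+sudo /opt/EDB/TPA/bin/tpaexec setup
+```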
## Verifying your TPA installation
 
-Once you're done with all of the above steps, run the following command
-to verify your local installation:
+After completing the preceding steps,
+verify your local installation:
 
```bash
tpaexec selftest
```
 
-If that command completes without any errors, your TPA installation
+If this command completes without any errors, your TPA installation
 is ready for use.
 
## Upgrading TPA
 
To upgrade to a later release of TPA, you must:
 
-1. Install the latest `tpaexec` package
-2. Install the latest `tpaexec-deps` package (if required; see above)
-3. Run `tpaexec setup` again
+1. Install the latest `tpaexec` package.
+2. Install the latest `tpaexec-deps` package (if required; see [Installing without access to a Python package index](#installing-without-access-to-a-python-package-index)).
+3. Run `tpaexec setup` again.
 
-If you have subscribed to the TPA package repository as described
-above, running `apt-get update && apt-get upgrade` or `yum update`
-should install the latest available versions of these packages. If not,
+If you subscribed to the TPA package repository,
+running `apt-get update && apt-get upgrade` or `yum update`
+installs the latest available versions of these packages. If not,
 you can install the packages by any means available.
 
 We recommend that you run `tpaexec setup` again whenever a new version
-of `tpaexec` is installed. Some new releases may not strictly require
-this, but others will not work without it.
+of `tpaexec` is installed. Some new releases might not strictly require
+this, but others can't work without it.
+
+## Ansible versions
+
+TPA uses Ansible version 8 by default (ansible-core 2.15). You can use
+2ndQuadrant Ansible version 2.9 by passing the `--use-2q-ansible`
+option to `tpaexec setup`, or a different version of community Ansible
+by passing the `--ansible-version` option with a version number
+argument. The available versions are `2.9`, `8`, and `9`.
 
-## Ansible community support
+Ansible 2.9 is now deprecated in TPA, and support for it will be removed
+in a future version. If you are using `--skip-tags`, you need to
+continue to use 2ndQuadrant Ansible 2.9 because of the changes in the
+behavior of this option in community Ansible. An alternative means of
+skipping tasks will be provided in a future TPA version before support
+for 2ndQuadrant Ansible is removed.
 
-TPA now uses the community distribution of ansible by default; you can
-continue to use the 2ndQuadrant/ansible fork by passing the
-`--use-2q-ansible` option to `tpaexec setup`. In a future TPA release,
-support for the 2ndQuadrant ansible fork will be removed.
+Support for Ansible 9 is experimental. It requires Python 3.10 or
+greater, so if you have edb-python 3.9 installed, you must explicitly
+set your Python version when running `tpaexec setup`:
 
-For most users, this makes no difference. However, if you are using
-`--skip-tags` with 2ndQuadrant ansible, be aware that this is not supported
-An alternative means of skipping tasks, compatible with all ansible
-versions, will be provided before support for 2ndQuadrant ansible is
-removed.
+```bash +PYTHON=/usr/bin/python3.10 tpaexec setup --ansible-version 9 +``` diff --git a/product_docs/docs/tpa/23/configure-instance.mdx b/product_docs/docs/tpa/23/configure-instance.mdx index f440e7017e2..6de970b51af 100644 --- a/product_docs/docs/tpa/23/configure-instance.mdx +++ b/product_docs/docs/tpa/23/configure-instance.mdx @@ -4,26 +4,23 @@ originalFilePath: configure-instance.md --- -This page presents an overview of the various controls that TPA -offers to customise the deployment process on cluster instances, with -links to more detailed documentation. +This is an overview of the TPA settings you can use to +customize the deployment process on cluster instances. -Before you dive into the details of deployment, it may be helpful to -read [an overview of configuring a cluster](configure-cluster/) to -understand how cluster and instance variables and the other mechanisms -in config.yml work together to allow you to write a concise, -easy-to-review configuration. +There's also [an overview of configuring a cluster](configure-cluster/), which explains +how to use cluster and instance variables together to write a concise, +easy-to-review `config.yml`. ## System-level configuration The first thing TPA does is to ensure that Python is bootstrapped and ready to execute Ansible modules (a distribution-specific process). -Then it completes various system-level configuration tasks before moving -on to [Postgres configuration](#postgres) below. +Then it completes various system-level configuration tasks before it +[configures Postgres](#postgres). - [Distribution support](reference/distributions/) - [Python environment](reference/python/) (`preferred_python_version`) -- [Environment variables](reference/target_environment/) (e.g., `https_proxy`) +- [Environment variables](reference/target_environment/) (for example, `https_proxy`) ### Package repositories @@ -32,7 +29,7 @@ You can use the to execute tasks before any package repositories are configured. - [Configure YUM repositories](reference/yum_repositories/) - (for RHEL, Rocky and AlmaLinux) + (for RHEL, Rocky, and AlmaLinux) - [Configure APT repositories](reference/apt_repositories/) (for Debian and Ubuntu) @@ -45,9 +42,9 @@ to execute tasks before any package repositories are configured. You can use the [post-repo hook](tpaexec-hooks/#post-repo) -to execute tasks after package repositories have been configured (e.g., -to correct a problem with the repository configuration before installing -any packages). +to execute tasks after package repository configuration. For example, +you can use it to correct a problem with the repository configuration before installing +any packages. ### Package installation @@ -56,15 +53,14 @@ stages throughout the deployment, beginning with a batch of system packages: - [Install non-Postgres packages](reference/packages/) - (e.g., acl, openssl, sysstat) + (for example, acl, openssl, sysstat) -Postgres and other components (e.g., Barman, repmgr, pgbouncer) will be -installed separately according to the cluster configuration; these are -documented in their own sections below. +Postgres and other components (for example, Barman, repmgr, pgbouncer) are +installed separately according to the cluster configuration. See [Other components](#other-components). 
### Other system-level tasks -- [Create and mount filesystems](reference/volumes/) (including RAID, +- [Create and mount file systems](reference/volumes/) (including RAID, LUKS setup) - [Upload artifacts](reference/artifacts/) (files, directories, tar archives) @@ -75,19 +71,18 @@ documented in their own sections below. ## Postgres -Postgres configuration is an extended process that goes hand-in-hand -with setting up other components like repmgr and pgbouncer. It begins -with installing Postgres itself. +Postgres configuration is an extended process that's interleaved with the configuration of +other components like repmgr and pgbouncer. The first step is to install Postgres. ### Version selection Use the [configure options](tpaexec-configure/#software-versions) to -select a Postgres flavour and version, or set `postgres_version` in -config.yml to specify which Postgres major version you want to install. +select a Postgres flavor and version, or set `postgres_version` in +`config.yml` to specify the Postgres major version you want to install. -That's all you really need to do to set up a working cluster. Everything -else on this page is optional. You can control every aspect of the -deployment if you want to, but the defaults are carefully tuned to give +That's all you need to do to set up a working cluster. Everything +else described here is optional. You can control every aspect of the +deployment if you want to, but the defaults are carefully selected to give you a sensible cluster as a starting point. ### Installation @@ -97,7 +92,7 @@ the version of Postgres you selected, along with various extensions, according to the architecture's needs: - [Install Postgres and Postgres-related packages](reference/postgres_installation_method_pkg/) - (e.g., pglogical, BDR, etc.) + (for example, pglogical, BDR, and so on) - [Build and install Postgres and extensions from source](reference/postgres_installation_method_src/) (for development and testing) @@ -119,11 +114,11 @@ cluster configuration with a minimum of effort. You can use the [postgres-config hook](tpaexec-hooks/#postgres-config) -to execute tasks after the Postgres configuration files have been -installed (e.g., to install additional configuration files). +to execute tasks after the Postgres configuration files are +installed (for example, to install additional configuration files). -Once the Postgres configuration is in place, TPA will go on to -install and configure other components such as Barman, repmgr, +Once the Postgres configuration is in place, TPA +installs and configures other components, such as Barman, repmgr, pgbouncer, and haproxy, according to the details of the architecture. ## Other components @@ -136,17 +131,17 @@ pgbouncer, and haproxy, according to the details of the architecture. ### Configuring and starting services -TPA will now install systemd service unit files for each service. -The service for Postgres is named `postgres.service`, and can be started -or stopped with `systemctl start postgres`. +TPA installs systemd service unit files for each service. +The service for Postgres is named `postgres.service`. You can use +`systemctl start postgres` to start it and `systemctl stop postgres` +to stop it. -In the first deployment, the Postgres service will now be started. If -you are running `tpaexec deploy` again, the service may be reloaded or -restarted depending on what configuration changes you may have made. Of -course, if the service is already running and there are no changes, then -it's left alone. 
+If you're deploying a cluster for the first time, TPA starts the Postgres service at this point.
+On an existing cluster, if there are any relevant configuration changes, TPA reloads or restarts
+the Postgres service as appropriate. If there are no changes and Postgres is already running, it
+leaves the service alone. (If Postgres isn't running on an existing cluster, TPA starts it.)
 
-In any case, Postgres will be running at the end of this step.
+In any case, Postgres is running at the end of this step.
 
## After starting Postgres
 
You can use the
[postgres-config-final hook](tpaexec-hooks/#postgres-config-final)
-to execute tasks after the post-startup Postgres configuration has been
-completed (e.g., to perform SQL queries to create objects or load data).
+to execute tasks after the post-startup Postgres configuration is
+complete (for example, to perform SQL queries to create objects or load data).
 
- [Configure BDR](reference/bdr/)
 
You can use the
[post-deploy hook](tpaexec-hooks/#post-deploy)
-to execute tasks after the deployment process has completed.
+to execute tasks after the deployment process is complete.
diff --git a/product_docs/docs/tpa/23/configure-source.mdx b/product_docs/docs/tpa/23/configure-source.mdx
index 7dffe4eb9b0..cd8c88402df 100644
--- a/product_docs/docs/tpa/23/configure-source.mdx
+++ b/product_docs/docs/tpa/23/configure-source.mdx
@@ -6,23 +6,23 @@ originalFilePath: configure-source.md
 
 TPA can build Postgres and other required components from source
 and deploy a cluster with exactly the same configuration as with the default
-packaged installation. This makes it possible to deploy repeatedly from
-source to quickly test changes in a realistic, fully-configured cluster
+packaged installation. This ability makes it possible to deploy repeatedly from
+source to quickly test changes in a realistic, fully configured cluster
 that reproduces every aspect of a particular setup, regardless of
 architecture or platform.
 
 You can even combine packaged installations of certain components with
 source builds of others. For example, you can install Postgres from
-packages and compile pglogical and PGD from source, but package
-dependencies would prevent installing pglogical from source and PGD from
+packages and compile pglogical and PGD from source. However, package
+dependencies prevent installing pglogical from source and PGD from
 packages.
 
 Source builds are meant for use in development, testing, and for
 support operations.
 
-## Quickstart
+## Quick start
 
-Spin up a cluster with 2ndQPostgres, pglogical3, and bdr all built from
+Set up a cluster with 2ndQPostgres, pglogical3, and bdr all built from
 stable branches:
 
```bash
$ tpaexec configure ~/clusters/speedy -a BDR-Always-ON \
  bdr3:REL3_7_STABLE
```
 
-As above, but set up a cluster that builds 2ndQPostgres source code from
-the official git repository and uses the given local work trees to build
-pglogical and BDR. This feature is specific to Docker:
+On Docker clusters, you can also build components from local work trees instead of a remote git repository:
 
```bash
$ tpaexec configure ~/clusters/speedy \
```
 
 After deploying your cluster, you can use
 `tpaexec deploy … --skip-tags build-clean` on subsequent runs to
-reuse build directories. (Otherwise the build directory is emptied
-before starting the build.)
+reuse build directories. Otherwise, the build directory is emptied
+before starting the build.
 
-Read on for a detailed explanation of how to build Postgres, pglogical, BDR, and other components with custom locations and build parameters.
 
## Configuration
 
There are two aspects to configuring source builds.
 
-If you just want a cluster running a particular combination of sources,
+If you want a cluster running a particular combination of sources,
 run `tpaexec configure` to generate a configuration with sensible
 defaults to download, compile, and install the components you select.
-You can build Postgres or Postgres Extended, pglogical, and BDR, and specify
-branch names to build from, as shown in the examples above.
+You can build Postgres or Postgres Extended, pglogical, and BDR and specify
+branch names to build from, as shown in the examples in [Quick start](#quick-start).
 
 The underlying mechanism is capable of much more than the command-line
-options allow. By editing config.yml, you can clone different source
+options allow. By editing `config.yml`, you can clone different source
 repositories, change the build location, specify different configure
 or build parameters, redefine the build commands entirely, and so on.
 You can, therefore, build things other than Postgres, pglogical, and BDR.
 
-The available options are documented here:
+For the available options, see:
 
- [Building Postgres from source](reference/postgres_installation_method_src/)
 
## Local source directories
 
 You can use TPA to provision Docker containers that build Postgres
-and/or extensions from your local source directories instead of from a
+and extensions from your local source directories instead of from a
 Git repository.
 
 Suppose you're using `--install-from-source` to declare what you want
 to build:
 
```bash
$ tpaexec configure ~/clusters/speedy \
…
```
 
-By default, this will clone the known repositories for Postgres Extended,
-pglogical3, and bdr3, check out the given branches, and build them. But
+By default, this command results in a cluster configuration that causes `tpaexec deploy` to clone the known repositories for Postgres Extended,
+pglogical3, and bdr3, check out the given branches, and build them. But
 you can add `--local-source-directories` to specify that you want the
 sources to be taken directly from your host machine instead:
 
```bash
$ tpaexec configure ~/clusters/speedy \
…
```
 
-This configuration will still install Postgres Extended from the repository,
-but it obtains pglogical3 and bdr3 sources from the given directories on
+This configuration installs Postgres Extended from the repository,
+but obtains pglogical3 and bdr3 sources from the given directories on
 the host. These directories are bind-mounted read-only into the Docker
 containers at the same locations where the git repository would have
 been cloned to, and the default (out-of-tree) build proceeds as usual.
 
-If you specify a local source directory for a component, you cannot
-specify a branch to build (cf. `pglogical3:REL3_7_STABLE` vs.
-`pglogical3` for `--install-from-source` in the examples above). The
+If you specify a local source directory for a component, you can't
+specify a branch to build (see `pglogical3:REL3_7_STABLE` versus
+`pglogical3` for `--install-from-source` in the previous examples). The
 source directory is mounted read-only in the containers, so TPA
-cannot do anything to change it—neither `git pull`, nor
-`git checkout`. You get whichever branch you have checked out locally,
-uncommitted changes and all.
+can't use `git pull` or `git checkout` to update it. You get whichever
+branch is checked out locally, including any uncommitted changes.
 
 Using `--local-source-directories` includes a list of Docker volume
-definitions in config.yml:
+definitions in `config.yml`:
 
```yaml
local_source_directories:
```
 
### ccache
 
 TPA installs ccache by default for source builds of all kinds. When
-you are using a Docker cluster with local source directories, by default
+you're using a Docker cluster with local source directories, by default
 a new Docker volume is attached to the cluster's containers to serve as
 a shared ccache directory. This volume is completely isolated from the
-host, and is removed when the cluster is deprovisioned.
+host and is removed when the cluster is deprovisioned.
 
 Use the `--shared-ccache /path/to/host/ccache` configure option to
-specify a longer-lived shared ccache directory. This directory will be
-bind-mounted r/w into the containers, and its contents will be shared
+specify a longer-lived shared ccache directory. This directory is
+bind-mounted read-write into the containers, and its contents are shared
 between the host and the containers.
 
-(By design, there is no way to install binaries compiled on the host
+(By design, there's no way to install binaries compiled on the host
 directly into the containers.)
 
## Rebuilding
 
-After deploying a cluster with components built from source, you can
-rebuild those components quickly without having to rerun `tpaexec
-deploy` by using the `tpaexec rebuild-sources` command. This will run
-`git pull` for any components built from git repositories on the
-containers, and rebuild all components.
+After deploying a cluster with components built from source, run
+`tpaexec rebuild-sources` to quickly rebuild and redeploy just those components.
+This command is faster than running `tpaexec deploy` but doesn't apply any configuration changes.
diff --git a/product_docs/docs/tpa/23/firstclusterdeployment.mdx b/product_docs/docs/tpa/23/firstclusterdeployment.mdx
index 5dcd032f757..8fe92ad91fb 100644
--- a/product_docs/docs/tpa/23/firstclusterdeployment.mdx
+++ b/product_docs/docs/tpa/23/firstclusterdeployment.mdx
@@ -1,45 +1,43 @@
 ---
 navTitle: Tutorial
-title: A First Cluster Deployment
+title: A first cluster deployment
 originalFilePath: firstclusterdeployment.md
 
 ---
 
-In this short tutorial, we are going to work through deploying a simple [M1 architecture](architecture-M1/) deployment onto a local Docker installation. By the end you will have four containers, one primary database, two replicas and a backup node, configured and ready for you to explore.
+In this short tutorial, you work through deploying a simple [M1 architecture](architecture-M1/) cluster onto a local Docker installation. By the end of the tutorial, you will have four containers: a primary database, two replicas, and a backup node, all configured and ready for you to explore.
 
-For this example, we will run TPA on an Ubuntu system, but the considerations are similar for most Linux systems.
+This example runs TPA on an Ubuntu system, but the considerations are similar for most Linux systems.
 
### Installing TPA
 
-If you're an EDB customer, you'll want to follow the [EDB Repo instructions](INSTALL/) which will install the TPA packages straight from EDB's repositories.
+If you're an EDB customer, first follow the [EDB repo instructions](INSTALL/), which install the TPA packages straight from EDB's repositories.
 
-If you are an open source user of TPA, there's [instructions on how to build from the source](INSTALL-repo/) which you can download from Github.com.
-
-Follow those guides and then return here.
+If you're an open source user of TPA, first see the [instructions on how to build from source](INSTALL-repo/), which explain how to download TPA from GitHub.com.
 
### Installing Docker
 
-As we said, We are going to deploy the example deployment onto Docker and unless you already have Docker installed we'll need to set that up.
+This tutorial deploys the example deployment onto Docker. If you don't already have Docker installed, you need to set it up.
 
-On Debian or Ubuntu, install Docker by running:
+To install Docker on Debian or Ubuntu:
 
```
sudo apt update
sudo apt install docker.io
```
 
-For other Linux distributions, consult the [Docker Engine Install page](https://docs.docker.com/engine/install/).
+For other Linux distributions, see [Install Docker Engine](https://docs.docker.com/engine/install/).
 
-You will want to add your user to the docker group with:
+Add your user to the docker group:
 
```
sudo usermod -aG docker <yourusername>
newgrp docker
```
 
-### CgroupVersion
+### Cgroups version
 
-Currently, TPA requires Cgroups Version 1 be configured on your system,
+Currently, TPA requires Cgroups Version 1 to be configured on your system.
 
Run:
 
```
mount | grep cgroup | head -1
```
 
-and if you do not see a reference to `tmpfs` in the output, you'll need to disable cgroups v2.
+If you don't see a reference to `tmpfs` in the output, you need to disable cgroups v2.
 
-Run:
+To make the appropriate changes, run:
 
```
echo 'GRUB_CMDLINE_LINUX=systemd.unified_cgroup_hierarchy=false' | sudo tee /etc/default/grub.d/cgroup.cfg
```
 
-To make the appropriate changes, then update Grub and reboot your system with:
+Then update Grub and reboot your system:
 
```
sudo update-grub
sudo reboot
```
 
 !!! Warning
     Giving a user the ability to speak to the Docker daemon
-    lets them trivially gain root on the Docker host. Only trusted users
-    should have access to the Docker daemon.
+    lets them trivially gain root on the Docker host. Give only trusted users
+    access to the Docker daemon.
 
### Creating a configuration with TPA
 
-The next step in this process is to create a configuration. TPA does most of the work for you through its `configure` command. All you have to do is supply command line flags and options to select, in broad terms, what you want to deploy. Here's our `tpaexec configure` command:
+The next step is to create a configuration. TPA does most of the work for you by way of its `configure` command. All you have to do is supply command-line flags and options to select, in broad terms, what you want to deploy. Here's the `tpaexec configure` command:
 
```
tpaexec configure demo --architecture M1 --platform docker --postgresql 15 --enable-repmgr --no-git
```
 
-This creates a configuration called `demo` which has the [M1 architecture](architecture-M1/). It will therefore have a primary, replica and backup node.
+This command creates a configuration called `demo` that has the [M1 architecture](architecture-M1/). It will therefore have a primary, replica, and backup node.
 
-The `--platform docker` tells TPA that this configuration should be created on a local Docker instance; it will provision all the containers and OS requirements. Other platforms include [AWS](platform-aws), which does the same with Amazon Web Services and [Bare](platform-bare), which skips to operating system provisioning and goes straight to installing software on already configured Linux hosts.
+The `--platform docker` option tells TPA to create this configuration on a local Docker instance. It will provision all the containers and OS requirements. Other platforms include [AWS](platform-aws), which does the same with Amazon Web Services, and [Bare](platform-bare), which skips to operating system provisioning and goes straight to installing software on already configured Linux hosts.
 
-With `--postgresql 15`, we instruct TPA to use Community Postgres, version 15. There are several options here in terms of selecting software, but this is the most straightforward default for open-source users.
+The `--postgresql 15` argument instructs TPA to use community Postgres, version 15. There are several options for selecting software, but this is the most straightforward default for open-source users.
 
-Adding `--enable-repmgr` tells TPA to use configure the deployment to use [Replication Manager](https://www.repmgr.org/) to hand replication and failover.
+Adding `--enable-repmgr` tells TPA to configure the deployment to use [Replication Manager](https://www.repmgr.org/) to handle replication and failover.
 
-Finally, `--no-git` turns off the feature in TPA which allows you to revision control your configuration through git.
+Finally, `--no-git` turns off the feature in TPA that allows you to keep your configuration under revision control using Git.
 
-Run this command, and apparently, nothing will happen on the command line. But you will find a directory called `demo` has been created containing some files including a `config.yml` file which is a blueprint for our new deployment.
+Run this command, which doesn't return anything at the command line when it's complete. However, a directory called `demo` is created that contains some files. These files include `config.yml`, which is a blueprint for the new deployment.
 
## Provisioning the deployment
 
-Now we are ready to create the containers (or virtual machines) on which we will run our new deployment. This can be achieved with the `provision` command. Run:
+Now you're ready to create the containers (or virtual machines) on which to run the new deployment. Use the `provision` command to achieve this:
 
```
tpaexec provision demo
```
 
-You will see TPA work through the various operations needed to prepare for deployment of your configuration.
+You will see TPA work through the various operations needed to prepare to deploy your configuration.
 
## Deploying
 
-Once provisioned, you can move on to deployment. This installs, if needed, operating systems and system packages. It then installs the requested Postgres architecture and performs all the needed configuration.
+Once the containers are provisioned, you can move on to deployment. Deploying installs, if needed, operating systems and system packages. It then installs the requested Postgres architecture and performs all the needed configuration.
 
```
tpaexec deploy demo
```
 
 You will see TPA work through the various operations needed to deploy your configuration.
 
## Testing
 
-You can quickly test your newly deployed configuration using the tpaexec `test` command which will run pgbench on your new database.
+You can quickly test your newly deployed configuration using the tpaexec `test` command. This command runs pgbench on your new database.
 
``` tpaexec test demo @@ -117,13 +115,13 @@ tpaexec test demo ## Connecting -To get to a psql prompt, the simplest route is to log into one of the containers (or VMs or host depending on configuration) using docker or SSH. Run +To get to a psql prompt, the simplest route is to log into one of the containers (or VMs or host, depending on configuration) using Docker or SSH. To ping all the connectable hosts in the deployment, run: ``` tpaexec ping demo ``` -to ping all the connectable hosts in the deployment: You will get output that looks something like: +The output looks something like: ``` $ tpaexec ping demo @@ -145,13 +143,13 @@ uptight | SUCCESS => { } ``` -Select one of the nodes which responded with `SUCCESS`. We shall use `uptake` for this example. +Select one of the nodes that responded with `SUCCESS`. This tutorial uses `uptake`. -If you are only planning on using docker, use the command `docker exec -it uptake /bin/bash`, substituting in the appropriate hostname. +If you're only planning on using Docker, use the command `docker exec -it uptake /bin/bash`, substituting the appropriate hostname. -Another option, that works with all types of TPA deployment is to use SSH. To do that, first change current directory to the created configuration directory. +Another option that works with all types of TPA deployment is to use SSH. To do that, first change current directory to the created configuration directory. -For example, our configuration is called demo, so we go to that directory. In there, we run `ssh -F ssh_config ourhostname` to connect. +For example, the tutorial configuration is called `demo`. Go to that directory and run `ssh -F ssh_config ourhostname` to connect: ``` cd demo @@ -160,9 +158,9 @@ Last login: Wed Sep 6 10:08:01 2023 from 172.17.0.1 [root@uptake ~]# ``` -In both cases, you will be logged in as a root user on the container. +In both cases, you're logged in as a root user on the container. -We can now change user to the `postgres` user using `sudo -iu postgres`. As `postgres` we can run `psql`. TPA has already configured that user with a `.pgpass` file so there's no need to present a password. +You can now change user to the postgres user using `sudo -iu postgres`. As postgres, you can run psql. TPA has already configured that user with a `.pgpass` file, so you don't need to enter a password. ``` [root@uptake ~]# @@ -173,9 +171,9 @@ Type "help" for help. postgres=# ``` -And we are connected to our database. +You're connected to the database. -You can connect from the host system without SSHing into one of the containers. Obtain the IP address of the host you want to connect to from the `ssh_config` file. +You can connect from the host system without using SSH to get into one of the containers. Obtain the IP address of the host you want to connect to from the `ssh_config` file: ``` $ grep "^ *Host" demo/ssh_config @@ -190,9 +188,9 @@ Host uptake HostName 172.17.0.11 ``` -We are going to connect to uptake, so the IP address is 172.17.0.11. +You're going to connect to `uptake`, so the IP address is 172.17.0.11. -You will also need to retrieve the password for the postgres user too. Run `tpaexec show-password demo postgres` to get the stored password from the system. +You also need to retrieve the password for the postgres user. 
Run `tpaexec show-password demo postgres` to get the stored password from the system: ``` tpaexec show-password demo postgres @@ -206,7 +204,7 @@ psql --host 172.17.0.11 -U postgres Password for user postgres: ``` -Enter the password you previously retrieved. +Enter the password you previously retrieved: ``` psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1), server 15.4) @@ -218,4 +216,4 @@ Type "help" for help. postgres=# ``` -You are now connected from the Docker host to Postgres running in one of the TPA deployed Docker containers. +You're now connected from the Docker host to Postgres running in one of the TPA-deployed Docker containers. diff --git a/product_docs/docs/tpa/23/index.mdx b/product_docs/docs/tpa/23/index.mdx index 58e630134c9..aadeeae4528 100644 --- a/product_docs/docs/tpa/23/index.mdx +++ b/product_docs/docs/tpa/23/index.mdx @@ -49,51 +49,51 @@ clusters according to EDB's recommendations. TPA embodies the best practices followed by EDB, informed by many years of hard-earned experience with deploying and supporting Postgres. These -recommendations are as applicable to quick testbed setups as to +recommendations apply to quick testbed setups as well as production environments. ## What can TPA do? TPA is built around a declarative configuration mechanism that you can -use to describe a Postgres cluster, from its topology right down to the +use to describe a Postgres cluster, from its topology to the smallest details of its configuration. Start by running `tpaexec configure` to generate an initial cluster -configuration based on a few high-level choices (e.g., which version of -Postgres to install). The default configuration is ready to use as-is, -but you can edit it to suit your needs (the generated configuration is -just a text file, config.yml). +configuration based on a few high-level choices, such as the Postgres +version to install. The default configuration is ready to use as is, +but you can edit it to suit your needs. (The generated configuration is +a text file, `config.yml`). Using this configuration, TPA can: -1. Provision servers (e.g., AWS EC2 instances or Docker containers) and - any other resources needed to host the cluster (or you can deploy to - existing servers or VMs just by specifying connection details). +1. Provision servers, for example, AWS EC2 instances or Docker containers, and + any other resources needed to host the cluster. Or you can deploy to + existing servers or VMs just by specifying connection details. -2. Configure the operating system (tweak kernel settings, create users +2. Configure the operating system, for example, tweak kernel settings, create users and SSH keys, install packages, define systemd services, set up log - rotation, and so on). + rotation, and so on. -3. Install and configure Postgres and associated components (e.g., PGD, - Barman, pgbouncer, repmgr, and various Postgres extensions). +3. Install and configure Postgres and associated components, such as PGD, + Barman, pgbouncer, repmgr, and various Postgres extensions. 4. Run automated tests on the cluster after deployment. -5. Deploy future changes to your configuration (e.g., changing Postgres +5. Deploy future changes to your configuration, such as changing Postgres settings, installing and upgrading packages, adding new servers, and - so on). + so on. -## How do I use it? +## How do you use it? To use TPA, you need to install it and run the `tpaexec setup` command. Follow the [installation instructions](INSTALL/) for your platform. 
-TPA operates in four distinct stages to bring up a Postgres cluster: +TPA operates in four stages to bring up a Postgres cluster: -- Generate a cluster [configuration](#configuration) -- [Provision](#provisioning) servers (VMs, containers) to host the cluster -- [Deploy](#deployment) software to the provisioned instances -- [Test](#testing) the deployed cluster +- Generate a cluster [configuration](#configuration). +- [Provision](#provisioning) servers (VMs, containers) to host the cluster. +- [Deploy](#deployment) software to the provisioned instances. +- [Test](#testing) the deployed cluster. ```bash # 1. Configuration: decide what kind of cluster you want @@ -114,7 +114,7 @@ TPA operates in four distinct stages to bring up a Postgres cluster: You can run TPA from your laptop, an EC2 instance, or any machine that can reach the cluster's servers over the network. -Here's a [list of capabilities and supported software](reference/tpaexec-support/). +For more information, see [TPA capabilities and supported software](reference/tpaexec-support/). ### Configuration @@ -127,63 +127,63 @@ changes to your cluster](configure-cluster/), both before and after it's created. At this stage, you must select an architecture and a platform for the -cluster. An **architecture** is a recommended layout of servers and +cluster. An *architecture* is a recommended layout of servers and software to set up Postgres for a specific purpose. Examples include -"M1" (Postgres with a primary and streaming replicas) and -"PGD-Always-ON" (EDB Postgres Distributed 5 in an Always On -configuration). A **platform** is a means to host the servers to deploy -any architecture, e.g., AWS, Docker, or bare-metal servers. +M1 (Postgres with a primary and streaming replicas) and +PGD-Always-ON (EDB Postgres Distributed 5 in an Always On +configuration). A *platform* is a means to host the servers to deploy +any architecture, for example, AWS, Docker, or bare-metal servers. ### Provisioning The [`tpaexec provision`](tpaexec-provision/) command creates instances and other resources required by the cluster. -The details of the process depend on the architecture (e.g., M1) and -platform (e.g., AWS) that you selected while configuring the cluster. +The details of the process depend on the architecture (for example, M1) and +platform (for example, AWS) that you selected while configuring the cluster. For example, given AWS access with the necessary privileges, TPA -will provision EC2 instances, VPCs, subnets, routing tables, internet -gateways, security groups, EBS volumes, elastic IPs, etc. +provisions EC2 instances, VPCs, subnets, routing tables, internet +gateways, security groups, EBS volumes, elastic IPs, and so on. -You can also "provision" existing servers by selecting the "bare" +You can also provision existing servers by selecting the bare platform and providing connection details. Whether these are bare metal servers or those provisioned separately on a cloud platform, they can be -used just as if they had been created by TPA. +used as if they were created by TPA. -You are not restricted to a single platform—you can spread your cluster -out across some AWS instances (in multiple regions) and some on-premise -servers, or servers in other data centres, as needed. +You aren't restricted to a single platform. You can spread your cluster +out across some AWS instances (in multiple regions) and some on-premises +servers or servers in other data centers as needed. 
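+
+For example, this sketch configures a cluster for deployment to
+existing servers. (The cluster name and options shown are illustrative;
+with the bare platform, you supply the connection details for each
+server by editing the generated `config.yml`.)
+
+```bash
+tpaexec configure ~/clusters/onprem --architecture M1 \
+  --platform bare --postgresql 15
+```
+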
-At the end of the provisioning stage, you will have the required number +At the end of the provisioning stage, you have the required number of instances with the basic operating system installed, which TPA -can access via SSH (with sudo to root). +can access by way of SSH (with sudo to root). ### Deployment The [`tpaexec deploy`](tpaexec-deploy/) command installs and configures Postgres and other software on the -provisioned servers (which may or may not have been created by TPA; -but it doesn't matter who created them so long as SSH and sudo access is -available). This includes setting up replication, backups, and so on. +provisioned servers. (These servers can be created by TPA, but they don't have to be. +It doesn't matter which application created them as long as SSH and sudo access is +available.) This includes setting up replication, backups, and so on. -At the end of the deployment stage, Postgres will be up and running. +At the end of the deployment stage, Postgres is up and running. ### Testing The [`tpaexec test`](tpaexec-test/) command executes various architecture and platform-specific tests against the deployed cluster to -ensure that it is working as expected. +ensure that it's working as expected. -At the end of the testing stage, you will have a fully-functioning +At the end of the testing stage, you have a fully functioning cluster. ### Incremental changes TPA is carefully designed so that provisioning, deployment, and testing are idempotent. You can run through them, make a change to -config.yml, and run through the process again to deploy the change. If -nothing has changed in the configuration or on the instances, then -rerunning the entire process will not change anything either. +`config.yml`, and run through the process again to deploy the change. If +nothing changed in the configuration or on the instances, then +rerunning the entire process doesn't change anything either. ### Cluster management @@ -196,62 +196,62 @@ safer to manage your cluster than making the changes by hand. TPA supports a [variety of configuration options](configure-instance/), so you can do a lot just by editing -config.yml and re-running provision/deploy/test. If you do need to go -beyond what TPA already supports, you can write +`config.yml` and rerunning provision/deploy/test. If you do need to go +beyond what TPA already supports, you can write: - [Custom commands](reference/tpaexec-commands/), which make it simple to write - playbooks to run on the cluster. Just create + playbooks to run on the cluster. Create `commands/xyz.yml` in your cluster directory, and invoke it - using `tpaexec xyz /path/to/cluster`. Ideal for any management tasks + using `tpaexec xyz /path/to/cluster`. Custom commands are ideal for any management tasks or processes that you need to automate. - [Custom tests](reference/tpaexec-tests/), which augment the builtin tests with in-depth verifications specific to your environment and application. Using `tpaexec test` to run all tests in a uniform, repeatable way - ensures that you will not miss out on anything important, either when - dealing with a crisis, or just during routine cluster management. + ensures that you don't miss out on anything important, either when + dealing with a crisis or during routine cluster management. - [Hook scripts](tpaexec-hooks/), which are invoked during various stages of the deployment. For example, tasks in `hooks/pre-deploy.yml` - will be run before the main deployment; there are many other hooks, - including `post-deploy`. 
This places the full range of Ansible - functionality at your disposal. + are run before the main deployment. There are many other hooks, + including `post-deploy`. Using hook scripts gives you easy access to + the full range of Ansible functionality. ## It's just Postgres TPA can create complex clusters with many features configured, but the result is just Postgres. The installation follows some conventions -designed to make life simpler, but there is no hidden magic or anything +designed to make life simpler, but there's no hidden magic or anything standing in the way between you and the database. You can do everything -on a TPA cluster that you could do on any other Postgres installation. +on a TPA cluster that you can do on any other Postgres installation. ## Versioning in TPA TPA previously used a date-based versioning scheme whereby the major -version was derived from the year. From version 23 we have moved to a -derivative of semantic versioning. For historical reasons, we are not +version was derived from the year. From version 23, we moved to a +derivative of semantic versioning. For historical reasons, we aren't using the full three-part semantic version number. Instead TPA uses a two-part `major.minor` format. The minor version is incremented on every -release, the major version is only incremented where required to comply -with the backward compatibility principle below. +release. The major version is incremented only when required to comply +with the backward compatibility principle that follows. -### Backwards compatibility +### Backward compatibility -A key development principle of TPA is to maintain backwards -compatibility so there is no reason for users to need anything other -than the latest version of TPA. We define backwards compatibility as +A key development principle of TPA is to maintain backward +compatibility so there's no reason for users to need anything other +than the latest version of TPA. We define backward compatibility as follows: -- A config.yml created with TPA X.a will be valid with TPA X.b where - b>=a -- The cluster created from that config.yml will be maintainable and - re-deployable with TPA X.b +- A `config.yml` created with TPA X.a is valid with TPA X.b, where + b>=a. +- The cluster created from that `config.yml` can be maintained and + redeployed with TPA X.b. Therefore, a new major version implies a break in backward compatibility. As such, we aim to avoid releasing major versions and - will only do so in exceptional circumstances. + do so only in exceptional circumstances. ## Getting started Follow the [TPA installation instructions](INSTALL/) for your -system, then [configure your first cluster](tpaexec-configure/). +system. Then [configure your first cluster](tpaexec-configure/). 
diff --git a/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx b/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx index c6ff69ce762..2c5184623a8 100644 --- a/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx +++ b/product_docs/docs/tpa/23/reference/INSTALL-docker.mdx @@ -4,22 +4,22 @@ originalFilePath: INSTALL-docker.md --- -If you are using a system for which there are no [TPA +If you're using a system for which there are no [TPA packages](../INSTALL/) available, and it's difficult to run TPA after [installing from source](../INSTALL-repo/) (for example, because it's not -easy to obtain a working Python 3.9+ interpreter), your last resort may +easy to obtain a working Python 3.9+ interpreter), your last resort might be to build a Docker image and run TPA inside a Docker container. -Please note that you do not need to run TPA in a Docker container in -order to [deploy to Docker containers](../platform-docker/). It's always -preferable to run TPA directly if you can (even on MacOS X). +You don't need to run TPA in a Docker container +to [deploy to Docker containers](../platform-docker/). It's always +preferable to run TPA directly if you can, even on MacOS X. -## Quickstart +## Quick start -You must have Docker installed and working on your system already. +Make sure you have Docker installed and working on your system. -Run the following commands to clone the tpaexec source repository from Github -and build a new Docker image named `tpa/tpaexec`: +To clone the tpaexec source repository from Github +and build a new Docker image named `tpa/tpaexec`, run the following commands: ```bash $ git clone ssh://git@github.com/EnterpriseDB/tpa.git @@ -51,7 +51,7 @@ $ docker run --platform=linux/amd64 --rm -v ~/clusters:/clusters \ ``` You can now run commands like `tpaexec provision /clusters/speedy` at the -container prompt. (When you exit the shell, the container will be removed.) +container prompt. (When you exit the shell, the container is removed.) If you want to provision Docker containers using TPA, you must also allow the container to access the Docker control socket on the host: @@ -62,16 +62,16 @@ $ docker run --platform=linux/amd64 --rm -v ~/clusters:/clusters \ -it tpa/tpaexec:latest ``` -Run `docker ps` within the container to make sure that your connection to the +Run `docker ps` in the container to make sure that your connection to the host Docker daemon is working. ## Installing Docker -Please consult the -[Docker documentation](https://docs.docker.com) if you need help to -[install Docker](https://docs.docker.com/install) and -[get started](https://docs.docker.com/get-started/) with it. +See the +[Docker documentation](https://docs.docker.com) if you need help +[installing Docker](https://docs.docker.com/install) and +[getting started](https://docs.docker.com/get-started/) with it. -On MacOS X, you can [install "Docker Desktop for -Mac"](https://hub.docker.com/editions/community/docker-ce-desktop-mac/) +On MacOS X, you can [install Docker Desktop for +Mac](https://hub.docker.com/editions/community/docker-ce-desktop-mac/) and launch Docker from the application menu. diff --git a/product_docs/docs/tpa/23/reference/distributions.mdx b/product_docs/docs/tpa/23/reference/distributions.mdx index c4baa483295..38d4f207e7f 100644 --- a/product_docs/docs/tpa/23/reference/distributions.mdx +++ b/product_docs/docs/tpa/23/reference/distributions.mdx @@ -5,47 +5,48 @@ originalFilePath: distributions.md --- TPA detects and adapts to the distribution running on each target -instance. 
This page lists platforms which are actively supported and -'legacy distribution' which have previously been supported. Deploying to a -legacy platform is likely to work as long as you have access to the -necessary packages, but this is not considered a supported use of TPA -and is not suitable for production use. +instance. Listed here are the platforms that are actively supported and +legacy distributions that were previously supported. + +Deploying to a legacy platform is likely to work as long as you have access to the +necessary packages. However, this isn't a supported use of TPA +and isn't suitable for production use. Fully supported platforms are supported both as host systems for running TPA and target systems on which TPA deploys the Postgres cluster. ## Debian x86 -- Debian 11/bullseye is fully supported -- Debian 10/buster is fully supported -- Debian 9/stretch is a legacy distribution -- Debian 8/jessie is a legacy distribution +- Debian 11/bullseye is fully supported. +- Debian 10/buster is fully supported. +- Debian 9/stretch is a legacy distribution. +- Debian 8/jessie is a legacy distribution. ## Ubuntu x86 -- Ubuntu 22.04/jammy is fully supported -- Ubuntu 20.04/focal is fully supported -- Ubuntu 18.04/bionic is a legacy distribution -- Ubuntu 16.04/xenial is a legacy distribution +- Ubuntu 22.04/jammy is fully supported. +- Ubuntu 20.04/focal is fully supported. +- Ubuntu 18.04/bionic is a legacy distribution. +- Ubuntu 16.04/xenial is a legacy distribution. ## Oracle Linux -- Oracle Linux 9.x is fully supported (docker only) -- Oracle Linux 8.x is fully supported (docker only) -- Oracle Linux 7.x is fully supported (docker only) +- Oracle Linux 9.x is fully supported (Docker only). +- Oracle Linux 8.x is fully supported (Docker only). +- Oracle Linux 7.x is fully supported (Docker only). ## RedHat x86 -- RHEL/Rocky/AlmaLinux/Oracle Linux 9.x is fully supported (python3 only) -- RHEL/CentOS/Rocky/AlmaLinux 8.x is fully supported (python3 only) -- RHEL/CentOS 7.x is fully supported (python2 only) +- RHEL/Rocky/AlmaLinux/Oracle Linux 9.x is fully supported (Python 3 only). +- RHEL/CentOS/Rocky/AlmaLinux 8.x is fully supported (Python 3 only). +- RHEL/CentOS 7.x is fully supported (Python 2 only). ## SLES -- SLES 15.x is fully supported +- SLES 15.x is fully supported. ## Platform-specific considerations -Some platforms may not work with the legacy distributions mentioned here. -For example, Debian 8 and Ubuntu 16.04 are not available in [Docker +Some platforms might not work with the legacy distributions mentioned here. +For example, Debian 8 and Ubuntu 16.04 aren't available in [Docker containers](../platform-docker/). diff --git a/product_docs/docs/tpa/23/reference/edb_repositories.mdx b/product_docs/docs/tpa/23/reference/edb_repositories.mdx index 61381fe54bc..e2908af267e 100644 --- a/product_docs/docs/tpa/23/reference/edb_repositories.mdx +++ b/product_docs/docs/tpa/23/reference/edb_repositories.mdx @@ -4,11 +4,10 @@ originalFilePath: edb_repositories.md --- -This page explains how to configure EDB Repos 2.0 package repositories -on any system. +You can configure EDB Repos 2.0 package repositories using cluster variables. For more details on the EDB and 2ndQuadrant package sources used by -TPA see [this page](2q_and_edb_repositories/). +TPA, see [How TPA uses 2ndQuadrant and EDB repositories](2q_and_edb_repositories/). 
To specify the complete list of repositories from EDB Repos 2.0 to install on each instance, set `edb_repositories` to a list of EDB @@ -21,16 +20,16 @@ cluster_vars: - postgres_distributed ``` -This example will install the enterprise subscription repository as well -as postgres_distributed giving access to EPAS and PGD5 products. -On Debian or Ubuntu systems, it will use the APT repository and on -RedHat or SLES systems, it will use the rpm repositories, through the yum -or zypper frontends respectively. +This example installs the enterprise subscription repository as well +as postgres_distributed, giving access to EDB Postgres Advanced Server and PGD version 5 products. +On Debian or Ubuntu systems, it uses the apt repository. On +RedHat or SLES systems, it uses the rpm repositories, through the yum +or zypper front ends, respectively. -If any EDB repositories are specified, any 2ndQuadrant repositories -specified will be ignored and no EDB Repos 1.0 will be installed. +If you specify any EDB repositories, any 2ndQuadrant repositories +specified are ignored and no EDB Repos 1.0 are installed. -To use [EDB Repos 2.0](https://www.enterprisedb.com/repos/) you must +To use [EDB Repos 2.0](https://www.enterprisedb.com/repos/), you must `export EDB_SUBSCRIPTION_TOKEN=xxx` before you run tpaexec. You can get your subscription token from [the web interface](https://www.enterprisedb.com/repos-downloads). diff --git a/product_docs/docs/tpa/23/reference/efm.mdx b/product_docs/docs/tpa/23/reference/efm.mdx index ef87afa549a..49ded8377ae 100644 --- a/product_docs/docs/tpa/23/reference/efm.mdx +++ b/product_docs/docs/tpa/23/reference/efm.mdx @@ -4,28 +4,26 @@ originalFilePath: efm.md --- -TPA will install and configure EFM when `failover_manager` is set to +TPA installs and configures EFM when `failover_manager` is set to `efm`. -Note that EFM is only available via EDB's package repositories -and requires a valid subscription. +You need a valid subscription to EDB's package repositories +to obtain EFM packages. ## EFM configuration -TPA will generate `efm.nodes` and `efm.properties` with the appropriate -instance-specific settings, with remaining settings set to the respective -default values. TPA will also place an `efm.notification.sh` script which -basically contains nothing by default and leaves it up to the user to fill it -in however they want. +TPA generates `efm.nodes` and `efm.properties` with the appropriate +instance-specific settings or default values. TPA also installs an `efm.notification.sh` +script, which does nothing by default. You can fill it in however you want. See the [EFM documentation](https://www.enterprisedb.com/docs/efm/latest/) -for more details on EFM configuration. +for more details on configuring EFM. ## efm_conf_settings -You can use `efm_conf_settings` to set any parameters, whether recognised -by TPA or not. Where needed, you need to quote the value exactly as it -would appear in `efm.properties`: +You can use `efm_conf_settings` to set any parameters, whether or not +TPA recognizes them. Where needed, you need to quote the value exactly as +you would if you were editing `efm.properties` manually: ```yaml cluster_vars: @@ -37,20 +35,19 @@ cluster_vars: reconfigure.sync.primary: true ``` -If you make changes to values under `efm_conf_settings`, TPA will always -restart EFM to activate the changes. +If you change `efm_conf_settings`, TPA always +restarts EFM to activate the changes. 
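+
+For example, after changing a value under `efm_conf_settings` in
+`config.yml`, rerun deploy to apply it. (The cluster directory shown
+here is illustrative.)
+
+```bash
+tpaexec deploy ~/clusters/speedy
+```
+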
### EFM witness -TPA will install and configure EFM as witness on instances whose `role` +TPA installs and configures EFM as a witness on instances whose `role` contains `efm-witness`. ### Repmgr -EFM works as a failover manager and therefore TPA will still install -repmgr for setting up postgresql replicas on postgres versions 11 and -below. `repmgrd` i.e. repmgr's daemon remains disabled in this case and -repmgr's only job is to provided replication setup functionality. +EFM doesn't provide a way to create new replicas. TPA uses repmgr to +create replicas for Postgres versions 11 and earlier. Although repmgr +packages are installed in this case, the repmgrd daemon remains disabled when EFM is in use. -For postgres versions 12 and above, any cluster that uses EFM will use -`pg_basebackup` to create standby nodes and not use repmgr in any form. +For Postgres versions 12 and later, any cluster that uses EFM uses +`pg_basebackup` to create standby nodes and doesn't use repmgr in any form. diff --git a/product_docs/docs/tpa/23/reference/git-credentials.mdx b/product_docs/docs/tpa/23/reference/git-credentials.mdx index 77eb22e2afb..7b9f8ef4fc8 100644 --- a/product_docs/docs/tpa/23/reference/git-credentials.mdx +++ b/product_docs/docs/tpa/23/reference/git-credentials.mdx @@ -4,64 +4,63 @@ originalFilePath: git-credentials.md --- -This page explains how to clone Git repositories that require -authentication. - -This may be required when you change `postgres_git_url` -to [install Postgres from source](postgres_installation_method_src/) or -[use `install_from_source`](install_from_source/) to compile and -install extensions. +You can clone Git repositories that require +authentication. If you're +[installing Postgres from source](postgres_installation_method_src/) or +[using `install_from_source`](install_from_source/) to compile and +install extensions, and the source repositories require authentication, +you can use SSH key-based authentication or HTTPS username/password based +authentication to access them with TPA. You have two options to authenticate without writing the credentials to disk on the target instance: - For an `ssh://` repository, you can add an SSH key to your local - ssh-agent. Agent forwarding is enabled by default if you use - `--install-from-source` (`forward_ssh_agent: yes` in config.yml). + SSH agent. Agent forwarding is enabled by default if you use + `--install-from-source` (`forward_ssh_agent: yes` in `config.yml`). - For an `https://` repository, you can `export TPA_GIT_CREDENTIALS=username:token` in your environment before running `tpaexec deploy`. !!! Note - When deploying to Docker on macOS, you should use only `https://` - repository URLs because Docker containers cannot be accessed by ssh - from the host in this environment. + Docker containers on macOS can't use ssh:// URLs because SSH access from + the host to containers doesn't work. https:// repository URLs will work fine. ## SSH key authentication -If you are cloning an SSH repository and have an SSH keypair +If you're cloning an SSH repository and have an SSH key pair (`id_example` and `id_example.pub`), use SSH agent forwarding to authenticate on the target instances: - **You need to run `ssh-agent` locally**. 
If your desktop environment - does not already set this up for you (as most do—`pgrep ssh-agent` + doesn't already set this up for you (as most do: `pgrep ssh-agent` to check if it's running), run `ssh-agent bash` to temporarily start - a new shell with the agent enabled, and run `tpaexec deploy` from + a new shell with the agent enabled. Then run `tpaexec deploy` from that shell. - **Add the required key(s) to the agent** with - `ssh-add /path/to/id_example` (the private key file) + `ssh-add /path/to/id_example` (the private key file). - **Enable SSH agent forwarding** by setting `forward_ssh_agent: yes` - at the top level in config.yml before `tpaexec provision`. (This is + at the top level in `config.yml` before `tpaexec provision`. (This is done by default if you use `--install-from-source`.) -During deployment, any keys you add to your agent will be made available +During deployment, any keys you add to your agent are made available for authentication to remote servers through the forwarded agent connection. Use SSH agent forwarding with caution, preferably with a disposable -keypair generated specifically for this purpose. Users with the +key pair generated specifically for this purpose. Users with the privileges to access the agent's Unix domain socket on the target server can co-opt the agent into impersonating you while authenticating to other servers. ## HTTPS username/password authentication -If you are cloning an HTTPS repository with a username and +If you're cloning an HTTPS repository with a username and authentication token or password, just `export TPA_GIT_CREDENTIALS=username:token` in your environment before -`tpaexec deploy`. During deployment, these credentials will be made -available to any `git clone` or `git pull` tasks (only). They will -not be written to disk on the target instances. +`tpaexec deploy`. During deployment, these credentials are made +available to any `git clone` or `git pull` tasks (only). They aren't +written to disk on the target instances. diff --git a/product_docs/docs/tpa/23/reference/haproxy.mdx b/product_docs/docs/tpa/23/reference/haproxy.mdx index 4a98a51b207..5595dd7c7bc 100644 --- a/product_docs/docs/tpa/23/reference/haproxy.mdx +++ b/product_docs/docs/tpa/23/reference/haproxy.mdx @@ -4,58 +4,59 @@ originalFilePath: haproxy.md --- -TPA will install and configure haproxy on instances whose `role` +TPA installs and configures haproxy on instances whose `role` contains `haproxy`. By default, haproxy listens on `127.0.0.1:5432` for requests forwarded by [`pgbouncer`](pgbouncer/) running on the same instance. You must specify a list of `haproxy_backend_servers` to forward requests to. -TPA will install the latest available version of haproxy by default. -You can install a specific version instead by setting -`haproxy_package_version: 1.9.15*` (for example). +TPA installs the latest available version of haproxy by default. +Set `haproxy_package_version: 1.9.15*` or any valid version specifier +to install a different version. -Note: see limitations of using wildcards in package_version in -[tpaexec-configure](../tpaexec-configure/#known-issue-with-wildcard-use). +!!! Note + See limitations of using wildcards in `package_version` in + [tpaexec-configure](../tpaexec-configure/#known-issue-with-wildcard-use). Haproxy packages are selected according to the type of architecture. -An EDB managed haproxy package may be used but requires a subscription. +You can use an EDB-managed haproxy package, but it requires a subscription. 
Packages from PGDG extras repo can be installed if required. You can set the following variables on any `haproxy` instance. -| Variable | Default value | Description | -| ------------------------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------- | -| `haproxy_bind_address` | 127.0.0.1 | The address haproxy should bind to | -| `haproxy_port` | 5432 (5444 for EPAS) | The TCP port haproxy should listen on | -| `haproxy_read_only_port` | 5433 (5445 for EPAS) | TCP port for read-only load-balancer | -| `haproxy_backend_servers` | None | A list of Postgres instance names | -| `haproxy_maxconn` | `max_connections`×0.9 | The maximum number of connections allowed per backend server; the default is derived from the backend's `max_connections` setting | -| `haproxy_peer_enabled` | True\* | Add known haproxy hosts as `peer` list.
\*`False` if `failover_manager` is harp or patroni. | +| Variable | Default value | Description | +| ------------------------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| `haproxy_bind_address` | 127.0.0.1 | The address for haproxy to bind to. | +| `haproxy_port` | 5432 (5444 for EPAS) | The TCP port for haproxy to listen on. | +| `haproxy_read_only_port` | 5433 (5445 for EPAS) | TCP port for read-only load balancer. | +| `haproxy_backend_servers` | None | A list of Postgres instance names. | +| `haproxy_maxconn` | `max_connections`×0.9 | The maximum number of connections allowed per backend server. The default is derived from the backend's `max_connections` setting. | +| `haproxy_peer_enabled` | True\* | Add known haproxy hosts as `peer` list.
\*`False` if `failover_manager` is harp or patroni. | -## Read-Only load-balancer +## Read-only load balancer Haproxy can be configured to listen on an additional port for read-only -access to the database. At the moment this is only supported with the +access to the database. Currently, this is supported only with the Patroni failover manager. The backend health check determines which -postgres instances are currently acting as replicas and will send -traffic using a roundrobin load balancing algorithm. +Postgres instances are currently acting as replicas and uses +round-robin load balancing to distribute traffic to them. -The read-only load balancer is disabled by default but can be turned on -using the cluster_vars variable -`haproxy_read_only_load_balancer_enabled`. +The read-only load balancer is disabled by default. You can turn it on +by setting `haproxy_read_only_load_balancer_enabled: true`. ## Server options -TPA will generate `/etc/haproxy/haproxy.cfg` with a backend that has +TPA generates `/etc/haproxy/haproxy.cfg` with a backend that has a `default-server` line and one line per backend server. All but the -first one will be marked as "backup" servers. +first one are marked as "backup" servers. -Set `haproxy_default_server_extra_options` to a list of options on the -haproxy instance to add options to the `default-server` line; and set -`haproxy_server_options` to a list of options on the backend server to -add options (which will override the defaults) to the individual server -lines for each backend. +To add options to the `default-server` line, set `haproxy_default_server_extra_options` to a list of options on the +haproxy instance. + +To add options (which override the defaults) to the individual server +lines for each backend, set +`haproxy_server_options` to a list of options on the backend server. ## Example diff --git a/product_docs/docs/tpa/23/reference/harp.mdx b/product_docs/docs/tpa/23/reference/harp.mdx index d1a76df27a0..19317aa372f 100644 --- a/product_docs/docs/tpa/23/reference/harp.mdx +++ b/product_docs/docs/tpa/23/reference/harp.mdx @@ -4,47 +4,47 @@ originalFilePath: harp.md --- -TPA will install and configure HARP when `failover_manager` is set -to `harp`, which is the default for BDR-Always-ON clusters. +TPA installs and configures HARP when `failover_manager` is set +to `harp`. This value is the default for BDR-Always-ON clusters. ## Installing HARP -You must provide the `harp-manager` and `harp-proxy` packages. Please -contact EDB to obtain access to these packages. +You must provide the `harp-manager` and `harp-proxy` packages. +Contact EDB to obtain access to these packages. ## Configuring HARP See the [HARP documentation](https://documentation.enterprisedb.com/harp/release/latest/configuration/) for more details on HARP configuration. -| Variable | Default value | Description | -| --------------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `cluster_name` | \`\` | The name of the cluster. | -| `harp_consensus_protocol` | \`\` | The consensus layer to use (`etcd` or `bdr`) | -| `harp_location` | `location` | The location of this instance (defaults to the `location` parameter) | -| `harp_ready_status_duration` | `10` | Amount of time in seconds the node's readiness status will persist if not refreshed. 
| -| `harp_leader_lease_duration` | `6` | Amount of time in seconds the Lead Master lease will persist if not refreshed. | -| `harp_lease_refresh_interval` | `2000` | Amount of time in milliseconds between refreshes of the Lead Master lease. | -| `harp_dcs_reconnect_interval` | `1000` | The interval, measured in ms, between attempts that a disconnected node tries to reconnect to the DCS. | -| `harp_dcs_priority` | `500` | In the case two nodes have an equal amount of lag and other qualified criteria to take the Lead Master lease, this acts as an additional ranking value to prioritize one node over another. | -| `harp_stop_database_when_fenced` | `false` | Rather than simply removing a node from all possible routing, stop the database on a node when it is fenced. | -| `harp_fenced_node_on_dcs_failure` | `false` | If HARP is unable to reach the DCS then fence the node. | -| `harp_maximum_lag` | `1048576` | Highest allowable variance (in bytes) between last recorded LSN of previous Lead Master and this node before being allowed to take the Lead Master lock. | -| `harp_maximum_camo_lag` | `1048576` | Highest allowable variance (in bytes) between last received LSN and applied LSN between this node and its CAMO partner(s). | -| `harp_camo_enforcement` | `lag_only` | Whether CAMO queue state should be strictly enforced. | -| `harp_use_unix_sock` | `false` | Use unix domain socket for manager database access. | -| `harp_request_timeout` | `250` | Time in milliseconds to allow a query to the DCS to succeed. | -| `harp_watch_poll_interval` | `500` | Milliseconds to sleep between polling DCS. Only applies when `harp_consensus_protocol` is `bdr`. | -| `harp_proxy_timeout` | `1` | Builtin proxy connection timeout, in seconds, to Lead Master. | -| `harp_proxy_keepalive` | `5` | Amount of time builtin proxy will wait on an idle connection to the Lead Master before sending a keepalive ping. | -| `harp_proxy_max_client_conn` | `75` | Maximum number of client connections accepted by harp-proxy (`max_client_conn`) | -| `harp_ssl_password_command` | None | a custom command that should receive the obfuscated sslpassword in the stdin and provide the handled sslpassword via stdout. | -| `harp_db_request_timeout` | `10s` | similar to dcs -> request_timeout, but for connection to the database itself. | +| Variable | Default value | Description | +| --------------------------------- | ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `cluster_name` | \`\` | The name of the cluster. | +| `harp_consensus_protocol` | ` ` | The consensus layer to use (`etcd` or `bdr`). | +| `harp_location` | `location` | The location of this instance (defaults to the `location` parameter). | +| `harp_ready_status_duration` | `10` | Amount of time in seconds the node's readiness status persists if not refreshed. | +| `harp_leader_lease_duration` | `6` | Amount of time in seconds the Lead Master lease persists if not refreshed. | +| `harp_lease_refresh_interval` | `2000` | Amount of time in milliseconds between refreshes of the Lead Master lease. | +| `harp_dcs_reconnect_interval` | `1000` | The interval, measured in ms, between attempts that a disconnected node tries to reconnect to the DCS. 
|
| `harp_dcs_priority`               | `500`         | In the case in which two nodes have an equal amount of lag and other qualified criteria to take the Lead Master lease, acts as an additional ranking value to prioritize one node over another. |
| `harp_stop_database_when_fenced`  | `false`       | Rather than removing a node from all possible routing, stop the database on a node when it's fenced.                                                                                            |
| `harp_fenced_node_on_dcs_failure` | `false`       | If HARP is unable to reach the DCS, then fence the node.                                                                                                                                        |
| `harp_maximum_lag`                | `1048576`     | Highest allowable variance (in bytes) between last recorded LSN of previous Lead Master and this node before being allowed to take the Lead Master lock.                                        |
| `harp_maximum_camo_lag`           | `1048576`     | Highest allowable variance (in bytes) between last received LSN and applied LSN between this node and its CAMO partners.                                                                        |
| `harp_camo_enforcement`           | `lag_only`    | Whether to strictly enforce CAMO queue state.                                                                                                                                                   |
| `harp_use_unix_sock`              | `false`       | Use Unix domain socket for manager database access.                                                                                                                                             |
| `harp_request_timeout`            | `250`         | Time in milliseconds to allow a query to the DCS to succeed.                                                                                                                                    |
| `harp_watch_poll_interval`        | `500`         | Milliseconds to sleep between polling DCS. Applies only when `harp_consensus_protocol` is `bdr`.                                                                                                |
| `harp_proxy_timeout`              | `1`           | Builtin proxy connection timeout, in seconds, to Lead Master.                                                                                                                                   |
| `harp_proxy_keepalive`            | `5`           | Amount of time builtin proxy waits on an idle connection to the Lead Master before sending a keepalive ping.                                                                                    |
| `harp_proxy_max_client_conn`      | `75`          | Maximum number of client connections accepted by harp-proxy (`max_client_conn`).                                                                                                               |
| `harp_ssl_password_command`       | None          | A custom command to receive the obfuscated sslpassword in the stdin and provide the handled sslpassword via stdout.                                                                            |
| `harp_db_request_timeout`         | `10s`         | Similar to dcs -> request_timeout but for connection to the database.                                                                                                                          |
 
 You can use the [harp-config hook](../tpaexec-hooks/#harp-config)
-to execute tasks after the HARP configuration files have been
-installed (e.g., to install additional configuration files).
+to execute tasks after the HARP configuration files are
+installed, for example, to install additional configuration files.
 
 ## Consensus layer
 
@@ -54,14 +54,14 @@ mandatory for the BDR-Always-ON architecture.
 
 ### etcd
 
 If the `--harp-consensus-protocol etcd` option is given to `tpaexec
-configure`, then TPA will set `harp_consensus_protocol` to `etcd`
-in config.yml and give the `etcd` role to a suitable subset of the
+configure`, then TPA sets `harp_consensus_protocol` to `etcd`
+in `config.yml`. It gives the `etcd` role to a suitable subset of the
 instances, depending on your chosen layout.
 
 HARP v2 requires etcd v3.5.0 or later, which is available in the
 products/harp/release package repositories provided by EDB.
 
 You can configure the following parameters for etcd:
 
| Variable         | Default value | Description                                  |
| ---------------- | ------------- | -------------------------------------------- |

### bdr

If the `--harp-consensus-protocol bdr` option is given to `tpaexec
-configure`, then TPA will set `harp_consensus_protocol` to `bdr`
-in config.yml. In this case the existing PGD instances will be used
+configure`, then TPA sets `harp_consensus_protocol` to `bdr`
+in `config.yml`. 
In this case, the existing PGD instances are used for consensus, and no further configuration is required. -## Configuring a separate user for harp proxy +## Configuring a separate user for HARP proxy -If you want harp proxy to use a separate readonly user, you can specify that -by setting `harp_dcs_user: username` under cluster_vars. TPA will use -`harp_dcs_user` setting to create a readonly user and set it up in the DCS +If you want HARP proxy to use a separate read-only user, you can specify that +by setting `harp_dcs_user: username` under `cluster_vars`. TPA uses the +`harp_dcs_user` setting to create a read-only user and set it up in the DCS configuration. -## Configuring a separate user for harp manager +## Configuring a separate user for HARP manager -If you want harp manager to use a separate user, you can specify that by setting `harp_manager_user: username` under `cluster_vars`. TPAexec will use that setting to create a new user and grant it the `bdr_superuser` role. +If you want HARP manager to use a separate user, you can specify that by setting `harp_manager_user: username` under `cluster_vars`. TPA uses that setting to create a new user and grant it the `bdr_superuser` role. ## Custom SSL password command -The command provided by `harp_ssl_password_command` will be used by HARP -to de-obfuscate the `sslpassword` given in connection string. If -`sslpassword` is not present then `harp_ssl_password_command` is -ignored. If `sslpassword` is not obfuscated then -`harp_ssl_password_command` is not required and should not be specified. +The command provided by `harp_ssl_password_command` is used by HARP +to de-obfuscate the `sslpassword` given in the connection string. If +`sslpassword` isn't present, then `harp_ssl_password_command` is +ignored. If `sslpassword` isn't obfuscated, then +`harp_ssl_password_command` isn't required and should not be specified. -## Configuring the harp service +## Configuring the HARP service -You can configure the following parameters for the harp service: +You can configure the following parameters for the HARP service. | Variable | Default value | Description | | --------------------------------- | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `harp_manager_restart_on_failure` | `false` | If `true`, the `harp-manager` service is overridden so it's restarted on failure. The default is `false` to comply with the service installed by the `harp-manager` package. | -## Configuring harp http(s) health probes +## Configuring HARP http(s) health probes -You can enable and configure the http(s) service for harp that will -provide api endpoints to monitor service's health. +You can enable and configure the http(s) service for HARP that +provides API endpoints to monitor the service's health. -| Variable | Default value | Description | -| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | -| `harp_http_options` | `enable: false``secure: false``host: ``port: 8080``probes:`  `timeout: 10s``endpoint: "host= port=<6432> dbname= user="` | Configure the http section of harp config.yml that defines the http(s) api settings. 
|
| Variable            | Default value                                                                                                                                                                | Description                                                                             |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| `harp_http_options` | `enable: false``secure: false``host: ``port: 8080``probes:`  `timeout: 10s``endpoint: "host= port=<6432> dbname= user="` | Configure the http section of HARP `config.yml` that defines the http(s) API settings. |
 
 The variable can contain these keys:
 
@@ -132,10 +132,10 @@ You must ensure that both certificate and key are available at the given
 location on the target node before running `deploy`.
 
 Leave both `cert_file` and `key_file` empty if you want TPA to generate a
-certificate and key for you using a cluster specific CA certificate.
-TPA CA certificate won't be 'well known', you will need to add this certificate
-to the trust store of each machine that will probe the endpoints.
-The CA certificate can be found on the cluster directory on the TPA node at:
+certificate and key for you using a cluster-specific CA certificate.
+The TPA CA certificate isn't "well-known." You need to add this certificate
+to the trust store of each machine that probes the endpoints.
+The CA certificate can be found in the cluster directory on the TPA node, at
 `<cluster_dir>/ssl/CA.crt` after `deploy`.
 
-see harp documentation for more information on the available api endpoints.
+See the [HARP documentation](https://documentation.enterprisedb.com/harp/release/latest/) for more information about the available API endpoints.
diff --git a/product_docs/docs/tpa/23/reference/hosts.mdx b/product_docs/docs/tpa/23/reference/hosts.mdx
index 51e5b4060a0..4067ca1f6ea 100644
--- a/product_docs/docs/tpa/23/reference/hosts.mdx
+++ b/product_docs/docs/tpa/23/reference/hosts.mdx
@@ -4,12 +4,12 @@ originalFilePath: hosts.md
 ---
 
-By default, TPA will add lines to /etc/hosts on the target instances
-with the IP address and hostname(s) of every instance in the cluster, so
-that they can use each other's names for communication within the
-cluster (e.g., in `primary_conninfo` for Postgres).
+By default, TPA adds lines to `/etc/hosts` on the target instances
+with the IP address and hostnames of every instance in the cluster. This
+enables the instances to use each other's names for communication within the
+cluster (for example, in `primary_conninfo` for Postgres).
 
-You can specify a list of `extra_etc_hosts_lines` too:
+You can specify a list of `extra_etc_hosts_lines`, too:
 
 ```yaml
 instances:
@@ -21,9 +21,9 @@ instances:
     - 192.0.2.2 water.example.com
 ```
 
-If you don't want the default entries at all, you can specify the
-complete list of `etc_hosts_lines` for an instance instead, and only
-those lines will be added to /etc/hosts:
+If you don't want any of the default entries, you can specify the
+complete list of `etc_hosts_lines` for an instance instead. Only
+those lines are added to `/etc/hosts`:
 
 ```yaml
 instances:
@@ -36,6 +36,6 @@ instances:
     - 192.0.2.3 base.example.com
 ```
 
-If your /etc/hosts doesn't contain the default entries for instances in
-the cluster, you'll need to ensure the names can be resolved in some
+If your `/etc/hosts` doesn't contain the default entries for instances in
+the cluster, you need to ensure the names can be resolved in some
 other way. 
diff --git a/product_docs/docs/tpa/23/reference/initdb.mdx b/product_docs/docs/tpa/23/reference/initdb.mdx index 87ba860b53b..b8ed2c51eb7 100644 --- a/product_docs/docs/tpa/23/reference/initdb.mdx +++ b/product_docs/docs/tpa/23/reference/initdb.mdx @@ -4,16 +4,16 @@ originalFilePath: initdb.md --- -TPA will first create `postgres_data_dir` if it does not exist, and -ensure it has the correct ownership, permissions, and SELinux context. -Then, unless the directory already contains a `VERSION` file, it will -run `initdb` to initialise `postgres_data_dir`. +TPA first creates `postgres_data_dir` if it doesn't exist and +ensures it has the correct ownership, permissions, and SELinux context. +Then, unless the directory already contains a `VERSION` file, it +runs `initdb` to initialize `postgres_data_dir`. You can use the [pre-initdb hook](../tpaexec-hooks/#pre-initdb) to execute tasks before `postgres_data_dir` is created and `initdb` is -run. If the hook initialises `postgres_data_dir`, TPA will find the -`VERSION` file and realise that it does not need to run `initdb` itself. +run. If the hook initializes `postgres_data_dir`, TPA finds the +`VERSION` file and therefore doesn't run `initdb`. You can optionally set `postgres_initdb_opts` to a list of options to pass to `initdb`: @@ -25,17 +25,17 @@ cluster_vars: - --data-checksums ``` -We recommend always including the `--data-checksums` option (which is -included by default). +We recommend always including the `--data-checksums` option, which is +included by default. -TPA will set `TZ=UTC` in the environment, and set `LC_ALL` to -the `postgres_locale` you specify, when running `initdb`. +When running `initdb`, TPA sets `TZ=UTC` in the environment and sets `LC_ALL` to +the `postgres_locale` you specify. ## Separate configuration directory By default, `postgres_conf_dir` is equal to `postgres_data_dir`, and the -Postgres configuration files (postgresql.conf, pg_ident.conf, -pg_hba.conf, and the include files in conf.d) are created within the -data directory. If you change `postgres_conf_dir`, TPA will move the -generated configuration files to the new location after running -`initdb`. +Postgres configuration files (`postgresql.conf`, `pg_ident.conf`, +`pg_hba.conf`, and the include files in `conf.d`) are created in the +data directory. If you change `postgres_conf_dir`, after running +`initdb`, TPA moves the +generated configuration files to the new location. diff --git a/product_docs/docs/tpa/23/reference/install_from_source.mdx b/product_docs/docs/tpa/23/reference/install_from_source.mdx index f70aa74da3e..ae8b60f0cfd 100644 --- a/product_docs/docs/tpa/23/reference/install_from_source.mdx +++ b/product_docs/docs/tpa/23/reference/install_from_source.mdx @@ -5,7 +5,7 @@ originalFilePath: install_from_source.md --- You can define a list of extensions to build and install from their Git -repositories by setting `install_from_source` in config.yml: +repositories by setting `install_from_source` in `config.yml`: ```yaml cluster_vars: @@ -25,30 +25,30 @@ cluster_vars: VAR: value ``` -TPA will build and install extensions one by one in the order -listed, so you can build extensions that depend on another (such as +TPA builds and installs extensions one by one in the order +listed. So you can build extensions that depend on another (such as pglogical and BDR) by mentioning them in the correct order. Each entry must specify a `name`, `git_repository_url`, and `git_repository_ref` (default: `master`) to build. 
You can use [SSH agent forwarding or an HTTPS username/password](git-credentials/) -to authenticate to the Git repository; and also set +to authenticate to the Git repository. Also set `source_directory`, `build_directory`, `build_environment`, and -`build_commands` as shown above. +`build_commands`, as shown in the example. -Run `tpaexec deploy … --skip-tags build-clean` in order to reuse the -build directory when doing repeated deploys. (Otherwise the old build -directory is emptied before starting the build.) You can also configure +To reuse the build directory when doing repeated deploys, +run `tpaexec deploy … --skip-tags build-clean`. Otherwise the old build +directory is emptied before starting the build. You can also configure [local source directories](../configure-source/#local-source-directories) to speed up your development builds. -Whenever you run a source build, Postgres will be restarted. +Whenever you run a source build, Postgres is restarted. ## Build dependencies -If you're building from source, TPA will ensure that the basic +If you're building from source, TPA ensures that the basic Postgres build dependencies are installed. If you need any additional -packages, mention them in [`packages`](packages/). For example +packages, mention them in [`packages`](packages/). For example: ```yaml cluster_vars: diff --git a/product_docs/docs/tpa/23/reference/local-repo.mdx b/product_docs/docs/tpa/23/reference/local-repo.mdx index 80510daaba2..8823f0d0906 100644 --- a/product_docs/docs/tpa/23/reference/local-repo.mdx +++ b/product_docs/docs/tpa/23/reference/local-repo.mdx @@ -4,17 +4,17 @@ originalFilePath: local-repo.md --- -If you create a local repository within your cluster directory, TPA -will make any packages in the repository available to cluster instances. -This is an easy way to ship extra packages to your cluster. +If you create a local repository in your cluster directory, TPA +makes any packages in the repository available to cluster instances. +This provides an easy way to ship extra packages to your cluster. Optionally, you can also instruct TPA to configure the instances to -use *only* this repository, i.e., disable all others. In this case, you +use *only* this repository, disabling all others. In this case, you must provide *all* packages required during the deployment, starting from basic dependencies like rsync, Python, and so on. -You can create a local repository manually, or have TPA create one for -you. Instructions for both are included below. +You can create a local repository manually or have TPA create one for +you. !!! Note Specific instructions are available for [managing clusters in an @@ -23,9 +23,9 @@ you. Instructions for both are included below. ## Creating a local repository with TPA TPA includes tools to help create such a local repository. Specifically -the `--enable-local-repo` switch can be used with `tpaexec configure` to -create an empty directory structure to be used as a local repository, -and `tpaexec download-packages` populates that structure with the +you can use the `--enable-local-repo` switch with `tpaexec configure` to +create an empty directory structure to use as a local repository. +Use `tpaexec download-packages` to populate that structure with the necessary packages. ### Creating the directory structure @@ -33,21 +33,21 @@ necessary packages. 
 To configure a cluster with a local repository, run:
 
 ```
 tpaexec configure --enable-local-repo …
 ```
 
-This will generate your cluster configuration and create a `local-repo`
-directory and OS-specific subdirectories. See below for [details of the
-layout](#local-repo-layout).
+This command generates your cluster configuration and creates a `local-repo`
+directory and OS-specific subdirectories. See [Local repo layout](#local-repo-layout)
+for details.
 
 ### Populate the repository and generate metadata
 
 Run [`tpaexec download-packages`](tpaexec-download-packages/) to
 download all the packages required by a cluster into the local-repo.
-The resulting repository will contain the full dependency tree of all
+The resulting repository contains the full dependency tree of all
 packages so the entire cluster can be installed from this repository.
-Metadata for the repository will also be created automatically meaning
-it is ready to use immediately.
+Metadata for the repository is also created, which means
+it's ready to use immediately.
 
 ## Creating a local repository manually
 
@@ -55,10 +55,10 @@ it is ready to use immediately.
 
 To create a local repository manually, you must first create an
 appropriate directory structure. When using `--enable-local-repo`,
-TPA will create a `local-repo` directory and OS-specific
-subdirectories within it (e.g., `local-repo/Debian/10`), based on the OS
-you select for the cluster. We recommend that this structure is also
-used for manually created repositories.
+TPA creates a `local-repo` directory and OS-specific
+subdirectories within it (for example, `local-repo/Debian/10`), based on the OS
+you select for the cluster. We recommend that you also use this structure
+for repositories you create manually.
 
 For example, a cluster running RedHat 8 might have the following layout:
 
@@ -70,8 +70,8 @@ local-repo/
 `-- repodata
 ```
 
-For each instance, TPA will look for the following subdirectories of
-`local-repo` in order and use the first one it finds:
+For each instance, TPA looks for the following subdirectories of
+`local-repo` in order and uses the first one it finds:
 
 - `<distribution>/<version>`, e.g., `RedHat/8.5`
 - `<distribution>/<major version>`, e.g., `RedHat/8`
 - `<distribution>`, e.g., `Debian`
 - The `local-repo` directory itself.
 
-If none of these directories exists, of course, TPA will not try to
+If none of these directories exists, TPA doesn't try to
 set up any local repository on target instances.
 
 ## Populating the repository and generating metadata
 
-The steps detailed below must be completed before running
+You must complete the steps that follow before running
 `tpaexec deploy`.
 
 To populate the repository, copy the packages you want to include into
 the appropriate directory. Then generate metadata using the correct
-tool for your system as detailed below.
+tool for your system, as follows.
 
 !!! Note
-    You must generate the metadata on the control node, i.e., the machine
-    where you run tpaexec. TPA will copy the metadata and packages to
+    You must generate the metadata on the control node, that is, the machine
+    where you run tpaexec. TPA copies the metadata and packages to
     target instances.
 
 !!! Note
     You must generate the metadata in the subdirectory that the instance
-    will use, i.e., if you copy packages into `local-repo/Debian/10`, you
+    will use. 
That is, if you copy packages into `local-repo/Debian/10`, you must create the metadata in that directory, not in `local-repo/Debian`. ### Debian/Ubuntu repository metadata @@ -109,7 +109,7 @@ For Debian-based distributions, install the `dpkg-dev` package: $ sudo apt-get update && sudo apt-get install -y dpkg-dev ``` -Now you can use `dpkg-scanpackages` to generate the metadata: +Use `dpkg-scanpackages` to generate the metadata: ```shell $ cd local-repo/Debian/buster @@ -125,7 +125,7 @@ First, install the `createrepo` package: $ sudo yum install -y createrepo ``` -Now you can use `createrepo` to generate the metadata: +Use `createrepo` to generate the metadata: ```shell $ cd local-repo/RedHat/8 @@ -137,32 +137,32 @@ $ createrepo . ### Copying the repository -TPA will use rsync to copy the contents of the repository directory, -including the generated metadata, to a directory on target instances. +TPA uses rsync to copy the contents of the repository directory +to a directory on target instances. The contents include the generated metadata. -If rsync is not already available on an instance, TPA can install it -(i.e., `apt-get install rsync` or `yum install rsync`). However, if you +If rsync isn't already available on an instance, TPA can install it +(that is, `apt-get install rsync` or `yum install rsync`). However, if you have set `use_local_repo_only`, the rsync package must be included in -the local repo. If required, TPA will copy just the rsync package -using scp and install it before copying the rest. +the local repo. If required, TPA copies just the rsync package +using scp and installs it before copying the rest. ### Repository configuration After copying the contents of the local repo to target instances, -TPA will configure the destination directory as a local (i.e., -path-based, rather than URL-based) repository. +TPA configures the destination directory as a local repository, +that is, path based, rather than URL based. If you provide, say, `example.deb` in the repository -directory, running `apt-get install example` will suffice to install it, +directory, running `apt-get install example` is enough to install it, just like any package in any other repository. ### Package installation -TPA configures a repository with the contents that you provide, but -if the same package is available from different repositories, it is up -to the package manager to decide which one to install (usually the -latest, unless you specify a particular version). +TPA configures a repository with the contents that you provide. But +if the same package is available from different repositories, it's up +to the package manager to decide which one to install. Usually it installs the +latest, unless you specify a particular version. -(However, if you set `use_local_repo_only: yes`, TPA will disable -all other package repositories, so that instances can only use the -packages that you provide in `local-repo`.) +However, if you set `use_local_repo_only: yes`, TPA disables +all other package repositories, so that instances can use only the +packages that you provide in `local-repo`. 
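+
+For example, a minimal sketch of the relevant setting (assuming, as the
+`--use-local-repo-only` configure option arranges, that it lives under
+`cluster_vars` in `config.yml`, and that you've already populated
+`local-repo` with every package the deployment needs):
+
+```yaml
+cluster_vars:
+  use_local_repo_only: yes
+```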
diff --git a/product_docs/docs/tpa/23/reference/locale.mdx b/product_docs/docs/tpa/23/reference/locale.mdx
index a24e3996afb..2fe04d02860 100644
--- a/product_docs/docs/tpa/23/reference/locale.mdx
+++ b/product_docs/docs/tpa/23/reference/locale.mdx
@@ -4,20 +4,20 @@ originalFilePath: locale.md
 ---
 
-For some platform images and environments it might be desirable to
+For some platform images and environments, you might want to
 set the region and language settings.
 
-By default, TPAexec will install the `en_US.UTF-8` locale system files.
+By default, tpaexec installs the `en_US.UTF-8` locale system files.
 
 You can set the desired locale in your `config.yml`:
 
 ```yaml
 user_locale: en_GB.UTF-8
 ```
 
-To find supported locales consult the output of the following command:
+To see the supported locales, use the following command:
 
 ```shell
 localectl list-locales
 ```
 
-Or the contents of the file /etc/locales.defs on Debian or Ubuntu.
+Alternatively, on Debian or Ubuntu, look at the contents of the file `/etc/locales.defs`.
diff --git a/product_docs/docs/tpa/23/tpaexec-configure.mdx b/product_docs/docs/tpa/23/tpaexec-configure.mdx
index 0f20d457fb5..ec7ea6375e2 100644
--- a/product_docs/docs/tpa/23/tpaexec-configure.mdx
+++ b/product_docs/docs/tpa/23/tpaexec-configure.mdx
@@ -400,7 +400,7 @@ Use the `--use-ansible-tower` and `--tower-git-repository` options to
 create a cluster adapted for deployment with Ansible Tower. See [Ansible
 Tower](tower/) for details.
 
-## git repository
+## Git repository
 
 By default, a git repository is created with an initial branch named
 after the cluster, and a single commit is made, with the configure
@@ -411,6 +411,34 @@ option. (Note that in an Ansible Tower cluster, a git repository is
 required and will be created later by `tpaexec provision` if it does
 not already exist.)
 
+## Keyring backend for vault password
+
+TPA generates a cluster-specific Ansible vault password. This password
+is used to encrypt other sensitive variables generated for the cluster:
+the postgres user password, the barman user password, and so on.
+
+The `system` keyring backend uses the best keyring backend available
+on your system, chosen from the backends supported by the Python
+keyring module, including gnome-keyring and secret-tool.
+
+By default, new clusters store the vault password in the `system`
+keyring. Removing `keyring_backend: system` from `config.yml`
+**before** any `provision` reverts to the previous default, which
+stores the vault password in a plaintext file.
+
+Using `keyring_backend: system` also generates a `vault_name` entry in
+`config.yml` that holds a unique storage name for the vault password.
+TPA generates a UUID by default, but there are no naming scheme
+requirements.
+
+Note: When using `keyring_backend: system` and reusing the same base
+`config.yml` file for multiple clusters with the same `cluster_name`
+(by copying the config file to a different location), ensure that the
+value pair (`vault_name`, `cluster_name`) is unique for each cluster
+copy.
+
+Note: When using `keyring_backend: system` and moving an already
+provisioned cluster directory to a different TPA host, ensure that you
+export the associated vault password into the new machine's system
+keyring. You can display the vault password with
+`tpaexec show-vault <cluster_dir>`. 
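+
+For example, a hypothetical `config.yml` fragment with these entries,
+as `tpaexec configure` might generate them (the UUID shown is
+illustrative only):
+
+```yaml
+keyring_backend: system
+vault_name: 3b6e1c2a-6f3e-4c2b-9a3d-2f8c1d4e5a6b
+```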
+ ## Examples Let's see what happens when we run the following command: From 513216e001a15ccff49573aeeead3e37c13d84c3 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 13 Feb 2024 11:42:25 +0000 Subject: [PATCH 2/4] Stubs for release notes Signed-off-by: Dj Walker-Morgan --- product_docs/docs/tpa/23/rel_notes/index.mdx | 2 ++ .../docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx | 12 ++++++++++++ 2 files changed, 14 insertions(+) create mode 100644 product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx diff --git a/product_docs/docs/tpa/23/rel_notes/index.mdx b/product_docs/docs/tpa/23/rel_notes/index.mdx index 963d8c8cf2a..7ae4b3aee35 100644 --- a/product_docs/docs/tpa/23/rel_notes/index.mdx +++ b/product_docs/docs/tpa/23/rel_notes/index.mdx @@ -2,6 +2,7 @@ title: Trusted Postgres Architect release notes navTitle: "Release notes" navigation: + - tpa_23.29_rel_notes - tpa_23.28_rel_notes - tpa_23.27_rel_notes - tpa_23.26_rel_notes @@ -26,6 +27,7 @@ The Trusted Postgres Architect documentation describes the latest version of Tru | Version | Release date | | ---------------------------- | ------------ | +| [23.29](tpa_23.29_rel_notes) | XX Feb 2024 | | [23.28](tpa_23.28_rel_notes) | 23 Jan 2024 | | [23.27](tpa_23.27_rel_notes) | 19 Dec 2023 | | [23.26](tpa_23.26_rel_notes) | 30 Nov 2023 | diff --git a/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx new file mode 100644 index 00000000000..bf6910dbf1e --- /dev/null +++ b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx @@ -0,0 +1,12 @@ +--- +title: Trusted Postgres Architect 23.29 release notes +navTitle: "Version 23.29" +--- + +Released: XX Feb 2024 + +New features, enhancements, bug fixes, and other changes in Trusted Postgres Architect 23.29 include the following: + +| Type | Description | +| ---- |------------ | +| Enhancement | Added release notes | From 275a1d34ddd7f8ddc6d0bfd736aaf9cb14c0011a Mon Sep 17 00:00:00 2001 From: Simon Notley <43099400+sonotley@users.noreply.github.com> Date: Tue, 13 Feb 2024 17:19:15 +0000 Subject: [PATCH 3/4] Populate release notes --- product_docs/docs/tpa/23/rel_notes/index.mdx | 2 +- .../docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx | 8 ++++++-- 2 files changed, 7 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/tpa/23/rel_notes/index.mdx b/product_docs/docs/tpa/23/rel_notes/index.mdx index 7ae4b3aee35..47c4fb9caf7 100644 --- a/product_docs/docs/tpa/23/rel_notes/index.mdx +++ b/product_docs/docs/tpa/23/rel_notes/index.mdx @@ -27,7 +27,7 @@ The Trusted Postgres Architect documentation describes the latest version of Tru | Version | Release date | | ---------------------------- | ------------ | -| [23.29](tpa_23.29_rel_notes) | XX Feb 2024 | +| [23.29](tpa_23.29_rel_notes) | 15 Feb 2024 | | [23.28](tpa_23.28_rel_notes) | 23 Jan 2024 | | [23.27](tpa_23.27_rel_notes) | 19 Dec 2023 | | [23.26](tpa_23.26_rel_notes) | 30 Nov 2023 | diff --git a/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx index bf6910dbf1e..70c499caa67 100644 --- a/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx +++ b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx @@ -3,10 +3,14 @@ title: Trusted Postgres Architect 23.29 release notes navTitle: "Version 23.29" --- -Released: XX Feb 2024 +Released: 15 Feb 2024 New features, enhancements, bug fixes, and other changes in Trusted Postgres Architect 23.29 include the following: | Type | Description | | 
---- |------------ |
-| Enhancement | Added release notes |
+| Enhancement | Added support for storing the cluster vault password in the system keyring. This uses the Python keyring module to store the vault password in the supported system keyring when `keyring_backend` is set to `system` (the default for new clusters). This change doesn't affect existing clusters or any clusters that set `keyring_backend` to `legacy` in `config.yml`. |
+| Enhancement | The `--ansible-version` argument to `tpaexec setup` now accepts `8` or `9` as valid Ansible versions, as well as the existing `2q` or `community`, both of which imply Ansible 2.9. The default is now `8`. Support for Ansible 9 is experimental and requires Python 3.10 or later. |
+| Bug Fix | Fixed an issue whereby `edb_repositories` already defined in `config.yml` weren't kept during reconfigure. This fixes the BDR4-to-PGD5 upgrade scenario in air-gapped environments. |
+| Bug Fix | Fixed a bug in the password generation steps for `tpaexec upgrade`. |
+

From d53ec0ccb55dd7e7291e78e64806f512d9ef9111 Mon Sep 17 00:00:00 2001
From: Simon Notley <43099400+sonotley@users.noreply.github.com>
Date: Wed, 14 Feb 2024 16:12:13 +0000
Subject: [PATCH 4/4] Add extra bug fixes to release notes

---
 .../docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx  | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx
index 70c499caa67..7ec9a17484e 100644
--- a/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx
+++ b/product_docs/docs/tpa/23/rel_notes/tpa_23.29_rel_notes.mdx
@@ -11,6 +11,10 @@ New features, enhancements, bug fixes, and other changes in Trusted Postgres Arc
 | ---- |------------ |
 | Enhancement | Added support for storing the cluster vault password in the system keyring. This uses the Python keyring module to store the vault password in the supported system keyring when `keyring_backend` is set to `system` (the default for new clusters). This change doesn't affect existing clusters or any clusters that set `keyring_backend` to `legacy` in `config.yml`. |
 | Enhancement | The `--ansible-version` argument to `tpaexec setup` now accepts `8` or `9` as valid Ansible versions, as well as the existing `2q` or `community`, both of which imply Ansible 2.9. The default is now `8`. Support for Ansible 9 is experimental and requires Python 3.10 or later. |
-| Bug Fix | Fixed an issue whereby `edb_repositories` already defined in `config.yml` weren't kept during reconfigure. This fixes the BDR4-to-PGD5 upgrade scenario in air-gapped environments. |
-| Bug Fix | Fixed a bug in the password generation steps for `tpaexec upgrade`. |
-
+| Bug Fix | Fixed an issue whereby `edb_repositories` already defined in `config.yml` weren't kept during reconfigure. This fixes the BDR4-to-PGD5 upgrade scenario in air-gapped environments. |
+| Bug Fix | TPA's `postgres-monitor` now recognizes the message "the database system is not yet accepting connections" as a recoverable error. |
+| Bug Fix | TPA now correctly skips the `postgres/config/final` role on replicas when upgrading. |
+| Bug Fix | Fixed an issue whereby wildcards in package names weren't respected when using the package downloader on Debian and Ubuntu systems. |
+| Bug Fix | The downloader now runs `apt-get update` before fetching packages on Debian and Ubuntu systems. |
+| Bug Fix | TPA now disables transaction streaming when CAMO is enabled in PGD clusters. |
+| Bug Fix | TPA now correctly configures Barman servers where SELinux is enabled. |
\ No newline at end of file
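
For readers trying out the two `tpaexec setup` changes described in these release notes, here is a minimal sketch of how the new Ansible version selection might be exercised. The flag name and accepted values come from the release note itself; the surrounding workflow assumes a TPA installation with `tpaexec` already on the PATH and is illustrative only:

```bash
# Build the TPA virtual environment against a specific Ansible version.
# Per the 23.29 notes, 8 is now the default; 9 is experimental and
# requires Python 3.10 or later.
tpaexec setup --ansible-version 9

# Confirm the environment was created and configured correctly.
tpaexec selftest
```

Similarly, for the vault password enhancement, the note indicates that opting out of the new system-keyring behavior is a matter of setting `keyring_backend` to `legacy` in the cluster's `config.yml`; new clusters default to `system`, and existing clusters are unaffected.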