diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx new file mode 100644 index 00000000000..6d8dbbe63c8 --- /dev/null +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -0,0 +1,60 @@ +--- +title: Notifications +--- + +With BigAnimal, you can opt to get specific types of notifications and receive both in-app and email notifications. + +Different types of events are sent as notifications. These notifications are set at different levels, and users with different roles can configure these notifications. This table provides the list of events sent as notifications, grouped by the levels at which they can be set: + +| Level | Event | Role | Subscription type | |--------------|--------------------------------------------------------------------------------------------------|----------------------------------|--------------------- | | Organization | Payment method added | Organization owner/admin | Digital self-service | | Organization | Personal access key is expiring | Account owner | All | | Organization | Machine user access key is expiring | Organization owner | All | | Project | Upcoming maintenance upgrade on a cluster (24hr) | Project owner/editor | All | | Project | Successful maintenance upgrade on a cluster | Project owner/editor | All | | Project | Failed maintenance upgrade on a cluster | Project owner/editor | All | | Project | Paused cluster will be automatically reactivated in 24 hours | Project owner/editor | All | | Project | Paused cluster was automatically reactivated | Project owner/editor | All | | Project | You must set up the encryption key permission for your CMK-enabled cluster | Project owner/editor | All | | Project | Key error with CMK-enabled cluster | Project owner/editor | All | | Project | User is invited to a project (displays only to the Project owner) | Project owner
| All | | Project | New role is assigned to you | Account owner | All | | Project | Role is unassigned from you | Account owner | All | | Project | Failed connection to third-party monitoring integration (and future non-monitoring integrations) | Project owner/editor | All | + +!!!note +The All subscription type means Digital self-service, Direct purchase, and Azure Marketplace. For more information, see [subscription types](/biganimal/latest/pricing_and_billing/#payments-and-billing). +!!! + +## Configuring notifications + +Project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose whether to receive notifications in the in-app inbox, by email, or both. They can also configure email notifications for their teams within their organization. + +Project-level notifications are configured within the project. + +Notification settings made by a user apply only to that user. If an email notification is enabled, the email is sent to the email address associated with the user's login. + +## Viewing notifications + +Users in the following roles can view the notifications: +- Organization owners/admins can view the organization-level notifications. +- Project owners/editors can view the project-level notifications. +- Account owners can view their own account-level notifications. + +For users who have multiple roles within BigAnimal, each notification indicates the level and/or project it belongs to. + +Select the bell icon at the top of your BigAnimal portal to view the in-app notifications. From the bell icon, you can read a notification, mark it as unread, or archive it. + +To view the email notifications, check the inbox of your configured email addresses. + +## Manage notifications + +To manage the notifications: +1. Log in to the BigAnimal portal. +1. From the menu under your name in the top right panel, select **My Account**. +1. Select the **Notifications** tab.
Notifications are grouped by the organizations and projects available to you. +1. Select any specific organization/project to manage the notifications. + - Enable/disable the notification for a particular event using the toggle button. + - Select **Email** and **Inbox** next to an event to enable/disable the email and in-app notifications for the event. + diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index 71c4d648bcf..604dc5de0bc 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -78,17 +78,60 @@ Once your identity provider is set up, you can view your connection status, ID, You need a verified domain so your users can have a streamlined login experience with their email address. 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. -1. Copy the TXT record and follow the instructions. -1. Select **Done**. +2. Copy the TXT record and follow the instructions in the on-screen verify box (repeated below) to add it as a TXT record on that domain within your DNS provider's management console. - Your domain and its status appear on the **Domains** tab, where you can delete or verify it. Domains can take up to 48 hours to be verified. + - Log in to your domain registrar or web host account. + - Navigate to the DNS settings for the domain you want to verify. + - Add a TXT record. + - In the Name field, enter @. + - In the Value field, enter the verification string provided, for example: + - "edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku" + - Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. + +3. Select **Done**. + + Your domain and its status appear on the **Domains** tab, where you can delete or verify it.
+ Propagation of the change to the domain record by your DNS provider can take up to 48 hours before you can verify the domain. + +4. If your domain hasn't been verified after a day, you can debug whether your domain has the matching verification text field. + Select **Verify** next to the domain at `/settings/domains` to check the exact value of the required TXT field. + Query your domain directly with DNS tools, such as nslookup, to check whether one of its TXT records is an exact match for the verification string. + Domains can have many TXT fields. As long as one matches, the domain should verify. + +``` +> nslookup -type=TXT mydomain.com + +;; Truncated, retrying in TCP mode. +Server: 192.168.1.1 +Address: 192.168.1.1#53 +Non-authoritative answer: +... +mydomain.com text = "edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku" +``` To add another domain, select **Add Domain**. -When you have at least one verified domain, the identity provider status becomes **Active** on the **Identity Providers** tab. When the domain is no longer verified, the status becomes **Inactive**. +When you have at least one verified domain (with Status = Verified, in green), the identity provider status becomes **Active** on the **Identity Providers** tab. +When the domain is no longer verified, the status becomes **Inactive**. !!! Note - The identity provider status can take up to three minutes to update. + Your DNS provider can take up to 48 hours to update. Once the domain is verified, the identity provider status can take up to three minutes to update. + +### Domain expiry + +The EDB system has a 10-day expiry set for checking whether domains are verified. + +You buy domains from DNS providers by way of a leasing system. If the lease expires, you no longer own the domain, and it disappears from the Internet. +If this happens, you need to renew your domain with your DNS provider.
+ +Whether the domain failed to verify within the 10 days or it expired months later, +it appears as **Status = Expired** (in red). +You can't reinstate an expired domain, +because expiry means you might no longer own the domain. Instead, you need to add and verify it again. + +To delete the domain, select the bin icon. +To re-create it, select **Add Domain**. +Set a new verification key for the domain and update the TXT record for it in your DNS provider's management console, as described in [Add a domain](#add-a-domain). ### Manage roles for added users diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx similarity index 59% rename from product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx rename to product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index fbfa2c9e110..255f6d9a62a 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -2,17 +2,22 @@ title: Known issues with distributed high availability/PGD navTitle: Distributed high availability/PGD known issues deepToC: true +redirects: +- /biganimal/latest/known_issues/known_issues_dha/ --- -These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. +These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. +These known issues are tracked in our ticketing system and are expected to be resolved in a future release.
## Management/administration ### Deleting a PGD data group may not fully reconcile -When deleting a PGD data group, the target group resources is physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD Groups. We recommend avoiding use of this feature until this is fixed and removed from the known issues list. +When deleting a PGD data group, the target group's resources are physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD groups. +We recommend avoiding use of this feature until this is fixed and removed from the known issues list. ### Adjusting PGD cluster architecture may not fully reconcile -In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change hasn't taken effect in 1 hour, reach out to Support. +In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. +If a change hasn't taken effect in 1 hour, reach out to Support. ### PGD cluster may fail to create due to Azure SKU issue In some cases, although a regional quota check may have passed initially when the PGD cluster is created, it may fail if an SKU critical for the witness nodes is unavailable across three availability zones. If you have already encountered this issue, reach out to Azure support: ``` We're going to be provisioning a number of instances of <VM type> in <region> and need to be able to provision these instances in all AZs. Can you please ensure that subscription <subscription ID> is able to provision this VM type in all AZs of <region>. Thank you! ``` +### Changing the default database name is not possible +Currently, the default database for a replicated PGD cluster is `bdrdb`. +This cannot be changed, either at initialization or after the cluster is created.
+ ## Replication ### A PGD replication slot may fail to transition cleanly from disconnect to catch up -As part of fault injection testing with PGD on BigAnimal, you may decide to delete VMs. Your cluster will recover if you do so, as expected. However, if you're testing in a bring-your-own-account (BYOA) deployment, in some cases, as the cluster is recovering, a replication slot may remain disconnected. This will persist for a few hours until the replication slot recovers automatically. +As part of fault injection testing with PGD on BigAnimal, you may decide to delete VMs. +Your cluster will recover if you do so, as expected. +However, if you're testing in a bring-your-own-account (BYOA) deployment, in some cases, as the cluster is recovering, a replication slot may remain disconnected. +This will persist for a few hours until the replication slot recovers automatically. ### Replication speed is slow during a large data migration During a large data migration, when migrating to a PGD cluster, you may experience a replication rate of 20 MBps. ### PGD leadership change on healthy cluster -PGD clusters that are in a healthy state may experience a change in PGD node leadership, potentially resulting in failover. No intervention is needed as a new leader will be appointed. +PGD clusters that are in a healthy state may experience a change in PGD node leadership, potentially resulting in failover. +No intervention is needed as a new leader will be appointed. + +### Extensions which require alternate roles are not supported +Where an extension requires a role other than the default role (`streaming_replica`) used for replication, it will fail when attempting to replicate. +This is because PGD runs replication writer operations as a `SECURITY_RESTRICTED_OPERATION` to mitigate the risk of privilege escalation. +Attempts to install such extensions may cause the cluster to fail to operate. 
## Migration ### Connection interruption disrupts migration via Migration Toolkit -When using Migration Toolkit (MTK), if the session is interrupted, the migration errors out. To resolve, you need to restart the migration from the beginning. The recommended path to avoid this is to migrate on a per-table basis when using MTK so that if this issue does occur, you retry the migration with a table rather than the whole database. +When using Migration Toolkit (MTK), if the session is interrupted, the migration errors out. To resolve this, you need to restart the migration from the beginning. +The recommended path to avoid this is to migrate on a per-table basis when using MTK so that if this issue does occur, you retry the migration with a single table rather than the whole database. ### Ensure loaderCount doesn't exceed 1 in Migration Toolkit -When using Migration Toolkit to migrate a PGD cluster, if you adjusted the loaderCount to be greater than 1 to speed up migration, you may see an error in the MTK CLI that says "pgsql_tmp/': No such file or directory." If you see this, reduce your loaderCount to 1 in MTK. +When using Migration Toolkit to migrate a PGD cluster, if you adjusted the loaderCount to be greater than 1 to speed up migration, you may see an error in the MTK CLI that says "pgsql_tmp/': No such file or directory." +If you see this, reduce your loaderCount to 1 in MTK.
## Tools diff --git a/product_docs/docs/biganimal/release/release_notes/index.mdx b/product_docs/docs/biganimal/release/release_notes/index.mdx index ef19e94be8e..16696fb7a0f 100644 --- a/product_docs/docs/biganimal/release/release_notes/index.mdx +++ b/product_docs/docs/biganimal/release/release_notes/index.mdx @@ -2,6 +2,7 @@ title: BigAnimal release notes navTitle: Release notes navigation: +- mar_2024_rel_notes - feb_2024_rel_notes - jan_2024_rel_notes - dec_2023_rel_notes @@ -22,6 +23,7 @@ The BigAnimal documentation describes the latest version of BigAnimal, including | Month | |--------------------------------------| +| [March 2024](mar_2024_rel_notes) | | [February 2024](feb_2024_rel_notes) | | [January 2024](jan_2024_rel_notes) | | [December 2023](dec_2023_rel_notes) | diff --git a/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx new file mode 100644 index 00000000000..647dc1653a7 --- /dev/null +++ b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx @@ -0,0 +1,15 @@ +--- +title: BigAnimal March 2024 release notes +navTitle: March 2024 +--- + +BigAnimal's March 2024 release includes the following enhancements and bug fixes: + +| Type | Description | +|------|-------------| +| Enhancement | EDB Postgres Extended Server is now available in BigAnimal for single-node, high-availability, and distributed high availability clusters. | +| Enhancement | You can now use Transparent Data Encryption (TDE) for clusters running on EDB Postgres Advanced Server or EDB Postgres Extended Server versions 15 and later in BigAnimal's AWS account. With TDE, you can connect your keys from AWS's Key Management Service to encrypt your clusters at the database level in addition to the default volume-level encryption. | +| Enhancement | BigAnimal Terraform provider v0.8.1 is now available.
Learn more about what’s new [here](https://github.com/EnterpriseDB/terraform-provider-biganimal/releases/tag/v0.8.1) and download the provider [here](https://registry.terraform.io/providers/EnterpriseDB/biganimal/latest). | +| Enhancement | BigAnimal CLI v3.6.0 is now available. Learn more about what’s new [here](https://cli.biganimal.com/versions/v3.6.0/). | + + diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx new file mode 100644 index 00000000000..094cc4d9268 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -0,0 +1,234 @@ +--- +title: Performing a major version upgrade of Postgres on BigAnimal +navTitle: Upgrading Postgres major versions +deepToC: true +--- + +## Using logical replication + +!!! Note +This procedure does not work with distributed high-availability BigAnimal instances. +!!! + +Logical replication is a common method for upgrading the Postgres major version on BigAnimal instances, enabling a transition with minimal downtime. + +By replicating changes in real-time from an older version (source instance) to a newer one (target instance), this method provides a reliable upgrade path while maintaining database availability. + +!!! Important +Depending on where your older and newer versioned BigAnimal instances are located, this procedure may accrue ingress and egress costs from your cloud service provider (CSP) for the migrated data. Please consult your CSP's pricing documentation to see how ingress and egress fees are calculated to determine any extra costs. +!!! + +### Overview of upgrading + +To perform a major version upgrade, use the following steps, explained in further detail below: + +1. [Create a BigAnimal instance](#create-a-biganimal-instance) +1. [Gather instance information](#gather-instance-information) +1. 
[Confirm the Postgres versions before migration](#confirm-the-postgres-versions-before-migration) +1. [Migrate the database schema](#migrate-the-database-schema) +1. [Create a publication](#create-a-publication) +1. [Create a logical replication slot](#create-a-logical-replication-slot) +1. [Create a subscription](#create-a-subscription) +1. [Validate the migration](#validate-the-migration) + + +### Create a BigAnimal instance + +To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This will be your target instance. + +Ensure your target instance is provisioned with a storage size equal to or greater than your source instance. + +For detailed steps on creating a BigAnimal instance, see [this guide](../getting_started/creating_a_cluster.mdx). + +### Gather instance information + +Use the BigAnimal console to obtain the following information for your source and target instance: + +- Read/write URI +- Database name +- Username +- Read/write host + +Using the BigAnimal console: + +1. Select the **Clusters** tab. +1. Select your source instance. +1. From the Connect tab, obtain the information from **Connection Info**. 
+ +### Confirm the Postgres versions before migration + +Confirm the Postgres version of your source and target BigAnimal instances: + +``` +psql "<biganimal-connection-string>" -c "select version();" +``` + +Output using Postgres 16: + +``` + version +------------------------------------------------------------------------------------------------------------------------------------- + PostgreSQL 16.2 (Debian 16.2.0-3.buster) (BigAnimal Edition) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit +(1 row) +``` + +### Migrate the database schema + +On your source instance, use the `\dt+` meta-command to view the details of the schema to be migrated: + +```sql +\dt+ +``` + +Here is a sample database schema for this example: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+------------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 1572 MB | + public | pgbench_branches | table | edb_admin | permanent | heap | 8192 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | +``` + +Use `pg_dump` with the `--schema-only` flag to copy the schema from your source to your target instance. For more information on using `pg_dump`, [see the Postgres documentation](https://www.postgresql.org/docs/current/app-pgdump.html).
+ +``` +pg_dump --schema-only -h <source-host> -U <source-user> -d <source-dbname> | psql -h <target-host> -U <target-user> -d <target-dbname> +``` + +On the target instance, confirm the schema was migrated: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+---------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_branches | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | +``` + +!!! Note +A successful schema-only copy shows the tables with zero bytes. +!!! + +### Create a publication + +Use the `CREATE PUBLICATION` command to create a publication on your source instance. For more information on using `CREATE PUBLICATION`, see [the Postgres documentation](https://www.postgresql.org/docs/current/sql-createpublication.html). + +```sql +CREATE PUBLICATION <publication_name>; +``` + +In this example: + +```sql +CREATE PUBLICATION v12_pub; +``` + +The expected output is: `CREATE PUBLICATION`. + +Add tables that you want to replicate to your target instance: + +```sql +ALTER PUBLICATION <publication_name> ADD TABLE <table_name>; +``` + +```sql +ALTER PUBLICATION v12_pub ADD TABLE pgbench_accounts; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_branches; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_history; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_tellers; +``` + +The expected output is: `ALTER PUBLICATION`. + +### Create a logical replication slot + +Then, on the source instance, create a replication slot using the `pgoutput` plugin: + +```sql +SELECT pg_create_logical_replication_slot('<slot_name>','pgoutput'); +``` + +In the current example: + +```sql +SELECT pg_create_logical_replication_slot('v12_pub','pgoutput'); +``` + +The expected output returns the `slot_name` and `lsn`.
+ +``` + pg_create_logical_replication_slot +------------------------------------ + (v12_pub,0/AC003330) +``` + +The replication slot tracks changes to the published tables on the source instance and replicates those changes to the subscriber on the target instance. + +### Create a subscription + +Use the `CREATE SUBSCRIPTION` command to create a subscription on your target instance. For more information on using `CREATE SUBSCRIPTION`, see [the Postgres documentation](https://www.postgresql.org/docs/current/sql-createsubscription.html). + +```sql +CREATE SUBSCRIPTION <subscription_name> CONNECTION 'user=<source-user> host=<source-host> sslmode=require port=<source-port> dbname=<source-dbname> password=<source-password>' PUBLICATION <publication_name> WITH (enabled=true, copy_data = true, create_slot = false, slot_name=<slot_name>); +``` + +Creating a subscription on a Postgres 16 instance to a publication on a Postgres 12 instance: + +```sql +CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io sslmode=require port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); +``` + +The expected output is: `CREATE SUBSCRIPTION`. + +In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by `v12_pub`. + +The subscriber pulls data (but not subsequent schema changes, as noted in the PostgreSQL [documentation on the limitations of logical replication](https://www.postgresql.org/docs/current/logical-replication-restrictions.html)) from the source to the target database, effectively replicating the data. + +### Validate the migration + +To validate the progress of the data migration, use `\dt+` from the source and target BigAnimal instances to compare the size of each table.
+ +On the source instance: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+------------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 1572 MB | + public | pgbench_branches | table | edb_admin | permanent | heap | 8192 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | +``` + +On the target instance: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+---------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_branches | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | +``` + +If logical replication is running correctly, each time you run `\dt+` on the target instance, you see that more data has been migrated: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+---------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 344 MB | + public | pgbench_branches | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | +``` + +!!! Note +You can optionally use [LiveCompare](https://www.enterprisedb.com/docs/livecompare/latest/) to generate a comparison report of the source and target databases to validate that all database objects and data are consistent. +!!!
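Beyond comparing table sizes, you can check replication progress from the standard PostgreSQL monitoring views. This is a general sketch; the slot name `v12_pub` follows the example in this section, and the exact columns returned vary by Postgres version:

```sql
-- On the target instance: confirm the subscription worker is running
-- and see the last WAL position it has received from the source.
SELECT subname, received_lsn, latest_end_lsn
FROM pg_stat_subscription;

-- On the source instance: confirm the slot is active and measure the
-- backlog (in bytes) that the subscriber hasn't yet confirmed.
SELECT slot_name, active,
       pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_name = 'v12_pub';
```

A `lag_bytes` value that trends toward zero indicates that the initial copy has finished and ongoing replication is keeping up.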
+ + + diff --git a/product_docs/docs/biganimal/release/using_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/index.mdx index a7398ff4ced..d10e72ac6e8 100644 --- a/product_docs/docs/biganimal/release/using_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/index.mdx @@ -11,6 +11,7 @@ navigation: - 05_monitoring_and_logging - fault_injection_testing - 05a_deleting_your_cluster +- 05c_upgrading_log_rep - 06_analyze_with_superset - 06_demonstration_oracle_compatibility - terraform_provider diff --git a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx index 34e7c81de58..9e4c4f06923 100644 --- a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx +++ b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx @@ -10,12 +10,19 @@ redirects: - /pem/latest/pem_online_help/07_toc_pem_sql_profiler/05_sp_sql_profiler_tab/ --- -The SQL Profiler extension allows a database superuser to locate and optimize inefficient SQL code. Microsoft's SQL Server Profiler is very similar to PEM’s SQL Profiler in operation and capabilities. +The SQL Profiler extension allows a user to locate and optimize inefficient SQL code. Microsoft's SQL Server Profiler is very similar to PEM’s SQL Profiler in operation and capabilities. SQL Profiler works with PEM to allow you to profile a server's workload. You can install and enable the SQL Profiler extension on servers with or without a PEM agent. However, you can run traces only in ad hoc mode on unmanaged servers and you can schedule them only on managed servers. SQL Profiler captures and displays a specific SQL workload for analysis in a SQL trace. You can start and review captured SQL traces immediately or save captured traces for review later. You can use SQL Profiler to create and store up to 15 named traces. 
+## Permissions for SQL Profiler + +To access the SQL Profiler tool in PEM, there are two prerequisites: + +1. The user logged in to the PEM GUI must either be a superuser or a member of the `pem_comp_sqlprofiler` group within the PEM server database. For more information, see [PEM groups](/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality). +1. The user configured for the database server in the server tree (`Username`) must be a superuser on the database server that the trace is being run on (the monitored server). + ## Creating a trace You can use the Create Trace dialog box to define a SQL trace for any database on which SQL Profiler was installed and configured. To open the dialog box, select the database in the PEM client tree and select **Tools > Server > SQL Profiler > Create trace**. diff --git a/product_docs/docs/pgd/4/bdr/scaling.mdx b/product_docs/docs/pgd/4/bdr/scaling.mdx index 8e20d943774..165e0b71152 100644 --- a/product_docs/docs/pgd/4/bdr/scaling.mdx +++ b/product_docs/docs/pgd/4/bdr/scaling.mdx @@ -22,9 +22,10 @@ Otherwise, later executions will alter the definition. `bdr.autopartition()` doesn't lock the actual table. It changes the definition of when and how new partition maintenance actions take place. -`bdr.autopartition()` leverages the features that allow a partition to be -attached or detached/dropped without locking the rest of the table -(when the underlying Postgres version supports it). +PGD AutoPartition leverages underlying Postgres features that allow a partition +to be attached or detached/dropped without locking the rest of the table +(AutoPartition currently only supports this when used with 2nd Quadrant Postgres +11). An ERROR is raised if the table isn't RANGE partitioned or a multi-column partition key is used.
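For reference, a typical `bdr.autopartition()` call on a RANGE-partitioned table looks like the following sketch. The table name and parameter values are illustrative only, and the available parameters vary by PGD version, so check the reference for your release before using it:

```sql
SELECT bdr.autopartition(
    relation := 'measurement',                    -- must be RANGE partitioned
    partition_increment := '1 month',             -- width of each new partition
    partition_initial_lowerbound := '2024-01-01', -- lower bound of the first partition
    minimum_advance_partitions := 2,              -- keep at least 2 future partitions
    maximum_advance_partitions := 5               -- create at most 5 ahead of time
);
```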
diff --git a/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx b/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx index 79b0a10d8a4..a0a53479b71 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx @@ -1,6 +1,6 @@ --- -title: Step 1 - Provisioning Hosts -navTitle: Provisioning Hosts +title: Step 1 - Provisioning hosts +navTitle: Provisioning hosts deepToC: true --- @@ -8,19 +8,19 @@ deepToC: true The first step in the process of deploying PGD is to provision and configure hosts. -You can deploy to virtual machine instances in the cloud with Linux installed, on-premise virtual machines with Linux installed or on-premise physical hardware also with Linux installed. +You can deploy to virtual machine instances in the cloud with Linux installed, on-premises virtual machines with Linux installed, or on-premises physical hardware, also with Linux installed. -Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that can be accessed by you using SSH with a user that has superuser, administrator or sudo privileges. +Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges. -Each machine provisioned should be able to make connections to any other machine you are provisioning for your cluster. +Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster. -On cloud deployments, this may be done over the public network or over a VPC. 
+On cloud deployments, you can do this over the public network or over a VPC. -On-premise deployments should be able to connect over the local network. +On-premises deployments must be able to connect over the local network. !!! Note Cloud provisioning guides -If you are new to cloud provisioning, these guides may provide assistance: +If you're new to cloud provisioning, these guides may provide assistance: Vendor | Platform | Guide ------ | -------- | ------ @@ -36,29 +36,29 @@ If you are new to cloud provisioning, these guides may provide assistance: We recommend that you configure an admin user for each provisioned instance. The admin user must have superuser or sudo (to superuser) privileges. -We also recommend that the admin user should be configured for passwordless SSH access using certificates. +We also recommend that the admin user be configured for passwordless SSH access using certificates. #### Ensure networking connectivity -With the admin user created, ensure that each machine can communicate with the other machines you are provisioning. +With the admin user created, ensure that each machine can communicate with the other machines you're provisioning. In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced -Server, 5432 for EDB Postgres Extended and Community PostgreSQL) should be open +Server, 5432 for EDB Postgres Extended and community PostgreSQL) must be open to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be -open to any applications which will connect to the cluster. Port 6432 is typically +open to any applications that will connect to the cluster. Port 6432 is typically used for PGD Proxy. ## Worked example -For the example in this section, we have provisioned three hosts with Red Hat Enterprise Linux 9. +For this example, three hosts with Red Hat Enterprise Linux 9 were provisioned: * host-one * host-two * host-three -Each is configured with a "admin" admin user. 
+Each is configured with an admin user named admin. -These hosts have been configured in the cloud and as such each host has both a public and private IP address. +These hosts were configured in the cloud. As such, each host has both a public and private IP address. Name | Public IP | Private IP ------|-----------|---------------------- @@ -66,11 +66,10 @@ These hosts have been configured in the cloud and as such each host has both a p host-two | 172.24.113.247 | 192.168.254.247 host-three | 172.24.117.23 | 192.168.254.135 -For our example cluster, we have also edited `/etc/hosts` to use those private IP addresses: +For the example cluster, `/etc/hosts` was also edited to use those private IP addresses: ``` 192.168.254.166 host-one 192.168.254.247 host-two 192.168.254.135 host-three ``` - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx index 7e44569c2d0..bf40116517b 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx @@ -6,43 +6,43 @@ deepToC: true ## Installing Postgres -You will need to install Postgres on all the hosts. +You need to install Postgres on all the hosts. An EDB account is required to use the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can get installation instructions. Select your platform and Postgres edition. -You will be presented with 2 steps of instructions, the first covering how to configure the required package repository and the second covering how to install the packages from that repository. +You're presented with 2 steps of instructions. The first step covers how to configure the required package repository. The second step covers how to install the packages from that repository. Run both steps. 
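Before moving on, it can be worth confirming the networking requirement from the provisioning step: that each host can reach its peers on the Postgres port. A minimal sketch follows; `check_port` is a hypothetical helper (not EDB tooling), and the host names and port are the example values from this guide:

```shell
# Hypothetical helper: test whether a host accepts TCP connections on a port,
# using bash's built-in /dev/tcp pseudo-device (no extra tools needed).
check_port() {
  local host="$1" port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable"
  fi
}

# Check the EDB Postgres Advanced Server port on each example host.
for h in host-one host-two host-three; do
  check_port "$h" 5444
done
```

Run the loop from each host in turn; every peer should report as reachable before you continue.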
## Worked example -In our example, we will be installing EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9). +This example installs EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9). ### EDB account -You'll need an EDB account to install both Postgres and PGD. +You need an EDB account to install both Postgres and PGD. -Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform and then scroll down the list to select the Postgres version you wish to install: +Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform. Then scroll down the list to select the Postgres version you want to install: * EDB Postgres Advanced Server * EDB Postgres Extended * PostgreSQL -Upon selecting the version of the Postgres server you want, two steps will be displayed. +When you select the version of the Postgres server you want, two steps are displayed. ### 1: Configuring repositories -For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token will be automatically inserted by the EDB Repos 2.0 site into these scripts. -In our examples, it will be shown as `XXXXXXXXXXXXXXXX`. +For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token is inserted into these scripts by the EDB Repos 2.0 site. +In the examples, it's shown as `XXXXXXXXXXXXXXXX`. -On each provisioned host, either run the automatic repository installation script which will look like this: +On each provisioned host, you either run the automatic repository installation script or use the manual installation steps. 
The automatic script looks like this: ```shell curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/setup.rpm.sh' | sudo -E bash ``` -Or use the manual installation steps which look like this: +The manual installation steps look like this: ```shell dnf install yum-utils @@ -54,9 +54,8 @@ dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-enterprise' ### 2: Install Postgres -For step 2, we just run the command to install the packages. +For step 2, run the command to install the packages: ``` sudo dnf -y install edb-as16-server ``` - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx b/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx index 10331b0ed6d..4ecad1cfa4e 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx @@ -8,7 +8,7 @@ deepToC: true To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages. -The following operations should be carried out on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types) such as witness or subscriber-only nodes. +Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../node_management/node_types), such as witness or subscriber-only nodes. * Use your EDB account. * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. @@ -39,7 +39,7 @@ The following operations should be carried out on each host. For the purposes of ### Use your EDB account -You'll need an EDB account to install Postgres Distributed. 
+You need an EDB account to install Postgres Distributed. Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token. @@ -47,7 +47,7 @@ On your first visit to this page, select **Request Access** to generate your rep ![EDB Repos 2.0](images/edbrepos2.0.png) -Copy the token to your clipboard using the **Copy Token** button and store it safely. +Select **Copy Token** to copy the token to your clipboard, and store the token safely. ### Set environment variables @@ -61,40 +61,40 @@ export EDB_SUBSCRIPTION_TOKEN= You can add this to your `.bashrc` script or similar shell profile to ensure it's always set. !!! Note -Your preferred platform may support storing this variable as a secret which can appear as an environment variable. If this is the case, don't add the setting to `.bashrc` and instead add it to your platform's secret manager. +Your preferred platform may support storing this variable as a secret, which can appear as an environment variable. If this is the case, add it to your platform's secret manager, and don't add the setting to `.bashrc`. !!! ### Configure the repository All the software you need is available from the EDB Postgres Distributed package repository. -You have the option to simply download and run a script to configure the EDB Postgres Distributed repository. -You can also download, inspect and then run that same script. -The following instructions also include the essential steps that the scripts take for any user wanting to manually run, or automate, the installation process. +You have the option to download and run a script to configure the EDB Postgres Distributed repository. +You can also download, inspect, and then run that same script. +The following instructions also include the essential steps that the scripts take for any user wanting to manually run the installation process or to automate it. 
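If you script the repository setup, a guard against a missing token avoids piping a broken URL to `bash`. A minimal sketch, where `require_token` is a hypothetical helper and `EDB_SUBSCRIPTION_TOKEN` is the environment variable described above:

```shell
# Hypothetical guard: check that the repository token is exported before
# any of the repository setup commands run.
require_token() {
  if [ -z "${EDB_SUBSCRIPTION_TOKEN:-}" ]; then
    echo "EDB_SUBSCRIPTION_TOKEN is not set" >&2
    return 1
  fi
  echo "token ok"
}

# Usage on a host where the token has been exported:
#   require_token && curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash
```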
#### RHEL/Other RHEL-based -You can autoinstall with automated OS detection +You can autoinstall with automated OS detection: ``` curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash ``` -If you wish to inspect the script that is generated for you run: +If you want to inspect the script that's generated for you, run: ``` curl -1sLfO "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" ``` -Then inspect the resulting `setup.rpm.sh` file. When you are happy to proceed, run: +Then inspect the resulting `setup.rpm.sh` file. When you're ready to proceed, run: ``` sudo -E bash setup.rpm.sh ``` -If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide: +If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide. -You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you are using. Here we set it for RHEL compatible Linux version 9: +You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you're using. This example sets it for RHEL-compatible Linux version 9: ``` export DISTRO="el" @@ -107,13 +107,13 @@ Now install the yum-utils package: sudo dnf install -y yum-utils ``` -The next step will import a GPG key for the repositories: +The next step imports a GPG key for the repositories: ``` sudo rpm --import "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/gpg.B09F406230DA0084.key" ``` -Now, we can import the repository details, add them to the local configuration and enable the repository. +Now you can import the repository details, add them to the local configuration, and enable the repository. 
``` curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/config.rpm.txt?distro=$DISTRO&codename=$CODENAME" > /tmp/enterprise.repo diff --git a/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx index 153e9e9fe42..8ea39a2a8a9 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx @@ -7,65 +7,65 @@ deepToC: true ## Installing the PGD software With the repositories configured, you can now install the Postgres Distributed software. -These steps must be carried out on each host before proceeding to the next step. +You must perform these steps on each host before proceeding to the next step. -* **Install the packages** - * Install the PGD packages which include a server specific BDR package and generic PGD proxy and cli packages. (`edb-bdr5-`, `edb-pgd5-proxy`, and `edb-pgd5-cli`) +* **Install the packages.** + * Install the PGD packages, which include a server-specific BDR package and generic PGD Proxy and CLI packages. (`edb-bdr5-`, `edb-pgd5-proxy`, and `edb-pgd5-cli`) * **Ensure the Postgres database server has been initialized and started.** - * Use `systemctl status ` to check the service is running - * If not, initialize the database and start the service + * Use `systemctl status` to check that the service is running. + * If the service isn't running, initialize the database and start the service. -* **Configure the BDR extension** - * Add the BDR extension (`$libdir/bdr`) at the start of the shared_preload_libraries setting in `postgresql.conf`. +* **Configure the BDR extension.** + * Add the BDR extension (`$libdir/bdr`) at the start of the `shared_preload_libraries` setting in `postgresql.conf`. * Set the `wal_level` GUC variable to `logical` in `postgresql.conf`. 
* Turn on commit timestamp tracking by setting `track_commit_timestamp` to `'on'` in `postgresql.conf`. - * Raise the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

+ * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

!!! Note The `max_worker_processes` value - The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors. - To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings). - The value of 16 was calculated for the size of cluster we are deploying and must be raised for larger clusters. + The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. + To calculate the needed value, see [Postgres configuration/settings](../postgres-configuration/#postgres-settings). + The value of 16 was calculated for the size of cluster being deployed in this example. It must be increased for larger clusters. !!! * Set a password on the EnterpriseDB/Postgres user. * Add rules to `pg_hba.conf` to allow nodes to connect to each other. - * Ensure that these lines are present in `pg_hba.conf: + * Ensure that these lines are present in `pg_hba.conf`: ``` host all all all md5 host replication all all md5 ``` * Add a `.pgpass` file to allow nodes to authenticate each other. - * Configure a user with sufficient privileges to be able to log into the other nodes. + * Configure a user with sufficient privileges to log in to the other nodes. * See [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the Postgres documentation for more on the `.pgpass` file. * **Restart the server.** - * Verify the restarted server is running with the modified settings and the bdr extension is available + * Verify the restarted server is running with the modified settings and the BDR extension is available. * **Create the replicated database.** - * Log into the server's default database (`edb` for EPAS, `postgres` for PGE and Community).
* Use `CREATE DATABASE bdrdb` to create the default PGD replicated database. * Log out and then log back in to `bdrdb`. * Use `CREATE EXTENSION bdr` to enable the BDR extension and PGD to run on that database. -We will look in detail at the steps for EDB Postgres Advanced Server in the worked example below. +The worked example that follows shows the steps for EDB Postgres Advanced Server in detail. -If you are installing PGD with EDB Postgres Extended Server or Community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for Postgresql](#installing-pgd-for-postgresql). +If you're installing PGD with EDB Postgres Extended Server or community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for community Postgresql](#installing-pgd-for-community-postgresql). ## Worked example ### Install the packages -The first step is to install the packages. For each Postgres package, there is a `edb-bdr5-` package to go with it. -For example, if we are installing EDB Postgres Advanced Server (epas) version 16, we would install `edb-bdr5-epas16`. +The first step is to install the packages. Each Postgres package has an `edb-bdr5-` package to go with it. +For example, if you're installing EDB Postgres Advanced Server (epas) version 16, you'd install `edb-bdr5-epas16`. There are two other packages to also install: -- `edb-pgd5-proxy` for PGD Proxy. -- `edb-pgd5-cli` for the PGD command line tool. 
+- `edb-pgd5-proxy` for PGD Proxy +- `edb-pgd5-cli` for the PGD command line tool To install all of these packages on a RHEL or RHEL compatible Linux, run: @@ -75,15 +75,15 @@ sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli ### Ensure the database is initialized and started -If it wasn't initialized and started by the database's package initialisation (or you are repeating the process), you will need to initialize and start the server. +If the server wasn't initialized and started by the database's package initialization (or you're repeating the process), you need to initialize and start the server. -To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-16` so run: +To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-16`, so run: ``` sudo systemctl status edb-as-16 ``` -If the server is not running, this will respond with: +If the server isn't running, the response is: ``` ○ edb-as-16.service - EDB Postgres Advanced Server 16 @@ -91,18 +91,18 @@ If the server is not running, this will respond with: Active: inactive (dead) ``` -The "Active: inactive (dead)" tells us we will need to initialize and start the server. +`Active: inactive (dead)` tells you that you need to initialize and start the server. -You will need to know the path to the setup script for your particular Postgres flavor. +You need to know the path to the setup script for your particular Postgres flavor. -For EDB Postgres Advanced Server, this script can be found in `/usr/edb/as16/bin` as `edb-as-16-setup`. -This command needs to be run with the `initdb` parameter and we need to pass an option setting the database to use UTF-8. +For EDB Postgres Advanced Server, you can find this script in `/usr/edb/as16/bin` as `edb-as-16-setup`. 
+Run this command with the `initdb` parameter and pass an option to set the database to use UTF-8: ``` sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as16/bin/edb-as-16-setup initdb ``` -Once the database is initialized, we will start it which will enable us to continue configuring the BDR extension. +Once the database is initialized, start it so that you can continue configuring the BDR extension: ``` sudo systemctl start edb-as-16 @@ -110,24 +110,24 @@ sudo systemctl start edb-as-16 ### Configure the BDR extension -Installing EDB Postgres Advanced Server creates a system user `enterprisedb` with admin capabilities when connected to the database. We will be using this user to configure the BDR extension. +Installing EDB Postgres Advanced Server creates a system user enterprisedb with admin capabilities when connected to the database. You'll use this user to configure the BDR extension. #### Preload the BDR library -We want the bdr library to be preloaded with other libraries. -EPAS has a number of libraries already preloaded, so we have to prefix the existing list with the BDR library. +You need to preload the BDR library with other libraries. +EDB Postgres Advanced Server has a number of libraries already preloaded, so you have to prefix the existing list with the BDR library. ``` echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null ``` !!!tip -This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the postgresql.conf file, which is owned by another user. +This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the `postgresql.conf` file, which is owned by another user. !!! #### Set the `wal_level` -The BDR extension needs to set the server to perform logical replication. We do this by setting `wal_level` to `logical`. 
+The BDR extension needs to set the server to perform logical replication. Do this by setting `wal_level` to `logical`: ``` echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null @@ -136,24 +136,24 @@ echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/ #### Enable commit timestamp tracking -The BDR extension also needs the commit timestamp tracking enabled. +The BDR extension also needs the commit timestamp tracking enabled: ``` echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null ``` -#### Raise `max_worker_processes` +#### Increase `max_worker_processes` To communicate between multiple nodes, Postgres Distributed nodes run more worker processes than usual. The default limit (8) is too low even for a small cluster. -The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors. -To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings). +The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. +To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings). -For this example, with a 3 node cluster, we are using the value of 16. +This example, with a 3-node cluster, uses the value of 16. -Raise the maximum number of worker processes to 16 with this commmand: +Increase the maximum number of worker processes to 16: ``` echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null @@ -161,14 +161,14 @@ echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb ``` -This value must be raised for larger clusters. +This value must be increased for larger clusters. 
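The three `postgresql.conf` changes described so far can be applied in one pass. A sketch using a scratch file (an assumption for safe experimentation; on a real node, `CONF` would be `/var/lib/edb/as16/data/postgresql.conf`, written via `sudo -u enterprisedb tee` as in the commands above):

```shell
# Append the three settings in one loop. CONF is a scratch copy here;
# on a real node, point it at the server's postgresql.conf and use
# sudo -u enterprisedb tee -a as shown elsewhere in this guide.
CONF="$(mktemp)"
for setting in "wal_level = 'logical'" \
               "track_commit_timestamp = 'on'" \
               "max_worker_processes = '16'"; do
  echo "$setting" | tee -a "$CONF" >/dev/null
done
grep -c '=' "$CONF"   # count of appended settings
```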
#### Add a password to the Postgres enterprisedb user To allow connections between nodes, a password needs to be set on the Postgres enterprisedb user. -For this example, we are using the password `secret`. +This example uses the password `secret`. Select a different password for your deployments. -You will need this password when we get to [Creating the PGD Cluster](05-creating-cluster). +You will need this password for [connecting the cluster](05-connecting-cluster). ``` sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'" ``` @@ -186,7 +186,7 @@ echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/ ``` -It will append +The command appends the following to `pg_hba.conf`: ``` host all all all md5 @@ -194,15 +194,15 @@ host replication all all md5 ``` -to `pg_hba.conf` which will enable the nodes to replicate. +These lines enable the nodes to replicate. #### Enable authentication between nodes As part of the process of connecting nodes for replication, PGD logs into other nodes. -It will perform that log in as the user that Postgres is running under. -For epas, this is the `enterprisedb` user. -That user will need credentials to log into the other nodes. -We will supply these credentials using the `.pgpass` file which needs to reside in the user's home directory. +It performs that login as the user that Postgres is running under. +For EDB Postgres Advanced Server, this is the `enterprisedb` user. +That user needs credentials to log in to the other nodes. +Supply these credentials using the `.pgpass` file, which needs to reside in the user's home directory. The home directory for `enterprisedb` is `/var/lib/edb`.
Run this command to create the file: @@ -216,7 +216,7 @@ You can read more about the `.pgpass` file in [The Password File](https://www.po ### Restart the server -After all these configuration changes, it is recommended that the server is restarted with: +After all these configuration changes, we recommend that you restart the server with: ``` sudo systemctl restart edb-as-16 @@ -225,14 +225,14 @@ sudo systemctl restart edb-as-16 #### Check the extension has been installed -At this point, it is worth checking the extension is actually available and our configuration has been correctly loaded. You can query the pg_available_extensions table for the bdr extension like this: +At this point, it's worth checking whether the extension is actually available and the configuration was correctly loaded. You can query the `pg_available_extensions` table for the BDR extension like this: ``` sudo -u enterprisedb psql edb -c "select * from pg_available_extensions where name like 'bdr'" ``` -Which should return an entry for the extension and its version. +This command returns an entry for the extension and its version: ``` name | default_version | installed_version | comment @@ -250,7 +250,7 @@ sudo -u enterprisedb psql edb -c "show all" | grep -e wal_level -e track_commit_ ### Create the replicated database The server is now prepared for PGD. -We need to next create a database named `bdrdb` and install the bdr extension when logged into it. +You need to next create a database named `bdrdb` and install the BDR extension when logged into it: ``` sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb" @@ -258,14 +258,14 @@ sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr" ``` -Finally, test the connection by logging into the server. +Finally, test the connection by logging in to the server. ``` sudo -u enterprisedb psql bdrdb ``` -You should be connected to the server. -Execute the command "\\dx" to list extensions installed. +You're connected to the server. 
+Execute the command "\\dx" to list extensions installed: ``` bdrdb=# \dx @@ -280,13 +280,13 @@ bdrdb=# \dx (5 rows) ``` -Notice that the bdr extension is listed in the table, showing it is installed. +Notice that the BDR extension is listed in the table, showing that it's installed. ## Summaries ### Installing PGD for EDB Postgres Advanced Server -These are all the commands used in this section gathered together for your convenience. +For your convenience, here's a summary of the commands used in this example. ``` sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli @@ -308,14 +308,14 @@ sudo -u enterprisedb psql bdrdb ### Installing PGD for EDB Postgres Extended Server -If installing PGD with EDB Postgres Extended Server, there are a number of differences from the EPAS installation. +Installing PGD with EDB Postgres Extended Server has a number of differences from the EDB Postgres Advanced Server installation: -* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number) -* A different setup utility should be called: /usr/edb/pgeNN/bin/edb-pge-NN-setup -* The service name is edb-pge-NN. -* The system user is postgres (not enterprisedb) -* The home directory for the postgres user is `/var/lib/pgqsl` -* There are no pre-existing libraries to be added to `shared_preload_libraries` +* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number). +* Call a different setup utility: `/usr/edb/pgeNN/bin/edb-pge-NN-setup`. +* The service name is `edb-pge-NN`. +* The system user is postgres (not enterprisedb). +* The home directory for the postgres user is `/var/lib/pgqsl`. +* There are no preexisting libraries to add to `shared_preload_libraries`. 
#### Summary: Installing PGD for EDB Postgres Extended Server 16 @@ -339,14 +339,14 @@ sudo -u postgres psql bdrdb ### Installing PGD for community PostgreSQL -If installing PGD with PostgresSQL, there are a number of differences from the EPAS installation. +Installing PGD with community PostgreSQL has a number of differences from the EDB Postgres Advanced Server installation: -* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number) -* A different setup utility should be called: /usr/pgsql-NN/bin/postgresql-NN-setup -* The service name is postgresql-NN. -* The system user is postgres (not enterprisedb) -* The home directory for the postgres user is `/var/lib/pgqsl` -* There are no pre-existing libraries to be added to `shared_preload_libraries` +* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number). +* Call a different setup utility: `/usr/pgsql-NN/bin/postgresql-NN-setup`. +* The service name is `postgresql-NN`. +* The system user is `postgres` (not `enterprisedb`). +* The home directory for the `postgres` user is `/var/lib/pgsql`. +* There are no preexisting libraries to add to `shared_preload_libraries`.
#### Summary: Installing PGD for Postgresql 16 @@ -367,5 +367,3 @@ sudo -u postgres psql bdrdb -c "CREATE EXTENSION bdr" sudo -u postgres psql bdrdb ``` - - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx index b14caa5dac4..c94c70250f0 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx @@ -1,68 +1,67 @@ --- -title: Step 5 - Creating the PGD Cluster -navTitle: Creating the Cluster +title: Step 5 - Creating the PGD cluster +navTitle: Creating the cluster deepToC: true --- ## Creating the PGD cluster * **Create connection strings for each node**. -For each node we want to create a connection string which will allow PGD to perform replication. +For each node, create a connection string that will allow PGD to perform replication. - The connection string is a key/value string which starts with a `host=` and the IP address of the host (or if you have resolvable named hosts, the name of the host). + The connection string is a key/value string that starts with a `host=` and the IP address of the host. (If you have resolvable named hosts, the name of the host is used instead of the IP address.) - That is followed by the name of the database; `dbname=bdrdb` as we created a `bdrdb` database when [installing the software](04-installing-software). + That's followed by the name of the database. In this case, use `dbname=bdrdb`, as a `bdrdb` database was created when [installing the software](04-installing-software). - We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and Community PostgreSQL. 
+ We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and community PostgreSQL. * **Prepare the first node.** -To create the cluster, we select and log into one of the hosts Postgres server's `bdrdb` database. - +To create the cluster, select and log in to the `bdrdb` database on any host's Postgres server. * **Create the first node.** - Run `bdr.create_node` and give the node a name and its connection string where *other* nodes may connect to it. + Run `bdr.create_node` and give the node a name and its connection string where *other* nodes can connect to it. * Create the top-level group. - Create a top-level group for the cluster with `bdr.create_node_group` giving it a single parameter, the name of the top-level group. - * Create a sub-group. - Create a sub-group as a child of the top-level group with `bdr.create_node_group` giving it two parameters, the name of the sub-group and the name of the parent (and top-level) group. - This initializes the first node. + Create a top-level group for the cluster with `bdr.create_node_group`, giving it a single parameter: the name of the top-level group. + * Create a subgroup. + Create a subgroup as a child of the top-level group with `bdr.create_node_group`, giving it two parameters: the name of the subgroup and the name of the parent (and top-level) group. + This process initializes the first node. -* **Adding the second node.** - * Create the second node. - Log into another initialized node's `bdrdb` database. - Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it. - * Join the second node to the cluster - Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join. +* **Add the second node.** + * Create the second node. 
+ Log in to another initialized node's `bdrdb` database. + Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it. + * Join the second node to the cluster. + Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join. -* **Adding the third node.** - * Create the third node - Log into another initialized node's `bdrdb` database. - Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it. - * Join the third node to the cluster - Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join. +* **Add the third node.** + * Create the third node. + Log in to another initialized node's `bdrdb` database. + Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it. + * Join the third node to the cluster. + Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join. ## Worked example -So far, we have: +So far, this example has: -* Created three Hosts. +* Created three hosts. * Installed a Postgres server on each host. * Installed Postgres Distributed on each host. * Configured the Postgres server to work with PGD on each host. -To create the cluster, we will tell `host-one`'s Postgres instance that it is a PGD node - `node-one` and create PGD groups on that node. -Then we will tell `host-two` and `host-three`'s Postgres instances that they are PGD nodes - `node-two` and `node-three` and that they should join a group on `node-one`. +To create the cluster, you tell host-one's Postgres instance that it's a PGD node—node-one—and create PGD groups on that node. 
+Then you tell host-two and host-three's Postgres instances that they are PGD nodes—node-two and node-three—and that they must join a group on node-one. ### Create connection strings for each node -We calculate the connection strings for each of the node in advance. -Below are the connection strings for our 3 node example: +Calculate the connection strings for each of the nodes in advance. +Following are the connection strings for this 3-node example. -| Name | Node Name | Private IP | Connection string | +| Name | Node name | Private IP | Connection string | | ---------- | ---------- | --------------- | -------------------------------------- | | host-one | node-one | 192.168.254.166 | host=host-one dbname=bdrdb port=5444 | | host-two | node-two | 192.168.254.247 | host=host-two dbname=bdrdb port=5444 | @@ -70,7 +69,7 @@ Below are the connection strings for our 3 node example: ### Preparing the first node -Log into host-one's Postgres server. +Log in to host-one's Postgres server. ``` ssh admin@host-one @@ -79,7 +78,7 @@ sudo -iu enterprisedb psql bdrdb ### Create the first node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); @@ -87,21 +86,21 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); #### Create the top-level group -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. 
For our example, we will create a top-level group named `pgd`. +Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. ``` select bdr.create_node_group('pgd'); ``` -#### Create a sub-group +#### Create a subgroup -Using sub-groups to organize your nodes is preferred as it allows services like PGD proxy, which we will be configuring later, to coordinate their operations. -In a larger PGD installation, multiple sub-groups can exist providing organizational grouping that enables geographical mapping of clusters and localized resilience. -For that reason, in this example, we are creating a sub-group for our first nodes to enable simpler expansion and use of PGD proxy. +Using subgroups to organize your nodes is preferred, as it allows services like PGD Proxy, which you'll configure later, to coordinate their operations. +In a larger PGD installation, multiple subgroups can exist. These subgroups provide organizational grouping that enables geographical mapping of clusters and localized resilience. +For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and the use of PGD Proxy. -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a sub-group of the top-level group. -The sub-group name is the first parameter, the parent group is the second parameter. -For our example, we will create a sub-group `dc1` as a child of `pgd`. +Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. +The subgroup name is the first parameter, and the parent group is the second parameter. 
+This example creates a subgroup `dc1` as a child of `pgd`.


```
@@ -110,7 +109,7 @@ select bdr.create_node_group('dc1','pgd');

### Adding the second node

-Log into host-two's Postgres server
+Log in to host-two's Postgres server.

```
ssh admin@host-two
@@ -119,7 +118,7 @@ sudo -iu enterprisedb psql bdrdb

#### Create the second node

-We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it.
+Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it.

```
select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');
@@ -127,15 +126,15 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444');

#### Join the second node to the cluster

-Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter.
+Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter.

```
select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1');
```

-### Adding the third node
+### Add the third node

-Log into host-three's Postgres server
+Log in to host-three's Postgres server.
``` ssh admin@host-three @@ -144,7 +143,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the third node -We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); @@ -152,10 +151,10 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); #### Join the third node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); ``` -We have now created a PGD cluster. +A PGD cluster is now created. 
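As an illustrative aside (not part of the patched documentation), the node-creation pattern the steps above repeat on each host can be sketched in Python. The host names, node names, group names, and `port=5444` (EDB Postgres Advanced Server) mirror the worked example; adjust them for your own cluster.

```python
# Generate the key/value connection strings and the bdr.* SQL calls used
# in the worked example: create the first node and both groups on the
# first host, then create and join each remaining node.

def conn_str(host, dbname="bdrdb", port=5444):
    """Build a key/value connection string as used by PGD."""
    return f"host={host} dbname={dbname} port={port}"

def cluster_sql(hosts_nodes, subgroup="dc1", top_group="pgd"):
    """Yield (host, sql) pairs in the order the worked example runs them."""
    (first_host, first_node), *rest = hosts_nodes
    first_dsn = conn_str(first_host)
    yield first_host, f"select bdr.create_node('{first_node}','{first_dsn}');"
    yield first_host, f"select bdr.create_node_group('{top_group}');"
    yield first_host, f"select bdr.create_node_group('{subgroup}','{top_group}');"
    for host, node in rest:
        yield host, f"select bdr.create_node('{node}','{conn_str(host)}');"
        # Joining nodes connect to the first node and name the subgroup.
        yield host, f"select bdr.join_node_group('{first_dsn}','{subgroup}');"

for host, sql in cluster_sql([("host-one", "node-one"),
                              ("host-two", "node-two"),
                              ("host-three", "node-three")]):
    print(f"{host}: {sql}")
```

Each printed statement is then run in psql on the named host, as shown in the worked example above.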
diff --git a/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx index df92b8e1f99..f3de96fdd3a 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx @@ -7,56 +7,56 @@ deepToC: true ## Checking the cluster -With the cluster up and running, it is worthwhile running some basic checks on how effectively it is replicating. +With the cluster up and running, it's worthwhile to run some basic checks to see how effectively it's replicating. -In the following example, we show one quick way to do this but you should ensure that any testing you perform is appropriate for your use case. +The following example shows one quick way to do this, but you must ensure that any testing you perform is appropriate for your use case. * **Preparation** - * Ensure the cluster is ready - * Log into the database on host-one/node-one - * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);` - * When the query returns the cluster is ready + * Ensure the cluster is ready: + * Log in to the database on host-one/node-one. + * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`. + * When the query returns, the cluster is ready. * **Create data** - The simplest way to test the cluster is replicating is to log into one node, create a table and populate it. - * On node-one create a table + The simplest way to test that the cluster is replicating is to log in to one node, create a table, and populate it. 
+ * On node-one, create a table: ```sql CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT ); ``` - * On node-one populate the table + * On node-one, populate the table: ```sql INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000); ``` - * On node-one monitor performance + * On node-one, monitor performance: ```sql select * from bdr.node_replication_rates; ``` - * On node-one get a sum of the value column (for checking) + * On node-one, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` * **Check data** - * Log into node-two - Log into the database on host-two/node-two - * On node-two get a sum of the value column (for checking) + * Log in to node-two. + Log in to the database on host-two/node-two. + * On node-two, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` - * Compare with the result from node-one - * Log into node-three - Log into the database on host-three/node-three - * On node-three get a sum of the value column (for checking) + * Compare with the result from node-one. + * Log in to node-three. + Log in to the database on host-three/node-three. + * On node-three, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` - * Compare with the result from node-one and node-two + * Compare with the result from node-one and node-two. ## Worked example ### Preparation -Log into host-one's Postgres server. +Log in to host-one's Postgres server. ``` ssh admin@host-one sudo -iu enterprisedb psql bdrdb @@ -72,9 +72,9 @@ To ensure that the cluster is ready to go, run: select bdr.wait_slot_confirm_lsn(NULL, NULL) ``` -This query will block while the cluster is busy initializing and return when the cluster is ready. +This query blocks while the cluster is busy initializing and returns when the cluster is ready. 
-In another window, log into host-two's Postgres server
+In another window, log in to host-two's Postgres server:

```
ssh admin@host-two
@@ -83,23 +83,23 @@ sudo -iu enterprisedb psql bdrdb

### Create data

-#### On node-one create a table
+#### On node-one, create a table

-Run
+Run:

```sql
CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT );
```

-#### On node-one populate the table
+#### On node-one, populate the table

```
INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000);
```

-This will generate a table of 10000 rows of random values.
+This command generates a table of 10000 rows of random values.

-#### On node-one monitor performance
+#### On node-one, monitor performance

As soon as possible, run:

@@ -107,7 +107,7 @@ As soon as possible, run:
select * from bdr.node_replication_rates;
```

-And you should see statistics on how quickly that data has been replicated to the other two nodes.
+The command shows statistics about how quickly that data was replicated to the other two nodes:

```console
bdrdb=# select * from bdr.node_replication_rates;
@@ -120,7 +120,7 @@ al
(2 rows)
```

-And it's already replicated.
+And it's already replicated.

-#### On node-one get a checksum
+#### On node-one, get a checksum

Run:

@@ -130,7 +130,7 @@ Run:
select COUNT(*),SUM(value) from quicktest;
```

-to get some values from the generated data:
+This command gets some values from the generated data:

```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -143,7 +143,7 @@ __OUTPUT__

### Check data

-#### Log into host-two's Postgres server.
+#### Log in to host-two's Postgres server
```
ssh admin@host-two
sudo -iu enterprisedb psql bdrdb
@@ -151,7 +151,7 @@ sudo -iu enterprisedb psql bdrdb

This is your connection to PGD's node-two.
-#### On node-two get a checksum
+#### On node-two, get a checksum

Run:

@@ -159,7 +159,7 @@ Run:
select COUNT(*),SUM(value) from quicktest;
```

-to get node-two's values for the generated data:
+This command gets node-two's values for the generated data:

```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -172,11 +172,11 @@ __OUTPUT__

#### Compare with the result from node-one

-And the values will be identical.
+The values are identical.

-You can repeat the process with node-three, or generate new data on any node and see it replicate to the other nodes.
+You can repeat the process with node-three or generate new data on any node and see it replicate to the other nodes.

-#### Log into host-threes's Postgres server.
+#### Log in to host-three's Postgres server

```
-ssh admin@host-two
+ssh admin@host-three
sudo -iu enterprisedb psql bdrdb
@@ -184,7 +184,7 @@ sudo -iu enterprisedb psql bdrdb

This is your connection to PGD's node-three.

-#### On node-three get a checksum
+#### On node-three, get a checksum

Run:

@@ -192,7 +192,7 @@ Run:
select COUNT(*),SUM(value) from quicktest;
```

-to get node-three's values for the generated data:
+This command gets node-three's values for the generated data:

```sql
bdrdb=# select COUNT(*),SUM(value) from quicktest;
@@ -205,6 +205,4 @@ __OUTPUT__

#### Compare with the result from node-one and node-two

-And the values will be identical.
-
-
+The values are identical.
diff --git a/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx b/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx
index a716c1eedc6..10c94ecde9d 100644
--- a/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx
+++ b/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx
@@ -6,33 +6,34 @@ deepToC: true

## Configure proxies

-PGD can use proxies to direct traffic to one of the clusters nodes, selected automatically by the cluster.
-There are performance and availabilty reasons for using a proxy:
+PGD can use proxies to direct traffic to one of the cluster's nodes, selected automatically by the cluster.
+There are performance and availability reasons for using a proxy:

-* Performance: By directing all traffic and in particular write traffic, to one node, the node can resolve write conflicts locally and more efficiently.
-* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can automatically direct new traffic to a new, automatically selected, write leader.
+* Performance: By directing all traffic (in particular, write traffic) to one node, the node can resolve write conflicts locally and more efficiently.
+* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can direct new traffic to a new write leader that it selects.

-It is best practice to configure PGD Proxy for clusters to enable this behavior.
+It's best practice to configure PGD Proxy for clusters to enable this behavior.

### Configure the cluster for proxies

-To set up a proxy, you will need to first prepare the cluster and sub-group the proxies will be working with by:
+To set up a proxy, you first need to prepare the cluster and the subgroup that the proxies will work with:

-* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the sub-group. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the sub-group name, option name and new value as parameters.
-* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the sub-group it should be attached to.
-* Create a `pgdproxy` user on the cluster with a password (or other authentication)
+* Log in and set the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters.
+* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) function doesn't create a running proxy. Instead, it creates a space for a proxy to register itself with the cluster. The space contains configuration values that you can modify later. Initially, it's configured with default proxy options, such as a `listen_address` of `0.0.0.0`.
+* Configure proxy routes to each node by setting the `route_dsn` option for each node in the subgroup. The `route_dsn` value is the connection string the proxy uses to connect to that node. Use [`bdr.alter_node_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option) to set `route_dsn` for each node in the subgroup.
+* Create a pgdproxy user on the cluster with a password or other authentication.

### Configure each host as a proxy

-Once the cluster is ready, you will need to configure each host to run pgd-proxy by:
+Once the cluster is ready, you need to configure each host to run pgd-proxy:

-* Creating a `pgdproxy` local user
-* Creating a `.pgpass` file for that user which will allow it to log into the cluster as `pgdproxy`.
+* Create a pgdproxy local user.
+* Create a `.pgpass` file for that user that allows the user to log in to the cluster as pgdproxy.
* Modify the systemd service file for pgdproxy to use the pgdproxy user.
-* Create a proxy config file for the host which lists the connection strings for all the nodes in the sub-group, specifies the name that the proxy should use when connected and gives the endpoint connection string the proxy will accept connections on.
-* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`
+* Create a proxy config file for the host that lists the connection strings for all the nodes in the subgroup and specifies the name the proxy uses when fetching proxy options like `listen_address` and `listen_port`.
+* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`.
* Restart the systemd service and check its status.
-* Log into the proxy and verify its operation.
+* Log in to the proxy and verify its operation.

Further detail on all these steps is included in the worked example.

@@ -42,14 +43,14 @@ Further detail on all these steps is included in the worked example.

For proxies to function, the `dc1` subgroup must enable Raft and routing.

-Log into any node in the cluster, using psql to connect to the bdrdb database as the `enterprisedb` user, and execute:
+Log in to any node in the cluster, using psql to connect to the `bdrdb` database as the enterprisedb user. Execute:

-```
+```sql
SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true');
SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true');
```

-The [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view can be used to check the status of options previously set with bdr.alter_node_group_option():
+You can use the [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`:

```sql
SELECT node_group_name, enable_proxy_routing, enable_raft
@@ -66,9 +67,9 @@ bdrdb=#

Next, create a PGD proxy within the cluster using the `bdr.create_proxy` function.
-This function takes two parameters, the proxy's unique name and the group it should be a proxy for.
+This function takes two parameters: the proxy's unique name and the group you want it to be a proxy for.

-In our example, we want a proxy on each host in the dc1 sub-group:
+In this example, you want a proxy on each host in the `dc1` subgroup:

```
SELECT bdr.create_proxy('pgd-proxy-one','dc1');
@@ -76,7 +77,7 @@ SELECT bdr.create_proxy('pgd-proxy-two','dc1');
SELECT bdr.create_proxy('pgd-proxy-three','dc1');
```

-The [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view can be used to check that the proxies were created:
+You can use the [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created:

```sql
SELECT proxy_name, node_group_name
@@ -93,26 +94,44 @@ __OUTPUT__

## Create a pgdproxy user on the database

-Create a user named pgdproxy and give it a password. In this example we will use `proxysecret`
+Create a user named pgdproxy and give it a password. This example uses `proxysecret`.

-On any node, log into the bdrdb database as enterprisedb/postgres.
+On any node, log in to the `bdrdb` database as enterprisedb/postgres.

```
CREATE USER pgdproxy PASSWORD 'proxysecret';
GRANT bdr_superuser TO pgdproxy;
```

-## Create a pgdproxy user on each host
+## Configure proxy routes to each node
+
+Once a proxy has connected, it gets its DSN values (connection strings) from the cluster. The cluster needs to know the connection details that a proxy should use for each node in the subgroup. This is done by setting the `route_dsn` option for each node to a connection string that the proxy can use to connect to that node.
+
+Note that when a proxy starts, it gets its initial DSN from the proxy's config file. The `route_dsn` value set in this step and the value in the config file must match.
+
+On any node, log in to the `bdrdb` database as enterprisedb/postgres.
+ +```sql +SELECT bdr.alter_node_option('host-one', 'route_dsn', 'host=host-one dbname=bdrdb port=5444 user=pgdproxy'); +SELECT bdr.alter_node_option('host-two', 'route_dsn', 'host=host-two dbname=bdrdb port=5444 user=pgdproxy'); +SELECT bdr.alter_node_option('host-three', 'route_dsn', 'host=host-three dbname=bdrdb port=5444 user=pgdproxy'); ``` + +Note that the endpoints in this example specify `port=5444`. +This is necessary for EDB Postgres Advanced Server instances. +For EDB Postgres Extended and community PostgreSQL, you can omit this. + +## Create a pgdproxy user on each host + +```shell sudo adduser pgdproxy ``` -This user will need credentials to connect to the server. -We will create a .pgpass file with the `proxysecret` password in it. -Then we will lock down the `.pgpass` file so it is only accessible by its owner. +This user needs credentials to connect to the server. +Create a `.pgpass` file with the `proxysecret` password in it. +Then lock down the `.pgpass` file so it's accessible only by its owner. -``` +```shell echo -e "*:*:*:pgdproxy:proxysecret" | sudo tee /home/pgdproxy/.pgpass sudo chown pgdproxy /home/pgdproxy/.pgpass sudo chmod 0600 /home/pgdproxy/.pgpass @@ -120,60 +139,57 @@ sudo chmod 0600 /home/pgdproxy/.pgpass ## Configure the systemd service on each host -Switch the service file from using root to using the pgdproxy user +Switch the service file from using root to using the pgdproxy user. -``` +```shell sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service ``` Reload the systemd daemon. -``` +```shell sudo systemctl daemon-reload ``` ## Create a proxy config file for each host -The proxy configuration file will be slightly different for each host. -It is a YAML file which contains a cluster object. This in turn has three +The proxy configuration file is slightly different for each host. +It's a YAML file that contains a cluster object. 
The cluster object has three properties:

-The name of the PGD cluster's top-level group (as `name`).
-An array of endpoints of databases (as `endpoints`).
-The proxy definition object with a name and endpoint (as `proxy`).
+* The name of the PGD cluster's top-level group (as `name`)
+* An array of endpoints of databases (as `endpoints`)
+* The proxy definition object with a name (as `proxy`)

-The first two properties will be the same for all hosts:
+The first two properties are the same for all hosts:

```
cluster:
  name: pgd
  endpoints:
-    - host=host-one dbname=bdrdb port=5444
-    - host=host-two dbname=bdrdb port=5444
-    - host=host-three dbname=bdrdb port=5444
+    - "host=host-one dbname=bdrdb port=5444 user=pgdproxy"
+    - "host=host-two dbname=bdrdb port=5444 user=pgdproxy"
+    - "host=host-three dbname=bdrdb port=5444 user=pgdproxy"
```

-Remember that host-one, host-two and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running.
-We use the name of the host, not the node, for the endpoint connection.
+Remember that host-one, host-two, and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running.
+You use the name of the host, not the node, for the endpoint connection.

-Also note that the endpoints in this example specify port=5444.
+Again, note that the endpoints in this example specify `port=5444`.
This is necessary for EDB Postgres Advanced Server instances.
-For EDB Postgres Extended and Community PostgreSQL, this can be omitted.
+For EDB Postgres Extended and community PostgreSQL, you can set this to `port=5432`.

-
-The third property, `proxy`, has a `name` property and an `endpoint` property.
-The `name` property should be a name created with `bdr.create_proxy` earlier, and it will be different on each host.
-The `endpoint` property is a string which defines how the proxy presents itself as a connection string.
-A proxy cannot be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host.
-We typically use port 6432 for PGD proxies.
+The third property, `proxy`, has a `name` property.
+The `name` property is a name created with `bdr.create_proxy` earlier, and it's different on each host.
+A proxy can't be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host.
+Typically, you use port 6432 for PGD proxies.

```
proxy:
  name: pgd-proxy-one
-  endpoint: "host=localhost dbname=bdrdb port=6432"
```

-In this case, by using 'localhost' in the endpoint, we specify that this proxy will listen on the host where the proxy is running.
+The address and port the proxy listens on come from the proxy options stored in the cluster, such as the default `listen_address` of `0.0.0.0` set when the proxy was created.

## Install a PGD proxy configuration on each host

@@ -183,47 +199,47 @@ For each host, create the `/etc/edb/pgd-proxy` directory:

```
sudo mkdir -p /etc/edb/pgd-proxy
```

-Then on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory.
+Then, on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory.

-For our example, this could be run on host-one to create the file.
+For this example, you can run this on host-one to create the file:

```
cat <
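As an illustrative aside (not part of the patched documentation), the per-host `pgd-proxy-config.yml` layout described above can be sketched in Python: the `cluster` name and `endpoints` are shared by all hosts, and only the `proxy` name differs. The host names, port 5444, and the pgdproxy user mirror the worked example; treat them as assumptions for your own cluster.

```python
# Build the pgd-proxy-config.yml text for each host. Plain string
# assembly is used so the sketch has no third-party dependencies.

def proxy_config(cluster_name, hosts, proxy_name,
                 dbname="bdrdb", port=5444, user="pgdproxy"):
    """Return YAML text: shared cluster name and endpoints, plus the
    host-specific proxy name nested under the cluster object."""
    lines = ["cluster:", f"  name: {cluster_name}", "  endpoints:"]
    for host in hosts:
        # Endpoints use the host name, not the node name.
        lines.append(f'    - "host={host} dbname={dbname} port={port} user={user}"')
    lines += ["  proxy:", f"    name: {proxy_name}"]
    return "\n".join(lines) + "\n"

hosts = ["host-one", "host-two", "host-three"]
# One config per host; only the proxy name changes.
configs = {host: proxy_config("pgd", hosts, f"pgd-proxy-{suffix}")
           for host, suffix in zip(hosts, ["one", "two", "three"])}
print(configs["host-one"])
```

Each generated document would then be installed as `/etc/edb/pgd-proxy/pgd-proxy-config.yml` on its host, as the steps above describe.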