From 4f2911ac696ce28c137778856d410fa45b7ee54a Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 26 Feb 2024 16:47:32 +0530 Subject: [PATCH 01/48] PostGIS - 3.4.2 release branch Added release notes --- .../docs/postgis/3.2/01_release_notes/index.mdx | 2 ++ .../postgis/3.2/01_release_notes/rel_notes342.mdx | 14 ++++++++++++++ 2 files changed, 16 insertions(+) create mode 100644 product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/index.mdx b/product_docs/docs/postgis/3.2/01_release_notes/index.mdx index 67803d2ae71..84b4b455f04 100644 --- a/product_docs/docs/postgis/3.2/01_release_notes/index.mdx +++ b/product_docs/docs/postgis/3.2/01_release_notes/index.mdx @@ -1,6 +1,7 @@ --- title: "Release notes" navigation: +- rel_notes342 - rel_notes321 - rel_notes32 - rel_notes315 @@ -13,6 +14,7 @@ cover what was new in each release. | Version | Release date | | ------------------------ | ------------ | +| [3.4.2](rel_notes342) | 29 Feb 2024 | | [3.2.1](rel_notes321) | 04 Aug 2023 | | [3.2.0](rel_notes32) | 01 Dec 2022 | | [3.1.5](rel_notes315) | 03 Aug 2022| diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx b/product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx new file mode 100644 index 00000000000..73013eb708b --- /dev/null +++ b/product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx @@ -0,0 +1,14 @@ +--- +title: "PostGIS 3.4.2 release notes" +navTitle: Version 3.4.2 +--- + +Released: 29 Feb 2024 + +EDB PostGIS is a PostgreSQL extension that allows you to store geographic information systems (GIS) objects in an EDB Postgres Advanced Server database. + +New features, enhancements, bug fixes, and other changes in PostGIS 3.4.2 include: + +| Type | Description | +| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | +| Upstream merges | Merged with community PostGIS 3.4.2. See the community [Release Notes](https://postgis.net/docs/release_notes.html#idm47120) for details. | From 524fedc962e29b66eb7a097f42a9367cec83a865 Mon Sep 17 00:00:00 2001 From: edcrewe Date: Tue, 12 Mar 2024 14:10:48 +0000 Subject: [PATCH 02/48] feat(UPM-30067): provide more detailed customer help for verifying domains --- .../identity_provider/index.mdx | 41 +++++++++++++++++-- 1 file changed, 37 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index 71c4d648bcf..9ae1416b9a6 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -78,17 +78,50 @@ Once your identity provider is set up, you can view your connection status, ID, You need a verified domain so your users can have a streamlined login experience with their email address. 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. -1. Copy the TXT record and follow the instructions. +1. Copy the TXT record and follow the instructions to add it as a TXT record on that domain within your DNS provider's management console. 1. Select **Done**. - Your domain and its status appear on the **Domains** tab, where you can delete or verify it. Domains can take up to 48 hours to be verified. 
+ Your domain and its status appear on the **Domains** tab, where you can delete or verify it. + Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate, so that it can be verified. + + If your domain has not appeared as verified after some time. You can debug whether your domain has the matching verification text field by querying it directly with dns tools such as nslookup (see below) + + Click on the Verify button next to the domain at /settings/domains to check this matches the required TXT field. Domains can have many TXT fields, as long as one matches it will work. + +``` +> nslookup -type=TXT mydomain.com + +;; Truncated, retrying in TCP mode. +Server: 192.168.1.1 +Address: 192.168.1.1#53 +Non-authoritative answer: +... +mydomain.com text = “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” +``` To add another domain, select **Add Domain**. -When you have at least one verified domain, the identity provider status becomes **Active** on the **Identity Providers** tab. When the domain is no longer verified, the status becomes **Inactive**. +When you have at least one verified domain (with Status = Verified, in green), the identity provider status becomes **Active** on the **Identity Providers** tab. +When the domain is no longer verified, the status becomes **Inactive**. !!! Note - The identity provider status can take up to three minutes to update. + Your DNS provider can take up to 48 hours to update. Once verified, the identity provider status can take up to three minutes to update. + +### Domain expiry + +The EDB system has a 10 day expiry set for domains for checking if they are verified. + +Domains are bought via a leasing system from DNS providers. If the lease expires, you no longer own the domain and it disappears from the internet. +If this happens you will need to renew your domain with your DNS provider. + +Whether the domain failed to ever verify, ie within 10 days, or it expired months later. +It will appear as Status = Expired (in red). +You cannot reinstate an expired domain. +Because expiry means you may no longer own the domain, so it needs to be verified again. + +Please click on the bin icon to delete it. +Click Add Domain and recreate it. +Repeat the process above to set a new verification key for the domain and update the TXT record for it in your DNS provider's management console. ### Manage roles for added users From a34e86764bf741ffc5f2a9d9e6643a96d63dfdb7 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman Date: Tue, 12 Mar 2024 11:25:17 -0400 Subject: [PATCH 03/48] Update index.mdx --- .../identity_provider/index.mdx | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index 9ae1416b9a6..2bc6c0650c4 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -82,11 +82,11 @@ You need a verified domain so your users can have a streamlined login experience 1. Select **Done**. Your domain and its status appear on the **Domains** tab, where you can delete or verify it. - Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate, so that it can be verified. 
+ Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate before you can verify it.

  If your domain doesn't appear as verified after some time, you can debug whether your domain has the matching verification text field. Query it directly with DNS tools, such as nslookup.

  Select **Verify** next to the domain at `/settings/domains` to check whether this matches the required TXT field. Domains can have many TXT fields. As long as one matches, it works.

```
> nslookup -type=TXT mydomain.com
@@ -105,23 +105,23 @@ When you have at least one verified domain (with Status = Verified, in green), t
When the domain is no longer verified, the status becomes **Inactive**.

!!! Note
    Your DNS provider can take up to 48 hours to update. Once the domain is verified, the identity provider status can take up to three minutes to update.

### Domain expiry

The EDB system has a 10-day expiry set for checking whether domains are verified.

You buy domains from DNS providers by way of a leasing system. If the lease expires, you no longer own the domain, and it disappears from the Internet.
If this happens, you need to renew your domain with your DNS provider.

Whether the domain failed to verify within the 10 days or it expired months later,
it appears as **Status = Expired** (in red).
You can't reinstate an expired domain
because expiry means you might no longer own the domain. You need to verify it again.

To delete the domain, select the bin icon.
To re-create it, select **Add Domain**.
Set a new verification key for the domain and update the TXT record for it in your DNS provider's management console, as described in [Add a domain](#add-a-domain).
### Manage roles for added users

From 7510304356a307c79a026a2947646fd6c79de17a Mon Sep 17 00:00:00 2001
From: edcrewe
Date: Wed, 13 Mar 2024 08:48:50 +0000
Subject: [PATCH 04/48] feat(UPM-30067): review changes

---
 .../getting_started/identity_provider/index.mdx | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
index 2bc6c0650c4..5303e033ed6 100644
--- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
@@ -78,15 +78,16 @@ Once your identity provider is set up, you can view your connection status, ID,
 You need a verified domain so your users can have a streamlined login experience with their email address.

 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**.
-1. Copy the TXT record and follow the instructions to add it as a TXT record on that domain within your DNS provider's management console.
-1. Select **Done**.
+2. Copy the TXT record and follow the instructions in the on screen verify box, to add it as a TXT record on that domain within your DNS provider's management console.
+3. Select **Done**.

    Your domain and its status appear on the **Domains** tab, where you can delete or verify it.
    Domains can take up to 48 hours for the change of the domain record by the DNS provider to propagate before you can verify it.

+4. If your domain hasn't been verified after a day, you can debug whether your domain has the matching verification text field.
+   Select **Verify** next to the domain at `/settings/domains` to check the exact value of the required TXT field.
+   Query your domain directly with DNS tools, such as nslookup, to check whether you have an exact match for a `text = "verification"` field.
+   Domains can have many TXT fields. As long as one matches, it should verify.

 ```
 > nslookup -type=TXT mydomain.com

From ab5ef02d3cfb2de53007b9d19052fdc573a17a0b Mon Sep 17 00:00:00 2001
From: edcrewe
Date: Thu, 14 Mar 2024 09:27:28 +0000
Subject: [PATCH 05/48] feat(UPM-30067): add BA verify box instructions here too

---
 .../getting_started/identity_provider/index.mdx | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
index 5303e033ed6..af98cc4702e 100644
--- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx
@@ -78,7 +78,16 @@ Once your identity provider is set up, you can view your connection status, ID,
 You need a verified domain so your users can have a streamlined login experience with their email address.

 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**.
-2.
Copy the TXT record and follow the instructions in the on screen verify box, to add it as a TXT record on that domain within your DNS provider's management console. +2. Copy the TXT record and follow the instructions in the on screen verify box, to add it as a TXT record on that domain within your DNS provider's management console + Those instructions are repeated here: + i. Log in to your domain registrar or web host account. + ii. Navigate to the DNS settings for the domain you want to verify. + iii. Add a TXT record. + iv. In the Name field, enter @. + v. In the Value field, enter the verification string provided, eg. + “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” + vi. Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. + 3. Select **Done**. Your domain and its status appear on the **Domains** tab, where you can delete or verify it. From 8aee25785851b1a07a6e9fbad6404905cb508ff6 Mon Sep 17 00:00:00 2001 From: edcrewe Date: Thu, 14 Mar 2024 09:29:28 +0000 Subject: [PATCH 06/48] feat(UPM-30067): add BA verify box instructions here too --- .../getting_started/identity_provider/index.mdx | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index af98cc4702e..b246cf26838 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -80,13 +80,13 @@ You need a verified domain so your users can have a streamlined login experience 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. 2. Copy the TXT record and follow the instructions in the on screen verify box, to add it as a TXT record on that domain within your DNS provider's management console Those instructions are repeated here: - i. Log in to your domain registrar or web host account. - ii. Navigate to the DNS settings for the domain you want to verify. - iii. Add a TXT record. - iv. In the Name field, enter @. - v. In the Value field, enter the verification string provided, eg. + - Log in to your domain registrar or web host account. + - Navigate to the DNS settings for the domain you want to verify. + - Add a TXT record. + - In the Name field, enter @. + - In the Value field, enter the verification string provided, eg. “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” - vi. Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. + - Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. 3. Select **Done**. 
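The DNS propagation wait in the last step is the usual sticking point when verification stalls. As a sketch of how to confirm the record is visible before retrying verification — assuming the standard `nslookup` and `dig` tools are available, and with `mydomain.com` and the verification value standing in for your own — you can query the domain's TXT records directly:

```shell
# Check whether the verification TXT record has propagated yet.
# mydomain.com and the edb-biganimal-verification value are placeholders;
# substitute your own domain and the string shown in the verify box.
nslookup -type=TXT mydomain.com

# dig prints only the record values with +short, which is easier to scan:
dig TXT mydomain.com +short | grep 'edb-biganimal-verification'
```

If the string isn't returned yet, wait and retry; as the instructions note, propagation can take up to 48 hours.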
From 1b00d6c97a79446cadaec636c88164cc5c69e601 Mon Sep 17 00:00:00 2001 From: edcrewe Date: Thu, 14 Mar 2024 09:32:02 +0000 Subject: [PATCH 07/48] feat(UPM-30067): add BA verify box instructions here too --- .../release/getting_started/identity_provider/index.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx index b246cf26838..604dc5de0bc 100644 --- a/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/identity_provider/index.mdx @@ -78,14 +78,14 @@ Once your identity provider is set up, you can view your connection status, ID, You need a verified domain so your users can have a streamlined login experience with their email address. 1. On the **Domains** tab, enter the domain name and select **Next: Verify Domain**. -2. Copy the TXT record and follow the instructions in the on screen verify box, to add it as a TXT record on that domain within your DNS provider's management console - Those instructions are repeated here: +2. Copy the TXT record and follow the instructions in the on screen verify box (repeated below), to add it as a TXT record on that domain within your DNS provider's management console. + - Log in to your domain registrar or web host account. - Navigate to the DNS settings for the domain you want to verify. - Add a TXT record. - In the Name field, enter @. - In the Value field, enter the verification string provided, eg. - “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” + - “edb-biganimal-verification=VjpcxtIC57DujkKMtECSwo67FyfCExku” - Save your changes and wait for the DNS propagation period to complete. This can take up to 48 hours. 3. Select **Done**. From 7f141d154c8bf03fc3491baeb0f7eef05ce94345 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 18 Mar 2024 21:00:26 +0530 Subject: [PATCH 08/48] BigAnimal - in-app notifications Added Notifications topic as per [UPM-31169](https://enterprisedb.atlassian.net/browse/UPM-31169)/ [DOCS-210](https://enterprisedb.atlassian.net/browse/DOCS-210) --- .../administering_cluster/notifications.mdx | 30 +++++++++++++++++++ 1 file changed, 30 insertions(+) create mode 100644 product_docs/docs/biganimal/release/administering_cluster/notifications.mdx diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx new file mode 100644 index 00000000000..8a7b585114f --- /dev/null +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -0,0 +1,30 @@ +--- +title: Notifications +--- + +BigAnimal supports receiving in-app and email notifications and allows you to choose receive notifications of a selected type. + +Different types of events are sent as notifications. These notifications are set at different levels and users with different roles can configure this notifications. 
This table provides the list of events sent as notifications grouped by different levels at which they can be set:

| Level | Event | Role |
|-------|-------------------------|-------|
| Project | Upcoming maintenance upgrade on a cluster (24hr) | Project owner/editor |
| Project | Successful maintenance upgrade on a cluster | Project owner/editor |
| Project | Failed maintenance upgrade on a cluster | Project owner/editor |
| Project | Paused cluster will automatically be reactivated in 24 hours | Project owner/editor |
| Project | Paused cluster was automatically reactivated | Project owner/editor |
| Project | You must set up the encryption key permission for your CMK-enabled cluster | Project owner/editor |
| Project | Key error with CMK-enabled cluster | Project owner and project editor |
| Project | User is invited to a project (displays only to the Project owner) | Project owner |
| Project | New role is assigned to you | Project user |
| Project | Role is unassigned from you | Project user |
| Project | Failed connection to third-party monitoring integration (and future non-monitoring integrations) | Project owner/editor |
| Organization | Payment method added | Organization owner/admin |
| Organization | Personal access key is expiring | Organization owner |
| Organization | Machine user access key is expiring | Organization owner |

## Configuring notifications



## Viewing notifications
\ No newline at end of file

From 932eaa9ba147b567dd2be1b89845d909cde0eb Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 21 Mar 2024 16:22:00 +0530
Subject: [PATCH 09/48] Added few bullets in configuration section

---
 .../administering_cluster/notifications.mdx | 41 +++++++++++--------
 1 file changed, 25 insertions(+), 16 deletions(-)

diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
index 8a7b585114f..38ab6ebd361 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
@@ -6,25 +6,34 @@ BigAnimal supports receiving in-app and email notifications and allows you to ch

Different types of events are sent as notifications. These notifications are set at different levels, and users with different roles can configure these notifications.
This table provides the list of events sent as notifications grouped by different levels at which they can be set:

| Level        | Event                                                                                              | Role                             |
|--------------|----------------------------------------------------------------------------------------------------|----------------------------------|
| Project      | Upcoming maintenance upgrade on a cluster (24hr)                                                   | Project owner/editor             |
| Project      | Successful maintenance upgrade on a cluster                                                        | Project owner/editor             |
| Project      | Failed maintenance upgrade on a cluster                                                            | Project owner/editor             |
| Project      | Paused cluster will automatically be reactivated in 24 hours                                       | Project owner/editor             |
| Project      | Paused cluster was automatically reactivated                                                       | Project owner/editor             |
| Project      | You must set up the encryption key permission for your CMK-enabled cluster                         | Project owner/editor             |
| Project      | Key error with CMK-enabled cluster                                                                 | Project owner and project editor |
| Project      | User is invited to a project (displays only to the Project owner)                                  | Project owner                    |
| Project      | New role is assigned to you                                                                        | Project user                     |
| Project      | Role is unassigned from you                                                                        | Project user                     |
| Project      | Failed connection to third-party monitoring integration (and future non-monitoring integrations)   | Project owner/editor             |
| Organization | Payment method added                                                                               | Organization owner/admin         |
| Organization | Personal access key is expiring                                                                    | Organization owner               |
| Organization | Machine user access key is expiring                                                                | Organization owner               |

## Configuring notifications

+The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the UI or email or both. They can also configure email notifications for their teams with in their organization.
+
+Organization or project owners can add one or more team email addresses to their organizations or projects. They can enable/disable different notification types to be sent to the address(es) they provide.
+
+Notification settings made by a user is applicable only to that user.
+
+Project level notifications are to be configured for a project.
+
+If an email notification is enabled, the email is send to the email address associated with their login.
## Viewing notifications \ No newline at end of file From a3c13a99a6a7c9a64fe98acdb83921737e328b0a Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Thu, 28 Mar 2024 13:56:49 +0530 Subject: [PATCH 10/48] Added more content as per the PEP doc --- .../administering_cluster/notifications.mdx | 36 ++++++++++++++----- 1 file changed, 28 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 38ab6ebd361..c3363d452b6 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -8,6 +8,9 @@ Different types of events are sent as notifications. These notifications are set | Level | Event | Role | |--------------|--------------------------------------------------------------------------------------------------|----------------------------------| +| Organization | Payment method added | Organization owner/admin | +| Organization | Personal access key is expiring | Organization owner | +| Organization | Machine user access key is expiring | Organization owner | | Project | Upcoming maintenance upgrade on a cluster (24hr) | Project owner/editor | | Project | Successful maintenance upgrade on a cluster | Project owner/editor | | Project | Failed maintenance upgrade on a cluster | Project owner/editor | @@ -19,21 +22,38 @@ Different types of events are sent as notifications. These notifications are set | Project | New role is assigned to you | Project user | | Project | Role is unassigned from you | Project user | | Project | Failed connection to third-party monitoring integration (and future non-monitoring integrations) | Project owner/editor | -| Organization | Payment method added | Organization owner/admin | -| Organization | Personal access key is expiring | Organization owner | -| Organization | Machine user access key is expiring | Organization owner | ## Configuring notifications The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the UI or email or both. They can also configure email notifications for their teams with in their organization. -Organization or project owners can add one or more team email addresses to their organizations or projects. They can enable/disable different notification types to be sent to the address(es) they provide. - -Notification settings made by a user is applicable only to that user. +Organization/Project owners can add one or more team email addresses to their organizations/projects. They can enable/disable different notification types to be sent to the team email address(es) they provide. Project level notifications are to be configured for a project. -If an email notification is enabled, the email is send to the email address associated with their login. +Notification settings made by a user is applicable only to that user. If an email notification is enabled, the email is send to the email address associated with their login. + +Notifications related to unused features for an organization or a project aren't visible or configurable in the UI. + +## Viewing notifications + +Users in the following roles can view the notifications: +- Organization owners/admins can view the organization-level notifications. 
- Project owners/editors can view the project-level notifications.
+
+For users who have multiple roles within BigAnimal, each notification indicates the level and project it belongs to.
+
+Select the bell icon at the top of the BigAnimal portal to view the in-app notifications. From the bell icon, you can read a notification, mark it as unread, and archive it.
+
+To view the email notifications, check the inbox of your configured email addresses.
+
+## Manage notifications
+To manage the notifications:
+1. Log in to the BigAnimal portal.
+1. From the menu under your name in the top right panel, select **My Account**.
+1. Select the **Notifications** tab. Notifications are grouped by organizations and projects available to you.
+1. Select any specific organization/project to manage the notifications.
+   - Enable/disable the notification for a particular event using the toggle button.
+   - Select **Email** and **Inbox** next to an event to enable/disable the email and in-app notifications for the event.

From ce145fcaf03992f2769601db60ea23580e73ce10 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Thu, 28 Mar 2024 14:37:04 +0530
Subject: [PATCH 11/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx

Co-authored-by: Vishal Sawale

---
 .../administering_cluster/notifications.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
index c3363d452b6..93df06c1e96 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
@@ -25,7 +25,7 @@ Different types of events are sent as notifications. These notifications are set

 ## Configuring notifications

-The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the UI or email or both. They can also configure email notifications for their teams with in their organization.
+The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the UI or email or both. They can also configure email notifications for their teams within their organization.

 Organization/Project owners can add one or more team email addresses to their organizations/projects. They can enable/disable different notification types to be sent to the team email address(es) they provide.
From 01b036496c5911807ea7695307903d240da7fda6 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Tue, 2 Apr 2024 12:19:46 +0530
Subject: [PATCH 12/48] Update notifications.mdx

---
 .../administering_cluster/notifications.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
index 93df06c1e96..f83500b692d 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
@@ -31,7 +31,7 @@ Organization/Project owners can add one or more team email addresses to their or

 Project level notifications are to be configured for a project.

-Notification settings made by a user is applicable only to that user. If an email notification is enabled, the email is send to the email address associated with their login.
+Notification settings made by a user are applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with their login.

 Notifications related to unused features for an organization or a project aren't visible or configurable in the UI.

From d5c31ab1a3d931a7222c49e58da0c77920bae1fe Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Tue, 2 Apr 2024 14:15:17 +0530
Subject: [PATCH 13/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx

Co-authored-by: Nagesh Dhope

---
 .../administering_cluster/notifications.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
index f83500b692d..68d35295b49 100644
--- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
+++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx
@@ -25,7 +25,7 @@ Different types of events are sent as notifications. These notifications are set

 ## Configuring notifications

-The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the UI or email or both. They can also configure email notifications for their teams within their organization.
+The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox or email or both. They can also configure email notifications for their teams within their organization.

 Organization/Project owners can add one or more team email addresses to their organizations/projects. They can enable/disable different notification types to be sent to the team email address(es) they provide.
From 9c72f564a46b8616784bc964e67b97350911ffe7 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 2 Apr 2024 14:38:18 +0530 Subject: [PATCH 14/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx --- .../biganimal/release/administering_cluster/notifications.mdx | 1 - 1 file changed, 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 68d35295b49..4a1a8c7c81c 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -27,7 +27,6 @@ Different types of events are sent as notifications. These notifications are set The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox or email or both. They can also configure email notifications for their teams within their organization. -Organization/Project owners can add one or more team email addresses to their organizations/projects. They can enable/disable different notification types to be sent to the team email address(es) they provide. Project level notifications are to be configured for a project. From a4b02998c2c5ece3f2a4d73132e82095a2843b4f Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 3 Apr 2024 14:38:06 +0100 Subject: [PATCH 15/48] Added 2 known issues (default db name and extension replication) Signed-off-by: Dj Walker-Morgan --- .../biganimal/release/known_issues/known_issues_dha.mdx | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx index fbfa2c9e110..f34e201cad2 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx @@ -28,6 +28,9 @@ If you have already encountered this issue, reach out to Azure support: We're going to be provisioning a number of instances of in and need to be able to provision these instances in all AZs. Can you please ensure that subscription is able to provision this VM type in all AZs of . Thank you! ``` +### Changing the default database name is not possible +Currently, the default database for a replicated PGD cluster is `bdrdb`. This cannot be changed, either at initialization or after the cluster is created. + ## Replication ### A PGD replication slot may fail to transition cleanly from disconnect to catch up @@ -39,6 +42,9 @@ During a large data migration, when migrating to a PGD cluster, you may experien ### PGD leadership change on healthy cluster PGD clusters that are in a healthy state may experience a change in PGD node leadership, potentially resulting in failover. No intervention is needed as a new leader will be appointed. +### Extensions which require alternate roles are not supported +Where an extension requires a role other than the default role used for replication, it will fail when attempting to replicate. This is because PGD runs replication writer operations as a `SECURITY_RESTRICTED_OPERATION` to mitigate the risk of privilege escalation. Attempts to install such extensions may cause the cluster to fail to operate. 
+ ## Migration ### Connection interruption disrupts migration via Migration Toolkit From bee57f0d64090e1184c06089f1542a3a2ba9e68a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 13 Mar 2024 10:41:42 +0000 Subject: [PATCH 16/48] Added specific PG11 exception, added references to PGD AutoPartition to be clear it is PGD, formatted docs. Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/scaling.mdx | 160 +++++++++++++++------------- 1 file changed, 87 insertions(+), 73 deletions(-) diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx index f8fe9b972d7..ef7d43e66d3 100644 --- a/product_docs/docs/pgd/5/scaling.mdx +++ b/product_docs/docs/pgd/5/scaling.mdx @@ -1,13 +1,13 @@ --- -title: AutoPartition +title: PGD AutoPartition redirects: - ../bdr/scaling --- -AutoPartition allows you to split tables into several partitions. It lets -tables grow easily to large sizes using automatic -partitioning management. This capability uses features of PGD, -such as low-conflict locking of creating and dropping partitions. +PGD AutoPartition allows you to split tables into several partitions. It lets +tables grow easily to large sizes using automatic partitioning management. This +capability uses features of PGD, such as low-conflict locking of creating and +dropping partitions. You can create new partitions regularly and then drop them when the data retention period expires. @@ -18,32 +18,38 @@ your search_path, you need to schema-qualify the name of each function. ## Auto creation of partitions -[`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) creates or alters the definition of automatic range -partitioning for a table. If no definition exists, it's created. -Otherwise, later executions will alter the definition. +PGD AutoPartition uses the [`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) +function to create or alter the definition of automatic range partitioning for a table. If +no definition exists, it's created. Otherwise, later executions will alter the +definition. -[`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) doesn't lock the actual table. It changes the -definition of when and how new partition maintenance actions take place. +The +[`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) +function doesn't lock the actual table. It changes the definition of when and +how new partition maintenance actions take place. -[`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition) leverages the features that allow a partition to be -attached or detached/dropped without locking the rest of the table -(when the underlying Postgres version supports it). +PGD AutoPartition leverages underlying Postgres features that allow a partition +to be attached or detached/dropped without locking the rest of the table +(Autopartion currently only supports this when used with 2nd Quadrant Postgres 11). An ERROR is raised if the table isn't RANGE partitioned or a multi-column partition key is used. By default, AutoPartition manages partitions globally. In other words, when a -partition is created on one node, the same partition is created on all -other nodes in the cluster. Using the default makes all partitions consistent and guaranteed to -be available. For this capability, AutoPartition makes use of Raft. - -You can change this behavior by setting `managed_locally` to `true`. In that case, all partitions -are managed locally on each node. 
Managing partitions locally is useful when the -partitioned table isn't a replicated table, in which case you might not need or want -to have all partitions on all nodes. For example, the -built-in [`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) table isn't a replicated table. It's -managed by AutoPartition locally. Each node creates partitions for this table -locally and drops them once they're old enough. +partition is created on one node, the same partition is created on all other +nodes in the cluster. Using the default makes all partitions consistent and +guaranteed to be available. For this capability, AutoPartition makes use of +Raft. + +You can change this behavior by setting `managed_locally` to `true`. In that +case, all partitions are managed locally on each node. Managing partitions +locally is useful when the partitioned table isn't a replicated table, in which +case you might not need or want to have all partitions on all nodes. For +example, the built-in +[`bdr.conflict_history`](/pgd/latest/reference/catalogs-visible#bdrconflict_history) +table isn't a replicated table. It's managed by AutoPartition locally. Each node +creates partitions for this table locally and drops them once they're old +enough. Also consider: @@ -57,7 +63,7 @@ managed by AutoPartition. Doing so can make the AutoPartition metadata inconsistent and might cause it to fail. -## Autopartition Examples +## AutoPartition Examples Daily partitions, keep data for one month: @@ -71,7 +77,8 @@ unitsales int bdr.autopartition('measurement', '1 day', data_retention_period := '30 days'); ``` -Create five advance partitions when there are only two more partitions remaining. Each partition can hold 1 billion orders. +Create five advance partitions when there are only two more partitions +remaining. Each partition can hold 1 billion orders. ```sql bdr.autopartition('Orders', '1000000000', @@ -90,31 +97,32 @@ key of type `timestamp` or `date`, the `partition_increment` must be a valid constant of type `interval`. For example, specifying `1 Day` causes a new partition to be added each day, with partition bounds that are one day apart. -If the partition column is connected to a `snowflakeid`, `timeshard`, or `ksuuid` sequence, -you must specify the `partition_increment` as type `interval`. Otherwise, -if the partition key is integer or numeric, then the `partition_increment` -must be a valid constant of the same datatype. For example, specifying -`1000000` causes new partitions to be added every 1 million values. +If the partition column is connected to a `snowflakeid`, `timeshard`, or +`ksuuid` sequence, you must specify the `partition_increment` as type +`interval`. Otherwise, if the partition key is integer or numeric, then the +`partition_increment` must be a valid constant of the same datatype. For +example, specifying `1000000` causes new partitions to be added every 1 million +values. If the table has no existing partition, then the specified `partition_initial_lowerbound` is used as the lower bound for the first partition. If you don't specify `partition_initial_lowerbound`, then the system tries to derive its value from the partition column type and the specified -`partition_increment`. For example, if `partition_increment` is specified as `1 Day`, -then `partition_initial_lowerbound` is set to CURRENT -DATE. If `partition_increment` is specified as `1 Hour`, then -`partition_initial_lowerbound` is set to the current hour of the current -date. 
The bounds for the subsequent partitions are set using the -`partition_increment` value. +`partition_increment`. For example, if `partition_increment` is specified as `1 +Day`, then `partition_initial_lowerbound` is set to CURRENT DATE. If +`partition_increment` is specified as `1 Hour`, then +`partition_initial_lowerbound` is set to the current hour of the current date. +The bounds for the subsequent partitions are set using the `partition_increment` +value. The system always tries to have a certain minimum number of advance partitions. -To decide whether to create new partitions, it uses the -specified `partition_autocreate_expression`. This can be an expression that can be evaluated by SQL, -which is evaluated every time a check is performed. For example, -for a partitioned table on column type `date`, if -`partition_autocreate_expression` is specified as `DATE_TRUNC('day',CURRENT_DATE)`, -`partition_increment` is specified as `1 Day` and -`minimum_advance_partitions` is specified as `2`, then new partitions are +To decide whether to create new partitions, it uses the specified +`partition_autocreate_expression`. This can be an expression that can be +evaluated by SQL, which is evaluated every time a check is performed. For +example, for a partitioned table on column type `date`, if +`partition_autocreate_expression` is specified as +`DATE_TRUNC('day',CURRENT_DATE)`, `partition_increment` is specified as `1 Day` +and `minimum_advance_partitions` is specified as `2`, then new partitions are created until the upper bound of the last partition is less than `DATE_TRUNC('day', CURRENT_DATE) + '2 Days'::interval`. @@ -122,16 +130,16 @@ The expression is evaluated each time the system checks for new partitions. For a partitioned table on column type `integer`, you can specify the `partition_autocreate_expression` as `SELECT max(partcol) FROM -schema.partitioned_table`. The system then regularly checks if the maximum value of -the partitioned column is within the distance of `minimum_advance_partitions * partition_increment` -of the last partition's upper bound. Create an index on the `partcol` so that the query runs efficiently. -If you don't specify the `partition_autocreate_expression` for a partition table -on column type `integer`, `smallint`, or `bigint`, then the system -sets it to `max(partcol)`. +schema.partitioned_table`. The system then regularly checks if the maximum value +of the partitioned column is within the distance of `minimum_advance_partitions +* partition_increment` of the last partition's upper bound. Create an index on +the `partcol` so that the query runs efficiently. If you don't specify the +`partition_autocreate_expression` for a partition table on column type +`integer`, `smallint`, or `bigint`, then the system sets it to `max(partcol)`. -If the `data_retention_period` is set, partitions are -dropped after this period. Partitions are dropped at the same time as new -partitions are added, to minimize locking. If this value isn't set, you must drop the partitions manually. +If the `data_retention_period` is set, partitions are dropped after this period. +Partitions are dropped at the same time as new partitions are added, to minimize +locking. If this value isn't set, you must drop the partitions manually. The `data_retention_period` parameter is supported only for timestamp (and related) based partitions. The period is calculated by considering the upper @@ -140,38 +148,44 @@ upper bound. 
## Stopping automatic creation of partitions

Use
[`bdr.drop_autopartition()`](/pgd/latest/reference/autopartition#bdrdrop_autopartition)
to drop the autopartitioning rule for the given relation. All pending work items
for the relation are deleted, and no new work items are created.

## Waiting for partition creation

Partition creation is an asynchronous process. AutoPartition provides a set of
functions to wait for the partition to be created, locally or on all nodes.

Use
[`bdr.autopartition_wait_for_partitions()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions)
to wait for the creation of partitions on the local node. The function takes the
partitioned table name and a partition key column value and waits until the
partition that holds that value is created.

The function waits only for the partitions to be created locally. It doesn't
guarantee that the partitions also exist on the remote nodes.

To wait for the partition to be created on all PGD nodes, use the
[`bdr.autopartition_wait_for_partitions_on_all_nodes()`](/pgd/latest/reference/autopartition#bdrautopartition_wait_for_partitions_on_all_nodes)
function. This function internally checks local as well as all remote nodes and
waits until the partition is created everywhere.

## Finding a partition

Use the
[`bdr.autopartition_find_partition()`](/pgd/latest/reference/autopartition#bdrautopartition_find_partition)
function to find the partition for the given partition key value. If the partition
to hold that value doesn't exist, then the function returns NULL. Otherwise, the
OID of the partition is returned.

## Enabling or disabling AutoPartitioning

Use
[`bdr.autopartition_enable()`](/pgd/latest/reference/autopartition#bdrautopartition_enable)
to enable AutoPartitioning on the given table.
If AutoPartitioning is already
enabled, then no action occurs. Similarly, use
[`bdr.autopartition_disable()`](/pgd/latest/reference/autopartition#bdrautopartition_disable)
to disable AutoPartitioning on the given table.

From 5416404e8d40e919dab66c672dcdc12899c11495 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Tue, 2 Apr 2024 15:09:42 +0100
Subject: [PATCH 17/48] Adjusted messaging to allow for PGD5 not supporting PG11

Signed-off-by: Dj Walker-Morgan

---
 product_docs/docs/pgd/4/bdr/scaling.mdx | 7 ++++---
 product_docs/docs/pgd/5/scaling.mdx     | 2 +-
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/product_docs/docs/pgd/4/bdr/scaling.mdx b/product_docs/docs/pgd/4/bdr/scaling.mdx
index 8e20d943774..165e0b71152 100644
--- a/product_docs/docs/pgd/4/bdr/scaling.mdx
+++ b/product_docs/docs/pgd/4/bdr/scaling.mdx
@@ -22,9 +22,10 @@ Otherwise, later executions will alter the definition.
 `bdr.autopartition()` doesn't lock the actual table. It changes the definition
 of when and how new partition maintenance actions take place.

-`bdr.autopartition()` leverages the features that allow a partition to be
-attached or detached/dropped without locking the rest of the table
-(when the underlying Postgres version supports it).
+PGD AutoPartition leverages underlying Postgres features that allow a partition
+to be attached or detached/dropped without locking the rest of the table
+(AutoPartition currently only supports this when used with 2nd Quadrant Postgres
+11).

 An ERROR is raised if the table isn't RANGE partitioned or a multi-column
 partition key is used.

diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx
index ef7d43e66d3..0ca179d1b0c 100644
--- a/product_docs/docs/pgd/5/scaling.mdx
+++ b/product_docs/docs/pgd/5/scaling.mdx
@@ -30,7 +30,7 @@ how new partition maintenance actions take place.

 PGD AutoPartition leverages underlying Postgres features that allow a partition
 to be attached or detached/dropped without locking the rest of the table
-(Autopartion currently only supports this when used with 2nd Quadrant Postgres 11).
+(AutoPartition on PGD5 currently does not support this).

 An ERROR is raised if the table isn't RANGE partitioned or a multi-column
 partition key is used.

From 4ab5c065fd44e12d863f82d6fb7044fdde7a8b5a Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com>
Date: Thu, 4 Apr 2024 11:32:25 +0100
Subject: [PATCH 18/48] Shortened the scaling note.

---
 product_docs/docs/pgd/5/scaling.mdx | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/product_docs/docs/pgd/5/scaling.mdx b/product_docs/docs/pgd/5/scaling.mdx
index 0ca179d1b0c..610e840b0f7 100644
--- a/product_docs/docs/pgd/5/scaling.mdx
+++ b/product_docs/docs/pgd/5/scaling.mdx
@@ -23,14 +23,8 @@ function to create or alter the definition of automatic range partitioning for a
 no definition exists, it's created. Otherwise, later executions will alter the
 definition.

-The
-[`bdr.autopartition()`](/pgd/latest/reference/autopartition#bdrautopartition)
-function doesn't lock the actual table. It changes the definition of when and
-how new partition maintenance actions take place.
-
-PGD AutoPartition leverages underlying Postgres features that allow a partition
-to be attached or detached/dropped without locking the rest of the table
-(AutoPartition on PGD5 currently does not support this).
+PGD AutoPartition in PGD 5 currently locks the actual table while performing
+new partition maintenance operations.
An ERROR is raised if the table isn't RANGE partitioned or a multi-column
partition key is used.

From 0ea9ad2716cc45e761a12f22ac2cc3b2306feea0 Mon Sep 17 00:00:00 2001
From: Dj Walker-Morgan
Date: Thu, 4 Apr 2024 10:10:16 +0100
Subject: [PATCH 19/48] Added replication role name, single-lined some entries
 for easier management

Signed-off-by: Dj Walker-Morgan

---
 .../release/known_issues/known_issues_dha.mdx | 30 +++++++++++++-------
 1 file changed, 21 insertions(+), 9 deletions(-)

diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx
index f34e201cad2..b609662f7f5 100644
--- a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx
+++ b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx
@@ -4,15 +4,18 @@ navTitle: Distributed high availability/PGD known issues
 deepToC: true
 ---

-These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release.
+These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters.
+These known issues are tracked in our ticketing system and are expected to be resolved in a future release.

 ## Management/administration

 ### Deleting a PGD data group may not fully reconcile
-When deleting a PGD data group, the target group resources is physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD Groups. We recommend avoiding use of this feature until this is fixed and removed from the known issues list.
+When deleting a PGD data group, the target group resources are physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD Groups.
+We recommend avoiding use of this feature until this is fixed and removed from the known issues list.

 ### Adjusting PGD cluster architecture may not fully reconcile
-In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change hasn't taken effect in 1 hour, reach out to Support.
+In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete.
+If a change hasn't taken effect in 1 hour, reach out to Support.

 ### PGD cluster may fail to create due to Azure SKU issue
 In some cases, although a regional quota check may have passed initially when the PGD cluster is created, it may fail if an SKU critical for the witness nodes is unavailable across three availability zones.
@@ -29,29 +32,38 @@ We're going to be provisioning a number of instances of in Date: Thu, 4 Apr 2024 10:51:34 +0100 Subject: [PATCH 20/48] Renamed to _pgd and added redirect Signed-off-by: Dj Walker-Morgan --- .../known_issues/{known_issues_dha.mdx => known_issues_pgd.mdx} | 2 ++ 1 file changed, 2 insertions(+) rename product_docs/docs/biganimal/release/known_issues/{known_issues_dha.mdx => known_issues_pgd.mdx} (98%) diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx similarity index 98% rename from product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx rename to product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index b609662f7f5..25df81794b2 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -2,6 +2,8 @@ title: Known issues with distributed high availability/PGD navTitle: Distributed high availability/PGD known issues deepToC: true +redirect: + - /docs/biganimal/known-issues/know_issuses_dha --- These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. From 0aaa353d0240120878fb4ec7996e6bf7490394ec Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 4 Apr 2024 10:54:31 +0100 Subject: [PATCH 21/48] Fix redirect Signed-off-by: Dj Walker-Morgan --- .../docs/biganimal/release/known_issues/known_issues_pgd.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index 25df81794b2..348a3f2bd4f 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -3,7 +3,7 @@ title: Known issues with distributed high availability/PGD navTitle: Distributed high availability/PGD known issues deepToC: true redirect: - - /docs/biganimal/known-issues/know_issuses_dha + - /biganimal/latest/known_issues/known_issues_dha --- These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. From b6c7621ba82ca749b6705b08afdb504e0b82bff3 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Thu, 4 Apr 2024 11:45:46 +0100 Subject: [PATCH 22/48] Redirect fix 2 Signed-off-by: Dj Walker-Morgan --- .../docs/biganimal/release/known_issues/known_issues_pgd.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index 348a3f2bd4f..255f6d9a62a 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -2,8 +2,8 @@ title: Known issues with distributed high availability/PGD navTitle: Distributed high availability/PGD known issues deepToC: true -redirect: - - /biganimal/latest/known_issues/known_issues_dha +redirects: +- /biganimal/latest/known_issues/known_issues_dha/ --- These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. 
From ff0bc45df918790f7ad5942f3c745274a402d990 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 25 Mar 2024 11:23:10 +0000 Subject: [PATCH 23/48] Complete options for bdr.raft_leadership_transfer Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/reference/functions.mdx | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx index f22d858236a..ede80e29a4f 100644 --- a/product_docs/docs/pgd/5/reference/functions.mdx +++ b/product_docs/docs/pgd/5/reference/functions.mdx @@ -167,7 +167,7 @@ table and use that instead. Doing so doesn't require you to stop the consensus w #### Synopsis ```sql -bdr.consensus_snapshot_import(IN snapshot bytea) +bdr.consensus_snapshot_import(snapshot bytea) ``` Import a consensus snapshot that was exported by @@ -206,7 +206,7 @@ applied log position. #### Synopsis ```sql -bdr.consensus_snapshot_verify(IN snapshot bytea) +bdr.consensus_snapshot_verify(snapshot bytea) ``` Verify the given consensus snapshot that was exported by @@ -232,7 +232,9 @@ Alias for `bdr.get_consensus_status`. #### Synopsis ```sql -bdr.raft_leadership_transfer(IN node_name text, IN wait_for_completion boolean) +bdr.raft_leadership_transfer(node_name text, + wait_for_completion boolean, + node_group text DEFAULT NULL) ``` Request the node identified by `node_name` to be the Raft leader. The @@ -254,6 +256,12 @@ the requested node fails to become Raft leader (for example, due to network issues). We therefore recommend that you always set a `statement_timeout` with `wait_for_completion` to prevent an infinite loop. +The `node_group` is optional and can be used to specify the node group where the +leadership transfer should happen. If not specified, it defaults to NULL which +is interpreted as the top-level group in the cluster. If the `node_group` is +specified, the function will only transfer leadership within the specified node +group. + ## Utility functions ### `bdr.wait_slot_confirm_lsn` From ed66b8f249ed7dcd593efe8ca43b03d8903680d5 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Tue, 26 Mar 2024 21:21:38 +0000 Subject: [PATCH 24/48] Apply suggestions from code review --- product_docs/docs/pgd/5/reference/functions.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/functions.mdx b/product_docs/docs/pgd/5/reference/functions.mdx index ede80e29a4f..85d9cd62f2f 100644 --- a/product_docs/docs/pgd/5/reference/functions.mdx +++ b/product_docs/docs/pgd/5/reference/functions.mdx @@ -234,7 +234,7 @@ Alias for `bdr.get_consensus_status`. ```sql bdr.raft_leadership_transfer(node_name text, wait_for_completion boolean, - node_group text DEFAULT NULL) + node_group_name text DEFAULT NULL) ``` Request the node identified by `node_name` to be the Raft leader. The @@ -256,9 +256,9 @@ the requested node fails to become Raft leader (for example, due to network issues). We therefore recommend that you always set a `statement_timeout` with `wait_for_completion` to prevent an infinite loop. -The `node_group` is optional and can be used to specify the node group where the +The `node_group_name` is optional and can be used to specify the name of the node group where the leadership transfer should happen. If not specified, it defaults to NULL which -is interpreted as the top-level group in the cluster. 
If the `node_group` is
-specified, the function will only transfer leadership within the specified node
-group.
+is interpreted as the top-level group in the cluster. If the `node_group_name` is
+specified, the function will only transfer leadership within the specified node
+group.

From e47fccf9a814f951c20d7017f025ce6a387ba1f3 Mon Sep 17 00:00:00 2001
From: Matthew Gwillam <71602865+matthew123987@users.noreply.github.com>
Date: Thu, 4 Apr 2024 15:17:58 +0100
Subject: [PATCH 25/48] Update using_sql_profiler.mdx

---
 .../pem/9/profiling_workloads/using_sql_profiler.mdx | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
index 34e7c81de58..653fe783e34 100644
--- a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
+++ b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
@@ -10,12 +10,19 @@ redirects:
 - /pem/latest/pem_online_help/07_toc_pem_sql_profiler/05_sp_sql_profiler_tab/
 ---
 
-The SQL Profiler extension allows a database superuser to locate and optimize inefficient SQL code. Microsoft's SQL Server Profiler is very similar to PEM’s SQL Profiler in operation and capabilities.
+The SQL Profiler extension allows a user to locate and optimize inefficient SQL code. Microsoft's SQL Server Profiler is very similar to PEM’s SQL Profiler in operation and capabilities.
 
 SQL Profiler works with PEM to allow you to profile a server's workload. You can install and enable the SQL Profiler extension on servers with or without a PEM agent. However, you can run traces only in ad hoc mode on unmanaged servers and you can schedule them only on managed servers.
 
 SQL Profiler captures and displays a specific SQL workload for analysis in a SQL trace. You can start and review captured SQL traces immediately or save captured traces for review later. You can use SQL Profiler to create and store up to 15 named traces.
 
+## Permissions for SQL profiler
+
+To access the SQL profiler tool on PEM, there are two prerequisites:
+
+1. The user logged in to the PEM GUI must either be a superuser, or member of group `pem_comp_sqlprofiler` within the PEM server database. Please see [https://www.enterprisedb.com/docs/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality HOW DO WE LINK THIS?] for more information on pem groups.
+1. The user configured for the database server in the server tree (`Username`) must be a superuser on the database server that the trace is being run on (monitored server).
+
 ## Creating a trace
 
 You can use the Create Trace dialog box to define a SQL trace for any database on which SQL Profiler was installed and configured. To open the dialog box, select the database in the PEM client tree and select **Tools > Server > SQL Profiler > Create trace**.

From 84fbb3c18fa2731d5050defae4ef9deb5228471f Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 5 Apr 2024 13:53:50 +0530
Subject: [PATCH 26/48] Update
 product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx

---
 .../docs/pem/9/profiling_workloads/using_sql_profiler.mdx      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
index 653fe783e34..58da81f2bdd 100644
--- a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
+++ b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
@@ -20,7 +20,7 @@ SQL Profiler captures and displays a specific SQL workload for analysis in a SQL
 
 To access the SQL profiler tool on PEM, there are two prerequisites:
 
-1. The user logged in to the PEM GUI must either be a superuser, or member of group `pem_comp_sqlprofiler` within the PEM server database. Please see [https://www.enterprisedb.com/docs/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality HOW DO WE LINK THIS?] for more information on pem groups.
+1. The user logged in to the PEM GUI must either be a superuser or a member of group `pem_comp_sqlprofiler` within the PEM server database. For more information, see [pem groups] (/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality).
 1. The user configured for the database server in the server tree (`Username`) must be a superuser on the database server that the trace is being run on (monitored server).

From 87940c39214dffcf66ac81b85b90817a03fc7ad4 Mon Sep 17 00:00:00 2001
From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com>
Date: Fri, 5 Apr 2024 14:14:46 +0530
Subject: [PATCH 27/48] Update using_sql_profiler.mdx

---
 .../docs/pem/9/profiling_workloads/using_sql_profiler.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
index 58da81f2bdd..9e4c4f06923 100644
--- a/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
+++ b/product_docs/docs/pem/9/profiling_workloads/using_sql_profiler.mdx
@@ -20,7 +20,7 @@ SQL Profiler captures and displays a specific SQL workload for analysis in a SQL
 
 To access the SQL profiler tool on PEM, there are two prerequisites:
 
-1. The user logged in to the PEM GUI must either be a superuser or a member of group `pem_comp_sqlprofiler` within the PEM server database. For more information, see [pem groups] (/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality).
+1. The user logged in to the PEM GUI must either be a superuser or a member of group `pem_comp_sqlprofiler` within the PEM server database. For more information, see [pem groups](/pem/latest/managing_pem_server/#using-pem-predefined-roles-to-manage-access-to-pem-functionality).
 1. The user configured for the database server in the server tree (`Username`) must be a superuser on the database server that the trace is being run on (monitored server).
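+
+As a hypothetical illustration (the role name comes from the text above, but
+the user name `alice` is an example, not part of this change), a superuser on
+the PEM server database could grant that access with:
+
+```sql
+-- Add an existing PEM login user to the predefined SQL Profiler group.
+GRANT pem_comp_sqlprofiler TO alice;
+```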
## Creating a trace From 355b7b4bb8cb320adcc4e2d74be59e00af414169 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Wed, 27 Mar 2024 09:46:22 +0000 Subject: [PATCH 28/48] Edits to PGD PR5104 - with 5377 changes Signed-off-by: Dj Walker-Morgan --- .../installing/01-provisioning-hosts.mdx | 33 ++-- .../installing/02-install-postgres.mdx | 23 ++- .../03-configuring-repositories.mdx | 28 ++-- .../installing/04-installing-software.mdx | 146 +++++++++-------- .../installing/05-creating-cluster.mdx | 99 ++++++------ .../installing/06-check-cluster.mdx | 80 +++++----- .../installing/07-configure-proxies.mdx | 151 ++++++++++-------- .../installing/08-using-pgd-cli.mdx | 97 ++++++----- .../pgd/5/admin-manual/installing/index.mdx | 44 ++--- 9 files changed, 348 insertions(+), 353 deletions(-) diff --git a/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx b/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx index 79b0a10d8a4..a0a53479b71 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/01-provisioning-hosts.mdx @@ -1,6 +1,6 @@ --- -title: Step 1 - Provisioning Hosts -navTitle: Provisioning Hosts +title: Step 1 - Provisioning hosts +navTitle: Provisioning hosts deepToC: true --- @@ -8,19 +8,19 @@ deepToC: true The first step in the process of deploying PGD is to provision and configure hosts. -You can deploy to virtual machine instances in the cloud with Linux installed, on-premise virtual machines with Linux installed or on-premise physical hardware also with Linux installed. +You can deploy to virtual machine instances in the cloud with Linux installed, on-premises virtual machines with Linux installed, or on-premises physical hardware, also with Linux installed. -Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that can be accessed by you using SSH with a user that has superuser, administrator or sudo privileges. +Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges. -Each machine provisioned should be able to make connections to any other machine you are provisioning for your cluster. +Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster. -On cloud deployments, this may be done over the public network or over a VPC. +On cloud deployments, you can do this over the public network or over a VPC. -On-premise deployments should be able to connect over the local network. +On-premises deployments must be able to connect over the local network. !!! Note Cloud provisioning guides -If you are new to cloud provisioning, these guides may provide assistance: +If you're new to cloud provisioning, these guides may provide assistance: Vendor | Platform | Guide ------ | -------- | ------ @@ -36,29 +36,29 @@ If you are new to cloud provisioning, these guides may provide assistance: We recommend that you configure an admin user for each provisioned instance. The admin user must have superuser or sudo (to superuser) privileges. 
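+
+As a minimal sketch, assuming a RHEL-like system and an example user named
+admin (neither is mandated by this guide), such an admin user with
+passwordless sudo could be created like this:
+
+```shell
+# Create the user and grant passwordless sudo via a sudoers drop-in file,
+# which keeps the change isolated from the main sudoers configuration.
+sudo useradd -m admin
+echo 'admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/admin
+sudo chmod 440 /etc/sudoers.d/admin
+```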
-We also recommend that the admin user should be configured for passwordless SSH access using certificates. +We also recommend that the admin user be configured for passwordless SSH access using certificates. #### Ensure networking connectivity -With the admin user created, ensure that each machine can communicate with the other machines you are provisioning. +With the admin user created, ensure that each machine can communicate with the other machines you're provisioning. In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced -Server, 5432 for EDB Postgres Extended and Community PostgreSQL) should be open +Server, 5432 for EDB Postgres Extended and community PostgreSQL) must be open to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be -open to any applications which will connect to the cluster. Port 6432 is typically +open to any applications that will connect to the cluster. Port 6432 is typically used for PGD Proxy. ## Worked example -For the example in this section, we have provisioned three hosts with Red Hat Enterprise Linux 9. +For this example, three hosts with Red Hat Enterprise Linux 9 were provisioned: * host-one * host-two * host-three -Each is configured with a "admin" admin user. +Each is configured with an admin user named admin. -These hosts have been configured in the cloud and as such each host has both a public and private IP address. +These hosts were configured in the cloud. As such, each host has both a public and private IP address. Name | Public IP | Private IP ------|-----------|---------------------- @@ -66,11 +66,10 @@ These hosts have been configured in the cloud and as such each host has both a p host-two | 172.24.113.247 | 192.168.254.247 host-three | 172.24.117.23 | 192.168.254.135 -For our example cluster, we have also edited `/etc/hosts` to use those private IP addresses: +For the example cluster, `/etc/hosts` was also edited to use those private IP addresses: ``` 192.168.254.166 host-one 192.168.254.247 host-two 192.168.254.135 host-three ``` - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx b/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx index 7e44569c2d0..bf40116517b 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/02-install-postgres.mdx @@ -6,43 +6,43 @@ deepToC: true ## Installing Postgres -You will need to install Postgres on all the hosts. +You need to install Postgres on all the hosts. An EDB account is required to use the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can get installation instructions. Select your platform and Postgres edition. -You will be presented with 2 steps of instructions, the first covering how to configure the required package repository and the second covering how to install the packages from that repository. +You're presented with 2 steps of instructions. The first step covers how to configure the required package repository. The second step covers how to install the packages from that repository. Run both steps. ## Worked example -In our example, we will be installing EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9). +This example installs EDB Postgres Advanced Server 16 on Red Hat Enterprise Linux 9 (RHEL 9). ### EDB account -You'll need an EDB account to install both Postgres and PGD. +You need an EDB account to install both Postgres and PGD. 
-Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform and then scroll down the list to select the Postgres version you wish to install: +Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos) page where you can select your platform. Then scroll down the list to select the Postgres version you want to install: * EDB Postgres Advanced Server * EDB Postgres Extended * PostgreSQL -Upon selecting the version of the Postgres server you want, two steps will be displayed. +When you select the version of the Postgres server you want, two steps are displayed. ### 1: Configuring repositories -For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token will be automatically inserted by the EDB Repos 2.0 site into these scripts. -In our examples, it will be shown as `XXXXXXXXXXXXXXXX`. +For step 1, you can choose to use the automated script or step through the manual install instructions that are displayed. Your EDB repository token is inserted into these scripts by the EDB Repos 2.0 site. +In the examples, it's shown as `XXXXXXXXXXXXXXXX`. -On each provisioned host, either run the automatic repository installation script which will look like this: +On each provisioned host, you either run the automatic repository installation script or use the manual installation steps. The automatic script looks like this: ```shell curl -1sLf 'https://downloads.enterprisedb.com/XXXXXXXXXXXXXXXX/enterprise/setup.rpm.sh' | sudo -E bash ``` -Or use the manual installation steps which look like this: +The manual installation steps look like this: ```shell dnf install yum-utils @@ -54,9 +54,8 @@ dnf -q makecache -y --disablerepo='*' --enablerepo='enterprisedb-enterprise' ### 2: Install Postgres -For step 2, we just run the command to install the packages. +For step 2, run the command to install the packages: ``` sudo dnf -y install edb-as16-server ``` - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx b/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx index 10331b0ed6d..4ecad1cfa4e 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/03-configuring-repositories.mdx @@ -8,7 +8,7 @@ deepToC: true To install and run PGD requires that you configure repositories so that the system can download and install the appropriate packages. -The following operations should be carried out on each host. For the purposes of this exercise, each host will be a standard data node, but the procedure would be the same for other [node types](../../node_management/node_types) such as witness or subscriber-only nodes. +Perform the following operations on each host. For the purposes of this exercise, each host is a standard data node, but the procedure would be the same for other [node types](../node_management/node_types), such as witness or subscriber-only nodes. * Use your EDB account. * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. @@ -39,7 +39,7 @@ The following operations should be carried out on each host. For the purposes of ### Use your EDB account -You'll need an EDB account to install Postgres Distributed. +You need an EDB account to install Postgres Distributed. 
Use your EDB account to sign in to the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page, where you can obtain your repo token. @@ -47,7 +47,7 @@ On your first visit to this page, select **Request Access** to generate your rep ![EDB Repos 2.0](images/edbrepos2.0.png) -Copy the token to your clipboard using the **Copy Token** button and store it safely. +Select **Copy Token** to copy the token to your clipboard, and store the token safely. ### Set environment variables @@ -61,40 +61,40 @@ export EDB_SUBSCRIPTION_TOKEN= You can add this to your `.bashrc` script or similar shell profile to ensure it's always set. !!! Note -Your preferred platform may support storing this variable as a secret which can appear as an environment variable. If this is the case, don't add the setting to `.bashrc` and instead add it to your platform's secret manager. +Your preferred platform may support storing this variable as a secret, which can appear as an environment variable. If this is the case, add it to your platform's secret manager, and don't add the setting to `.bashrc`. !!! ### Configure the repository All the software you need is available from the EDB Postgres Distributed package repository. -You have the option to simply download and run a script to configure the EDB Postgres Distributed repository. -You can also download, inspect and then run that same script. -The following instructions also include the essential steps that the scripts take for any user wanting to manually run, or automate, the installation process. +You have the option to download and run a script to configure the EDB Postgres Distributed repository. +You can also download, inspect, and then run that same script. +The following instructions also include the essential steps that the scripts take for any user wanting to manually run the installation process or to automate it. #### RHEL/Other RHEL-based -You can autoinstall with automated OS detection +You can autoinstall with automated OS detection: ``` curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash ``` -If you wish to inspect the script that is generated for you run: +If you want to inspect the script that's generated for you, run: ``` curl -1sLfO "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" ``` -Then inspect the resulting `setup.rpm.sh` file. When you are happy to proceed, run: +Then inspect the resulting `setup.rpm.sh` file. When you're ready to proceed, run: ``` sudo -E bash setup.rpm.sh ``` -If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide: +If you want to perform all steps manually or use your own preferred deployment mechanism, you can use the following example as a guide. -You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you are using. Here we set it for RHEL compatible Linux version 9: +You will need to pass details of your Linux distribution and version. You may need to change the codename to match the version of RHEL you're using. 
This example sets it for RHEL-compatible Linux version 9: ``` export DISTRO="el" @@ -107,13 +107,13 @@ Now install the yum-utils package: sudo dnf install -y yum-utils ``` -The next step will import a GPG key for the repositories: +The next step imports a GPG key for the repositories: ``` sudo rpm --import "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/gpg.B09F406230DA0084.key" ``` -Now, we can import the repository details, add them to the local configuration and enable the repository. +Now you can import the repository details, add them to the local configuration, and enable the repository. ``` curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/config.rpm.txt?distro=$DISTRO&codename=$CODENAME" > /tmp/enterprise.repo diff --git a/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx b/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx index 153e9e9fe42..8ea39a2a8a9 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/04-installing-software.mdx @@ -7,65 +7,65 @@ deepToC: true ## Installing the PGD software With the repositories configured, you can now install the Postgres Distributed software. -These steps must be carried out on each host before proceeding to the next step. +You must perform these steps on each host before proceeding to the next step. -* **Install the packages** - * Install the PGD packages which include a server specific BDR package and generic PGD proxy and cli packages. (`edb-bdr5-`, `edb-pgd5-proxy`, and `edb-pgd5-cli`) +* **Install the packages.** + * Install the PGD packages, which include a server-specific BDR package and generic PGD Proxy and CLI packages. (`edb-bdr5-`, `edb-pgd5-proxy`, and `edb-pgd5-cli`) * **Ensure the Postgres database server has been initialized and started.** - * Use `systemctl status ` to check the service is running - * If not, initialize the database and start the service + * Use `systemctl status` to check that the service is running. + * If the service isn't running, initialize the database and start the service. -* **Configure the BDR extension** - * Add the BDR extension (`$libdir/bdr`) at the start of the shared_preload_libraries setting in `postgresql.conf`. +* **Configure the BDR extension.** + * Add the BDR extension (`$libdir/bdr`) at the start of the `shared_preload_libraries` setting in `postgresql.conf`. * Set the `wal_level` GUC variable to `logical` in `postgresql.conf`. * Turn on commit timestamp tracking by setting `track_commit_timestamp` to `'on'` in `postgresql.conf`. - * Raise the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

+ * Increase the maximum worker processes to 16 or higher by setting `max_worker_processes` to `'16'` in `postgresql.conf`.

!!! Note The `max_worker_processes` value - The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors. - To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings). - The value of 16 was calculated for the size of cluster we are deploying and must be raised for larger clusters. + The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. + To calculate the needed value, see [Postgres configuration/settings](../postgres-configuration/#postgres-settings). + The value of 16 was calculated for the size of cluster being deployed in this example. It must be increased for larger clusters. !!! * Set a password on the EnterprisedDB/Postgres user. * Add rules to `pg_hba.conf` to allow nodes to connect to each other. - * Ensure that these lines are present in `pg_hba.conf: + * Ensure that these lines are present in `pg_hba.conf`: ``` host all all all md5 host replication all all md5 ``` * Add a `.pgpass` file to allow nodes to authenticate each other. - * Configure a user with sufficient privileges to be able to log into the other nodes. + * Configure a user with sufficient privileges to log in to the other nodes. * See [The Password File](https://www.postgresql.org/docs/current/libpq-pgpass.html) in the Postgres documentation for more on the `.pgpass` file. * **Restart the server.** - * Verify the restarted server is running with the modified settings and the bdr extension is available + * Verify the restarted server is running with the modified settings and the BDR extension is available. * **Create the replicated database.** - * Log into the server's default database (`edb` for EPAS, `postgres` for PGE and Community). + * Log in to the server's default database (`edb` for EDB Postgres Advanced Server, `postgres` for PGE and community Postgres). * Use `CREATE DATABASE bdrdb` to create the default PGD replicated database. * Log out and then log back in to `bdrdb`. * Use `CREATE EXTENSION bdr` to enable the BDR extension and PGD to run on that database. -We will look in detail at the steps for EDB Postgres Advanced Server in the worked example below. +The worked example that follows shows the steps for EDB Postgres Advanced Server in detail. -If you are installing PGD with EDB Postgres Extended Server or Community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for Postgresql](#installing-pgd-for-postgresql). +If you're installing PGD with EDB Postgres Extended Server or community Postgres, the steps are similar, but details such as package names and paths are different. These differences are summarized in [Installing PGD for EDB Postgres Extended Server](#installing-pgd-for-edb-postgres-extended-server) and [Installing PGD for community Postgresql](#installing-pgd-for-community-postgresql). ## Worked example ### Install the packages -The first step is to install the packages. For each Postgres package, there is a `edb-bdr5-` package to go with it. -For example, if we are installing EDB Postgres Advanced Server (epas) version 16, we would install `edb-bdr5-epas16`. +The first step is to install the packages. Each Postgres package has an `edb-bdr5-` package to go with it. 
+For example, if you're installing EDB Postgres Advanced Server (epas) version 16, you'd install `edb-bdr5-epas16`. There are two other packages to also install: -- `edb-pgd5-proxy` for PGD Proxy. -- `edb-pgd5-cli` for the PGD command line tool. +- `edb-pgd5-proxy` for PGD Proxy +- `edb-pgd5-cli` for the PGD command line tool To install all of these packages on a RHEL or RHEL compatible Linux, run: @@ -75,15 +75,15 @@ sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli ### Ensure the database is initialized and started -If it wasn't initialized and started by the database's package initialisation (or you are repeating the process), you will need to initialize and start the server. +If the server wasn't initialized and started by the database's package initialization (or you're repeating the process), you need to initialize and start the server. -To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-16` so run: +To see if the server is running, you can check the service. The service name for EDB Advanced Server is `edb-as-16`, so run: ``` sudo systemctl status edb-as-16 ``` -If the server is not running, this will respond with: +If the server isn't running, the response is: ``` ○ edb-as-16.service - EDB Postgres Advanced Server 16 @@ -91,18 +91,18 @@ If the server is not running, this will respond with: Active: inactive (dead) ``` -The "Active: inactive (dead)" tells us we will need to initialize and start the server. +`Active: inactive (dead)` tells you that you need to initialize and start the server. -You will need to know the path to the setup script for your particular Postgres flavor. +You need to know the path to the setup script for your particular Postgres flavor. -For EDB Postgres Advanced Server, this script can be found in `/usr/edb/as16/bin` as `edb-as-16-setup`. -This command needs to be run with the `initdb` parameter and we need to pass an option setting the database to use UTF-8. +For EDB Postgres Advanced Server, you can find this script in `/usr/edb/as16/bin` as `edb-as-16-setup`. +Run this command with the `initdb` parameter and pass an option to set the database to use UTF-8: ``` sudo PGSETUP_INITDB_OPTIONS="-E UTF-8" /usr/edb/as16/bin/edb-as-16-setup initdb ``` -Once the database is initialized, we will start it which will enable us to continue configuring the BDR extension. +Once the database is initialized, start it so that you can continue configuring the BDR extension: ``` sudo systemctl start edb-as-16 @@ -110,24 +110,24 @@ sudo systemctl start edb-as-16 ### Configure the BDR extension -Installing EDB Postgres Advanced Server creates a system user `enterprisedb` with admin capabilities when connected to the database. We will be using this user to configure the BDR extension. +Installing EDB Postgres Advanced Server creates a system user enterprisedb with admin capabilities when connected to the database. You'll use this user to configure the BDR extension. #### Preload the BDR library -We want the bdr library to be preloaded with other libraries. -EPAS has a number of libraries already preloaded, so we have to prefix the existing list with the BDR library. +You need to preload the BDR library with other libraries. +EDB Postgres Advanced Server has a number of libraries already preloaded, so you have to prefix the existing list with the BDR library. 
``` echo -e "shared_preload_libraries = '\$libdir/bdr,\$libdir/dbms_pipe,\$libdir/edb_gen,\$libdir/dbms_aq'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null ``` !!!tip -This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the postgresql.conf file, which is owned by another user. +This command format (`echo ... | sudo ... tee -a ...`) appends the echoed string to the end of the `postgresql.conf` file, which is owned by another user. !!! #### Set the `wal_level` -The BDR extension needs to set the server to perform logical replication. We do this by setting `wal_level` to `logical`. +The BDR extension needs to set the server to perform logical replication. Do this by setting `wal_level` to `logical`: ``` echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null @@ -136,24 +136,24 @@ echo -e "wal_level = 'logical'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/ #### Enable commit timestamp tracking -The BDR extension also needs the commit timestamp tracking enabled. +The BDR extension also needs the commit timestamp tracking enabled: ``` echo -e "track_commit_timestamp = 'on'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null ``` -#### Raise `max_worker_processes` +#### Increase `max_worker_processes` To communicate between multiple nodes, Postgres Distributed nodes run more worker processes than usual. The default limit (8) is too low even for a small cluster. -The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases and other factors. -To calculate the needed value see [Postgres configuration/settings](../../postgres-configuration/#postgres-settings). +The `max_worker_processes` value is derived from the topology of the cluster, the number of peers, number of databases, and other factors. +To calculate the needed value, see [Postgres configuration/settings](../../../postgres-configuration/#postgres-settings). -For this example, with a 3 node cluster, we are using the value of 16. +This example, with a 3-node cluster, uses the value of 16. -Raise the maximum number of worker processes to 16 with this commmand: +Increase the maximum number of worker processes to 16: ``` echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb/as16/data/postgresql.conf >/dev/null @@ -161,14 +161,14 @@ echo -e "max_worker_processes = '16'" | sudo -u enterprisedb tee -a /var/lib/edb ``` -This value must be raised for larger clusters. +This value must be increased for larger clusters. #### Add a password to the Postgres enterprisedb user To allow connections between nodes, a password needs to be set on the Postgres enterprisedb user. -For this example, we are using the password `secret`. +This example uses the password `secret`. Select a different password for your deployments. -You will need this password when we get to [Creating the PGD Cluster](05-creating-cluster). +You will need this password for [connecting the cluster](05-connecting-cluster). ``` sudo -u enterprisedb psql edb -c "ALTER USER enterprisedb WITH PASSWORD 'secret'" @@ -186,7 +186,7 @@ echo -e "host all all all md5\nhost replication all all md5" | sudo tee -a /var/ ``` -It will append +The command appends the following to `pg_hba.conf`: ``` host all all all md5 @@ -194,15 +194,15 @@ host replication all all md5 ``` -to `pg_hba.conf` which will enable the nodes to replicate. 
+These commands enable the nodes to replicate. #### Enable authentication between nodes As part of the process of connecting nodes for replication, PGD logs into other nodes. -It will perform that log in as the user that Postgres is running under. -For epas, this is the `enterprisedb` user. -That user will need credentials to log into the other nodes. -We will supply these credentials using the `.pgpass` file which needs to reside in the user's home directory. +It performs that login as the user that Postgres is running under. +For EDB Postgres Advanced server, this is the enterprisedb user. +That user needs credentials to log into the other nodes. +Supply these credentials using the `.pgpass` file, which needs to reside in the user's home directory. The home directory for `enterprisedb` is `/var/lib/edb`. Run this command to create the file: @@ -216,7 +216,7 @@ You can read more about the `.pgpass` file in [The Password File](https://www.po ### Restart the server -After all these configuration changes, it is recommended that the server is restarted with: +After all these configuration changes, we recommend that you restart the server with: ``` sudo systemctl restart edb-as-16 @@ -225,14 +225,14 @@ sudo systemctl restart edb-as-16 #### Check the extension has been installed -At this point, it is worth checking the extension is actually available and our configuration has been correctly loaded. You can query the pg_available_extensions table for the bdr extension like this: +At this point, it's worth checking whether the extension is actually available and the configuration was correctly loaded. You can query the `pg_available_extensions` table for the BDR extension like this: ``` sudo -u enterprisedb psql edb -c "select * from pg_available_extensions where name like 'bdr'" ``` -Which should return an entry for the extension and its version. +This command returns an entry for the extension and its version: ``` name | default_version | installed_version | comment @@ -250,7 +250,7 @@ sudo -u enterprisedb psql edb -c "show all" | grep -e wal_level -e track_commit_ ### Create the replicated database The server is now prepared for PGD. -We need to next create a database named `bdrdb` and install the bdr extension when logged into it. +You need to next create a database named `bdrdb` and install the BDR extension when logged into it: ``` sudo -u enterprisedb psql edb -c "CREATE DATABASE bdrdb" @@ -258,14 +258,14 @@ sudo -u enterprisedb psql bdrdb -c "CREATE EXTENSION bdr" ``` -Finally, test the connection by logging into the server. +Finally, test the connection by logging in to the server. ``` sudo -u enterprisedb psql bdrdb ``` -You should be connected to the server. -Execute the command "\\dx" to list extensions installed. +You're connected to the server. +Execute the command "\\dx" to list extensions installed: ``` bdrdb=# \dx @@ -280,13 +280,13 @@ bdrdb=# \dx (5 rows) ``` -Notice that the bdr extension is listed in the table, showing it is installed. +Notice that the BDR extension is listed in the table, showing that it's installed. ## Summaries ### Installing PGD for EDB Postgres Advanced Server -These are all the commands used in this section gathered together for your convenience. +For your convenience, here's a summary of the commands used in this example. 
``` sudo dnf -y install edb-bdr5-epas16 edb-pgd5-proxy edb-pgd5-cli @@ -308,14 +308,14 @@ sudo -u enterprisedb psql bdrdb ### Installing PGD for EDB Postgres Extended Server -If installing PGD with EDB Postgres Extended Server, there are a number of differences from the EPAS installation. +Installing PGD with EDB Postgres Extended Server has a number of differences from the EDB Postgres Advanced Server installation: -* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number) -* A different setup utility should be called: /usr/edb/pgeNN/bin/edb-pge-NN-setup -* The service name is edb-pge-NN. -* The system user is postgres (not enterprisedb) -* The home directory for the postgres user is `/var/lib/pgqsl` -* There are no pre-existing libraries to be added to `shared_preload_libraries` +* The BDR package to install is named `edb-bdrV-pgextendedNN` (where V is the PGD version and NN is the PGE version number). +* Call a different setup utility: `/usr/edb/pgeNN/bin/edb-pge-NN-setup`. +* The service name is `edb-pge-NN`. +* The system user is postgres (not enterprisedb). +* The home directory for the postgres user is `/var/lib/pgqsl`. +* There are no preexisting libraries to add to `shared_preload_libraries`. #### Summary: Installing PGD for EDB Postgres Extended Server 16 @@ -339,14 +339,14 @@ sudo -u postgres psql bdrdb ### Installing PGD for Postgresql -If installing PGD with PostgresSQL, there are a number of differences from the EPAS installation. +Installing PGD with PostgresSQL has a number of differences from the EDB Postgres Advanced Server installation: -* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number) -* A different setup utility should be called: /usr/pgsql-NN/bin/postgresql-NN-setup -* The service name is postgresql-NN. -* The system user is postgres (not enterprisedb) -* The home directory for the postgres user is `/var/lib/pgqsl` -* There are no pre-existing libraries to be added to `shared_preload_libraries` +* The BDR package to install is named `edb-bdrV-pgNN` (where V is the PGD version and NN is the PostgreSQL version number). +* Call a different setup utility: `/usr/pgsql-NN/bin/postgresql-NN-setup`. +* The service name is `postgresql-NN`. +* The system user is postgres (not enterprisedb). +* The home directory for the postgres user is `/var/lib/pgqsl`. +* There are no preexisting libraries to add to `shared_preload_libraries`. #### Summary: Installing PGD for Postgresql 16 @@ -367,5 +367,3 @@ sudo -u postgres psql bdrdb -c "CREATE EXTENSION bdr" sudo -u postgres psql bdrdb ``` - - diff --git a/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx b/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx index b14caa5dac4..c94c70250f0 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/05-creating-cluster.mdx @@ -1,68 +1,67 @@ --- -title: Step 5 - Creating the PGD Cluster -navTitle: Creating the Cluster +title: Step 5 - Creating the PGD cluster +navTitle: Creating the cluster deepToC: true --- ## Creating the PGD cluster * **Create connection strings for each node**. -For each node we want to create a connection string which will allow PGD to perform replication. +For each node, create a connection string that will allow PGD to perform replication. 
- The connection string is a key/value string which starts with a `host=` and the IP address of the host (or if you have resolvable named hosts, the name of the host). + The connection string is a key/value string that starts with a `host=` and the IP address of the host. (If you have resolvable named hosts, the name of the host is used instead of the IP address.) - That is followed by the name of the database; `dbname=bdrdb` as we created a `bdrdb` database when [installing the software](04-installing-software). + That's followed by the name of the database. In this case, use `dbname=bdrdb`, as a `bdrdb` database was created when [installing the software](04-installing-software). - We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and Community PostgreSQL. + We recommend you also add the port number of the server to your connection string as `port=5444` for EDB Postgres Advanced Server and `port=5432` for EDB Postgres Extended and community PostgreSQL. * **Prepare the first node.** -To create the cluster, we select and log into one of the hosts Postgres server's `bdrdb` database. - +To create the cluster, select and log in to the `bdrdb` database on any host's Postgres server. * **Create the first node.** - Run `bdr.create_node` and give the node a name and its connection string where *other* nodes may connect to it. + Run `bdr.create_node` and give the node a name and its connection string where *other* nodes can connect to it. * Create the top-level group. - Create a top-level group for the cluster with `bdr.create_node_group` giving it a single parameter, the name of the top-level group. - * Create a sub-group. - Create a sub-group as a child of the top-level group with `bdr.create_node_group` giving it two parameters, the name of the sub-group and the name of the parent (and top-level) group. - This initializes the first node. + Create a top-level group for the cluster with `bdr.create_node_group`, giving it a single parameter: the name of the top-level group. + * Create a subgroup. + Create a subgroup as a child of the top-level group with `bdr.create_node_group`, giving it two parameters: the name of the subgroup and the name of the parent (and top-level) group. + This process initializes the first node. -* **Adding the second node.** - * Create the second node. - Log into another initialized node's `bdrdb` database. - Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it. - * Join the second node to the cluster - Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join. +* **Add the second node.** + * Create the second node. + Log in to another initialized node's `bdrdb` database. + Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it. + * Join the second node to the cluster. + Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join. -* **Adding the third node.** - * Create the third node - Log into another initialized node's `bdrdb` database. - Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes may connect to it. 
- * Join the third node to the cluster - Next, run `bdr.join_node_group` passing two parameters, the connection string for the first node and the name of the sub-group you want the node to join. +* **Add the third node.** + * Create the third node. + Log in to another initialized node's `bdrdb` database. + Run `bdr.create_node` and give the node a different name and its connection string where *other* nodes can connect to it. + * Join the third node to the cluster. + Next, run `bdr.join_node_group`, passing two parameters: the connection string for the first node and the name of the subgroup you want the node to join. ## Worked example -So far, we have: +So far, this example has: -* Created three Hosts. +* Created three hosts. * Installed a Postgres server on each host. * Installed Postgres Distributed on each host. * Configured the Postgres server to work with PGD on each host. -To create the cluster, we will tell `host-one`'s Postgres instance that it is a PGD node - `node-one` and create PGD groups on that node. -Then we will tell `host-two` and `host-three`'s Postgres instances that they are PGD nodes - `node-two` and `node-three` and that they should join a group on `node-one`. +To create the cluster, you tell host-one's Postgres instance that it's a PGD node—node-one—and create PGD groups on that node. +Then you tell host-two and host-three's Postgres instances that they are PGD nodes—node-two and node-three—and that they must join a group on node-one. ### Create connection strings for each node -We calculate the connection strings for each of the node in advance. -Below are the connection strings for our 3 node example: +Calculate the connection strings for each of the nodes in advance. +Following are the connection strings for this 3-node example. -| Name | Node Name | Private IP | Connection string | +| Name | Node name | Private IP | Connection string | | ---------- | ---------- | --------------- | -------------------------------------- | | host-one | node-one | 192.168.254.166 | host=host-one dbname=bdrdb port=5444 | | host-two | node-two | 192.168.254.247 | host=host-two dbname=bdrdb port=5444 | @@ -70,7 +69,7 @@ Below are the connection strings for our 3 node example: ### Preparing the first node -Log into host-one's Postgres server. +Log in to host-one's Postgres server. ``` ssh admin@host-one @@ -79,7 +78,7 @@ sudo -iu enterprisedb psql bdrdb ### Create the first node -Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string which other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create a node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); @@ -87,21 +86,21 @@ select bdr.create_node('node-one','host=host-one dbname=bdrdb port=5444'); #### Create the top-level group -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. Passing a single string parameter will create the top-level group with that name. For our example, we will create a top-level group named `pgd`. +Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function to create a top-level group for your PGD cluster. 
Passing a single string parameter creates the top-level group with that name. This example creates a top-level group named `pgd`. ``` select bdr.create_node_group('pgd'); ``` -#### Create a sub-group +#### Create a subgroup -Using sub-groups to organize your nodes is preferred as it allows services like PGD proxy, which we will be configuring later, to coordinate their operations. -In a larger PGD installation, multiple sub-groups can exist providing organizational grouping that enables geographical mapping of clusters and localized resilience. -For that reason, in this example, we are creating a sub-group for our first nodes to enable simpler expansion and use of PGD proxy. +Using subgroups to organize your nodes is preferred, as it allows services like PGD Proxy, which you'll configure later, to coordinate their operations. +In a larger PGD installation, multiple subgroups can exist. These subgroups provide organizational grouping that enables geographical mapping of clusters and localized resilience. +For that reason, this example creates a subgroup for the first nodes to enable simpler expansion and the use of PGD Proxy. -Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a sub-group of the top-level group. -The sub-group name is the first parameter, the parent group is the second parameter. -For our example, we will create a sub-group `dc1` as a child of `pgd`. +Call the [`bdr.create_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node_group) function again to create a subgroup of the top-level group. +The subgroup name is the first parameter, and the parent group is the second parameter. +This example creates a subgroup `dc1` as a child of `pgd`. ``` @@ -110,7 +109,7 @@ select bdr.create_node_group('dc1','pgd'); ### Adding the second node -Log into host-two's Postgres server +Log in to host-two's Postgres server ``` ssh admin@host-two @@ -119,7 +118,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the second node -We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); @@ -127,15 +126,15 @@ select bdr.create_node('node-two','host=host-two dbname=bdrdb port=5444'); #### Join the second node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-two to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); ``` -### Adding the third node +### Add the third node -Log into host-three's Postgres server +Log in to host-three's Postgres server. 
``` ssh admin@host-three @@ -144,7 +143,7 @@ sudo -iu enterprisedb psql bdrdb #### Create the third node -We call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string which other nodes can use to connect to it. +Call the [`bdr.create_node`](/pgd/latest/reference/nodes-management-interfaces#bdrcreate_node) function to create this node, passing it the node name and a connection string that other nodes can use to connect to it. ``` select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); @@ -152,10 +151,10 @@ select bdr.create_node('node-three','host=host-three dbname=bdrdb port=5444'); #### Join the third node to the cluster -Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group) we can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group, and the group name as a second parameter. +Using [`bdr.join_node_group`](/pgd/latest/reference/nodes-management-interfaces#bdrjoin_node_group), you can ask node-three to join node-one's `dc1` group. The function takes as a first parameter the connection string of a node already in the group and the group name as a second parameter. ``` select bdr.join_node_group('host=host-one dbname=bdrdb port=5444','dc1'); ``` -We have now created a PGD cluster. +A PGD cluster is now created. diff --git a/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx b/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx index df92b8e1f99..f3de96fdd3a 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/06-check-cluster.mdx @@ -7,56 +7,56 @@ deepToC: true ## Checking the cluster -With the cluster up and running, it is worthwhile running some basic checks on how effectively it is replicating. +With the cluster up and running, it's worthwhile to run some basic checks to see how effectively it's replicating. -In the following example, we show one quick way to do this but you should ensure that any testing you perform is appropriate for your use case. +The following example shows one quick way to do this, but you must ensure that any testing you perform is appropriate for your use case. * **Preparation** - * Ensure the cluster is ready - * Log into the database on host-one/node-one - * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);` - * When the query returns the cluster is ready + * Ensure the cluster is ready: + * Log in to the database on host-one/node-one. + * Run `select bdr.wait_slot_confirm_lsn(NULL, NULL);`. + * When the query returns, the cluster is ready. * **Create data** - The simplest way to test the cluster is replicating is to log into one node, create a table and populate it. - * On node-one create a table + The simplest way to test that the cluster is replicating is to log in to one node, create a table, and populate it. 
+ * On node-one, create a table: ```sql CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT ); ``` - * On node-one populate the table + * On node-one, populate the table: ```sql INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000); ``` - * On node-one monitor performance + * On node-one, monitor performance: ```sql select * from bdr.node_replication_rates; ``` - * On node-one get a sum of the value column (for checking) + * On node-one, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` * **Check data** - * Log into node-two - Log into the database on host-two/node-two - * On node-two get a sum of the value column (for checking) + * Log in to node-two. + Log in to the database on host-two/node-two. + * On node-two, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` - * Compare with the result from node-one - * Log into node-three - Log into the database on host-three/node-three - * On node-three get a sum of the value column (for checking) + * Compare with the result from node-one. + * Log in to node-three. + Log in to the database on host-three/node-three. + * On node-three, get a sum of the value column (for checking): ```sql select COUNT(*),SUM(value) from quicktest; ``` - * Compare with the result from node-one and node-two + * Compare with the result from node-one and node-two. ## Worked example ### Preparation -Log into host-one's Postgres server. +Log in to host-one's Postgres server. ``` ssh admin@host-one sudo -iu enterprisedb psql bdrdb @@ -72,9 +72,9 @@ To ensure that the cluster is ready to go, run: select bdr.wait_slot_confirm_lsn(NULL, NULL) ``` -This query will block while the cluster is busy initializing and return when the cluster is ready. +This query blocks while the cluster is busy initializing and returns when the cluster is ready. -In another window, log into host-two's Postgres server +In another window, log in to host-two's Postgres server: ``` ssh admin@host-two @@ -83,23 +83,23 @@ sudo -iu enterprisedb psql bdrdb ### Create data -#### On node-one create a table +#### On node-one, create a table -Run +Run: ```sql CREATE TABLE quicktest ( id SERIAL PRIMARY KEY, value INT ); ``` -#### On node-one populate the table +#### On node-one, populate the table ``` INSERT INTO quicktest (value) SELECT random()*10000 FROM generate_series(1,10000); ``` -This will generate a table of 10000 rows of random values. +This command generates a table of 10000 rows of random values. -#### On node-one monitor performance +#### On node-one, monitor performance As soon as possible, run: @@ -107,7 +107,7 @@ As soon as possible, run: select * from bdr.node_replication_rates; ``` -And you should see statistics on how quickly that data has been replicated to the other two nodes. +The command shows statistics about how quickly that data was replicated to the other two nodes: ```console bdrdb=# select * from bdr.node_replication_rates; @@ -120,7 +120,7 @@ al (2 rows) ``` -And it's already replicated. +And it's already replicated. #### On node-one get a checksum @@ -130,7 +130,7 @@ Run: select COUNT(*),SUM(value) from quicktest; ``` -to get some values from the generated data: +This command gets some values from the generated data: ```sql bdrdb=# select COUNT(*),SUM(value) from quicktest; @@ -143,7 +143,7 @@ __OUTPUT__ ### Check data -#### Log into host-two's Postgres server. 
+#### Log in to host-two's Postgres server ``` ssh admin@host-two sudo -iu enterprisedb psql bdrdb @@ -151,7 +151,7 @@ sudo -iu enterprisedb psql bdrdb This is your connection to PGD's node-two. -#### On node-two get a checksum +#### On node-two, get a checksum Run: @@ -159,7 +159,7 @@ Run: select COUNT(*),SUM(value) from quicktest; ``` -to get node-two's values for the generated data: +This command gets node-two's values for the generated data: ```sql bdrdb=# select COUNT(*),SUM(value) from quicktest; @@ -172,11 +172,11 @@ __OUTPUT__ #### Compare with the result from node-one -And the values will be identical. +The values are identical. -You can repeat the process with node-three, or generate new data on any node and see it replicate to the other nodes. +You can repeat the process with node-three or generate new data on any node and see it replicate to the other nodes. -#### Log into host-threes's Postgres server. +#### Log in to host-three's Postgres server ``` ssh admin@host-two sudo -iu enterprisedb psql bdrdb @@ -184,7 +184,7 @@ sudo -iu enterprisedb psql bdrdb This is your connection to PGD's node-three. -#### On node-three get a checksum +#### On node-three, get a checksum Run: @@ -192,7 +192,7 @@ Run: select COUNT(*),SUM(value) from quicktest; ``` -to get node-three's values for the generated data: +This command gets node-three's values for the generated data: ```sql bdrdb=# select COUNT(*),SUM(value) from quicktest; @@ -205,6 +205,4 @@ __OUTPUT__ #### Compare with the result from node-one and node-two -And the values will be identical. - - +The values are identical. diff --git a/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx b/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx index a716c1eedc6..10c94ecde9d 100644 --- a/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx +++ b/product_docs/docs/pgd/5/admin-manual/installing/07-configure-proxies.mdx @@ -6,33 +6,34 @@ deepToC: true ## Configure proxies -PGD can use proxies to direct traffic to one of the clusters nodes, selected automatically by the cluster. -There are performance and availabilty reasons for using a proxy: +PGD can use proxies to direct traffic to one of the cluster's nodes, selected automatically by the cluster. +There are performance and availability reasons for using a proxy: -* Performance: By directing all traffic and in particular write traffic, to one node, the node can resolve write conflicts locally and more efficiently. -* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can automatically direct new traffic to a new, automatically selected, write leader. +* Performance: By directing all traffic (in particular, write traffic) to one node, the node can resolve write conflicts locally and more efficiently. +* Availability: When a node is taken down for maintenance or goes offline for other reasons, the proxy can direct new traffic to a new write leader that it selects. -It is best practice to configure PGD Proxy for clusters to enable this behavior. +It's best practice to configure PGD Proxy for clusters to enable this behavior. 
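+
+As a sketch of what this looks like from the client side, an application connects to the proxy's endpoint instead of to a named node, and the proxy routes the session to the current write leader. The host and port here are illustrative assumptions (the worked example below registers its proxies on port 6432):
+
+```shell
+# Hypothetical client connection through PGD Proxy rather than a specific node;
+# substitute the host running your proxy. Port 6432 is the port used later in this example.
+psql "host=host-one port=6432 dbname=bdrdb"
+```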
### Configure the cluster for proxies

-To set up a proxy, you will need to first prepare the cluster and sub-group the proxies will be working with by:
+To set up a proxy, you first need to prepare the cluster and the subgroup that the proxies will work with:

-* Logging in and setting the `enable_raft` and `enable_proxy_routing` node group options to `true` for the sub-group. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the sub-group name, option name and new value as parameters.
-* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) and passing the new proxy name and the sub-group it should be attached to.
-* Create a `pgdproxy` user on the cluster with a password (or other authentication)
+* Log in and set the `enable_raft` and `enable_proxy_routing` node group options to `true` for the subgroup. Use [`bdr.alter_node_group_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_group_option), passing the subgroup name, option name, and new value as parameters.
+* Create as many uniquely named proxies as you plan to deploy using [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy), passing the new proxy name and the subgroup to attach it to. The [`bdr.create_proxy`](/pgd/latest/reference/routing#bdrcreate_proxy) function doesn't start a running proxy. Instead, it creates a space for a proxy to register itself with the cluster. The space contains configuration values that can be modified later. Initially, it's configured with default proxy options, such as a `listen_address` of `0.0.0.0`.
+* Configure proxy routes to each node by setting `route_dsn` for each node in the subgroup. The `route_dsn` is the connection string that the proxy uses to connect to that node. Use [`bdr.alter_node_option`](/pgd/latest/reference/nodes-management-interfaces#bdralter_node_option) to set the `route_dsn` for each node in the subgroup.
+* Create a pgdproxy user on the cluster with a password or other authentication.

### Configure each host as a proxy

-Once the cluster is ready, you will need to configure each host to run pgd-proxy by:
+Once the cluster is ready, you need to configure each host to run pgd-proxy:

-* Creating a `pgdproxy` local user
-* Creating a `.pgpass` file for that user which will allow it to log into the cluster as `pgdproxy`.
+* Create a pgdproxy local user.
+* Create a `.pgpass` file for that user that allows the user to log in to the cluster as pgdproxy.
 * Modify the systemd service file for pgdproxy to use the pgdproxy user.
-* Create a proxy config file for the host which lists the connection strings for all the nodes in the sub-group, specifies the name that the proxy should use when connected and gives the endpoint connection string the proxy will accept connections on.
-* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`
+* Create a proxy config file for the host that lists the connection strings for all the nodes in the subgroup and specifies the name for the proxy to use when fetching proxy options like `listen_address` and `listen_port`.
+* Install that file as `/etc/edb/pgd-proxy/pgd-proxy-config.yml`.
 * Restart the systemd service and check its status.
-* Log into the proxy and verify its operation.
+* Log in to the proxy and verify its operation.

Further detail on all these steps is included in the worked example.
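+
+If you need to adjust one of those registered defaults later, PGD also provides a `bdr.alter_proxy_option` function. The following is a sketch only; the option name and value shown are illustrative, so check the routing reference for the exact options your PGD version supports:
+
+```sql
+-- Illustrative: update the port that the registered proxy listens on.
+SELECT bdr.alter_proxy_option('pgd-proxy-one', 'listen_port', '6432');
+```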
@@ -42,14 +43,14 @@ Further detail on all these steps is included in the worked example. For proxies to function, the `dc1` subgroup must enable Raft and routing. -Log into any node in the cluster, using psql to connect to the bdrdb database as the `enterprisedb` user, and execute: +Log in to any node in the cluster, using psql to connect to the `bdrdb` database as the enterprisedb user. Execute: -``` +```sql SELECT bdr.alter_node_group_option('dc1', 'enable_raft', 'true'); SELECT bdr.alter_node_group_option('dc1', 'enable_proxy_routing', 'true'); ``` -The [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view can be used to check the status of options previously set with bdr.alter_node_group_option(): +You can use the [`bdr.node_group_summary`](/pgd/latest/reference/catalogs-visible#bdrnode_group_summary) view to check the status of options previously set with `bdr.alter_node_group_option()`: ```sql SELECT node_group_name, enable_proxy_routing, enable_raft @@ -66,9 +67,9 @@ bdrdb=# Next, create a PGD proxy within the cluster using the `bdr.create_proxy` function. -This function takes two parameters, the proxy's unique name and the group it should be a proxy for. +This function takes two parameters: the proxy's unique name and the group you want it to be a proxy for. -In our example, we want a proxy on each host in the dc1 sub-group: +In this example, you want a proxy on each host in the `dc1` subgroup: ``` SELECT bdr.create_proxy('pgd-proxy-one','dc1'); @@ -76,7 +77,7 @@ SELECT bdr.create_proxy('pgd-proxy-two','dc1'); SELECT bdr.create_proxy('pgd-proxy-three','dc1'); ``` -The [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view can be used to check that the proxies were created: +You can use the [`bdr.proxy_config_summary`](/pgd/latest/reference/catalogs-internal#bdrproxy_config_summary) view to check that the proxies were created: ```sql SELECT proxy_name, node_group_name @@ -93,26 +94,44 @@ __OUTPUT__ ## Create a pgdproxy user on the database -Create a user named pgdproxy and give it a password. In this example we will use `proxysecret` +Create a user named pgdproxy and give it a password. This example uses `proxysecret`. -On any node, log into the bdrdb database as enterprisedb/postgres. +On any node, log into the `bdrdb` database as enterprisedb/postgres. ``` CREATE USER pgdproxy PASSWORD 'proxysecret'; GRANT bdr_superuser TO pgdproxy; ``` -## Create a pgdproxy user on each host +## Configure proxy routes to each node + +Once a proxy has connected, it gets its dsn values (connection strings) from the cluster. The cluster needs to know the connection details that a proxy should use for each node in the subgroup. This is done by setting the `route_dsn` option for each node to a connection string that the proxy can use to connect to that node. + +Please note that when a proxy starts, it gets the initial dsn from the proxy's config file. The route_dsn value set in this step and in config file should match. +On any node, log into the bdrdb database as enterprisedb/postgres. + +```sql +SELECT bdr.alter_node_option('host-one', 'route_dsn', 'host=host-one dbname=bdrdb port=5444 user=pgdproxy'); +SELECT bdr.alter_node_option('host-two', 'route_dsn', 'host=host-two dbname=bdrdb port=5444 user=pgdproxy'); +SELECT bdr.alter_node_option('host-three', 'route_dsn', 'host=host-three dbname=bdrdb port=5444 user=pgdproxy'); ``` + +Note that the endpoints in this example specify `port=5444`. 
+This is necessary for EDB Postgres Advanced Server instances. +For EDB Postgres Extended and community PostgreSQL, you can omit this. + +## Create a pgdproxy user on each host + +```shell sudo adduser pgdproxy ``` -This user will need credentials to connect to the server. -We will create a .pgpass file with the `proxysecret` password in it. -Then we will lock down the `.pgpass` file so it is only accessible by its owner. +This user needs credentials to connect to the server. +Create a `.pgpass` file with the `proxysecret` password in it. +Then lock down the `.pgpass` file so it's accessible only by its owner. -``` +```shell echo -e "*:*:*:pgdproxy:proxysecret" | sudo tee /home/pgdproxy/.pgpass sudo chown pgdproxy /home/pgdproxy/.pgpass sudo chmod 0600 /home/pgdproxy/.pgpass @@ -120,60 +139,57 @@ sudo chmod 0600 /home/pgdproxy/.pgpass ## Configure the systemd service on each host -Switch the service file from using root to using the pgdproxy user +Switch the service file from using root to using the pgdproxy user. -``` +```shell sudo sed -i s/root/pgdproxy/ /usr/lib/systemd/system/pgd-proxy.service ``` Reload the systemd daemon. -``` +```shell sudo systemctl daemon-reload ``` ## Create a proxy config file for each host -The proxy configuration file will be slightly different for each host. -It is a YAML file which contains a cluster object. This in turn has three +The proxy configuration file is slightly different for each host. +It's a YAML file that contains a cluster object. The cluster object has three properties: -The name of the PGD cluster's top-level group (as `name`). -An array of endpoints of databases (as `endpoints`). -The proxy definition object with a name and endpoint (as `proxy`). +* The name of the PGD cluster's top-level group (as `name`) +* An array of endpoints of databases (as `endpoints`) +* The proxy definition object with a name and endpoint (as `proxy`) -The first two properties will be the same for all hosts: +The first two properties are the same for all hosts: ``` cluster: name: pgd endpoints: - - host=host-one dbname=bdrdb port=5444 - - host=host-two dbname=bdrdb port=5444 - - host=host-three dbname=bdrdb port=5444 + - "host=host-one dbname=bdrdb port=5444 user=pgdproxy" + - "host=host-two dbname=bdrdb port=5444 user=pgdproxy" + - "host=host-three dbname=bdrdb port=5444 user=pgdproxy" ``` -Remember that host-one, host-two and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running. -We use the name of the host, not the node, for the endpoint connection. +Remember that host-one, host-two, and host-three are the systems on which the cluster nodes (node-one, node-two, node-three) are running. +You use the name of the host, not the node, for the endpoint connection. -Also note that the endpoints in this example specify port=5444. +Again, note that the endpoints in this example specify `port=5444`. This is necessary for EDB Postgres Advanced Server instances. -For EDB Postgres Extended and Community PostgreSQL, this can be omitted. +For EDB Postgres Extended and community PostgreSQL, you can set this to `port=5432`. - -The third property, `proxy`, has a `name` property and an `endpoint` property. -The `name` property should be a name created with `bdr.create_proxy` earlier, and it will be different on each host. -The `endpoint` property is a string which defines how the proxy presents itself as a connection string. 
-A proxy cannot be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host. -We typically use port 6432 for PGD proxies. +The third property, `proxy`, has a `name` property. +The `name` property is a name created with `bdr.create_proxy` earlier, and it's different on each host. +A proxy can't be on the same port as the Postgres server and, ideally, should be on a commonly used port different from direct connections, even when no Postgres server is running on the host. +Typically, you use port 6432 for PGD proxies. ``` proxy: name: pgd-proxy-one - endpoint: "host=localhost dbname=bdrdb port=6432" ``` -In this case, by using 'localhost' in the endpoint, we specify that this proxy will listen on the host where the proxy is running. +In this case, using `localhost` in the endpoint specifies that this proxy will listen on the host where the proxy is running. ## Install a PGD proxy configuration on each host @@ -183,47 +199,47 @@ For each host, create the `/etc/edb/pgd-proxy` directory: sudo mkdir -p /etc/edb/pgd-proxy ``` -Then on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory. +Then, on each host, write the appropriate configuration to the `pgd-proxy-config.yml` file in the `/etc/edb/pgd-proxy` directory. -For our example, this could be run on host-one to create the file. +For this example, you can run this on host-one to create the file: ``` cat < Date: Fri, 5 Apr 2024 19:11:55 +0530 Subject: [PATCH 29/48] Updated content as per the review comments --- .../administering_cluster/notifications.mdx | 46 ++++++++++--------- 1 file changed, 24 insertions(+), 22 deletions(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 4a1a8c7c81c..15fc0a3acad 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -2,43 +2,45 @@ title: Notifications --- -BigAnimal supports receiving in-app and email notifications and allows you to choose receive notifications of a selected type. +With BigAnimal, you can opt to get specific types of notifications and receive both in-app and email notifications. Different types of events are sent as notifications. These notifications are set at different levels and users with different roles can configure this notifications. 
This table provides the list of events sent as notifications grouped by different levels at which they can be set:

-| Level        | Event                                                                                              | Role                             |
-|--------------|----------------------------------------------------------------------------------------------------|----------------------------------|
-| Organization | Payment method added                                                                               | Organization owner/admin         |
-| Organization | Personal access key is expiring                                                                    | Organization owner               |
-| Organization | Machine user access key is expiring                                                                | Organization owner               |
-| Project      | Upcoming maintenance upgrade on a cluster (24hr)                                                   | Project owner/editor             |
-| Project      | Successful maintenance upgrade on a cluster                                                        | Project owner/editor             |
-| Project      | Failed maintenance upgrade on a cluster                                                            | Project owner/editor             |
-| Project      | Paused cluster will automatically reactivated in 24 hours                                          | Project owner/editor             |
-| Project      | Paused cluster was automatically reactivated                                                       | Project owner/editor             |
-| Project      | You must set up the encryption key permission for your CMK-enabled cluster                         | Project owner/editor             |
-| Project      | Key error with CMK-enabled cluster                                                                 | Project owner and project editor |
-| Project      | User is invited to a project (displays only to the Project owner)                                  | Project owner                    |
-| Project      | New role is assigned to you                                                                        | Project user                     |
-| Project      | Role is unassigned from you                                                                        | Project user                     |
-| Project      | Failed connection to third-party monitoring integration (and future non-monitoring integrations)  | Project owner/editor             |
+| Level        | Event                                                                                              | Role                             | Subscription type    |
+|--------------|----------------------------------------------------------------------------------------------------|----------------------------------|--------------------- |
+| Organization | Payment method added                                                                               | Organization owner/admin         | Digital self-service |
+| Organization | Personal access key is expiring                                                                    | Account owner                    | All                  |
+| Organization | Machine user access key is expiring                                                                | Organization owner               | All                  |
+| Project      | Upcoming maintenance upgrade on a cluster (24hr)                                                   | Project owner/editor             | All                  |
+| Project      | Successful maintenance upgrade on a cluster                                                        | Project owner/editor             | All                  |
+| Project      | Failed maintenance upgrade on a cluster                                                            | Project owner/editor             | All                  |
+| Project      | Paused cluster will be automatically reactivated in 24 hours                                       | Project owner/editor             | All                  |
+| Project      | Paused cluster was automatically reactivated                                                       | Project owner/editor             | All                  |
+| Project      | You must set up the encryption key permission for your CMK-enabled cluster                         | Project owner/editor             | All                  |
+| Project      | Key error with CMK-enabled cluster                                                                 | Project owner and project editor | All                  |
+| Project      | User is invited to a project (displays only to the Project owner)                                  | Project owner                    | All                  |
+| Project      | New role is assigned to you                                                                        | Account owner                    | All                  |
+| Project      | Role is unassigned from you                                                                        | Account owner                    | All                  |
+| Project      | Failed connection to third-party monitoring integration (and future non-monitoring integrations)  | Project owner/editor             | All                  |
+
+!!!note
+A subscription type of *All* covers Digital self-service, Direct purchase, and Azure Marketplace. For more information, see [subscription types](/biganimal/latest/pricing_and_billing/#payments-and-billing).
+!!!
 
 ## Configuring notifications
 
-The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox or email or both. They can also configure email notifications for their teams within their organization.
- +The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox, email or both. They can also configure email notifications for their teams within their organization. Project level notifications are to be configured for a project. Notification settings made by a user is applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with their login. -Notifications related to unused features for an organization or a project aren't visible or configurable in the UI. - ## Viewing notifications Users in the following roles can view the notifications: - Organization owners/admins can view the organization-level notifications. -- Project owners/editors can view the project-level notifications. +- Project owners/editors can view the project-level notifications. +- Account owner can view the account-level notifications. Each notification indicates the level and/or project it belongs to for the user having multiple roles within BigAnimal. From 96650de0831c294f0a3a4173af0504adf01cedfb Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Fri, 8 Mar 2024 16:13:58 -0500 Subject: [PATCH 30/48] Created file. --- .../biganimal/release/using_cluster/05c_upgrading_log_rep.mdx | 0 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx new file mode 100644 index 00000000000..e69de29bb2d From f7fc8916a0e281fecb86654515ef70fe82f40d21 Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Fri, 8 Mar 2024 16:18:10 -0500 Subject: [PATCH 31/48] Title and navtitle. --- .../release/using_cluster/05c_upgrading_log_rep.mdx | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index e69de29bb2d..3110ebaaea9 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -0,0 +1,5 @@ +--- +title: Performing a rolling upgrade of Postgres major version on BigAnimal +navTitle: Upgrading Postgres major versions +--- + From 99b404f7e7f3dcbc914c8cbed3bce0e130b4ab62 Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Mon, 11 Mar 2024 10:54:01 -0400 Subject: [PATCH 32/48] Added more structure around the steps. --- .../using_cluster/05c_upgrading_log_rep.mdx | 45 +++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 3110ebaaea9..42bce7cd9f9 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -3,3 +3,48 @@ title: Performing a rolling upgrade of Postgres major version on BigAnimal navTitle: Upgrading Postgres major versions --- +## Using logical replication + +Logical replication offers a powerful method for upgrading major Postgres versions on BigAnimal instances, enabling a seamless transition with minimal downtime. 
+ +By replicating database changes in real-time from an older version to a newer one, this approach ensures data integrity and continuity. It's ideal for migrating data across different Postgres versions, providing a reliable upgrade path without sacrificing availability. + + + +### Schema Migration + +First, copy over the database schema from the old instance using the `pg_dump` command: + +``` +pg_dump --schema-only -h -U -d | psql -h -U -d +``` + +The `pg_dump --schema-only` command exports the schema (structure) of the existing database without the data. It's then piped into `psql` to import this schema into the new Postgres 16 instance. This prepares the target database with the necessary structure to hold the data. + + +### Setting up Publication + +On the Postgres 12 instance, a publication is created. This publication is configured to include specific tables, making their changes available for replication. + +```sql +-- Add tables to publication +ALTER PUBLICATION v12_pub ADD TABLE pgbench_accounts; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_branches; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_history; +ALTER PUBLICATION v12_pub ADD TABLE pgbench_tellers; +``` + +### Creating Logical Replication Slot + +A replication slot named 'v12_pub' using the 'pgoutput' plugin is created on the version 12 instance. This slot tracks changes to the published tables to ensure they can be replicated to the subscriber without losing any data. + +```sql +SELECT pg_create_logical_replication_slot('v12_pub','pgoutput'); +``` +### Setting up Subscription + +On the Postgres 16 instance, a subscription to the publication on the Postgres 12 instance is created. This subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by 'v12_pub'. The subscription mechanism pulls schema changes and data from the source to the target database, effectively replicating the data. + +```sql +CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-4bwwpm01u4.pg.biganimal.io port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); +``` \ No newline at end of file From 8d4167c95089ee5c81a5def970dd0e779e06f3dd Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Mon, 11 Mar 2024 13:37:59 -0400 Subject: [PATCH 33/48] Complete first draft. --- .../using_cluster/05c_upgrading_log_rep.mdx | 155 +++++++++++++++++- 1 file changed, 147 insertions(+), 8 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 42bce7cd9f9..e4602a22a95 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -9,11 +9,72 @@ Logical replication offers a powerful method for upgrading major Postgres versio By replicating database changes in real-time from an older version to a newer one, this approach ensures data integrity and continuity. It's ideal for migrating data across different Postgres versions, providing a reliable upgrade path without sacrificing availability. +!!! Important +Depending on where your older and newer versioned BigAnimal instances are located, this procedure may accrue ingress and egress costs from your cloud service provider (CSP) for the migrated data. 
Please consult your CSP's pricing documentation to see how ingress and egress fees are calculated to determine any extra costs. +!!! +### Create a new BigAnimal instance -### Schema Migration +Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. This procedure does not work for upgrading a distributed high-availability BigAnimal instance. -First, copy over the database schema from the old instance using the `pg_dump` command: +The new instance must have sufficient storage to receive the data from the old instance, so ensure enough storage is provisioned when creating the new instance. + +Considering these caveats, [create the new BigAnimal instance](../getting_started/creating_a_cluster.mdx). + +### Gather instance information + +Next, information obtained from the BigAnimal console is necessary. For both the new instance and the old instance, you need the following: + +1. Read/write URI +2. Database name +3. Username +4. Read/write host + +To access this information, select the **Clusters** tab on the left-side navigation menu of the BigAnimal console, find the instance in question, and select it by name. Clusters are listed in alphabetical order. + +After selecting the instance in question, navigate to the **Connect** tab under the instance's name on its show page. The information needed for the procedure is listed there under **Connection Info**. + +### Confirm the Postgres versions before migration + +Next, from a machine with a Postgres client, confirm the current version of Postgres on both the old BigAnimal instance (the instance you are migrating *from*) and the new BigAnimal instance (the instance you are migrating *to*): + +``` +psql "" -c "select version();" +``` + +Example output looks like the following for a version 16 instance: + +``` + version +------------------------------------------------------------------------------------------------------------------------------------- + PostgreSQL 16.2 (Debian 16.2.0-3.buster) (BigAnimal Edition) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit +(1 row) +``` + +Check that both instances are the expected versions. + +### Migrate the database schema + +Next, confirm the database schema you want to migrate. While logged into the old instance, use the following SQL to look at the table data: + +```sql +/dt+ +``` + +Here is an example database schema: + +``` + List of relations + Schema | Name | Type | Owner | Persistence | Access method | Size | Description +--------+------------------+-------+-----------+-------------+---------------+------------+------------- + public | pgbench_accounts | table | edb_admin | permanent | heap | 1572 MB | + public | pgbench_branches | table | edb_admin | permanent | heap | 8192 bytes | + public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | + public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | +``` + + +Then, from a Postgres client, copy over the database schema from the old instance using the `pg_dump` command: ``` pg_dump --schema-only -h -U -d | psql -h -U -d @@ -21,30 +82,108 @@ pg_dump --schema-only -h -U -d Date: Mon, 11 Mar 2024 15:42:06 -0400 Subject: [PATCH 34/48] Some small structuring changes. 
--- .../using_cluster/05c_upgrading_log_rep.mdx | 51 ++++++++++++++----- 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index e4602a22a95..de7d6b9eea4 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -58,10 +58,10 @@ Check that both instances are the expected versions. Next, confirm the database schema you want to migrate. While logged into the old instance, use the following SQL to look at the table data: ```sql -/dt+ +/dt+; ``` -Here is an example database schema: +Here is a sample database schema for this example: ``` List of relations @@ -82,7 +82,7 @@ pg_dump --schema-only -h -U -d ; +``` + +In this example: + + ```sql CREATE PUBLICATION v12_pub; ``` The expected output is `CREATE PUBLICATION`. -Next, configure the publication to include specific tables, making any changes available for replication. +Next, configure the publication to include the specific tables you want replicated on the new instance: + +```sql +ALTER PUBLICATION ADD TABLE ; +``` + +Which in the current example is: ```sql ALTER PUBLICATION v12_pub ADD TABLE pgbench_accounts; @@ -115,17 +128,23 @@ ALTER PUBLICATION v12_pub ADD TABLE pgbench_history; ALTER PUBLICATION v12_pub ADD TABLE pgbench_tellers; ``` -Each of these alterations creates `ALTER PUBLICATION` output upon success. +Each of these alterations produces `ALTER PUBLICATION` output upon success. ### Creating the Logical Replication Slot -Then, on the version 12 instance, create a replication slot named 'v12_pub' using the 'pgoutput' plugin: +Then, on the older versioned instance, create a replication slot using the 'pgoutput' plugin: + +```sql +SELECT pg_create_logical_replication_slot('','pgoutput'); +``` + +In the current example: ```sql SELECT pg_create_logical_replication_slot('v12_pub','pgoutput'); ``` -If successful, the expected output is, in this example: +If successful, the expected output is something like the following: ``` pg_create_logical_replication_slot @@ -138,10 +157,16 @@ This slot tracks changes to the published tables on the old instance to ensure t ### Setting up the Subscription -Now, logged into the Postgres 16 instance, create a subscription to the publication on the Postgres 12 instance: +Now, logged into the newer Postgres instance, create a subscription to the publication on the older instance in the following format: + +```sql +CREATE SUBSCRIPTION CONNECTION 'user= host= port= dbname= password=' PUBLICATION WITH (enabled=true, copy_data = true, create_slot = false, slot_name=); +``` + +Specifically, in this example: ```sql -CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-4bwwpm01u4.pg.biganimal.io port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); +CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); ``` Look for the expected output: `CREATE SUBSCRIPTION`. @@ -152,7 +177,7 @@ The subscription mechanism pulls schema changes and data from the source to the ### Validate the migration -Finally, you can follow the progression of the migration. 
First, use `\dt+` while logged into the older BigAnimal Postgres instance to see how much data is to be replicated. In this example: +Finally, you can follow the progression of the migration. First, use `\dt+;` while logged into the older BigAnimal Postgres instance to see how much data is to be replicated. In this example: ``` List of relations @@ -164,7 +189,7 @@ Finally, you can follow the progression of the migration. First, use `\dt+` whil public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | ``` -Then run `\dt+` while logged into the new BigAnimal instance and compare: +Then run `\dt+;` while logged into the new BigAnimal instance and compare: ``` List of relations @@ -176,7 +201,7 @@ Then run `\dt+` while logged into the new BigAnimal instance and compare: public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` -If logical replication is running correctly, each time you run `\dt+` you see that more data has been migrated: +If logical replication is running correctly, each time you run `\dt+;` you see that more data has been migrated: ``` List of relations From 21aae44c9d356036f32a066a2e805c442c343cfc Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Mon, 11 Mar 2024 16:02:24 -0400 Subject: [PATCH 35/48] Fix index.mdx. --- product_docs/docs/biganimal/release/using_cluster/index.mdx | 1 + 1 file changed, 1 insertion(+) diff --git a/product_docs/docs/biganimal/release/using_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/index.mdx index a7398ff4ced..d10e72ac6e8 100644 --- a/product_docs/docs/biganimal/release/using_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/index.mdx @@ -11,6 +11,7 @@ navigation: - 05_monitoring_and_logging - fault_injection_testing - 05a_deleting_your_cluster +- 05c_upgrading_log_rep - 06_analyze_with_superset - 06_demonstration_oracle_compatibility - terraform_provider From 586f59037b927a548fa2343c510bfab489bf18bb Mon Sep 17 00:00:00 2001 From: dbwagoner <143614338+dbwagoner@users.noreply.github.com> Date: Wed, 13 Mar 2024 09:18:07 -0400 Subject: [PATCH 36/48] Update 05c_upgrading_log_rep.mdx Added a caveat about the limitations of logical replication, with respect to replicating schema changes (DDL). Doc Team, please handle the link that I added in the format that you like to present hyperlinks. Thanks! --- .../biganimal/release/using_cluster/05c_upgrading_log_rep.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index de7d6b9eea4..775269654a6 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -173,7 +173,7 @@ Look for the expected output: `CREATE SUBSCRIPTION`. In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by 'v12_pub'. -The subscription mechanism pulls schema changes and data from the source to the target database, effectively replicating the data. 
+The subscription mechanism pulls schema changes (with some exceptions, as noted in the PostgreSQL documentation on Limitations of Logical Replication: https://www.postgresql.org/docs/current/logical-replication-restrictions.html) and data from the source to the target database, effectively replicating the data. ### Validate the migration @@ -211,4 +211,4 @@ If logical replication is running correctly, each time you run `\dt+;` you see t public | pgbench_branches | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | -``` \ No newline at end of file +``` From 195a04c1b0523332f766eff5a73ae2b70e13b667 Mon Sep 17 00:00:00 2001 From: dbwagoner <143614338+dbwagoner@users.noreply.github.com> Date: Wed, 13 Mar 2024 09:22:58 -0400 Subject: [PATCH 37/48] Update 05c_upgrading_log_rep.mdx removed reference to the specific version of Postgres here --- .../biganimal/release/using_cluster/05c_upgrading_log_rep.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 775269654a6..435191efc13 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -80,7 +80,7 @@ Then, from a Postgres client, copy over the database schema from the old instanc pg_dump --schema-only -h -U -d | psql -h -U -d ``` -The `pg_dump --schema-only` command exports the schema (structure) of the existing database without the data. It's then piped into `psql` to import this schema into the new Postgres 16 instance. This prepares the target database with the necessary structure to hold the data. +The `pg_dump --schema-only` command exports the schema (structure) of the existing database without the data. It's then piped into `psql` to import this schema into the new Postgres instance. This prepares the target database with the necessary structure to hold the data. Finally, log into the new instance and confirm that the database schema migrated correctly. 
Run `\dt+;` in this example again and you see the database schema migrated to the new instance as expected: From e491e7e6a472dd26d0f0298f0c908e8239c36af3 Mon Sep 17 00:00:00 2001 From: dbwagoner <143614338+dbwagoner@users.noreply.github.com> Date: Thu, 21 Mar 2024 08:51:24 -0400 Subject: [PATCH 38/48] Update 05c_upgrading_log_rep.mdx Proposed a rewording in the title --- .../biganimal/release/using_cluster/05c_upgrading_log_rep.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 435191efc13..f5baaf69163 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -1,5 +1,5 @@ --- -title: Performing a rolling upgrade of Postgres major version on BigAnimal +title: Performing a major version upgrade of Postgres on BigAnimal navTitle: Upgrading Postgres major versions --- From 7d0b5fd6c07b85591e814d6fbc59675a92e69d29 Mon Sep 17 00:00:00 2001 From: aswright491 <36008570+aswright491@users.noreply.github.com> Date: Thu, 21 Mar 2024 16:13:58 -0400 Subject: [PATCH 39/48] Update 05c_upgrading_log_rep.mdx Adjust sentence in the consideration section, is my update accurate? --- .../biganimal/release/using_cluster/05c_upgrading_log_rep.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index f5baaf69163..374f07a14b4 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -15,7 +15,7 @@ Depending on where your older and newer versioned BigAnimal instances are locate ### Create a new BigAnimal instance -Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. This procedure does not work for upgrading a distributed high-availability BigAnimal instance. +Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. This procedure does not work for upgrading from a single node or standard high availability cluster to a distributed high-availability BigAnimal instance. The new instance must have sufficient storage to receive the data from the old instance, so ensure enough storage is provisioned when creating the new instance. From 95a76fcfbcbfa2f1552da587e9b3a0797ba6fdfe Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Fri, 22 Mar 2024 12:32:31 -0400 Subject: [PATCH 40/48] Updating in response to comments. --- .../using_cluster/05c_upgrading_log_rep.mdx | 17 ++++++++++++++--- 1 file changed, 14 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 374f07a14b4..8632dd5c60b 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -5,6 +5,10 @@ navTitle: Upgrading Postgres major versions ## Using logical replication +!!! Note +This procedure does not work with distributed high-availability BigAnimal instances. +!!! 
+ Logical replication offers a powerful method for upgrading major Postgres versions on BigAnimal instances, enabling a seamless transition with minimal downtime. By replicating database changes in real-time from an older version to a newer one, this approach ensures data integrity and continuity. It's ideal for migrating data across different Postgres versions, providing a reliable upgrade path without sacrificing availability. @@ -15,7 +19,7 @@ Depending on where your older and newer versioned BigAnimal instances are locate ### Create a new BigAnimal instance -Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. This procedure does not work for upgrading from a single node or standard high availability cluster to a distributed high-availability BigAnimal instance. +Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. The new instance must have sufficient storage to receive the data from the old instance, so ensure enough storage is provisioned when creating the new instance. @@ -160,13 +164,13 @@ This slot tracks changes to the published tables on the old instance to ensure t Now, logged into the newer Postgres instance, create a subscription to the publication on the older instance in the following format: ```sql -CREATE SUBSCRIPTION CONNECTION 'user= host= port= dbname= password=' PUBLICATION WITH (enabled=true, copy_data = true, create_slot = false, slot_name=); +CREATE SUBSCRIPTION CONNECTION 'user= host= sslmode=require port= dbname= password=' PUBLICATION WITH (enabled=true, copy_data = true, create_slot = false, slot_name=); ``` Specifically, in this example: ```sql -CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); +CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io sslmode=require port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); ``` Look for the expected output: `CREATE SUBSCRIPTION`. @@ -212,3 +216,10 @@ If logical replication is running correctly, each time you run `\dt+;` you see t public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` + +#### LiveCompare + +A final validation step to ensure the two databases are identical is to use [EDB's LiveCompare](https://www.enterprisedb.com/docs/livecompare/latest/). + +LiveCompare compares the databases, generates a comparison report, and provides data manipulation language (DML) scripts so you can optionally apply the DML and fix any database inconsistencies. + From 46ca32d03a325ec7c857005ea87f23474c641119 Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh Date: Mon, 1 Apr 2024 12:20:31 -0400 Subject: [PATCH 41/48] Addressed Adam's suggestions. 
--- .../using_cluster/05c_upgrading_log_rep.mdx | 70 +++++++++---------- 1 file changed, 34 insertions(+), 36 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index 8632dd5c60b..b583519a3d4 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -19,34 +19,38 @@ Depending on where your older and newer versioned BigAnimal instances are locate ### Create a new BigAnimal instance -Migrating between major versions first requires creating a new BigAnimal instance with the newer version of Postgres. +To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This is your target instance. -The new instance must have sufficient storage to receive the data from the old instance, so ensure enough storage is provisioned when creating the new instance. +Ensure your target instance is provisioned with a storage size equal to or greater than your source instance. -Considering these caveats, [create the new BigAnimal instance](../getting_started/creating_a_cluster.mdx). +Considering these caveats, [create your target BigAnimal instance](../getting_started/creating_a_cluster.mdx). ### Gather instance information -Next, information obtained from the BigAnimal console is necessary. For both the new instance and the old instance, you need the following: +Use the BigAnimal console to obtain the following information for your source and target instance: -1. Read/write URI -2. Database name -3. Username -4. Read/write host +- Read/write URI +- Database name +- Username +- Read/write host -To access this information, select the **Clusters** tab on the left-side navigation menu of the BigAnimal console, find the instance in question, and select it by name. Clusters are listed in alphabetical order. +Using the BigAnimal console: -After selecting the instance in question, navigate to the **Connect** tab under the instance's name on its show page. The information needed for the procedure is listed there under **Connection Info**. + 1. Select the **Clusters** tab. + 1. Select your source instance. + 1. From the Connect tab, obtain the information from **Connection Info**. + +Repeat the steps for your target instance. ### Confirm the Postgres versions before migration -Next, from a machine with a Postgres client, confirm the current version of Postgres on both the old BigAnimal instance (the instance you are migrating *from*) and the new BigAnimal instance (the instance you are migrating *to*): +Confirm the Postgres version on both your source and target BigAnimal instances: ``` psql "" -c "select version();" ``` -Example output looks like the following for a version 16 instance: +Output using Postgres 16: ``` version @@ -55,11 +59,9 @@ Example output looks like the following for a version 16 instance: (1 row) ``` -Check that both instances are the expected versions. - ### Migrate the database schema -Next, confirm the database schema you want to migrate. 
While logged into the old instance, use the following SQL to look at the table data: +On your source instance, view the details of the schema to be migrated: ```sql /dt+; @@ -77,16 +79,13 @@ Here is a sample database schema for this example: public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | ``` - -Then, from a Postgres client, copy over the database schema from the old instance using the `pg_dump` command: +Use pg_dump with the `--schema-only` flag to copy the schema from your source to your target instance. For more information on using `pg_dump`, [see the Postgres documentation](https://www.postgresql.org/docs/current/app-pgdump.html). ``` -pg_dump --schema-only -h -U -d | psql -h -U -d +pg_dump --schema-only -h -U -d | psql -h -U -d ``` -The `pg_dump --schema-only` command exports the schema (structure) of the existing database without the data. It's then piped into `psql` to import this schema into the new Postgres instance. This prepares the target database with the necessary structure to hold the data. - -Finally, log into the new instance and confirm that the database schema migrated correctly. Run `\dt+;` in this example again and you see the database schema migrated to the new instance as expected: +On the target instance, confirm the schema is migrated: ``` List of relations @@ -98,11 +97,13 @@ Finally, log into the new instance and confirm that the database schema migrated public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` -While the schema looks the same on the new instance, the "Size" column shows 0 bytes of data, confirming that only the schema has been migrated so far. +!!! Note +A successful schema-only copy shows the tables with zero bytes. +!!! ### Setting up Publication -To set up the publication, log into the old BigAnimal instance and create a publication: +Create a publication from your source instance: ```sql CREATE PUBLICATION ; @@ -115,16 +116,14 @@ In this example: CREATE PUBLICATION v12_pub; ``` -The expected output is `CREATE PUBLICATION`. +The expected output is: `CREATE PUBLICATION`. -Next, configure the publication to include the specific tables you want replicated on the new instance: +Add tables that you want to replicate to your target instance: ```sql ALTER PUBLICATION ADD TABLE ; ``` -Which in the current example is: - ```sql ALTER PUBLICATION v12_pub ADD TABLE pgbench_accounts; ALTER PUBLICATION v12_pub ADD TABLE pgbench_branches; @@ -132,11 +131,11 @@ ALTER PUBLICATION v12_pub ADD TABLE pgbench_history; ALTER PUBLICATION v12_pub ADD TABLE pgbench_tellers; ``` -Each of these alterations produces `ALTER PUBLICATION` output upon success. +The expected output is: `ALTER PUBLICATION`. ### Creating the Logical Replication Slot -Then, on the older versioned instance, create a replication slot using the 'pgoutput' plugin: +Then, on the source instance, create a replication slot using the `pgoutput` plugin: ```sql SELECT pg_create_logical_replication_slot('','pgoutput'); @@ -148,7 +147,7 @@ In the current example: SELECT pg_create_logical_replication_slot('v12_pub','pgoutput'); ``` -If successful, the expected output is something like the following: +The expected output returns the `slot_name` and `lsn`. 
-The `pg_dump --schema-only` command exports the schema (structure) of the existing database without the data. It's then piped into `psql` to import this schema into the new Postgres instance. This prepares the target database with the necessary structure to hold the data. - -Finally, log into the new instance and confirm that the database schema migrated correctly. Run `\dt+;` in this example again and you see the database schema migrated to the new instance as expected: +On the target instance, confirm the schema is migrated: ``` List of relations Schema | Name | Type | Owner | Persistence | Access method | Size | Description --------+------------------+-------+-----------+-------------+---------------+---------+------------- public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` -While the schema looks the same on the new instance, the "Size" column shows 0 bytes of data, confirming that only the schema has been migrated so far. +!!! Note +A successful schema-only copy shows the tables with zero bytes. +!!! ### Setting up Publication -To set up the publication, log into the old BigAnimal instance and create a publication: ```sql CREATE PUBLICATION <publication name>; ``` In this example: ```sql CREATE PUBLICATION v12_pub; ``` The expected output is `CREATE PUBLICATION`. -Next, configure the publication to include the specific tables you want replicated on the new instance: +Add tables that you want to replicate to your target instance: ```sql ALTER PUBLICATION <publication name> ADD TABLE <table name>; ``` -Which in the current example is: ```sql ALTER PUBLICATION v12_pub ADD TABLE pgbench_accounts; ALTER PUBLICATION v12_pub ADD TABLE pgbench_branches; ALTER PUBLICATION v12_pub ADD TABLE pgbench_history; ALTER PUBLICATION v12_pub ADD TABLE pgbench_tellers; ``` -Each of these alterations produces `ALTER PUBLICATION` output upon success. +The expected output is: `ALTER PUBLICATION`. ### Creating the Logical Replication Slot -Then, on the older versioned instance, create a replication slot using the 'pgoutput' plugin: +Then, on the source instance, create a replication slot using the `pgoutput` plugin: ```sql SELECT pg_create_logical_replication_slot('<slot name>','pgoutput'); ``` In the current example: ```sql SELECT pg_create_logical_replication_slot('v12_pub','pgoutput'); ``` -If successful, the expected output is something like the following: +The expected output returns the `slot_name` and `lsn`. ``` pg_create_logical_replication_slot ------------------------------------- (v12_pub,0/AC003330) ``` -This slot tracks changes to the published tables on the old instance to ensure they can be replicated to the subscriber on the new instance without losing any data. - +The replication slot tracks changes to the published tables from the source instance and replicates changes to the subscriber on the target instance. ### Setting up the Subscription -Now, logged into the newer Postgres instance, create a subscription to the publication on the older instance in the following format: +On the target instance, create a subscription: ```sql -CREATE SUBSCRIPTION <name> CONNECTION 'user=<user> host=<host> sslmode=require port=<port> dbname=<dbname> password=<password>' PUBLICATION <publication name> WITH (enabled=true, copy_data = true, create_slot = false, slot_name=<slot name>); +CREATE SUBSCRIPTION <subscription name> CONNECTION 'user=<username> host=<host> sslmode=require port=<port> dbname=<dbname> password=<password>' PUBLICATION <publication name> WITH (enabled=true, copy_data = true, create_slot = false, slot_name=<slot name>); ``` -Specifically, in this example: +Creating a subscription on a Postgres 16 instance to a publication on a Postgres 12 instance: ```sql CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io sslmode=require port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); ``` -Look for the expected output: `CREATE SUBSCRIPTION`. +The expected output is: `CREATE SUBSCRIPTION`. In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by 'v12_pub'.
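+After the subscription is created, you can watch both the initial copy and the ongoing streaming. One possible check, using the standard Postgres statistics views (column lists abbreviated here):
+
+```sql
+-- On the new instance: one row per active subscription worker
+SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
+
+-- On the old instance: confirms the subscriber is connected and streaming
+SELECT application_name, state, replay_lsn FROM pg_stat_replication;
+```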
From bfd04d3e26bbd39f8e000c8202840d06848a2fac Mon Sep 17 00:00:00 2001 From: Josh Earlenbaugh <josh.earlenbaugh@enterprisedb.com> Date: Fri, 5 Apr 2024 13:15:14 -0400 Subject: [PATCH 42/48] Integrating last round of Adam P's suggestions. --- .../using_cluster/05c_upgrading_log_rep.mdx | 61 +++++++++++-------- 1 file changed, 36 insertions(+), 25 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx index b583519a3d4..094cc4d9268 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05c_upgrading_log_rep.mdx @@ -1,6 +1,7 @@ --- title: Performing a major version upgrade of Postgres on BigAnimal navTitle: Upgrading Postgres major versions +deepToC: true --- ## Using logical replication @@ -9,21 +10,35 @@ navTitle: Upgrading Postgres major versions !!! This procedure does not work with distributed high-availability BigAnimal instances. !!! -Logical replication offers a powerful method for upgrading major Postgres versions on BigAnimal instances, enabling a seamless transition with minimal downtime. +Logical replication is a common method for upgrading the Postgres major version on BigAnimal instances, enabling a transition with minimal downtime. -By replicating database changes in real-time from an older version to a newer one, this approach ensures data integrity and continuity. It's ideal for migrating data across different Postgres versions, providing a reliable upgrade path without sacrificing availability. +By replicating changes in real-time from an older version (source instance) to a newer one (target instance), this method provides a reliable upgrade path while maintaining database availability. !!! Important Depending on where your older and newer versioned BigAnimal instances are located, this procedure may accrue ingress and egress costs from your cloud service provider (CSP) for the migrated data. Please consult your CSP's pricing documentation to see how ingress and egress fees are calculated to determine any extra costs. !!! -### Create a new BigAnimal instance +### Overview of upgrading -To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This is your target instance. +To perform a major version upgrade, use the following steps, explained in further detail below: + +1. [Create a BigAnimal instance](#create-a-biganimal-instance) +1. [Gather instance information](#gather-instance-information) +1. [Confirm the Postgres versions before migration](#confirm-the-postgres-versions-before-migration) +1. [Migrate the database schema](#migrate-the-database-schema) +1. [Create a publication](#create-a-publication) +1. [Create a logical replication slot](#create-a-logical-replication-slot) +1. [Create a subscription](#create-a-subscription) +1. [Validate the migration](#validate-the-migration) + + +### Create a BigAnimal instance + +To perform a major version upgrade, create a BigAnimal instance with your desired version of Postgres. This will be your target instance. Ensure your target instance is provisioned with a storage size equal to or greater than your source instance. -Considering these caveats, [create your target BigAnimal instance](../getting_started/creating_a_cluster.mdx). +For detailed steps on creating a BigAnimal instance, see [this guide](../getting_started/creating_a_cluster.mdx). ### Gather instance information @@ -36,15 +51,13 @@ Use the BigAnimal console to obtain the following information for your source an Using the BigAnimal console: - 1. Select the **Clusters** tab. - 1. Select your source instance. - 1. From the Connect tab, obtain the information from **Connection Info**. - -Repeat the steps for your target instance. +1. Select the **Clusters** tab. +1. Select your source instance. +1. From the Connect tab, obtain the information from **Connection Info**. ### Confirm the Postgres versions before migration -Confirm the Postgres version on both your source and target BigAnimal instances: +Confirm the Postgres version of your source and target BigAnimal instances: ``` psql "<read/write URI>" -c "select version();" ``` @@ -61,7 +74,7 @@ Output using Postgres 16: ``` version -------------------------------------------------------------------------------------------------------------------------------------- (1 row) ``` ### Migrate the database schema -On your source instance, view the details of the schema to be migrated: +On your source instance, use the `\dt+` command to view the details of the schema to be migrated: ```sql \dt+; ``` @@ -85,7 +98,7 @@ Use pg_dump with the `--schema-only` flag to copy the schema from your source to pg_dump --schema-only -h <source host> -U <source username> -d <source dbname> | psql -h <target host> -U <target username> -d <target dbname> ``` -On the target instance, confirm the schema is migrated: +On the target instance, confirm the schema was migrated: ``` List of relations @@ -101,9 +114,9 @@ A successful schema-only copy shows the tables with zero bytes. !!! -### Setting up Publication +### Create a publication -Create a publication from your source instance: +Use the `CREATE PUBLICATION` command to create a publication on your source instance. For more information on using `CREATE PUBLICATION`, see [the Postgres documentation](https://www.postgresql.org/docs/current/sql-createpublication.html). ```sql CREATE PUBLICATION <publication name>; @@ -157,9 +170,9 @@ The expected output returns the `slot_name` and `lsn`. The replication slot tracks changes to the published tables from the source instance and replicates changes to the subscriber on the target instance.
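+To confirm the slot exists and see whether a consumer is attached, you can query `pg_replication_slots` on the source instance, for example:
+
+```sql
+-- active becomes true once the target's subscription connects
+SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots;
+```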
-### Setting up the Subscription +### Create a subscription -On the target instance, create a subscription: +Use the `CREATE SUBSCRIPTION` command to create a subscription on your target instance. For more information on using `CREATE SUBSCRIPTION`, see [the Postgres documentation](https://www.postgresql.org/docs/current/sql-createsubscription.html). ```sql -CREATE SUBSCRIPTION <subscription name> CONNECTION 'user=<username> host=<host> sslmode=require port=<port> dbname=<dbname> password=<password>' PUBLICATION <publication name> WITH (enabled=true, copy_data = true, create_slot = false, slot_name=<slot name>); +CREATE SUBSCRIPTION <subscription name> CONNECTION 'user=<source username> host=<source host> sslmode=require port=<source port> dbname=<source dbname> password=<source password>' PUBLICATION <publication name> WITH (enabled=true, copy_data = true, create_slot = false, slot_name=<slot name>); ``` -Specifically, in this example: +Creating a subscription on a Postgres 16 instance to a publication on a Postgres 12 instance: ```sql CREATE SUBSCRIPTION v16_sub CONNECTION 'user=edb_admin host=p-x67kjhacc4.pg.biganimal.io sslmode=require port=5432 dbname=edb_admin password=XXX' PUBLICATION v12_pub WITH (enabled=true, copy_data = true, create_slot = false, slot_name=v12_pub); ``` -Look for the expected output: `CREATE SUBSCRIPTION`. +The expected output is: `CREATE SUBSCRIPTION`. In this example, the subscription uses a connection string to specify the source database and includes options to copy existing data and to follow the publication identified by 'v12_pub'. -The subscription mechanism pulls schema changes (with some exceptions, as noted in the PostgreSQL documentation on Limitations of Logical Replication: https://www.postgresql.org/docs/current/logical-replication-restrictions.html) and data from the source to the target database, effectively replicating the data. +The subscriber pulls schema changes (with some exceptions, as noted in the PostgreSQL [documentation on Limitations of Logical Replication](https://www.postgresql.org/docs/current/logical-replication-restrictions.html)) and data from the source to the target database, effectively replicating the data. ### Validate the migration -Finally, you can follow the progression of the migration. First, use `\dt+;` while logged into the older BigAnimal Postgres instance to see how much data is to be replicated. In this example: - +To validate the progress of the data migration, use `\dt+` on the source and target BigAnimal instances to compare the size of each table. + ``` List of relations Schema | Name | Type | Owner | Persistence | Access method | Size | Description --------+------------------+-------+-----------+-------------+---------------+---------+------------- public | pgbench_accounts | table | edb_admin | permanent | heap | 1281 MB | public | pgbench_branches | table | edb_admin | permanent | heap | 120 kB | public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_tellers | table | edb_admin | permanent | heap | 120 kB | ``` ``` List of relations Schema | Name | Type | Owner | Persistence | Access method | Size | Description --------+------------------+-------+-----------+-------------+---------------+---------+------------- public | pgbench_accounts | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_branches | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_history | table | edb_admin | permanent | heap | 0 bytes | public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` -Then run `\dt+;` while logged into the new BigAnimal instance and compare: - ``` List of relations Schema | Name | Type | Owner | Persistence | Access method | Size | Description --------+------------------+-------+-----------+-------------+---------------+---------+------------- @@ -215,9 +226,9 @@ If logical replication is running correctly, each time you run `\dt+;` you see t public | pgbench_tellers | table | edb_admin | permanent | heap | 0 bytes | ``` !!! Note You can optionally use [LiveCompare](https://www.enterprisedb.com/docs/livecompare/latest/) to generate a comparison report of the source and target databases to validate that all database objects and data are consistent. !!! -#### LiveCompare -A final validation step to ensure the two databases are identical is to use [EDB's LiveCompare](https://www.enterprisedb.com/docs/livecompare/latest/). -LiveCompare compares the databases, generates a comparison report, and provides data manipulation language (DML) scripts so you can optionally apply the DML and fix any database inconsistencies.
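+As an additional spot check before cutting over, you can compare row counts for a table on both instances. For example, with the pgbench tables used above:
+
+```sql
+-- Run on both the source and the target; the counts should match
+SELECT count(*) FROM pgbench_accounts;
+```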
From ee29aa4553391013b4ff1c472c4cfa80eeec38dc Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 8 Apr 2024 15:44:16 +0530 Subject: [PATCH 43/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- .../biganimal/release/administering_cluster/notifications.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 15fc0a3acad..391a1d097b5 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -31,7 +31,7 @@ All subscription type means Digital self-service, Direct purchase, and Azure Mar The project owners/editors and organization owners/admins can configure the notifications for the events visible to them. They can choose if they want to receive notifications in the in-app inbox, email or both. They can also configure email notifications for their teams within their organization. -Project level notifications are to be configured for a project. +Project level notifications are configured within the project. Notification settings made by a user is applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with their login. From d90d2f9804d59c59b6482e01773e0d4b2d4b0ca6 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 8 Apr 2024 15:44:37 +0530 Subject: [PATCH 44/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- .../biganimal/release/administering_cluster/notifications.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 391a1d097b5..49c8f252538 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -33,7 +33,7 @@ The project owners/editors and organization owners/admins can configure the noti Project level notifications are configured within the project. -Notification settings made by a user is applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with their login. +Notification settings made by a user are applicable only to that user. If an email notification is enabled, the email is sent to the email address associated with the user's login. 
## Viewing notifications From 558fce505a8e6cc45a114d02ee8f99255c0bc562 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 8 Apr 2024 15:44:53 +0530 Subject: [PATCH 45/48] Update product_docs/docs/biganimal/release/administering_cluster/notifications.mdx Co-authored-by: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> --- .../biganimal/release/administering_cluster/notifications.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx index 49c8f252538..6d8dbbe63c8 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/notifications.mdx @@ -40,7 +40,7 @@ Notification settings made by a user are applicable only to that user. If an ema Users in the following roles can view the notifications: - Organization owners/admins can view the organization-level notifications. - Project owners/editors can view the project-level notifications. -- Account owners can view their own account-level notifications. +- Account owners can view their own account-level notifications. Each notification indicates the level and/or project it belongs to for users who have multiple roles within BigAnimal. From a50e38914bae9c36ca2210b80cca5c7a64c862a7 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 8 Apr 2024 12:06:31 +0100 Subject: [PATCH 46/48] Updated BA release notes for March 2024 Signed-off-by: Dj Walker-Morgan --- .../biganimal/release/release_notes/index.mdx | 2 ++ .../release/release_notes/mar_2024_rel_notes.mdx | 15 +++++++++++++++ 2 files changed, 17 insertions(+) create mode 100644 product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx diff --git a/product_docs/docs/biganimal/release/release_notes/index.mdx b/product_docs/docs/biganimal/release/release_notes/index.mdx index ef19e94be8e..16696fb7a0f 100644 --- a/product_docs/docs/biganimal/release/release_notes/index.mdx +++ b/product_docs/docs/biganimal/release/release_notes/index.mdx @@ -2,6 +2,7 @@ title: BigAnimal release notes navTitle: Release notes navigation: +- mar_2024_rel_notes - feb_2024_rel_notes - jan_2024_rel_notes - dec_2023_rel_notes @@ -22,6 +23,7 @@ The BigAnimal documentation describes the latest version of BigAnimal, including | Month | |--------------------------------------| +| [March 2024](mar_2024_rel_notes) | | [February 2024](feb_2024_rel_notes) | | [January 2024](jan_2024_rel_notes) | | [December 2023](dec_2023_rel_notes) | diff --git a/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx new file mode 100644 index 00000000000..bcab45d734a --- /dev/null +++ b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx @@ -0,0 +1,15 @@ +--- +title: BigAnimal March 2024 release notes +navTitle: March 2024 +--- + +BigAnimal's March 2024 release includes the following enhancements and bugfixes: + +| Type | Description | +|------|-------------| +| Enhancement | EDB Postgres Extended Server is now available in BigAnimal for single-node and High Availability in addition to Distributed High Availability clusters.
| +| Enhancement | You can now take advantage of Transparent Data Encryption (TDE) for clusters running on EDB Postgres Advanced Server or EDB Postgres Extended Server versions 15 and newer in BigAnimal’s AWS account. With TDE, you can connect your own keys from AWS’s Key Management Service to encrypt your clusters at the database level in addition to the default volume-level encryption. | +| Enhancement | BigAnimal Terraform provider v0.8.1 is now available. Learn more about what’s new [here](https://github.com/EnterpriseDB/terraform-provider-biganimal/releases/tag/v0.8.1) and download the provider [here](https://registry.terraform.io/providers/EnterpriseDB/biganimal/latest). | +| Enhancement | BigAnimal CLI v3.6.0 is now available. Learn more about what’s new [here](https://cli.biganimal.com/versions/v3.6.0/). | + + From 35c8541f940b87569e8870e726015f1c953d75fc Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 8 Apr 2024 16:48:39 +0530 Subject: [PATCH 47/48] PostGIS - release date fix --- product_docs/docs/postgis/{3.2 => 3}/01_release_notes/index.mdx | 2 +- .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes312.mdx | 0 .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes314.mdx | 0 .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes315.mdx | 0 .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes32.mdx | 0 .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes321.mdx | 0 .../docs/postgis/{3.2 => 3}/01_release_notes/rel_notes342.mdx | 2 +- product_docs/docs/postgis/{3.2 => 3}/02_creating_extensions.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/04_using_postgis.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/images/EDB_logo.png | 0 .../{3.2 => 3}/images/SBP_Installation_Files_Downloaded.png | 0 .../docs/postgis/{3.2 => 3}/images/SBP_Selected_Packages.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/SBP_welcome.png | 0 .../docs/postgis/{3.2 => 3}/images/SB_PostGIS_Selection.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/Unisntall2.png | 0 .../{3.2 => 3}/images/advanced_server_installation_details.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/edb_logo.svg | 0 .../docs/postgis/{3.2 => 3}/images/installattion_complete.png | 0 .../docs/postgis/{3.2 => 3}/images/installing_postgis.png | 0 .../docs/postgis/{3.2 => 3}/images/language_selection.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/pgadmin.png | 0 .../docs/postgis/{3.2 => 3}/images/postgis_installation.png | 0 .../{3.2 => 3}/images/postgis_installation_directory.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/postgis_pgadmin.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/postgis_welcome.png | 0 .../docs/postgis/{3.2 => 3}/images/ready_to_install.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/selected.png | 0 .../docs/postgis/{3.2 => 3}/images/selected_packages.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/spatial.png | 0 .../docs/postgis/{3.2 => 3}/images/spatial_extensions.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/stack.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/uninstall1.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/uninstall4final.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/uninstallfinal.png | 0 product_docs/docs/postgis/{3.2 => 3}/images/welcome.png | 0 product_docs/docs/postgis/{3.2 => 3}/index.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/installing/index.mdx | 0 .../docs/postgis/{3.2 => 3}/installing/linux_ppc64le/index.mdx | 0 .../{3.2 => 3}/installing/linux_ppc64le/postgis_rhel_8.mdx | 0 .../{3.2 => 
3}/installing/linux_ppc64le/postgis_rhel_9.mdx | 0 .../{3.2 => 3}/installing/linux_ppc64le/postgis_sles_12.mdx | 0 .../{3.2 => 3}/installing/linux_ppc64le/postgis_sles_15.mdx | 0 .../docs/postgis/{3.2 => 3}/installing/linux_x86_64/index.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_centos_7.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_debian_10.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_debian_11.mdx | 0 .../installing/linux_x86_64/postgis_other_linux_8.mdx | 0 .../installing/linux_x86_64/postgis_other_linux_9.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_rhel_7.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_rhel_8.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_rhel_9.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_sles_12.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_sles_15.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_ubuntu_20.mdx | 0 .../{3.2 => 3}/installing/linux_x86_64/postgis_ubuntu_22.mdx | 0 .../docs/postgis/{3.2 => 3}/installing/uninstalling.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/installing/upgrading.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/installing/windows.mdx | 0 product_docs/docs/postgis/{3.2 => 3}/supported_platforms.mdx | 0 59 files changed, 2 insertions(+), 2 deletions(-) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/index.mdx (93%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes312.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes314.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes315.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes32.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes321.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/01_release_notes/rel_notes342.mdx (97%) rename product_docs/docs/postgis/{3.2 => 3}/02_creating_extensions.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/04_using_postgis.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/EDB_logo.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/SBP_Installation_Files_Downloaded.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/SBP_Selected_Packages.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/SBP_welcome.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/SB_PostGIS_Selection.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/Unisntall2.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/advanced_server_installation_details.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/edb_logo.svg (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/installattion_complete.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/installing_postgis.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/language_selection.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/pgadmin.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/postgis_installation.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/postgis_installation_directory.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/postgis_pgadmin.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/postgis_welcome.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/ready_to_install.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/selected.png (100%) rename product_docs/docs/postgis/{3.2 => 
3}/images/selected_packages.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/spatial.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/spatial_extensions.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/stack.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/uninstall1.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/uninstall4final.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/uninstallfinal.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/images/welcome.png (100%) rename product_docs/docs/postgis/{3.2 => 3}/index.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/index.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_ppc64le/index.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_ppc64le/postgis_rhel_8.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_ppc64le/postgis_rhel_9.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_ppc64le/postgis_sles_12.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_ppc64le/postgis_sles_15.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/index.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_centos_7.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_debian_10.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_debian_11.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_other_linux_8.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_other_linux_9.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_rhel_7.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_rhel_8.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_rhel_9.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_sles_12.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_sles_15.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_ubuntu_20.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/linux_x86_64/postgis_ubuntu_22.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/uninstalling.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/upgrading.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/installing/windows.mdx (100%) rename product_docs/docs/postgis/{3.2 => 3}/supported_platforms.mdx (100%) diff --git a/product_docs/docs/postgis/3.2/01_release_notes/index.mdx b/product_docs/docs/postgis/3/01_release_notes/index.mdx similarity index 93% rename from product_docs/docs/postgis/3.2/01_release_notes/index.mdx rename to product_docs/docs/postgis/3/01_release_notes/index.mdx index 84b4b455f04..d639897e3e4 100644 --- a/product_docs/docs/postgis/3.2/01_release_notes/index.mdx +++ b/product_docs/docs/postgis/3/01_release_notes/index.mdx @@ -14,7 +14,7 @@ cover what was new in each release. 
| Version | Release date | | ------------------------ | ------------ | -| [3.4.2](rel_notes342) | 29 Feb 2024 | +| [3.4.2](rel_notes342) | 01 Apr 2024 | | [3.2.1](rel_notes321) | 04 Aug 2023 | | [3.2.0](rel_notes32) | 01 Dec 2022 | | [3.1.5](rel_notes315) | 03 Aug 2022| diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes312.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes312.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes312.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes312.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes314.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes314.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes314.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes314.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes315.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes315.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes315.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes315.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes32.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes32.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes32.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes32.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes321.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes321.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes321.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes321.mdx diff --git a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx b/product_docs/docs/postgis/3/01_release_notes/rel_notes342.mdx similarity index 97% rename from product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx rename to product_docs/docs/postgis/3/01_release_notes/rel_notes342.mdx index 73013eb708b..66c85c3f357 100644 --- a/product_docs/docs/postgis/3.2/01_release_notes/rel_notes342.mdx +++ b/product_docs/docs/postgis/3/01_release_notes/rel_notes342.mdx @@ -3,7 +3,7 @@ title: "PostGIS 3.4.2 release notes" navTitle: Version 3.4.2 --- -Released: 29 Feb 2024 +Released: 01 Apr 2024 EDB PostGIS is a PostgreSQL extension that allows you to store geographic information systems (GIS) objects in an EDB Postgres Advanced Server database. 
diff --git a/product_docs/docs/postgis/3.2/02_creating_extensions.mdx b/product_docs/docs/postgis/3/02_creating_extensions.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/02_creating_extensions.mdx rename to product_docs/docs/postgis/3/02_creating_extensions.mdx diff --git a/product_docs/docs/postgis/3.2/04_using_postgis.mdx b/product_docs/docs/postgis/3/04_using_postgis.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/04_using_postgis.mdx rename to product_docs/docs/postgis/3/04_using_postgis.mdx diff --git a/product_docs/docs/postgis/3.2/images/EDB_logo.png b/product_docs/docs/postgis/3/images/EDB_logo.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/EDB_logo.png rename to product_docs/docs/postgis/3/images/EDB_logo.png diff --git a/product_docs/docs/postgis/3.2/images/SBP_Installation_Files_Downloaded.png b/product_docs/docs/postgis/3/images/SBP_Installation_Files_Downloaded.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/SBP_Installation_Files_Downloaded.png rename to product_docs/docs/postgis/3/images/SBP_Installation_Files_Downloaded.png diff --git a/product_docs/docs/postgis/3.2/images/SBP_Selected_Packages.png b/product_docs/docs/postgis/3/images/SBP_Selected_Packages.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/SBP_Selected_Packages.png rename to product_docs/docs/postgis/3/images/SBP_Selected_Packages.png diff --git a/product_docs/docs/postgis/3.2/images/SBP_welcome.png b/product_docs/docs/postgis/3/images/SBP_welcome.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/SBP_welcome.png rename to product_docs/docs/postgis/3/images/SBP_welcome.png diff --git a/product_docs/docs/postgis/3.2/images/SB_PostGIS_Selection.png b/product_docs/docs/postgis/3/images/SB_PostGIS_Selection.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/SB_PostGIS_Selection.png rename to product_docs/docs/postgis/3/images/SB_PostGIS_Selection.png diff --git a/product_docs/docs/postgis/3.2/images/Unisntall2.png b/product_docs/docs/postgis/3/images/Unisntall2.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/Unisntall2.png rename to product_docs/docs/postgis/3/images/Unisntall2.png diff --git a/product_docs/docs/postgis/3.2/images/advanced_server_installation_details.png b/product_docs/docs/postgis/3/images/advanced_server_installation_details.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/advanced_server_installation_details.png rename to product_docs/docs/postgis/3/images/advanced_server_installation_details.png diff --git a/product_docs/docs/postgis/3.2/images/edb_logo.svg b/product_docs/docs/postgis/3/images/edb_logo.svg similarity index 100% rename from product_docs/docs/postgis/3.2/images/edb_logo.svg rename to product_docs/docs/postgis/3/images/edb_logo.svg diff --git a/product_docs/docs/postgis/3.2/images/installattion_complete.png b/product_docs/docs/postgis/3/images/installattion_complete.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/installattion_complete.png rename to product_docs/docs/postgis/3/images/installattion_complete.png diff --git a/product_docs/docs/postgis/3.2/images/installing_postgis.png b/product_docs/docs/postgis/3/images/installing_postgis.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/installing_postgis.png rename to product_docs/docs/postgis/3/images/installing_postgis.png diff --git 
a/product_docs/docs/postgis/3.2/images/language_selection.png b/product_docs/docs/postgis/3/images/language_selection.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/language_selection.png rename to product_docs/docs/postgis/3/images/language_selection.png diff --git a/product_docs/docs/postgis/3.2/images/pgadmin.png b/product_docs/docs/postgis/3/images/pgadmin.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/pgadmin.png rename to product_docs/docs/postgis/3/images/pgadmin.png diff --git a/product_docs/docs/postgis/3.2/images/postgis_installation.png b/product_docs/docs/postgis/3/images/postgis_installation.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/postgis_installation.png rename to product_docs/docs/postgis/3/images/postgis_installation.png diff --git a/product_docs/docs/postgis/3.2/images/postgis_installation_directory.png b/product_docs/docs/postgis/3/images/postgis_installation_directory.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/postgis_installation_directory.png rename to product_docs/docs/postgis/3/images/postgis_installation_directory.png diff --git a/product_docs/docs/postgis/3.2/images/postgis_pgadmin.png b/product_docs/docs/postgis/3/images/postgis_pgadmin.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/postgis_pgadmin.png rename to product_docs/docs/postgis/3/images/postgis_pgadmin.png diff --git a/product_docs/docs/postgis/3.2/images/postgis_welcome.png b/product_docs/docs/postgis/3/images/postgis_welcome.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/postgis_welcome.png rename to product_docs/docs/postgis/3/images/postgis_welcome.png diff --git a/product_docs/docs/postgis/3.2/images/ready_to_install.png b/product_docs/docs/postgis/3/images/ready_to_install.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/ready_to_install.png rename to product_docs/docs/postgis/3/images/ready_to_install.png diff --git a/product_docs/docs/postgis/3.2/images/selected.png b/product_docs/docs/postgis/3/images/selected.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/selected.png rename to product_docs/docs/postgis/3/images/selected.png diff --git a/product_docs/docs/postgis/3.2/images/selected_packages.png b/product_docs/docs/postgis/3/images/selected_packages.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/selected_packages.png rename to product_docs/docs/postgis/3/images/selected_packages.png diff --git a/product_docs/docs/postgis/3.2/images/spatial.png b/product_docs/docs/postgis/3/images/spatial.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/spatial.png rename to product_docs/docs/postgis/3/images/spatial.png diff --git a/product_docs/docs/postgis/3.2/images/spatial_extensions.png b/product_docs/docs/postgis/3/images/spatial_extensions.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/spatial_extensions.png rename to product_docs/docs/postgis/3/images/spatial_extensions.png diff --git a/product_docs/docs/postgis/3.2/images/stack.png b/product_docs/docs/postgis/3/images/stack.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/stack.png rename to product_docs/docs/postgis/3/images/stack.png diff --git a/product_docs/docs/postgis/3.2/images/uninstall1.png b/product_docs/docs/postgis/3/images/uninstall1.png similarity index 100% rename from 
product_docs/docs/postgis/3.2/images/uninstall1.png rename to product_docs/docs/postgis/3/images/uninstall1.png diff --git a/product_docs/docs/postgis/3.2/images/uninstall4final.png b/product_docs/docs/postgis/3/images/uninstall4final.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/uninstall4final.png rename to product_docs/docs/postgis/3/images/uninstall4final.png diff --git a/product_docs/docs/postgis/3.2/images/uninstallfinal.png b/product_docs/docs/postgis/3/images/uninstallfinal.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/uninstallfinal.png rename to product_docs/docs/postgis/3/images/uninstallfinal.png diff --git a/product_docs/docs/postgis/3.2/images/welcome.png b/product_docs/docs/postgis/3/images/welcome.png similarity index 100% rename from product_docs/docs/postgis/3.2/images/welcome.png rename to product_docs/docs/postgis/3/images/welcome.png diff --git a/product_docs/docs/postgis/3.2/index.mdx b/product_docs/docs/postgis/3/index.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/index.mdx rename to product_docs/docs/postgis/3/index.mdx diff --git a/product_docs/docs/postgis/3.2/installing/index.mdx b/product_docs/docs/postgis/3/installing/index.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/index.mdx rename to product_docs/docs/postgis/3/installing/index.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_ppc64le/index.mdx b/product_docs/docs/postgis/3/installing/linux_ppc64le/index.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_ppc64le/index.mdx rename to product_docs/docs/postgis/3/installing/linux_ppc64le/index.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_rhel_8.mdx b/product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_rhel_8.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_rhel_8.mdx rename to product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_rhel_8.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_rhel_9.mdx b/product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_rhel_9.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_rhel_9.mdx rename to product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_rhel_9.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_sles_12.mdx b/product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_sles_12.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_sles_12.mdx rename to product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_sles_12.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_sles_15.mdx b/product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_sles_15.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_ppc64le/postgis_sles_15.mdx rename to product_docs/docs/postgis/3/installing/linux_ppc64le/postgis_sles_15.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/index.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/index.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/index.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/index.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_centos_7.mdx 
b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_centos_7.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_centos_7.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_centos_7.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_debian_10.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_debian_10.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_debian_10.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_debian_10.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_debian_11.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_debian_11.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_debian_11.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_debian_11.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_other_linux_8.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_other_linux_8.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_other_linux_8.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_other_linux_8.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_other_linux_9.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_other_linux_9.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_other_linux_9.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_other_linux_9.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_7.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_7.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_7.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_7.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_8.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_8.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_8.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_8.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_9.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_9.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_rhel_9.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_rhel_9.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_sles_12.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_sles_12.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_sles_12.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_sles_12.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_sles_15.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_sles_15.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_sles_15.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_sles_15.mdx diff --git 
a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_ubuntu_20.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_ubuntu_20.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_ubuntu_20.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_ubuntu_20.mdx diff --git a/product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_ubuntu_22.mdx b/product_docs/docs/postgis/3/installing/linux_x86_64/postgis_ubuntu_22.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/linux_x86_64/postgis_ubuntu_22.mdx rename to product_docs/docs/postgis/3/installing/linux_x86_64/postgis_ubuntu_22.mdx diff --git a/product_docs/docs/postgis/3.2/installing/uninstalling.mdx b/product_docs/docs/postgis/3/installing/uninstalling.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/uninstalling.mdx rename to product_docs/docs/postgis/3/installing/uninstalling.mdx diff --git a/product_docs/docs/postgis/3.2/installing/upgrading.mdx b/product_docs/docs/postgis/3/installing/upgrading.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/upgrading.mdx rename to product_docs/docs/postgis/3/installing/upgrading.mdx diff --git a/product_docs/docs/postgis/3.2/installing/windows.mdx b/product_docs/docs/postgis/3/installing/windows.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/installing/windows.mdx rename to product_docs/docs/postgis/3/installing/windows.mdx diff --git a/product_docs/docs/postgis/3.2/supported_platforms.mdx b/product_docs/docs/postgis/3/supported_platforms.mdx similarity index 100% rename from product_docs/docs/postgis/3.2/supported_platforms.mdx rename to product_docs/docs/postgis/3/supported_platforms.mdx From 61d7423e1c234ce26d4dbd3264a6a706574670f8 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 8 Apr 2024 16:52:07 +0530 Subject: [PATCH 48/48] Update mar_2024_rel_notes.mdx --- .../biganimal/release/release_notes/mar_2024_rel_notes.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx index bcab45d734a..647dc1653a7 100644 --- a/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx +++ b/product_docs/docs/biganimal/release/release_notes/mar_2024_rel_notes.mdx @@ -7,8 +7,8 @@ BigAnimal's March 2024 includes the following enhancements and bugfixes: | Type | Description | |------|-------------| -| Enhancement | EDB Postgres Extended Server is now available in BigAnimal for single-node and High Availability in addition to Distributed High Availability clusters. | -| Enhancement | You can now take advantage of Transparent Data Encryption (TDE) for clusters running on EDB Postgres Advanced Server or EDB Postgres Extended Server versions 15 and newer in BigAnimal’s AWS account. With TDE, you can connect your own keys from AWS’s Key Management Service to encrypt your clusters at the database level in addition to the default volume-level encryption. | +| Enhancement | EDB Postgres Extended Server is now available in BigAnimal for single-node, high-availability, and Distributed High Availability clusters. 
| +| Enhancement | You can now use Transparent Data Encryption (TDE) for clusters running on EDB Postgres Advanced Server or EDB Postgres Extended Server versions 15 and later in BigAnimal’s AWS account. With TDE, you can connect your keys from AWS’s Key Management Service to encrypt your clusters at the database level in addition to the default volume-level encryption. | | Enhancement | BigAnimal Terraform provider v0.8.1 is now available. Learn more about what’s new [here](https://github.com/EnterpriseDB/terraform-provider-biganimal/releases/tag/v0.8.1) and download the provider [here](https://registry.terraform.io/providers/EnterpriseDB/biganimal/latest). | | Enhancement | BigAnimal CLI v3.6.0 is now available. Learn more about what’s new [here](https://cli.biganimal.com/versions/v3.6.0/). |