From 07b2cd4ae2a04212e2d621265d51353e98ef86fa Mon Sep 17 00:00:00 2001 From: yuki-tei <57980555+yuki-tei@users.noreply.github.com> Date: Tue, 8 Aug 2023 10:36:32 +0900 Subject: [PATCH 01/19] Update 03-quick_start.mdx Currently the newest EPAS major release is v15, so the pg-version-force option should be added to pgbackrest.conf to solve the issue below. https://github.com/pgbackrest/pgbackrest/issues/2032 --- .../supported-open-source/pgbackrest/03-quick_start.mdx | 1 + 1 file changed, 1 insertion(+) diff --git a/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx b/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx index 1a9204f5f5f..c874b5e4133 100644 --- a/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx +++ b/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx @@ -31,6 +31,7 @@ repo1-path=/var/lib/edb/as13/backups pg1-path=/var/lib/edb/as13/data pg1-user=enterprisedb pg1-port=5444 +pg-version-force=15 ``` For **PostgreSQL**: From bf8ab4c0485e14c4eafbdf0d4a180bf22cb80bbf Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Tue, 19 Sep 2023 18:23:49 +0100 Subject: [PATCH 02/19] Added notes on what to set pg-version-force to.
--- .../pgbackrest/03-quick_start.mdx | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx b/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx index c874b5e4133..30fb3d55514 100644 --- a/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx +++ b/advocacy_docs/supported-open-source/pgbackrest/03-quick_start.mdx @@ -25,23 +25,25 @@ For **EDB Postgres Advanced Server**: ```ini [global] -repo1-path=/var/lib/edb/as13/backups +repo1-path=/var/lib/edb/as15/backups [demo] -pg1-path=/var/lib/edb/as13/data +pg1-path=/var/lib/edb/as15/data pg1-user=enterprisedb pg1-port=5444 pg-version-force=15 ``` +The `pg-version-force` value should be set to the same major version number as the server reports when using `show server_version_num;` in psql. Only the first two digits are the major version. For example, 150000 is major version 15. + For **PostgreSQL**: ```ini [global] -repo1-path=/var/lib/pgsql/13/backups +repo1-path=/var/lib/pgsql/15/backups [demo] -pg1-path=/var/lib/pgsql/13/data +pg1-path=/var/lib/pgsql/15/data pg1-user=postgres pg1-port=5432 ``` From 59d390f8f22211768471e34ad060840b74d5d6d9 Mon Sep 17 00:00:00 2001 From: kunliuedb <95676424+kunliuedb@users.noreply.github.com> Date: Tue, 2 Jan 2024 10:52:25 +0800 Subject: [PATCH 03/19] Update poolers.mdx --- product_docs/docs/biganimal/release/overview/poolers.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/overview/poolers.mdx b/product_docs/docs/biganimal/release/overview/poolers.mdx index 3c41b903c11..45a1f8ade55 100644 --- a/product_docs/docs/biganimal/release/overview/poolers.mdx +++ b/product_docs/docs/biganimal/release/overview/poolers.mdx @@ -7,7 +7,7 @@ EDB PgBouncer can manage your connections to Postgres databases and help your wo BigAnimal provisions up to three instances per EDB PgBouncer-enabled cluster to ensure that performance is
unaffected, so each availability zone receives its own instance of EDB PgBouncer. !!!Note - Currently, you can't enable EDB PgBouncer when using BigAnimal's cloud account or when creating a distributed high-availability cluster using your cloud account. + Currently, you can't enable EDB PgBouncer when creating a distributed high-availability cluster using your cloud account. If you want to deploy and manage PgBouncer outside of BigAnimal, see the [How to configure EDB PgBouncer with BigAnimal cluster](https://support.biganimal.com/hc/en-us/articles/4848726654745-How-to-configure-PgBouncer-with-BigAnimal-Cluster) knowledge-base article. From 8f5e9e49f60731f7daa79ef826c52342c4adb769 Mon Sep 17 00:00:00 2001 From: kunliuedb <95676424+kunliuedb@users.noreply.github.com> Date: Tue, 2 Jan 2024 11:09:32 +0800 Subject: [PATCH 04/19] Update poolers.mdx --- product_docs/docs/biganimal/release/overview/poolers.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/overview/poolers.mdx b/product_docs/docs/biganimal/release/overview/poolers.mdx index 45a1f8ade55..2c93116214b 100644 --- a/product_docs/docs/biganimal/release/overview/poolers.mdx +++ b/product_docs/docs/biganimal/release/overview/poolers.mdx @@ -7,7 +7,7 @@ EDB PgBouncer can manage your connections to Postgres databases and help your wo BigAnimal provisions up to three instances per EDB PgBouncer-enabled cluster to ensure that performance is unaffected, so each availability zone receives its own instance of EDB PgBouncer. !!!Note - Currently, you can't enable EDB PgBouncer when creating a distributed high-availability cluster using your cloud account. + Currently, you can't enable EDB PgBouncer when creating a distributed high-availability cluster. 
If you want to deploy and manage PgBouncer outside of BigAnimal, see the [How to configure EDB PgBouncer with BigAnimal cluster](https://support.biganimal.com/hc/en-us/articles/4848726654745-How-to-configure-PgBouncer-with-BigAnimal-Cluster) knowledge-base article. From de18e390af36576444d5bee0947d86d3ba0f7fd6 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 9 Jan 2024 13:08:24 +0530 Subject: [PATCH 05/19] BigAnimal - Dedicated Wal storage Added the content as per UPM-15247 --- .../release/getting_started/creating_a_cluster/index.mdx | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 36f876bf51f..ea76be5ec43 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -96,7 +96,12 @@ The following options aren't available when creating your cluster: !!!tip To maximize your disk size for AWS, select R5b as your instance and then io2 Block Express as your storage to get a maximum disk size of 64 TB and 256,000 IOPS. -1. In the **Storage** section, from the **Volume Type** list, select your volume type. +1. In the **Storage** section: + + By default, the **Database Storage** volume includes the storage volume for Write-Ahead (WAL) Logs. If your database is OLTP and have more WAL log generation then you can allocate separate storage volume for the WAL logs. To allocate separate storage volume for WAL logs, select the check-box before **Use a separate storage volume for Write-Ahead Logs**. Select the Volume Type, Size, IOPS, and Disk Throughput separately for **Database Storage** and **Write-Ahead Logs Storage**. + + From the **Volume Type** list, select your volume type. 
+ - For Azure, in **Volume Type**, select **Premium SSD** or **Ultra Disk**. Compared to Premium SSD volumes, ultra disks offer lower-latency, high-performance options and direct control over your disk's input/output operations per second (IOPS). For BigAnimal, we recommend using ultra disks for workloads that require the most demanding performance. See [Using Azure ultra disks](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal) for more information. - For Premium SSD, in **Volume Properties**, select the type and amount of storage needed for your cluster. See [Azure Premium SSD storage types](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds) for more information. @@ -118,7 +123,7 @@ The following options aren't available when creating your cluster: In **Volume Properties**, select the disk size for your cluster, and configure the IOPS. - + 2. ##### Network, Logs, & Telemetry section In **Connectivity Type**, specify whether to use private or public networking. Networking is set to **Public** by default. Public means that any client can connect to your cluster’s public IP address over the internet. Optionally, you can limit traffic to your public cluster by specifying an IP allowlist, which allows access only to certain blocks of IP addresses. To limit access, add one or more classless inter-domain routing (CIDR) blocks in the **IP Allowlists** section. CIDR is a method for allocating IP addresses and IP routing to a whole network or subnet. If you have any CIDR block entries, access is limited to those IP addresses. If none are specified, all network traffic is allowed. 
From d74999adfba31cf52528fe38079cac57b2c4d269 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 15 Jan 2024 11:08:05 -0500 Subject: [PATCH 06/19] Install procedures for sslutils and index advisor --- .../15/epas_security_guide/04_sslutils.mdx | 24 ++++++++++++++- .../02_index_advisor/index.mdx | 1 + .../installing_index_advisor.mdx | 29 +++++++++++++++++++ 3 files changed, 53 insertions(+), 1 deletion(-) create mode 100644 product_docs/docs/epas/15/managing_performance/02_index_advisor/installing_index_advisor.mdx diff --git a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx index 095b96135e9..1d07c9d442c 100644 --- a/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx +++ b/product_docs/docs/epas/15/epas_security_guide/04_sslutils.mdx @@ -9,7 +9,29 @@ description: "The sslutils Postgres extension provides SSL certificate generatio ## Installing the extension -You install `sslutils` by using the `edb-as<xx>-server-sslutils` RPM package, where `<xx>` is the EDB Postgres Advanced Server version number. +Install `sslutils` using the following command: + +```shell + sudo <package-manager> -y install edb-as15-server-sslutils + ``` + + Where: + + - `<package-manager>` is the package manager used with your operating system: + + | Package manager | Operating system | | --------------- | -------------------------------- | | dnf | RHEL 8/9 and derivatives | | yum | RHEL 7 and derivatives, CentOS 7 | | zypper | SLES | | apt-get | Debian 10/11 and derivatives | + + + For example, to install `sslutils` on a RHEL 9 platform: + + ```shell + sudo dnf -y install edb-as15-server-sslutils + ``` Each parameter in the function’s parameter list is described by `parameter n`, where `n` refers to the `nth` ordinal position (for example, first, second, or third) in the function’s parameter list.
diff --git a/product_docs/docs/epas/15/managing_performance/02_index_advisor/index.mdx b/product_docs/docs/epas/15/managing_performance/02_index_advisor/index.mdx index 8f6dca2785e..29adff0892c 100644 --- a/product_docs/docs/epas/15/managing_performance/02_index_advisor/index.mdx +++ b/product_docs/docs/epas/15/managing_performance/02_index_advisor/index.mdx @@ -6,6 +6,7 @@ description: "How to use the Index Advisor utility to help determine the columns navigation: - index_advisor_overview - 05_index_advisor_limitations + - installing_index_advisor - 02_index_advisor_configuration - 03_using_index_advisor - 04_reviewing_the_index_advisor_recommendations diff --git a/product_docs/docs/epas/15/managing_performance/02_index_advisor/installing_index_advisor.mdx b/product_docs/docs/epas/15/managing_performance/02_index_advisor/installing_index_advisor.mdx new file mode 100644 index 00000000000..756d3ed6110 --- /dev/null +++ b/product_docs/docs/epas/15/managing_performance/02_index_advisor/installing_index_advisor.mdx @@ -0,0 +1,29 @@ +--- +title: "Installing Index Advisor" +--- + +Install Index Advisor using the following command: + +```shell + sudo <package-manager> -y install edb-as15-server-indexadvisor + ``` + + Where: + + - `<package-manager>` is the package manager used with your operating system: + + | Package manager | Operating system | | --------------- | -------------------------------- | | dnf | RHEL 8/9 and derivatives | | yum | RHEL 7 and derivatives, CentOS 7 | | apt-get | Debian 10/11 and derivatives | + + + For example, to install Index Advisor on a RHEL 9 platform: + + ```shell + sudo dnf -y install edb-as15-server-indexadvisor + ``` +!!! Note + Index Advisor is not available on the SLES operating system.
+ From b0c18944476b2730fb4775f11aacc6339cd4635c Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 16 Jan 2024 09:38:41 +0530 Subject: [PATCH 07/19] Edits done as per suggestion from Amrita --- .../release/getting_started/creating_a_cluster/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index ea76be5ec43..3fdbff31fab 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -98,7 +98,7 @@ The following options aren't available when creating your cluster: 1. In the **Storage** section: - By default, the **Database Storage** volume includes the storage volume for Write-Ahead (WAL) Logs. If your database is OLTP and have more WAL log generation then you can allocate separate storage volume for the WAL logs. To allocate separate storage volume for WAL logs, select the check-box before **Use a separate storage volume for Write-Ahead Logs**. Select the Volume Type, Size, IOPS, and Disk Throughput separately for **Database Storage** and **Write-Ahead Logs Storage**. + By default, the **Database Storage** volume stores the Postgres data and the Write-Ahead (WAL) Logs together. If you want to improve write performance for WAL files, you can allocate a separate storage volume for the WAL files. To allocate a separate storage volume for WAL files, select the **Use a separate storage volume for Write-Ahead Logs** check box. Then select the Volume Type, Size, IOPS, and Disk Throughput separately for **Database Storage** and **Write-Ahead Logs Storage**. If you allocate a separate storage volume for the WAL files, you pay cloud infrastructure costs for the second volume.
From the **Volume Type** list, select your volume type. From 31071ca1104c5d7ac40266f7e6063aa8017ccbac Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Mon, 22 Jan 2024 17:48:09 +0530 Subject: [PATCH 08/19] updated the note as per discussion with Simon on PEM-4665 --- product_docs/docs/pem/9/profiling_workloads/index_advisor.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pem/9/profiling_workloads/index_advisor.mdx b/product_docs/docs/pem/9/profiling_workloads/index_advisor.mdx index 5008195bccd..90ea9cdbbb6 100644 --- a/product_docs/docs/pem/9/profiling_workloads/index_advisor.mdx +++ b/product_docs/docs/pem/9/profiling_workloads/index_advisor.mdx @@ -7,7 +7,7 @@ redirects: --- !!! Note "Important" -Index Advisor isn't supported for EDB Postgres Advanced Server and PostgreSQL version 11 and later. +Index Advisor isn't supported for EDB Postgres Advanced Server and PostgreSQL version 16 and later. !!! Index Advisor is distributed with EDB Postgres Advanced Server. Index Advisor works with SQL Profiler by examining collected SQL statements and making indexing recommendations for any underlying tables to improve SQL response time. Index Advisor works on all DML (INSERT, UPDATE, DELETE) and SELECT statements that are invoked by a superuser. 
From 2b92b263ead0dd52ddd02affa651c87e781fa9a5 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 16 Jan 2024 15:23:08 +0530 Subject: [PATCH 09/19] BigAnimal - Third-party integrations as per UPM-27644 Added the topic under Monitoring and logging --- .../third_party_integrations/index.mdx | 22 +++++++++++++++++++ 1 file changed, 22 insertions(+) create mode 100644 product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx new file mode 100644 index 00000000000..a9cc9997528 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -0,0 +1,22 @@ +--- +title: "Third party integrations" +--- + +BigAnimal provides support for third-party integrations for monitoring and logging. The third-party tools available in BigAnimal are: + +- Datadog +- New relic + +The third-party integrations are allowed at the project level. Users can't turn monitoring integrations on or off for the individual clusters. An Org admin or a Project owner can set up an integration. Only one monitoring integration can be set up per project. + +By default, all the third-party integrations are disabled. Enable it using the **Integrations** tab after creating the project. All the metrics collected by these tools are displayed in BigAnimal *Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. + +The Project Owners, Editors, and Viewers can view the active integrations and can link to the third party service from the BigAnimal UI. + +To enable the third-party integrations: +1. Select an existing project from the **Projects** page. +2. 
Select the **Settings** gear next to the project. +3. Go to **Integrations** tab +4. Select any one of the available integrations: + - **Datadog** - If you select Datadog, a window pops-up. Provide the **Datadog API Key**, the **site URL** and select save. + - **New Relic** - It you select New Relic, a window pops-up. Provide the *New Relic API key**, the **other** details and select the **Test Connection**. If the connection test is successful, then select save. From 78698970221802e5c263c286efc00a02f5a43aa8 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 17 Jan 2024 16:19:25 +0530 Subject: [PATCH 10/19] Added more content --- .../third_party_integrations/datadog.mdx | 34 +++++++++++++++++++ .../third_party_integrations/index.mdx | 20 ++++------- .../third_party_integrations/newrelic.mdx | 25 ++++++++++++++ 3 files changed, 66 insertions(+), 13 deletions(-) create mode 100644 product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx new file mode 100644 index 00000000000..19efc1ffd37 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx @@ -0,0 +1,34 @@ +--- +title: "Datadog" +--- + +Datadog integration is based on the OpenTelemetry Collector and the Datadog plugin bundled with the OpenTelemetry collector. Once enabled, BigAnimal automatically configures data planes to send the Telemetry to your Datadog account via Datadog's cloud receive endpoint. 
+ +## Pre-requisites + +The Datadog account is required to enable this integration. + +## Enabling Datadog + +To enable the **Datadog** integrations: + +1. Select an existing project from the **Projects** page. +2. Go to **Settings** on the left-side navigation. +3. Select **Integrations** from the **Settings** drop down list. +4. Select **Datadog**, a window pops-up to provide the details: + - **Datadog API Key** — Provide an API Key. For more details, see [Datadog API Key](https://docs.datadoghq.com/account_management/api-app-keys/) + - **Datadog Site Name** — Provide the [website name](https://www.datadoghq.com/) + - **Datadog Site Parameter** — Provide the [site URL](https://docs.datadoghq.com/getting_started/site/) + - **Datadog API Key ID** — Provide the API Key ID + + Provide all the details and then select **Save**. + +## Metrics + +A subset of the OpenTelemetry Collector’s metrics from the **hostmetrics** and **kubeletstats** receivers, plus BigAnimal custom metrics for the monitored postgres instance + +## Cost + +- [Datadog pricing list](https://www.datadoghq.com/pricing/list/) +- [Custom metrics billing](https://docs.datadoghq.com/account_management/billing/custom_metrics/?tab=countrate#counting-custom-metrics) +- [Usage metrics billing](https://docs.datadoghq.com/account_management/billing/usage_metrics/) \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx index a9cc9997528..536cc6d0b65 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -2,21 +2,15 @@ title: "Third party integrations" --- -BigAnimal provides support for third-party integrations for 
monitoring and logging. The third-party tools available in BigAnimal are: +BigAnimal provides support for third-party monitoring integrations for both Bring your own account and BigAnimal's hosted. -- Datadog -- New relic +The integrations can be done at the project level. You can't turn monitoring integrations on or off for the individual clusters. An Org admin or a Project owner can set up an integration. Only one integration can be set up per project. -The third-party integrations are allowed at the project level. Users can't turn monitoring integrations on or off for the individual clusters. An Org admin or a Project owner can set up an integration. Only one monitoring integration can be set up per project. - -By default, all the third-party integrations are disabled. Enable it using the **Integrations** tab after creating the project. All the metrics collected by these tools are displayed in BigAnimal *Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. +By default, all the integrations are disabled. Enable it using the **Integrations** tab after creating the project. All the metrics collected from all the clusters in the Project is sent to the integrated tool and are displayed in BigAnimal *Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. The Project Owners, Editors, and Viewers can view the active integrations and can link to the third party service from the BigAnimal UI. -To enable the third-party integrations: -1. Select an existing project from the **Projects** page. -2. Select the **Settings** gear next to the project. -3. Go to **Integrations** tab -4. Select any one of the available integrations: - - **Datadog** - If you select Datadog, a window pops-up. Provide the **Datadog API Key**, the **site URL** and select save. - - **New Relic** - It you select New Relic, a window pops-up. 
Provide the *New Relic API key**, the **other** details and select the **Test Connection**. If the connection test is successful, then select save. +The third-party integrations available in BigAnimal are: + +- [Datadog](./datadog) +- [New Relic](./newrelic) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx new file mode 100644 index 00000000000..26dacc92c9f --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx @@ -0,0 +1,25 @@ +--- +title: "New Relic" +--- + +The New Relic integration is based on the OpenTelemetry Collector, shipping the metrics over the standard OTLP protocol to the New Relic cloud endpoint. + +## Pre-requisites + +The New Relic account is required to enable this integration. + +## Enable New Relic + +To enable the **New Relic** integration: +1. Select an existing project from the **Projects** page. +2. Go to **Settings** on the left-side navigation. +3. Select **Integrations** from the **Settings** drop down list. +4. Select **New Relic**, a window pops up to provide the details: + - *New Relic API key** + - **New Relic Account ID** + - **New Relic API Key Name** + Provide all the details and select **Save**.
+ +## Metrics + +## Costs \ No newline at end of file From 55670d6954e71e9b6ee230ea843e6be498eba03b Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 23 Jan 2024 16:24:19 +0530 Subject: [PATCH 11/19] Updated content as per review comments from Craig and Amrita --- .../third_party_integrations/datadog.mdx | 16 +++++++++++----- .../third_party_integrations/index.mdx | 8 ++++---- .../third_party_integrations/newrelic.mdx | 15 +++++++++++---- 3 files changed, 26 insertions(+), 13 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx index 19efc1ffd37..9c8f1fa394d 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx @@ -6,7 +6,7 @@ Datadog integration is based on the OpenTelemetry Collector and the Datadog plug ## Pre-requisites -The Datadog account is required to enable this integration. +A pre-existing Datadog account is required to use this integration. ## Enabling Datadog @@ -16,13 +16,19 @@ To enable the **Datadog** integrations: 2. Go to **Settings** on the left-side navigation. 3. Select **Integrations** from the **Settings** drop down list. 4. Select **Datadog**, a window pops-up to provide the details: - - **Datadog API Key** — Provide an API Key.
For more details, see [Datadog API Key](https://docs.datadoghq.com/account_management/api-app-keys/) - - **Datadog Site Name** — Provide the [website name](https://www.datadoghq.com/) - - **Datadog Site Parameter** — Provide the [site URL](https://docs.datadoghq.com/getting_started/site/) - - **Datadog API Key ID** — Provide the API Key ID + - **Datadog API Key** — Provide an API Key. For more details, see [Datadog API Key](https://docs.datadoghq.com/account_management/api-app-keys/). The API Key is sensitive and can't be retrieved once saved. + - **Datadog Site Name** — Provide the Datadog site name by selecting from the available options in the drop-down list. The telemetry goes to this chosen Datadog site. + - **Datadog Site URL** — The Datadog site URL gets selected automatically based on the Datadog site name you selected. + - **Datadog API Key ID** — Provide the API Key ID, which is stored as an identifier for the key. This identifier is used for future reference to check the configuration details. Provide all the details and then select **Save**. +Once you enable this integration, it sends BigAnimal telemetry to your Datadog account. + +!!! note "Important" +Generate a new API Key in Datadog to enable this integration in BigAnimal. This API key must be specific to this integration and shouldn't be shared. Whenever you disable this integration, revoke the API Key using the Datadog UI. Revoking the key disables Datadog ingestion and billable usage from the BigAnimal integration. This doesn't affect other services using the Datadog API. +!!!
+ ## Metrics A subset of the OpenTelemetry Collector’s metrics from the **hostmetrics** and **kubeletstats** receivers, plus BigAnimal custom metrics for the monitored postgres instance diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx index 536cc6d0b65..afffb8d47f1 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -1,14 +1,14 @@ --- -title: "Third party integrations" +title: "Third party monitoring integrations" --- BigAnimal provides support for third-party monitoring integrations for both Bring your own account and BigAnimal's hosted. -The integrations can be done at the project level. You can't turn monitoring integrations on or off for the individual clusters. An Org admin or a Project owner can set up an integration. Only one integration can be set up per project. +Monitoring integrations are configured at the project level in BigAnimal. They can't be turned on or off for individual clusters. An Org admin or a Project owner can set up an integration. Only one integration can be set up per project. -By default, all the integrations are disabled. Enable it using the **Integrations** tab after creating the project. All the metrics collected from all the clusters in the Project is sent to the integrated tool and are displayed in BigAnimal *Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. +By default, all the integrations are disabled. Enable them using the **Integrations** tab after creating the project.
-The Project Owners, Editors, and Viewers can view the active integrations and can link to the third party service from the BigAnimal UI. +All the metrics collected from all the clusters in the Project are sent to the integrated tool and are displayed in BigAnimal's **Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. The third-party integrations available in BigAnimal are: - [Datadog](./datadog) - [New Relic](./newrelic) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx index 26dacc92c9f..9d276c5b16d 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx @@ -6,7 +6,7 @@ The New Relic integration is based on the OpenTelemetry Collector, shipping the ## Pre-requisites -The New Relic account is required to enable this integration. +A pre-existing New Relic account is required to use this integration. ## Enable New Relic @@ -15,11 +15,18 @@ To enable the **New Relic** integration: 2. Go to **Settings** on the left-side navigation. 3. Select **Integrations** from the **Settings** drop down list. 4. Select **New Relic**, a window pops up to provide the details: - - *New Relic API key** - - **New Relic Account ID** - - **New Relic API Key Name** + - **New Relic API key** - Provide an API Key. For more details, see [New Relic API Key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/). The API Key is sensitive and can't be retrieved once saved. + - **New Relic API Key Name** - Provide the API Key name, which is stored as an identifier for the key. This identifier is used for future reference to check the configuration details.
+ - **New Relic Account ID** - Provide your New Relic account ID. + Provide all the details and select **Save**. +Once you enable this integration, it sends BigAnimal telemetry to your New Relic account. + +!!! note "Important" +Generate a new API License Key in New Relic to enable this integration in BigAnimal. This API key must be specific to this integration and shouldn't be shared. Whenever you disable this integration, revoke the API Key using the New Relic UI. Revoking the key disables New Relic ingestion and billable usage from the BigAnimal integration. This doesn't affect other services using the New Relic API. +!!! + ## Metrics ## Costs \ No newline at end of file From bfdaa6dc9d6d6ef0e1c08d0606dc022ffdd604a2 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 23 Jan 2024 12:19:54 -0500 Subject: [PATCH 12/19] Verbiage for Costs and Metrics for Datadog and New Relic --- .../third_party_integrations/datadog.mdx | 22 ++++++++++++++- .../third_party_integrations/newrelic.mdx | 27 ++++++++++++++++++- 2 files changed, 47 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx index 9c8f1fa394d..d89213df8d9 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx @@ -31,10 +31,30 @@ Generate a new API Key in Datadog to enable this integration in BigAnimal.
This ## Metrics -A subset of the OpenTelemetry Collector’s metrics from the **hostmetrics** and **kubeletstats** receivers, plus BigAnimal custom metrics for the monitored postgres instance +BigAnimal sends a subset of the OpenTelemetry Collector's metrics from the hostmetrics and kubeletstats receivers, plus BigAnimal custom metrics for the monitored Postgres instance. + +You can see a list of metrics in the Datadog user interface, along with the tags for each metric. + +To see the list of metrics, go to **Metrics > Summary**. Then select a specific metric to see the tags for that metric. + +The set of metrics delivered to DataDog is subject to change. Metrics with names that begin with `postgres.preview.` or `biganimal.preview.` are likely to be renamed or removed in a future release. Other metrics may also be renamed, added, or removed to better integrate into the DataDog platform. ## Cost +After enabling the BigAnimal telemetry integration, check your billable Datadog usage and continue to monitor it over time. + +Be aware of the following cost considerations: + +* *You are responsible for all costs* charged to your Datadog account by telemetry sent by the BigAnimal Datadog integration. Charges are based on usage but they could even result from BigAnimal errors or oversights. If you do not accept this responsibility, do not enable the Datadog integration. +* Datadog bills for each monitored Postgres node (including any replicas) as a "monitored host" in the your Datadog plan. See the [Datadog price list](https://www.datadoghq.com/pricing/list/) for current pricing per monitored host. +* Datadog *may* bill for each Postgres container and monitoring agent container running on each monitored Postgres node. See the [Datadog price list](https://www.datadoghq.com/pricing/list/) for current pricing per monitored container. +* Datadog counts some of the metrics sent by BigAnimal as custom metrics. 
Datadog plans include a limited number of free unique custom metrics dimensions per host. Additional metrics dimensions above the free limit are billable at a rate set in the Datadog price list. See your [Datadog plan price list](https://www.datadoghq.com/pricing/list/) for details. Although BigAnimal's telemetry integration for Datadog tries to limit custom metrics delivery to minimize additional billable Datadog usage, you should still expect some billable custom metrics usage. + +To disable the Datadog integration for BigAnimal and ensure no further costs are incurred on your Datadog account, you must revoke the API key provided to BigAnimal. Disabling the integration on the BigAnimal Portal is not sufficient. + +Currently, Datadog does not offer any mechanism for configuring a usage limit or billing cap for monitored hosts, monitored containers, or custom metrics. The Datadog [metrics without limits feature](https://docs.datadoghq.com/metrics/metrics-without-limits/) can limit cardinality-based billing for custom metrics, but it enables ingestion-based billing instead, so the overall price may actually be greater. You must actively monitor and be alert to your Datadog usage-based billing using the Datadog-provided [usage metrics](https://docs.datadoghq.com/account_management/billing/usage_metrics/), such as `datadog.estimated_usage.hosts`, `datadog.estimated_usage.containers`, and `datadog.estimated_usage.metrics`.custom. 
+ + - [Datadog pricing list](https://www.datadoghq.com/pricing/list/) - [Custom metrics billing](https://docs.datadoghq.com/account_management/billing/custom_metrics/?tab=countrate#counting-custom-metrics) - [Usage metrics billing](https://docs.datadoghq.com/account_management/billing/usage_metrics/) \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx index 9d276c5b16d..5bce2eab2a8 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx @@ -29,4 +29,29 @@ Generate a new API License Key in New Relic to enable this integration in BigAni ## Metrics -## Costs \ No newline at end of file +BigAnimal sends a subset of the OpenTelemetry Collector's metrics from the hostmetrics and kubeletstats receivers, plus BigAnimal custom metrics for the monitored Postgres instance. + +You can see a list of metrics in the New Relic user interface, along with dimensions for each metric. + +To see a list of metrics, go to **Metrics & Events**. Then select a specific metric to see the dimensions sent for that metric. + +The set of metrics delivered to New Relic is subject to change. Metrics with names that begin with `postgres.preview.` or `biganimal.preview.` may be renamed or removed in a future release. Other metrics may also be renamed, added, or removed to better integrate into the New Relic platform. + +## Cost + +After enabling the BigAnimal telemetry integration, check your billable New Relic usage and continue to monitor it over time. 
+ +Be aware of the following important cost considerations: + +* *You are responsible for all costs* charged to your New Relic account by telemetry sent by the BigAnimal New Relic integration. Charges are based on usage but they could even result from BigAnimal errors or oversights. If you do not accept this responsibility, do not enable the New Relic integration. + +* New Relic bills usage by bytes ingested and for data retention. You must monitor and configure alerts on your New Relic data ingestion to reduce the risk of unexpectedly large usage bills. Also check your retention settings. + +New Relic has features to limit usage and ingestion. Customers should review their limits before enabling the BigAnimal integration. In the New Relic user interface, see **Administration>Data management>Limits**. Also refer to the following related topics in the New Relic documentation: + +* [Understand and manage data ingest](https://docs.newrelic.com/docs/data-apis/manage-data/manage-data-coming-new-relic/) +* [Data ingest: Billing and rules](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing/) +* [Understand New Relic data limits](https://docs.newrelic.com/docs/data-apis/manage-data/view-system-limits/) +* [Get more detail about your data limits](https://docs.newrelic.com/docs/data-apis/manage-data/query-limits/) + +To disable the New Relic integration for BigAnimal and ensure no further costs are incurred on your New Relic account, you must revoke the API key provided to BigAnimal. Disabling the integration on the BigAnimal Portal is not sufficient. 
From 08b74400efa054ea444d4824927de31e00b329b1 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Wed, 24 Jan 2024 16:41:41 +0530 Subject: [PATCH 13/19] minor fix as per the review comments --- .../third_party_integrations/datadog.mdx | 4 ++-- .../third_party_integrations/index.mdx | 4 ++-- .../third_party_integrations/newrelic.mdx | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx index d89213df8d9..bff30e3b6c2 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/datadog.mdx @@ -52,9 +52,9 @@ Be aware of the following cost considerations: To disable the Datadog integration for BigAnimal and ensure no further costs are incurred on your Datadog account, you must revoke the API key provided to BigAnimal. Disabling the integration on the BigAnimal Portal is not sufficient. -Currently, Datadog does not offer any mechanism for configuring a usage limit or billing cap for monitored hosts, monitored containers, or custom metrics. The Datadog [metrics without limits feature](https://docs.datadoghq.com/metrics/metrics-without-limits/) can limit cardinality-based billing for custom metrics, but it enables ingestion-based billing instead, so the overall price may actually be greater. You must actively monitor and be alert to your Datadog usage-based billing using the Datadog-provided [usage metrics](https://docs.datadoghq.com/account_management/billing/usage_metrics/), such as `datadog.estimated_usage.hosts`, `datadog.estimated_usage.containers`, and `datadog.estimated_usage.metrics`.custom. 
- +Currently, Datadog does not offer any mechanism for configuring a usage limit or billing cap for monitored hosts, monitored containers, or custom metrics. The Datadog [metrics without limits feature](https://docs.datadoghq.com/metrics/metrics-without-limits/) can limit cardinality-based billing for custom metrics, but it enables ingestion-based billing instead, so the overall price may actually be greater. You must actively monitor and be alert to your Datadog usage-based billing using the Datadog-provided [usage metrics](https://docs.datadoghq.com/account_management/billing/usage_metrics/), such as `datadog.estimated_usage.hosts`, `datadog.estimated_usage.containers`, and `datadog.estimated_usage.metrics.custom`. +For more information, see also: - [Datadog pricing list](https://www.datadoghq.com/pricing/list/) - [Custom metrics billing](https://docs.datadoghq.com/account_management/billing/custom_metrics/?tab=countrate#counting-custom-metrics) - [Usage metrics billing](https://docs.datadoghq.com/account_management/billing/usage_metrics/) \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx index afffb8d47f1..e36bd87a88f 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/index.mdx @@ -2,13 +2,13 @@ title: "Third party monitoring integrations" --- -BigAnimal provides support for third-party monitoring integrations for both Bring your own account and BigAnimal's hosted. +BigAnimal provides support for third-party monitoring integrations for both using your own account and BigAnimal's cloud account. Monitoring integrations are configured at project level in BigAnimal. 
It can't be turned on or off for the individual clusters. An Org admin or a Project owner can set up an integration. Only one integration can be set up per project. By default, all the integrations are disabled. Enable it using the **Integrations** tab after creating the project. -All the metrics collected from all the clusters in the Project are sent to the integrated tool and are displayed in BigAnimal's *Monitoring and logging** tab using PEMx. The collected logs are exported to the object storage by default. +All the metrics collected from all the clusters in the Project are sent to the integrated tool and are displayed in BigAnimal's [**Monitoring and logging** tab using PEMx](../monitoring_using_pemx). The collected logs are exported to the object storage by default. The third-party integrations available in BigAnimal are: diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx index 5bce2eab2a8..9db753c2e4f 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging/third_party_integrations/newrelic.mdx @@ -47,7 +47,7 @@ Be aware of the following important cost considerations: * New Relic bills usage by bytes ingested and for data retention. You must monitor and configure alerts on your New Relic data ingestion to reduce the risk of unexpectedly large usage bills. Also check your retention settings. -New Relic has features to limit usage and ingestion. Customers should review their limits before enabling the BigAnimal integration. In the New Relic user interface, see **Administration>Data management>Limits**. 
Also refer to the following related topics in the New Relic documentation: +New Relic has features to limit usage and ingestion. Customers should review their limits before enabling the BigAnimal integration. In the New Relic user interface, see **Administration > Data management > Limits**. Also refer to the following related topics in the New Relic documentation: * [Understand and manage data ingest](https://docs.newrelic.com/docs/data-apis/manage-data/manage-data-coming-new-relic/) * [Data ingest: Billing and rules](https://docs.newrelic.com/docs/accounts/accounts-billing/new-relic-one-pricing-billing/data-ingest-billing/) From dec27db992672663086485143f365d4772924742 Mon Sep 17 00:00:00 2001 From: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Date: Tue, 30 Jan 2024 10:40:06 +0530 Subject: [PATCH 14/19] Added a point as per feedback from dhilipkumar --- .../release/getting_started/creating_a_cluster/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 3fdbff31fab..8a47793ec79 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -98,7 +98,7 @@ The following options aren't available when creating your cluster: 1. In the **Storage** section: - By default, the **Database Storage** volume stores the Postgres data and the Write-Ahead (WAL) Logs together. If you want to improve write performance for WAL files, you can allocate separate storage volume for the WAL files. To allocate separate storage volume for WAL files, select the check-box before **Use a separate storage volume for Write-Ahead Logs**. Then select the Volume Type, Size, IOPS, and Disk Throughput separately for **Database Storage** and **Write-Ahead Logs Storage**. 
If you allocate separate storage volume for the WAL files, you have to pay cloud infrastructure costs for the second volume. + By default, the **Database Storage** volume stores the Postgres data and the Write-Ahead (WAL) Logs together. If you want to improve write performance for WAL files, you can allocate separate storage volume for the WAL files. To allocate separate storage volume for WAL files, select the check-box before **Use a separate storage volume for Write-Ahead Logs**. Then select the Volume Type, Size, IOPS, and Disk Throughput separately for **Database Storage** and **Write-Ahead Logs Storage**. If you allocate separate storage volume for the WAL files, you have to pay cloud infrastructure costs for the second volume. Once separate storage volume is allocated for WAL files, it can't be removed from the cluster settings later on. From the **Volume Type** list, select your volume type. From d438b2c4347161e243fec84d831c952081f43ce1 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jan 2024 12:46:08 +0000 Subject: [PATCH 15/19] First commit of release notes generator Signed-off-by: Dj Walker-Morgan --- tools/user/import/bareleasenotes/barelease.js | 151 ++++++++++ .../import/bareleasenotes/package-lock.json | 271 ++++++++++++++++++ tools/user/import/bareleasenotes/package.json | 18 ++ 3 files changed, 440 insertions(+) create mode 100644 tools/user/import/bareleasenotes/barelease.js create mode 100644 tools/user/import/bareleasenotes/package-lock.json create mode 100644 tools/user/import/bareleasenotes/package.json diff --git a/tools/user/import/bareleasenotes/barelease.js b/tools/user/import/bareleasenotes/barelease.js new file mode 100644 index 00000000000..c807ac5e013 --- /dev/null +++ b/tools/user/import/bareleasenotes/barelease.js @@ -0,0 +1,151 @@ +import fetch from "node-fetch"; +import fs from "fs"; +import yargs from "yargs"; +import { hideBin } from "yargs/helpers"; + +function getMonthName(monthNumber) { + const monthNames = [ + 
"January", + "February", + "March", + "April", + "May", + "June", + "July", + "August", + "September", + "October", + "November", + "December", + ]; + return monthNames[monthNumber]; +} + +function getShortMonthName(monthNumber) { + const monthNames = [ + "jan", + "feb", + "mar", + "apr", + "may", + "jun", + "jul", + "aug", + "sep", + "oct", + "nov", + "dec", + ]; + return monthNames[monthNumber]; +} + +function printReleaseNotesHeader(currentMonth, currentYear) { + return `--- +title: BigAnimal ${getMonthName(currentMonth)} ${currentYear} release notes +navTitle: ${getMonthName(currentMonth)} ${currentYear} +--- + +BigAnimal's ${getMonthName( + currentMonth, + )} ${currentYear} includes the following enhancements and bugfixes: + +| Type | Description | +|------|-------------|`; +} + +async function fetchAndProcess(directory, currentYear, currentMonth) { + try { + const response = await fetch( + "https://status.biganimal.com/api/maintenance-windows/done/index.json", + ); + const data = await response.json(); + + const filteredData = data.data.filter((item) => { + const itemDate = new Date(item.date); + return ( + item.title !== undefined && + item.title.endsWith("Production release") && + itemDate.getFullYear() === currentYear && + itemDate.getMonth() === currentMonth + ); + }); + + const lines = filteredData.map((item) => { + if (item.description.startsWith("* ")) { + var splits = item.description.split("* "); + return splits; + } + if (item.description.startsWith("- ")) { + var splits = item.description.split("- "); + return splits; + } + return item.description; + }); + + // Flatten and clean the array of features + const cleanlines = lines.flat().filter((item) => { + return ( + item != "" && + !item.startsWith("Improvements and updates for the cloud service") + ); + }); + + const releaseNoteHeader = printReleaseNotesHeader( + currentMonth, + currentYear, + ); + const releaseNotesBody = cleanlines + .map((line) => `| Enhancement | ${line.trim()} |`) + 
.join("\n"); + + const releaseNotesFile = fs.openSync( + `${directory}/${getShortMonthName(currentMonth)}_${currentYear}.md`, + "w", + ); + + fs.writeSync( + releaseNotesFile, + `${releaseNoteHeader}\n${releaseNotesBody}\n\n\n`, + 0, + ); + + console.log( + `Release notes ${directory}/${getShortMonthName( + currentMonth, + )}_${currentYear}.md generated successfully!`, + ); + } catch (error) { + console.log(error); + } +} + +const currentDate = new Date(); +const currentYear = currentDate.getFullYear(); +const currentMonth = currentDate.getMonth() + 1; + +var argv = yargs(hideBin(process.argv)) + .usage("Usage: $0 -d [path] [options]") + .option("year", { + alias: "y", + type: "number", + default: currentYear, + description: "Set year to generate release note for (default current year)", + }) + .option("month", { + alias: "m", + type: "number", + default: currentMonth, + description: + "Set month to generate release note for (default current month)", + }) + .option("dir", { + alias: "d", + type: "string", + required: true, + description: "Set directory for release note to be written to", + }) + .parse(); + +// Get current year and month + +fetchAndProcess(argv.dir, argv.year, argv.month - 1); diff --git a/tools/user/import/bareleasenotes/package-lock.json b/tools/user/import/bareleasenotes/package-lock.json new file mode 100644 index 00000000000..690014f25b1 --- /dev/null +++ b/tools/user/import/bareleasenotes/package-lock.json @@ -0,0 +1,271 @@ +{ + "name": "bareleasenotes", + "version": "1.0.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "bareleasenotes", + "version": "1.0.0", + "license": "ISC", + "dependencies": { + "minimist": "^1.2.8", + "node-fetch": "^3.3.2", + "yargs": "^17.7.2" + } + }, + "node_modules/ansi-regex": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.1.tgz", + "integrity": 
"sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ==", + "engines": { + "node": ">=8" + } + }, + "node_modules/ansi-styles": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz", + "integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==", + "dependencies": { + "color-convert": "^2.0.1" + }, + "engines": { + "node": ">=8" + }, + "funding": { + "url": "https://github.com/chalk/ansi-styles?sponsor=1" + } + }, + "node_modules/cliui": { + "version": "8.0.1", + "resolved": "https://registry.npmjs.org/cliui/-/cliui-8.0.1.tgz", + "integrity": "sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==", + "dependencies": { + "string-width": "^4.2.0", + "strip-ansi": "^6.0.1", + "wrap-ansi": "^7.0.0" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/color-convert": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz", + "integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==", + "dependencies": { + "color-name": "~1.1.4" + }, + "engines": { + "node": ">=7.0.0" + } + }, + "node_modules/color-name": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz", + "integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==" + }, + "node_modules/data-uri-to-buffer": { + "version": "4.0.1", + "resolved": "https://registry.npmjs.org/data-uri-to-buffer/-/data-uri-to-buffer-4.0.1.tgz", + "integrity": "sha512-0R9ikRb668HB7QDxT1vkpuUBtqc53YyAwMwGeUFKRojY/NWKvdZ+9UYtRfGmhqNbRkTSVpMbmyhXipFFv2cb/A==", + "engines": { + "node": ">= 12" + } + }, + "node_modules/emoji-regex": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-8.0.0.tgz", + "integrity": 
"sha512-MSjYzcWNOA0ewAHpz0MxpYFvwg6yjy1NG3xteoqz644VCo/RPgnr1/GGt+ic3iJTzQ8Eu3TdM14SawnVUmGE6A==" + }, + "node_modules/escalade": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/escalade/-/escalade-3.1.1.tgz", + "integrity": "sha512-k0er2gUkLf8O0zKJiAhmkTnJlTvINGv7ygDNPbeIsX/TJjGJZHuh9B2UxbsaEkmlEo9MfhrSzmhIlhRlI2GXnw==", + "engines": { + "node": ">=6" + } + }, + "node_modules/fetch-blob": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/fetch-blob/-/fetch-blob-3.2.0.tgz", + "integrity": "sha512-7yAQpD2UMJzLi1Dqv7qFYnPbaPx7ZfFK6PiIxQ4PfkGPyNyl2Ugx+a/umUonmKqjhM4DnfbMvdX6otXq83soQQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "paypal", + "url": "https://paypal.me/jimmywarting" + } + ], + "dependencies": { + "node-domexception": "^1.0.0", + "web-streams-polyfill": "^3.0.3" + }, + "engines": { + "node": "^12.20 || >= 14.13" + } + }, + "node_modules/formdata-polyfill": { + "version": "4.0.10", + "resolved": "https://registry.npmjs.org/formdata-polyfill/-/formdata-polyfill-4.0.10.tgz", + "integrity": "sha512-buewHzMvYL29jdeQTVILecSaZKnt/RJWjoZCF5OW60Z67/GmSLBkOFM7qh1PI3zFNtJbaZL5eQu1vLfazOwj4g==", + "dependencies": { + "fetch-blob": "^3.1.2" + }, + "engines": { + "node": ">=12.20.0" + } + }, + "node_modules/get-caller-file": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/get-caller-file/-/get-caller-file-2.0.5.tgz", + "integrity": "sha512-DyFP3BM/3YHTQOCUL/w0OZHR0lpKeGrxotcHWcqNEdnltqFwXVfhEBQ94eIo34AfQpo0rGki4cyIiftY06h2Fg==", + "engines": { + "node": "6.* || 8.* || >= 10.*" + } + }, + "node_modules/is-fullwidth-code-point": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/is-fullwidth-code-point/-/is-fullwidth-code-point-3.0.0.tgz", + "integrity": "sha512-zymm5+u+sCsSWyD9qNaejV3DFvhCKclKdizYaJUuHA83RLjb7nSuGnddCHGv0hk+KY7BMAlsWeK4Ueg6EV6XQg==", + "engines": { + "node": ">=8" + } + }, + "node_modules/minimist": { + "version": 
"1.2.8", + "resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz", + "integrity": "sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/node-domexception": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/node-domexception/-/node-domexception-1.0.0.tgz", + "integrity": "sha512-/jKZoMpw0F8GRwl4/eLROPA3cfcXtLApP0QzLmUT/HuPCZWyB7IY9ZrMeKw2O/nFIqPQB3PVM9aYm0F312AXDQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/jimmywarting" + }, + { + "type": "github", + "url": "https://paypal.me/jimmywarting" + } + ], + "engines": { + "node": ">=10.5.0" + } + }, + "node_modules/node-fetch": { + "version": "3.3.2", + "resolved": "https://registry.npmjs.org/node-fetch/-/node-fetch-3.3.2.tgz", + "integrity": "sha512-dRB78srN/l6gqWulah9SrxeYnxeddIG30+GOqK/9OlLVyLg3HPnr6SqOWTWOXKRwC2eGYCkZ59NNuSgvSrpgOA==", + "dependencies": { + "data-uri-to-buffer": "^4.0.0", + "fetch-blob": "^3.1.4", + "formdata-polyfill": "^4.0.10" + }, + "engines": { + "node": "^12.20.0 || ^14.13.1 || >=16.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/node-fetch" + } + }, + "node_modules/require-directory": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", + "integrity": "sha512-fGxEI7+wsG9xrvdjsrlmL22OMTTiHRwAMroiEeMgq8gzoLC/PQr7RsRDSTLUg/bZAZtF+TVIkHc6/4RIKrui+Q==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/string-width": { + "version": "4.2.3", + "resolved": "https://registry.npmjs.org/string-width/-/string-width-4.2.3.tgz", + "integrity": "sha512-wKyQRQpjJ0sIp62ErSZdGsjMJWsap5oRNihHhu6G7JVO/9jIB6UyevL+tXuOqrng8j/cxKTWyWUwvSTriiZz/g==", + "dependencies": { + "emoji-regex": "^8.0.0", + "is-fullwidth-code-point": "^3.0.0", + "strip-ansi": "^6.0.1" + }, + "engines": { + "node": ">=8" + } + }, + 
"node_modules/strip-ansi": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/strip-ansi/-/strip-ansi-6.0.1.tgz", + "integrity": "sha512-Y38VPSHcqkFrCpFnQ9vuSXmquuv5oXOKpGeT6aGrr3o3Gc9AlVa6JBfUSOCnbxGGZF+/0ooI7KrPuUSztUdU5A==", + "dependencies": { + "ansi-regex": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/web-streams-polyfill": { + "version": "3.3.2", + "resolved": "https://registry.npmjs.org/web-streams-polyfill/-/web-streams-polyfill-3.3.2.tgz", + "integrity": "sha512-3pRGuxRF5gpuZc0W+EpwQRmCD7gRqcDOMt688KmdlDAgAyaB1XlN0zq2njfDNm44XVdIouE7pZ6GzbdyH47uIQ==", + "engines": { + "node": ">= 8" + } + }, + "node_modules/wrap-ansi": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/wrap-ansi/-/wrap-ansi-7.0.0.tgz", + "integrity": "sha512-YVGIj2kamLSTxw6NsZjoBxfSwsn0ycdesmc4p+Q21c5zPuZ1pl+NfxVdxPtdHvmNVOQ6XSYG4AUtyt/Fi7D16Q==", + "dependencies": { + "ansi-styles": "^4.0.0", + "string-width": "^4.1.0", + "strip-ansi": "^6.0.0" + }, + "engines": { + "node": ">=10" + }, + "funding": { + "url": "https://github.com/chalk/wrap-ansi?sponsor=1" + } + }, + "node_modules/y18n": { + "version": "5.0.8", + "resolved": "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz", + "integrity": "sha512-0pfFzegeDWJHJIAmTLRP2DwHjdF5s7jo9tuztdQxAhINCdvS+3nGINqPd00AphqJR/0LhANUS6/+7SCb98YOfA==", + "engines": { + "node": ">=10" + } + }, + "node_modules/yargs": { + "version": "17.7.2", + "resolved": "https://registry.npmjs.org/yargs/-/yargs-17.7.2.tgz", + "integrity": "sha512-7dSzzRQ++CKnNI/krKnYRV7JKKPUXMEh61soaHKg9mrWEhzFWhFnxPxGl+69cD1Ou63C13NUPCnmIcrvqCuM6w==", + "dependencies": { + "cliui": "^8.0.1", + "escalade": "^3.1.1", + "get-caller-file": "^2.0.5", + "require-directory": "^2.1.1", + "string-width": "^4.2.3", + "y18n": "^5.0.5", + "yargs-parser": "^21.1.1" + }, + "engines": { + "node": ">=12" + } + }, + "node_modules/yargs-parser": { + "version": "21.1.1", + "resolved": 
"https://registry.npmjs.org/yargs-parser/-/yargs-parser-21.1.1.tgz", + "integrity": "sha512-tVpsJW7DdjecAiFpbIB1e3qxIQsE6NoPc5/eTdrbbIC4h0LVsWhnoa3g+m2HclBIujHzsxZ4VJVA+GUuc2/LBw==", + "engines": { + "node": ">=12" + } + } + } +} diff --git a/tools/user/import/bareleasenotes/package.json b/tools/user/import/bareleasenotes/package.json new file mode 100644 index 00000000000..695bcffe33e --- /dev/null +++ b/tools/user/import/bareleasenotes/package.json @@ -0,0 +1,18 @@ +{ + "name": "bareleasenotes", + "version": "1.0.0", + "description": "", + "main": "index.js", + "type": "module", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "keywords": [], + "author": "", + "license": "ISC", + "dependencies": { + "minimist": "^1.2.8", + "node-fetch": "^3.3.2", + "yargs": "^17.7.2" + } +} From 8e52d7200e8345fe63806a6bb50e0e0798be7968 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jan 2024 13:07:33 +0000 Subject: [PATCH 16/19] Fixes for filename handling Signed-off-by: Dj Walker-Morgan --- tools/user/import/bareleasenotes/barelease.js | 26 ++++++++----------- 1 file changed, 11 insertions(+), 15 deletions(-) diff --git a/tools/user/import/bareleasenotes/barelease.js b/tools/user/import/bareleasenotes/barelease.js index c807ac5e013..49fac9b674a 100644 --- a/tools/user/import/bareleasenotes/barelease.js +++ b/tools/user/import/bareleasenotes/barelease.js @@ -72,20 +72,18 @@ async function fetchAndProcess(directory, currentYear, currentMonth) { const lines = filteredData.map((item) => { if (item.description.startsWith("* ")) { - var splits = item.description.split("* "); - return splits; + return item.description.split("* "); } if (item.description.startsWith("- ")) { - var splits = item.description.split("- "); - return splits; + return item.description.split("- "); } return item.description; }); // Flatten and clean the array of features - const cleanlines = lines.flat().filter((item) => { + const cleanLines = lines.flat().filter((item) 
=> { return ( - item != "" && + item !== "" && !item.startsWith("Improvements and updates for the cloud service") ); }); @@ -94,14 +92,14 @@ async function fetchAndProcess(directory, currentYear, currentMonth) { currentMonth, currentYear, ); - const releaseNotesBody = cleanlines + const releaseNotesBody = cleanLines .map((line) => `| Enhancement | ${line.trim()} |`) .join("\n"); + const releaseNotesFileName = `${directory}/${getShortMonthName( + currentMonth, + )}_${currentYear}_release_notes.mdx`; - const releaseNotesFile = fs.openSync( - `${directory}/${getShortMonthName(currentMonth)}_${currentYear}.md`, - "w", - ); + const releaseNotesFile = fs.openSync(`${releaseNotesFileName}`, "w"); fs.writeSync( releaseNotesFile, @@ -110,9 +108,7 @@ async function fetchAndProcess(directory, currentYear, currentMonth) { ); console.log( - `Release notes ${directory}/${getShortMonthName( - currentMonth, - )}_${currentYear}.md generated successfully!`, + `Release notes ${releaseNotesFileName} generated successfully!`, ); } catch (error) { console.log(error); @@ -123,7 +119,7 @@ const currentDate = new Date(); const currentYear = currentDate.getFullYear(); const currentMonth = currentDate.getMonth() + 1; -var argv = yargs(hideBin(process.argv)) +let argv = yargs(hideBin(process.argv)) .usage("Usage: $0 -d [path] [options]") .option("year", { alias: "y", From 186fae863647cfd3675d400c28c40702ffdcd3c8 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jan 2024 13:27:42 +0000 Subject: [PATCH 17/19] First release of bareleasenotes updater with releasenotes Signed-off-by: Dj Walker-Morgan --- .../biganimal/release/release_notes/index.mdx | 2 ++ .../release/release_notes/jan_2024_rel_notes.mdx | 16 ++++++++++++++++ tools/user/import/bareleasenotes/barelease.js | 2 +- 3 files changed, 19 insertions(+), 1 deletion(-) create mode 100644 product_docs/docs/biganimal/release/release_notes/jan_2024_rel_notes.mdx diff --git 
a/product_docs/docs/biganimal/release/release_notes/index.mdx b/product_docs/docs/biganimal/release/release_notes/index.mdx index 0c7913986a4..aaadff90b49 100644 --- a/product_docs/docs/biganimal/release/release_notes/index.mdx +++ b/product_docs/docs/biganimal/release/release_notes/index.mdx @@ -2,6 +2,7 @@ title: BigAnimal release notes navTitle: Release notes navigation: +- jan_2024_rel_notes - dec_2023_rel_notes - nov_2023_rel_notes - oct_2023_rel_notes @@ -20,6 +21,7 @@ The BigAnimal documentation describes the latest version of BigAnimal, including | Month | | ------------------------------------ | +| [January 2024](jan_2024_rel_notes) | | [December 2023](dec_2023_rel_notes) | | [November 2023](nov_2023_rel_notes) | | [October 2023](oct_2023_rel_notes) | diff --git a/product_docs/docs/biganimal/release/release_notes/jan_2024_rel_notes.mdx b/product_docs/docs/biganimal/release/release_notes/jan_2024_rel_notes.mdx new file mode 100644 index 00000000000..53d591c813b --- /dev/null +++ b/product_docs/docs/biganimal/release/release_notes/jan_2024_rel_notes.mdx @@ -0,0 +1,16 @@ +--- +title: BigAnimal January 2024 release notes +navTitle: January 2024 +--- + +BigAnimal's January 2024 release includes the following enhancements and bug fixes: + +| Type | Description | +|------|-------------| +| Enhancement | BigAnimal has added new integrations with third-party monitoring services. You can now set up monitoring integrations with DataDog and New Relic on your BigAnimal Projects. | +| Enhancement | BigAnimal now supports adding a storage volume to your cluster for the Write-Ahead Log (WAL). Dedicated WAL storage volumes significantly improve write performance for WAL files, boosting the I/O of the overall cluster. | +| Enhancement | BigAnimal Terraform provider v0.7.0 is now available.
Learn more about what’s new [here](https://github.com/EnterpriseDB/terraform-provider-biganimal/releases/tag/v0.7.0) and download the provider [here](https://registry.terraform.io/providers/EnterpriseDB/biganimal/latest). | +| Enhancement | BigAnimal CLI v3.5.0 is now available. Learn more about what’s new [here](https://cli.biganimal.com/versions/v3.5.0/). | +| Enhancement | BigAnimal now supports pausing and resuming clusters on demand. You can now pause clusters when you aren’t using them without losing your data or configurations, giving you more control over your cluster operations as well as helping you save on compute costs. | + + diff --git a/tools/user/import/bareleasenotes/barelease.js b/tools/user/import/bareleasenotes/barelease.js index 49fac9b674a..5619accfec9 100644 --- a/tools/user/import/bareleasenotes/barelease.js +++ b/tools/user/import/bareleasenotes/barelease.js @@ -97,7 +97,7 @@ async function fetchAndProcess(directory, currentYear, currentMonth) { .join("\n"); const releaseNotesFileName = `${directory}/${getShortMonthName( currentMonth, - )}_${currentYear}_release_notes.mdx`; + )}_${currentYear}_rel_notes.mdx`; const releaseNotesFile = fs.openSync(`${releaseNotesFileName}`, "w"); From a4328ef54dcbfaa525f28c7dc3ca26ca8bfc8ec6 Mon Sep 17 00:00:00 2001 From: Ian Barwick Date: Tue, 30 Jan 2024 16:02:10 +0900 Subject: [PATCH 18/19] pgd: fixes for bdr.join_node_group() reference documentation Update the default value for the "pause_in_standby" parameter and note that it has been deprecated since BDR 5.0 (commit 8a453ca6). In passing, tidy up and generally improve the reference entry wording, including adding links to referenced objects. BDR-4555. 
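For context, a minimal invocation consistent with the signature documented in this reference entry might look like the following. This is an illustrative sketch only: the DSN and group name are placeholders, not values from a real deployment.

```sql
-- Illustrative only: join the local node to an existing PGD group.
-- 'host=node1 port=5432 dbname=bdrdb' and 'mygroup' are placeholder values.
SELECT bdr.join_node_group(
    join_target_dsn := 'host=node1 port=5432 dbname=bdrdb',
    node_group_name := 'mygroup',
    wait_for_completion := true,
    synchronize_structure := 'all'
);
```

Because this fragment requires a live PGD cluster, it is shown as a usage sketch rather than a runnable example.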
--- .../reference/nodes-management-interfaces.mdx | 36 +++++++++++-------- 1 file changed, 22 insertions(+), 14 deletions(-) diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx index b8b10b343e9..cb667ea6192 100644 --- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx @@ -362,7 +362,7 @@ This function joins the local node to an already existing PGD group. bdr.join_node_group ( join_target_dsn text, node_group_name text DEFAULT NULL, - pause_in_standby boolean DEFAULT false, + pause_in_standby boolean DEFAULT NULL, wait_for_completion boolean DEFAULT true, synchronize_structure text DEFAULT 'all' ) @@ -373,22 +373,30 @@ bdr.join_node_group ( - `join_target_dsn` — Specifies the connection string to an existing (source) node in the PGD group you want to add the local node to. - `node_group_name` — Optional name of the PGD group. Defaults to NULL, which - tries to detect the group name from information present on the source - node. -- `pause_in_standby` — Optionally tells the join process to join only as a - logical standby node, which can be later promoted to a full member. + tries to detect the group name from information present on the source + node. - `wait_for_completion` — Wait for the join process to complete before - returning. Defaults to `true`. + returning. Defaults to `true`. - `synchronize_structure` — Set the kind of structure (schema) synchronization - to do during the join. Valid options are `all`, which synchronizes - the complete database structure, and `none`, which doesn't synchronize any - structure. However, it still synchronizes data. + to do during the join. Valid options are `all`, which synchronizes + the complete database structure, and `none`, which doesn't synchronize any + structure. However, it still synchronizes data. 
+- `pause_in_standby` — Optionally tells the join process to join only as a + logical standby node, which can be later promoted to a full member. + This option is deprecated and will be disabled or removed in future + versions of PGD. -If `wait_for_completion` is specified as `false`, -this is an asynchronous call that returns as soon as the joining procedure starts. -You can see progress of the join in logs and the -`bdr.event_summary` information view or by calling the -`bdr.wait_for_join_completion()` function after `bdr.join_node_group()` returns. +!!! Note + `pause_in_standby` is deprecated since BDR 5.0. The recommended way to create + a logical standby is to set `node_kind` to `standby` when creating the node + with `[bdr.create_node](#bdrcreate_node)`. + +If `wait_for_completion` is specified as `false`, the function call will return +as soon as the joining procedure starts. Progress of the join can be viewed in +the log files and the `[bdr.event_summary](catalogs-internal.mdx#bdrevent_summary)` +information view. The function `[bdr.wait_for_join_completion()](#bdrwait_for_join_completion)` +can be called after `bdr.join_node_group()` to wait for the join operation to complete, +and can emit progress information if called with `verbose_progress` set to `true`. 
### Notes From 4a630510c16a23b9a7c15b315fdbff4a851638b2 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jan 2024 13:54:42 +0000 Subject: [PATCH 19/19] Moved the note to a warning (to be more precise and to avoid two note/notes) Signed-off-by: Dj Walker-Morgan --- .../docs/pgd/5/reference/nodes-management-interfaces.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx index cb667ea6192..3c85ed0e498 100644 --- a/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx +++ b/product_docs/docs/pgd/5/reference/nodes-management-interfaces.mdx @@ -386,7 +386,7 @@ bdr.join_node_group ( This option is deprecated and will be disabled or removed in future versions of PGD. -!!! Note +!!! Warning `pause_in_standby` is deprecated since BDR 5.0. The recommended way to create a logical standby is to set `node_kind` to `standby` when creating the node with `[bdr.create_node](#bdrcreate_node)`.
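Stepping back to the barelease.js changes in patches 16 and 17, the bullet-splitting and filtering pipeline can be exercised in isolation. The sketch below mirrors the logic from the diff (split descriptions on `"* "` or `"- "`, flatten, drop empty and boilerplate entries, then build the markdown table rows); the input records are made up for illustration and are not real release-note data.

```javascript
// Sketch of the description-cleaning pipeline from barelease.js:
// split bulleted descriptions, flatten, and drop empty/boilerplate entries.
function cleanDescriptions(items) {
  const lines = items.map((item) => {
    if (item.description.startsWith("* ")) return item.description.split("* ");
    if (item.description.startsWith("- ")) return item.description.split("- ");
    return item.description;
  });
  // Flatten nested arrays, then remove empty strings and the boilerplate heading.
  return lines.flat().filter(
    (item) =>
      item !== "" &&
      !item.startsWith("Improvements and updates for the cloud service"),
  );
}

// Build the release-notes table body the same way the script does.
function toTableRows(cleanLines) {
  return cleanLines
    .map((line) => `| Enhancement | ${line.trim()} |`)
    .join("\n");
}

// Example input (invented for illustration).
const cleaned = cleanDescriptions([
  { description: "* Added DataDog integration * Added New Relic integration" },
  { description: "Improvements and updates for the cloud service" },
  { description: "Pause and resume clusters on demand" },
]);
const tableBody = toTableRows(cleaned);
```

Note how splitting `"* a * b"` on `"* "` yields a leading empty string, which is why the `item !== ""` filter is needed before the rows are built.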