From bf3b1420a03fc8c49759ac0076bd008182d3015e Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 15 Dec 2021 13:50:14 -0500 Subject: [PATCH 01/16] First tranche of point releases for v11 --- .../10_epas11.14.24_rel_notes.mdx | 14 +++++++++++++ .../11_epas11.13.23_rel_notes.mdx | 7 +++++++ .../13_epas11.12.22_rel_notes.mdx | 11 ++++++++++ .../15_epas11.12.21_rel_notes.mdx | 20 +++++++++++++++++++ .../17_epas11.11.20_rel_notes.mdx | 16 +++++++++++++++ 5 files changed, 68 insertions(+) create mode 100644 product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/11_epas11.13.23_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx diff --git a/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx new file mode 100644 index 00000000000..7c4927fbe86 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx @@ -0,0 +1,14 @@ +--- +title: Version 11.14.24 +--- + +EDB Postgres Advanced Server 11.14.24 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | --------- | +| Upstream Merge | Merged with communuity PostgreSQL 11.14 . See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-14.html) for details. | | | +| Bug Fix | Obey the AM meridian indicator correctly in `to_timestamp()`. [Support Ticket: #74035] | DB-149 | | +| Bug Fix | Prevent possible crash after implicit rollback handling `Parse` protocol message. [Support Ticket: #72626] | DB-1449 | | +| Bug Fix | Fix possible server crash when the package is dropped from another session | DB-1403 | SPL | +| Bug Fix | Populate the event type for missing node type. | DB-1184 | edb_audit | +| Bug Fix | Fix server crash when the package is re-compiled in the same session. [Support Ticket: #1181417] | DB-1038 | SPL | \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_rel_notes/11_epas11.13.23_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/11_epas11.13.23_rel_notes.mdx new file mode 100644 index 00000000000..14db34338f3 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/11_epas11.13.23_rel_notes.mdx @@ -0,0 +1,7 @@ +--- +title: Version 11.13.23 +--- + +| Type | Description | ID | +| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | ------- | +| Upstream Merge | Merged with communuity PostgreSQL 11.13. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-13.html) for details. 
| | diff --git a/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx new file mode 100644 index 00000000000..982b81d7875 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx @@ -0,0 +1,11 @@ +--- +title: Version 11.12.22 +--- + +EDB Postgres Advanced Server 11.12.22 includes the following bug fixes: + +| Type | Description | ID | +| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | +| Upstream Merge | Merged with communuity PostgreSQL 11.12. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-12.html) for details. | | +| Bug Fix | Fix `pg_upgrade` to allow the system catalog composite type used in user tables. | DB-1237 | +| Bug Fix | Fix possible misbehavior when aborting an autonomous transaction and also fix interaction of autonomous transactions with `edb_stmt_level_tx=on`. | DB-1034 | diff --git a/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx new file mode 100644 index 00000000000..f6bd43149a3 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx @@ -0,0 +1,20 @@ +--- +title: Version 11.12.21 +--- + +EDB Postgres Advanced Server 11.12.21 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------------- | +| Upstream Merge | Merged with communuity PostgreSQL PostgreSQL 11.12. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-12.html) for details. | | | +| Bug Fix | Prevent some uses of `COMMIT` and `ROLLBACK` in stored procedures that were unsafe and could cause crashes. [Support Ticket: #1234722] | DB-1183 | | +| Bug Fix | Prevent `index_advisor` from interfering with materialized view creation. [Support Ticket: #1189859] | DB-1154 | | +| Bug Fix | Support password redaction in `edb_filter_log` with the extended protocol, specially used with connectors. [Support Ticket: #1234131] | DB-1139 | | +| Bug Fix | Correct `QUEUE` object handling in `EVENT TRIGGER`. | DB-1129 | | +| Bug Fix | Correct `REDACTION COLUMN` object handling in `EVENT TRIGGER`. | DB-1129 | | +| Bug Fix | Fix `pg_upgrade` to not fail when a custom configuration file directory is used. [Support Ticket: #1200560] | DB-1084 | | +| Bug Fix | Refrain from dropping trigger on parent table through partitioning dependency. [Support Ticket: #1187215] | DB-1063 | | +| Bug Fix | Free temporary memory to avoid PGA memory accumulation and exceeds errors. [Support Ticket: #1129386] | DB-1061 | dblink_ora | +| Bug Fix | Fix possible server crash with partition-wish join push-down code path. | DB-1042 | edb_dblink_oci | +| Bug Fix | Fix incorrect error message in `edbldr`. [Support Ticket: #1104048] | DB-826 | | +| Bug Fix | Allow partition creation with `ROWIDS` option. 
| DB-731 | | diff --git a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx new file mode 100644 index 00000000000..1aa5dfad1b9 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx @@ -0,0 +1,16 @@ +--- +title: Version 11.11.20 +--- + +EDB Postgres Advanced Server 11.11.20 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.11. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-11.html) for details. | | | +| Bug Fix | Use correct relation information while loading into multiple tables with a single control file to avoid unexpected behavior. [Support Ticket: #1165964] | DB-973 | edbldr | +| Bug Fix | For nested subprocedures, verify that a set-returning function is called in a valid place or not. | DB-946 | | +| Bug Fix | Skip SQL Protect-related files when running `pg_checksums` or `pg_verify_checksums`. [Support Ticket: #1140841] | DB-919 | | +| Bug Fix | Forbid `CONNECT_BY_ROOT` and `SYS_CONNECT_BY_PATH` in join expressions. | DB-914 | | +| Bug Fix | Correct start position handling with multibyte encodings in `INSTR`. [Support Ticket: #1133262] | DB-911 | | +| Bug Fix | Remove obsolete function overloading check condition in `pg_dump`. [Support Ticket: #1133344] | DB-892 | | +| Bug Fix | Fix handling of whitespaces when the delimiter is whitespace. [Support Ticket: #1060286] | DB-739 | edbldr | \ No newline at end of file From 9c9dff2ceebf5f05260687ce34844394ec45c17c Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 15 Dec 2021 13:55:22 -0500 Subject: [PATCH 02/16] test edit --- product_docs/docs/epas/11/epas_rel_notes/index.mdx | 1 + 1 file changed, 1 insertion(+) diff --git a/product_docs/docs/epas/11/epas_rel_notes/index.mdx b/product_docs/docs/epas/11/epas_rel_notes/index.mdx index 3f85531bbf2..4a6216252c3 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/index.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/index.mdx @@ -14,6 +14,7 @@ legacyRedirectsGenerated: - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/installation-guide-for-windows/11/EDB_Postgres_Advanced_Server_Installation_Guide_Windows.1.05.html" --- +- With this release of EDB Postgres Advanced Server 11, EnterpriseDB continues to lead as the only worldwide company to deliver innovative and low cost open-source-derived database solutions with commercial quality, ease of use, compatibility, scalability, and performance for small or large-scale enterprises. EDB Postgres Advanced Server 11 is built on the open source PostgreSQL 11. 
EDB Postgres Advanced Server 11 adds a number of new features that will please developers and DBAs alike, including:
From e6b646c84312f5a975ed84e1599888ad1cadb67 Mon Sep 17 00:00:00 2001
From: David Wicinas <93669463+dwicinas@users.noreply.github.com>
Date: Mon, 20 Dec 2021 11:25:38 -0500
Subject: [PATCH 03/16] First rel note for v11

---
 ...es.mdx => 17_epas11.11.20_rel_notes copy.mdx} |  0
 .../epas_rel_notes/19_epas11.10.19_rel_notes.mdx | 16 ++++++++++++++++
 2 files changed, 16 insertions(+)
 rename product_docs/docs/epas/11/epas_rel_notes/{17_epas11.11.20_rel_notes.mdx => 17_epas11.11.20_rel_notes copy.mdx} (100%)
 create mode 100644 product_docs/docs/epas/11/epas_rel_notes/19_epas11.10.19_rel_notes.mdx

diff --git a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx
similarity index 100%
rename from product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx
rename to product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx
diff --git a/product_docs/docs/epas/11/epas_rel_notes/19_epas11.10.19_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/19_epas11.10.19_rel_notes.mdx
new file mode 100644
index 00000000000..0e559fae64e
--- /dev/null
+++ b/product_docs/docs/epas/11/epas_rel_notes/19_epas11.10.19_rel_notes.mdx
@@ -0,0 +1,16 @@
+---
+title: Version 11.10.19
+---
+
+EDB Postgres Advanced Server 11.10.19 includes the following bug fixes:
+
+| Type           | Description | ID     | Category      |
+| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- |
+| Upstream Merge | Merged with community PostgreSQL 11.10. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-10.html) for details. | | |
+| Bug Fix | Fix crash with `COPY FROM` and/or foreign partition routing operations performed on `libpq_dblink` and `oci_dblink` FDW tables. | DB-804 | |
+| Bug Fix | Don't emit useless messages about GSS or SSL negotiation failures. [Support Tickets: #1048322, #1048143, #1018659] | DB-770 | |
+| Bug Fix | Fix assertion failure when the system column `__remote_rowid_` is selected. This column is created for all foreign tables, but only the `oci_dblink` and `libpq_dblink` FDWs use it, and the code incorrectly assumed the column is used with those two FDWs only. | DB-749 | |
+| Bug Fix | Fix handling of multi-character record separators in edbldr. [Support Ticket: #1051362] | DB-743 | |
+| Bug Fix | Fix group estimate for the remote relation in `oci_dblink`. | DB-701 | |
+| Bug Fix | In non-Pro*C mode, evaluate `EXEC SQL IFDEF/IFNDEF` expressions irrespective of the enclosed 'C' preprocessor directives and emit all 'C' preprocessor directives as-is to the output file. [Support Ticket: #1007795] | DB-551 | ecpg |
+| Bug Fix | Fix row incompatibility issue when `SUBTYPE` is used in composite type. 
| DB-107 | SPL | From 326bae2a1a6bf9c9be98a4a8bbbff6c7f6c882cb Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 20 Dec 2021 11:26:08 -0500 Subject: [PATCH 04/16] remove spaces --- .../docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx index 7c4927fbe86..fb0ade4ca45 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx @@ -6,7 +6,7 @@ EDB Postgres Advanced Server 11.14.24 includes the following bug fixes: | Type | Description | ID | Category | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | --------- | -| Upstream Merge | Merged with communuity PostgreSQL 11.14 . See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-14.html) for details. | | | +| Upstream Merge | Merged with communuity PostgreSQL 11.14. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-14.html) for details. | | | | Bug Fix | Obey the AM meridian indicator correctly in `to_timestamp()`. [Support Ticket: #74035] | DB-149 | | | Bug Fix | Prevent possible crash after implicit rollback handling `Parse` protocol message. [Support Ticket: #72626] | DB-1449 | | | Bug Fix | Fix possible server crash when the package is dropped from another session | DB-1403 | SPL | From 7031c14a5db01839b08069a5373dc6955be0cf42 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 15 Dec 2021 13:50:14 -0500 Subject: [PATCH 05/16] First tranche of point releases for v11 --- .../epas_rel_notes/17_epas11.11.20_rel_notes.mdx | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) create mode 100644 product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx diff --git a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx new file mode 100644 index 00000000000..1aa5dfad1b9 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx @@ -0,0 +1,16 @@ +--- +title: Version 11.11.20 +--- + +EDB Postgres Advanced Server 11.11.20 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.11. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-11.html) for details. | | | +| Bug Fix | Use correct relation information while loading into multiple tables with a single control file to avoid unexpected behavior. [Support Ticket: #1165964] | DB-973 | edbldr | +| Bug Fix | For nested subprocedures, verify that a set-returning function is called in a valid place or not. | DB-946 | | +| Bug Fix | Skip SQL Protect-related files when running `pg_checksums` or `pg_verify_checksums`. [Support Ticket: #1140841] | DB-919 | | +| Bug Fix | Forbid `CONNECT_BY_ROOT` and `SYS_CONNECT_BY_PATH` in join expressions. 
| DB-914 | | +| Bug Fix | Correct start position handling with multibyte encodings in `INSTR`. [Support Ticket: #1133262] | DB-911 | | +| Bug Fix | Remove obsolete function overloading check condition in `pg_dump`. [Support Ticket: #1133344] | DB-892 | | +| Bug Fix | Fix handling of whitespaces when the delimiter is whitespace. [Support Ticket: #1060286] | DB-739 | edbldr | \ No newline at end of file From fab4f037b8a009164d5d60d58176f81a87146770 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 20 Dec 2021 12:50:13 -0500 Subject: [PATCH 06/16] multiple release notes added for v11 --- .../21_epas11.9.17_rel_notes.mdx | 19 +++++++++++++++ .../23_epas11.9.16_rel_notes.mdx | 18 ++++++++++++++ .../25_epas11.8.15_rel_notes.mdx | 12 ++++++++++ .../27_epas11.7.14_rel_notes.mdx | 24 +++++++++++++++++++ 4 files changed, 73 insertions(+) create mode 100644 product_docs/docs/epas/11/epas_rel_notes/21_epas11.9.17_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/23_epas11.9.16_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/25_epas11.8.15_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx diff --git a/product_docs/docs/epas/11/epas_rel_notes/21_epas11.9.17_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/21_epas11.9.17_rel_notes.mdx new file mode 100644 index 00000000000..4669ac9cb7b --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/21_epas11.9.17_rel_notes.mdx @@ -0,0 +1,19 @@ +--- +title: Version 11.9.17 +--- + +EDB Postgres Advanced Server 11.9.17 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.9. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-9.html) for details. | | | +| Bug Fix | In the case of an error coming from `standard_planner`, global pointer is not getting cleaned up and results in a server crash. Commit fix dangling pointer de-reference for global index candidates. [Support Ticket: #1026986] | DB‑682 | | +| Bug Fix | Fix minor problems with non-exclusive backup cleanup. [Support Ticket: #1009383] | DB-597 | | +| Bug Fix | Fix to follow the same order to compute the hash values of partition key in which it's defined. [Support Ticket: #1010975] | DB-537 | edb_enable_pruning | +| Bug Fix | Make the scope of an autonomous transaction include the exception block. [Support Ticket: #977822] | DB-371 | | +| Bug Fix | Fix FF mode in Redwood `to_char()` datetime related functions | DB-640 | | +| Bug Fix | Protect (timestamp + number) and (timestamp - number) against overflow. | DB-639 | | +| Bug Fix | Disallow using `NULLIF` as SQL expression evaluation is not supported in the direct load. | DB-477 | edbldr| +| Bug Fix | Fix assorted issues where ZONED data input length is less than the precision. | DB-301 | edbldr | +| Bug Fix | Fix assorted issues where ZONED data input length is less than the precision. | DB-301 | edbldr | +| Bug Fix | Commit move the schema grant statement from `edb-sys.sql` to code in order to fix the `pg_upgrade` for `--no-redwood-compat mode`. 
| RM43953 | | diff --git a/product_docs/docs/epas/11/epas_rel_notes/23_epas11.9.16_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/23_epas11.9.16_rel_notes.mdx new file mode 100644 index 00000000000..de12a05c2a0 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/23_epas11.9.16_rel_notes.mdx @@ -0,0 +1,18 @@ +--- +title: Version 11.9.16 +--- + +EDB Postgres Advanced Server 11.9.16 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.9. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-9.html) for details. | | | +| Bug Fix | In the case of an error coming from `standard_planner`, global pointer is not getting cleaned up and results in a server crash. Commit fix dangling pointer de-reference for global index candidates. [Support Ticket: #1026986] | DB‑682 | | +| Bug Fix | Fix to follow the same order to compute the hash values of partition key in which it's defined. [Support Ticket: #1010975] | DB-537 | edb_enable_pruning | +| Bug Fix | Make the scope of an autonomous transaction include the exception block. [Support Ticket: #977822] | DB-371 | | +| Bug Fix | Fix FF mode in Redwood `to_char()` datetime related functions | DB-640 | | +| Bug Fix | Protect (timestamp + number) and (timestamp - number) against overflow. | DB-639 | | +| Bug Fix | Disallow using `NULLIF` as SQL expression evaluation is not supported in the direct load. | DB-477 | edbldr| +| Bug Fix | Fix assorted issues where ZONED data input length is less than the precision. | DB-301 | edbldr | +| Bug Fix | Fix assorted issues where ZONED data input length is less than the precision. | DB-301 | edbldr | +| Bug Fix | Commit move the schema grant statement from `edb-sys.sql` to code in order to fix the `pg_upgrade` for `--no-redwood-compat mode`. | RM43953 | | diff --git a/product_docs/docs/epas/11/epas_rel_notes/25_epas11.8.15_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/25_epas11.8.15_rel_notes.mdx new file mode 100644 index 00000000000..ab489b82fe6 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/25_epas11.8.15_rel_notes.mdx @@ -0,0 +1,12 @@ +--- +title: Version 11.8.15 +--- + +EDB Postgres Advanced Server 11.8.15 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.8. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-8.html) for details. | | | +| Bug Fix | Add GUC to control the scan type for the remote statement. New GUC `edb_dblink_oci.rescan = {serializable \| scroll}` allows a user to choose the scrollable vs non-scrollable cursor. [Support Ticket: #947738] | DB‑380 | dblink_oci | +| Bug Fix | Do not push `ROWNUM` to the child scan/join targets paths and disable partition-wise aggregate when query has `ROWNUM`. 
| DB-187 | | + diff --git a/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx new file mode 100644 index 00000000000..5494ff16e95 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx @@ -0,0 +1,24 @@ +--- +title: Version 11.7.14 +--- + +EDB Postgres Advanced Server 11.7.14 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.7. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-7.html) for details. | | | +| Bug Fix | [Support Ticket: #947738] | DB‑380 | dblink_oci | +| Bug Fix | Suppressed error detail when password expiration is set to infinity while using password profiles. | RM44176 | +| Bug Fix | Allowed altering only its own password when expired while using password profiles. | RM44175/ DB‑228 | +| Bug Fix | Disabled partition pruning while fetching a cursor rowtype. [Support Ticket: #950257] | RM44174 | +| Bug Fix | Fixed invalid memory context handling inside `switchToVariableContext()`. | RM44164 | +| Bug Fix | Fixed execution of empty statements via expired accounts. [Support Ticket: #942512] | RM44155 | +| Bug Fix | Skipped expression simplification while fetching cursor rowtype. [Support Ticket: #950257] | RM44151 | +| Bug Fix | Fixed `DROP TRIGGER` by name behavior for statement-level triggers. | RM44141 | +| Bug Fix | Fixed `makeConst()` call arguments to order correctly in `parse_spl_var()`. | RM44130 | +| Bug Fix | Fixed TAP testcase failure. | RM44129 | +| Bug Fix | Fixed server crash in interval partitioning with `CLOBBER_CACHE_ALWAYS` build. | RM44128/ DB‑132 | +| Bug Fix | Changed datatype from int to double to avoid wrap-around in password profile. | RM44118 | +| Bug Fix | Fixed handling of subtypes in the extended protocol. | RM44123 | +| Bug Fix | Fixed `CREATE TRIGGER .. AUTHORIZATION` behavior to avoid potential `pg_upgrade` failures. | RM44109 | +| Bug Fix | Cleared `rolpasswordsetat` when a user is renamed. 
| RM44108 From cde727a7eabf56f87a40f50a28f27e6907465543 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 20 Dec 2021 14:07:08 -0500 Subject: [PATCH 07/16] Two more release notes for v11 --- .../27_epas11.7.14_rel_notes.mdx | 6 ++-- .../29_epas11.6.13_rel_notes.mdx | 31 +++++++++++++++++++ 2 files changed, 34 insertions(+), 3 deletions(-) create mode 100644 product_docs/docs/epas/11/epas_rel_notes/29_epas11.6.13_rel_notes.mdx diff --git a/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx index 5494ff16e95..8276ae0b172 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/27_epas11.7.14_rel_notes.mdx @@ -7,18 +7,18 @@ EDB Postgres Advanced Server 11.7.14 includes the following bug fixes: | Type | Description | ID | Category | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | | Upstream Merge | Merged with community PostgreSQL 11.7. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-7.html) for details. | | | -| Bug Fix | [Support Ticket: #947738] | DB‑380 | dblink_oci | | Bug Fix | Suppressed error detail when password expiration is set to infinity while using password profiles. | RM44176 | | Bug Fix | Allowed altering only its own password when expired while using password profiles. | RM44175/ DB‑228 | | Bug Fix | Disabled partition pruning while fetching a cursor rowtype. [Support Ticket: #950257] | RM44174 | -| Bug Fix | Fixed invalid memory context handling inside `switchToVariableContext()`. | RM44164 | | Bug Fix | Fixed execution of empty statements via expired accounts. [Support Ticket: #942512] | RM44155 | | Bug Fix | Skipped expression simplification while fetching cursor rowtype. [Support Ticket: #950257] | RM44151 | | Bug Fix | Fixed `DROP TRIGGER` by name behavior for statement-level triggers. | RM44141 | | Bug Fix | Fixed `makeConst()` call arguments to order correctly in `parse_spl_var()`. | RM44130 | | Bug Fix | Fixed TAP testcase failure. | RM44129 | -| Bug Fix | Fixed server crash in interval partitioning with `CLOBBER_CACHE_ALWAYS` build. | RM44128/ DB‑132 | | Bug Fix | Changed datatype from int to double to avoid wrap-around in password profile. | RM44118 | | Bug Fix | Fixed handling of subtypes in the extended protocol. | RM44123 | | Bug Fix | Fixed `CREATE TRIGGER .. AUTHORIZATION` behavior to avoid potential `pg_upgrade` failures. | RM44109 | | Bug Fix | Cleared `rolpasswordsetat` when a user is renamed. | RM44108 +| Bug Fix | Don't try to prune a relation when an `OUTER JOIN` is present to avoid possible projection of incorrect results. | RM44049 +| Bug Fix | Fixed server crash while loading data into partition table having foreign key with `CLOBBER_CACHE_ALWAYS` build. | RM44145 +| Bug Fix | Fixed server crash while loading data into table through edbldr with `CLOBBER_CACHE_ALWAYS` build. 
| RM44146 diff --git a/product_docs/docs/epas/11/epas_rel_notes/29_epas11.6.13_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/29_epas11.6.13_rel_notes.mdx new file mode 100644 index 00000000000..47c3bf3c5b6 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/29_epas11.6.13_rel_notes.mdx @@ -0,0 +1,31 @@ +--- +title: Version 11.6.13 +--- + +EDB Postgres Advanced Server 11.6.13 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.6. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-6.html) for details. | | | +| Bug Fix | Apply user default privileges with CREATE FOREIGN TABLE. [Support Ticket: #919364] | RM44104 | | +| Bug Fix | Fix data loss when executing query using oci link and small temporary tablespace at redwood side. [Support Ticket: #864275] | RM43979 | oci_dblink | +| Bug Fix | Avoid WAL-logging bulkload information when the system is in recovery mode. [Support Ticket: #915987] |RM44112 | | +| Bug Fix | Fix libpq bulk API to get into ready for new query state after the query execution fail. [Support Ticket: #755248] | RM43965 | | +| Bug Fix | Fix failure to attach ROWNUM qual to the correct part of the query. [Support Ticket: #812764] | RM43787 | | +| Bug Fix | Rectify an assumption while dumping queue callback actions. | RM44110 | dbms_aq | +| Bug Fix | Replaced ”’” with "'" as the former special character may not be converted to some other locale like zh_CN resulting in an error. | RM44122 | | +| Bug Fix | Fix cache flush hazard with the use of already released system cache. | RM41952 | | +| Bug Fix | Changed testcase wordings from ‘postgres’ to ‘edb-postgres’ to start the advanced server. | RM44117 | | +| Bug Fix | Fix restrict `FREEZE` option for partition table like `COPY`. | RM44079 | edbldr | +| Bug Fix | Fix 004_logrotate TAP test failure related to update of current_logfiles. | RM44083 | pg_ctl | +| Bug Fix | Testcase to verify server is not crashing with `oci` subquery. | RM44097 | | +| Bug Fix | Fix DROP TRIGGER by name with the partitioned table. | RM44010 | | +| Bug Fix | Fixed restricted use of `LEVEL` with `PRIOR` in `CONNECT BY`. | RM43998 | | +| Bug Fix | Record owner dependency for redwood-style trigger functions. | RM44044 | | +| Bug Fix | Fix OOM while inserting huge data into multiple tables. | RM43985 | edbldr | +| Bug Fix | Correct the method used to release tuple descriptor. | RM44060 | | +| Bug Fix | Fixed re-establishment of ocidblink connections after server or user mapping changes. | RM43963 | | +| Bug Fix | Fix `ALTER SERVER` for `connstr` option. | RM44058 | ocilink | +| Bug Fix | Fix identity column with not null constraint. 
| RM44016 | edbldr | + + From 4a9c9a08849886ffe53d13b747c773eaecc5e75a Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 21 Dec 2021 10:54:11 -0500 Subject: [PATCH 08/16] Four more point releases for v11, and removed a duplicate point release --- .../17_epas11.11.20_rel_notes copy.mdx | 16 ------------- .../31_epas11.5.12_rel_notes.mdx | 23 ++++++++++++++++++ .../33_epas11.4.11_rel_notes.mdx | 16 +++++++++++++ .../35_epas11.3.10_rel_notes.mdx | 24 +++++++++++++++++++ .../37_epas11.2.9_rel_notes.mdx | 20 ++++++++++++++++ 5 files changed, 83 insertions(+), 16 deletions(-) delete mode 100644 product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/31_epas11.5.12_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/33_epas11.4.11_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/35_epas11.3.10_rel_notes.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/37_epas11.2.9_rel_notes.mdx diff --git a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx deleted file mode 100644 index 1aa5dfad1b9..00000000000 --- a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes copy.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Version 11.11.20 ---- - -EDB Postgres Advanced Server 11.11.20 includes the following bug fixes: - -| Type | Description | ID | Category | -| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | -| Upstream Merge | Merged with community PostgreSQL 11.11. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-11.html) for details. | | | -| Bug Fix | Use correct relation information while loading into multiple tables with a single control file to avoid unexpected behavior. [Support Ticket: #1165964] | DB-973 | edbldr | -| Bug Fix | For nested subprocedures, verify that a set-returning function is called in a valid place or not. | DB-946 | | -| Bug Fix | Skip SQL Protect-related files when running `pg_checksums` or `pg_verify_checksums`. [Support Ticket: #1140841] | DB-919 | | -| Bug Fix | Forbid `CONNECT_BY_ROOT` and `SYS_CONNECT_BY_PATH` in join expressions. | DB-914 | | -| Bug Fix | Correct start position handling with multibyte encodings in `INSTR`. [Support Ticket: #1133262] | DB-911 | | -| Bug Fix | Remove obsolete function overloading check condition in `pg_dump`. [Support Ticket: #1133344] | DB-892 | | -| Bug Fix | Fix handling of whitespaces when the delimiter is whitespace. 
[Support Ticket: #1060286] | DB-739 | edbldr | \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_rel_notes/31_epas11.5.12_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/31_epas11.5.12_rel_notes.mdx new file mode 100644 index 00000000000..8d5a2682b6d --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/31_epas11.5.12_rel_notes.mdx @@ -0,0 +1,23 @@ +--- +title: Version 11.5.12 +--- + +EDB Postgres Advanced Server 11.5.12 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.5. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-5.html) for details. | | | +| Bug Fix | Adjust the `CALL` statement to account for CallStmt rule. [Support Ticket: #872479] | RM44027 | ecpg | +| Bug Fix | Fix to minimize the memory consumption for partition table select. [Support Ticket: #861800] | RM43945 | dblink_oci | +| Bug Fix | Convert `RAW` to `VARCHAR` before assigning to `VARCHAR` variable. [Support Ticket: #865427] | RM43983 | | +| Bug Fix | Fix `to_char()` for formats like spell mode, "IWW", "SPTH", "DDD", "D (day of the week)" etc. to make `to_char()` more redwood and PG compatible. [Support Ticket: #865427] | RM43873 | | +| Bug Fix | Fix buffer overflow issue in CALL statement handling. | RM44055 | ecpg | +| Bug Fix | Prevent parallel index build under autonomous transaction. | RM44023 | | +| Bug Fix | Fix `WARNING` for not owning proper lock on relation during resource cleanup. | RM44030 | edbldr | +| Bug Fix | Fix test_decoding slot test. | RM44026 | | +| Bug Fix | Add support for `edb_filter_log`. | RM44025 | | +| Bug Fix | Don't allow password and obfuscated_password options together. | RM43962 | dblink_oci | +| Bug Fix | Fixed error "unexpected varattno 2 in expression to be mapped" in subpartition when `edb_enable_pruning` is ON. | RM44020 | | +| Bug Fix | Fix `INSERT` mode for partitioning table. | RM44011 | edbldr | +| Bug Fix | Fix concurrency problem in autonomous transactions. | RM43824 | | +| Bug Fix | Fix `__remote_rowid_` attribute values correctly. | RM44002 | | diff --git a/product_docs/docs/epas/11/epas_rel_notes/33_epas11.4.11_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/33_epas11.4.11_rel_notes.mdx new file mode 100644 index 00000000000..1a86ead753f --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/33_epas11.4.11_rel_notes.mdx @@ -0,0 +1,16 @@ +--- +title: Version 11.4.11 +--- + +EDB Postgres Advanced Server 11.4.11 includes the following bug fixes: + +| Type | Description | ID | Category | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | +| Upstream Merge | Merged with community PostgreSQL 11.4. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-4.html) for details. | | | +| Bug Fix | Don't mark portal as FAILED when executing `SPL ROLLBACK`. [Support Ticket: #870716] | RM43994 | SPL | +| Bug Fix | Fix `REASSIGN OWNED BY` for `dbms_aq` objects. | RM43959 | | +| Bug Fix | Suppress line numbers (#line directive) with `-l` option. 
| RM43689 | ecpg |
+| Bug Fix | Add missing `sepgsql` checks for namespace lookups. | RM43055 | |
+| Bug Fix | Reject non-`QT_NORMAL SELECT` statements in `SPI_is_cursor_plan()`. | RM43970 | |
+| Bug Fix | Throw a user-friendly error when package type has dropped attributes. | RM43938 | |
+| Bug Fix | Fix Cloneschema failure when applying FK constraints on the target if rows are constantly being inserted in the source. [Support Ticket: #860472] | DI-166 | |
diff --git a/product_docs/docs/epas/11/epas_rel_notes/35_epas11.3.10_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/35_epas11.3.10_rel_notes.mdx
new file mode 100644
index 00000000000..847bf054b73
--- /dev/null
+++ b/product_docs/docs/epas/11/epas_rel_notes/35_epas11.3.10_rel_notes.mdx
@@ -0,0 +1,24 @@
+---
+title: Version 11.3.10
+---
+
+EDB Postgres Advanced Server 11.3.10 includes the following bug fixes:
+
+| Type           | Description | ID     | Category      |
+| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- |
+| Upstream Merge | Merged with community PostgreSQL 11.3. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-3.html) for details. | | |
+| Bug Fix | Fix `ROWNUM` with partitioned tables by making sure the rownum filter gets applied over the Append node. | RM43887 | |
+| Bug Fix | Fix `REASSIGN`/`DROP OWNED BY` for synonym and directory objects. [Support Ticket: #855677] | RM43930 | |
+| Bug Fix | Add support for synonym resolution to the type. [Support Ticket: #712701] | RM43047 | |
+| Bug Fix | Fix for `dbms_scheduler` to set the job next run correctly. [Support Ticket: #851452] | RM43917 | |
+| Bug Fix | Fix `ALTER USER MAPPING` to treat `obfuscated_password` and `password` as the same option. [Support Ticket: #847443] | RM43903 | |
+| Bug Fix | Fix an issue with addition of timestamp and negative numbers. [Support Ticket: #853879] | RM43923 | |
+| Bug Fix | Use `localVarCxt` to store trigger variable values. [Support Ticket: #] | RM43956 | SPL |
+| Bug Fix | Serialize tuple and index updates while exchanging statistics for partition during exchange partition. [Support Ticket: #864189] | RM43952 | |
+| Bug Fix | Fix for server crash when `edb_enable_pruning` is true. [Support Ticket: #853578] | RM43939 | |
+| Bug Fix | Fixed 'C' multiline macro handling. | RM43920 | ecpg |
+| Bug Fix | Fix for `pg_dump`/`restore` for `VIEW`s having `DECODE` expression by displaying implicit cast while deparsing `DECODE` expressions. [Support Ticket: #851791] | RM43915 | |
+| Bug Fix | Fix server crash with `ASSERT` statement. | RM43940 | SPL |
+| Bug Fix | Preserve the shared dependencies on package in case of replace. | RM41964 | |
+| Bug Fix | Quote nested sub-procedure name while deparsing it into a `CREATE` statement. | RM41007 | |
+| Bug Fix | Qualify type name while deparsing package variable. 
| RM41958 | | \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_rel_notes/37_epas11.2.9_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/37_epas11.2.9_rel_notes.mdx new file mode 100644 index 00000000000..86c74bf62c4 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/37_epas11.2.9_rel_notes.mdx @@ -0,0 +1,20 @@ +--- +title: Version 11.2.9 +--- + +EDB Postgres Advanced Server 11.2.9 includes the following bug fixes: + +| Type | Description | ID | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | +| Upstream Merge | Merged with community PostgreSQL 11.2. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-2.html) for details. | | +| Bug Fix | Allow `COMMIT`/`ROLLBACK` in any function-like expressions invoked via `CALL`. | RM43886 | +| Bug Fix | Fix possible `pg_upgrade` failure for databases containing `DBMS_AQ` queue tables. | RM41964 | +| Bug Fix | Fix possible server crash when attempting to use named-parameter syntax to call a function whose arguments are unnamed. | RM41971 | +| Bug Fix | Fix failure to dump triggers on child partitioned tables when upgrading from a version prior to v10. | RM43593 | +| Bug Fix | Fix incorrect handling of `IYYY` mask when `to_char()` is used. [Support Ticket: #823847] | RM43843 | +| Bug Fix | Fix wrong answer when `to_date()` is used with the RR mask for with or during a year ending in 00. [Support Ticket: #822321] | RM43832 | +| Bug Fix | Fix possible cache lookup failure when accessing the definition of an object type from which an attribute has been dropped. | RM43739 | +| Bug Fix | Don't permit modifications to `aq_administrator_role`, since it is a special role used by the database system. [Support Ticket: #731149] | RM43281 | +| Bug Fix | Fix locking bug when renaming a redaction policy. | | +| Bug Fix | Fix failure to apply insert performance optimization when inserting into a partitioned table. | | +| Bug Fix | Fix `CREATE ROLE` to honor `NOSUPERUSER` even when `IDENTIFIED BY` is specified. 
[Support Ticket: #818138] | RM43809 | From c291bf09536c50168182fcbb4c13134900dae7e4 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 21 Dec 2021 11:34:15 -0500 Subject: [PATCH 09/16] Updated supported platform files, started work on base release for v11 --- .../02_supported_platforms.mdx | 22 ------ .../epas/11/epas_platform_support/index.mdx | 40 +++++++++++ .../39_epas11.1.7_rel_notes.mdx | 68 +++++++++++++++++++ .../docs/epas/11/epas_rel_notes/index.mdx | 57 ---------------- .../01_supported_platforms.mdx | 15 ---- 5 files changed, 108 insertions(+), 94 deletions(-) delete mode 100644 product_docs/docs/epas/11/epas_inst_linux/02_supported_platforms.mdx create mode 100644 product_docs/docs/epas/11/epas_platform_support/index.mdx create mode 100644 product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx delete mode 100644 product_docs/docs/epas/11/epas_upgrade_guide/01_supported_platforms.mdx diff --git a/product_docs/docs/epas/11/epas_inst_linux/02_supported_platforms.mdx b/product_docs/docs/epas/11/epas_inst_linux/02_supported_platforms.mdx deleted file mode 100644 index 575da719c6e..00000000000 --- a/product_docs/docs/epas/11/epas_inst_linux/02_supported_platforms.mdx +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: "Supported Platforms" - -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/installation-guide-for-linux/11/EDB_Postgres_Advanced_Server_Installation_Guide_Linux.1.05.html" ---- - - - -For information about the platforms and versions supported by Advanced Server, see [Platform Compatibility](https://www.enterprisedb.com/platform-compatibility#epas). - - -!!! Note - Advanced Server is no longer supported on RHEL/CentOS/OL 6.x platforms. It is strongly recommended that EDB products running on these platforms be migrated to a supported platform. - -**Limitations** - -The following limitations apply to EDB Postgres Advanced Server: - -- The `data` directory of a production database should not be stored on an NFS file system. -- The LLVM JIT package is supported on RHEL or CentOS 7.x, 8.x, and SLES. LLVM JIT is not supported on PPC-LE 64 running RHEL or CentOS 7.x or 8.x. \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_platform_support/index.mdx b/product_docs/docs/epas/11/epas_platform_support/index.mdx new file mode 100644 index 00000000000..5b267e93429 --- /dev/null +++ b/product_docs/docs/epas/11/epas_platform_support/index.mdx @@ -0,0 +1,40 @@ +--- +title: "Supported Platforms" + +redirects: + - /epas/11/epas_inst_linux/02_supported_platforms + - /epas/11/epas_upgrade_guide/01_supported_platforms +--- + +EDB Postgres Advanced Server v11 installers support 64 bit Linux and Windows server platforms. The Advanced Server 11 RPM packages are supported on the following 64-bit Linux platforms: + +- Red Hat Enterprise Linux (x86_64) 6.x and 7.x +- CentOS (x86_64) 6.x and 7.x +- PPC-LE 8 running CentOS/RHEL 7.x +- SLES 12 + +The EDB Postgres Advanced Server 11 native packages are supported on the following 64-bit Linux platforms: + +- Debian 9.x +- Ubuntu 18.04 LTS + +Graphical installers are supported on the following 64-bit Windows platforms: + +- Windows Server 2016 +- Windows Server 2012 R2 Server + +!!! Note + Connectors Installer will be supported on Windows 7, 8, & 10. + +!!! 
Note + Advanced Server is no longer supported on RHEL/CentOS/OL 6.x platforms. It is strongly recommended that EDB products running on these platforms be migrated to a supported platform. + +See [Platform Compatibility](https://www.enterprisedb.com/platform-compatibility) + for additional information about supported platforms. + +**Limitations** + +The following limitations apply to EDB Postgres Advanced Server: + +- The `data` directory of a production database should not be stored on an NFS file system. +- The LLVM JIT package is supported on RHEL or CentOS 7.x, 8.x, and SLES. LLVM JIT is not supported on PPC-LE 64 running RHEL or CentOS 7.x or 8.x. diff --git a/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx new file mode 100644 index 00000000000..3abe63355e0 --- /dev/null +++ b/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx @@ -0,0 +1,68 @@ +--- +title: "Version 11.1.7" +--- + +New features, enhancements, bug fixes, and other changes in EDB Postgres Advanced Server 11 include: + +| Type | Category | Description | +| ----------- | -------------- | ---------------- | +| Upstream Merge | | Merged with community PostgreSQL 11. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11.html) for details. | | | +| Feature | General Functionality | Advanced Server no longer creates the `dbo` system catalog in its databases. | +| Feature | General Functionality | Advanced Server now supports data redaction. Data redaction is a technique for limiting the exposure of sensitive data to certain users. Data redaction results in the alteration of the displayed data to such users. This is accomplished by defining redaction policies for tables with sensitive data. | +| Feature | General Functionality | You can use the `edb_filter_log.redact_password_commands` extension to instruct the server to redact stored passwords from the log file. | +| Feature | General Functionality | Advanced Server now supports EDB wait states, which is a background worker that probes each running session at a regular interval collecting information such as the database connection, the logged in user, the running query, and the wait events. | +| Feature | General Functionality | Advanced Server now permits an infinite number of tries for custom plan generation if you set the `edb_custom_plan_tries` parameter to `-1`. | +| Feature | General Functionality | The output format of the `version()` function has been changed to appear more consistent with the PostgreSQL community version output. | +| Feature | General Functionality | Advanced Server now supports SPL standalone procedure overloading. Note that this feature is not compatible with Oracle databases. | +| Feature | General Functionality | Advanced Server now supports the `PRAGMA AUTONOMOUS_TRANSACTION` directive within any SPL block to provide the autonomous transaction capability. | +| Feature | General Functionality | Advanced server now offers performance improvements to libpq dblink and the OCI. | +| Enhancement | Partitioning | Allow faster partition elimination during query processing; this speeds access to partitioned tables with many partitions. | +| Enhancement | Partitioning | Allow partition elimination during query execution. | +| Enhancement | Partitioning | Allow the creation of partitions based on hashing a key. 
| +| Enhancement | Partitioning | Allow updated rows to automatically move to new partitions based on the new row contents. | +| Enhancement | Partitioning | Allow partitioned tables to have a default partition. | +| Enhancement | Partitioning | Allow `UNIQUE` indexes on partitioned tables if the partition key guarantees uniqueness. | +| Enhancement | Partitioning | Allow indexes on a partitioned table to be automatically created in any child partitions. The new command `ALTER INDEX ATTACH PARTITION` allows indexes to be attached to partitions. This does not behave as a global index since the contents are private to each index. `WARN WHEN USING AN EXISTING INDEX?` | +| Enhancement | Partitioning | Allow foreign keys on partitioned tables. | +| Enhancement | Partitioning | Allow `INSERT`, `UPDATE`, and `COPY` on partitioned tables to properly route rows to foreign partitions. This is supported by `postgres_fdw` foreign tables.| +| Enhancement | Partitioning | Allow `FOR EACH ROW` triggers on partitioned tables. | +| Enhancement | Partitioning | Allow equality joins between partitioned tables with identically partitioned child tables to join the child tables directly. | +| Enhancement | Partitioning | Perform aggregation on each partition, and then merge the results. | +| Enhancement | Partitioning | Allow `postgres_fdw` to push down aggregates to foreign tables that are partitions. | +| Enhancement | Indexes | | +| Enhancement | Optimizer | | +| Enhancement | Performance | | +| Enhancement | Monitoring | | +| Enhancement | Authentication | | +| Enhancement | Server Configuration | | +| Enhancement | Streaming Replication and Recovery | | +| Enhancement | Utility Commands | | +| Enhancement | Data Types | | +| Enhancement | Functions | | +| Bug Fix | XML Functions | | +| Enhancement | PL/PgSQL | | +| Enhancement | Client Interfaces | | +| Enhancement | Client Applications | | +| Enhancement | psql | | +| Enhancement | pgbench | | +| Enhancement | Server Applications | | +| Enhancement | pg_dump, pg_dumpall, pg_restore | | +| Enhancement | Source Code | | +| Enhancement | Additional Modules | | +| +## Component Certification + +The following components are included in the EDB Postgres Advanced Server v12 release: + +- Procedural Language Packs – PL/Perl 5.26, PL/Python 3.7, PL/TCL 8.6 +- CloneSchema 1.10 +- Parallel Clone 1.5 +- pgAgent 4.15 +- Slony 2.2.8 +- Connectors JDBC 42.2.8, ODBC 12.00.0000 .NET 4.0.10.1, OCI 11.0.3.1 +- pgAdmin 4.15 +- pgBouncer 1.12.0 +- pgPool-II & pgPool-IIExtensions 4.0.6 +- MTK 53.0.0 +- EDBPlus 38.0.0 +- PostGIS-JDBC 2.2.1 \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_rel_notes/index.mdx b/product_docs/docs/epas/11/epas_rel_notes/index.mdx index 4a6216252c3..780326ff59b 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/index.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/index.mdx @@ -44,28 +44,7 @@ Documentation is provided on the EnterpriseDB website, visit: ## Supported Platforms -EDB Postgres Advanced Server v11 installers support 64 bit Linux and Windows server platforms. 
The Advanced Server 11 RPM packages are supported on the following 64-bit Linux platforms: -- Red Hat Enterprise Linux (x86_64) 6.x and 7.x -- CentOS (x86_64) 6.x and 7.x -- PPC-LE 8 running CentOS/RHEL 7.x -- SLES 12 - -The EDB Postgres Advanced Server 11 native packages are supported on the following 64-bit Linux platforms: - -- Debian 9.x -- Ubuntu 18.04 LTS - -Graphical installers are supported on the following 64-bit Windows platforms: - -- Windows Server 2016 -- Windows Server 2012 R2 Server - -!!! Note - Connectors Installer will be supported on Windows 7, 8, & 10. - -See [Platform Compatibility](https://www.enterprisedb.com/platform-compatibility) - for additional information about supported platforms. ## Component Certification @@ -88,15 +67,6 @@ The following components are included in the EDB Postgres Advanced Server v11 re The major highlights of this release are : -- Advanced Server no longer creates the `dbo` system catalog in its databases. -- Advanced Server now supports data redaction. Data redaction is a technique for limiting the exposure of sensitive data to certain users. Data redaction results in the alteration of the displayed data to such users. This is accomplished by defining redaction policies for tables with sensitive data. -- You can use the `edb_filter_log.redact_password_commands` extension to instruct the server to redact stored passwords from the log file. -- Advanced Server now supports EDB wait states, which is a background worker that probes each running session at a regular interval collecting information such as the database connection, the logged in user, the running query, and the wait events. -- Advanced Server now permits an infinite number of tries for custom plan generation if you set the `edb_custom_plan_tries` parameter to `-1`. -- The output format of the `version()` function has been changed to appear more consistent with the PostgreSQL community version output. -- Advanced Server now supports SPL standalone procedure overloading. Note that this feature is not compatible with Oracle databases. -- Advanced Server now supports the `PRAGMA AUTONOMOUS_TRANSACTION` directive within any SPL block to provide the autonomous transaction capability. -- Advanced server now offers performance improvements to libpq dblink and the OCI. For information about Advanced Server features that are compatible with Oracle databases, see the following guides: @@ -115,33 +85,6 @@ The following updates are available in PostgreSQL 11: ### Partitioning Updates -Allow faster partition elimination during query processing; this speeds access to partitioned tables with many partitions. - -Allow partition elimination during query execution. - -Allow the creation of partitions based on hashing a key. - -Allow updated rows to automatically move to new partitions based on the new row contents. - -Allow partitioned tables to have a default partition. - -Allow `UNIQUE` indexes on partitioned tables if the partition key guarantees uniqueness. - -Allow indexes on a partitioned table to be automatically created in any child partitions. The new command `ALTER INDEX ATTACH PARTITION` allows indexes to be attached to partitions. This does not behave as a global index since the contents are private to each index. `WARN WHEN USING AN EXISTING INDEX?` - -Allow foreign keys on partitioned tables. - -Allow `INSERT`, `UPDATE`, and `COPY` on partitioned tables to properly route rows to foreign partitions. - -This is supported by `postgres_fdw` foreign tables. 
- -Allow `FOR EACH ROW` triggers on partitioned tables. - -Allow equality joins between partitioned tables with identically partitioned child tables to join the child tables directly. - -Perform aggregation on each partition, and then merge the results. - -Allow `postgres_fdw` to push down aggregates to foreign tables that are partitions. ### Parallel Queries diff --git a/product_docs/docs/epas/11/epas_upgrade_guide/01_supported_platforms.mdx b/product_docs/docs/epas/11/epas_upgrade_guide/01_supported_platforms.mdx deleted file mode 100644 index 7c502bb9163..00000000000 --- a/product_docs/docs/epas/11/epas_upgrade_guide/01_supported_platforms.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "Supported Platforms" - -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/upgrade-guide/11/EDB_Postgres_Advanced_Server_Upgrade_Guide.1.05.html" ---- - - - -For information about the platforms and versions supported by Advanced Server, see [Platform Compatibility](https://www.enterprisedb.com/platform-compatibility#epas) - - -!!! Note - Advanced Server is no longer supported on RHEL/CentOS/OL 6.x platforms. It is strongly recommended that EDB products running on these platforms be migrated to a supported platform. \ No newline at end of file From 144d599e90b7f36b99bcbfd8a9ec7159acd90918 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 21 Dec 2021 21:59:55 +0000 Subject: [PATCH 10/16] Fix various issues with admonitions introduced during import --- product_docs/docs/bdr/3.7/nodes.mdx | 8 ++++---- product_docs/docs/bdr/4.0/functions.mdx | 2 +- product_docs/docs/bdr/4.0/nodes.mdx | 8 ++++---- product_docs/docs/bdr/4.0/repsets.mdx | 4 ++-- product_docs/docs/bdr/4.0/striggers.mdx | 6 +++--- product_docs/docs/bdr/4.0/transaction-streaming.mdx | 7 +------ 6 files changed, 15 insertions(+), 20 deletions(-) diff --git a/product_docs/docs/bdr/3.7/nodes.mdx b/product_docs/docs/bdr/3.7/nodes.mdx index 5df1b27a5fa..e03768f10cd 100644 --- a/product_docs/docs/bdr/3.7/nodes.mdx +++ b/product_docs/docs/bdr/3.7/nodes.mdx @@ -1162,12 +1162,12 @@ the node which is being removed. However, just to make it clear, once the node is PARTED it can not *part* other nodes in the cluster. !!! Note - you are *parting* the local node you must set `wait_for_completion` - false, otherwise it will error. + If you are *parting* the local node you must set `wait_for_completion` + to false, otherwise it will error. !!! Warning - s action is permanent. If you wish to temporarily halt replication - a node, see `bdr.alter_subscription_disable()`. + This action is permanent. If you wish to temporarily halt replication + to a node, see `bdr.alter_subscription_disable()`. #### Synopsis diff --git a/product_docs/docs/bdr/4.0/functions.mdx b/product_docs/docs/bdr/4.0/functions.mdx index 55330295c6a..d276c4bdfc4 100644 --- a/product_docs/docs/bdr/4.0/functions.mdx +++ b/product_docs/docs/bdr/4.0/functions.mdx @@ -47,7 +47,7 @@ connected to. This allows an application to figure out what node it is connected to even behind a transparent proxy. It is also used in combination with CAMO, see the -[CAMO.md#connection-pools-and-proxies]\(Connection pools and proxies) +[Connection pools and proxies](camo.md#connection-pools-and-proxies) section. 
### bdr.last_committed_lsn diff --git a/product_docs/docs/bdr/4.0/nodes.mdx b/product_docs/docs/bdr/4.0/nodes.mdx index 5dea8e10b24..cef4ea1cd12 100644 --- a/product_docs/docs/bdr/4.0/nodes.mdx +++ b/product_docs/docs/bdr/4.0/nodes.mdx @@ -1143,12 +1143,12 @@ the node which is being removed. However, just to make it clear, once the node is PARTED it can not *part* other nodes in the cluster. !!! Note - you are *parting* the local node you must set `wait_for_completion` - false, otherwise it will error. + If you are *parting* the local node you must set `wait_for_completion` + to false, otherwise it will error. !!! Warning - s action is permanent. If you wish to temporarily halt replication - a node, see `bdr.alter_subscription_disable()`. + This action is permanent. If you wish to temporarily halt replication + to a node, see `bdr.alter_subscription_disable()`. #### Synopsis diff --git a/product_docs/docs/bdr/4.0/repsets.mdx b/product_docs/docs/bdr/4.0/repsets.mdx index 1a7e8549af5..44dc54d5d13 100644 --- a/product_docs/docs/bdr/4.0/repsets.mdx +++ b/product_docs/docs/bdr/4.0/repsets.mdx @@ -278,11 +278,11 @@ transaction. another node, because this will stop replication on that node. Should this happen, please unsubscribe the affected node from that replication set. - the same reason, you should not drop a replication set if + For the same reason, you should not drop a replication set if there is a join operation in progress, and the node being joined is a member of that replication set; replication set membership is only checked at the beginning of the join. - s happens because the information on replication set usage is + This happens because the information on replication set usage is local to each node, so that it can be configured on a node before it joins the group. diff --git a/product_docs/docs/bdr/4.0/striggers.mdx b/product_docs/docs/bdr/4.0/striggers.mdx index b2a59d7169e..d9bd62fe75e 100644 --- a/product_docs/docs/bdr/4.0/striggers.mdx +++ b/product_docs/docs/bdr/4.0/striggers.mdx @@ -175,11 +175,11 @@ otherwise data divergence will occur. Technical Support recommends that all conf triggers are formally tested using the isolationtester tool supplied with BDR. -!!!Warning -- Multiple conflict triggers can be specified on a single table, but +!!! Warning + - Multiple conflict triggers can be specified on a single table, but they should match distinct event, i.e. each conflict should only match a single conflict trigger. - Multiple triggers matching the same event on the same table are + - Multiple triggers matching the same event on the same table are not recommended; they might result in inconsistent behaviour, and will be forbidden in a future release. diff --git a/product_docs/docs/bdr/4.0/transaction-streaming.mdx b/product_docs/docs/bdr/4.0/transaction-streaming.mdx index 8fbf2fea371..dc2f6e8ab06 100644 --- a/product_docs/docs/bdr/4.0/transaction-streaming.mdx +++ b/product_docs/docs/bdr/4.0/transaction-streaming.mdx @@ -47,12 +47,7 @@ processes on each subscriber, which is leveraged to provide the following enhanc frequent deadlocks between writers !!! Note - ect streaming to writer is still an experimental feature and must - -be used with caution. For specifically, it may not work well with -conflict resolutions since the commit timestamp of the streaming may not -be available (as the transaction may not have yet committed on the -origin). + Direct streaming to writer is still an experimental feature and must be used with caution. 
Specifically, it may not work well with conflict resolutions since the commit timestamp of the streaming may not be available (as the transaction may not have yet committed on the origin).

 ## Configuration

From 6eaa5639988b5b4dbe7b44a383b9fccbad22f5e0 Mon Sep 17 00:00:00 2001
From: David Wicinas <93669463+dwicinas@users.noreply.github.com>
Date: Wed, 22 Dec 2021 11:34:03 -0500
Subject: [PATCH 11/16] Finalized base release, restructured topics in v11

---
 .../39_epas11.1.7_rel_notes.mdx               | 170 ++++++-
 .../docs/epas/11/epas_rel_notes/index.mdx     | 455 +-----------------
 product_docs/docs/epas/11/index.mdx           |   4 +-
 3 files changed, 172 insertions(+), 457 deletions(-)

diff --git a/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx
index 3abe63355e0..aceb7608be9 100644
--- a/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx
+++ b/product_docs/docs/epas/11/epas_rel_notes/39_epas11.1.7_rel_notes.mdx
@@ -29,27 +29,155 @@ New features, enhancements, bug fixes, and other changes in EDB Postgres Advance
 | Enhancement | Partitioning | Allow equality joins between partitioned tables with identically partitioned child tables to join the child tables directly. |
 | Enhancement | Partitioning | Perform aggregation on each partition, and then merge the results. |
 | Enhancement | Partitioning | Allow `postgres_fdw` to push down aggregates to foreign tables that are partitions. |
-| Enhancement | Indexes | |
-| Enhancement | Optimizer | |
-| Enhancement | Performance | |
-| Enhancement | Monitoring | |
-| Enhancement | Authentication | |
-| Enhancement | Server Configuration | |
-| Enhancement | Streaming Replication and Recovery | |
-| Enhancement | Utility Commands | |
-| Enhancement | Data Types | |
-| Enhancement | Functions | |
-| Bug Fix | XML Functions | |
-| Enhancement | PL/PgSQL | |
-| Enhancement | Client Interfaces | |
-| Enhancement | Client Applications | |
-| Enhancement | psql | |
-| Enhancement | pgbench | |
-| Enhancement | Server Applications | |
-| Enhancement | pg_dump, pg_dumpall, pg_restore | |
-| Enhancement | Source Code | |
-| Enhancement | Additional Modules | |
-|
+| Enhancement | Parallel Queries | Allow B-tree indexes to be built in parallel. |
+| Enhancement | Parallel Queries | Allow hash joins to be performed in parallel using a shared hash table. |
+| Enhancement | Parallel Queries | Allow `UNION` to run each `SELECT` in parallel if the individual SELECTs cannot be parallelized. |
+| Enhancement | Parallel Queries | Allow partition scans to more efficiently use parallel workers. |
+| Enhancement | Parallel Queries | Allow `LIMIT` to be passed to parallel workers. |
+| Enhancement | Parallel Queries | Allow single-evaluation queries (for example, `WHERE` clause aggregate queries) and functions in the target list to be parallelized. |
+| Enhancement | Parallel Queries | Add server option `parallel_leader_participation` to control if the leader executes subplans. |
+| Enhancement | Parallel Queries | Allow parallelization of commands `CREATE TABLE .. AS`, `SELECT INTO`, and `CREATE MATERIALIZED VIEW`. |
+| Enhancement | Parallel Queries | Improve performance of sequential scans with many parallel workers. |
+| Enhancement | Parallel Queries | Add reporting of parallel worker sort activity to `EXPLAIN`. |
+| Enhancement | Indexes | Allow indexes to `INCLUDE` columns that are not part of the unique constraint but are available for index-only scans.
This is also useful for including columns that don't have B-tree support. | +| Enhancement | Indexes | Remember the highest B-tree index page to optimize future monotonically increasing index additions. | +| Enhancement | Indexes | Allow entire hash index pages to be scanned. Previously for each hash index entry, we need to refind the scan position within the page. This cuts down on lock/unlock traffic. | +| Enhancement | Indexes | Add predicate locking for hash, GiST and GIN indexes. | +| Enhancement | Indexes | Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged. | +| Enhancement | SP-Gist | Add `TEXT` prefix operator ^@ which is supported by SP-GiST. | +| Enhancement | SP-Gist | Allow polygons to be indexed with SP-GiST. | +| Enhancement | SP-Gist | Allow SP-GiST to use lossy representation of leaf keys. | +| Enhancement | Optimizer | Improve the selection of the optimizer statistics' most-common-values. | +| Enhancement | Optimizer | Improve selectivity estimates for `>=` and `<=` when the constants are not common values. | +| Enhancement | Optimizer | Optimize `var = var` to `var IS NOT NULL` where equivalent. | +| Enhancement | Optimizer | Improve row count optimizer estimates for `EXISTS` and `NOT EXISTS` queries. | +| Enhancement | Optimizer | Add optimizer selectivity costs for `HAVING` clauses. | +| Enhancement | Performance | Add Just-in-Time (JIT) compilation of some parts of query plans to improve execution speed. | +| Enhancement | Performance | Allow bitmap scans to perform index-only scans when possible. | +| Enhancement | Performance | Update the free space map during vacuum. | +| Enhancement | Performance | Allow vacuum to avoid unnecessary index scans. | +| Enhancement | Performance | Improve performance of committing multiple concurrent transactions. | +| Enhancement | Performance | Reduce memory usage for queries using set-returning functions in their target lists. | +| Enhancement | Performance | Allow `postgres_fdw` to push UPDATEs and DELETEs using joins to foreign servers. | +| Enhancement | Monitoring | Show memory usage in `log_statement_stats`, `log_parser_stats`, `log_planner_stats`, `log_executor_stats`. | +| Enhancement | Monitoring | Add `pg_stat_activity.backend_type` now shows the type of background worker. | +| Enhancement | Monitoring | Add `bgw_type` to the background worker C structure. This is displayed to the user in `pg_stat_activity.backend_type` and ps output. | +| Enhancement | Monitoring | Have `log_autovacuum_min_duration` log skipped tables that are concurrently being dropped. | +| Enhancement | Information Schema | Add `information_schema` columns related to table constraints and triggers. | +| Enhancement | Authentication | Allow the server to specify more complex LDAP specifications in search+bind mode. | +| Enhancement | Authentication | Allow LDAP authentication to use ldaps. | +| Enhancement | Authentication | Improve LDAP logging of errors. | +| Enhancement | Permissions | Add default roles which control file system access. | +| Enhancement | Permissions | Allow access to file system functions to be controlled by `GRANT/REVOKE` permissions, rather than superuser checks. | +| Enhancement | Permissions | Use `GRANT/REVOKE` to control access to `lo_import()` and `lo_export()`. | +| Enhancement | Permissions | Compile-time option `ALLOW_DANGEROUS_LO_FUNCTIONS` has been removed. 
| +| Enhancement | Permissions | Use viewowner not session owner when preventing non-password access to `postgres_fdw` tables. | +| Enhancement | Permissions | Fix invalid locking permission check in `SELECT FOR UPDATE` on views. | +| Enhancement | Server Configuration | Add server setting `ssl_passphrase_command` to allow supplying of the passphrase for SSL key files. | +| Enhancement | Server Configuration | Add storage parameter `toast_tuple_target` to control the minimum length before `TOAST` storage will be considered for new rows. | +| Enhancement | Server Configuration | Allow server options related to memory and file sizes to be specified as a number of bytes. | +| Enhancement | Write-Ahead Log (WAL) | Allow the WAL file size to be set via `initdb`. | +| Enhancement | Write-Ahead Log (WAL) | Do not retain WAL that spans two checkpoints. | +| Enhancement | Write-Ahead Log (WAL) | Fill the unused portion of force-switched WAL segment files with zeros for improved compressibility. | +| Enhancement | Base Backup and Streaming Replication | Replicate `TRUNCATE` activity when using logical replication. | +| Enhancement | Base Backup and Streaming Replication | Pass prepared transaction information to logical replication subscribers. | +| Enhancement | Base Backup and Streaming Replication | Exclude unlogged tables, temporary tables, and `pg_internal.init` files from streaming base backups. | +| Enhancement | Base Backup and Streaming Replication | Allow heap pages checksums to be checked during streaming base backup. | +| Enhancement | Base Backup and Streaming Replication | Allow replication slots to be advanced programmatically rather than be consumed by subscribers. | +| Enhancement | Base Backup and Streaming Replication | Add timeline information to the `backup_label` file. | +| Enhancement | Base Backup and Streaming Replication | Add host and port connection information to the `pg_stat_wal_receiver` system view. | +| Enhancement | Window Functions | Add window function features to complete SQL:2011 compliance. | +| Enhancement | Utility Commands | Allow `ALTER TABLE` to add a column with a non-null default without a table rewrite. | +| Enhancement | Utility Commands | Allow views to be locked by locking the underlying tables. | +| Enhancement | Utility Commands | Allow `ALTER INDEX` to set statistics-gathering targets for expression indexes. | +| Enhancement | Utility Commands | In psql, \d+ now shows the statistics target for indexes. | +| Enhancement | Utility Commands | Allow multiple tables to be specified in one `VACUUM` or `ANALYZE` command. Also, if any table mentioned in VACUUM uses a column list, then the `ANALYZE` keyword must be supplied; previously, `ANALYZE` was implied in such cases. | +| Enhancement | Utility Commands | Add parenthesized options syntax to `ANALYZE`. This is similar to the syntax supported by `VACUUM`. | +| Enhancement | Utility Commands | Add `CREATE AGGREGATE` option to specify the behavior of the aggregate finalization function. | +| Enhancement | Data Types | Allow the creation of arrays of domains. | +| Enhancement | Data Types | Support domains over composite types. | +| Enhancement | Data Types | Add casts from jsonb scalars to numeric and boolean data types. | +| Enhancement | Functions | Add SHA-2 family of hash functions; specifically, `sha224()`, `sha256()`, `sha384()`, `sha512()` were added. | +| Enhancement | Functions | Add support for 64-bit non-cryptographic hash functions. 
| +| Enhancement | Functions | Allow `to_char()` and `to_timestamp()` to specify the time zone's hours and minutes from UTC. | +| Enhancement | Functions | Improve the speed of aggregate computations. | +| Enhancement | Functions | Add text search function `websearch_to_tsquery()` that supports a query syntax similar to that used by web search engines. | +| Enhancement | Functions | Add function `json(b)_to_tsvector()` to create a text search query for matching JSON/JSONB values. | +| Enhancement | Server-Side Languages | Add SQL-level procedures, which can start and commit their own transactions. They are created with the new `CREATE PROCEDURE` command and invoked via `CALL`. The new `ALTER/DROP ROUTINE` commands allows altering/dropping of procedures, functions, and aggregates. | +| Enhancement | Server-Side Languages | Add transaction control to PL/pgSQL, PL/Perl, PL/Python, PL/Tcl, and SPI server-side languages. | +| Enhancement | Server-Side Languages | Add the ability to define PL/pgSQL record types as not null, constant, or with initial values. | +| Enhancement | Server-Side Languages | Allow PL/pgSQL to handle changes to composite types (e.g. record, row) that happen between the first and later function executions in the same session. Previously such circumstances generated errors. | +| Enhancement | Server-Side Languages | Add extension `jsonb_plpython` to transform JSONB to/from PL/Python types. | +| Enhancement | Server-Side Languages | Add extension `jsonb_plperl` to transform JSONB to/from PL/Perl types. | +| Enhancement | Client Interfaces | Change libpq to disable compression by default. | +| Enhancement | Client Interfaces | Add `DO CONTINUE` action to the `ECPG WHENEVER` statement. | +| Enhancement | Client Interfaces | Add ecpg mode to enable Oracle Pro*C handling of char arrays. This mode is enabled with -C. | +| Enhancement | psql | Add psql command `\gdesc` to display the column names and types of the query output. | +| Enhancement | psql | Add psql variables to report query activity and errors. | +| Enhancement | psql | Allow psql to test for the existence of a variable. | +| Enhancement | psql | Add `PSQL_PAGER` to control psql's pager. | +| Enhancement | psql | Have psql `\d+` always show the partition information. | +| Enhancement | psql | Have psql report the proper user name before the password prompt. | +| Enhancement | psql | Allow quit and exit to exit psql when used in an empty buffer. | +| Enhancement | psql | Have psql hint at using control-D when `\q` is entered alone on a line but ignored. | +| Enhancement | psql | Improve tab-completion for `ALTER INDEX RESET/SET`. | +| Enhancement | psql | Add infrastructure to allow psql to customize tab completion queries based on the server version. Previously tab completion queries could fail. | +| Enhancement | pgbench | Add pgbench expressions support for NULLs, booleans, and some functions and operators. | +| Enhancement | pgbench | Add `\if` conditional support to pgbench. | +| Enhancement | pgbench | Allow the use of non-ASCII characters in pgbench variable names. | +| Enhancement | pgbench | Add pgbench option `--init-steps` to control the initialization steps performed. | +| Enhancement | pgbench | Add an approximated Zipfian-distributed random generator to pgbench. | +| Enhancement | pgbench | Allow the random seed to be set in pgbench. | +| Enhancement | pgbench | Allow pgbench to do exponentiation with `pow()` and `power()`. | +| Enhancement | pgbench | Add hashing functions to pgbench. 
| +| Enhancement | pgbench | Make pgbench statistics more accurate when using `--latency-limit` and `--rate`. | +| Enhancement | Server Applications | Add an option to `pg_basebackup` that creates a named replication slot. The option `--create-slot` creates the named replication slot `(--slot)` when the WAL streaming method `(--wal-method=stream)` is used. | +| Enhancement | Server Applications | Allow `initdb` to set group read access to the data directory. | +| Enhancement | Server Applications | Add `pg_verify_checksums` tool to verify database checksums while offline. | +| Enhancement | Server Applications | Allow `pg_resetwal` to change the WAL segment size via `--wal-segsize`. | +| Enhancement | Server Applications | Add long options to `pg_resetwal` and `pg_controldata`. | +| Enhancement | Server Applications | Add `pg_receivewal` option `--no-sync` to prevent synchronous WAL writes, for testing. | +| Enhancement | Server Applications | Add `pg_receivewal` option `--endpos` to specify when WAL receiving should stop. | +| Enhancement | Server Applications | Allow `pg_ctl` to send the `SIGKILL` signal to processes. | +| Enhancement | Server Applications | Reduce the number of files copied by `pg_rewind`. | +| Enhancement | Server Applications | Prevent `pg_rewind` from running as root. | +| Enhancement | pg_dump, pg_dumpall, pg_restore | Add `pg_dumpall` option `--encoding` to control encoding. | +| Enhancement | pg_dump, pg_dumpall, pg_restore | Add `pg_dump` option `--load-via-partition-root` to force loading of data into the partition's root table rather than the original partitions. | +| Enhancement | pg_dump, pg_dumpall, pg_restore | Add an option to suppress dumping and restoring comments. | +| Enhancement | Source Code | Add support for large pages on Windows. | +| Enhancement | Source Code | Add support for ARMv8 hardware CRC calculations. | +| Enhancement | Source Code | Convert documentation to DocBook XML. | +| Enhancement | Source Code | Use `stdbool.h` to define type bool on platforms where it's suitable. | +| Enhancement | Source Code | Add ability to use channel binding when using SCRAM authentication. | +| Enhancement | Source Code | Overhaul the way system tables are defined for bootstrap use. | +| Enhancement | Source Code | Allow background workers to attach to databases that normally disallow connections. | +| Enhancement | Source Code | Speed up lookups of built-in function names matching OIDs. | +| Enhancement | Source Code | Speed up construction of query results. | +| Enhancement | Source Code | Improve access speed to system caches. | +| Enhancement | Source Code | Add a generational memory allocator which is optimized for serial allocation/deallocation. | +| Enhancement | Source Code | Make the computation of system column `pg_class.reltuples` consistent. | +| Enhancement | Source Code | Update to use perltidy version. | +| Enhancement | Additional Modules | Allow extension `pg_prewarm` to restore the previous shared buffer contents on startup. | +| Enhancement | Additional Modules | Add `pg_trgm` function `strict_word_similarity()` to compute the similarity of whole words.| +| Enhancement | Additional Modules | Allow creation of indexes on citext extension columns that can be used by `LIKE` comparisons. | +| Enhancement | Additional Modules | Allow `btree_gin` to index bool, bpchar, name and uuid data types. | +| Enhancement | Additional Modules | Allow cube and seg extensions using GiST indexes to perform index-only scans. 
| +| Enhancement | Additional Modules | Allow retrieval of negative cube coordinates using the `~>` operator. | +| Enhancement | Additional Modules | Add Vietnamese letter detection to the unaccent extension. | +| Enhancement | Additional Modules | Enhance amcheck to check that each heap tuple has an index entry. | +| Enhancement | Additional Modules | Have adminpack use the new default file system access roles. | +| Enhancement | Additional Modules | Increase `pg_stat_statement`'s query id to 64 bits. | +| Enhancement | Additional Modules | Install `errcodes.txt` to provide access to the error codes reported by PostgreSQL. | +| Enhancement | Additional Modules | Prevent extensions from creating custom server variables that take a quoted list of values. | +| Enhancement | Additional Modules | Removed `contrib/start-scripts/osx`. | +| Enhancement | Additional Modules | Removed the chkpass extension. | + +## Deprecated Features + +The PL/Java package is deprecated in EDB Postgres Advanced Server 11 and will be unavailable in EDB Postgres Advanced Server 12. + +Advanced Server no longer supports the Infinite Cache feature. All related components have been removed such as the functions, scripts, configuration parameters, and columns from statistical tables and views. + ## Component Certification The following components are included in the EDB Postgres Advanced Server v12 release: diff --git a/product_docs/docs/epas/11/epas_rel_notes/index.mdx b/product_docs/docs/epas/11/epas_rel_notes/index.mdx index 780326ff59b..11ce07a3683 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/index.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/index.mdx @@ -1,20 +1,8 @@ --- navTitle: Release Notes title: "EDB Postgres Advanced Server Release Notes" - -legacyRedirectsGenerated: - # This list is generated by a script. If you need add entries, use the `legacyRedirects` key. - - "/edb-docs/d/edb-postgres-advanced-server/user-guides/language-pack-guide/11/EDB_Postgres_Language_Pack_Guide.1.03.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/toc.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/EPAS_Release_Notes.1.7.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/EPAS_Release_Notes.1.6.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/EPAS_Release_Notes.1.4.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/EPAS_Release_Notes.1.3.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/release-notes/11/EPAS_Release_Notes.1.5.html" - - "/edb-docs/d/edb-postgres-advanced-server/installation-getting-started/installation-guide-for-windows/11/EDB_Postgres_Advanced_Server_Installation_Guide_Windows.1.05.html" --- -- With this release of EDB Postgres Advanced Server 11, EnterpriseDB continues to lead as the only worldwide company to deliver innovative and low cost open-source-derived database solutions with commercial quality, ease of use, compatibility, scalability, and performance for small or large-scale enterprises. EDB Postgres Advanced Server 11 is built on the open source PostgreSQL 11. EDB Postgres Advanced Server 11 adds a number of new features that will please developers and DBAs alike, including: @@ -24,426 +12,23 @@ EDB Postgres Advanced Server 11 is built on the open source PostgreSQL 11. 
EDB P - Performance diagnostics - Various performance improvements to OCI dblink -## Installers and Documentation - -EDB Postgres Advanced Server v11 is packaged and delivered as interactive installers for Windows; visit the EnterpriseDB website: - - - -RPM Packages are available for Linux from: - - - -If you need to request repository access, visit: - - - -Documentation is provided on the EnterpriseDB website, visit: - - - -## Supported Platforms - - - -## Component Certification - -The following components are included in the EDB Postgres Advanced Server v11 release: - -- Procedural Language Packs – PL/Perl 5.26, PL/Python 3.6, PL/TCL 8.6 -- CloneSchema 1.8 -- pgAgent 4.0 -- Slony 2.2.7 -- Connectors 11.0.1 -- Connectors JDBC 42.2.5.1, ODBC 10.03.0000.02, .NET 4.0.2.1, OCI 11.0.1.1 -- pgAdmin 4 Client 2.0 -- pgBouncer 1.9.0.1 -- pgPool-II & pgPool-IIExtensions 3.7.5 -- MTK 52.0.0 -- EDBPlus 37.0.0 -- PostGIS 2.5.x - -## EDB Postgres Advanced Server v11 Features - -The major highlights of this release are : - - -For information about Advanced Server features that are compatible with Oracle databases, see the following guides: - -- *Database Compatibility for Oracle Developers Guide* -- *Database Compatibility for Oracle Developers Reference Guide* -- *Database Compatibility for Oracle Developers Built-in Package Guide* -- *Database Compatibility for Oracle Developers Tools and Utilities Guide* - -### Community PostgreSQL 11 Updates - -Advanced Server 11 integrates all of the community PostgreSQL 11 features. To review a complete list of changes to the community PostgreSQL project and the contributors names, see the PostgreSQL 11 Release Notes at: - - - -The following updates are available in PostgreSQL 11: - -### Partitioning Updates - - -### Parallel Queries - -Allow btree indexes to be built in parallel. - -Allow hash joins to be performed in parallel using a shared hash table. - -Allow `UNION` to run each `SELECT` in parallel if the individual SELECTs cannot be parallelized. - -Allow partition scans to more efficiently use parallel workers. - -Allow `LIMIT` to be passed to parallel workers. - -Allow single-evaluation queries, e.g. `WHERE` clause aggregate queries, and functions in the target list to be parallelized. - -Add server option `parallel_leader_participation` to control if the leader executes subplans. - -Allow parallelization of commands `CREATE TABLE .. AS`, `SELECT INTO`, and `CREATE MATERIALIZED VIEW`. - -Improve performance of sequential scans with many parallel workers. - -Add reporting of parallel worker sort activity to `EXPLAIN`. - -### Indexes - -Allow indexes to `INCLUDE` columns that are not part of the unique constraint but are available for index-only scans. - -This is also useful for including columns that don't have btree support. - -Remember the highest btree index page to optimize future monotonically increasing index additions. - -Allow entire hash index pages to be scanned. - -Previously for each hash index entry, we need to refind the scan position within the page. This cuts down on lock/unlock traffic. - -Add predicate locking for hash, GiST and GIN indexes. - -Allow heap-only-tuple (HOT) updates for expression indexes when the values of the expressions are unchanged. - -### SP-Gist - -Add `TEXT` prefix operator ^@ which is supported by SP-GiST. - -Allow polygons to be indexed with SP-GiST. - -Allow SP-GiST to use lossy representation of leaf keys. - -### Optimizer - -Improve the selection of the optimizer statistics' most-common-values. 
- -Improve selectivity estimates for `>=` and `<=` when the constants are not common values. - -Optimize `var = var` to `var IS NOT NULL` where equivalent. - -Improve row count optimizer estimates for `EXISTS` and `NOT EXISTS` queries. - -Add optimizer selectivity costs for `HAVING` clauses. - -### General Performance - -Add Just-in-Time (JIT) compilation of some parts of query plans to improve execution speed. - -Allow bitmap scans to perform index-only scans when possible. - -Update the free space map during vacuum. - -Allow vacuum to avoid unnecessary index scans. - -Improve performance of committing multiple concurrent transactions. - -Reduce memory usage for queries using set-returning functions in their target lists. - -Allow `postgres_fdw` to push UPDATEs and DELETEs using joins to foreign servers. - -### Monitoring - -Show memory usage in `log_statement_stats`, `log_parser_stats`, `log_planner_stats`, `log_executor_stats`. - -Add `pg_stat_activity.backend_type` now shows the type of background worker. - -Add `bgw_type` to the background worker C structure. This is displayed to the user in `pg_stat_activity.backend_type` and ps output. - -Have `log_autovacuum_min_duration` log skipped tables that are concurrently being dropped. - -### Information Schema - -Add `information_schema` columns related to table constraints and triggers. - -### Authentication - -Allow the server to specify more complex LDAP specifications in search+bind mode. - -Allow LDAP authentication to use ldaps. - -Improve LDAP logging of errors. - -### Permissions - -Add default roles which control file system access. - -Allow access to file system functions to be controlled by `GRANT/REVOKE` permissions, rather than superuser checks. - -Use `GRANT/REVOKE` to control access to `lo_import()` and `lo_export()`. - -Compile-time option `ALLOW_DANGEROUS_LO_FUNCTIONS` has been removed. - -Use viewowner not session owner when preventing non-password access to `postgres_fdw` tables. - -Fix invalid locking permission check in `SELECT FOR UPDATE` on views. - -### Server Configuration - -Add server setting `ssl_passphrase_command` to allow supplying of the passphrase for SSL key files. - -Add storage parameter `toast_tuple_target` to control the minimum length before `TOAST` storage will be considered for new rows. - -Allow server options related to memory and file sizes to be specified as number of bytes. - -### Write-Ahead Log (WAL) - -Allow the WAL file size to be set via initdb. - -No longer retain WAL that spans two checkpoints. - -Fill the unused portion of force-switched WAL segment files with zeros for improved compressibility. - -### Base Backup and Streaming Replication - -Replicate `TRUNCATE` activity when using logical replication. - -Pass prepared transaction information to logical replication subscribers. - -Exclude unlogged tables, temporary tables, and `pg_internal.init` files from streaming base backups. - -Allow heap pages checksums to be checked during streaming base backup. - -Allow replication slots to be advanced programmatically, rather than be consumed by subscribers. - -Add timeline information to the `backup_label` file. - -Add host and port connection information to the `pg_stat_wal_receiver` system view. - -### Window Functions - -Add window function features to complete SQL:2011 compliance. - -### Utility Commands - -Allow `ALTER TABLE` to add a column with a non-null default without a table rewrite. - -Allow views to be locked by locking the underlying tables. 
- -Allow `ALTER INDEX` to set statistics-gathering targets for expression indexes. - -In psql, \d+ now shows the statistics target for indexes. - -Allow multiple tables to be specified in one `VACUUM` or `ANALYZE` command. Also, if any table mentioned in VACUUM uses a column list, then the `ANALYZE` keyword must be supplied; previously, `ANALYZE` was implied in such cases. - -Add parenthesized options syntax to `ANALYZE`. This is similar to the syntax supported by `VACUUM`. - -Add `CREATE AGGREGATE` option to specify the behavior of the aggregate finalization function. - -### Data Types - -Allow the creation of arrays of domains. - -Support domains over composite types. - -Add casts from jsonb scalars to numeric and boolean data types. - -### Functions - -Add SHA-2 family of hash functions; specifically, `sha224()`, `sha256()`, `sha384()`, `sha512()` were added. - -Add support for 64-bit non-cryptographic hash functions. - -Allow `to_char()` and `to_timestamp()` to specify the time zone's hours and minutes from UTC. - -Improve the speed of aggregate computations. - -Add text search function `websearch_to_tsquery()` that supports a query syntax similar to that used by web search engines. - -Add function `json(b)_to_tsvector()` to create a text search query for matching JSON/JSONB values. - -### Server-Side Languages - -Add SQL-level procedures, which can start and commit their own transactions. They are created with the new `CREATE PROCEDURE` command and invoked via `CALL`. The new `ALTER/DROP ROUTINE` commands allows altering/dropping of procedures, functions, and aggregates. - -Add transaction control to PL/pgSQL, PL/Perl, PL/Python, PL/Tcl, and SPI server-side languages. - -Add the ability to define PL/pgSQL record types as not null, constant, or with initial values. - -Allow PL/pgSQL to handle changes to composite types (e.g. record, row) that happen between the first and later function executions in the same session. Previously such circumstances generated errors. - -Add extension `jsonb_plpython` to transform JSONB to/from PL/Python types. - -Add extension `jsonb_plperl` to transform JSONB to/from PL/Perl types. - -### Client Interface - -Change libpq to disable compression by default. - -Add `DO CONTINUE` action to the `ECPG WHENEVER` statement. - -Add ecpg mode to enable Oracle Pro*C handling of char arrays. - -This mode is enabled with -C. - -### Client Applications - -**psql** - -Add psql command `\gdesc` to display the column names and types of the query output. - -Add psql variables to report query activity and errors. - -Allow psql to test for the existence of a variable. - -Add `PSQL_PAGER` to control psql's pager. - -Have psql `\d+` always show the partition information. - -Have psql report the proper user name before the password prompt. - -Allow quit and exit to exit psql when used in an empty buffer. - -Have psql hint at using control-D when `\q` is entered alone on a line but ignored. - -Improve tab-completion for `ALTER INDEX RESET/SET`. - -Add infrastructure to allow psql to customize tab completion queries based on the server version. - -Previously tab completion queries could fail. - -**pgbench** - -Add pgbench expressions support for NULLs, booleans, and some functions and operators. - -Add `\if` conditional support to pgbench. - -Allow the use of non-ASCII characters in pgbench variable names. - -Add pgbench option `--init-steps` to control the initialization steps performed. - -Add an approximated Zipfian-distributed random generator to pgbench. 
- -Allow the random seed to be set in pgbench. - -Allow pgbench to do exponentiation with `pow()` and `power()`. - -Add hashing functions to pgbench. - -Make pgbench statistics more accurate when using `--latency-limit` and `--rate`. - -### Server Applications - -Add an option to `pg_basebackup` that creates a named replication slot. The option `--create-slot` creates the named replication slot `(--slot)` when the WAL streaming method `(--wal-method=stream)` is used. - -Allow `initdb` to set group read access to the data directory. - -Add `pg_verify_checksums` tool to verify database checksums while offline. - -Allow `pg_resetwal` to change the WAL segment size via `--wal-segsize`. - -Add long options to `pg_resetwal` and `pg_controldata`. - -Add `pg_receivewal` option `--no-sync` to prevent synchronous WAL writes, for testing. - -Add `pg_receivewal` option `--endpos` to specify when WAL receiving should stop. - -Allow `pg_ctl` to send the `SIGKILL` signal to processes. - -Reduce the number of files copied by `pg_rewind`. - -Prevent `pg_rewind` from running as root. - -### pg_dump, pg_dumpall, pg_restore - -Add `pg_dumpall` option `--encoding` to control encoding. - -Add `pg_dump` option `--load-via-partition-root` to force loading of data into the partition's root table, rather than the original partitions. - -Add an option to suppress dumping and restoring comments. - -### Source Code - -Add support for large pages on Windows. - -Add support for ARMv8 hardware CRC calculations. - -Convert documentation to DocBook XML. - -Use `stdbool.h` to define type bool on platforms where it's suitable. - -Add ability to use channel binding when using SCRAM authentication. - -Overhaul the way system tables are defined for bootstrap use. - -Allow background workers to attach to databases that normally disallow connections. - -Speed up lookups of built-in function names matching OIDs. - -Speed up construction of query results. - -Improve access speed to system caches. - -Add a generational memory allocator which is optimized for serial allocation/deallocation. - -Make the computation of system column `pg_class.reltuples` consistent. - -Update to use perltidy version. - -### Additional Modules - -Allow extension `pg_prewarm` to restore the previous shared buffer contents on startup. - -Add `pg_trgm` function `strict_word_similarity()` to compute the similarity of whole words. - -Allow creation of indexes on citext extension columns that can be used by `LIKE` comparisons. - -Allow `btree_gin` to index bool, bpchar, name and uuid data types. - -Allow cube and seg extensions using GiST indexes to perform index-only scans. - -Allow retrieval of negative cube coordinates using the `~>` operator. - -Add Vietnamese letter detection to the unaccent extension. - -Enhance amcheck to check that each heap tuple has an index entry. - -Have adminpack use the new default file system access roles. - -Increase `pg_stat_statement`'s query id to 64 bits. - -Install `errcodes.txt` to provide access to the error codes reported by PostgreSQL. - -Prevent extensions from creating custom server variables that take a quoted list of values. - -Removed `contrib/start-scripts/osx`. - -Removed the chkpass extension. - -### Deprecated Features - -Please note that the following items will be deprecated: - -The PL/Java package is deprecated in EDB Postgres Advanced Server 11 and will be unavailable in EDB Postgres Advanced Server 12. - -Advanced Server no longer supports the Infinite Cache feature. 
All related components have been removed such as the functions, scripts, configuration parameters, and columns from statistical tables and views. - -### How to Report Problems - -To report any issues you are having please contact EnterpriseDB’s technical support staff: - -- Email: [support@enterprisedb.com](mailto:support@enterprisedb.com) - -- Phone: - - US: +1-732-331-1320 or 1-800-235-5891 - - \ No newline at end of file +The EDB Postgres Advanced Server (Advanced Server) documentation describes the latest version of Advanced Server 11 including minor releases and patches. The release notes in this section provide information on what was new in each release. + +| Version | Release Date | Upstream Merge | +| ------- | ------------ | -------------- | +| [11.14.24](10_epas11.14.24_rel_notes.mdx) | 2021 Nov 11 | [11.14](https://www.postgresql.org/docs/11/release-11-14.html) | +| [11.13.23](11_epas11.13.23_rel_notes.mdx) | 2021 Sep 08 | [11.13](https://www.postgresql.org/docs/11/release-11-13.html) | +| [11.12.22](13_epas11.12.22_rel_notes.mdx) | 2021 May 05 | [11.12](https://www.postgresql.org/docs/11/release-11-12.html) | +| [11.12.21](15_epas11.12.21_rel_notes.mdx) | 2021 Apr 15 | [11.12](https://www.postgresql.org/docs/11/release-11-12.html) | +| [11.11.20](17_epas11.11.20_rel_notes.mdx) | 2021 Feb 12 | [11.11](https://www.postgresql.org/docs/11/release-11-11.html) | +| [11.10.19](19_epas11.10.19_rel_notes.mdx) | 2020 Nov 20 | [11.10](https://www.postgresql.org/docs/11/release-11-10.html) | +| [11.9.17](21_epas11.9.17_rel_notes.mdx) | 2020 Aug 18 | [11.9](https://www.postgresql.org/docs/11/release-11-9.html) | +| [11.9.16](23_epas11.9.16_rel_notes.mdx) | 2020 Aug 17 | [11.9](https://www.postgresql.org/docs/11/release-11-9.html) | +| [11.8.15](25_epas11.8.15_rel_notes.mdx) | 2020 May 18 | [11.8](https://www.postgresql.org/docs/11/release-11-8.html) | +| [11.7.14](27_epas11.7.14_rel_notes.mdx) | 2020 Feb 14 | [11.7](https://www.postgresql.org/docs/11/release-11-7.html) | +| [11.6.13](29_epas11.6.13_rel_notes.mdx) | 2019 Nov 19 | [11.6](https://www.postgresql.org/docs/11/release-11-6.html) | +| [11.5.12](31_epas11.5.12_rel_notes.mdx) | 2019 Aug 26 | [11.5](https://www.postgresql.org/docs/11/release-11-5.html) | +| [11.4.11](33_epas11.4.11_rel_notes.mdx) | 2019 Jun 25 | [11.4](https://www.postgresql.org/docs/11/release-11-4.html) | +| [11.3.10](35_epas11.3.10_rel_notes.mdx) | 2019 May 13 | [11.3](https://www.postgresql.org/docs/11/release-11-3.html) | +| [11.2.9](37_epas11.2.9_rel_notes.mdx) | 2019 Feb 22 | [11.2](https://www.postgresql.org/docs/11/release-11-2.html) | +| [11.1.7](39_epas11.1.7_rel_notes.mdx) | 2018 Nov 28 | [11.1](https://www.postgresql.org/docs/11/release-11-1.html) | \ No newline at end of file diff --git a/product_docs/docs/epas/11/index.mdx b/product_docs/docs/epas/11/index.mdx index 6675b9d1463..a3242bfecec 100644 --- a/product_docs/docs/epas/11/index.mdx +++ b/product_docs/docs/epas/11/index.mdx @@ -1,10 +1,12 @@ --- title: EDB Postgres Advanced Server + navigation: + - epas_rel_notes - "#Getting Started" + - epas_platform_support - epas_inst_linux - epas_inst_windows - - epas_rel_notes - epas_upgrade_guide - "#For Oracle Developers" - epas_compat_ora_dev_guide From 7a828201b1161e78f1be19dd4f584f538b0a08ba Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 22 Dec 2021 11:45:38 -0500 Subject: [PATCH 12/16] Added non-breaking hyphens --- .../docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx | 2 +- 
.../docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx | 2 +- .../docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx | 2 +- .../docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx index fb0ade4ca45..07d7269f7c4 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/10_epas11.14.24_rel_notes.mdx @@ -9,6 +9,6 @@ EDB Postgres Advanced Server 11.14.24 includes the following bug fixes: | Upstream Merge | Merged with communuity PostgreSQL 11.14. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-14.html) for details. | | | | Bug Fix | Obey the AM meridian indicator correctly in `to_timestamp()`. [Support Ticket: #74035] | DB-149 | | | Bug Fix | Prevent possible crash after implicit rollback handling `Parse` protocol message. [Support Ticket: #72626] | DB-1449 | | -| Bug Fix | Fix possible server crash when the package is dropped from another session | DB-1403 | SPL | +| Bug Fix | Fix possible server crash when the package is dropped from another session | DB‑1403 | SPL | | Bug Fix | Populate the event type for missing node type. | DB-1184 | edb_audit | | Bug Fix | Fix server crash when the package is re-compiled in the same session. [Support Ticket: #1181417] | DB-1038 | SPL | \ No newline at end of file diff --git a/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx index 982b81d7875..b52bb9405ac 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/13_epas11.12.22_rel_notes.mdx @@ -8,4 +8,4 @@ EDB Postgres Advanced Server 11.12.22 includes the following bug fixes: | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | | Upstream Merge | Merged with communuity PostgreSQL 11.12. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-12.html) for details. | | | Bug Fix | Fix `pg_upgrade` to allow the system catalog composite type used in user tables. | DB-1237 | -| Bug Fix | Fix possible misbehavior when aborting an autonomous transaction and also fix interaction of autonomous transactions with `edb_stmt_level_tx=on`. | DB-1034 | +| Bug Fix | Fix possible misbehavior when aborting an autonomous transaction and also fix interaction of autonomous transactions with `edb_stmt_level_tx=on`. | DB‑1034 | diff --git a/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx index f6bd43149a3..b77caf2d841 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/15_epas11.12.21_rel_notes.mdx @@ -12,7 +12,7 @@ EDB Postgres Advanced Server 11.12.21 includes the following bug fixes: | Bug Fix | Support password redaction in `edb_filter_log` with the extended protocol, specially used with connectors. [Support Ticket: #1234131] | DB-1139 | | | Bug Fix | Correct `QUEUE` object handling in `EVENT TRIGGER`. | DB-1129 | | | Bug Fix | Correct `REDACTION COLUMN` object handling in `EVENT TRIGGER`. 
| DB-1129 | | -| Bug Fix | Fix `pg_upgrade` to not fail when a custom configuration file directory is used. [Support Ticket: #1200560] | DB-1084 | | +| Bug Fix | Fix `pg_upgrade` to not fail when a custom configuration file directory is used. [Support Ticket: #1200560] | DB‑1084 | | | Bug Fix | Refrain from dropping trigger on parent table through partitioning dependency. [Support Ticket: #1187215] | DB-1063 | | | Bug Fix | Free temporary memory to avoid PGA memory accumulation and exceeds errors. [Support Ticket: #1129386] | DB-1061 | dblink_ora | | Bug Fix | Fix possible server crash with partition-wish join push-down code path. | DB-1042 | edb_dblink_oci | diff --git a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx index 1aa5dfad1b9..21bd0d4ae34 100644 --- a/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx +++ b/product_docs/docs/epas/11/epas_rel_notes/17_epas11.11.20_rel_notes.mdx @@ -7,7 +7,7 @@ EDB Postgres Advanced Server 11.11.20 includes the following bug fixes: | Type | Description | ID | Category | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------------- | | Upstream Merge | Merged with community PostgreSQL 11.11. See the community [Release Notes](https://www.postgresql.org/docs/11/release-11-11.html) for details. | | | -| Bug Fix | Use correct relation information while loading into multiple tables with a single control file to avoid unexpected behavior. [Support Ticket: #1165964] | DB-973 | edbldr | +| Bug Fix | Use correct relation information while loading into multiple tables with a single control file to avoid unexpected behavior. [Support Ticket: #1165964] | DB‑973 | edbldr | | Bug Fix | For nested subprocedures, verify that a set-returning function is called in a valid place or not. | DB-946 | | | Bug Fix | Skip SQL Protect-related files when running `pg_checksums` or `pg_verify_checksums`. [Support Ticket: #1140841] | DB-919 | | | Bug Fix | Forbid `CONNECT_BY_ROOT` and `SYS_CONNECT_BY_PATH` in join expressions. | DB-914 | | From 5068670f437cf7c658becb48e4b221e471345bb7 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Wed, 22 Dec 2021 13:27:29 -0500 Subject: [PATCH 13/16] updated references to v4.2 where appropriate Updated all references in install and upgrade topics. Kept references to 4.2 in HA use cases. 
Kept references to 4.2 in Configuring for Eager Failover example For other topics,If references were within text, changed to 4.<2> If references were within code examples, changed to 4.3 --- .../docs/efm/4/efm_quick_start/index.mdx | 100 +++++++++--------- .../docs/efm/4/efm_user/03_installing_efm.mdx | 17 +-- .../01_encrypting_database_password.mdx | 77 -------------- .../01_cluster_properties/index.mdx | 6 +- .../02_encrypting_database_password.mdx | 18 ++-- .../04_configuring_efm/03_cluster_members.mdx | 4 +- .../04_extending_efm_permissions.mdx | 24 ++--- .../05_using_vip_addresses.mdx | 2 +- .../docs/efm/4/efm_user/05_using_efm.mdx | 50 +++------ .../efm/4/efm_user/07_using_efm_utility.mdx | 50 +++------ .../4/efm_user/08_controlling_efm_service.mdx | 26 ++--- .../efm/4/efm_user/09_controlling_logging.mdx | 10 +- .../docs/efm/4/efm_user/10_notifications.mdx | 2 +- .../12_upgrading_existing_cluster.mdx | 14 +-- 14 files changed, 137 insertions(+), 263 deletions(-) delete mode 100644 product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/01_encrypting_database_password.mdx diff --git a/product_docs/docs/efm/4/efm_quick_start/index.mdx b/product_docs/docs/efm/4/efm_quick_start/index.mdx index 9ba5e001274..902b1840bda 100644 --- a/product_docs/docs/efm/4/efm_quick_start/index.mdx +++ b/product_docs/docs/efm/4/efm_quick_start/index.mdx @@ -21,7 +21,7 @@ Perform these basic installation and configuration steps before starting the tut - Install Failover Manager on each primary and standby node. During Advanced Server installation, you configured an EnterpriseDB repository on each database host. You can use the EnterpriseDB repository and the `yum install` command to install Failover Manager on each node of the cluster: ```text - yum install edb-efm42 + yum install edb-efm43 ``` During the installation process, the installer creates a user named efm that has privileges to invoke scripts that control the Failover Manager service for clusters owned by enterprisedb or postgres. The example that follows creates a cluster named `efm`. @@ -30,25 +30,25 @@ Start the configuration process on a primary or standby node. Then, copy the con 1. Create working configuration files. Copy the provided sample files to create Failover Manager configuration files, and correct the ownership: -```text -cd /etc/edb/efm-4.2 + ```text + cd /etc/edb/efm-4.3 -cp efm.properties.in efm.properties + cp efm.properties.in efm.properties -cp efm.nodes.in efm.nodes + cp efm.nodes.in efm.nodes -chown efm:efm efm.properties + chown efm:efm efm.properties -chown efm:efm efm.nodes -``` + chown efm:efm efm.nodes + ``` 1. Create an [encrypted password](/efm/latest/efm_user/04_configuring_efm/02_encrypting_database_password/) needed for the properties file: -```text -/usr/edb/efm-4.2/bin/efm encrypt efm -``` + ```text + /usr/edb/efm-4.3/bin/efm encrypt efm + ``` - Follow the onscreen instructions to produce the encrypted version of your database password. + Follow the onscreen instructions to produce the encrypted version of your database password. 1. Update `efm.properties`. The `.properties` file (`efm.properties` in this example) contains parameters that specify connection properties and behaviors for your Failover Manager cluster. Modifications to property settings are applied when Failover Manager starts. 
@@ -56,22 +56,22 @@ chown efm:efm efm.nodes Provide values for the following properties on all cluster nodes: -| Property | Description | -| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------- | -| `db.user` | The name of the database user. | -| `db.password.encrypted` | The encrypted password of the database user. | -| `db.port` | The port monitored by the database. | -| `db.database` | The name of the database. | -| `db.service.owner` | The owner of the `data` directory (usually `postgres` or `enterprisedb`). Required only if the database is running as a service. | -| `db.service.name` | The name of the database service (used to restart the server). Required only if the database is running as a service. | -| `db.bin` | The path to the `bin` directory (used for calls to `pg_ctl`). | -| `db.data.dir` | The `data` directory in which EFM will find or create the `recovery.conf` file or the `standby.signal` file. | -| `user.email` | An email address at which to receive email notifications (notification text is also in the agent log file). | -| `bind.address` | The local address of the node and the port to use for Failover Manager. The format is: `bind.address=1.2.3.4:7800` | -| `is.witness` | `true` on a witness node and `false` if it is a primary or standby. | -| `ping.server.ip` | If you are running on a network without Internet access, set `ping.server.ip` to an address that is available on your network. | -| `auto.allow.hosts` | On a test cluster, set to `true` to simplify startup; for production usage, consult the user's guide. | -| `stable.nodes.file` | On a test cluster, set to `true` to simplify startup; for production usage, consult the user's guide. | + | Property | Description | + | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------- | + | `db.user` | The name of the database user. | + | `db.password.encrypted` | The encrypted password of the database user. | + | `db.port` | The port monitored by the database. | + | `db.database` | The name of the database. | + | `db.service.owner` | The owner of the `data` directory (usually `postgres` or `enterprisedb`). Required only if the database is running as a service. | + | `db.service.name` | The name of the database service (used to restart the server). Required only if the database is running as a service. | + | `db.bin` | The path to the `bin` directory (used for calls to `pg_ctl`). | + | `db.data.dir` | The `data` directory in which EFM will find or create the `recovery.conf` file or the `standby.signal` file. | + | `user.email` | An email address at which to receive email notifications (notification text is also in the agent log file). | + | `bind.address` | The local address of the node and the port to use for Failover Manager. The format is: `bind.address=1.2.3.4:7800` | + | `is.witness` | `true` on a witness node and `false` if it is a primary or standby. | + | `ping.server.ip` | If you are running on a network without Internet access, set `ping.server.ip` to an address that is available on your network. | + | `auto.allow.hosts` | On a test cluster, set to `true` to simplify startup; for production usage, consult the user's guide. | + | `stable.nodes.file` | On a test cluster, set to `true` to simplify startup; for production usage, consult the user's guide. | 1. Update `efm.nodes`. 
The `.nodes` file (`efm.nodes` in this example) is read at startup to tell an agent how to find the rest of the cluster or, in the case of the first node started, can be used to simplify authorization of subsequent nodes. Add the addresses and ports of each node in the cluster to this file. One node acts as the membership coordinator. Include in the list at least the membership coordinator's address. For example: @@ -83,44 +83,44 @@ chown efm:efm efm.nodes The Failover Manager agent doesn't validate the addresses in the `efm.nodes` file. The agent expects that some of the addresses in the file can't be reached (for example, that another agent hasn’t been started yet). -1. Configure the other nodes. Copy the `efm.properties` and `efm.nodes` files to `/etc/edb/efm-4.2` on the other nodes in your sample cluster. After copying the files, change the file ownership so the files are owned by efm:efm. The `efm.properties` file can be the same on every node, except for the following properties: +1. Configure the other nodes. Copy the `efm.properties` and `efm.nodes` files to `/etc/edb/efm-4.3` on the other nodes in your sample cluster. After copying the files, change the file ownership so the files are owned by efm:efm. The `efm.properties` file can be the same on every node, except for the following properties: -- Modify the `bind.address` property to use the node’s local address. -- Set `is.witness` to `true` if the node is a witness node. If the node is a witness node, the properties relating to a local database installation are ignored. + - Modify the `bind.address` property to use the node’s local address. + - Set `is.witness` to `true` if the node is a witness node. If the node is a witness node, the properties relating to a local database installation are ignored. -1. Start the Failover Manager cluster. On any node, start the Failover Manager agent. The agent is named `edb-efm-4.2`; you can use your platform-specific service command to control the service. For example, on a CentOS/RHEL 7.x or CentOS/RHEL 8.x host, use the command: +1. Start the Failover Manager cluster. On any node, start the Failover Manager agent. The agent is named `edb-efm-4.3`; you can use your platform-specific service command to control the service. For example, on a CentOS/RHEL 7.x or CentOS/RHEL 8.x host, use the command: -```text -systemctl start edb-efm-4.2 -``` + ```text + systemctl start edb-efm-4.3 + ``` On a a CentOS or RHEL 6.x host, use the command: -```text -service edb-efm-4.2 start -``` + ```text + service edb-efm-4.3 start + ``` 1. After the agent starts, run the following command to see the status of the single-node cluster. The addresses of the other nodes appear in the `Allowed node host` list. -```text -/usr/edb/efm-4.2/bin/efm cluster-status efm -``` + ```text + /usr/edb/efm-4.3/bin/efm cluster-status efm + ``` 1. Start the agent on the other nodes. Run the `efm cluster-status efm` command on any node to see the cluster status. If any agent fails to start, see the startup log for information about what went wrong: -```text -cat /var/log/efm-4.2/startup-efm.log -``` + ```text + cat /var/log/efm-4.3/startup-efm.log + ``` ## Perform a switchover If the cluster status output shows that the primary and standby nodes are in sync, you can perform a switchover: -```text -/usr/edb/efm-4.2/bin/efm promote efm -switchover -``` + ```text + /usr/edb/efm-4.3/bin/efm promote efm -switchover + ``` The command promotes a standby and reconfigures the primary database as a new standby in the cluster. 
To switch back, run the command again. @@ -128,6 +128,6 @@ The command promotes a standby and reconfigures the primary database as a new st For quick access to online help, use: -```text -/usr/edb/efm-4.2/bin/efm --help -``` + ```text + /usr/edb/efm-4.3/bin/efm --help + ``` diff --git a/product_docs/docs/efm/4/efm_user/03_installing_efm.mdx b/product_docs/docs/efm/4/efm_user/03_installing_efm.mdx index da14d94cd3f..c3235cdb474 100644 --- a/product_docs/docs/efm/4/efm_user/03_installing_efm.mdx +++ b/product_docs/docs/efm/4/efm_user/03_installing_efm.mdx @@ -300,11 +300,12 @@ Components are installed in the following locations: | Component | Location | | --------------------------------- | --------------------------- | -| Executables | `/usr/edb/efm-4.2/bin` | -| Libraries | `/usr/edb/efm-4.2/lib` | -| Cluster configuration files | `/etc/edb/efm-4.2` | -| Logs | `/var/log/efm- 4.2` | -| Lock files | `/var/lock/efm-4.2` | -| Log rotation file | `/etc/logrotate.d/efm-4.2` | -| sudo configuration file | `/etc/sudoers.d/efm-42` | -| Binary to access VIP without sudo | `/usr/edb/efm-4.2/bin/secure` | +| Executables | `/usr/edb/efm-<4.3>/bin` | +| Libraries | `/usr/edb/efm-<4.3>/lib` | +| Cluster configuration files | `/etc/edb/efm-<4.3>` | +| Logs | `/var/log/efm-<4.3>` | +| Lock files | `/var/lock/efm-<4.3>` | +| Log rotation file | `/etc/logrotate.d/efm-<4.3>` | +| sudo configuration file | `/etc/sudoers.d/efm-<4.3>` | +| Binary to access VIP without sudo | `/usr/edb/efm-<4.3>/bin/secure` | + diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/01_encrypting_database_password.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/01_encrypting_database_password.mdx deleted file mode 100644 index 619a0af0da7..00000000000 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/01_encrypting_database_password.mdx +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: "Encrypting your database password" ---- - - - -Failover Manager requires you to encrypt your database password before including it in the cluster properties file. To encrypt the password, use the [efm utility](../../07_using_efm_utility/#efm_encrypt) (located in the `/usr/edb/efm-4.2/bin` directory). When encrypting a password, you can either pass the password on the command line when you invoke the utility or use the `EFMPASS` environment variable. - -To encrypt a password, use the command: - -```text -# efm encrypt [ --from-env ] -``` - -Where `` specifies the name of the Failover Manager cluster. - -If you include the `--from-env` option, you must export the value you want to encrypt before invoking the encryption utility. For example: - -```text -export EFMPASS=password -``` - -If you don't include the `--from-env` option, Failover Manager prompts you to enter the database password twice before generating an encrypted password for you to place in your cluster property file. When the utility shares the encrypted password, copy and paste the encrypted password into the cluster property files. - -!!! Note - Many Java vendors ship their version of Java with full-strength encryption included but not enabled due to export restrictions. If you encounter an error that refers to an illegal key size when attempting to encrypt the database password, download and enable a Java cryptography extension (JCE) that provides an unlimited policy for your platform. 
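If you aren't sure whether your Java installation already allows full-strength encryption, one quick check — assuming a JDK that includes the `jrunscript` tool — is to print the maximum key length permitted for AES; a result of 2147483647 means the unlimited policy is already in effect:

```text
# Print the maximum AES key length allowed by the active JCE policy
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
```
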
- -The following example shows using the `encrypt` utility to encrypt a password for the `acctg` cluster: - -```text -# efm encrypt acctg -This utility will generate an encrypted password for you to place in - your Failover Manager cluster property file: -/etc/edb/efm-4.2/acctg.properties -Please enter the password and hit enter: -Please enter the password again to confirm: -The encrypted password is: 516b36fb8031da17cfbc010f7d09359c -Please paste this into your acctg.properties file -db.password.encrypted=516b36fb8031da17cfbc010f7d09359c -``` - -!!! Note - The utility notifies you if a properties file doesn't exist. - -After receiving your encrypted password, paste the password into the properties file and start the Failover Manager service. If there's a problem with the encrypted password, the Failover Manager service doesn't start: - -```text -[witness@localhost ~]# systemctl start edb-efm-4.2 -Job for edb-efm-4.2.service failed because the control process exited with error code. See "systemctl status edb-efm-4.2.service" and "journalctl -xe" for details. -``` - -If you receive this message when starting the Failover Manager service, see the startup log located in `/var/log/efm-4.2/startup-efm.log` for more information. - -If you're using RHEL/CentOS 7.x or RHEL/CentOS 8.x, startup information is also available with the following command: - -```text -systemctl status edb-efm-4.2 -``` - -To prevent a cluster from inadvertently connecting to the database of another cluster, the cluster name is incorporated into the encrypted password. If you modify the cluster name, you need to re-encrypt the database password and update the cluster properties file. - -## Using the EFMPASS environment variable - -The following example shows using the --from-env environment variable when encrypting a password. Before invoking the `efm encrypt` command, set the value of `EFMPASS` to the password (`1safepassword`): - -```text -# export EFMPASS=1safepassword -``` - -Then, invoke `efm encrypt`, specifying the `--from-env` option: - -```text -# efm encrypt acctg --from-env -# 7ceecd8965fa7a5c330eaa9e43696f83 -``` - -The encrypted password `7ceecd8965fa7a5c330eaa9e43696f83` is returned as a text value. When using a script, you can check the exit code of the command to confirm that the command succeeded. A successful execution returns `0`. diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/index.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/index.mdx index d9cfd925f6d..a486203c681 100644 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/index.mdx +++ b/product_docs/docs/efm/4/efm_user/04_configuring_efm/01_cluster_properties/index.mdx @@ -8,12 +8,12 @@ legacyRedirectsGenerated: -Each node in a Failover Manager cluster has a properties file (by default, named `efm.properties`) that contains the properties of the individual node on which it resides. The Failover Manager installer creates a file template for the properties file named `efm.properties.in` in the `/etc/edb/efm-4.2` directory. +Each node in a Failover Manager cluster has a properties file (by default, named `efm.properties`) that contains the properties of the individual node on which it resides. The Failover Manager installer creates a file template for the properties file named `efm.properties.in` in the `/etc/edb/efm-4.` directory. 
After completing the Failover Manager installation, make a working copy of the template before modifying the file contents: ```text -# cp /etc/edb/efm-4.2/efm.properties.in /etc/edb/efm-4.2/efm.properties +# cp /etc/edb/efm-4.3/efm.properties.in /etc/edb/efm-4.3/efm.properties ``` After copying the template file, change the owner of the file to efm: @@ -27,7 +27,7 @@ After copying the template file, change the owner of the file to efm: After creating the cluster properties file, add or modify configuration parameter values as required. For detailed information about each property, see [Specifying cluster properties](#specifying-cluster-properties). -The property files are owned by root. The Failover Manager service script expects to find the files in the `/etc/edb/efm-4.2` directory. If you move the property file to another location, you must create a symbolic link that specifies the new location. +The property files are owned by root. The Failover Manager service script expects to find the files in the `/etc/edb/efm-4.` directory. If you move the property file to another location, you must create a symbolic link that specifies the new location. !!! Note All user scripts referenced in the properties file are invoked as the Failover Manager user. diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/02_encrypting_database_password.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/02_encrypting_database_password.mdx index 41a0b670e9b..57fee2de37e 100644 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/02_encrypting_database_password.mdx +++ b/product_docs/docs/efm/4/efm_user/04_configuring_efm/02_encrypting_database_password.mdx @@ -8,13 +8,11 @@ legacyRedirectsGenerated: -Failover Manager requires you to encrypt your database password before including it in the cluster properties file. Use the [efm utility](../07_using_efm_utility/#efm_encrypt) located in the `/usr/edb/efm-4.2/bin` directory to encrypt the password. When encrypting a password, you can either pass the password on the command line when you invoke the utility or use the `EFMPASS` environment variable. +Failover Manager requires you to encrypt your database password before including it in the cluster properties file. Use the [efm utility](../07_using_efm_utility/#efm_encrypt) located in the `/usr/edb/efm-4./bin` directory to encrypt the password. When encrypting a password, you can either pass the password on the command line when you invoke the utility or use the `EFMPASS` environment variable. To encrypt a password, use the command: -```text -# efm encrypt [ --from-env ] -``` +`efm encrypt [ --from-env ]` Where `` specifies the name of the Failover Manager cluster. @@ -35,7 +33,7 @@ This example shows using the `encrypt` utility to encrypt a password for the `ac # efm encrypt acctg This utility will generate an encrypted password for you to place in your Failover Manager cluster property file: -/etc/edb/efm-4.2/acctg.properties +/etc/edb/efm-4.3/acctg.properties Please enter the password and hit enter: Please enter the password again to confirm: The encrypted password is: 516b36fb8031da17cfbc010f7d09359c @@ -49,21 +47,21 @@ db.password.encrypted=516b36fb8031da17cfbc010f7d09359c After receiving your encrypted password, paste the password into the properties file and start the Failover Manager service. 
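For example, using the encrypted value shown earlier, the entry you paste into the cluster properties file looks like this (the value is the output of `efm encrypt`, not the clear-text password):

```text
db.password.encrypted=516b36fb8031da17cfbc010f7d09359c
```
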
If there's a problem with the encrypted password, the Failover Manager service doesn't start: ```text -[witness@localhost ~]# systemctl start edb-efm-4.2 -Job for edb-efm-4.2.service failed because the control process exited with error code. See "systemctl status edb-efm-4.2.service" and "journalctl -xe" for details. +[witness@localhost ~]# systemctl start edb-efm-4.3 +Job for edb-efm-4.3.service failed because the control process exited with error code. See "systemctl status edb-efm-4.3.service" and "journalctl -xe" for details. ``` -If you receive this message when starting the Failover Manager service, see the startup log `/var/log/efm-4.2/startup-efm.log` for more information. +If you receive this message when starting the Failover Manager service, see the startup log `/var/log/efm-4.3/startup-efm.log` for more information. If you are using RHEL/CentOS 7.x or RHEL/CentOS 8.x, startup information is also available with the following command: ```text -systemctl status edb-efm-4.2 +systemctl status edb-efm-4.3 ``` To prevent a cluster from inadvertently connecting to the database of another cluster, the cluster name is incorporated into the encrypted password. If you modify the cluster name, you must re-encrypt the database password and update the cluster properties file. -## Using the EFMPASS environment variable** +## Using the EFMPASS environment variable This example shows using the `--from-env` environment variable when encrypting a password. Before invoking the `efm encrypt` command, set the value of `EFMPASS` to the password `1safepassword`: diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/03_cluster_members.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/03_cluster_members.mdx index 46492400557..2f888c9535c 100644 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/03_cluster_members.mdx +++ b/product_docs/docs/efm/4/efm_user/04_configuring_efm/03_cluster_members.mdx @@ -8,12 +8,12 @@ legacyRedirectsGenerated: -Each node in a Failover Manager cluster has a cluster members file (by default named `efm.nodes`) that contains a list of the current Failover Manager cluster members. When an agent starts, it uses the file to locate other cluster members. The Failover Manager installer creates a file template for the cluster members file named `efm.nodes.in` in the `/etc/edb/efm-4.2` directory. +Each node in a Failover Manager cluster has a cluster members file (by default named `efm.nodes`) that contains a list of the current Failover Manager cluster members. When an agent starts, it uses the file to locate other cluster members. The Failover Manager installer creates a file template for the cluster members file named `efm.nodes.in` in the `/etc/edb/efm-4.` directory. 
After completing the Failover Manager installation, make a working copy of the template: ```text -cp /etc/edb/efm-4.2/efm.nodes.in /etc/edb/efm-4.2/efm.nodes +cp /etc/edb/efm-4.3/efm.nodes.in /etc/edb/efm-4.3/efm.nodes ``` After copying the template file, change the owner of the file to efm: diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/04_extending_efm_permissions.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/04_extending_efm_permissions.mdx index 71d591d3f10..36768a96b56 100644 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/04_extending_efm_permissions.mdx +++ b/product_docs/docs/efm/4/efm_user/04_configuring_efm/04_extending_efm_permissions.mdx @@ -22,7 +22,7 @@ The `sudoers` file contains entries that allow the user efm to control the Failo The `efm-42` file is located in `/etc/sudoers.d` and contains the following entries: ```text -# Copyright EnterpriseDB Corporation, 2014-2020. All Rights Reserved. +# Copyright EnterpriseDB Corporation, 2014-2021. All Rights Reserved. # # Do not edit this file. Changes to the file may be overwritten # during an upgrade. @@ -34,18 +34,18 @@ The `efm-42` file is located in `/etc/sudoers.d` and contains the following entr # If you run your db service under a non-default account, you will need to copy # this file to grant the proper permissions and specify the account in your efm # cluster properties file by changing the 'db.service.owner' property. -efm ALL=(postgres) NOPASSWD: /usr/edb/efm-4.2/bin/efm_db_functions -efm ALL=(enterprisedb) NOPASSWD: /usr/edb/efm-4.2/bin/efm_db_functions +efm ALL=(postgres) NOPASSWD: /usr/edb/efm-4.3/bin/efm_db_functions +efm ALL=(enterprisedb) NOPASSWD: /usr/edb/efm-4.3/bin/efm_db_functions # Allow user 'efm' to sudo efm_root_functions as 'root' to write/delete the PID file, # validate the db.service.owner property, etc. -efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.2/bin/efm_root_functions +efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.3/bin/efm_root_functions # Allow user 'efm' to sudo efm_address as root for VIP tasks. -efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.2/bin/efm_address +efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.3/bin/efm_address # Allow user 'efm' to sudo efm_pgpool_functions as root for pgpool tasks. -efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.2/bin/efm_pgpool_functions +efm ALL=(ALL) NOPASSWD: /usr/edb/efm-4.3/bin/efm_pgpool_functions # relax tty requirement for user 'efm' Defaults:efm !requiretty @@ -78,7 +78,7 @@ To run Failover Manager without sudo, you must select a database process owner w usermod -a -G efm enterprisedb ``` - This command allows the user to write to `/var/run/efm-4.2` and `/var/lock/efm-4.2`. + This command allows the user to write to `/var/run/efm-4.` and `/var/lock/efm-4.`. 2. If you're reusing a cluster name, remove any previously created log files. The new user can't write to log files created by the default or other owner. @@ -87,9 +87,9 @@ To run Failover Manager without sudo, you must select a database process owner w ```text su - enterprisedb - cp /etc/edb/efm-4.2/efm.properties.in .properties + cp /etc/edb/efm-4.3/efm.properties.in .properties - cp /etc/edb/efm-4.2/efm.nodes.in /.nodes + cp /etc/edb/efm-4.3/efm.nodes.in /.nodes ``` Then, modify the cluster properties file, providing the name of the user in the `db.service.owner` property. Also make sure that the `db.service.name` property is blank. Without sudo, you can't run services without root access. 
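For example, with the `enterprisedb` account used above, the relevant entries in the cluster properties file would look something like this:

```text
db.service.owner=enterprisedb
db.service.name=
```
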
@@ -97,19 +97,19 @@ Then, modify the cluster properties file, providing the name of the user in the After modifying the configuration, the new user can control Failover Manager with the following command: ```text -/usr/edb/efm-4.2/bin/runefm.sh start|stop .properties +/usr/edb/efm-4.3/bin/runefm.sh start|stop .properties ``` Where `` specifies the full path of the cluster properties file. The user provides the full path to the properties file whenever the nondefault user is controlling agents or using the `efm` script. To allow the new user to manage Failover Manager as a service, provide a custom script or unit file. -Failover Manager uses a binary named `manage-vip` that resides in `/usr/edb/efm-4.2/bin/secure/` to perform VIP management operations without sudo privileges. This script uses setuid to acquire with the privileges needed to manage virtual IP addresses. +Failover Manager uses a binary named `manage-vip` that resides in `/usr/edb/efm-4./bin/secure/` to perform VIP management operations without sudo privileges. This script uses setuid to acquire with the privileges needed to manage virtual IP addresses. - This directory is accessible only to root and users in the efm group. - The binary is executable only by root and the efm group. -For security reasons, we recommend against modifying the access privileges of the `/usr/edb/efm-4.2/bin/secure/` directory or the `manage-vip` script. +For security reasons, we recommend against modifying the access privileges of the `/usr/edb/efm-4./bin/secure/` directory or the `manage-vip` script. For more information about using Failover Manager without sudo, visit: diff --git a/product_docs/docs/efm/4/efm_user/04_configuring_efm/05_using_vip_addresses.mdx b/product_docs/docs/efm/4/efm_user/04_configuring_efm/05_using_vip_addresses.mdx index 84c7382364e..2e4f541bad3 100644 --- a/product_docs/docs/efm/4/efm_user/04_configuring_efm/05_using_vip_addresses.mdx +++ b/product_docs/docs/efm/4/efm_user/04_configuring_efm/05_using_vip_addresses.mdx @@ -15,7 +15,7 @@ Failover Manager uses the `efm_address` script to assign or release a virtual IP By default, the script resides in: - `/usr/edb/efm-4.2/bin/efm_address` + `/usr/edb/efm-4./bin/efm_address` Failover Manager uses the following command variations to assign or release an IPv4 or IPv6 IP address. diff --git a/product_docs/docs/efm/4/efm_user/05_using_efm.mdx b/product_docs/docs/efm/4/efm_user/05_using_efm.mdx index 30ced019faa..64ebe2dd7d1 100644 --- a/product_docs/docs/efm/4/efm_user/05_using_efm.mdx +++ b/product_docs/docs/efm/4/efm_user/05_using_efm.mdx @@ -34,9 +34,8 @@ You can start the nodes of a Failover Manager cluster in any order. To start the Failover Manager cluster on RHEL/CentOS 7.x or RHEL/CentOS 8.x, assume superuser privileges, and invoke the command: -```text -systemctl start edb-efm-4.2 -``` +`systemctl start edb-efm-4.` + If the cluster properties file for the node specifies that `is.witness` is `true`, the node starts as a witness node. @@ -57,9 +56,7 @@ You can add a node to a Failover Manager cluster at any time. When you add a nod 1. Unless `auto.allow.hosts` is set to `true`, use the `efm allow-node` command to add the address of the new node to the Failover Manager allowed node host list. When invoking the command, specify the cluster name and the address of the new node: - ```text - efm allow-node
- ``` + `efm allow-node
` For more information about using the `efm allow-node` command or controlling a Failover Manager service, see [Using the efm utility](07_using_efm_utility/#efm_allow_node). @@ -69,9 +66,7 @@ You can add a node to a Failover Manager cluster at any time. When you add a nod 3. Assume superuser privileges on the new node, and start the Failover Manager agent. To start the Failover Manager cluster on RHEL/CentOS 7.x or RHEL/CentOS 8.x, invoke the command: - ```text - systemctl start edb-efm-4.2 - ``` + `systemctl start edb-efm-4.` When the new node joins the cluster, Failover Manager sends a notification to the administrator email provided in the `user.email` property, and invokes the specified notification script. @@ -100,9 +95,7 @@ efm set-priority acctg 10.0.1.10 1 In the event of a failover, Failover Manager first retrieves information from Postgres streaming replication to confirm which standby node has the most recent data and promote the node with the least chance of data loss. If two standby nodes contain equally up-to-date data, the node with a higher user-specified priority value is promoted to orimary unless [use.replay.tiebreaker](04_configuring_efm/01_cluster_properties/#use_replay_tiebreaker) is set to `true`. To check the priority value of your standby nodes, use the command: -```text -efm cluster-status -``` +`efm cluster-status ` @@ -118,9 +111,7 @@ You can invoke `efm promote` on any node of a Failover Manager cluster to start Perform manual promotion only during a maintenance window for your database cluster. If you don't have an up-to-date standby database available, you are prompted before continuing. To start a manual promotion, assume the identity of efm or the OS superuser, and invoke the command: -```text -efm promote [-switchover] [-sourcenode
] [-quiet] [-noscripts]` -``` +`efm promote [-switchover] [-sourcenode
] [-quiet] [-noscripts]` Where: @@ -155,15 +146,11 @@ This command instructs the service to ignore the value specified in the `auto.fa To return a node to the role of primary, place the node first in the promotion list: -```text -efm set-priority
-``` +`efm set-priority
` Then, perform a manual promotion: -```text -efm promote ‑switchover -``` +`efm promote ‑switchover` For more information about the efm utility, see [Using the efm utility](07_using_efm_utility/#using_efm_utility). @@ -175,11 +162,9 @@ When you stop an agent, Failover Manager removes the node's address from the clu To stop the Failover Manager agent on RHEL/CentOS 7.x or RHEL/CentOS 8.x, assume superuser privileges and invoke the command: -```text -systemctl stop edb-efm-4.2 -``` +`systemctl stop edb-efm-4.` -Until you invoke the `efm disallow-node` command (removing the node's address from the Allowed Node host list), you can use the `service edb-efm-4.2 start` command to restart the node later without first running the `efm allow-node` command again. +Until you invoke the `efm disallow-node` command (removing the node's address from the Allowed Node host list), you can use the `service edb-efm-4. start` command to restart the node later without first running the `efm allow-node` command again. Stopping an agent doesn't signal the cluster that the agent has failed unless the [primary.shutdown.as.failure](04_configuring_efm/01_cluster_properties/#primary_shutdown_as_failure) property is set to `true`. @@ -188,9 +173,7 @@ Stopping an agent doesn't signal the cluster that the agent has failed unless th To stop a Failover Manager cluster, connect to any node of a Failover Manager cluster, assume the identity of efm or the OS superuser, and invoke the command: -```text -efm stop-cluster -``` +`efm stop-cluster ` The command causes all Failover Manager agents to exit. Terminating the Failover Manager agents completely disables all failover functionality. @@ -201,9 +184,7 @@ The command causes all Failover Manager agents to exit. Terminating the Failover The `efm disallow-node` command removes the IP address of a node from the Failover Manager Allowed Node host list. Assume the identity of efm or the OS superuser on any existing node that is currently part of the running cluster. Then invoke the `efm disallow-node` command, specifying the cluster name and the IP address of the node: -```text -efm disallow-node
-``` +`efm disallow-node
` The `efm disallow-node` command doesn't stop a running agent. The service continues to run on the node until you [stop the agent](#stop_efm_agent). If the agent or cluster is later stopped, the node isn't allowed to rejoin the cluster and is removed from the failover priority list. It becomes ineligible for promotion. @@ -280,7 +261,7 @@ After creating the `acctg.properties` and `sales.properties` files, create a ser ### RHEL/CentOS 7.x or RHEL/CentOS 8.x -If you're using RHEL/CentOS 7.x or RHEL/CentOS 8.x, copy the `edb-efm-4.2` unit file to a new file with a name that is unique for each cluster. For example, if you have two clusters named acctg and sales, the unit file names might be: +If you're using RHEL/CentOS 7.x or RHEL/CentOS 8.x, copy the `edb-efm-4.` unit file to a new file with a name that is unique for each cluster. For example, if you have two clusters named acctg and sales, the unit file names might be: ```text /usr/lib/systemd/system/efm-acctg.service @@ -297,7 +278,7 @@ Environment=CLUSTER=acctg Also update the value of the `PIDfile` parameter to specify the new cluster name. For example: ```text -PIDFile=/var/run/efm-4.2/acctg.pid +PIDFile=/var/run/efm-4.3/acctg.pid ``` After copying the service scripts, enable the services: @@ -314,5 +295,4 @@ Then, use the new service scripts to start the agents. For example, to start the # systemctl start efm-acctg` ``` -For information about customizing a unit file, see [Understanding and administering systemd -](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html). +For information about customizing a unit file, see [Understanding and administering systemd](https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html). diff --git a/product_docs/docs/efm/4/efm_user/07_using_efm_utility.mdx b/product_docs/docs/efm/4/efm_user/07_using_efm_utility.mdx index 47648d00240..dfbd9a9be51 100644 --- a/product_docs/docs/efm/4/efm_user/07_using_efm_utility.mdx +++ b/product_docs/docs/efm/4/efm_user/07_using_efm_utility.mdx @@ -8,15 +8,13 @@ legacyRedirectsGenerated: -Failover Manager provides the `efm` utility to assist with cluster management. The RPM installer adds the utility to the `/usr/edb/efm-4.2/bin` directory when you install Failover Manager. +Failover Manager provides the `efm` utility to assist with cluster management. The RPM installer adds the utility to the `/usr/edb/efm-4./bin` directory when you install Failover Manager. ## efm allow-node -```text -efm allow-node -``` +`efm allow-node ` Invoke the `efm allow-node` command to allow the specified node to join the cluster. When invoking the command, provide the name of the cluster and the IP address of the joining node. @@ -26,9 +24,7 @@ This command must be invoked by efm, a member of the efm group, or root. -```text -efm disallow-node
-``` +`efm disallow-node
` Invoke the `efm disallow-node` command to remove the specified node from the allowed hosts list and prevent the node from joining a cluster. Provide the name of the cluster and the address of the node when calling the `efm disallow-node` command. This command must be invoked by efm, a member of the efm group, or root. @@ -36,9 +32,7 @@ Invoke the `efm disallow-node` command to remove the specified node from the all -```text -efm cluster-status -``` +`efm cluster-status ` Invoke the `efm cluster-status` command to display the status of a Failover Manager cluster. For more information about the status report, see [Monitoring a Failover Manager cluster](06_monitoring_efm_cluster/#monitoring_efm_cluster). @@ -46,9 +40,7 @@ Invoke the `efm cluster-status` command to display the status of a Failover Mana -```text -efm cluster-status-json -``` +`efm cluster-status-json ` Invoke the `efm cluster-status-json` command to display the status of a Failover Manager cluster in JSON format. While the format of the displayed information is different from the display generated by the `efm cluster-status` command, the information source is the same. @@ -102,9 +94,7 @@ The following example is generated by querying the status of a healthy cluster w -```text -efm encrypt [--from-env] -``` +`efm encrypt [--from-env]` Invoke the `efm encrypt` command to encrypt the database password before including the password in the cluster properties file. Include the `--from-env` option to instruct Failover Manager to use the value specified in the `EFMPASS` environment variable and execute without user input. For more information, see [Encrypting your database password](04_configuring_efm/01_cluster_properties/01_encrypting_database_password/#encrypting_database_password). @@ -112,9 +102,7 @@ Invoke the `efm encrypt` command to encrypt the database password before includi -```text -efm promote cluster_name [-switchover [-sourcenode
][-quiet][-noscripts] -``` +`efm promote cluster_name [-switchover [-sourcenode
][-quiet][-noscripts]` The `efm promote` command instructs Failover Manager to perform a manual failover of standby to primary. @@ -131,9 +119,7 @@ This command must be invoked by efm, a member of the efm group, or root. -```text -efm resume -``` +`efm resume ` Invoke the `efm resume` command to resume monitoring a previously stopped database. This command must be invoked by efm, a member of the efm group, or root. @@ -141,9 +127,7 @@ Invoke the `efm resume` command to resume monitoring a previously stopped databa -```text -efm set-priority
-``` +`efm set-priority
` Invoke the `efm set-priority` command to assign a failover priority to a standby node. The value specifies the order in which to use the node in the event of a failover. This command must be invoked by efm, a member of the efm group, or root. @@ -153,9 +137,7 @@ Use the priority option to specify the place for the node in the priority list. -```text -efm stop-cluster -``` +`efm stop-cluster ` Invoke the `efm stop-cluster` command to stop Failover Manager on all nodes. This command instructs Failover Manager to connect to each node on the cluster and instruct the existing members to shut down. The command has no effect on running databases, but when the command completes, there's no failover protection in place. @@ -168,9 +150,7 @@ This command must be invoked by efm, a member of the efm group, or root. -```text -efm upgrade-conf [-source ] -``` +`efm upgrade-conf [-source ]` Invoke the `efm upgrade-conf` command to copy the configuration files from an existing Failover Manager installation and add parameters required by a Failover Manager installation. Provide the name of the previous cluster when invoking the utility. This command must be invoked with root privileges. @@ -180,9 +160,7 @@ If you're upgrading from a Failover Manager configuration that doesn't use sudo, -```text -efm node-status-json -``` +`efm node-status-json ` Invoke the `efm node-status-json` command to display the status of a local node in JSON format. A successful execution of this command returns `0` as its exit code. In case of a database failure or an agent status becoming IDLE, the command returns `1` as exit code. @@ -202,8 +180,6 @@ The following is an example output of the `efm node-status-json` command: -```text -efm --help -``` +`efm --help` Invoke the `efm --help` command to display online help for the Failover Manager utility commands. diff --git a/product_docs/docs/efm/4/efm_user/08_controlling_efm_service.mdx b/product_docs/docs/efm/4/efm_user/08_controlling_efm_service.mdx index 3826cb5ea21..6433e254bb9 100644 --- a/product_docs/docs/efm/4/efm_user/08_controlling_efm_service.mdx +++ b/product_docs/docs/efm/4/efm_user/08_controlling_efm_service.mdx @@ -21,35 +21,29 @@ The commands that control the Failover Manager service are platform specific. ## Using the systemctl utility on RHEL/CentOS 7.x and RHEL/CentOS 8.x -On RHEL/CentOS 7.x and RHEL/CentOS 8.x, Failover Manager runs as a Linux service named (by default) `edb-efm-4.2.service` that is located in `/usr/lib/systemd/system`. Each database cluster monitored by Failover Manager runs a copy of the service on each node of the replication cluster. +On RHEL/CentOS 7.x and RHEL/CentOS 8.x, Failover Manager runs as a Linux service named (by default) `edb-efm-4..service` that is located in `/usr/lib/systemd/system`. Each database cluster monitored by Failover Manager runs a copy of the service on each node of the replication cluster. Use the following systemctl commands to control a Failover Manager agent that resides on a RHEL/CentOS 7.x and RHEL/CentOS 8.x host: -```text -systemctl start edb-efm-4.2 -``` +`systemctl start edb-efm-4.` The `start` command starts the Failover Manager agent on the current node. The local Failover Manager agent monitors the local database and communicates with Failover Manager on the other nodes. You can start the nodes in a Failover Manager cluster in any order. This command must be invoked by root. -```text -systemctl stop edb-efm-4.2 -``` +`systemctl stop edb-efm-4.` Stop the Failover Manager on the current node. 
This command must be invoked by root. -```text -systemctl status edb-efm-4.2 -``` +`systemctl status edb-efm-4.` The `status` command returns the status of the Failover Manager agent on which it is invoked. You can invoke the status command on any node to instruct Failover Manager to return status and server startup information. ```text -[root@ONE ~]}> systemctl status edb-efm-4.2 - edb-efm-4.2.service - EnterpriseDB Failover Manager 4.2 - Loaded: loaded (/usr/lib/systemd/system/edb-efm-4.2.service; disabled; vendor preset: disabled) +[root@ONE ~]}> systemctl status edb-efm-4.3 + edb-efm-4.3.service - EnterpriseDB Failover Manager 4.3 + Loaded: loaded (/usr/lib/systemd/system/edb-efm-4.3.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2013-02-14 14:02:16 EST; 4s ago - Process: 58125 ExecStart=/bin/bash -c /usr/edb/edb-efm-4.2/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS) + Process: 58125 ExecStart=/bin/bash -c /usr/edb/edb-efm-4.3/bin/runefm.sh start ${CLUSTER} (code=exited, status=0/SUCCESS) Main PID: 58180 (java) - CGroup: /system.slice/edb-efm-4.2.service - └─58180 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java -cp /usr/edb/edb-efm-4.2/lib/EFM-4.2.0.jar -Xmx128m... + CGroup: /system.slice/edb-efm-4.3.service + └─58180 /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/jre/bin/java -cp /usr/edb/edb-efm-4.3/lib/EFM-4.3.0.jar -Xmx128m... ``` diff --git a/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx b/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx index 854183ab1ba..d1d3a1c9e7d 100644 --- a/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx +++ b/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx @@ -8,7 +8,7 @@ legacyRedirectsGenerated: -Failover Manager writes and stores one log file per agent and one startup log per agent in `/var/log/-4.2` (where `` specifies the name of the cluster). +Failover Manager writes and stores one log file per agent and one startup log per agent in `/var/log/-4.`. You can control the level of detail written to the agent log by modifying the `jgroups.loglevel` and `efm.loglevel` parameters in the [cluster properties file](04_configuring_efm/01_cluster_properties/#loglevel): @@ -35,9 +35,11 @@ The logging facilities use the Java logging library and logging levels. The log For example, if you set the `efm.loglevel` parameter to `WARN`, Failover Manager only logs messages at the `WARN` level and above, that is, `WARN` and `ERROR`. -By default, Failover Manager log files are rotated daily, compressed, and stored for a week. You can modify the file rotation schedule by changing settings in the log rotation file (`/etc/logrotate.d/efm-4.2`). For more information about modifying the log rotation schedule, consult the logrotate man page: +By default, Failover Manager log files are rotated daily, compressed, and stored for a week. You can modify the file rotation schedule by changing settings in the log rotation file (`/etc/logrotate.d/efm-4.`). For more information about modifying the log rotation schedule, consult the logrotate man page: -> `$ man logrotate` +``` +$ man logrotate +``` @@ -65,7 +67,7 @@ After modifying the syslog configuration file, restart the `rsyslog` service to > `systemctl restart rsyslog.service` -After modifying the `rsyslog.conf` file on the Failover Manager host, modify the Failover Manager properties to enable logging. 
Use your choice of editor to [modify the properties file](04_configuring_efm/01_cluster_properties/#logtype_enabled) `/etc/edb/efm-4.2/efm.properties.in`, specifying the type of logging that you want to implement: +After modifying the `rsyslog.conf` file on the Failover Manager host, modify the Failover Manager properties to enable logging. Use your choice of editor to [modify the properties file](04_configuring_efm/01_cluster_properties/#logtype_enabled) `/etc/edb/efm-4./efm.properties.in`, specifying the type of logging that you want to implement: ```text # Which logging is enabled. diff --git a/product_docs/docs/efm/4/efm_user/10_notifications.mdx b/product_docs/docs/efm/4/efm_user/10_notifications.mdx index 56e2db2a2a5..dba62f307db 100644 --- a/product_docs/docs/efm/4/efm_user/10_notifications.mdx +++ b/product_docs/docs/efm/4/efm_user/10_notifications.mdx @@ -41,7 +41,7 @@ The severity level designates the urgency of the notification. A notification wi You can use the [`notification.level`](04_configuring_efm/01_cluster_properties/#notification_level) property to specify the minimum severity level to trigger a notification. !!! Note - In addition to sending notices to the administrative email address, all notifications are recorded in the agent log file (`/var/log/efm-4.2/<*cluster name*>.log`). + In addition to sending notices to the administrative email address, all notifications are recorded in the agent log file (`/var/log/efm-4./<*cluster name*>.log`). The conditions listed in this table trigger an INFO level notification: diff --git a/product_docs/docs/efm/4/efm_user/12_upgrading_existing_cluster.mdx b/product_docs/docs/efm/4/efm_user/12_upgrading_existing_cluster.mdx index f5cfc9bcefa..c4d5a9476d0 100644 --- a/product_docs/docs/efm/4/efm_user/12_upgrading_existing_cluster.mdx +++ b/product_docs/docs/efm/4/efm_user/12_upgrading_existing_cluster.mdx @@ -10,17 +10,17 @@ legacyRedirectsGenerated: Failover Manager provides a utility to assist you when upgrading a Failover Manager cluster. To upgrade an existing cluster, you must: -1. Install Failover Manager 4.2 on each node of the cluster. For detailed information about installing Failover Manager, see [Installing Failover Manager](03_installing_efm/#installing_efm). +1. Install Failover Manager 4.3 on each node of the cluster. For detailed information about installing Failover Manager, see [Installing Failover Manager](03_installing_efm/#installing_efm). -2. After installing Failover Manager, invoke the `efm upgrade-conf` utility to create the `.properties` and `.nodes` files for Failover Manager 4.2. The Failover Manager installer installs the upgrade utility ([efm upgrade-conf](07_using_efm_utility/#efm_upgrade_conf)) to the `/usr/edb/efm-4.2/bin directory`. To invoke the utility, assume root privileges, and invoke the command: +2. After installing Failover Manager, invoke the `efm upgrade-conf` utility to create the `.properties` and `.nodes` files for Failover Manager 4.3. The Failover Manager installer installs the upgrade utility ([efm upgrade-conf](07_using_efm_utility/#efm_upgrade_conf)) to the `/usr/edb/efm-4.3/bin directory`. To invoke the utility, assume root privileges, and invoke the command: ```text efm upgrade-conf ``` -The efm `upgrade-conf` utility locates the `.properties` and `.nodes` files of preexisting clusters and copies the parameter values to a new configuration file for use by Failover Manager. The utility saves the updated copy of the configuration files in the `/etc/edb/efm-4.2` directory. 
+The efm `upgrade-conf` utility locates the `.properties` and `.nodes` files of preexisting clusters and copies the parameter values to a new configuration file for use by Failover Manager. The utility saves the updated copy of the configuration files in the `/etc/edb/efm-4.3` directory. -3. Modify the `.properties` and `.nodes` files for Failover Manager 4.2, specifying any new preferences. Use your choice of editor to modify any additional properties in the properties file (located in the `/etc/edb/efm-4.2` directory) before starting the service for that node. For detailed information about property settings, see [The cluster properties file](04_configuring_efm/01_cluster_properties/#cluster_properties). +3. Modify the `.properties` and `.nodes` files for Failover Manager 4.3, specifying any new preferences. Use your choice of editor to modify any additional properties in the properties file (located in the `/etc/edb/efm-4.3` directory) before starting the service for that node. For detailed information about property settings, see [The cluster properties file](04_configuring_efm/01_cluster_properties/#cluster_properties). !!! Note `db.bin` is a required property. When modifying the properties file, ensure that the `db.bin` property specifies the location of the Postgres `bin` directory. @@ -33,12 +33,12 @@ The efm `upgrade-conf` utility locates the `.properties` and `.nodes` files of p /usr/efm-4.1/bin/efm stop-cluster efm ``` -1. Start the new [Failover Manager service](08_controlling_efm_service/#controlling_efm_service) (`edb-efm-4.2`) on each node of the cluster. +1. Start the new [Failover Manager service](08_controlling_efm_service/#controlling_efm_service) (`edb-efm-4.3`) on each node of the cluster. The following example shows invoking the upgrade utility to create the `.properties` and `.nodes` files for a Failover Manager installation: ```text -[root@k8s-worker ~]# /usr/edb/efm-4.2/bin/efm upgrade-conf efm +[root@k8s-worker ~]# /usr/edb/efm-4.3/bin/efm upgrade-conf efm Checking directory /etc/edb/efm-4.1 Processing efm.properties file @@ -60,7 +60,7 @@ If you're [using a Failover Manager configuration without sudo](04_configuring_e ## Uninstalling Failover Manager -After upgrading to Failover Manager 4.2, you can use your native package manager to remove previous installations of Failover Manager. For example, use the following command to remove Failover Manager 4.1 and any unneeded dependencies: +After upgrading to Failover Manager 4.3, you can use your native package manager to remove previous installations of Failover Manager. For example, use the following command to remove Failover Manager 4.1 and any unneeded dependencies: - On RHEL or CentOS 7.x: From 5baa62b23b81866c27569109a190dde81453519e Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Wed, 22 Dec 2021 15:26:58 -0500 Subject: [PATCH 14/16] Incorporated Bobby's correction --- product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx b/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx index d1d3a1c9e7d..6f34c39902f 100644 --- a/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx +++ b/product_docs/docs/efm/4/efm_user/09_controlling_logging.mdx @@ -8,7 +8,7 @@ legacyRedirectsGenerated: -Failover Manager writes and stores one log file per agent and one startup log per agent in `/var/log/-4.`. 
+Failover Manager writes and stores one log file per agent and one startup log per agent in `/var/log/-4`. You can control the level of detail written to the agent log by modifying the `jgroups.loglevel` and `efm.loglevel` parameters in the [cluster properties file](04_configuring_efm/01_cluster_properties/#loglevel): From 56dd66c71b4c935d48dbdd70387b4b26018eaef0 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Wed, 22 Dec 2021 21:16:56 +0000 Subject: [PATCH 15/16] Normalize spacing to placate Pandoc (PDF generation) --- .../01_portal_access.mdx | 91 ++--- .../03_account_activity.mdx | 36 +- .../01_understanding_qotas_in_azure.mdx | 44 +-- .../01_preparing_azure/index.mdx | 211 +++++------ .../02_connect_cloud_account.mdx | 89 +++-- .../01_cluster_networking.mdx | 9 +- .../creating_a_cluster/index.mdx | 74 ++-- .../release/getting_started/index.mdx | 2 - .../release/migration/cold_migration.mdx | 153 ++++---- .../biganimal/release/migration/index.mdx | 16 +- .../release/overview/02_high_availability.mdx | 9 +- .../release/overview/03_security.mdx | 22 +- .../overview/04_responsibility_model.mdx | 34 +- .../overview/05_database_version_policy.mdx | 13 +- .../biganimal/release/overview/06_support.mdx | 56 +-- .../docs/biganimal/release/overview/index.mdx | 7 +- .../release/pricing_and_billing/index.mdx | 26 +- .../docs/biganimal/release/reference/api.mdx | 89 +++-- .../docs/biganimal/release/reference/cli.mdx | 116 +++--- .../using_cluster/01_postgres_access.mdx | 28 +- .../01_private_endpoint.mdx | 264 +++++++------- .../02_virtual_network_peering.mdx | 80 ++--- .../01_connecting_from_azure/03_vnet_vnet.mdx | 122 +++---- .../01_connecting_from_azure/index.mdx | 34 +- .../02_connecting_your_cluster/index.mdx | 26 +- .../05_db_configuration_parameters.mdx | 23 +- .../03_modifying_your_cluster/index.mdx | 41 +-- .../using_cluster/04_backup_and_restore.mdx | 16 +- .../05_monitoring_and_logging/06_metrics.mdx | 337 +++++++++--------- .../05_monitoring_and_logging/index.mdx | 40 +-- .../05a_deleting_your_cluster.mdx | 27 +- .../06_demonstration_oracle_compatibility.mdx | 28 +- .../biganimal/release/using_cluster/index.mdx | 2 - 33 files changed, 1113 insertions(+), 1052 deletions(-) diff --git a/product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx b/product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx index 92ab335199c..1faa84a3c57 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx @@ -15,12 +15,14 @@ Each BigAnimal organization is associated with an Azure AD tenant. Azure AD esta BigAnimal supports role-based access control policies. A user with the owner role can assign roles to other users in the same organization. ## Roles + Access to BigAnimal is controlled by roles. Roles are sets of permissions. You use roles to manage permissions assigned to users. Each organization has three default roles available: - * owner - * reader - * contributor + +- owner +- reader +- contributor You can edit these roles by changing their name or description. @@ -28,9 +30,9 @@ You can edit these roles by changing their name or description. Permissions are generally represented in the format *action*:*object* where *action* represents an operation to perform and *object* represents a category of portal functionality. -* The available actions are: create, read, update, and delete. 
+- The available actions are: create, read, update, and delete. -* The available objects are: backups, billing, clusters, events, permissions, roles, users, and versions. +- The available objects are: backups, billing, clusters, events, permissions, roles, users, and versions. !!! Note Not every object supports all the actions. For example, versions objects are always read-only. @@ -39,44 +41,46 @@ Permissions are generally represented in the format *action*:*object* where *act The following are the default permission by role: -| Role | Action |backups | billing | clusters | events | roles | permissions | users | versions | -|-------------|--------|--------|---------|-----------|--------|-------|-------------|--------|----------| -| owner | create | x | | x | | | | | | -| | read | x | x | x | x | x | x | x | x | -| | update | x | | x | | | | x | | -| | delete | x | | x | | | | | | -| contributor | create | x | | x | | | | | | -| | read | x | x | x | x | x | x | x | x | -| | update | x | | x | | | | | | -| | delete | x | | x | | | | | | -| reader | create | | | | | | | | | -| | read | x | x | x | x | x | x | x | x | -| | update | | | | | | | | | -| | delete | | | | | | | | | - +| Role | Action | backups | billing | clusters | events | roles | permissions | users | versions | +| ----------- | ------ | ------- | ------- | -------- | ------ | ----- | ----------- | ----- | -------- | +| owner | create | x | | x | | | | | | +| | read | x | x | x | x | x | x | x | x | +| | update | x | | x | | | | x | | +| | delete | x | | x | | | | | | +| contributor | create | x | | x | | | | | | +| | read | x | x | x | x | x | x | x | x | +| | update | x | | x | | | | | | +| | delete | x | | x | | | | | | +| reader | create | | | | | | | | | +| | read | x | x | x | x | x | x | x | x | +| | update | | | | | | | | | +| | delete | | | | | | | | | ### Edit roles -1. Navigate to **Admin > Roles**. +1. Navigate to **Admin > Roles**. -3. Select the edit icon for the role in the list. +2. Select the edit icon for the role in the list. #### Change role name -1. Select the **Settings** tab. +1. Select the **Settings** tab. -2. Edit **Name** or **Description**. -3. Select **Save**. +2. Edit **Name** or **Description**. + +3. Select **Save**. #### Change role permissions You can change permissions associated with the role. -1. Select the **Permissions** tab. +1. Select the **Permissions** tab. + +2. Select **Change Permissions** in the top right. + +3. Select the list of permissions you want to associate with the role. -2. Select **Change Permissions** in the top right. -3. Select the list of permissions you want to associate with the role. -4. Select **Submit**. +4. Select **Submit**. !!! Note Changing role permissions affects every user who is assigned that role. @@ -87,12 +91,15 @@ When you configured your Azure subscription, you also enabled BigAnimal to authe ### Assign roles to users -1. Navigate to **Admin > Users**. +1. Navigate to **Admin > Users**. -2. Select the edit icon for the user. -3. Select **Assign Roles**. -4. Select or clear roles for the user. -5. Select **Submit**. +2. Select the edit icon for the user. + +3. Select **Assign Roles**. + +4. Select or clear roles for the user. + +5. Select **Submit**. !!! Note For a user's role assignment to take effect, the user must log out from BigAnimal and log in again. @@ -101,16 +108,18 @@ When you configured your Azure subscription, you also enabled BigAnimal to authe You can view all users from your organization who have logged in at least once. -1. 
Navigate to **Admin > Users**. +1. Navigate to **Admin > Users**. -2. View the list of users sorted by most recent login. +2. View the list of users sorted by most recent login. ## Example scenario -1. The BigAnimal organization is created, and Tom logs in and is granted the owner role. +1. The BigAnimal organization is created, and Tom logs in and is granted the owner role. + +2. Tom asks Jerry to log in, using his Azure AD account. Jerry's account in BigAnimal is created. + +3. Tom grants Sally the contributor role. Sally logs out and back in. She can now create BigAnimal clusters. -2. Tom asks Jerry to log in, using his Azure AD account. Jerry's account in BigAnimal is created. -3. Tom grants Sally the contributor role. Sally logs out and back in. She can now create BigAnimal clusters. -4. Sally asks Jerry to log in and grants him the reader role. -5. Jerry logs out and back in. He can now see the clusters that Sally created. +4. Sally asks Jerry to log in and grants him the reader role. +5. Jerry logs out and back in. He can now see the clusters that Sally created. diff --git a/product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx b/product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx index f7dd4ac67d7..605f7b2ac82 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx @@ -1,23 +1,26 @@ --- title: "Reviewing account activity" --- + The activity log collects BigAnimal events based on user activity in the portal. You can use the log to audit activities performed by users from your organizations or research activities that might have affected your account. ## Events Events describe actions performed by users. The available actions are: -* create -* read -* update -* delete + +- create +- read +- update +- delete Events are related to the following resource types: -* cluster -* data plane -* user -* user roles -* role permissions -* organization + +- cluster +- data plane +- user +- user roles +- role permissions +- organization !!! Note Database events are not logging activity on the Postgres server. They are logging the use of the portal to create or modify database clusters. 
@@ -28,10 +31,9 @@ To view events, navigate to the [Activity Log](https://portal.biganimal.com/acti The following fields are in the activity log: -| Field | Description | -| ---------------------| ---------------------------------------------------------------------------- | -| **Activity Name** | Name of an event in the format _Action Resource-Type, Resource-name_ | -| **User** | User responsible for the event | -| **Date** | Date when the action was performed | -| **Resource** | Resource type of the resource | - +| Field | Description | +| ----------------- | -------------------------------------------------------------------- | +| **Activity Name** | Name of an event in the format *Action Resource-Type, Resource-name* | +| **User** | User responsible for the event | +| **Date** | Date when the action was performed | +| **Resource** | Resource type of the resource | diff --git a/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/01_understanding_qotas_in_azure.mdx b/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/01_understanding_qotas_in_azure.mdx index dd892f97a24..58b596635a1 100644 --- a/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/01_understanding_qotas_in_azure.mdx +++ b/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/01_understanding_qotas_in_azure.mdx @@ -11,21 +11,20 @@ BigAnimal creates and manages some of the resources using resource providers. Fo To prevent failures while creating your clusters, ensure that each of the following Azure resource providers are registered in your Azure subscription. - -| Provider Namespace | Description | -| ------------------------------ | --------------------------------------------------------------------------------------------------------------- | -| Microsoft.Compute | Runs cluster workloads on a virtual machine managed by the Azure Kubernetes Service. | -| Microsoft.ContainerInstance | Manages the Azure resource and regular maintenance job. | -| Microsoft.Capacity | Checks the Azure resource quota. | -| Microsoft.AlertsManagement | Monitors failure anomalies. | -| Microsoft.ContainerService | Manages cluster workloads run on the Azure Kubernetes Service. | -| Microsoft.KeyVault | Encrypts and stores keys of the clusters' data volume and Azure's credential information. | -| Microsoft.Storage | Backs up data to the Azure Service Account. | -| Microsoft.ManagedIdentity | Manages software access to the local Azure services using Azure Managed-Identity. | -| Microsoft.Network | Manages cluster workloads run in the Azure Kubernetes Service in the dedicated VNet. | -| Microsoft.OperationalInsights | Manages clusters and performs workload logging (log workspace).. | -| Microsoft.OperationsManagement | Monitors workloads and provides container insight. | -| Microsoft.Portal | Provides a dashboard to monitor the running status of the clusters (using aggregated logs and metrics). | +| Provider Namespace | Description | +| ------------------------------ | ------------------------------------------------------------------------------------------------------- | +| Microsoft.Compute | Runs cluster workloads on a virtual machine managed by the Azure Kubernetes Service. | +| Microsoft.ContainerInstance | Manages the Azure resource and regular maintenance job. | +| Microsoft.Capacity | Checks the Azure resource quota. | +| Microsoft.AlertsManagement | Monitors failure anomalies. 
| +| Microsoft.ContainerService | Manages cluster workloads run on the Azure Kubernetes Service. | +| Microsoft.KeyVault | Encrypts and stores keys of the clusters' data volume and Azure's credential information. | +| Microsoft.Storage | Backs up data to the Azure Service Account. | +| Microsoft.ManagedIdentity | Manages software access to the local Azure services using Azure Managed-Identity. | +| Microsoft.Network | Manages cluster workloads run in the Azure Kubernetes Service in the dedicated VNet. | +| Microsoft.OperationalInsights | Manages clusters and performs workload logging (log workspace).. | +| Microsoft.OperationsManagement | Monitors workloads and provides container insight. | +| Microsoft.Portal | Provides a dashboard to monitor the running status of the clusters (using aggregated logs and metrics). | ## Virtual machine SKU restrictions @@ -49,11 +48,11 @@ Any time a new VM is deployed in Azure, the vCPUs for the VMs must not exceed th Clusters deployed in the region use Esv3 virtual machine cores. The number of cores depends on the *Instance Type* and *High Availability (HA)* options of the clusters. You can calculate the number of Esv3 cores required for your cluster based on the following: -* A virtual machine instance of type E{N}sv3 uses {N} cores. For example, an instance of type E64sv3 uses 64 Esv3 cores. -* A cluster running on an E{N}sv3 instance with HA not enabled uses exactly {N} Esv3 cores. -* A cluster running on an E{N}sv3 instance with HA enabled uses 3 * {N} Esv3 cores. +- A virtual machine instance of type E{N}sv3 uses {N} cores. For example, an instance of type E64sv3 uses 64 Esv3 cores. +- A cluster running on an E{N}sv3 instance with HA not enabled uses exactly {N} Esv3 cores. +- A cluster running on an E{N}sv3 instance with HA enabled uses 3 \* {N} Esv3 cores. -For example, if you provision the largest virtual machine E64sv3 with high availability enabled, it requires (3 * 64) = 192 Esv3 cores per region. +For example, if you provision the largest virtual machine E64sv3 with high availability enabled, it requires (3 \* 64) = 192 Esv3 cores per region. BigAnimal requires an additional eight Dv4 virtual machine cores per region. @@ -64,6 +63,7 @@ The default number of total vCPU (cores) per subscription per region is 20. For ##### Recommended limits BigAnimal recommends the following per region when requesting virtual machine resource limit increases: -* Total Regional vCPUs: minimum of 60 per designated region -* Standard Esv3 Family vCPUs: minimum of 50 per designated region -* Standard Dv4 Family vCPUs: minimum of 10 per designated region \ No newline at end of file + +- Total Regional vCPUs: minimum of 60 per designated region +- Standard Esv3 Family vCPUs: minimum of 50 per designated region +- Standard Dv4 Family vCPUs: minimum of 10 per designated region diff --git a/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/index.mdx b/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/index.mdx index eec1777b0a0..7c47e8e3c95 100644 --- a/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/01_preparing_azure/index.mdx @@ -6,10 +6,10 @@ redirects: BigAnimal requires you to configure your Azure subscription before you deploy your clusters. 
The configurations that you perform ensure that your Azure subscription is prepared to meet your clusters' requirements and resource limits, such as: - * Are the necessary Azure resource providers registered for your subscription? - * Is there a restriction on SKUs for the Standard Esv3 and Standard Dv4 VM size families? - * Is there a sufficient limit on the number of vCPU or Public IP addresses in your region? - +- Are the necessary Azure resource providers registered for your subscription? +- Is there a restriction on SKUs for the Standard Esv3 and Standard Dv4 VM size families? +- Is there a sufficient limit on the number of vCPU or Public IP addresses in your region? + !!! Note Before proceeding, see [Understanding requirements in Azure](../01_preparing_azure/01_understanding_qotas_in_azure) for details on planning for your clusters' requirements and resource limits in Azure. @@ -17,102 +17,103 @@ BigAnimal requires you to configure your Azure subscription before you deploy yo EDB recommends using the `biganimal-preflight-azure` script to check whether all requirements and resource limits are met in your subscription. However, you can also manually check the requirements using the Azure CLI or the Azure Portal. - * [Method 1: Use EDB's shell script](#method-1-use-edbs-shell-script) (recommended) - * [Method 2: Manually check requirements](#method-2-manually-check-requirements) +- [Method 1: Use EDB's shell script](#method-1-use-edbs-shell-script) (recommended) +- [Method 2: Manually check requirements](#method-2-manually-check-requirements) ### Method 1: Use EDB's shell script EDB provides a shell script called [`biganimal-preflight-azure`](https://github.com/EnterpriseDB/cloud-utilities/blob/main/azure/biganimal-preflight-azure), which automatically checks whether requirements and resource limits are met in your Azure subscription based on the clusters you plan to deploy. -1. Open the [Azure Cloud Shell](https://shell.azure.com/) in your browser. - -1. From the Azure Cloud Shell, use the following command to run the `biganimal-preflight-azure` script. - - ```sh - curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s \ - -- \ - --instance-type \ +1. Open the [Azure Cloud Shell](https://shell.azure.com/) in your browser. +2. From the Azure Cloud Shell, use the following command to run the `biganimal-preflight-azure` script. + + ```sh + curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s \ + -- \ + --instance-type \ + --high-availability \ + --endpoint \ + --activate-region \ + --onboard + ``` + + Replace the variables in the arguments and options as follows: + + - ``: Enter the Azure subscription ID of your BigAnimal deployment. + - ``: Enter the Azure region where your clusters are being deployed. + - ``: Enter **e2s_v3**, which indicates the type of virtual machine instance (Esv3 with 2 cores) used in BigAnimal. + - `--high-availability` or `-a`: Indicates that high availability is enabled in your clusters. + - ``: Indicates either **public** or **private** endpoints. + - `--activate-region` or `-r`: Indicates that no clusters are currently deployed in the region. + - `--onboard`: Checks if the user and subscription are correctly configured. 
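If you don't already have the subscription ID and region name to substitute into the preflight command, you can look both up from the same Azure Cloud Shell session first. This is a minimal sketch using standard Azure CLI commands; the `--query` filters are only one way of formatting the output:

```sh
# Print the ID of the subscription the current Cloud Shell session is using
az account show --query id --output tsv

# List the region names that can be passed to the preflight script
az account list-locations --query "[].name" --output tsv
```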
+ + For example, if you want to deploy a cluster in an Azure subscription having an ID of `12412ab3d-1515-2217-96f5-0338184fcc04`, in the `eastus2` region, in a `public` network, and with no existing cluster deployed, run the following command: + + ```sh + curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s \ + -- 12412ab3d-1515-2217-96f5-0338184fcc04 eastus2 \ + --instance-type e2s_v3 \ --high-availability \ - --endpoint \ - --activate-region \ - --onboard - ``` - - Replace the variables in the arguments and options as follows: - - ``: Enter the Azure subscription ID of your BigAnimal deployment. - - ``: Enter the Azure region where your clusters are being deployed. - - ``: Enter **e2s_v3**, which indicates the type of virtual machine instance (Esv3 with 2 cores) used in BigAnimal. - - `--high-availability` or `-a`: Indicates that high availability is enabled in your clusters. - - ``: Indicates either **public** or **private** endpoints. - - `--activate-region` or `-r`: Indicates that no clusters are currently deployed in the region. - - `--onboard`: Checks if the user and subscription are correctly configured. - - For example, if you want to deploy a cluster in an Azure subscription having an ID of `12412ab3d-1515-2217-96f5-0338184fcc04`, in the `eastus2` region, in a `public` network, and with no existing cluster deployed, run the following command: - - ```sh - curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/biganimal-preflight-azure | bash -s \ - -- 12412ab3d-1515-2217-96f5-0338184fcc04 eastus2 \ - --instance-type e2s_v3 \ - --high-availability \ - --endpoint public \ - --activate-region - ``` + --endpoint public \ + --activate-region + ``` The script displays the following output: - * A list of required Azure resource providers and their registration status. Ensure that you register the resource providers that are displayed as `NotRegistered` in the `RegistrationState` column. See [Register Azure resource providers](#register-azure-service-providers). - ``` - ####################### - # Provider # - ####################### - - Namespace RegistrationPolicy RegistrationState ProviderAuthorizationConsentState - --------------------------------------- -------------------- ----------------- - Microsoft.Capacity RegistrationRequired NotRegistered - Microsoft.ContainerInstance RegistrationRequired NotRegistered - Microsoft.Compute RegistrationRequired NotRegistered - Microsoft.ContainerService RegistrationRequired NotRegistered - Microsoft.KeyVault RegistrationRequired NotRegistered - Microsoft.ManagedIdentity RegistrationRequired NotRegistered - Microsoft.Network RegistrationRequired NotRegistered - Microsoft.OperationalInsights RegistrationRequired NotRegistered - Microsoft.OperationsManagement RegistrationRequired NotRegistered - Microsoft.Portal RegistrationFree Registered - Microsoft.Storage RegistrationRequired Registered - Microsoft.AlertsManagement RegistrationRequired NotRegistered - ``` - - - * Whether your Azure subscription restricts vCPUs for the `Standard_D2_v4` and `Standard_E2s_v3` VM size families in your region (and availability zone, if HA is enabled). Open a support request to remove SKU restrictions for the VM families with `NotAvailableForSubscription` displayed in the `Restrictions` column. See [Fix issues with SKU restrictions](#fix-issues-with-sku-restrictions). 
- ``` - ####################### - # Virtual-Machine SKU # - ####################### - - ResourceType Locations Name Zones Restrictions - ------------ --------- ---- ----- ------------ - virtualMachines eastus2 Standard_D2_v4 1,2,3 None - virtualMachines eastus2 Standard_E2s_v3 1,2,3 NotAvailableForSubscription, type: Zone, locations: eastus2, zones: 1,3 - - ``` - - * Whether your Azure subscription has sufficient limits on vCPUs and IP addresses for your region. Open a support request to raise limits for the vCPUs and IP addresses if they exceed the available VM families with `NotAvailableForSubscription` displayed in the `Restrictions` column. See [Increase Public IP addresses](#increase-public-ip-addresses-limits) and [Increase vCPU limits](#increase-vcpu-limits). - - ``` - ####################### - # Quota Limitation # - ####################### - - Resource Limit Used Available Gap Suggestion - Total Regional vCPUs 130 27 103 89 OK - Standard D2v4 Family vCPUs 20 14 6 0 Need Increase - Standard E2sv3 Family vCPUs 20 4 16 8 OK - Public IP Addresses — Basic 20 11 9 8 OK - Public IP Addresses — Standard 20 3 17 16 OK - ``` - +- A list of required Azure resource providers and their registration status. Ensure that you register the resource providers that are displayed as `NotRegistered` in the `RegistrationState` column. See [Register Azure resource providers](#register-azure-service-providers). + + ``` + ####################### + # Provider # + ####################### + + Namespace RegistrationPolicy RegistrationState ProviderAuthorizationConsentState + --------------------------------------- -------------------- ----------------- + Microsoft.Capacity RegistrationRequired NotRegistered + Microsoft.ContainerInstance RegistrationRequired NotRegistered + Microsoft.Compute RegistrationRequired NotRegistered + Microsoft.ContainerService RegistrationRequired NotRegistered + Microsoft.KeyVault RegistrationRequired NotRegistered + Microsoft.ManagedIdentity RegistrationRequired NotRegistered + Microsoft.Network RegistrationRequired NotRegistered + Microsoft.OperationalInsights RegistrationRequired NotRegistered + Microsoft.OperationsManagement RegistrationRequired NotRegistered + Microsoft.Portal RegistrationFree Registered + Microsoft.Storage RegistrationRequired Registered + Microsoft.AlertsManagement RegistrationRequired NotRegistered + ``` + +- Whether your Azure subscription restricts vCPUs for the `Standard_D2_v4` and `Standard_E2s_v3` VM size families in your region (and availability zone, if HA is enabled). Open a support request to remove SKU restrictions for the VM families with `NotAvailableForSubscription` displayed in the `Restrictions` column. See [Fix issues with SKU restrictions](#fix-issues-with-sku-restrictions). + + ``` + ####################### + # Virtual-Machine SKU # + ####################### + + ResourceType Locations Name Zones Restrictions + ------------ --------- ---- ----- ------------ + virtualMachines eastus2 Standard_D2_v4 1,2,3 None + virtualMachines eastus2 Standard_E2s_v3 1,2,3 NotAvailableForSubscription, type: Zone, locations: eastus2, zones: 1,3 + + ``` + +- Whether your Azure subscription has sufficient limits on vCPUs and IP addresses for your region. Open a support request to raise limits for the vCPUs and IP addresses if they exceed the available VM families with `NotAvailableForSubscription` displayed in the `Restrictions` column. See [Increase Public IP addresses](#increase-public-ip-addresses-limits) and [Increase vCPU limits](#increase-vcpu-limits). 
+ + ``` + ####################### + # Quota Limitation # + ####################### + + Resource Limit Used Available Gap Suggestion + Total Regional vCPUs 130 27 103 89 OK + Standard D2v4 Family vCPUs 20 14 6 0 Need Increase + Standard E2sv3 Family vCPUs 20 4 16 8 OK + Public IP Addresses — Basic 20 11 9 8 OK + Public IP Addresses — Standard 20 3 17 16 OK + ``` ### Method 2: Manually check requirements + If you are manually checking the requirements instead of using the `biganimal-preflight-azure` script, perform the following procedures: #### Check Azure resource provider registrations using Azure Cloud Shell @@ -146,12 +147,12 @@ Alternatively, to check for SKU restrictions using the Azure Portal, see [Soluti To check if you have adequate Azure resources to provision new clusters: -1. In the [Azure Portal](https://portal.azure.com/), select **Subscription**. -2. Select your specific subscription. -3. Select **Usage + quotas** in the **Settings** section. -4. Search for *Total Regional vCPUs* and select the **Location** to check the regional vCPUs limits. -5. Search for *Dv4* and *Esv3* to view virtual machine limits. -6. Search for Public IP addresses to view network limits. +1. In the [Azure Portal](https://portal.azure.com/), select **Subscription**. +2. Select your specific subscription. +3. Select **Usage + quotas** in the **Settings** section. +4. Search for *Total Regional vCPUs* and select the **Location** to check the regional vCPUs limits. +5. Search for *Dv4* and *Esv3* to view virtual machine limits. +6. Search for Public IP addresses to view network limits. ## Configure your Azure subscription @@ -160,13 +161,14 @@ After checking whether the requirements and resource limits are met, configure y !!! Note Before proceeding, see [Understanding requirements in Azure](../01_preparing_azure/01_understanding_qotas_in_azure) for details on planning for your clusters' requirements and resource limits in Azure. -### Register Azure resource providers +### Register Azure resource providers + To register resource providers using the Azure Portal: -1. In the [Azure Portal](https://portal.azure.com/), select **Subscription**. -2. Select your specific subscription. -3. In the navigation panel **Settings** group, select **Resource providers**. -4. Review the status of the required providers. To register a provider, select the provider, and, on the top menu, select **Register**. +1. In the [Azure Portal](https://portal.azure.com/), select **Subscription**. +2. Select your specific subscription. +3. In the navigation panel **Settings** group, select **Resource providers**. +4. Review the status of the required providers. To register a provider, select the provider, and, on the top menu, select **Register**. To register resource providers using the Azure CLI, use the [register command](https://docs.microsoft.com/en-us/cli/azure/provider?view=azure-cli-latest#az_provider_register). For example: @@ -186,15 +188,14 @@ Increase the limit of `Public IP Addresses - Basic` and `Public IP Addresses - S You can increase the number of public IP addresses for your account either by using the Azure portal or by submitting a support request. 
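Before filing the request, it can help to confirm how much of the current public IP quota is already in use. The following is a sketch using the standard Azure CLI usage listing; `eastus2` is only an example region, and the rows of interest are the `Public IP Addresses` entries in the output:

```sh
# Show current network resource usage against subscription limits in a region
az network list-usages --location eastus2 --output table
```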
See: -* [Request networking quota increase at subscription level using Usages + quotas](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/networking-quota-requests#request-networking-quota-increase-at-subscription-level-using-usage--quotas) - -* [Request Networking quota increase at subscription level using Help + support](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/networking-quota-requests#request-networking-quota-increase-at-subscription-level-using-help--support) +- [Request networking quota increase at subscription level using Usages + quotas](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/networking-quota-requests#request-networking-quota-increase-at-subscription-level-using-usage--quotas) +- [Request Networking quota increase at subscription level using Help + support](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/networking-quota-requests#request-networking-quota-increase-at-subscription-level-using-help--support) ### Increase vCPU limits You can increase the number of Dv4 or Esv3 family virtual machines per region by using the Azure Portal or by submitting a support request. See: - * [Request a quota increase at a subscription level using Usages + quotas](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/classic-deployment-model-quota-increase-requests#request-quota-increase-for-the-classic-deployment-model-from-usage--quotas) +- [Request a quota increase at a subscription level using Usages + quotas](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/classic-deployment-model-quota-increase-requests#request-quota-increase-for-the-classic-deployment-model-from-usage--quotas) - * [Request a quota increase by region from Help + support](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#request-a-quota-increase-by-region-from-help--support) +- [Request a quota increase by region from Help + support](https://docs.microsoft.com/en-us/azure/azure-portal/supportability/regional-quota-requests#request-a-quota-increase-by-region-from-help--support) diff --git a/product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx b/product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx index e7de2a4e864..aeeddf96400 100644 --- a/product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx +++ b/product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx @@ -6,56 +6,60 @@ Set up your BigAnimal account on Azure Marketplace. Your Azure subscription for ## Before you connect your cloud account -1. Ensure you have an active Microsoft Azure subscription. If you need to create one, see [Create an additional Azure subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription). +1. Ensure you have an active Microsoft Azure subscription. If you need to create one, see [Create an additional Azure subscription](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/create-subscription). -1. In Azure Active Directory, ensure your role is owner and your user type is member (not guest) for the subscription you are using. +2. In Azure Active Directory, ensure your role is owner and your user type is member (not guest) for the subscription you are using. -1. Create an Azure Active Directory Application client to delegate Identity and Access Management functions to Azure Active Directory (AD). 
You can create the Azure Active Directory Application using the Azure Portal, but a simpler and less error-prone approach is to use the `create-spn` script (see [Create Azure Active Directory Application using `create-spn`](#create-azure-active-directory-application-using-create-spn)). The script approach requires the Azure API. +3. Create an Azure Active Directory Application client to delegate Identity and Access Management functions to Azure Active Directory (AD). You can create the Azure Active Directory Application using the Azure Portal, but a simpler and less error-prone approach is to use the `create-spn` script (see [Create Azure Active Directory Application using `create-spn`](#create-azure-active-directory-application-using-create-spn)). The script approach requires the Azure API. !!! Note Some steps of the subscription process require approval of an Azure AD administrator. Your Azure role in the Azure AD must be either: - - Global Administrator - - Privileged Role Administrator - - or you need the cooperation of a user with one of those roles in your organization. + - Global Administrator + - Privileged Role Administrator + or you need the cooperation of a user with one of those roles in your organization. ### Create Azure Active Directory Application using the Azure portal !!! Note Create your Azure AD Application in the same tenant as the subscription you want to associate it with.  -1. Register an application with Azure AD and create a service principal. See [Register an application with Azure AD and create a service principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) for instructions. -Take note of the **Application (client) ID**, as you need it to configure your BigAnimal account. Also take note of the **Display name** value of the Azure AD application. You need to enter this value later. +1. Register an application with Azure AD and create a service principal. See [Register an application with Azure AD and create a service principal](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) for instructions. + Take note of the **Application (client) ID**, as you need it to configure your BigAnimal account. Also take note of the **Display name** value of the Azure AD application. You need to enter this value later. + +2. Select *application secret* as an authentication option for the application. See [Create a new Azure AD application secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret) for instructions. Take note of the Azure AD App Secret, as you need it to configure your cloud account. + +3. Select *API permissions* to configure API permissions for the application. See [Configure a client application to access Azure Active Directory API](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api) for instructions. Add Application permissions with Microsoft Graph *Application.ReadWrite.OwnedBy* and *Directory.Read.All* to your application and grant admin consent for your cloud account. -1. Select _application secret_ as an authentication option for the application. 
See [Create a new Azure AD application secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret) for instructions. Take note of the Azure AD App Secret, as you need it to configure your cloud account. -1. Select _API permissions_ to configure API permissions for the application. See [Configure a client application to access Azure Active Directory API](https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api) for instructions. Add Application permissions with Microsoft Graph _Application.ReadWrite.OwnedBy_ and _Directory.Read.All_ to your application and grant admin consent for your cloud account. -1. Assign the owner role to the application. See [Assign a role to the application](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-a-role-to-the-application) for instructions. In the **Select** field of the **Add role assignment** panel, enter the display name of the Azure AD application. See [Open the Add role assignment pane](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current#step-2-open-the-add-role-assignment-page) for instructions. +4. Assign the owner role to the application. See [Assign a role to the application](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#assign-a-role-to-the-application) for instructions. In the **Select** field of the **Add role assignment** panel, enter the display name of the Azure AD application. See [Open the Add role assignment pane](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current#step-2-open-the-add-role-assignment-page) for instructions. ### Create Azure Active Directory Application using `create-spn` To simplify the process of creating an Azure AD Application, EDB provides the [`create-spn`](https://github.com/EnterpriseDB/cloud-utilities/blob/main/azure/create-spn.sh) script for Azure API users. The script automates the creation of the Active Directory Application. You can download the script [here](https://github.com/EnterpriseDB/cloud-utilities/tree/main/azure). Before using the script, ensure that these utilities are installed on your machine: -* [jq command-line JSON processor](https://stedolan.github.io/jq/) -* [azure cli](https://docs.microsoft.com/en-us/cli/azure/) v2.26 or above + +- [jq command-line JSON processor](https://stedolan.github.io/jq/) +- [azure cli](https://docs.microsoft.com/en-us/cli/azure/) v2.26 or above The syntax of the command is: + ``` curl -sL https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/azure/create-spn.sh | bash -s \ -- --display-name \ ----subscription \ --years ``` + Flag and option details: -| Flag/option shortcut | Flag/option long name | Description | -| -------------------- | --------------------- | ----------- | -| -d *NAME* | --display-name *NAME* | Name of Azure AD Application. | -| -s *SUBSCRIPTION_ID* | --subscription *SUBSCRIPTION_ID* | Azure Subscription ID used by BigAnimal. | -| -y *YEARS* | --years *YEARS* | Integer value specifying the number of years for which the credentials are valid. The default is one year. | -| -h | --help | Displays information on the syntax and usage of the script. 
| +| Flag/option shortcut | Flag/option long name | Description | +| -------------------- | -------------------------------- | ---------------------------------------------------------------------------------------------------------- | +| -d *NAME* | --display-name *NAME* | Name of Azure AD Application. | +| -s *SUBSCRIPTION_ID* | --subscription *SUBSCRIPTION_ID* | Azure Subscription ID used by BigAnimal. | +| -y *YEARS* | --years *YEARS* | Integer value specifying the number of years for which the credentials are valid. The default is one year. | +| -h | --help | Displays information on the syntax and usage of the script. | The script creates the Azure AD Service Principal and configures its access to Azure resources in the specified subscription. @@ -94,7 +98,9 @@ Add Azure AD Service Principal Owners... "subscription": "c808xxxx-xxxx-xxxx-xxxx-xxxxxxxxb712" } ``` + If you receive the following error message, you need to request admin consent for your cloud account. Only users with the Azure AD Global Administrator or Privileged Role Administrator role can grant admin consent. + ``` ... Error: Please request Azure AD Global Administrator or Privileged Role Administrator to grant admin consent permissions for Service Principal hello-s(77bbxxxx-xxxx-xxxx-xxxx-xxxxxxxx7c54) @@ -106,39 +112,46 @@ Connect your cloud account with your Azure subscription. #### 1. Select the EDB offer in the Azure portal. -1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure **Marketplace**. +1. Sign in to the [Azure portal](https://portal.azure.com/) and go to Azure **Marketplace**. + +2. Find an offer from **EnterpriseDB Corporation** and select it. -2. Find an offer from **EnterpriseDB Corporation** and select it. -3. From the **Select Plan** list, select an available plan. -4. Select **Set up + subscribe**. +3. From the **Select Plan** list, select an available plan. + +4. Select **Set up + subscribe**. #### 2. Fill out the details for your plan. -1. In the **Project details** section, enter or create a resource group for your subscription. See [What is a resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group) for more information. -2. In the **SaaS details** section, enter the SaaS subscription name. -3. Select **Review + subscribe**. +1. In the **Project details** section, enter or create a resource group for your subscription. See [What is a resource group](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group) for more information. + +2. In the **SaaS details** section, enter the SaaS subscription name. + +3. Select **Review + subscribe**. #### 3. Accept terms of use. -1. Review the terms of use provided by EDB. -2. Select **Subscribe**. +1. Review the terms of use provided by EDB. + +2. Select **Subscribe**. #### 4. Configure your account. + !!! Note After step 1, you are prompted for approval by an Azure AD administrator with either the Global Administrator or Privileged Role Administrator role. -1. To configure BigAnimal to use your Azure subscription and your Azure AD Application. select **Configure account now**. + +1. To configure BigAnimal to use your Azure subscription and your Azure AD Application. select **Configure account now**. -2. Fill in the parameters in the form: +2. 
Fill in the parameters in the form: - | Parameter | Description | - | ---------------------------------------------------- | ---------------------------------------------------------------------------- | - | **Azure AD: Application Client ID** | Application client ID you noted when [creating your Azure AD Application](#create-azure-active-directory-application-using-the-azure-portal) or that was generated from the [`create-spn`](#create-azure-active-directory-application-using-create-spn) script.| - | **Azure AD: Application Client Secret Value** | Application client secret value you noted when [creating your Azure AD Application](#create-azure-active-directory-application-using-the-azure-portal) or that was generated from the [`create-spn`](#create-azure-active-directory-application-using-create-spn) script.| - | **Azure Subscription ID** | Azure subscription ID for BigAnimal available from the Subscriptions page of your Azure account. | - | **Your BigAnimal Organization Name** | SaaS Subscription Name you assigned as your BigAnimal Organization (see [Step 2. Fill out the details for your plan.](#2-fill-out-the-details-for-your-plan)). | + | Parameter | Description | + | --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | + | **Azure AD: Application Client ID** | Application client ID you noted when [creating your Azure AD Application](#create-azure-active-directory-application-using-the-azure-portal) or that was generated from the [`create-spn`](#create-azure-active-directory-application-using-create-spn) script. | + | **Azure AD: Application Client Secret Value** | Application client secret value you noted when [creating your Azure AD Application](#create-azure-active-directory-application-using-the-azure-portal) or that was generated from the [`create-spn`](#create-azure-active-directory-application-using-create-spn) script. | + | **Azure Subscription ID** | Azure subscription ID for BigAnimal available from the Subscriptions page of your Azure account. | + | **Your BigAnimal Organization Name** | SaaS Subscription Name you assigned as your BigAnimal Organization (see [Step 2. Fill out the details for your plan.](#2-fill-out-the-details-for-your-plan)). | -11. Select **Submit**. +3. Select **Submit**. ## What's next diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx index 60303ca9f1e..9c759282244 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx @@ -3,9 +3,10 @@ title: Cluster networking architecture --- BigAnimal clusters can be exposed to client applications in two ways: -- Public — The cluster is available on the Internet. -- Private — Access to the cluster is restricted to specific -sources, and network traffic never leaves the Azure network. + +- Public — The cluster is available on the Internet. +- Private — Access to the cluster is restricted to specific + sources, and network traffic never leaves the Azure network. 
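As a concrete illustration of the difference, the sketch below connects to a cluster endpoint with `psql`. The host name is a placeholder, and the connection string reuses the `edb_admin` format shown in the migration examples elsewhere in this patch. A public cluster accepts this connection from anywhere on the internet (subject to any IP allowlist), while for a private cluster the same command succeeds only from a source inside the permitted Azure network.

```sh
# Connect to the cluster endpoint as edb_admin (replace the placeholder host name)
psql "postgres://edb_admin@<cluster-hostname>:5432/edb_admin?sslmode=verify-full"
```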
## Basic architecture @@ -59,5 +60,3 @@ Clusters can be changed from public to private and vice versa at any time. When this happens, the IP address previously assigned to the cluster is deallocated, a new one is assigned, and DNS is updated accordingly. - - diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 17f31919a50..ddfedea4fd8 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -6,64 +6,73 @@ redirects: - ../03_create_cluster/ --- -!!! Note +!!!Note Prior to creating your cluster, make sure you have adequate Azure resources or your request to create a cluster will fail. See [Raising your Azure resource limits](../01_check_resource_limits). !!! ## Create a cluster -1. Sign in to the [BigAnimal](https://portal.biganimal.com) portal. +1. Sign in to the [BigAnimal](https://portal.biganimal.com) portal. + +2. Select **Create New Cluster** in the top right of the **Overview** page or **Clusters** page. + +3. On the **Create Cluster** page, specify the cluster settings on the following tabs: + + - [**Cluster Info**](#cluster-info-tab) -3. Select **Create New Cluster** in the top right of the **Overview** page or **Clusters** page. -4. On the **Create Cluster** page, specify the cluster settings on the following tabs: - - [**Cluster Info**](#cluster-info-tab) + - [**Operational Settings**](#operational-settings-tab) - - [**Operational Settings**](#operational-settings-tab) - - [**DB Configuration** ](#db-configuration-tab) (optional) - - [ **Availability** ](#availability-tab) (optional) + - [**DB Configuration** ](#db-configuration-tab) (optional) -8. Select **Create Cluster**. It might take a few minutes to deploy. + - [ **Availability** ](#availability-tab) (optional) + +4. Select **Create Cluster**. It might take a few minutes to deploy. !!! Note When you don't configure settings on optional tabs, the default values are used. ### Cluster Info tab -1. Enter the name for your cluster in the **Cluster Name** field. -2. Enter a password for your cluster in the **Password** field. This is the password for the user edb_admin. -3. Select **Next: Operational Settings**. +1. Enter the name for your cluster in the **Cluster Name** field. + +2. Enter a password for your cluster in the **Password** field. This is the password for the user edb_admin. + +3. Select **Next: Operational Settings**. ### Operational Settings tab -1. In the **Database Type** section: - 1. Select the type of Postgres you want to use in the **Postgres Type** field: - - [*PostgreSQL*](/supported-open-source/postgresql/) is an open-source object-relational database management system. - - [*EDB Postgres Advanced Server*](/epas/latest/) is EDB’s secure, Oracle-compatible PostgreSQL. View [a quick demonstration of Oracle compatibility on EDB Cloud](../../using_cluster/06_demonstration_oracle_compatibility). +1. In the **Database Type** section: + + 1. Select the type of Postgres you want to use in the **Postgres Type** field: - 2. In the **Version** field, select the version of Postgres that you want to use. See [Database Version Policy](../../overview/05_database_version_policy) for more information. -2. In the **Provider** field, select the cloud provider for your cluster. - !!! Note - Microsoft Azure is the only option for the Preview. -3. 
In the **Region** field, select the region where you want to deploy your cluster. For the best performance, EDB typically recommends that this region be the same as other resources you have that communicate with your cluster. -4. In the **Instance Type** section, select the number of vCPUs and amount of memory you want. -5. In the **Storage** section, select **Volume Type**. In **Volume Properties**, select the type and amount of storage needed for your cluster. - !!! Note - BigAnimal currently supports Azure Premium SSD storage types. See [the Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds) for more information. -6. In the **Networking** section, you specify whether to use private or public networking. Networking is set to Public by default. Public means that any client can connect to your cluster’s public IP address over the internet. Optionally, you can limit traffic to your public cluster by specifying an IP allowlist, which only allows access to certain blocks of IP addresses. To limit access, add one or more Classless Inter-Domain Routing (CIDR) blocks in the **IP Allowlists** section. CIDR is a method for allocating IP addresses and IP routing to a whole network or subnet. If you have any CIDR block entries, access is limited to those IP addresses. If none are specified, all network traffic is allowed. + - [*PostgreSQL*](/supported-open-source/postgresql/) is an open-source object-relational database management system. - Private networking allows only IP addresses within your private network to connect to your cluster. See [Cluster networking architecture](01_cluster_networking) for more information. + - [*EDB Postgres Advanced Server*](/epas/latest/) is EDB’s secure, Oracle-compatible PostgreSQL. View [a quick demonstration of Oracle compatibility on EDB Cloud](../../using_cluster/06_demonstration_oracle_compatibility). + 2. In the **Version** field, select the version of Postgres that you want to use. See [Database Version Policy](../../overview/05_database_version_policy) for more information. +2. In the **Provider** field, select the cloud provider for your cluster. + !!! Note + Microsoft Azure is the only option for the Preview. +3. In the **Region** field, select the region where you want to deploy your cluster. For the best performance, EDB typically recommends that this region be the same as other resources you have that communicate with your cluster. +4. In the **Instance Type** section, select the number of vCPUs and amount of memory you want. +5. In the **Storage** section, select **Volume Type**. In **Volume Properties**, select the type and amount of storage needed for your cluster. + !!! Note + BigAnimal currently supports Azure Premium SSD storage types. See [the Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssds) for more information. +6. In the **Networking** section, you specify whether to use private or public networking. Networking is set to Public by default. Public means that any client can connect to your cluster’s public IP address over the internet. Optionally, you can limit traffic to your public cluster by specifying an IP allowlist, which only allows access to certain blocks of IP addresses. To limit access, add one or more Classless Inter-Domain Routing (CIDR) blocks in the **IP Allowlists** section. CIDR is a method for allocating IP addresses and IP routing to a whole network or subnet. 
If you have any CIDR block entries, access is limited to those IP addresses. If none are specified, all network traffic is allowed. -7. To optionally make updates to your database configuration parameters, select **Next: DB Configuration**. + Private networking allows only IP addresses within your private network to connect to your cluster. See [Cluster networking architecture](01_cluster_networking) for more information. + + +7. To optionally make updates to your database configuration parameters, select **Next: DB Configuration**. ### DB Configuration tab + In the **Parameters** section, you can update the value of the database configuration parameters, as needed. To update the parameter values, see [Modifying Your Database Configuration Parameters](../../using_cluster/03_modifying_your_cluster/05_db_configuration_parameters). - - ### Availability tab + Enable or disable high availability using the **High Availability** control. High availability is enabled by default. When high availability is enabled, clusters are configured with one primary and two replicas with synchronous streaming replication. Clusters are configured across availability zones in regions with availability zones. When high availability is disabled, only one instance is provisioned. @@ -73,6 +82,5 @@ To update the parameter values, see [Modifying Your Database Configuration Param After you create your cluster, use these resources to learn about cluster use and management: -* [Using your cluster](../../using_cluster/) -* [Managing Postgres access](../../using_cluster/01_postgres_access/) - +- [Using your cluster](../../using_cluster/) +- [Managing Postgres access](../../using_cluster/01_postgres_access/) diff --git a/product_docs/docs/biganimal/release/getting_started/index.mdx b/product_docs/docs/biganimal/release/getting_started/index.mdx index 1fd6157eca1..4a6974a8f69 100644 --- a/product_docs/docs/biganimal/release/getting_started/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/index.mdx @@ -4,5 +4,3 @@ indexCards: simple --- As an Azure subscription administrators, you can set up your BigAnimal account, invite others to join you in exploring what EDB has to offer, and create initial clusters as an account owner so that development can begin. - - diff --git a/product_docs/docs/biganimal/release/migration/cold_migration.mdx b/product_docs/docs/biganimal/release/migration/cold_migration.mdx index 621bd4888eb..33605668c83 100644 --- a/product_docs/docs/biganimal/release/migration/cold_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/cold_migration.mdx @@ -6,11 +6,11 @@ The simplest way to import a database into BigAnimal is using logical backups ta The high-level steps are: -1. [Export existing roles](#export-existing-roles). -1. [Import existing roles](#import-the-roles). -1. For each database, you are migrating: - 1. [logical export using `pg_dump`](#export-a-database) - 1. [logical import with `pg_restore`](#import-a-database) +1. [Export existing roles](#export-existing-roles). +2. [Import existing roles](#import-the-roles). +3. For each database, you are migrating: + 1. [logical export using `pg_dump`](#export-a-database) + 2. [logical import with `pg_restore`](#import-a-database) In case your source PostgreSQL instance hosts multiple databases, you can segment them in multiple BigAnimal clusters for easier management, better performance, increased predictability, and finer control of resources. 
For example, if your host has 10 databases, you can import one database (and related users) on a different BigAnimal cluster, one at a time. @@ -19,36 +19,37 @@ In case your source PostgreSQL instance hosts multiple databases, you can segmen This approach requires suspending write operations to the database application for the duration of the export/import process. You can then resume the write operations on the new system. This is because `pg_dump` takes an online snapshot of the source database. As a result, the changes after the backup starts aren't included in the output. The required downtime depends on many factors, including: - - Size of the database - - Speed of the network between the two systems - - Your team's familiarity with the migration procedure + +- Size of the database +- Speed of the network between the two systems +- Your team's familiarity with the migration procedure To minimize the downtime, you can test the process as many times as needed before the actual migration. You can perform the export with `pg_dump` online, and the process is repeatable and measurable. ## Before you begin Make sure that you: - - Understand the [terminology conventions](#terminology-conventions) used in this topic. - - Have the [required Postgres client binaries and libraries](#postgres-client-libraries). - - Can [access the source and target databases](#access-to-the-source-and-target-database). -### Terminology conventions +- Understand the [terminology conventions](#terminology-conventions) used in this topic. +- Have the [required Postgres client binaries and libraries](#postgres-client-libraries). +- Can [access the source and target databases](#access-to-the-source-and-target-database). -| Term | Alias | Description | -| ---- | ----- | ---------- | -| source database | `pg-source` | Postgres instance from which you want to import your data. | -| target database | `pg-target` | Postgres cluster in BigAnimal where you want to import your data. | -| migration host | `pg-migration` | Temporary Linux machine in your trusted network from which to execute the export of the database and the subsequent import into BigAnimal. The migration host needs access to both the source and target databases. Or, if your source and target databases are on the same version of Postgres, the source host can serve as your migration host. | +### Terminology conventions +| Term | Alias | Description | +| --------------- | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| source database | `pg-source` | Postgres instance from which you want to import your data. | +| target database | `pg-target` | Postgres cluster in BigAnimal where you want to import your data. | +| migration host | `pg-migration` | Temporary Linux machine in your trusted network from which to execute the export of the database and the subsequent import into BigAnimal. The migration host needs access to both the source and target databases. Or, if your source and target databases are on the same version of Postgres, the source host can serve as your migration host. 
| ### Postgres client libraries The following client binaries must be on the migration host: -- `pg_dumpall` -- `pg_dump` -- `pg_restore` -- `psql` +- `pg_dumpall` +- `pg_dump` +- `pg_restore` +- `psql` They must be the same version as the Postgres version of the target database. For example, if you want to import a PostgreSQL 10 database from your private network into a PostgreSQL 14 database in BigAnimal, use the client libraries and binaries from version 14. @@ -56,26 +57,27 @@ They must be the same version as the Postgres version of the target database. Fo Access requirements: -- PostgreSQL superuser access to the source database. This could be the `postgres` user or another user with `SUPERUSER` privileges. -- Access to the target database in BigAnimal as the `edb_admin` user. - +- PostgreSQL superuser access to the source database. This could be the `postgres` user or another user with `SUPERUSER` privileges. +- Access to the target database in BigAnimal as the `edb_admin` user. #### Verify your access -1. Connect to the source database using `psql`. For example: - ``` - psql -d "host=pg-source user=postgres dbname=postgres" - ``` - Replace `pg-source` with the actual hostname or IP address of the source database and the `user` and `dbname` values as appropriate. If the connection doesn't work, contact your system and database administrators to make sure that you can access the source database. This might require changes to your `pg_hba.conf` and network settings. If `pg_hba.conf` changes, reload the configuration with either `SELECT pg_reload_conf();` via a psql connection or `pg_ctl reload` in a shell connection to the database host. +1. Connect to the source database using `psql`. For example: + ``` + psql -d "host=pg-source user=postgres dbname=postgres" + ``` + Replace `pg-source` with the actual hostname or IP address of the source database and the `user` and `dbname` values as appropriate. If the connection doesn't work, contact your system and database administrators to make sure that you can access the source database. This might require changes to your `pg_hba.conf` and network settings. If `pg_hba.conf` changes, reload the configuration with either `SELECT pg_reload_conf();` via a psql connection or `pg_ctl reload` in a shell connection to the database host. -1. Connect to the target database using the `edb_admin` user. For example: - ``` - psql -d "host=pg-target user=edb_admin dbname=edb_admin" - ``` - Replace `pg-target` with the actual hostname of your BigAnimal cluster. +2. Connect to the target database using the `edb_admin` user. For example: + ``` + psql -d "host=pg-target user=edb_admin dbname=edb_admin" + ``` + Replace `pg-target` with the actual hostname of your BigAnimal cluster. ## Export existing roles + Export the existing roles from your source Postgres instance by running the following command on the migration host: + ``` pg_dumpall -r -d "host=pg-source user=postgres dbname=postgres" > roles.sql ``` @@ -103,49 +105,51 @@ SET standard_conforming_strings = on; -- ``` - - ## Import the roles -1. Your BigAnimal cluster already contains the `edb_admin` user, as well as the following system required roles: - - `postgres` (the superuser, needed by the BigAnimal to manage the cluster) - - `streaming_replica` (required to manage streaming replication) - - As a result, you need to modify the `roles.sql` file to: - 1. Remove the lines involving the `postgres` user. 
For example, remove lines like these: - ``` - CREATE ROLE postgres; - ALTER ROLE postgres WITH SUPERUSER ….; - ``` - 2. Remove any role with superuser or replication privileges. For example, remove lines like these: - ``` - CREATE ROLE admin; - ALTER ROLE admin WITH SUPERUSER ….; - ``` - 3. For every role that is created, grant the new role to the `edb_admin` user immediately after creating the user. For example: - ``` - CREATE ROLE my_role; - GRANT my_role TO edb_admin; - ``` - 4. Remove the `NOSUPERUSER`, `NOCREATEROLE`, `NOCREATEDB`, `NOREPLICATION`, `NOBYPASSRLS` permission attributes on the other users. - - The role section in the modified file, then, looks similar to: - ``` - CREATE ROLE my_role; - GRANT my_role TO edb_admin; - ALTER ROLE my_role WITH INHERIT LOGIN PASSWORD 'SCRAM-SHA-256$4096:my-Scrambled-Password'; - ``` -1. From the migration host, execute: - - ``` - psql -1 -f roles.sql -d "postgres://edb_admin@pg-target:5432/edb_admin?sslmode=verify-full" - ``` - (Replace `pg-target` with the Fully Qualified Domain Name (FQDN) of your BigAnimal cluster). - +1. Your BigAnimal cluster already contains the `edb_admin` user, as well as the following system required roles: + +- `postgres` (the superuser, needed by the BigAnimal to manage the cluster) +- `streaming_replica` (required to manage streaming replication) + + As a result, you need to modify the `roles.sql` file to: + +1. Remove the lines involving the `postgres` user. For example, remove lines like these: + ``` + CREATE ROLE postgres; + ALTER ROLE postgres WITH SUPERUSER ….; + ``` +2. Remove any role with superuser or replication privileges. For example, remove lines like these: + ``` + CREATE ROLE admin; + ALTER ROLE admin WITH SUPERUSER ….; + ``` +3. For every role that is created, grant the new role to the `edb_admin` user immediately after creating the user. For example: + ``` + CREATE ROLE my_role; + GRANT my_role TO edb_admin; + ``` +4. Remove the `NOSUPERUSER`, `NOCREATEROLE`, `NOCREATEDB`, `NOREPLICATION`, `NOBYPASSRLS` permission attributes on the other users. + + The role section in the modified file, then, looks similar to: + + ``` + CREATE ROLE my_role; + GRANT my_role TO edb_admin; + ALTER ROLE my_role WITH INHERIT LOGIN PASSWORD 'SCRAM-SHA-256$4096:my-Scrambled-Password'; + ``` +5. From the migration host, execute: + + ``` + psql -1 -f roles.sql -d "postgres://edb_admin@pg-target:5432/edb_admin?sslmode=verify-full" + ``` + + (Replace `pg-target` with the Fully Qualified Domain Name (FQDN) of your BigAnimal cluster). This command tries to create the roles in a single transaction. In case of errors, the transaction is rolled back, leaving the database cluster in the same state as before the import attempt. This behavior is enforced using the `-1` option of `psql`. ## Export a database + From the migration host, use the `pg_dump` command to export the source database into the target database in BigAnimal. For example: ``` @@ -158,17 +162,20 @@ pg_dump -Fc -d "host=pg-source user=postgres dbname=app" -f app.dump The command generates a custom .dump archive (`app.dump` in this example), which contains the compressed dump of the source database. The duration of the command execution varies depending on several variables including size of the database, network speed, disk speed, and CPU of both the source instance and the migration host. You can inspect the table of contents of the dump with `pg_restore -l .dump`. 
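To make the table-of-contents workflow concrete, here is a sketch. The file names reuse the `app.dump` example above, `-l`, `-L`, and `-C` are standard `pg_restore` options, and the connection string matches the import command shown below:

```sh
# Write the archive's table of contents to an editable list file
pg_restore -l app.dump > app.toc

# Comment out unwanted entries in app.toc by prefixing them with ';',
# then restore only the remaining entries into the target cluster
pg_restore -C -L app.toc \
  -d "postgres://edb_admin@pg-target:5432/edb_admin?sslmode=verify-full" app.dump
```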
As with any other custom format dump produced with `pg_dump`, you can take advantage of the features that `pg_restore` provides you with, including: -- Selecting a subset of the import tasks by editing the table of contents and passing it to the `-L` option. -- Running the command in parallel using the `-j` option with the directory format. + +- Selecting a subset of the import tasks by editing the table of contents and passing it to the `-L` option. +- Running the command in parallel using the `-j` option with the directory format. For further information, see the [`pg_restore` documentation](https://www.postgresql.org/docs/current/app-pgrestore.html). ## Import a database + Use the `pg_restore` command and the .dump file you created when exporting the source database to import the database into BigAnimal. For example: ``` pg_restore -C -d "postgres://edb_admin@pg-target:5432/edb_admin?sslmode=verify-full" app.dump ``` + This process might take some time depending on the size of the database and the speed of the network. In case of error, repeat the restore operation once you have deleted the database using the following command: diff --git a/product_docs/docs/biganimal/release/migration/index.mdx b/product_docs/docs/biganimal/release/migration/index.mdx index 006a4177873..4639b044558 100644 --- a/product_docs/docs/biganimal/release/migration/index.mdx +++ b/product_docs/docs/biganimal/release/migration/index.mdx @@ -4,21 +4,17 @@ title: Migrating databases to BigAnimal EDB provides migration tools to bring data from Oracle, PostgresSQL, and EDB Postgres Advanced Server databases into BigAnimal. These tools include Migration Portal and Migration Toolkit for Oracle migrations. More sophisticated migration processes can use tools such as [Replication Server](/eprs/latest/) for ongoing migrations and [LiveCompare](/livecompare/latest/) for data comparisons. - - ## Migrating from Oracle The [Migration Portal documentation](/migration_portal/latest) provides the details for executing the migration steps using Migration Portal: - 1. [Schema extraction](/migration_portal/latest/04_mp_migrating_database/01_mp_schema_extraction/) - 1. [Schema assessment](/migration_portal/latest/04_mp_migrating_database/02_mp_schema_assessment/) - 1. [Schema migration](/migration_portal/latest/04_mp_migrating_database/03_mp_schema_migration/) - 1. [Data migration](/migration_portal/latest/04_mp_migrating_database/04_mp_data_migration/) - +1. [Schema extraction](/migration_portal/latest/04_mp_migrating_database/01_mp_schema_extraction/) +2. [Schema assessment](/migration_portal/latest/04_mp_migrating_database/02_mp_schema_assessment/) +3. [Schema migration](/migration_portal/latest/04_mp_migrating_database/03_mp_schema_migration/) +4. [Data migration](/migration_portal/latest/04_mp_migrating_database/04_mp_data_migration/) + The Migration Portal documentation describes how to use Migration Toolkit for the data migration step. This toolkit is a good option for smaller databases. ## Migrating from Postgres -Several options are available for migrating EDB Postgres Advanced Server and PostgreSQL databases to BigAnimal. One option is to use the Migration Toolkit. Another simple option for many use cases is to import an existing PostgreSQL or EDB Postgres Advanced Server database BigAnimal. See [Importing an existing Postgres database](cold_migration). - - +Several options are available for migrating EDB Postgres Advanced Server and PostgreSQL databases to BigAnimal. One option is to use the Migration Toolkit. 
Another simple option for many use cases is to import an existing PostgreSQL or EDB Postgres Advanced Server database into BigAnimal. See [Importing an existing Postgres database](cold_migration).
diff --git a/product_docs/docs/biganimal/release/overview/02_high_availability.mdx b/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
index b8c780f48f9..78f2359cf97 100644
--- a/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
+++ b/product_docs/docs/biganimal/release/overview/02_high_availability.mdx
@@ -8,11 +8,10 @@ BigAnimal enables deploying a cluster with or without high availability. The opt
 
 ## High availability—enabled
 
-The high availability option is provided to minimize downtime in cases of failures. High-availability clusters—one _primary_ and two _replicas_—are configured automatically, with replicas staying up to date through physical streaming replication. In cloud regions with availability zones, clusters are provisioned across multiple availability zones to provide fault tolerance in the face of a datacenter failure.
+The high availability option is provided to minimize downtime in cases of failures. High-availability clusters—one *primary* and two *replicas*—are configured automatically, with replicas staying up to date through physical streaming replication. In cloud regions with availability zones, clusters are provisioned across multiple availability zones to provide fault tolerance in the face of a datacenter failure.
 
-
-* Replicas are usually called _standby servers_. You can also use them for read-only workloads.
-* In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
+- Replicas are usually called *standby servers*. You can also use them for read-only workloads.
+- In case of temporary or permanent unavailability of the primary, a standby replica becomes the primary.
 
 ![*BigAnimal Cluster4*](images/high-availability.png)
 
@@ -26,4 +25,4 @@ For nonproduction use cases where high availability is not a primary concern, a
 
 In case of permanent unavailability of the primary, a restore from a backup is required.
 
-![*BigAnimal Cluster4*](images/ha-not-enabled.png )
+![*BigAnimal Cluster4*](images/ha-not-enabled.png)
diff --git a/product_docs/docs/biganimal/release/overview/03_security.mdx b/product_docs/docs/biganimal/release/overview/03_security.mdx
index 171f28b76af..3bfe932eb4f 100644
--- a/product_docs/docs/biganimal/release/overview/03_security.mdx
+++ b/product_docs/docs/biganimal/release/overview/03_security.mdx
@@ -3,12 +3,16 @@ title: "Security"
 ---
 
 BigAnimal runs in your own cloud account, isolates your data from other users, and gives you control over our access to it. The key security features are:
-- **Data isolation:** Clusters are installed and managed in your cloud environment. Complete segregation of your data is ensured. Your data never leaves your cloud account, and compromise of another BigAnimal customer's systems doesn't put your data at risk.
-
-- **Granular access control:** You can use single sign-on (SSO) and define your own sets of roles and role-based access control (RBAC) policies to manage your individual cloud environments. See [Managing portal access](../administering_cluster/01_portal_access) for more information.
-- **Data encryption:** All data in BigAnimal is encrypted in motion and at rest. Network traffic is encrypted using Transport Layer Security (TLS) v1.2 or greater, where applicable. Data at rest is encrypted using AES with 256-bit keys.
Data encryption keys are envelope encrypted, and the wrapped data encryption keys are securely stored in an Azure Key Vault instance in your account. Encryption keys never leave your environment.
-- **Portal audit logging:** Activities in the portal, such as those related to user roles, organization updates, and cluster creation and deletion, are tracked and viewed in the activity log.
-- **Database logging and auditing:** Functionality to track and analyze database activities is enabled automatically. For PostgreSQL, the PostgreSQL Audit Extension (pgAudit) is enabled for you when deploying a Postgres cluster. For EDB Postgres Advanced Server, the EDB Audit extension (edbAudit) is enabled for you.
-  - **pgAudit:** The classes of statements being logged for pgAudit are set globally on a cluster with `pgaudit.log = 'write,ddl'`. The following statements made on tables are logged by default when the cluster type is PostgreSQL: `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, AND `COPY`. All `DDL` is logged.
-
-- **Database cluster permissions** The edb_admin account created during the *create cluster* process includes the `CREATEDB` and `CREATEROLE` database roles. EDB recommends using the edb_admin account to create a new application user and new application database for further isolation. See [Managing Postgres access](../using_cluster/01_postgres_access) for more information.
+
+- **Data isolation:** Clusters are installed and managed in your cloud environment. Complete segregation of your data is ensured. Your data never leaves your cloud account, and compromise of another BigAnimal customer's systems doesn't put your data at risk.
+
+- **Granular access control:** You can use single sign-on (SSO) and define your own sets of roles and role-based access control (RBAC) policies to manage your individual cloud environments. See [Managing portal access](../administering_cluster/01_portal_access) for more information.
+
+- **Data encryption:** All data in BigAnimal is encrypted in motion and at rest. Network traffic is encrypted using Transport Layer Security (TLS) v1.2 or greater, where applicable. Data at rest is encrypted using AES with 256-bit keys. Data encryption keys are envelope encrypted, and the wrapped data encryption keys are securely stored in an Azure Key Vault instance in your account. Encryption keys never leave your environment.
+
+- **Portal audit logging:** Activities in the portal, such as those related to user roles, organization updates, and cluster creation and deletion, are tracked and viewed in the activity log.
+
+- **Database logging and auditing:** Functionality to track and analyze database activities is enabled automatically. For PostgreSQL, the PostgreSQL Audit Extension (pgAudit) is enabled for you when deploying a Postgres cluster. For EDB Postgres Advanced Server, the EDB Audit extension (edbAudit) is enabled for you.
+  - **pgAudit:** The classes of statements being logged for pgAudit are set globally on a cluster with `pgaudit.log = 'write,ddl'`. The following statements made on tables are logged by default when the cluster type is PostgreSQL: `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, and `COPY`. All `DDL` is logged.
+
+- **Database cluster permissions:** The edb_admin account created during the *create cluster* process includes the `CREATEDB` and `CREATEROLE` database roles. EDB recommends using the edb_admin account to create a new application user and new application database for further isolation.
See [Managing Postgres access](../using_cluster/01_postgres_access) for more information. diff --git a/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx b/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx index 4c720204e5e..202b1dac626 100644 --- a/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx +++ b/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx @@ -6,28 +6,32 @@ Security and confidentiality are a shared responsibility between you and EDB. ED The following responsibility model describes the distribution of specific responsibilities between you and EDB. - ## High availability -- EDB is responsible for deploying clusters with one primary and two replicas. In cloud regions with availability zones, clusters are deployed across multiple availability zones. -- You are responsible for choosing whether to enable high availability. +- EDB is responsible for deploying clusters with one primary and two replicas. In cloud regions with availability zones, clusters are deployed across multiple availability zones. +- You are responsible for choosing whether to enable high availability. ## Database performance -- EDB is responsible for managing and monitoring the underlying infrastructure resources. -- You are responsible for data modeling, query performance, and scaling the cluster to meet your performance needs. -## Scaling -- EDB is responsible for managing and monitoring the underlying infrastructure. -- You are responsible for choosing the appropriate resources for your workload, including instance type, storage, and connections. You are also responsible for managing your Azure resource limits to ensure the underlying infrastructure can scale. +- EDB is responsible for managing and monitoring the underlying infrastructure resources. +- You are responsible for data modeling, query performance, and scaling the cluster to meet your performance needs. + +## Scaling + +- EDB is responsible for managing and monitoring the underlying infrastructure. +- You are responsible for choosing the appropriate resources for your workload, including instance type, storage, and connections. You are also responsible for managing your Azure resource limits to ensure the underlying infrastructure can scale. ## Backups and restores -- EDB is responsible for taking automatic backups and storing them in cross-regional Azure Blob Storage. -- You are responsible for periodically restoring and verifying the restores to ensure that archives are completing frequently and successfully to meet your needs. + +- EDB is responsible for taking automatic backups and storing them in cross-regional Azure Blob Storage. +- You are responsible for periodically restoring and verifying the restores to ensure that archives are completing frequently and successfully to meet your needs. ## Encryption -- EDB is responsible for data encryption at rest and in transit. EDB is also responsible for encrypting backups. -- You are responsible for column-level encryption to protect sensitive database attributes from unauthorized access by authorized users and applications of the database. -## Credential management -- EDB is responsible for making credentials available to you. -- You are responsible for managing and securing your passwords, both for BigAnimal and for your database. +- EDB is responsible for data encryption at rest and in transit. EDB is also responsible for encrypting backups. 
+- You are responsible for column-level encryption to protect sensitive database attributes from unauthorized access by authorized users and applications of the database. + +## Credential management + +- EDB is responsible for making credentials available to you. +- You are responsible for managing and securing your passwords, both for BigAnimal and for your database. diff --git a/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx b/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx index 9241bc057bd..1337c6b57ec 100644 --- a/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx +++ b/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx @@ -4,22 +4,17 @@ title: "Database version policy" ## Supported Postgres types and versions - -| **Postgres type** | **Major versions** | -| ----- | ------------------------- | -| PostgreSQL | 11–14 | -| EDB Postgres Advanced Server | 11–14 | - +| **Postgres type** | **Major versions** | +| ---------------------------- | ------------------ | +| PostgreSQL | 11–14 | +| EDB Postgres Advanced Server | 11–14 | ## Major version support PostgreSQL and EDB Postgres Advanced Server major versions are supported from the date they are made available until the version is retired by EDB (generally five years). See [Platform Compatibility ](https://www.enterprisedb.com/platform-compatibility#epas) for more details. - - ## Minor version support EDB performs periodic maintenance to ensure stability and security. EDB performs minor version upgrades and patch updates as part of periodic maintenance. Customers are notified in the BigAnimal portal prior to maintenance occurring. You cannot configure minor versions. EDB reserves the right to upgrade customers to the latest minor version without prior notice in an extraordinary circumstance. - diff --git a/product_docs/docs/biganimal/release/overview/06_support.mdx b/product_docs/docs/biganimal/release/overview/06_support.mdx index c35d2e68874..98ac242b1b0 100644 --- a/product_docs/docs/biganimal/release/overview/06_support.mdx +++ b/product_docs/docs/biganimal/release/overview/06_support.mdx @@ -8,41 +8,47 @@ If you can’t log in to your account, send us an email at [cloudsupport@enterpr ## Create a support case from the Support portal (recommended) -1. Start a support case using any one of these options: +1. Start a support case using any one of these options: - - Go to the [Support portal](https://support.biganimal.com/hc/en-us) and select **Submit a request** at the top right of the page. + - Go to the [Support portal](https://support.biganimal.com/hc/en-us) and select **Submit a request** at the top right of the page. - - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, select **Support Portal**, and select **Submit a request** at the top right of the page. - - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, and select **Create Support ticket**. -1. Enter a description in the **Subject** field. + - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, select **Support Portal**, and select **Submit a request** at the top right of the page. -1. (Optional) Select the cluster name from the **Cluster name** list. -1. Enter values for **Severity Level** and **Description**. -1. (Optional) Attach files to provide more details about the issue. -1. 
Select **Submit**. + - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, and select **Create Support ticket**. + +2. Enter a description in the **Subject** field. + +3. (Optional) Select the cluster name from the **Cluster name** list. + +4. Enter values for **Severity Level** and **Description**. + +5. (Optional) Attach files to provide more details about the issue. + +6. Select **Submit**. ## Create a support case from the **Support** widget -1. Log in to BigAnimal and select **Support** on the bottom of the left navigation pane. +1. Log in to BigAnimal and select **Support** on the bottom of the left navigation pane. -1. Fill in the **Leave us a message** form. - 1. (Optional) The **Your Name** field is prefilled, but you can edit it. - - 3. The **Email Address** field is prefilled, but you can edit it. - 4. (Optional) Select the cluster name from the **Cluster name** list. - 5. Enter values for **Severity Level** and **Description**. - 6. (Optional) Attach files to provide more details about the issue. - 7. Select **Submit**. +2. Fill in the **Leave us a message** form. -## Case severity level + 1. (Optional) The **Your Name** field is prefilled, but you can edit it. + + 2. The **Email Address** field is prefilled, but you can edit it. -| Level | Description | -| -------- | ----------- | -| Severity 1 | The cloud service is down or there's an error with critical impact on your production environment. Covers urgent problems including database service outage, data loss, and cluster provisioning failure. | -| Severity 2 | There's a cloud service error or an issue significantly impacting your production environment. The system is functioning but in a severely reduced capacity. Covers problems including database service interruption and backup failures. | -| Severity 3 | There's an error or issue that doesn't have a significant impact on your production environment. Covers problems including API issues, monitoring metrics, and logging issues. -| Severity 4 | General questions, inquiries, or nontechnical requests. + 3. (Optional) Select the cluster name from the **Cluster name** list. + 4. Enter values for **Severity Level** and **Description**. + 5. (Optional) Attach files to provide more details about the issue. + 6. Select **Submit**. + +## Case severity level +| Level | Description | +| ---------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Severity 1 | The cloud service is down or there's an error with critical impact on your production environment. Covers urgent problems including database service outage, data loss, and cluster provisioning failure. | +| Severity 2 | There's a cloud service error or an issue significantly impacting your production environment. The system is functioning but in a severely reduced capacity. Covers problems including database service interruption and backup failures. | +| Severity 3 | There's an error or issue that doesn't have a significant impact on your production environment. Covers problems including API issues, monitoring metrics, and logging issues. | +| Severity 4 | General questions, inquiries, or nontechnical requests. 
| diff --git a/product_docs/docs/biganimal/release/overview/index.mdx b/product_docs/docs/biganimal/release/overview/index.mdx index 0897543f7fe..117b73e107a 100644 --- a/product_docs/docs/biganimal/release/overview/index.mdx +++ b/product_docs/docs/biganimal/release/overview/index.mdx @@ -1,12 +1,7 @@ --- title: "Overview of service" --- + BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. Provision [PostgreSQL](https://www.enterprisedb.com/docs/supported-open-source/postgresql/) or [EDB Postgres Advanced Server](https://www.enterprisedb.com/docs/epas/latest/) with Oracle compatibility. - - - - - - diff --git a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx index 1937d743939..d30046b06fc 100644 --- a/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx +++ b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx @@ -2,20 +2,21 @@ title: "Pricing and billing " --- - This section covers the pricing breakdown for BigAnimal as well as how to view invoices and infrastructure usage through Microsoft Azure. It also covers management costs. ## Pricing + Pricing is based on the number of virtual central processing units (vCPUs) provisioned for the database software offering. Consumption of vCPUs is metered hourly. A deployment is made up of either one instance or one primary and two replica instances of either PostgreSQL or EDB Postgres Advanced Server. When high availability is enabled, multiply the number of vCPUs per instance by three to calculate the full price for all resources used. The table shows the cost breakdown: -| Database type | Hourly price | Monthly price\* | -| --------------------- | -------------- | -------------- | -| PostgreSQL | $0.1655 / vCPU | $120.82 / vCPU | -| EDB Postgres Advanced Server | $0.2397 / vCPU | $174.98 / vCPU | +| Database type | Hourly price | Monthly price\* | +| ---------------------------- | -------------- | --------------- | +| PostgreSQL | $0.1655 / vCPU | $120.82 / vCPU | +| EDB Postgres Advanced Server | $0.2397 / vCPU | $174.98 / vCPU | \* The monthly cost is approximate and assumes 730 hours in a month. Microsoft bills on actual hours in a given month. ## Billing + All billing is handled directly by Microsoft Azure. You can view invoices and usage on the Azure Portal billing page. [Learn more](https://docs.microsoft.com/en-us/azure/cost-management-billing/). ## Cloud infrastructure costs @@ -23,16 +24,17 @@ All billing is handled directly by Microsoft Azure. You can view invoices and us EDB doesn't bill you for cloud infrastructure such as compute, storage, data transfer, monitoring, and logging. BigAnimal clusters run in your Microsoft Azure account. Azure bills you directly for the cloud infrastructure provisioned according to the terms of your account agreement. You can view invoices and usage on the Azure Portal billing page. [Learn more](https://docs.microsoft.com/en-us/azure/cost-management-billing/). ## Management costs + To give you full control over your data, BigAnimal deploys infrastructure in each region to manage the clusters in that region. BigAnimal doesn’t charge you for this infrastructure. Azure bills directly for the infrastructure according to the terms of your account agreement. The table shows a breakdown of management costs. 
-
-| Type | Details |
-|--------|---------|
-| Azure Kubernetes Service (AKS) | BigAnimal uses AKS to orchestrate and manage the database service Managed Kubernetes Virtual Machines provisioned on Windows and Linux. |
-| Azure Monitor | BigAnimal feeds all logs and metrics to [Azure Monitor](https://www.enterprisedb.com/docs/biganimal/latest/using_cluster/05_monitoring_and_logging/). |
-| Azure blob storage | BigAnimal uses blob storage to store metadata about your account. |
-| Key vault | BigAnimal uses key vault to securely store credentials for managing your infrastructure. |
+| Type                           | Details                                                                                                                                                |
+| ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Azure Kubernetes Service (AKS) | BigAnimal uses AKS to orchestrate and manage the database service Managed Kubernetes Virtual Machines provisioned on Windows and Linux.               |
+| Azure Monitor                  | BigAnimal feeds all logs and metrics to [Azure Monitor](https://www.enterprisedb.com/docs/biganimal/latest/using_cluster/05_monitoring_and_logging/). |
+| Azure blob storage             | BigAnimal uses blob storage to store metadata about your account.                                                                                      |
+| Key vault                      | BigAnimal uses key vault to securely store credentials for managing your infrastructure.                                                               |

At list price, estimated overall management costs are $400–$700 for a single cluster. Check with your Azure account manager for specifics that apply to your account.

@@ -41,5 +43,3 @@ If you deploy a large number of clusters in a single region, for example, more t

To get a better sense of your Azure costs, check out the Azure pricing [calculator](https://azure.microsoft.com/en-us/pricing/calculator/) and speak with your BigAnimal representative.

If you want to remove management resources provisioned from Azure, reach out to [Customer Support](../overview/06_support).
-
-
diff --git a/product_docs/docs/biganimal/release/reference/api.mdx b/product_docs/docs/biganimal/release/reference/api.mdx
index 55371b7dadf..e2412cadb73 100644
--- a/product_docs/docs/biganimal/release/reference/api.mdx
+++ b/product_docs/docs/biganimal/release/reference/api.mdx
@@ -4,32 +4,31 @@ title: Using the BigAnimal API

Use the BigAnimal API to integrate directly with BigAnimal for management activities such as cluster provisioning, de-provisioning, and scaling.

-The API reference documentation is available from the [BigAnimal portal](https://portal.biganimal.com). The direct documentation link is [https://portal.biganimal.com/api/docs/](https://portal.biganimal.com/api/docs/).
+The API reference documentation is available from the [BigAnimal portal](https://portal.biganimal.com). The direct documentation link is <https://portal.biganimal.com/api/docs/>.

To access the API, you need a token. The high-level steps to obtain a token are:

-1. [Query the authentication endpoint](#query-the-authentication-endpoint).
-2. [Request the device code](#request-the-device-code-using-curl).
-3. [Authorize as a user](#authorize-as-a-user).
-4. [Request the raw token](#request-the-raw-token-using-curl).
-5. [Exchange for the raw token for the BigAnimal token](#exchange-the-biganimal-token-using-curl).
+1. [Query the authentication endpoint](#query-the-authentication-endpoint).
+2. [Request the device code](#request-the-device-code-using-curl).
+3. [Authorize as a user](#authorize-as-a-user).
+4. [Request the raw token](#request-the-raw-token-using-curl).
+5. [Exchange the raw token for the BigAnimal token](#exchange-the-biganimal-token-using-curl).
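If you prefer to capture the values from step 1 in a script instead of copying them by hand, something like the following works. This is a minimal sketch rather than supported tooling; it assumes a Bash-compatible shell and the [jq command-line JSON processor](https://stedolan.github.io/jq/), and it only sets the environment variables (`CLIENT_ID`, `ISSUER_URL`, `SCOPE`, and `AUDIENCE`) that the `curl` examples in the following sections expect:

```
# Minimal sketch (assumes bash and jq): query the authentication endpoint and
# export the values used by the curl examples in the sections that follow.
AUTH=$(curl -s https://portal.biganimal.com/api/v1/auth/provider)
export CLIENT_ID=$(echo "$AUTH" | jq -r .clientId)
export ISSUER_URL=$(echo "$AUTH" | jq -r .issuerUri)
export SCOPE=$(echo "$AUTH" | jq -r .scope)
export AUDIENCE=$(echo "$AUTH" | jq -r .audience)
```

The remaining steps involve an interactive browser authorization and polling, so they're shown individually in the sections that follow.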
EDB provides an optional script to simplify getting your device code and getting and refreshing your tokens. See [Using the `get-token` script](#using-the-get-token-script) for details. - ## Query the authentication endpoint This call returns the information that either: -- You need later if you are using `curl` to request the device code and tokens. -- The `get-token` script uses to generate the tokens for you. - +- You need later if you are using `curl` to request the device code and tokens. +- The `get-token` script uses to generate the tokens for you. ``` curl https://portal.biganimal.com/api/v1/auth/provider ``` The response returns the `clientId`, `issuerUri`, `scope`, and `audience`. For example: + ``` { "clientId": "pM8PRguGtW9yVnrsvrvpaPyyeS9fVvFh", @@ -47,13 +46,17 @@ ISSUER_URL=https://auth.biganimal.com SCOPE="openid profile email offline_access" AUDIENCE="https://portal.biganimal.com/api" ``` + The following example calls use these environment variables. ## Request the device code using `curl` + !!!note The `get-token` script executes this step. You don't need to make this call if you are using the script. !!! + This call gets a device code: + ``` curl --request POST \ --url "$ISSUER_URL/oauth/device/code" \ @@ -65,14 +68,15 @@ curl --request POST \ The response returns: -- `device_code` — The unique code for the device. When you go to the `verification_uri` in your browser-based device, this code is bound to your session. You use this code in your [request for a token](#request-the-raw-token-using-curl). -- `user_code` — The code you input at the `verification_uri` to authorize the device. You use this code when you [authorize yourself as a user](#authorize-as-a-user). -- `verification_uri` — The URL you use to authorize your device. -- `verification_uri_complete` — The complete URL you use to authorize the device. You can use this URL to embed the user code in your app's URL. -- `expires_in` — The lifetime (in seconds) of the `device_code` and `user_code`. -- `interval` — The interval (in seconds) at which the app polls the token URL to request a token. +- `device_code` — The unique code for the device. When you go to the `verification_uri` in your browser-based device, this code is bound to your session. You use this code in your [request for a token](#request-the-raw-token-using-curl). +- `user_code` — The code you input at the `verification_uri` to authorize the device. You use this code when you [authorize yourself as a user](#authorize-as-a-user). +- `verification_uri` — The URL you use to authorize your device. +- `verification_uri_complete` — The complete URL you use to authorize the device. You can use this URL to embed the user code in your app's URL. +- `expires_in` — The lifetime (in seconds) of the `device_code` and `user_code`. +- `interval` — The interval (in seconds) at which the app polls the token URL to request a token. For example: + ``` { "device_code": "KEOY2_5YjuVsRuIrrR-aq5gs", @@ -92,21 +96,22 @@ DEVICE_CODE=KEOY2_5YjuVsRuIrrR-aq5gs ## Authorize as a user -1. Go to the `verification_uri` in your web browser, enter your `user_code`, and select **Continue**. - -2. On the Device Confirmation dialog, select **Confirm**. +1. Go to the `verification_uri` in your web browser, enter your `user_code`, and select **Continue**. -3. On the BigAnimal Welcome screen, select **Continue with Microsoft Azure AD**. +2. On the Device Confirmation dialog, select **Confirm**. -4. Log in with your Azure AD credentials. +3. 
On the BigAnimal Welcome screen, select **Continue with Microsoft Azure AD**. +4. Log in with your Azure AD credentials. ## Request the raw token using `curl` + !!!note The `get-token` script executes this step. You don't need to make this call if you are using the script. See [Request your token using `get-token`](#request-your-token-using-get-token). !!! The `curl --request POST` call requests a token. For example: + ``` curl --request POST \ --url "$ISSUER_URL/oauth/token" \ @@ -115,14 +120,17 @@ curl --request POST \ --data "device_code=$DEVICE_CODE" \ --data "client_id=$CLIENT_ID" ``` + If successful, the call returns: -- `access_token` — Use to exchange for the token to access BigAnimal API. -- `refresh_token` — Use to obtain a new access token or ID token after the previous one expires. (See [Refresh tokens](https://auth0.com/docs/tokens/refresh-tokens) for more information.) Refresh tokens expire after 30 days. +- `access_token` — Use to exchange for the token to access BigAnimal API. + +- `refresh_token` — Use to obtain a new access token or ID token after the previous one expires. (See [Refresh tokens](https://auth0.com/docs/tokens/refresh-tokens) for more information.) Refresh tokens expire after 30 days. -- `expires_in` — Means the token expires after 24 hours from its creation. +- `expires_in` — Means the token expires after 24 hours from its creation. For example: + ``` { "access_token": "eyJhbGc.......1Qtkaw2fyho", @@ -146,13 +154,17 @@ The access token obtained at this step is used only in the next step to exchange !!! If not successful, you receive one of the following errors: -- `authorization_pending` — Continue polling using the suggested interval retrieved when you [requested the device code](#request-the-device-code-using-curl). -- `slow_down` — Slow down and use the suggested interval retrieved when you [requested the device code](#request-the-device-code-using-curl). To avoid receiving this error due to network latency, start counting each interval after receipt of the last polling request's response. -- `expired_token` — You didn't authorize the device quickly enough, so the `device_code` expired. Your application notifies you that it has expired and to restore it. -- `access_denied` +- `authorization_pending` — Continue polling using the suggested interval retrieved when you [requested the device code](#request-the-device-code-using-curl). + +- `slow_down` — Slow down and use the suggested interval retrieved when you [requested the device code](#request-the-device-code-using-curl). To avoid receiving this error due to network latency, start counting each interval after receipt of the last polling request's response. + +- `expired_token` — You didn't authorize the device quickly enough, so the `device_code` expired. Your application notifies you that it has expired and to restore it. + +- `access_denied` ## Exchange the BigAnimal token using `curl` + !!!note The `get-token` script executes this step. You don't need to make this call if you are using the script. !!! @@ -167,9 +179,11 @@ curl -s --request POST \ ``` If successful, the call returns: -- `token` — The bearer token used to access the BigAnimal API. + +- `token` — The bearer token used to access the BigAnimal API. For example: + ``` { "token": "eyJhbGc.......0HFkr_19Vr7w" @@ -183,7 +197,7 @@ Store this token in environment variables for future use. For example: ACCESS_TOKEN="eyJhbGc.......0HFkr_19Vr7w" ``` -!!! 
Tip +!!!Tip Contact [Customer Support](../overview/06_support) if you have trouble obtaining a valid access token to access the BigAnimal API. !!! @@ -199,6 +213,7 @@ curl --request GET \ ``` Example response: + ``` { "pgTypesList": [ @@ -218,17 +233,18 @@ Example response: } ``` - ## Refresh your token You use the refresh token to get a new raw-access token. Usually you need a new access token only after the previous one expires or when gaining access to a new resource for the first time. Don't call the endpoint to get a new access token every time you call an API. Rate limits can throttle the amount of requests to the endpoint that can be executed using the same token from the same IP. ### Refresh your token using `curl` + !!!note The `get-token` script has an option to execute this step. See [Refresh the token using `get-token`](#refresh-the-token-using-get-token). !!! If you aren't using the `get-token` script to refresh your token, make a POST request to the `/oauth/token` endpoint in the Authentication API, using `grant_type=refresh_token`. For example: + ``` curl --request POST \ --url "$ISSUER_URL/oauth/token" \ @@ -237,11 +253,13 @@ curl --request POST \ --data "client_id=$CLIENT_ID" \ --data "refresh_token=$REFRESH_TOKEN" ``` + The `refresh_token` is in the response when you [requested the token](#request-the-device-code-using-curl). The `client_id` is always the same one in the response when you [queried the authentication endpoint](#query-the-authentication-endpoint). The response of this API call includes the `access_token`. For example: + ``` { "access_token": "eyJ...MoQ", @@ -265,11 +283,8 @@ The token you obtain from this step is the raw-access token. You need to exchang !!! Note You need to save the refresh token retrieved from this response for the next refresh call. The refresh token in the response when you originally [requested the token](#request-the-raw-token-using-curl) becomes obsolete once you use it. - - ## Using the `get-token` script - To simplify the process of getting tokens, EDB provides the `get-token` script. You can download it [here](https://github.com/EnterpriseDB/cloud-utilities/tree/main/api). To use the script, install the [jq command-line JSON processor](https://stedolan.github.io/jq/) specific to your OS. @@ -295,6 +310,7 @@ Reference: https://www.enterprisedb.com/docs/biganimal/latest/reference/ ``` ### Request your token using `get-token` + To use the `get-token` script to get your tokens, use the script without the `--refresh` option. For example: ``` @@ -313,7 +329,9 @@ xxxxxxxxxx ##### Expires In Seconds ########## 86400 ``` -### Refresh the token using `get-token` + +### Refresh the token using `get-token` + To use the `get-token` script to refresh your token, use the script with the `--refresh ` option. For example: ``` @@ -326,4 +344,3 @@ To use the `get-token` script to refresh your token, use the script with the `-- "token_type": "Bearer" } ``` - diff --git a/product_docs/docs/biganimal/release/reference/cli.mdx b/product_docs/docs/biganimal/release/reference/cli.mdx index 5a994d9d359..16e75fbb92a 100644 --- a/product_docs/docs/biganimal/release/reference/cli.mdx +++ b/product_docs/docs/biganimal/release/reference/cli.mdx @@ -2,46 +2,46 @@ title: Using the BigAnimal CLI --- - Use the command line interface (CLI) for BigAnimal management activities such as cluster provisioning and getting cluster status from your terminal. 
The CLI is an efficient way to integrate with BigAnimal and enables system administrators and developers to script and automate the BigAnimal administrative operations. - - ## Installing the CLI + The CLI is available for Linux, MacOS, and Windows operating systems. ### Download the binary executable -- For Linux operating systems, use the following command to get the latest version of the binary executable: - ``` - curl -LO "https://cli.biganimal.com/download/$(uname -s)/$(uname -m)/latest/biganimal" - ``` - -- For all operating systems, download the executable binary [here](http://cli.biganimal.com/). +- For Linux operating systems, use the following command to get the latest version of the binary executable: + ``` + curl -LO "https://cli.biganimal.com/download/$(uname -s)/$(uname -m)/latest/biganimal" + ``` +- For all operating systems, download the executable binary [here](http://cli.biganimal.com/). ### (Optional) Validate the download -- For Linux operating systems, including Linux running from MacOS platforms: - 1. Download the `checksum` file with following command: - ``` - curl -L0 "https://cli.biganimal.com/download/$(uname -s)/$(uname -m)/latest/biganimal.sha256" - ``` - 1. From a shell, validate the binary executable file against the `checksum` file: - ``` - echo "$(