diff --git a/advocacy_docs/pg_extensions/advanced_storage_pack/index.mdx b/advocacy_docs/pg_extensions/advanced_storage_pack/index.mdx
index 2623ee98275..eb878803c34 100644
--- a/advocacy_docs/pg_extensions/advanced_storage_pack/index.mdx
+++ b/advocacy_docs/pg_extensions/advanced_storage_pack/index.mdx
@@ -22,4 +22,4 @@ See [Autocluster example](using/#autocluster-example) for an example use case.
 
 The Refdata TAM is optimized for mostly static data, which contains an occasional INSERT and very few DELETE and UPDATE commands. For database schemas that use foreign keys to reference data, this TAM can provide performance gains of 5-10% and increased scalability. Whenever anyone modifies a Refdata table, the modifying transaction takes a table-level ExclusiveLock, blocking out concurrent modifications by any other session as well as modifications to tables that reference the table.
 
-See [Refdata exampe](using/#refdata-example) for an example use case.
+See [Refdata example](using/#refdata-example) for an example use case.
diff --git a/advocacy_docs/supported-open-source/barman/pg-backup-api/01-installation.mdx b/advocacy_docs/supported-open-source/barman/pg-backup-api/01-installation.mdx
index b61a9c7e6e9..36fde1d0293 100644
--- a/advocacy_docs/supported-open-source/barman/pg-backup-api/01-installation.mdx
+++ b/advocacy_docs/supported-open-source/barman/pg-backup-api/01-installation.mdx
@@ -130,3 +130,4 @@ The json payload should look like this:
 
 ```
 
+Congratulations! [Carry on with Operations](03-tasks) for further information about how to create task operations with the API.
diff --git a/advocacy_docs/supported-open-source/barman/pg-backup-api/03-tasks.mdx b/advocacy_docs/supported-open-source/barman/pg-backup-api/03-tasks.mdx
new file mode 100644
index 00000000000..ea96ab828ad
--- /dev/null
+++ b/advocacy_docs/supported-open-source/barman/pg-backup-api/03-tasks.mdx
@@ -0,0 +1,108 @@
+---
+title: 'Operations'
+description: 'Instructions for creating and querying task operations on the Postgres Backup API'
+tags:
+ - barman
+ - backup
+ - recovery
+ - postgresql
+ - pg-backup-api
+---
+
+### Prerequisites
+
+You can ask Barman to perform operations on upstream servers. If you plan to use that feature, the following setup is expected:
+
+* Postgres backup API (pg-backup-api) must be installed on the same server as Barman. See [How to install Postgres backup API](01-installation) if you haven't done it already.
+* Passwordless SSH login must be in place between the different hosts.
+* The user Barman connects as on the Postgres nodes must either own the destination directory or be granted write permissions on it.
+* For security reasons, the API listens on localhost only. You need to use a proxy to forward the traffic. We used the Apache HTTP server during our tests. See [our notes about how to define a virtual host](02-securing-pg-backup-api#adding-a-virtualhost-definition-for-postgres-backup-api).
+
+### Available endpoints
+
+* /servers/NAME/operations/
+* /servers/NAME/operations/OPERATION_ID
+
+NAME is an available Barman server configuration, usually defined in a file like /etc/barman.d/myserver.conf.
+
+OPERATION_ID is a short string that represents the ID of an operation created by the API.
+
+The sections below show how to create operations and how to fetch the operations created by the API.
+
+### Available operations
+
+#### GET: /servers/NAME/operations
+
+Returns all operations created by the API, if any.
+
+```bash
+curl http://barman-server.my.org/servers/db_server_two/operations
+{
+  "operations": [
+    "20230223T092433",
+    "20230223T092630"
+  ]
+}
+```
+
+#### POST: /servers/NAME/operations
+
+This method is where you ask pg-backup-api to create an operation. Note the ID in the response below, because you use it later to query the operation's status.
+
+You need to send instructions about the operation you want to create. You do this with a JSON message (payload). The first supported operation type, and for the moment the only one, is "recover". The payload should look like this:
+
+```json
+{"operation_type": "recover",
+ "backup_id": "20230221T155931",
+ "remote_ssh_command": "ssh postgres@db_server_two.my.org",
+ "destination_directory": "/var/lib/pgdata"}
+```
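+
+One way to send the payload is to save it to a file and pass it to curl with `-d@`. Here's a minimal sketch; the file name matches the `-d@` argument in the curl example below:
+
+```bash
+# Write the recovery request payload (shown above) to a file.
+cat > payload-pg-backup-api.json <<'EOF'
+{"operation_type": "recover",
+ "backup_id": "20230221T155931",
+ "remote_ssh_command": "ssh postgres@db_server_two.my.org",
+ "destination_directory": "/var/lib/pgdata"}
+EOF
+```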
+
+!!! Note
+    You need to set the "content-type" header to "application/json", as in the example below. Otherwise, you receive a 400 Bad Request error.
+
+```bash
+curl -X POST http://barman-server.my.org/servers/db_server_two/operations -H "content-type: application/json" -d@payload-pg-backup-api.json
+{
+  "operation_id": "20230223T093201"
+}
+```
+
+In the response above, "20230223T093201" is the OPERATION_ID, which you use next to check the operation's status.
+
+#### GET: /servers/NAME/operations/OPERATION_ID
+
+This method lets you check whether an operation's status is DONE, SUCCESS, or IN_PROGRESS.
+
+```bash
+curl http://barman-server.my.org/servers/db_server_two/operations/20230223T093201
+{
+  "recovery_id": "20230223T093201",
+  "status": "DONE"
+}
+```
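+
+To wait for an operation to finish, you can poll this endpoint until the status changes. Here's a minimal sketch, assuming the `jq` command is available on the client:
+
+```bash
+# Poll the operation status every 10 seconds until it leaves IN_PROGRESS.
+OPERATION_ID=20230223T093201
+while true; do
+    status=$(curl -s http://barman-server.my.org/servers/db_server_two/operations/"$OPERATION_ID" | jq -r '.status')
+    echo "Operation $OPERATION_ID is $status"
+    [ "$status" = "IN_PROGRESS" ] || break
+    sleep 10
+done
+```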
diff --git a/product_docs/docs/biganimal/release/getting_started/activating_regions.mdx b/product_docs/docs/biganimal/release/getting_started/activating_regions.mdx
index c1d0f1ba383..25d570e6527 100644
--- a/product_docs/docs/biganimal/release/getting_started/activating_regions.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/activating_regions.mdx
@@ -5,7 +5,7 @@ title: "Activating regions"
 
 When you activate a region, BigAnimal prepares the compute and networking resources required to deploy clusters. Note that these added resources can increase your cloud costs.
 
-You can activate a region ahead of time or when you create or restore a cluster.
+You must activate a region prior to creating or restoring a cluster.
 
 Each region you activate displays a status. The status is available on the Create cluster and Restore cluster pages and the Regions page. For more information on the different region statuses, see [Region status reference](#region-status-reference).
 
@@ -13,7 +13,7 @@ Each region you activate displays a status. The status is available on the Creat
 
 ## Activate a new region from the Regions page
 
-You can activate a region ahead of time using the **Regions** page. Alternatively, you can activate a region by selecting an inactive region at the time of cluster creation or restore.
+You can activate a region ahead of time using the **Regions** page.
 
 1. To activate a region ahead of cluster creation, go to the **Regions** page.
 
diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
index 74d2ff92482..23d3d5d5e0c 100644
--- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx
@@ -10,7 +10,7 @@ redirects:
 Prior to creating your cluster, make sure you have enough resources. Without enough resources, your request to create a cluster fails.
 
 - If your cloud provider is Azure, see [Preparing your Azure account](/biganimal/release/getting_started/preparing_cloud_account/01_preparing_azure).
 - If your cloud provider is AWS, see [Preparing your AWS account](/biganimal/release/getting_started/preparing_cloud_account/02_preparing_aws).
-- Activate a region during cluster creation or ahead of time. See [Activating regions](/biganimal/latest/getting_started/activating_regions).
+- Activate a region prior to cluster creation. See [Activating regions](/biganimal/latest/getting_started/activating_regions).
 
 !!!
@@ -69,8 +69,6 @@ Prior to creating your cluster, make sure you have enough resources. Without eno
 
 1. In the **Region** section, select the region where you want to deploy your cluster.
 
-   You can select a region for deploying your cluster even if it is not yet *Active*. Your cluster creation request is added to a queue, and the cluster is created after you activate the region. See [Activating regions](/biganimal/latest/getting_started/activating_regions) for more information.
-
 !!! Tip
     For the best performance, EDB strongly recommends that this region be the same as your other resources that communicate with your cluster. For a list of available regions, see [Supported regions](../../overview/03a_region_support). If you are interested in deploying a cluster to a region that isn't currently available, contact [Support](/biganimal/latest/overview/support/).
 
diff --git a/product_docs/docs/epas/15/epas_rel_notes/epas15_2_0_rel_notes.mdx b/product_docs/docs/epas/15/epas_rel_notes/epas15_2_0_rel_notes.mdx
index 57e6c10112c..ee80a29ee61 100644
--- a/product_docs/docs/epas/15/epas_rel_notes/epas15_2_0_rel_notes.mdx
+++ b/product_docs/docs/epas/15/epas_rel_notes/epas15_2_0_rel_notes.mdx
@@ -7,26 +7,24 @@ EDB Postgres Advanced Server 15.2.0 includes the following enhancements and bug
 | Type | Description | Category |
 | -------------- | -------------------------------------------------------------------------------------------------------------------------------------| --------------------- |
 | Upstream merge | Merged with community PostgreSQL 15.2. See the [PostgreSQL 15 Release Notes](https://www.postgresql.org/docs/15/release-15.html) for more information. | |
-| Feature | Transparent Data Encryption (TDE) encrypts, transparently to the user, any user data stored in the database system. User data includes the actual data stored in tables and other objects, as well as system catalog data such as the names of objects. See [TDE docs](/tde/latest) for more information. | Security |
-| Enhancement | EDB Postgres Advanced Server now allows non-superuser to load data using EDB*Loader. | edbldr |
-| Enhancement | Enable multi-insert support for the dynamic partition for EDB*Loader and COPY command. | |
-| Enhancement | EDB Postgres Advanced Server now provides the ability to obfuscate the LDAP password in the `pg_hba.conf` file. The user can supply a module which will transform the `ldapbindpasswd` value in the `pg_hba.conf` file before it is passed to the LDAP server. | Security |
-| Enhancement | Adding OCI dblink configuration file approach to restrict pushdowns. This enhancement adds the infrastructure of the configuration file where you can define the list of operators/functions that can push down to the remote server. It also allows users to add/modify the list as required. | |
-| Enhancement | MERGE syntax - Adding support for WHERE clause to the UPDATE and INSERT of MERGE command for Oracle compatibility.| Oracle compatibility |
-| Enhancement | Adding HTP and HTF packages to built-in packages for Oracle compatibility. | Oracle compatibility |
-| Enhancement | Allow INTO clause to accept multiple composite row type targets in SPL. This enhancement allows a SELECT list having a mix of scalar and composite type values that are fetched from a table, to be assigned to corresponding scalar or composite variables (including collection variables) in the SPL code.
-| Enhancement | Skip IN/OUT/IN OUT modifiers in the USING expression. USING clause in EXECUTE IMMEDIATE does support passing parameters to embedded SPL blocks. These parameters are treated as IN OUT only, and there is currently no way to specify whether the parameter is IN, OUT, or IN OUT. However, to ease migration from Oracle, these modifiers are skipped at the beginning of the expression whenever possible. | Oracle compatibility |
-| Enhancement | Adding FORMAT_ERROR_STACK() and FORMAT_ERROR_BACKTRACE() functions to the DBMS_UTILITY package. These functions are used in a stored procedure, function or package to return the current exception name. These functions are useful for debugging and logging purposes. | |
-| Enhancement | Adding Oracle compatible UPDATE..SET ROW syntax support. UPDATE changes the values of the specified columns in all rows that satisfy the condition. Only the columns to modify are mentioned in the SET clause; columns not to modify explicitly retain their previous values. The SET ROW clause enables users to update a target record using a record type variable or row type objects. The condition is that the record or row used should have compatible data types with table's columns in order. | Oracle compatibility |
+| Feature | Transparent Data Encryption (TDE) encrypts any user data stored in the database system. This encryption is transparent to the user. User data includes the actual data stored in tables and other objects, as well as system catalog data such as the names of objects. See [TDE docs](/tde/latest) for more information. | Security |
+| Enhancement | EDB Postgres Advanced Server now allows non-superusers to load data using EDB*Loader. | edbldr |
+| Enhancement | Enabled multi-insert support for dynamic partitions for EDB*Loader and the COPY command. | |
+| Enhancement | EDB Postgres Advanced Server now lets you obfuscate the LDAP password in the `pg_hba.conf` file. You can supply a module that transforms the `ldapbindpasswd` value in the `pg_hba.conf` file before the value is passed to the LDAP server. | Security |
+| Enhancement | Added an OCI dblink configuration file approach to restrict pushdowns. This enhancement adds the infrastructure of a configuration file in which you can define the list of operators and functions that can be pushed down to the remote server. It also allows you to add to or modify the list as needed. | |
+| Enhancement | Added support for the WHERE clause in the UPDATE and INSERT of the MERGE command for Oracle compatibility. | Oracle compatibility |
+| Enhancement | Added the HTP and HTF packages to the built-in packages for Oracle compatibility. | Oracle compatibility |
+| Enhancement | The INTO clause now accepts multiple composite-row type targets in SPL. This enhancement allows you to assign a SELECT list having a mix of scalar and composite type values that are fetched from a table to corresponding scalar or composite variables (including collection variables) in the SPL code. | |
+| Enhancement | EDB Postgres Advanced Server now skips IN/OUT/IN OUT modifiers in the USING expression. A USING clause in EXECUTE IMMEDIATE supports passing parameters to embedded SPL blocks. However, these parameters are treated as IN OUT only, and there was previously no way to specify whether the parameter is IN, OUT, or IN OUT. To ease migration from Oracle, these modifiers are now skipped at the beginning of the expression whenever possible. | Oracle compatibility |
+| Enhancement | Added the FORMAT_ERROR_STACK() and FORMAT_ERROR_BACKTRACE() functions to the DBMS_UTILITY package. These functions are used in a stored procedure, function, or package to return the current exception name. These functions are useful for debugging and logging purposes. | |
+| Enhancement | Added Oracle-compatible UPDATE..SET ROW syntax support. UPDATE changes the values of the specified columns in all rows that satisfy the condition. Only the columns being modified are mentioned in the SET clause. Columns not being modified explicitly retain their previous values. The SET ROW clause enables you to update a target record using a record-type variable or row-type objects. The record or row used must have data types compatible with the table's columns, in order. | Oracle compatibility |
 | Enhancement | EDB Postgres Advanced Server now provides INDEX and NO_INDEX hints for the partitioned table. The optimizer hints apply to the inherited index in the partitioned table. The execution plan internally expands to include the corresponding inherited child indexes and applies them in later processing. | |
-| Enhancement | Adding SQLCODE() and SQLERRM()functions. In an exception handler, the SQLCODE function returns the numeric code of the exception being handled. Outside an exception handler, SQLCODE returns 0. The SQLERRM function returns the error message associated with an SQLCODE variable value. If the error code value is passed to the SQLERRM function it returns an error message associated with the passed error code value, irrespective of the current error raised. | |
-| Enhancement | Adding TO_MULTI_BYTE() and TO_SINGLE_BYTE() functions. | Oracle compatibility |
-| Enhancement | Adding TO_NCHAR()function. TO_NCHAR() is the wrapper function that casts input to NVARCHAR2. Note that the size of the input is limited to PostgreSQL's supported size limit for that type. | |
-| Enhancement | Adding TO_DSINTERVAL() function. Converts a character string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 datatype to an interval datatype. | |
-| Enhancement | Adding FROM_TZ() function. Converts a TIMESTAMP value and a time zone value to an equivalent TIMESTAMP WITH TIME ZONE value. | |
+| Enhancement | Added the SQLCODE() and SQLERRM() functions. In an exception handler, the SQLCODE function returns the numeric code of the exception being handled. Outside an exception handler, SQLCODE returns 0. The SQLERRM function returns the error message associated with an SQLCODE variable value. If the error code value is passed to the SQLERRM function, it returns an error message associated with the passed error code value, regardless of the current error raised. | |
+| Enhancement | Added the TO_MULTI_BYTE() and TO_SINGLE_BYTE() functions. | Oracle compatibility |
+| Enhancement | Added the TO_NCHAR() function, a wrapper function that casts input to NVARCHAR2. The size of the input is limited to the PostgreSQL supported size limit for that type. | |
+| Enhancement | Added the TO_DSINTERVAL() function, which converts a character string of CHAR, VARCHAR2, NCHAR, or NVARCHAR2 data type to an interval data type. | |
+| Enhancement | Added the FROM_TZ() function, which converts a TIMESTAMP value and a time zone value to an equivalent TIMESTAMP WITH TIME ZONE value. | |
 | Enhancement | Adding TO_CLOB() and TO_BLOB() functions. These are the only wrapper functions that cast input to CLOB or BLOB types respectively. | |
-| Enhancement | EDB Postgres Advanced Server users can now view the package specification and package body definition using the psql meta-commands `\sps` and `\spb`, respectively. | |
+| Enhancement | You can now view the package specification and package body definition using the psql meta-commands `\sps` and `\spb`, respectively. | |
 | Enhancement | `index _advisor` is now a separate extension. | Index advisor |
-| Change | The Window installer no longer installs pgAdmin and the parallel-clone and clonescheme extensions are no longer included in an EDB Postgres Advanced Server installation. To download pgAdmin, see the [pgAdmin download page](https://www.pgadmin.org/download/). |
-
-
+| Change | The Windows installer no longer installs pgAdmin, and the parallel-clone and clonescheme extensions are no longer included in an EDB Postgres Advanced Server installation. To download pgAdmin, see the [pgAdmin download page](https://www.pgadmin.org/download/). |
diff --git a/product_docs/docs/pem/9/pem_rel_notes/910_rel_notes.mdx b/product_docs/docs/pem/9/pem_rel_notes/910_rel_notes.mdx
index 2e34db90b9e..6ea2c4e920e 100644
--- a/product_docs/docs/pem/9/pem_rel_notes/910_rel_notes.mdx
+++ b/product_docs/docs/pem/9/pem_rel_notes/910_rel_notes.mdx
@@ -6,8 +6,8 @@ New features, enhancements, bug fixes, and other changes in PEM 9.1.0 include:
 
 | Type | Description |
 | ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| Enhancement | Support for EDB Advanced Server 15 and EDB Postgres Distributed 5. |
+| Enhancement | Added support for EDB Advanced Server 15 and EDB Postgres Distributed 5. |
 | Enhancement | Added the ability to copy probes and alerts to all servers in a group without having to select them individually. |
-| Bug Fix | Fixed an issue whereby alert notifications were sent for Low or Medium alerts when the user had selected not to receive them. [Support Ticket #87664] |
-| Bug Fix | Fixed an issue whereby webhooks would not be called if there was an inactive endpoint in the spool. [Support Ticket #87365] |
-| Other | The PGD Worker Errors probe and associated alert template and dashboard chart have been removed as they were found to produce incorrect data. They will be replaced with a new probe in a future release. |
\ No newline at end of file
+| Bug Fix | Fixed an issue whereby alert notifications were sent for Low or Medium alerts when the user disabled the option to send them. [Support Ticket #87664] |
+| Bug Fix | Fixed an issue whereby webhooks weren't called if there was an inactive endpoint in the spool. [Support Ticket #87365] |
+| Other | The PGD Worker Errors probe and associated alert template and dashboard chart were removed as they produced incorrect data. They will be replaced with a new probe in a future release. |
\ No newline at end of file