Merge pull request #5790 from EnterpriseDB/release/20-06-2024a
Release: 20-06-2024a
gvasquezvargas authored Jun 20, 2024
2 parents dade85f + eee2d44 commit 4ca8413
Showing 22 changed files with 154 additions and 90 deletions.
8 changes: 4 additions & 4 deletions advocacy_docs/edb-postgres-ai/analytics/concepts.mdx
@@ -14,7 +14,7 @@ operational data on the EDB Postgres® AI platform.

Here's how it fits together:

[![Level 50 basic architecture](/svg/edb-postgres-ai/analytics/level-50.svg)](/svg/edb-postgres-ai/analytics/level-50.svg)
[![Level 50 basic architecture](images/level-50.svg)](images/level-50.svg)

### Lakehouse node

@@ -103,7 +103,7 @@ its attached hard drives. Instead, you can keep data in object storage (and also
in highly compressible formats), and only provision the compute needed to query
it when necessary.

[![Level 100 Architecture](/svg/edb-postgres-ai/analytics/level-100.svg)](/svg/edb-postgres-ai/analytics/level-100.svg)
[![Level 100 Architecture](images/level-100.svg)](images/level-100.svg)

On the compute side, a Vectorized Query Engine is optimized to query Lakehouse
Tables, but still falls back to Postgres for full compatibility.
@@ -115,10 +115,10 @@ columnar storage formats optimized for Analytics.

Here's a slightly more comprehensive diagram of how these services fit together:

[![Level 200 Architecture](/svg/edb-postgres-ai/analytics/level-200.svg)](/svg/edb-postgres-ai/analytics/level-200.svg)
[![Level 200 Architecture](images/level-200.svg)](images/level-200.svg)

### Level 300

Here's the more detailed, zoomed-in view of "what's in the box":

[![Level 200 Architecture](/svg/edb-postgres-ai/analytics/level-300.svg)](/svg/edb-postgres-ai/analytics/level-300.svg)
[![Level 300 Architecture](images/level-300.svg)](images/level-300.svg)
@@ -52,5 +52,5 @@ The Lakehouse sync process organizes the transactional database data into Lakeho
12. If successful, you will see your Lakehouse sync with the 'Creating' status under 'MOST RECENT' migrations on the Migrations page. The time taken to perform a sync can depend upon how much data is being synchronized and may take several hours.

!!! Warning
The first sync in a project will take a couple hours due to the provisioning of required infrastructure.
The first sync in a project will take a couple of hours due to the provisioning of the required infrastructure.
!!!
7 changes: 7 additions & 0 deletions gatsby-node.js
@@ -76,6 +76,13 @@ exports.onCreateNode = async ({
mimeType: "text/plain; charset=utf-8",
});
}

// For "natural" linking to work, this also depends on support from the link-rewriter in src/components/layout.js
if (node.extension === "svg") {
await makeFileNodePublic(node, createNodeId, actions, {
mimeType: "image/svg+xml",
});
}
}

if (node.internal.type !== "Mdx") return;
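
For context, here's a minimal, hypothetical sketch of what a helper like `makeFileNodePublic` might do: copy the file into Gatsby's `public/` directory so it's served verbatim at a stable URL. This is an illustration of the idea, not the repo's actual implementation:

```js
// Hypothetical sketch only; the real makeFileNodePublic lives elsewhere in this repo.
// Assumptions: fs-extra is available (a common Gatsby dependency), and "public"
// means copying the file into Gatsby's public/ directory so it's served as-is.
const path = require("path");
const fs = require("fs-extra");

async function makeFileNodePublic(node, createNodeId, actions, { mimeType }) {
  const dest = path.join(process.cwd(), "public", node.relativePath);
  await fs.ensureDir(path.dirname(dest)); // create parent directories if needed
  await fs.copy(node.absolutePath, dest); // serve the raw file (e.g., an SVG)
  // createNodeId, actions, and mimeType would come into play if the helper
  // instead registered a node so the file is served with an explicit Content-Type.
}
```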
@@ -7,8 +7,5 @@ BigAnimal's May 2024 release includes the following enhancements and bug fixes:

| Type | Description |
|-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Enhancement | As part of this launch, we have rebranded BigAnimal to the EDB Postgres AI Cloud Service. Our cloud service is part of a broader launch of [EDB Postgres AI](https://www.enterprisedb.com/products/edb-postgres-ai). EDB Postgres AI is an intelligent platform for transactional, analytical, and AI workloads managed across hybrid and multi-cloud environments.<br/><br/>Through the EDB Postgres AI console, users can now discover new capabilities including:<br/><br/>• **Analytics**: Postgres Lakehouse clusters are now available in the EDB Postgres AI Cloud Service (formerly named BigAnimal), enabling 30X average faster analytical queries versus standard Postgres. This is initially limited to AWS-hosted customers only.<br/><br/>• **Hybrid data estate management**: Customers can now have visibility of their self-managed Postgres databases, RDS Postgres databases, and EDB Postgres AI Cloud Service DB clusters from a “single pane of glass.” Using an installable agent, customers can collect metadata from their self-managed DBs and view it in the EDB Postgres AI Console (formerly BigAnimal console).<br/><br/>With this release, we also added a new **Health Status** tab to the cluster view. This provides real-time insight into the topology and heath of the cluster including the role of each node in the cluster, the status of each replication slot and a selection of key health metrics. |
| Enhancement | As part of this launch, we rebranded BigAnimal to the EDB Postgres AI Cloud Service. Our cloud service is part of a broader launch of [EDB Postgres AI](https://www.enterprisedb.com/products/edb-postgres-ai). EDB Postgres AI is an intelligent platform for transactional, analytical, and AI workloads managed across hybrid and multi-cloud environments.<br/><br/>Through the EDB Postgres AI Console, you can now discover new capabilities including:<br/><br/>• **Analytics**: Postgres Lakehouse clusters are now available in the EDB Postgres AI Cloud Service, enabling analytical queries that are, on average, 30X faster than standard Postgres. This feature is initially limited to AWS-hosted customers only.<br/><br/>• **Hybrid data estate management**: You can now have visibility into your self-managed Postgres databases, RDS Postgres databases, and EDB Postgres AI Cloud Service DB clusters from a “single pane of glass.” Using an installable agent, you can collect metadata from your self-managed DBs and view it in the EDB Postgres AI Console.<br/><br/>With this release, we also added a new **Health Status** tab to the cluster view. This tab provides real-time insight into the topology and health of the cluster, including the role of each node in the cluster, the status of each replication slot, and a selection of key health metrics. |
| Enhancement | BigAnimal now supports the pg_squeeze extension for clusters running on PostgreSQL, EDB Postgres Extended Server, and EDB Postgres Advanced Server. Learn more about how pg_squeeze can enhance disk space efficiency and improve query performance [here](https://www.enterprisedb.com/docs/pg_extensions/pg_squeeze/). |



7 changes: 2 additions & 5 deletions product_docs/docs/efm/4/efm_rel_notes/index.mdx
@@ -1,11 +1,8 @@
---
title: "Release notes"
---
The Failover Manager documentation describes the latest version of Failover Manager 4,
including minor releases and patches. These release notes
cover what was new in each release. For new functionality introduced
in a minor or patch release, there are also indicators in the content
about the release that introduced the feature.
The Failover Manager documentation describes the latest version of Failover Manager 4. These release notes
cover what was new in each release.

| Version | Release Date |
| ------- | ------------ |
22 changes: 11 additions & 11 deletions product_docs/docs/pem/9/monitoring_performance/notifications.mdx
@@ -140,33 +140,33 @@ Use the **Notifications** tab to specify an alert level for webhook endpoints:
- Set **All alerts** to **Yes** to enable all alert levels to send notifications.
- To send a notification when a specific alert level is reached, set the slider next to an alert level to **Yes**. You must set **All alerts** to **No** to configure an individual alert level.

### Example: sending notifications to Slack
### Example: Sending notifications to Slack

In Slack, follow the instructions in [Getting started with incoming webhooks](https://api.slack.com/messaging/webhooks) to:
- Create a Slack app
- Activate incoming webhooks for that app
- Add a webhook that posts to a channel or user of your choice
- Create a Slack app.
- Activate incoming webhooks for that app.
- Add a webhook that posts to a channel or user of your choice.

The newly created webhook will have a unique URL something like https://hooks.slack.com/services/x/y/z . We can now configure PEM to send notifications to this URL.
The new webhook has a unique URL similar to `https://hooks.slack.com/services/x/y/z`. You can now configure PEM to send notifications to this URL.

In PEM, [create a new webhook](#creating-a-webhook), give it a descriptive name and copy the URL you obtained above to the 'URL' field.
Ensure that 'Request method' is set to 'POST' and 'Enable?' is set to 'Yes'. Set all the sliders under 'Alert Notifications' to 'Yes'.
In PEM, [create a new webhook](#creating-a-webhook), give it a descriptive name, and copy the URL you obtained earlier to the **URL** field.
Ensure that **Request method** is set to **POST** and **Enable?** is set to **Yes**. Set all the sliders under **Alert Notifications** to **Yes**.

Add a header under HTTP headers with the key `Content-Type` and the value `application/json`.

Under Payload, delete the default template and specify a template with `text` as the top-level key as in the example below.
Under **Payload**, delete the default template and specify a template with `text` as the top-level key as in the following example:

```
{"text": "%AlertName% on %ObjectType% %ObjectName% is now %CurrentState%"}
```

You can now test the connection. If it succeeds, you will get a notification in PEM, and the template above will appear in your Slack channel as a message.
You can now test the connection. If it succeeds, PEM issues a notification, and the template you specified appears in your Slack channel as a message.
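
To sanity-check the Slack side independently of PEM, you can post the same payload straight to the webhook URL (a quick sketch; substitute your own webhook URL):

```shell
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Test message for the PEM alerts channel"}' \
  https://hooks.slack.com/services/x/y/z
```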

Save the webhook and continue using PEM as usual. Now, PEM will send all the alerts to your Slack channel.
Save the webhook and continue using PEM as usual. PEM now sends all the alerts to your Slack channel.

### Deleting a webhook

To mark a webhook for deletion, in the Webhooks table, select the webhook name and select **Delete** to the left of the name. The alert remains in the list but in strike-through font.
To mark a webhook for deletion, in the Webhooks table, select the webhook name and select **Delete** to the left of the name. The alert remains in the list but in strikethrough font.

**Delete** is a toggle. You can undo the deletion by selecting **Delete** a second time. Select **Save** to permanently delete the webhook definition.

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/appusage/behavior.mdx
@@ -95,7 +95,7 @@ partitions are replicated downstream.
By default, triggers execute only on the origin node. For example, an INSERT
trigger executes on the origin node and is ignored when you apply the change on
the target node. You can specify for triggers to execute on both the origin node
at execution time and on the target when it's replicated ("apply time") by using
at execution time and on the target when it's replicated (*apply time*) by using
`ALTER TABLE ... ENABLE ALWAYS TRIGGER`. Or, use the `REPLICA` option to execute
only at apply time: `ALTER TABLE ... ENABLE REPLICA TRIGGER`.
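
For example, with a hypothetical table `orders` and trigger `orders_audit_trg` (names are illustrative):

```sql
-- Fire on the origin at execution time and again on each target at apply time
ALTER TABLE orders ENABLE ALWAYS TRIGGER orders_audit_trg;

-- Or fire only at apply time, when the change is applied on target nodes
ALTER TABLE orders ENABLE REPLICA TRIGGER orders_audit_trg;
```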

2 changes: 1 addition & 1 deletion product_docs/docs/pgd/5/appusage/index.mdx
@@ -34,4 +34,4 @@ Developing an application with PGD is mostly the same as working with any Postgr

* [Table access methods](table-access-methods) (TAMs) notes the TAMs available with PGD and how to enable them.

* [Feature compatibility](feature-compatibility) shows which server features work with which commit scopes and which commit scopes can be daisychained together.
* [Feature compatibility](feature-compatibility) shows which server features work with which commit scopes and which commit scopes can be daisy chained together.
49 changes: 29 additions & 20 deletions product_docs/docs/pgd/5/appusage/table-access-methods.mdx
@@ -3,23 +3,32 @@ title: Use of table access methods (TAMs) in PGD
navTitle: Table access methods
---

PGD 5.0 supports two table access methods (TAMs) released with EDB Postgres 15.0. These
two TAMs were certified and are allowed in PGD 5.0:

* Auto cluster
* Ref data

Any other TAM is restricted until certified by EDB. If you're planning to use
any of the TAMs on a table, you need to configure that TAM on
each participating node in the PGD cluster. To configure auto cluster or ref
data TAM, on each node:

1. Update `postgresql.conf` to specify TAMs `autocluster` or `refdata` for the
`shared_preload_libraries` parameter.
1. Restart the server and execute `CREATE EXTENSION autocluster;` or
`CREATE EXTENSION refdata;`.

After you create the extension, you can use TAM to create a table using `CREATE
TABLE test USING autocluster;` or `CREATE TABLE test USING refdata;`. These commands
replicate to all the PGD nodes. For more information on these table access
methods, see [`CREATE TABLE`](/epas/latest/reference/oracle_compatibility_reference/epas_compat_sql/36_create_table/).
The [EDB Advanced Storage Pack](/pg_extensions/advanced_storage_pack/) provides a selection
of table access methods (TAMs), available from EDB Postgres 15.0.

The following TAMs were certified for use with PGD 5.0:

* [Autocluster](/pg_extensions/advanced_storage_pack/#autocluster)
* [Refdata](/pg_extensions/advanced_storage_pack/#refdata)

Usage of any other TAM is restricted until certified by EDB.

To use one of these TAMs on a PGD cluster, the appropriate extension library
(`autocluster` and/or `refdata`) must be added to the
`shared_preload_libraries` parameter on each node, and the PostgreSQL server
restarted.

Once the extension library is present in `shared_preload_libraries` on all nodes
in the cluster, the extension itself can be created with `CREATE EXTENSION
autocluster;` or `CREATE EXTENSION refdata;`. The `CREATE EXTENSION` command
only needs to be executed on one node; it's replicated to the other
nodes in the cluster.

After you create the extension, use `CREATE TABLE test USING autocluster;` or
`CREATE TABLE test USING refdata;` to create a table with the specified TAM. These commands
replicate to all PGD nodes in the cluster.
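
Putting those steps together for `autocluster`, a minimal sketch (the `postgresql.conf` location and restart method vary by deployment, and the column list is illustrative):

```sql
-- On every node, add the library to postgresql.conf, then restart Postgres:
--   shared_preload_libraries = 'autocluster'

-- On one node only; PGD replicates the DDL to the rest:
CREATE EXTENSION autocluster;

-- Create a table using the TAM; this statement also replicates:
CREATE TABLE test (id integer, payload text) USING autocluster;
```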

For more information on these table access methods, see:

- [Autocluster example](/pg_extensions/advanced_storage_pack/using/#autocluster-example)
- [Refdata example](/pg_extensions/advanced_storage_pack/using/#refdata-example)
4 changes: 1 addition & 3 deletions product_docs/docs/pgd/5/cli/discover_connections.mdx
@@ -84,7 +84,7 @@ As with TPA, EDB PGD for Kubernetes is very flexible, and there are multiple way

Consult your configuration file to determine this information.

Establish a host name or IP address, port, database name, and username. The default database name is `bdrdb`, and the default username is enterprisedb for EDB Postgres Advanced Server and postgres for PostgreSQL and EDB Postgres Extended Server.
Establish a host name or IP address, port, database name, and username. The default database name is `bdrdb`. The default username is `enterprisedb` for EDB Postgres Advanced Server and `postgres` for PostgreSQL and EDB Postgres Extended Server.

You can then assemble a connection string based on that information:

@@ -93,5 +93,3 @@ You can then assemble a connection string based on that information:

```
postgres://<username>@<hostname>:<port>/<dbname>
```

If the deployment's configuration requires it, add `sslmode=<sslmode>`.
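
For example, a complete string for a hypothetical EDB Postgres Advanced Server deployment (host and port are placeholders) might be:

```
postgres://enterprisedb@pgd-host.example.com:5432/bdrdb?sslmode=require
```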


20 changes: 20 additions & 0 deletions product_docs/docs/pgd/5/compatibility.mdx
@@ -0,0 +1,20 @@
---
title: PGD Compatibility by PostgreSQL Version
navTitle: Compatibility
description: Compatibility of EDB Postgres Distributed with different versions of PostgreSQL
---

The following table shows which major versions of PostgreSQL are compatible with each version of EDB Postgres Distributed (PGD).

| Postgres<br/>Version | PGD 5 | PGD 4 | PGD 3.7 | PGD 3.6 |
|----------------------|--------------|--------------|------------------|------------------|
| 16 | [5.3+](/pgd/5/) | | | |
| 15 | [5](/pgd/5/) | | | |
| 14 | [5](/pgd/5/) | [4](/pgd/4/) | | |
| 13 | [5](/pgd/5/) | [4](/pgd/4/) | [3.7](/pgd/3.7/) | |
| 12 | [5](/pgd/5/) | [4](/pgd/4/) | [3.7](/pgd/3.7/) | |
| 11 | | | [3.7](/pgd/3.7/) | [3.6](/pgd/3.6/) |
| 10 | | | | [3.6](/pgd/3.6/) |


