
Commit

Merge pull request #5757 from EnterpriseDB/release/2024-06-11a
Release: 2024-06-11a
gvasquezvargas authored Jun 11, 2024
2 parents ced8e41 + be7be76 commit 200c7ac
Showing 76 changed files with 2,312 additions and 852 deletions.
56 changes: 56 additions & 0 deletions advocacy_docs/edb-postgres-ai/analytics/how_to_lakehouse_sync.mdx
@@ -0,0 +1,56 @@
---
title: Lakehouse Sync
navTitle: Lakehouse Sync
description: How to perform a Lakehouse Sync.
deepToC: true
---

## Overview

Performing a Lakehouse Sync is a way to capture information from a transactional database at a point in time and sync that information to a Managed Storage Location (MSL).

The Lakehouse sync process organizes the transactional database data into Lakehouse tables stored in the MSL. This process allows the data to be queried by a Lakehouse node, which is optimized for higher-performance queries using a vectorized query engine designed for Lakehouse tables.
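
Once a sync completes, the Lakehouse tables can be queried from a Lakehouse node with ordinary SQL. A minimal sketch, assuming a Lakehouse node is already provisioned; the host, user, database, and `sales` table names are placeholders, not actual values from this guide:

```shell
# Connect to the Lakehouse node with psql and query a synced table
# (all connection values and the table name below are placeholders).
psql "host=<lakehouse-node-host> port=5432 user=<user> dbname=<dbname>" \
     -c "SELECT count(*) FROM public.sales;"
```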

## Performing a Lakehouse Sync

### Prerequisites

- A Postgres cluster hosted and managed by EDB Postgres AI® Cloud Service

### Navigate to Lakehouse Sync

1. Go to the [EDB Postgres AI Console]().

2. From the landing page, select the project with the database instance you want to sync. If it is not shown on the landing page, select the **View Projects** link in the **Projects** section and select your project from there.

3. Select the **Migrate** dropdown in the left navigation bar and then select **Migrations**.

4. Select the **Create New Migration** button.

### Define Lakehouse Sync

5. Give the sync a **Name**, then select a **Source Cluster** and the **Database** you want to sync.

6. If you have already created an MSL you want to use, select that MSL from the list of available MSLs and move on to [Selecting Tables](#selecting-tables) below. If not, select the **Create New Managed Storage Location** button to open the **Add Managed Storage Location** dialog.

7. Select the AWS region for the new MSL.

8. Set a location prefix in the form near the bottom of the **Add Managed Storage Location** dialog to complete the definition of the MSL. A location prefix is a unique name used to identify any resources and assets associated with the MSL.

![List of MSLs](./images/msl_list.png)

9. Select the **Create Managed Storage Location** button.

### Selecting Tables

10. Select the **Tables** tab next to the **Get Started** tab near the top of the page, and select the tables and columns you want to include in the migration.

### Start Lakehouse Sync

11. Select the **Start Lakehouse Sync** button.

12. If the sync starts successfully, it appears with the **Creating** status under **MOST RECENT** migrations on the **Migrations** page. A sync can take several hours, depending on how much data is being synchronized.

!!! Warning
The first sync in a project can take a couple of hours while the required infrastructure is provisioned.
!!!
3 changes: 3 additions & 0 deletions advocacy_docs/edb-postgres-ai/analytics/images/msl_list.png
2 changes: 1 addition & 1 deletion advocacy_docs/edb-postgres-ai/console/agent/index.mdx
@@ -12,7 +12,7 @@ navigation:

To monitor your self-managed Postgres database with Beacon Agent, you will need to:

* [Create a machine user](create-machine-user) in the EDB Postgres® AI Console. This will provide an access key for the agent.
* [Create a machine user](create-machine-user) in the EDB Postgres® AI Console. This provides an access key for the agent.
* [Install Beacon Agent](install-agent) on the server where your Postgres instance is running. You will use the access key to enable the agent to communicate with the EDB Postgres AI Estate service.
* [Run Beacon Agent as a service](agent-as-a-service) to have it start automatically on system startup and restart after errors.
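
For orientation, running an agent as a service on a systemd-based host generally looks like the sketch below. The unit name `beacon-agent.service` is a placeholder here; see [Run Beacon Agent as a service](agent-as-a-service) for the actual unit name and setup.

```shell
# Enable and start the agent so it starts at boot and restarts after errors
# (the unit name is a placeholder; check the linked page for the real one).
sudo systemctl enable --now beacon-agent.service
sudo systemctl status beacon-agent.service
```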

121 changes: 121 additions & 0 deletions advocacy_docs/edb-postgres-ai/console/estate/monitor_aws.mdx
@@ -0,0 +1,121 @@
---
title: Monitoring AWS resources in EDB Postgres AI
navTitle: Cloud Hosted Databases - AWS resources
description: How to monitor AWS resources in EDB Postgres AI Estate.
deepToC: true
---

## Overview

Setting up the EDB Postgres® AI Console to monitor your RDS instances and S3 buckets on AWS involves adding a specific policy and role in AWS. Once these are configured, you enter the ARN of the newly created role into the **Cloud Hosted Databases** UI, accessible from the **Estate** page in the EDB Postgres AI Console.

Using this role ARN and the custom policy, the EDB Postgres AI server has access to the RDS and S3 information in your AWS account.

After providing the role ARN in the Cloud Hosted Databases UI, you will see the selected AWS resources (RDS instances and/or S3 buckets) in the chosen AWS regions on your **Estate** page in the **Cloud Hosted Databases** section.

## Setting up monitoring of AWS resources in EDB Postgres AI Estate

### Starting the Cloud Hosted Databases UI

1. Go to **EDB Postgres AI Console**.

2. Scroll down to the **Cloud Hosted Databases** section, select the **Manage Access** button, and choose your project.

3. The **Cloud Hosted Databases** UI shows **Step 1 - Create custom policy**.

### Creating the AWS custom policy

4. Go to the console of your AWS account with the RDS instances and S3 buckets you want to monitor.

5. Navigate to IAM, and in the navigation pane on the left side of the AWS console, select **Policies**.

6. On the **Policies** dashboard page, select the **Create policy** button.

7. In the **Policy editor** section, choose the JSON option.

8. Type or paste the following JSON policy document into the JSON editor:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",
                "s3:ListAllMyBuckets",
                "rds:DescribeDBClusters"
            ],
            "Resource": "*"
        }
    ]
}
```

9. Select **Next**, give the policy a name (for example, `edb-postgres-ai-addon-policy`), and select **Create policy**. This policy allows the EDB Postgres AI server to query metadata about your AWS RDS and S3 services.
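
If you prefer to script this step, an equivalent AWS CLI sketch follows, assuming the JSON policy document above is saved as `edb-monitoring-policy.json` (the file name is only an example, and the policy name matches the example used in step 9):

```shell
# Create the custom policy from the JSON document shown above.
aws iam create-policy \
    --policy-name edb-postgres-ai-addon-policy \
    --policy-document file://edb-monitoring-policy.json
```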


### Creating the AWS role

10. Next, in the Cloud Hosted Databases UI, select the **Next: Create a Role** button. The Cloud Hosted Databases UI should now show **Step 2 - Create a Role**.

11. Go to the AWS console UI, and in the left-hand navigation pane, choose **Roles** and then select the **Create role** button.

12. Select **Custom trust policy** role type.

13. In the **Custom trust policy** section, paste the trust policy you obtained from **Step 2** in the Cloud Hosted Databases UI. It looks similar to this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::292478331082:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalId": "<project-id>"
                }
            }
        }
    ]
}
```

!!! Note
The EDB Postgres AI Cloud Hosted Databases UI shows a snippet like the one above but with the `<project-id>` already specified.
!!!

14. Select the **Next** button.

15. Select the policy you created earlier. In this example, we used `edb-postgres-ai-addon-policy`.

16. Select the **Next** button.

17. Give the role a name that starts with `biganimal-role`, such as `biganimal-role-beacon`.

18. Select the **Create role** button.
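
The console steps above can also be scripted. A sketch with the AWS CLI, assuming the trust policy from the Cloud Hosted Databases UI is saved as `trust-policy.json`, `<account-id>` is your own AWS account ID, and you use the example names from this page:

```shell
# Create the role with the trust policy supplied by the Cloud Hosted Databases UI,
# then attach the custom policy created earlier.
aws iam create-role \
    --role-name biganimal-role-beacon \
    --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
    --role-name biganimal-role-beacon \
    --policy-arn arn:aws:iam::<account-id>:policy/edb-postgres-ai-addon-policy
```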

### Entering the role ARN into the EDB Postgres AI UI

19. Still in the AWS console, select the **View role** button in the green banner at the top of the **Roles** dashboard.

20. Copy the role ARN from the **Summary** section of the role page in the AWS console and paste it into the **Role ARN** field at the bottom of the Cloud Hosted Databases UI.
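
Alternatively, you can print the ARN from the command line (a sketch, assuming the example role name used earlier):

```shell
# Print only the role ARN.
aws iam get-role --role-name biganimal-role-beacon \
    --query 'Role.Arn' --output text
```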

21. Select the **Next: Regions and Services** button in the Cloud Hosted Databases UI to move to the next step.

### Selecting the scope of regions and services

22. For **Step 3 - Regions and Services**, select the regions that you want to monitor and the services you want to monitor in those regions.

23. Select the **Next: Review and submit** button.

24. Review your regions and services selections, then select the **Submit** button. If you notice a mistake, you can always use the **Prev: Regions and Services** button and go back a step.

25. Upon success, you will see a notification at the top of the Estate page saying, "The configuration has been submitted successfully."

26. Within a moment, you should start to see the **Cloud Hosted Databases** section of your **Estate** page populate with the available S3 buckets and RDS instances.
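
To cross-check what should appear on the **Estate** page, you can list the same resources the policy exposes (a sketch, run with credentials for the monitored account and one of the regions you selected):

```shell
# RDS instances in one of the selected regions (region is an example).
aws rds describe-db-instances --region us-east-1 \
    --query 'DBInstances[].DBInstanceIdentifier'

# S3 buckets in the account (bucket listing is not region-scoped).
aws s3api list-buckets --query 'Buckets[].Name'
```
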
@@ -98,12 +98,12 @@ The `page-header-check` option has been introduced in the [2.46](https://github.

#### pgBackRest

[`archive-header-check`](https://pgbackrest.org/configuration.html#section-archive/option-archive-header-check)
[`checksum-page`](https://pgbackrest.org/configuration.html#section-backup/option-checksum-page)
[`delta`](https://pgbackrest.org/configuration.html#section-general/option-delta)
[`log-level-console`](https://pgbackrest.org/configuration.html#section-log/option-log-level-console)
[`log-level-file`](https://pgbackrest.org/configuration.html#section-log/option-log-level-file)
[`page-header-check`](https://pgbackrest.org/configuration.html#section-backup/option-page-header-check)
[`process-max`](https://pgbackrest.org/configuration.html#section-general/option-process-max)
[`repo-cipher-pass`](https://pgbackrest.org/configuration.html#section-repository/option-repo-cipher-pass)
[`repo-cipher-type`](https://pgbackrest.org/configuration.html#section-repository/option-repo-cipher-type)
* [`archive-header-check`](https://pgbackrest.org/configuration.html#section-maintainer/option-archive-header-check)
* [`checksum-page`](https://pgbackrest.org/configuration.html#section-backup/option-checksum-page)
* [`delta`](https://pgbackrest.org/configuration.html#section-general/option-delta)
* [`log-level-console`](https://pgbackrest.org/configuration.html#section-log/option-log-level-console)
* [`log-level-file`](https://pgbackrest.org/configuration.html#section-log/option-log-level-file)
* [`page-header-check`](https://pgbackrest.org/configuration.html#section-maintainer/option-page-header-check)
* [`process-max`](https://pgbackrest.org/configuration.html#section-general/option-process-max)
* [`repo-cipher-pass`](https://pgbackrest.org/configuration.html#section-repository/option-repo-cipher-pass)
* [`repo-cipher-type`](https://pgbackrest.org/configuration.html#section-repository/option-repo-cipher-type)
@@ -0,0 +1,35 @@
---
title: 'Non-superuser support with pgBackRest'
description: "How to configure a non-superuser as the pgBackRest user"
---

pgBackRest supports non-superuser backups and restores. This feature is useful when you want to delegate backup and restore tasks to non-superusers. To configure non-superuser support, you need to grant the necessary permissions to the non-superuser.
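
If the `pgbackrest` database role doesn't exist yet, create it first. A minimal sketch, assuming you connect as a local superuser; the superuser and database names below are placeholders:

```shell
# Create a login role for pgBackRest (no superuser privileges needed).
psql -U postgres -d postgres -c "CREATE ROLE pgbackrest WITH LOGIN;"
```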

For example, to allow the `pgbackrest` user to perform backups and restores, you can grant the following permissions:

```sql
GRANT pg_read_all_settings TO pgbackrest;
```

For EDB Postgres Advanced Server 14 and later:

```sql
GRANT EXECUTE ON FUNCTION pg_switch_wal TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_start_backup(text, boolean, boolean) TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_stop_backup(boolean, boolean) TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_create_restore_point(text) TO pgbackrest;
```

For EDB Postgres 15 and later:

In EDB Postgres 15 and later, the `pg_start_backup` and `pg_stop_backup` functions are renamed to `pg_backup_start` and `pg_backup_stop`.

```sql
GRANT EXECUTE ON FUNCTION pg_switch_wal TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_backup_start(label text, fast boolean) TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_backup_stop(wait_for_archive boolean) TO pgbackrest;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_create_restore_point(text) TO pgbackrest;
```
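
Once the grants are in place, pgBackRest needs to connect as that role rather than as a superuser. A sketch of one way to do this, assuming a stanza named `main`, a backup OS user of `postgres`, and a pgBackRest version that supports the `pg-user` stanza option (check the configuration reference for your version):

```shell
# Append the database user to the stanza configuration
# (adjust the stanza name and file path to your setup).
sudo tee -a /etc/pgbackrest.conf > /dev/null <<'EOF'
[main]
pg1-user=pgbackrest
EOF

# Run a backup as the OS user that owns the pgBackRest configuration.
sudo -u postgres pgbackrest --stanza=main backup
```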


@@ -1,3 +1,11 @@
{% extends "products/postgres-enterprise-manager-server/base.njk" %}
{% set platformBaseTemplate = "almalinux-8-or-rocky-linux-8" %}
{% block prerequisites %}{% endblock prerequisites %}
{% set ssutilsName %}sslutils_<x> postgresql<x>-contrib{% endset %}
{% set ssutilsExtendedName %}edb-postgresextended<x>-contrib{% endset %}
{% set ssutilsExtendedFirstName %}edb-postgresextended<x>-sslutils{% endset %}
{% block prerequisites %}{% endblock prerequisites %}
{% block firewallCommand %}```shell
firewall-cmd --permanent --zone=public --add-port=8443/tcp

firewall-cmd --reload
```{% endblock firewallCommand %}
@@ -1,3 +1,11 @@
{% extends "products/postgres-enterprise-manager-server/base.njk" %}
{% set platformBaseTemplate = "almalinux-9-or-rocky-linux-9" %}
{% block prerequisites %}{% endblock prerequisites %}
{% set ssutilsName %}sslutils_<x> postgresql<x>-contrib{% endset %}
{% set ssutilsExtendedName %}edb-postgresextended<x>-contrib{% endset %}
{% set ssutilsExtendedFirstName %}edb-postgresextended<x>-sslutils{% endset %}
{% block prerequisites %}{% endblock prerequisites %}
{% block firewallCommand %}```shell
firewall-cmd --permanent --zone=public --add-port=8443/tcp

firewall-cmd --reload
```{% endblock firewallCommand %}
@@ -1,5 +1,9 @@
{% extends "platformBase/" + platformBaseTemplate + '.njk' %}
{% set packageName %}edb-pem{% endset %}
{% set ssutilsName = ssutilsName or 'postgresql-<x>-sslutils' %}
{% set ssutilsExtendedFirstName = ssutilsExtendedFirstName or 'edb-postgresextended-sslutils-<x>'%}
{% set ssutilsExtendedName = ssutilsExtendedName or '' %}
{% set upgradeCommand = upgradeCommand or 'upgrade' %}
{% import "platformBase/_deploymentConstants.njk" as deploy %}
{% block frontmatter %}
{#
@@ -17,38 +17,65 @@ redirects:

{% endblock frontmatter %}

{% block product_prerequisites %}
- Set up the repository.

Setting up the repository is a one-time task. If you have already set up your repository, you do not need to perform this step.
{%- filter indent(2) -%}
{% block repocheck %}
To determine if your repository exists, enter this command:

`dnf repolist | grep enterprisedb`
{% endblock repocheck %}
{%- endfilter %}
{% block introductory_notes %}You can install PEM on a single server, or you can install the web application server and the backend database on two separate servers. You must prepare your servers for PEM installation.

- To set up the EDB repository:
After fulfilling the prerequisites and completing the installation procedure described in the following steps, you must [configure](/pem/9/installing/configuring_the_pem_server_on_linux.mdx) PEM. If you're using two servers, install and configure PEM on both servers.{% endblock introductory_notes %}

1. Go to [EDB repositories](https://www.enterprisedb.com/repos-downloads).
{% block product_prerequisites %}

1. Select the button that provides access to the EDB repo.

1. Select the platform and software that you want to download.
1. Install a [supported Postgres instance](/pem/latest/#postgres-compatibility) for PEM to use as a backend database.

- To set up the PostgreSQL community repository, go to the [downloads page for PostgreSQL](https://www.postgresql.org/download/).
You can install this instance on the same server to be used for the PEM web application or on a separate server. You can also use an existing Postgres instance if it is configured as detailed in the next steps.

!!! Note
2. Configure authentication on the Postgres backend database by updating the `pg_hba.conf` file.

The PostgreSQL community repository is required only if you are using PostgreSQL as the backend database for PEM server.
Make the following changes manually, prior to configuration. (Additional changes are necessary during [configuration](/pem/8/installing/configuring_the_pem_server_on_linux.mdx).)

- To create the relations required for PEM, the PEM configuration script connects to the Postgres backend database as a superuser of your choice using password authentication. This requires you to permit your chosen superuser to authenticate using a password. This user must be able to connect from any location where you run the configuration script. In practice, this means the server where the backend database is located and the server where the PEM web application is to be installed, if they're different.

!!!
- To allow the chosen superuser to connect using password authentication, add a line to `pg_hba.conf` that allows `host` connections using `md5` or `scram-sha-256` authentication, such as `host all superusername 127.0.0.1/32 scram-sha-256`.

- Install the Postgres server. See [Installing EDB Postgres Advanced Server on Linux](/epas/latest/installing/) or [Installing PostgreSQL](/supported-open-source/postgresql/installing/).
!!! Note
If you're using EDB Postgres Advanced Server, see [Modifying the pg_hba.conf file](/pem/latest/managing_database_server/#modifying-the-pg_hbaconf-file).

If you're using PostgreSQL, see [Client Authentication](https://www.postgresql.org/docs/current/auth-pg-hba-conf.html).
!!!

- Review [configuration and authentication requirements](../prerequisites/) for PEM.
3. Verify that the `sslutils` extension is installed on your Postgres server.

If you're using PostgreSQL or EDB Postgres Extended Server on RHEL/AlmaLinux/Rocky Linux or SLES, you also need to install the `hstore` contrib module.

- If you're using EDB Postgres Advanced Server, you can install the `sslutils` extension as follows, where `<x>` is the EDB Postgres Advanced Server version.

```shell
sudo {{packageManager}} install edb-as<x>-server-sslutils
```

- If you're using PostgreSQL, you can install the `sslutils` and, if required, `hstore` modules as follows, where `<x>` is the PostgreSQL version.

```shell
sudo {{packageManager}} install {{ssutilsName}}
```

- If you're using EDB Postgres Extended Server, you can install the `sslutils` and, if required, `hstore` modules as follows, where `<x>` is the EDB Postgres Extended Server version.

```shell
sudo {{packageManager}} install {{ssutilsExtendedFirstName}} {{ssutilsExtendedName}}
```

{% block debianUbuntuNote %}{% endblock debianUbuntuNote %}

4. If you're using a firewall, allow access to port 8443 on the server where the PEM web application will be located:

{% block firewallCommand %}{% endblock firewallCommand %}{% block firewallDebianCommand %}{% endblock firewallDebianCommand %}

5. Make sure the components Postgres Enterprise Manager depends on are up to date on all servers. You can do this by updating the whole system using your package manager as shown below.
If you prefer to update individual packages, a full list of dependencies is provided in [Dependencies of the PEM Server and Agent on Linux](../dependencies.md).

```shell
sudo {{packageManager}} {{upgradeCommand}}
```

{% endblock product_prerequisites %}
{% block postinstall %}
## Initial configuration
@@ -63,4 +63,8 @@ For more details, see [Configuring the PEM server on Linux](../configuring_the_p
!!! Note

- The operating system user pem is created while installing the PEM server. The PEM server web application is a WSGI application, which runs under Apache HTTPD. The pem application data and session data are saved to this user's home directory.

## Supported locales

Currently, the Postgres Enterprise Manager server and web interface support a locale of `English(US) en_US` and use of a period (.) as a language separator character. Using an alternate locale or a separator character other than a period might cause errors.
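
To check the active locale on the server before installing, a quick sketch on a systemd-based host (any method that displays the locale works):

```shell
localectl status | grep -i "system locale"
```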
{% endblock postinstall %}