Merge branch 'develop' into docs/edits_to_pg4k_pr5482
gvasquezvargas authored Apr 30, 2024
2 parents f7a1d97 + 06cb37e commit 94357d0
Showing 69 changed files with 2,607 additions and 831 deletions.
@@ -23,7 +23,7 @@ To add a user:
4. Depending on the level of access you want for the user, select the appropriate role.
5. Select **Submit**.

You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](../notifications/#manage-notifications).
You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#manage-notifications).

## Creating a project

@@ -19,13 +19,13 @@ The third-party integrations available in BigAnimal are:

## Metric naming

When metrics from BigAnimal are exported to third-party monitoring services, they are renamed in accordance with the naming conventions of the target platform.
When metrics from BigAnimal are exported to third-party monitoring services, they're renamed according to the naming conventions of the target platform.

The name below provides a mapping between [BigAnimal metric names](/biganimal/release/using_cluster/05_monitoring_and_logging/metrics/)
and the name that metric will be assigned when exported to a third-party services.
The following table provides a mapping between [BigAnimal metric names](/biganimal/release/using_cluster/05_monitoring_and_logging/metrics/)
and the name that metric will be assigned when exported to a third-party service.

!!! Note Kubernetes metrics
In addition to the metrics listed below, which pertain to the Postgres instances, BigAnimal also exports metrics from the underlying Kubernetes infrastructure. These are prefixed with `k8s.`.
In addition to these metrics, which pertain to the Postgres instances, BigAnimal also exports metrics from the underlying Kubernetes infrastructure. These are prefixed with `k8s.`.
!!!

| BigAnimal metric name | Metric name for third-party integrations |
114 changes: 114 additions & 0 deletions product_docs/docs/biganimal/release/using_cluster/pgd_cli_ba.mdx
@@ -0,0 +1,114 @@
---
title: PGD CLI on BigAnimal
navTitle: PGD CLI on BigAnimal
deepToC: true
---

When running a distributed high-availability cluster on BigAnimal, you can use the [PGD CLI](../../../pgd/latest/cli/) to manage cluster operations, such as switching over write leaders, performing cluster health checks, and viewing various details about nodes, groups, or other aspects of the cluster.

## Installing the PGD CLI

To [install the PGD CLI](../../../pgd/latest/cli/installing_cli/), replace `<your-token>` with your EDB subscription token in the following command for Debian and Ubuntu machines:

```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.deb.sh' | sudo -E bash
sudo apt-get install edb-pgd5-cli
```

or in this command for RHEL, Rocky, AlmaLinux, or Oracle Linux machines:

```bash
curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.rpm.sh' | sudo -E bash
sudo yum install edb-pgd5-cli
```

## Connecting to your BigAnimal cluster

### Discovering your database connection string

To connect to your distributed high-availability BigAnimal cluster via the PGD CLI, you need to [discover the database connection string](../../../pgd/latest/cli/discover_connections/) from your BigAnimal console:

1. Log into the [BigAnimal clusters](https://portal.biganimal.com/clusters) view.
1. In the filter, set **Cluster Type** to **Distributed High Availability** to show only clusters that work with PGD CLI.
1. Select your cluster.
1. In the view of your cluster, select the **Connect** tab.
1. Copy the read/write URI from the connection info. This is your connection string.

### Using the PGD CLI with your database connection string

!!! Important
PGD doesn't prompt for interactive passwords. Accordingly, you need a properly configured [`.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) to allow access to the cluster. Your BigAnimal cluster's connection information page has all the information needed for the file.

Without a properly configured `.pgpass` file, you receive a database connection error when running a PGD CLI command, even if you pass the correct database connection string with the `--dsn` flag.
!!!
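
Each line in `.pgpass` uses the format `hostname:port:database:username:password`. As a minimal sketch, using the host, database, and user from the example connection strings in this topic (replace the placeholder password with your own):

```bash
# Example only: add a .pgpass entry for the cluster and restrict the file's permissions.
# Format: hostname:port:database:username:password
echo 'p-mbx2p83u9n-a.vmk31wilqpjeopka.biganimal.io:5432:bdrdb:edb_admin:<your-password>' >> ~/.pgpass
chmod 0600 ~/.pgpass
```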

To use the PGD CLI with your database connection string, use the `--dsn` flag with your PGD CLI command:

```bash
pgd show-nodes --dsn "<your_connection_string>"
```
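
If you run several commands in one session, you can keep the connection string in a shell variable and reuse it. A sketch, using the example connection string from the commands that follow:

```bash
# Example only: store the read/write URI once and pass it to each PGD CLI command.
PGD_DSN="postgres://[email protected]:5432/bdrdb?sslmode=require"
pgd check-health --dsn "$PGD_DSN"
pgd show-nodes --dsn "$PGD_DSN"
```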

## PGD commands in BigAnimal

!!! Note
Three EDB Postgres Distributed CLI commands don't work with distributed high-availability BigAnimal clusters: `create-proxy`, `delete-proxy`, and `alter-proxy-option`. These functions are managed by BigAnimal because BigAnimal runs on Kubernetes, and it's best practice to let the Kubernetes operator handle them.
!!!

The following examples show some of the most common PGD CLI commands used with a BigAnimal cluster.

### `pgd check-health`

`pgd check-health` reports status and relevant messages for the clock skew of node pairs, node accessibility, the current Raft leader, replication slot health, and version consistency:

```
$ pgd check-health --dsn "postgres://[email protected]:5432/bdrdb?sslmode=require"
__OUTPUT__
Check Status Message
----- ------ -------
ClockSkew Ok All BDR node pairs have clockskew within permissible limit
Connection Ok All BDR nodes are accessible
Raft Warning There is no RAFT_LEADER, an election might be in progress
Replslots Ok All BDR replication slots are working correctly
Version Ok All nodes are running same BDR versions
```

### `pgd show-nodes`

`pgd show-nodes` returns all the nodes in the distributed high-availability (DHA) cluster and their summaries, including name, node ID, group, and current/target state:

```
$ pgd show-nodes --dsn "postgres://[email protected]:5432/bdrdb?sslmode=require"
__OUTPUT__
Node Node ID Group Type Current State Target State Status Seq ID
---- ------- ----- ---- ------------- ------------ ------ ------
p-mbx2p83u9n-a-1 3537039754 dc1 data ACTIVE ACTIVE Up 1
p-mbx2p83u9n-a-2 3155790619 p-mbx2p83u9n-a data ACTIVE ACTIVE Up 2
p-mbx2p83u9n-a-3 2604177211 p-mbx2p83u9n-a data ACTIVE ACTIVE Up 3
```

### `pgd show-groups`

`pgd show-groups` returns all groups in your DHA BigAnimal cluster. It also notes which node is the current write leader of each group:


```
$ pgd show-groups --dsn "postgres://[email protected]:5432/bdrdb?sslmode=require"
__OUTPUT__
Group Group ID Type Parent Group Location Raft Routing Write Leader
----- -------- ---- ------------ -------- ---- ------- ------------
world 3239291720 global true true p-mbx2p83u9n-a-1
dc1 4269540889 data p-mbx2p83u9n-a false false
p-mbx2p83u9n-a 2800873689 data world true true p-mbx2p83u9n-a-3
```

### `pgd switchover`

`pgd switchover` manually changes the write leader of the group and can be used to simulate a [failover](../../../pgd/latest/quickstart/further_explore_failover).

```
$ pgd switchover --group-name world --node-name p-mbx2p83u9n-a-2 --dsn "postgres://[email protected]:5432/bdrdb?sslmode=require"
__OUTPUT__
switchover is complete
```
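
To confirm the switchover, you can rerun `pgd show-groups` with the same connection string and check that the write leader of the group changed to the node you specified, for example:

```bash
# Example follow-up: verify the group's new write leader after the switchover.
pgd show-groups --dsn "postgres://[email protected]:5432/bdrdb?sslmode=require"
```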

See the [PGD CLI command reference](../../../pgd/latest/cli/command_ref/) for the full range of PGD CLI commands and their descriptions.
4 changes: 2 additions & 2 deletions product_docs/docs/efm/4/efm_rel_notes/index.mdx
@@ -16,8 +16,8 @@ about the release that introduced the feature.
| [4.4](06_efm_44_rel_notes) | 05 Jan 2022|
| [4.3](07_efm_43_rel_notes) | 18 Dec 2021|
| [4.2](08_efm_42_rel_notes) | 19 Apr 2021 |
| [4.1](09_efm_41_rel_notes) | 11 Dec 2021|
| [4.0](10_efm_40_rel_notes) | 02 Sep 2021 |
| [4.1](09_efm_41_rel_notes) | 11 Dec 2020|
| [4.0](10_efm_40_rel_notes) | 02 Sep 2020 |



49 changes: 49 additions & 0 deletions product_docs/docs/lasso/4/describe.mdx
@@ -257,6 +257,22 @@ Hardware info through `lspci`.
**Security impact:** Low &mdash;
No known security impact.

### HTTP(S) proxies in use for package downloads (`linux_http_proxy_configuration`)

Gathers information about HTTP(S) proxies in use for package
downloads. Passwords are redacted.

**Report output:**

* File `/linux/packages-yum-config-manager.data`: YUM configuration
* File `/linux/packages-dnf-config-manager.data`: DNF configuration
* File `/linux/etc_environment.data`: Contents of /etc/environment

**Depth:** Surface

**Security impact:** Low &mdash;
No known security impact.

### Hypervisor (`linux_hypervisor_collector`)

Information about the type of virtualization used, as returned by the
@@ -344,6 +360,26 @@ Information about the system packages installed using `rpm` or `dpkg`.
**Security impact:** Low &mdash;
No known security impact.

### Installed packages origins (`linux_packages_origin_info`)

Information about the origins of installed packages.

**Report output:**

* File `/linux/packages-apt_conf.data`: `apt` configuration
* File `/linux/packages-apt-cache-policy.data`: `apt` configuration
* File `/linux/packages-apt-list-installed.data`: Repositories that were used to install packages
* File `/linux/packages-yum-repolist.data`: Repositories that are enabled in `yum`
* File `/linux/packages-dnf-module-list.data`: Repositories that are enabled in `dnf`
* File `/linux/packages-dnf-repolist.data`: Repositories that are enabled in `dnf`
* File `/linux/packages-yum-list-installed.data`: Repositories that were used to install packages
* File `/linux/packages-dnf-list-installed.data`: Repositories that were used to install packages

**Depth:** Surface

**Security impact:** Low &mdash;
No known security impact.

### PostgreSQL disk layout (`linux_postgresql_disk_layout`)

List all files in the PostgreSQL data directory using `find` for
@@ -1958,6 +1994,19 @@ List of tables replicated by pglogical.
**Security impact:** Low &mdash;
No known security impact.

### Database packages (`postgresql_db_pkgs`)

Database packages/functions/procedures with arguments.

**Report output:**

* File `pkgs.out`

**Depth:** Shallow

**Security impact:** Low &mdash;
No known security impact.

### Database functions (`postgresql_db_procs`)

Functions in the database.
16 changes: 16 additions & 0 deletions product_docs/docs/lasso/4/release-notes.mdx
@@ -2,6 +2,22 @@
title: Release notes
---

## Lasso - Version 4.15.0

Released: 23 Apr 2024

Lasso Version 4.15.0 includes the following enhancements and bug fixes:

| Type | Description | Addresses |
|-----------------|-------------|-----------|
| Feature | Lasso now gathers information about package origins: the list of repositories, the repository configuration, and any HTTP(S) proxies in use for package downloads. | DC-31 |
| Feature | Lasso now gathers information about the EPAS code packages, including functions and procedures inside the packages. | DC-320 |
| Feature | Added packages for Debian 12 ("Bookworm"). | DC-888 |
| Improvement | Lasso now shows a hint message if you connect to the database with a user that doesn't have access to the custom schema where the edb_wait_states extension is installed. | DC-977 |
| Bug fix | Fixed an issue where Lasso tried to set `lock_timeout` on PostgreSQL versions older than 9.3. | DC-219 |
| Doc improvement | The Lasso bundle is no longer mentioned in the Lasso documentation and Knowledge Base articles. | DC-885 |


## Lasso - Version 4.14.0

Released: 05 Mar 2024
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/bdr/nodes.mdx
@@ -348,7 +348,7 @@ For these reasons, we generally recommend to use either logical standby nodes
or a subscribe-only group instead of physical standby nodes. They both
have better operational characteristics in comparison.

You can can manually ensure the group slot is advanced on all nodes
You can manually ensure the group slot is advanced on all nodes
(as much as possible), which helps hasten the creation of BDR-related
replication slots on a physical standby using the following SQL syntax:

12 changes: 7 additions & 5 deletions product_docs/docs/pgd/4/deployments/index.mdx
@@ -1,16 +1,18 @@
---
title: "Deployment options"
indexCards: simple

navigation:
- tpaexec
- manually
---

You can deploy and install EDB Postgres Distributed products using the following methods:

- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments.
- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](/admin-tpa/installing/).

- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.
- Manual installation is also available where TPA is not an option. Details of how to deploy PGD manually are in the [manual installation](/pgd/4/deployments/manually/) section of the documentation.

Coming soon:
- BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. The addition of distributed high availability support through EDB Postgres Distributed allows single-region or multi-region clusters with one or two data groups. See the [Distributed high availability](/biganimal/latest/overview/02_high_availability/distributed_highavailability/) topic in the [BigAnimal documentation](/biganimal/latest) for more information.

- EDB Postgres Distributed for Kubernetes will be a Kubernetes operator is designed, developed, and supported by EDB that covers the full lifecycle of a highly available Postgres database clusters with a multi-master architecture, using BDR replication. It is based on the open source CloudNativePG operator, and provides additional value such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.
- EDB Postgres Distributed for Kubernetes is a Kubernetes operator designed, developed, and supported by EDB that covers the full lifecycle of highly available Postgres database clusters with a multi-master architecture, using BDR replication. It's based on the open source CloudNativePG operator and provides additional value, such as compatibility with Oracle using EDB Postgres Advanced Server and additional supported platforms such as IBM Power and OpenShift.

@@ -0,0 +1,75 @@
---
title: Step 1 - Provisioning hosts
navTitle: Provisioning hosts
deepToC: true
---

## Provisioning hosts

The first step in the process of deploying PGD is to provision and configure hosts.

You can deploy to virtual machine instances in the cloud, on-premises virtual machines, or on-premises physical hardware, in all cases with Linux installed.

Whichever [supported Linux operating system](https://www.enterprisedb.com/resources/platform-compatibility#bdr) and whichever deployment platform you select, the result of provisioning a machine must be a Linux system that you can access using SSH with a user that has superuser, administrator, or sudo privileges.

Each machine provisioned must be able to make connections to any other machine you're provisioning for your cluster.

On cloud deployments, this can be done over the public network or over a VPC.

On-premises deployments must be able to connect over the local network.

!!! Note Cloud provisioning guides

If you're new to cloud provisioning, these guides may provide assistance:

Vendor | Platform | Guide
------ | -------- | ------
Amazon | AWS | [Tutorial: Get started with Amazon EC2 Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html)
Microsoft | Azure | [Quickstart: Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu)
Google | GCP | [Create a Linux VM instance in Compute Engine](https://cloud.google.com/compute/docs/create-linux-vm-instance)

!!!

### Configuring hosts

#### Create an admin user

We recommend that you configure an admin user for each provisioned instance.
The admin user must have superuser or sudo (to superuser) privileges.
We also recommend that the admin user be configured for passwordless SSH access using certificates.
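
As a minimal sketch, on a RHEL-compatible host you might create such a user with passwordless sudo and key-based SSH access as follows (the user name `admin` and the key file `id_rsa.pub` are examples only):

```bash
# Example only: create an admin user with passwordless sudo and key-based SSH access.
sudo useradd --create-home admin
echo 'admin ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin
sudo install -d -m 700 -o admin -g admin /home/admin/.ssh
sudo install -m 600 -o admin -g admin id_rsa.pub /home/admin/.ssh/authorized_keys
```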

#### Ensure networking connectivity

With the admin user created, ensure that each machine can communicate with the other machines you're provisioning.

In particular, the PostgreSQL TCP/IP port (5444 for EDB Postgres Advanced
Server, 5432 for EDB Postgres Extended and community PostgreSQL) must be open
to all machines in the cluster. If you plan to deploy PGD Proxy, its port must be
open to any applications that will connect to the cluster. Port 6432 is typically
used for PGD Proxy.
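
As a sketch, on Red Hat Enterprise Linux 9 hosts like those in the worked example below, you might open the relevant ports with `firewalld` (use port 5432 instead of 5444 if you're not running EDB Postgres Advanced Server):

```bash
# Example only: open the Postgres and PGD Proxy ports on each cluster host.
sudo firewall-cmd --permanent --add-port=5444/tcp   # Postgres (EDB Postgres Advanced Server)
sudo firewall-cmd --permanent --add-port=6432/tcp   # PGD Proxy
sudo firewall-cmd --reload
```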

## Worked example

For the example in this section, three hosts are provisioned with Red Hat Enterprise Linux 9.

* `host-one`
* `host-two`
* `host-three`

Each is configured with an admin user named `admin`.

These hosts have been configured in the cloud. As such, each host has both a public and private IP address.

Name | Public IP | Private IP
------|-----------|----------------------
host-one | 172.24.117.204 | 192.168.254.166
host-two | 172.24.113.247 | 192.168.254.247
host-three | 172.24.117.23 | 192.168.254.135

For this example, the cluster's `/etc/hosts` file was edited to use those private IP addresses:

```
192.168.254.166 host-one
192.168.254.247 host-two
192.168.254.135 host-three
```
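
To confirm that the entries resolve as expected, you can, for example, ping the other hosts by name from `host-one`:

```bash
# Example only: from host-one, check that the other hosts resolve to their private IPs.
ping -c 1 host-two
ping -c 1 host-three
```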
