Merge pull request #3404 from EnterpriseDB/release/2022-12-01

Release: 2022-12-01

drothery-edb authored Dec 1, 2022
2 parents 464761d + 65cfc6e commit 9f75550
Showing 10 changed files with 413 additions and 172 deletions.
440 changes: 286 additions & 154 deletions advocacy_docs/pg_extensions/index.mdx

Large diffs are not rendered by default.

@@ -149,7 +149,9 @@ When enabling read-only workloads, keep in mind the following:

For information on replication lag while using read-only workloads, see [Synchronous replication](/biganimal/latest/overview/02_high_availability/#synchronous-replication).

### Authentication

Enable **Identity and Access Management (IAM) Authentication** to turn on the ability to log in to Postgres using your AWS IAM credentials. For this feature to take effect, you must add each user to a role that uses AWS IAM authentication in Postgres after you create the cluster. For details, see [IAM authentication for Postgres](../../using_cluster/01_postgres_access/#iam-authentication-for-postgres).

## What’s next

2 changes: 2 additions & 0 deletions product_docs/docs/biganimal/release/getting_started/index.mdx
@@ -13,6 +13,8 @@ navigation:

As a cloud administrator, you can set up BigAnimal with your existing Azure subscription or AWS account, invite others to join you in exploring what EDB has to offer, and create initial clusters as an account owner so that development can begin.

<!-- Alternatively, you can use [BigAnimal's Terraform provider](../using_cluster/terraform_provider) to incorporate BigAnimal into your existing cloud infrastructure workflows. -->

If you purchase BigAnimal directly from EDB Sales, you need to [set up your own identity provider](/biganimal/release/getting_started/identity_provider). If you purchased from Azure Marketplace, you need to [set up your Azure account](/biganimal/release/getting_started/02_azure_market_setup).

After setting up your organization through your identity provider or Azure Marketplace account, you connect your cloud account to BigAnimal.
@@ -4,16 +4,18 @@ title: "Pricing and billing "

The costs include database pricing for BigAnimal and the associated costs from other providers. You can also view usage and metering information.

## Database pricing

Pricing is based on the number of virtual central processing units (vCPUs) provisioned for the database software offering. Consumption of vCPUs is metered hourly. A deployment is made up of either one instance or one primary and two standby replica instances of either PostgreSQL or EDB Postgres Advanced Server. When high availability is enabled, multiply the number of vCPUs per instance by three to calculate the full price for all resources used. The table shows the cost breakdown:

| Database type | Hourly price | Monthly price\* |
| ---------------------------- | -------------- | --------------- |
| PostgreSQL | $0.1655 / vCPU | $120.82 / vCPU |
| EDB Postgres Advanced Server | $0.2397 / vCPU | $174.98 / vCPU |

| Database type | Hourly price | Monthly price\* | Subscription plan |
| ---------------------------- | -------------- | --------------- | ----------------- |
| PostgreSQL | $0.0856 / vCPU | $62.49 / vCPU | Community 360 |
| PostgreSQL | $0.1655 / vCPU | $120.82 / vCPU | Standard |
| EDB Postgres Advanced Server | $0.2397 / vCPU | $174.98 / vCPU | Enterprise |

\* The monthly cost is approximate and assumes 730 hours in a month.
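
For example, a high-availability PostgreSQL cluster on the Standard plan with 4 vCPUs per instance runs one primary and two standby replicas, or 12 vCPUs in total, for an approximate monthly database cost of 12 × $120.82 = $1,449.84.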


## Cloud infrastructure costs

@@ -76,3 +76,39 @@ If a single database is used to host multiple schemas, create a database owner a
prod1=# create schema app1 authorization app1;
prod1=# create schema app2 authorization app2;
```
## IAM authentication for Postgres

Any AWS user whose AWS account is connected to a BigAnimal subscription and who has the Postgres role "iam_aws" can authenticate to the database using their AWS IAM credentials.

### Configuring IAM for Postgres

Provision your cluster before configuring IAM for Postgres.

1. In BigAnimal, turn on the IAM authentication feature when creating or modifying the cluster:
1. On the **Additional Settings** tab, under **Authentication**, select **Identity and Access Management (IAM) Authentication**.
1. Select **Create Cluster** or **Save**.
1. In AWS, get the ARN of each IAM user requiring database access. In the AWS account connected to BigAnimal, use AWS Identity and Access Management (IAM) to perform user management. See the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html).

1. In Postgres, if the IAM role doesn’t exist yet, run this Postgres command:

```sql
CREATE ROLE "iam_aws";
```

1. For each IAM user, run this Postgres command:

```sql
CREATE USER "<ARN>" IN ROLE iam_aws;
```
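
After running these commands, you can confirm that each user was added to the role. The following is a minimal sketch using `psql` and the standard Postgres catalogs; the connection details are placeholders, and `edb_admin` is assumed to be your administrative user:

```shell
# List the members of the iam_aws role
psql "host=<cluster-hostname> port=5432 dbname=<database> user=edb_admin" \
  -c "SELECT m.rolname AS member
        FROM pg_auth_members am
        JOIN pg_roles r ON r.oid = am.roleid
        JOIN pg_roles m ON m.oid = am.member
       WHERE r.rolname = 'iam_aws';"
```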

### Logging in to Postgres using IAM credentials

If IAM integration is configured for your cluster, you can log in to Postgres with your standard AWS IAM credentials: your Amazon Resource Name (ARN) and access key.

!!! Note
You can continue to log in using your Postgres username and password. However, doing so doesn’t provide IAM authentication even if this feature is configured.

1. Using your AWS CLI or Cloud shell, obtain your ARN and access key. For guidance on obtaining your ARN and access key, see [Managing access keys for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
1. Connect to Postgres using your IAM credentials (see the sketch after these steps).
1. When prompted for the password, enter your access key (`<access key ID>:<secret access key>`).
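
The following is a minimal sketch of these steps from a shell, assuming the AWS CLI is configured. The host, port, and database name are placeholders for your cluster's connection details, and the user name is the ARN mapped to the "iam_aws" role as described above:

```shell
# Step 1: obtain your ARN with the AWS CLI
aws sts get-caller-identity --query Arn --output text

# Step 2: connect with psql, using your ARN as the Postgres user name
psql "host=<cluster-hostname> port=5432 dbname=<database> user=<your IAM ARN>"

# Step 3: at the password prompt, enter your access key in the form
#   <access key ID>:<secret access key>
```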

@@ -40,7 +40,8 @@ You can also modify your cluster by installing Postgres extensions. See [Postgre
| Database configuration parameters | DB Configuration | \- |
| Retention period for backups | Additional Settings | \- |
| Read-only workloads | Additional Settings | Enabling read-only workloads can incur higher cloud infrastructure charges. |
| Identity and Access Management (IAM) Authentication | Additional Settings | Turn on the ability to log in to Postgres using AWS IAM credentials. You must then run a command to add each user’s credentials to a role that uses IAM authentication in Postgres. See [IAM authentication for Postgres](../01_postgres_access/#iam-authentication-for-postgres). |

5. Save your changes.
!!! Note
Saving changes might require a database restart.
@@ -0,0 +1,65 @@
---
title: BigAnimal Terraform provider
---

BigAnimal’s [Terraform provider](https://registry.terraform.io/providers/EnterpriseDB/biganimal/latest/docs) is an infrastructure-as-code service that allows you to provision cloud resources with the Terraform CLI and incorporate those resources into your existing BigAnimal cloud infrastructure workflows.

The current version of the Terraform provider offers modules for creating, reading, updating, and deleting clusters and regions.

The Terraform provider is licensed under the [MPL v2](https://www.mozilla.org/en-US/MPL/2.0/).

!!!note
We provide support for the BigAnimal Terraform provider itself and not for the underlying environment.

## Prerequisites
To use Terraform with BigAnimal, you need:

- A BigAnimal account with an organization set up. If you don't already have a BigAnimal account, see [Getting started with the BigAnimal free trial](/biganimal/latest/free_trial).

- [Terraform](https://www.terraform.io/downloads) (version 0.13.x or later) downloaded and installed.
- A BigAnimal API token for use within the Terraform application. See [Getting an API Token](#getting-an-api-token).

## Example usage

```terraform
# Configure the BigAnimal Provider
provider "biganimal" {
  ba_bearer_token = "<redacted>"
  // ba_api_uri = "https://portal.biganimal.com/api/v2" // Optional
}

# Manage the resources
```

## Getting an API Token

To use the BigAnimal API, follow this procedure to fetch an API bearer token and export it into your environment. For more information about the BigAnimal API, see the [API reference](/biganimal/latest/reference/).

Optionally, you can set the API endpoint by using the `BA_API_URI` environment variable instead of the `ba_api_uri` setting in the provider configuration.

1. Access the script located [here](https://github.com/EnterpriseDB/cloud-utilities/blob/main/api/get-token.sh).
1. Open the script in `Raw` format.
1. Copy the script and save it locally with the name `get-token.sh`.
1. Modify permissions for the script in your local shell (for example, `chmod +x get-token.sh`).
1. Run the script locally using a command like the following:
```
sh <local path>/get-token.sh
```
The resulting output instructs you to log in to a URL with an 8-digit user code. For example:
```
Please login to https://auth.biganimal.com/activate?user_code=JWPL-RCXL with your BigAnimal account
```
1. In a browser, access the URL, confirm, and re-authenticate if necessary.
You should receive a notice that the code has been verified.
1. In your local shell, a prompt asks:
```
Have you finished the login successfully. (y/n)
```
1. When you enter `y`, the shell responds with output that provides the access token, refresh token, scope, expiration period, and token type.
1. Export the access token into your environment as follows, replacing `<REDACTED>` with the access token.
```bash
export BA_BEARER_TOKEN=<REDACTED>
```
Rather than export the token as described in this step, you can use the token to set the value of the `ba_bearer_token` when configuring the BigAnimal provider, as shown in [Example usage](#example-usage).
1. Now you can follow along with the [examples](https://github.com/EnterpriseDB/terraform-provider-biganimal/blob/main/examples/README.md) in the Terraform repository.
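
Putting these steps together, a minimal end-to-end session might look like the following sketch; the paths, the token value, and the Terraform commands shown are illustrative:

```shell
# Make the saved helper script executable and run it
chmod +x ./get-token.sh
sh ./get-token.sh
# Follow the printed URL, confirm the user code, then answer y at the prompt

# Export the access token printed by the script
export BA_BEARER_TOKEN=<REDACTED>

# With the provider configured as in "Example usage", initialize and plan
terraform init
terraform plan
```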


22 changes: 11 additions & 11 deletions product_docs/docs/pgd/4/harp/index.mdx
@@ -7,10 +7,10 @@ redirects:
- /pgd/4/harp/02_overview
---

High Availability Routing for Postgres (HARP) is new approach for managing high availabiliity for
High Availability Routing for Postgres (HARP) is a new approach for managing high availability for
EDB Postgres Distributed clusters versions 3.6 or later. All application traffic within a single location
(data center or region) is routed to only one BDR node at a time in a semi-exlusive manner. This node,
designated the lead master, acts as the principle write target to reduce the potential for data conflicts.
(data center or region) is routed to only one BDR node at a time in a semi-exclusive manner. This node,
designated the lead master, acts as the principal write target to reduce the potential for data conflicts.

HARP leverages a distributed consensus model to determine availability of the BDR nodes in the cluster.
On failure or unavailability of the lead master, HARP elects a new lead master and redirects application traffic.
@@ -49,7 +49,7 @@ Even in multi-master-capable approaches such as BDR, it can be helpful to
reduce the amount of necessary conflict management to derive identical data
across the cluster. In clusters that consist of multiple BDR nodes per physical
location or region, this usually means a single BDR node acts as a "leader" and
remaining nodes are "shadow." These shadow nodes are still writable, but writing to
remaining nodes are "shadow". These shadow nodes are still writable, but writing to
them is discouraged unless absolutely necessary.

By leveraging quorum, it's possible for all nodes to agree on the exact
@@ -72,9 +72,9 @@ Postgres instance:

![HARP Unit](images/ha-unit.png)

The consensus layer is an external entity where Harp Manager maintains
The consensus layer is an external entity where HARP Manager maintains
information it learns about its assigned Postgres node, and HARP Proxy
translates this information to a valid Postgres node target. Because Proxy
translates this information to a valid Postgres node target. Because HARP Proxy
obtains the node target from the consensus layer, several such instances can
exist independently.

@@ -101,16 +101,16 @@ This is a typical design using two BDR nodes in a single data center organized i

![HARP Cluster](images/ha-ao.png)

When using BDR as the HARP consensus layer, at least three
fully qualified BDR nodes must be present to ensure a quorum majority. (Not shown in the diagram are connections between BDR nodes.)

![HARP Cluster w/BDR Consensus](images/ha-ao-bdr.png)

## How it works

When managing a EDB Postgres Distributed cluster, HARP maintains at most one leader node per
When managing an EDB Postgres Distributed cluster, HARP maintains at most one leader node per
defined location. This is referred to as the lead master. Other BDR
nodes that are eligible to take this position are shadow master state until they take the leader role.
nodes that are eligible to take this position are in shadow master state until they take the leader role.

Applications can contact the current leader only through the proxy service.
Since the consensus layer requires quorum agreement before conveying leader
@@ -122,7 +122,7 @@ multiple nodes.
### Determining a leader

As an example, consider the role of lead master in a locally subdivided
BDR Always-On group as can exist in a single data center. When any
BDR Always On group as can exist in a single data center. When any
Postgres or Manager resource is started, and after a configurable refresh
interval, the following must occur:

@@ -225,7 +225,7 @@ For multiple BDR nodes to be eligible to take the lead master lock in
a location, you must define a location in the `config.yml` configuration
file.

To reproduce the BDR Always-On reference architecture shown in the diagram, include these lines in the `config.yml`
To reproduce the BDR Always On reference architecture shown in the diagram, include these lines in the `config.yml`
configuration for BDR Nodes 1 and 2:

```yaml
2 changes: 1 addition & 1 deletion product_docs/docs/pgd/4/upgrades/upgrade_paths.mdx
@@ -8,7 +8,7 @@ Currently there are no direct upgrade paths from 3.6 to 4. You must first upgra

## Upgrading from version 3.7 to version 4

Currently it is recommended that you are using 3.7.15 or later before upgrading to 4. See [Upgrading within from 3.7](/pgd/3.7/bdr/updrades/supported_paths/#upgrading-within-version-37) in the 3.7 documentation for more information. After upgrading to 3.7.15 or later the following combinations are allowed
Currently it is recommended that you upgrade to 3.7.15 or later before upgrading to 4. See [Upgrading within version 3.7](/pgd/3.7/bdr/upgrades/supported_paths/#upgrading-within-version-37) in the 3.7 documentation for more information. After upgrading to 3.7.15 or later, the following combinations are allowed:

| 3.7.15 | 3.7.16 | 3.7.17 | Target BDR version |
|--------|--------|--------|--------------------|
1 change: 1 addition & 0 deletions scripts/source/extensions-table.js
@@ -140,6 +140,7 @@ async function buildTable(auth) {
const columnMetadata = sheet.data.sheets[0].data[0].columnMetadata;
let rows = sheet.data.sheets[0].data[0].rowData;
for (let i = 0, row = rows[i]; i < rows.length; ++i, row = rows[i]) {
row.values = row.values || []; // default to an empty array so rows with no cell data don't break the map below
row.values = row.values
.map((cell, j) => {
const merge = merges.find(
