
Commit 9b118a9
Merge pull request #4857 from EnterpriseDB/release/2023-09-27
Release: 2023-09-27
nidhibhammar authored Sep 27, 2023
2 parents b98579b + 9e1c63c commit 9b118a9
Showing 6 changed files with 16 additions and 18 deletions.
16 changes: 8 additions & 8 deletions advocacy_docs/pg_extensions/advanced_storage_pack/using.mdx
@@ -7,7 +7,7 @@ The following are scenarios where the EDB Advanced Storage Pack TAMs are useful.

## Refdata example

- A scenario where Refdata is useful is when creating a reference table of all
+ A scenario where Refdata is useful is creating a reference table of all
the New York Stock Exchange (NYSE) stock symbols and their corporate names.
This data is expected to change very rarely and be referenced frequently from a
table tracking all stock trades for the entire market.
@@ -32,7 +32,7 @@ CREATE INDEX ON nyse_trade USING BTREE(nyse_symbol_id);
```
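The table definitions themselves are collapsed in this diff. As a hedged sketch only (the column lists are assumptions, not taken from the source), the two-table layout the example describes looks roughly like this:

```sql
-- Hypothetical sketch. The example first defines nyse_symbol with the default
-- heap access method; the refdata variant later in the diff adds USING refdata.
CREATE TABLE nyse_symbol (
    nyse_symbol_id SERIAL PRIMARY KEY,
    symbol TEXT NOT NULL,
    name TEXT NOT NULL
);

CREATE TABLE nyse_trade (
    nyse_symbol_id INTEGER NOT NULL REFERENCES nyse_symbol (nyse_symbol_id),
    trade_time TIMESTAMPTZ NOT NULL,
    trade_price NUMERIC NOT NULL,
    trade_volume BIGINT NOT NULL
);

CREATE INDEX ON nyse_trade USING BTREE (nyse_symbol_id);
```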

When `heap` is used for `nyse_symbol`, manipulating rows in `nyse_trade` causes
- row locks to be created in `nyse_symbol`, but only row locks are used in
+ row locks to be created in `nyse_symbol`. But only row locks are used in
`nyse_symbol`:

```sql
@@ -82,7 +82,7 @@ CREATE TABLE nyse_symbol (
) USING refdata;
```

- In this case, manipulating data in `nyse_trade` does not generate row locks in `nyse_symbol`. But manipulating `nyse_symbol` directly cause an `EXCLUSIVE` lock to be acquired on the entire relation:
+ In this case, manipulating data in `nyse_trade` doesn't generate row locks in `nyse_symbol`. But manipulating `nyse_symbol` directly causes an `EXCLUSIVE` lock to be acquired on the entire relation:
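Both lock demonstrations in this section are collapsed in the diff. As a hedged sketch of the contrast being described (statement details and the `name` column are assumptions, not taken from the source), `pg_locks` can show the difference:

```sql
-- Hypothetical sketch. With heap, a trade insert's foreign-key check takes
-- FOR KEY SHARE row locks on the referenced row; pg_locks shows only modest
-- relation-level locks on nyse_symbol (RowShareLock, AccessShareLock):
BEGIN;
INSERT INTO nyse_trade (nyse_symbol_id, trade_time, trade_price, trade_volume)
VALUES (1, now(), 100.00, 500);
SELECT mode, granted FROM pg_locks WHERE relation = 'nyse_symbol'::regclass;
ROLLBACK;

-- With refdata, updating nyse_symbol itself takes an EXCLUSIVE lock on the
-- whole relation, blocking concurrent writers until the transaction ends:
BEGIN;
UPDATE nyse_symbol SET name = 'Example Corp' WHERE nyse_symbol_id = 1;
SELECT mode, granted FROM pg_locks WHERE relation = 'nyse_symbol'::regclass;
ROLLBACK;
```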

```sql
=# BEGIN;
@@ -143,7 +143,7 @@ SELECT autocluster.autocluster(
```

!!! Note
- The `cols` parameter specifies which table is clustered. In this case, `{1}` corresponds to the first column of the table, `thermostat_id`, which is the most common access pattern.
+ The `cols` parameter specifies the table that's clustered. In this case, `{1}` corresponds to the first column of the table, `thermostat_id`, which is the most common access pattern.
!!!
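The call that this note refers to is collapsed above. A hedged sketch of what it might look like follows; the table name `iot` and the parameter names `rel` and `cols` are assumptions, not confirmed by this diff:

```sql
-- Hypothetical sketch: register the table for autoclustering on its first
-- column (thermostat_id). Parameter names and types are assumed.
SELECT autocluster.autocluster(
    rel := 'iot'::regclass,
    cols := '{1}'::smallint[]
);
```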

Populate the table with the `thermostat_id` and `recordtime` data:
@@ -243,19 +243,19 @@ ANALYZE nyse_trade;
```
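The data load itself is collapsed in the diff. A minimal sketch in that spirit, with the table and column layout assumed from the note above:

```sql
-- Hypothetical data load: interleave readings across thermostats so the rows
-- for any one thermostat_id arrive scattered, which is the access pattern
-- autoclustering is meant to help with.
INSERT INTO iot (thermostat_id, recordtime)
SELECT tid, now() - make_interval(mins => i)
FROM generate_series(1, 24) AS tid
CROSS JOIN generate_series(1, 1000) AS i;
```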

Given that the inserts intercalated `nyse_symbol_id`, a query that consults one
- stock would touch most pages if the table used `heap`, but would touch far
+ stock touches most pages if the table uses heap, but touches far
fewer pages using Autocluster.

The following query operates on attributes that must be fetched from the table
- after an index scan, and shows the number of buffers touched:
+ after an index scan and shows the number of buffers touched:

```sql
EXPLAIN (ANALYZE, BUFFERS, TIMING OFF, SUMMARY OFF, COSTS OFF)
SELECT AVG(trade_volume * trade_price)
FROM nyse_trade WHERE nyse_symbol_id = 10;
```

- This is the query plan using `autocluster`:
+ This is the query plan using Autocluster:

```
QUERY PLAN
@@ -272,7 +272,7 @@ This is the query plan using `autocluster`:
(9 rows)
```

- For contrast, this is the query plan using `heap`:
+ For contrast, this is the query plan using heap:

```
QUERY PLAN
@@ -53,7 +53,7 @@ Distributed high-availability clusters are powered by [EDB Postgres Distributed]

Distributed high-availability clusters support both EDB Postgres Advanced Server and EDB Postgres Extended Server database distributions.

- Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. One of these data nodes is the leader at any given time, while the rest are shadow nodes. We don't recommend you use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).
+ Distributed high-availability clusters contain one or two data groups. Your data groups can contain either three data nodes or two data nodes and one witness node. One of these data nodes is the leader at any given time, while the rest are shadow nodes. We recommend that you don't use two data nodes and one witness node in production unless you use asynchronous [commit scopes](/pgd/latest/durability/commit-scopes/).

[PGD Proxy](/pgd/latest/routing/proxy) routes all application traffic to the leader node, which acts as the principal write target to reduce the potential for data conflicts. PGD Proxy leverages a distributed consensus model to determine availability of the data nodes in the cluster. On failure or unavailability of the leader, PGD Proxy elects a new leader and redirects application traffic. Together with the core capabilities of EDB Postgres Distributed, this mechanism of routing application traffic to the leader node enables fast failover and switchover.

@@ -30,11 +30,11 @@ To connect to a BigAnimal cluster:
2. Select **Analyze > Connections**.
3. Select **+ Database**.
4. In the Add Database dialog box, enter a value for **Database Name**.
- 5. To connect to the database, you need database user with a password. Enter the connection string for your cluster in the **SQLALCHEMY URI** field, using the following format:
+ 5. To connect to the database, you need a database user with a password. Enter the connection string for your cluster in the **SQLALCHEMY URI** field, using the following format:

`postgresql://{<username>}:{<password>}@{<host>}:{<port>}/{<dbname>}?sslmode=verify-full`
!!!note
- Your password is always encrypted before storage and never leaves your cloud environment. It's used only by the Superset software running in your BigAnimal infrastructure. As a defense-in-depth mechanism, we recommend using a Postgres user dedicated to Superset with a minimal set of privileges to just the database you're connecting. Never use your edb_admin, superuser or equivalent user with Superset.
+ Your password is always encrypted before storage and never leaves your cloud environment. It's used only by the Superset software running in your BigAnimal infrastructure. As a defense-in-depth mechanism, we recommend using a Postgres user dedicated to Superset with a minimal set of privileges to just the database you're connecting. Never use your edb_admin superuser or equivalent user with Superset.
!!!

6. Check the connection by selecting **Test Connection**. Select **Add** if the connection was successful.
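As an aside on the minimal-privileges recommendation above, this is a hedged sketch of a dedicated read-only role; every name here is invented, and the target database is hypothetical:

```sql
-- Hypothetical example: a role used only by Superset, limited to read access
-- on a single database and schema.
CREATE ROLE superset_reader LOGIN PASSWORD 'choose-a-strong-password';
GRANT CONNECT ON DATABASE salesdb TO superset_reader;
GRANT USAGE ON SCHEMA public TO superset_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO superset_reader;
-- Also cover tables created in this schema later on.
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO superset_reader;
```

With such a role, the connection string would take the shape `postgresql://superset_reader:<password>@<host>:<port>/salesdb?sslmode=verify-full`.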
@@ -59,7 +59,7 @@ EDB Postgres Advanced Server supports these values for each parameter:
- `DEFAULT` &mdash; The value of `PASSWORD_LOCK_TIME` specified in the `DEFAULT` profile.
- `UNLIMITED` &mdash; The account is locked until manually unlocked by a database superuser.

- `PASSWORD_LIFE_TIME` specifies the number of days that the current password can be used before the user is prompted to provide a new password. Include the `PASSWORD_GRACE_TIME` clause when using the `PASSWORD_LIFE_TIME` clause to specify the number of days to pass after the password expires before connections by the role are rejected. If you don't specify `PASSWORD_GRACE_TIME`, the password expires on the day specified by the default value of `PASSWORD_GRACE_TIME`, and the user isn't allowed to execute any command until a new password is provided. Supported values are:
+ `PASSWORD_LIFE_TIME` specifies the number of days that the current password can be used before the user is prompted to provide a new password. Include the `PASSWORD_GRACE_TIME` clause when using the `PASSWORD_LIFE_TIME` clause to specify the number of days to pass after the password expires before the user is forced to change their password. If you don't specify `PASSWORD_GRACE_TIME`, the password expires on the day specified by the default value of `PASSWORD_GRACE_TIME`, and the user isn't allowed to execute any command until a new password is provided. Supported values are:

- A `NUMERIC` value of `0` or greater. To specify a fractional portion of a day, specify a decimal value. For example, use the value `4.5` to specify 4 days, 12 hours.
- `DEFAULT` &mdash; The value of `PASSWORD_LIFE_TIME` specified in the `DEFAULT` profile.
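For context, a hedged sketch of how these clauses combine in a profile; the profile and user names are invented and the values are examples only:

```sql
-- Hypothetical sketch: passwords expire after 90 days, with a 3-day grace
-- period during which the user is prompted to choose a new password.
CREATE PROFILE acctg_profile
    LIMIT PASSWORD_LIFE_TIME 90
          PASSWORD_GRACE_TIME 3;

-- Attach the profile to an existing role.
ALTER USER alice PROFILE acctg_profile;
```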
@@ -10,12 +10,12 @@ redirects:

You can install Replication Server 7 when you have existing single-master or multi-master replication systems that are running under Replication Server version 7.

- It is assumed that you are installing Replication Server 7.x on the same host machine that is currently running the earlier version of Replication Server you are upgrading from and that you will then manage the existing replication systems using Replication Server 7.x.
+ It's assumed that you're installing Replication Server 7.x on the same host machine that's currently running the earlier version of Replication Server you're upgrading from and that you plan to then manage the existing replication systems using Replication Server 7.x.

- If you are using a version of Replication Server earlier than 6.2.15, first upgrade to 6.2.15 or a later 6.2.x point version before upgrading to 7.x.
+ If you're using a version of Replication Server earlier than 6.2.15, first upgrade to 6.2.15 or a later 6.2.x point version before upgrading to 7.x.

!!!note
- Version 7.x provides a non-breaking upgrade path for existing 6.2.x based cluster deployments; however, we strongly recommended that you verify the upgrade in a staging or nonproduction environment before applying the upgrade in a production environment. There is no downgrade path from version 7.x to version 6.2.x so it is essential to test the upgrade first before applying it to the production environment.
+ Version 7.x provides a non-breaking upgrade path for existing 6.2.x based cluster deployments. However, we strongly recommend that you verify the upgrade in a staging or nonproduction environment before applying the upgrade in a production environment. There's no downgrade path from version 7.x to version 6.2.x, so it's essential to test the upgrade first before applying it to the production environment.



@@ -27,5 +27,3 @@ For more details on upgrading Replication Server, see:
- [Upgrading with the graphical user interface installer](upgrading_with_gui_installer)

After upgrading and before using Replication Server, you need to download a JDBC driver and create a symlink to it (for Linux) or rename the driver (for Windows). See [Installing a JDBC driver](../installing_jdbc_driver/) for more information.


2 changes: 1 addition & 1 deletion product_docs/docs/pem/9/troubleshooting.mdx
@@ -146,4 +146,4 @@ __OUTPUT__
PING
```

- Where, `SERVER_ADDR` is the IP address of your PEM server. The output `PING` confirms the PEM web server is up and running.
+ Where `SERVER_ADDR` is the IP address of your PEM server. The output `PING` confirms the PEM web server is up and running.

2 comments on commit 9b118a9

@github-actions (Contributor)
🎉 Published on https://edb-docs.netlify.app as production

@github-actions (Contributor)
🚀 Deployed on https://651414d3e52ecc11da91db6f--edb-docs.netlify.app