Merge pull request #3553 from EnterpriseDB/release/2023-01-19
Release: 2023-01-19
drothery-edb authored Jan 19, 2023
2 parents fbfedda + 9eb9e2b commit 8ce60cf
Showing 113 changed files with 1,601 additions and 1,768 deletions.
@@ -66,34 +66,38 @@ PostgreSQL runs as a service in the background; the PostgreSQL service account i

The specified password must conform to any security policies existing on the PostgreSQL host. After entering a password in the `Password` field, and confirming the password in the `Retype Password` field, click `Next` to continue.

Use the `Port` field to specify the port number on which the server should listen. The default listener port is `5432`. Click `Next` to continue.
![The Port dialog](../images/the_port_dialog.png)

Use the `Locale` field to specify the locale that will be used by the new database cluster. The `Default locale` is the operating system locale. Click `Next` to continue.
Fig. 6: The Port dialog

Use the `Port` field to specify the port number on which the server should listen. The default listener port is `5432`. Click `Next` to continue.

![The Advanced Options dialog](../images/aug11_ad.png)

Fig. 6: The Advanced Options dialog
Fig. 7: The Advanced Options dialog

The `Pre Installation Summary` dialog displays the installation preferences that you have specified with the installation wizard. Review the settings; you can use the `Back` button to return to a previous dialog to modify a setting, or click `Next` to continue.
Use the `Locale` field to specify the locale that will be used by the new database cluster. The `Default locale` is the operating system locale. Click `Next` to continue.

![The Pre Installation Summary dialog](../images/preinstallation.png)

Fig. 7: The Pre Installation Summary dialog
Fig. 8: The Pre Installation Summary dialog

The `Pre Installation Summary` dialog displays the installation preferences that you have specified with the installation wizard. Review the settings; you can use the `Back` button to return to a previous dialog to modify a setting, or click `Next` to continue.

The wizard will inform you that it has the information required to install PostgreSQL; click `Next` to continue.

![The Ready to Install dialog](../images/ready_to_install.png)

Fig. 8: The Ready to Install dialog
Fig. 9: The Ready to Install dialog

During the installation, the setup wizard shows the progress of the PostgreSQL installation with a series of progress bars.

![The Installing dialog](../images/installing.png)

Fig. 9: The Installing dialog
Fig. 10: The Installing dialog

Before the setup wizard completes the PostgreSQL installation, it offers to launch Stack Builder at exit. The Stack Builder utility provides a graphical interface that downloads and installs applications and drivers that work with PostgreSQL. You can optionally uncheck the `Stack Builder` box and click `Finish` to complete the PostgreSQL installation, or accept the default and proceed to launch Stack Builder.

![The installation wizard offers to Launch Stack Builder at exit](../images/installation_complete.png)

Fig. 10: The installation wizard offers to Launch Stack Builder at exit
Fig. 11: The installation wizard offers to Launch Stack Builder at exit
@@ -63,25 +63,39 @@ To add the dashboard:

1. Clone the [cloud-utilities](https://github.com/EnterpriseDB/cloud-utilities) repository on your local system.

1. Using Python 3.4 or later, use the following syntax to create an output JSON file:
2. Using Python 3.4 or later, create an output JSON file:

```shell
db_name_change.py <database_name> -i <input_file> -o <output_file>
```
1. Change your working directory:

```shell
cd cloud-utilities/utils/superset
```

1. Change the permissions on the script to make it executable:

```shell
chmod +x db_name_change.py
```

1. Run the script:

```shell
./db_name_change.py <database_name> -i <input_file> -o <output_file>
```

For example:
For example:

```shell
python3 db_name_change.py edb -i utils/superset/pgd_monitoring_template.json -o utils/superset/upload.json
```
```shell
./db_name_change.py edb -i utils/superset/pgd_monitoring_template.json -o utils/superset/upload.json
```

To get more information on the `db_name_change` script, run:

```shell
python3 db_name_change.py -h
./db_name_change.py -h
```

1. In Superset, import your output file by selecting **Analyze > Dashboards > Import dashboard**.
3. In Superset, import your output file by selecting **Analyze > Dashboards > Import dashboard**.

## Using Superset charts

@@ -26,9 +26,9 @@ chown efm:efm efm.nodes

By default, Failover Manager expects the cluster members file to be named `efm.nodes`. If you name the cluster members file something other than `efm.nodes`, modify the Failover Manager service script to instruct Failover Manager to use the new name.

The cluster members file on the first node started can be empty. This node becomes the membership coordinator. On each subsequent node, the cluster member file must contain the address and port number of the membership coordinator. Each entry in the cluster members file must be listed in an address:port format. Separate multiple entries with a space.
The cluster members file on the first node started can be empty. This node becomes the membership coordinator. On each subsequent node, the cluster member file must contain the address and port number of at least the membership coordinator. Each entry in the cluster members file must be listed in an address:port format. Separate multiple entries with a space.
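For example, on a node joining an existing cluster, the cluster members file might contain a single line like the following. The addresses and port shown are placeholders; use the actual address and port of your membership coordinator and, optionally, any other known members:

```text
172.16.1.10:7800 172.16.1.11:7800
```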

The agents update the contents of the `efm.nodes` file to match the current members of the cluster. As agents join or leave the cluster, the `efm.nodes` files on other agents are updated to reflect the current cluster membership. If you invoke the [`efm stop-cluster`](../07_using_efm_utility/#efm_stop_cluster) command, Failover Manager doesn't modify the file.
The agents update the contents of the `efm.nodes` file to match the current members of the cluster. As agents join or leave the cluster, the `efm.nodes` files on other agents are updated to reflect the current cluster membership. If you invoke the [`efm stop-cluster`](../07_using_efm_utility/#efm_stop_cluster) command, Failover Manager doesn't modify the file. Note: An agent doesn't write its own information to the file, because it doesn't need its own location to discover other members at startup.

If the membership coordinator leaves the cluster, another node assumes the role. You can use the [`efm cluster-status`](../07_using_efm_utility/#efm_cluster_status) command to find the address of the membership coordinator. If a node joins or leaves a cluster while an agent is down, before starting that agent, manually ensure that the file includes at least the current membership coordinator address and port.
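For example, assuming the default cluster name `efm` and that the Failover Manager binary is in your path, the following command reports the current cluster state, including which node is acting as the membership coordinator:

```shell
efm cluster-status efm
```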

@@ -170,9 +170,8 @@ Restart the EDB Postgres Advanced Server service:
systemctl restart edb-as-14
```

The script file that you need to modify to include the `LD_LIBRARY_PATH` setting depends on the EDB Postgres Advanced Server version, the Linux system on which it was installed, and whether it was installed with the graphical installer or an RPM package.
The script file that you need to modify to include the `LD_LIBRARY_PATH` setting depends on the EDB Postgres Advanced Server version and the Linux system on which it was installed.

See the appropriate version of the [EDB Postgres Advanced Server installation documentation](https://www.enterprisedb.com/docs/epas/latest/epas_inst_linux/) to determine the service script that affects the startup environment.
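As an illustrative sketch only, on a systemd host you can also supply `LD_LIBRARY_PATH` to the service through a drop-in file rather than editing the service script directly. The library path, drop-in file name, and service name below are placeholders; adjust them for your installation:

```shell
# Hypothetical example: provide LD_LIBRARY_PATH through a systemd drop-in.
# Replace the library path and service name with the ones used on your system.
mkdir -p /etc/systemd/system/edb-as-14.service.d
cat > /etc/systemd/system/edb-as-14.service.d/instant-client.conf <<'EOF'
[Service]
Environment="LD_LIBRARY_PATH=/usr/lib/oracle/21/client64/lib"
EOF
systemctl daemon-reload
systemctl restart edb-as-14
```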

### Oracle instant client for Windows

28 changes: 14 additions & 14 deletions product_docs/docs/epas/14/epas_compat_sql/44_delete.mdx
@@ -6,7 +6,7 @@ title: "DELETE"

## Name

`DELETE` -- delete rows of a table.
`DELETE` &mdash; Delete rows of a table.

## Synopsis

@@ -20,52 +20,52 @@ DELETE [ <optimizer_hint> ] [ FROM ] <table>[@<dblink> ]

## Description

`DELETE` deletes rows that satisfy the `WHERE` clause from the specified table. If the `WHERE` clause is absent, the effect is to delete all rows in the table. The result is a valid, but empty table.
`DELETE` deletes rows that satisfy the `WHERE` clause from the specified table. Omitting the `WHERE` clause deletes all rows in the table, leaving an empty table. You need the `DELETE` privilege on the table to delete from it. You also need the `SELECT` privilege for any table whose values are read in the condition.

The `FROM` keyword is optional if EDB Postgres Advanced Server is installed in Oracle-compatible mode. It isn't optional if EDB Postgres Advanced Server is installed in Postgres mode.
The `FROM` keyword is optional if EDB Postgres Advanced Server is installed in Oracle-compatible mode. It's required if EDB Postgres Advanced Server is installed in Postgres mode.

!!! Note
The `TRUNCATE` command provides a faster mechanism to remove all rows from a table.
The `TRUNCATE` command is a faster way to remove all rows from a table.

The `RETURNING INTO { record | variable [, ...] }` clause may only be specified if the `DELETE` command is used within an SPL program. In addition the result set of the `DELETE` command must not include more than one row, otherwise an exception is thrown. If the result set is empty, then the contents of the target record or variables are set to null.
You can specify the `RETURNING INTO { record | variable [, ...] }` clause only if you use the `DELETE` command in an SPL program. The result set of the `DELETE` command can't include more than one row. If it does, an error occurs. If the result set is empty, then the contents of the target record or variables are set to null.

The `RETURNING BULK COLLECT INTO collection [, ...]` clause may only be specified if the `DELETE` command is used within an SPL program. If more than one `collection` is specified as the target of the `BULK COLLECT INTO` clause, then each `collection` must consist of a single, scalar field – i.e., `collection` must not be a record. The result set of the `DELETE` command may contain none, one, or more rows. `return_expression` evaluated for each row of the result set, becomes an element in `collection` starting with the first element. Any existing rows in `collection` are deleted. If the result set is empty, then `collection` will be empty.
You can specify the `RETURNING BULK COLLECT INTO collection [, ...]` clause only if you use the `DELETE` command in an SPL program. If you specify more than one `collection` as the target of the `BULK COLLECT INTO` clause, then each `collection` must consist of a single, scalar field. That is, `collection` can't be a record.

You must have the `DELETE` privilege on the table to delete from it, as well as the `SELECT` privilege for any table whose values are read in the condition.
The result set of the `DELETE` command can contain zero, one, or more rows. The `return_expression` evaluated for each row of the result set becomes an element in `collection`, starting with the first element. Any existing rows in `collection` are deleted. If the result set is empty, then `collection` is empty.
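As a brief sketch (assuming the `emp` sample table that ships with EDB Postgres Advanced Server), an SPL block that deletes a set of rows and bulk-collects a value from each deleted row might look like this:

```text
DECLARE
    TYPE name_tbl IS TABLE OF emp.ename%TYPE INDEX BY BINARY_INTEGER;
    deleted_names   name_tbl;
BEGIN
    -- Delete every employee in department 30, collecting each deleted name
    DELETE FROM emp WHERE deptno = 30
        RETURNING ename BULK COLLECT INTO deleted_names;
    DBMS_OUTPUT.PUT_LINE('Deleted ' || deleted_names.COUNT || ' row(s)');
END;
```

If no rows satisfy the `WHERE` clause, `deleted_names` is simply empty.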

## Parameters

`optimizer_hint`

Comment-embedded hints to the optimizer for selection of an execution plan.
Comment-embedded hints to the optimizer for selecting an execution plan.

`table`

The name (optionally schema-qualified) of an existing table.

`dblink`

Database link name identifying a remote database. See the `CREATE DATABASE LINK` command for information on database links.
Database link name identifying a remote database. For more information, see [`CREATE DATABASE LINK`](21_create_public_database_link).

`condition`

A value expression that returns a value of type `BOOLEAN` that determines the rows which are to be deleted.
A value expression that returns a value of type `BOOLEAN` that determines the rows to delete.

`return_expression`

An expression that may include one or more columns from `table`. If a column name from `table` is specified in the `return_expression`, the value substituted for the column when `return_expression` is evaluated is the value from the deleted row.
An expression that can include one or more columns from `table`. If you specify a column name from `table` in `return_expression`, the value substituted for the column when `return_expression` is evaluated is the value from the deleted row.

`record`

A record whose field the evaluated `return_expression` is to be assigned. The first `return_expression` is assigned to the first field in `record`, the second `return_expression` is assigned to the second field in `record`, etc. The number of fields in `record` must exactly match the number of expressions and the fields must be type-compatible with their assigned expressions.
A record whose field to assign the evaluated `return_expression`. The first `return_expression` is assigned to the first field in `record`, the second `return_expression` is assigned to the second field in `record`, and so on. The number of fields in `record` must exactly match the number of expressions, and the fields must be type-compatible with their assigned expressions.

`variable`

A variable to which the evaluated `return_expression` is to be assigned. If more than one `return_expression` and `variable` are specified, the first `return_expression` is assigned to the first `variable`, the second `return_expression` is assigned to the second `variable`, etc. The number of variables specified following the `INTO` keyword must exactly match the number of expressions following the `RETURNING` keyword and the variables must be type-compatible with their assigned expressions.
A variable to which to assign the evaluated `return_expression`. If you specify more than one `return_expression` and `variable`, the first `return_expression` is assigned to the first `variable`, the second `return_expression` is assigned to the second `variable`, and so on. The number of variables specified following the `INTO` keyword must exactly match the number of expressions following the `RETURNING` keyword, and the variables must be type-compatible with their assigned expressions.

`collection`

A collection in which an element is created from the evaluated `return_expression`. There can be either a single collection which may be a collection of a single field or a collection of a record type, or there may be more than one collection in which case each collection must consist of a single field. The number of return expressions must match in number and order the number of fields in all specified collections. Each corresponding `return_expression` and `collection` field must be type-compatible.
A collection in which an element is created from the evaluated `return_expression`. You can have a single collection, which can be a collection of a single field or a collection of a record type. Alternatively, you can have more than one collection. In that case, each collection must consist of a single field. The number of return expressions must match in number and order the number of fields in all specified collections. Each corresponding `return_expression` and `collection` field must be type-compatible.

## Examples

@@ -10,7 +10,7 @@ legacyRedirectsGenerated:

## Name

`DROP DATABASE LINK` -- remove a database link.
`DROP DATABASE LINK` &mdash; Remove a database link.

## Synopsis

@@ -20,27 +20,27 @@ DROP [ PUBLIC ] DATABASE LINK <name>

## Description

`DROP DATABASE LINK` drops existing database links. To execute this command you must be a superuser or the owner of the database link.
`DROP DATABASE LINK` drops existing database links. To execute this command, you must be a superuser or the owner of the database link.

## Parameters

`name`

The name of a database link to be removed.
The name of a database link to remove.

`PUBLIC`

Indicates that `name` is a public database link.

## Examples

Remove the public database link named, `oralink`:
Remove the public database link named `oralink`:

```text
DROP PUBLIC DATABASE LINK oralink;
```

Remove the private database link named, `edblink`:
Remove the private database link named `edblink`:

```text
DROP DATABASE LINK edblink;
10 changes: 5 additions & 5 deletions product_docs/docs/epas/14/epas_compat_sql/46_drop_directory.mdx
@@ -10,7 +10,7 @@ legacyRedirectsGenerated:

## Name

`DROP DIRECTORY` -- remove a directory alias for a file system directory path.
`DROP DIRECTORY` &mdash; Remove a directory alias for a file system directory path.

## Synopsis

@@ -20,19 +20,19 @@ DROP DIRECTORY <name>

## Description

`DROP DIRECTORY` drops an existing alias for a file system directory path that was created with the `CREATE DIRECTORY` command. To execute this command you must be a superuser.
`DROP DIRECTORY` drops an existing alias for a file system directory path that was created with the `CREATE DIRECTORY` command. To execute this command, you must be a superuser.

When a directory alias is deleted, the corresponding physical file system directory is not affected. The file system directory must be deleted using the appropriate operating system commands.
When you delete a directory alias, the corresponding physical file system directory isn't affected. To delete the file system directory, use operating system commands.

## Parameters

`name`

The name of a directory alias to be removed.
The name of a directory alias to remove.

## Examples

Remove the directory alias named `empdir`:
Remove the directory alias `empdir`:

```text
DROP DIRECTORY empdir;

2 comments on commit 8ce60cf

@github-actions (Contributor)
πŸŽ‰ Published on https://edb-docs.netlify.app as production
πŸš€ Deployed on https://63c9a93873a2ea1c201d0f57--edb-docs.netlify.app
