Merge pull request #1524 from EnterpriseDB/release/2021-07-01
Release: 2021-07-01
Former-commit-id: 4697aeb
josh-heyer authored Jul 1, 2021
2 parents 4b3c38e + 54fa115 commit d8a78ed
Showing 409 changed files with 1,254 additions and 2,598 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/devcontainer.json
Original file line number Diff line number Diff line change
@@ -21,7 +21,7 @@
"appPort": [8000],

// Use 'postCreateCommand' to run commands after the container is created.
- "postCreateCommand": "yarn install && git submodule update --init",
+ "postCreateCommand": "yarn install",

// docker in docker (https://github.com/microsoft/vscode-dev-containers/blob/master/script-library/docs/docker-in-docker.md)
"runArgs": ["--init", "--privileged"],
2 changes: 0 additions & 2 deletions .github/workflows/deploy-develop.yml
@@ -15,8 +15,6 @@ jobs:
with:
ref: develop
fetch-depth: 0 # fetch whole repo so git-restore-mtime can work
- - name: Update submodules
-   run: git submodule update --init --remote
- name: Adjust file watchers limit
run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

2 changes: 0 additions & 2 deletions .github/workflows/deploy-main.yml
@@ -15,8 +15,6 @@ jobs:
with:
ref: main
fetch-depth: 0 # fetch whole repo so git-restore-mtime can work
- - name: Update submodules
-   run: git submodule update --init --remote
- name: Adjust file watchers limit
run: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

2 changes: 0 additions & 2 deletions .github/workflows/update-pdfs-on-develop.yml
@@ -19,8 +19,6 @@ jobs:
with:
ref: develop
ssh-key: ${{ secrets.ADMIN_SECRET_SSH_KEY }}
- - name: Update submodules
-   run: git submodule update --init --remote

- uses: actions/setup-node@v1
with:
3 changes: 0 additions & 3 deletions .gitmodules

This file was deleted.

2 changes: 0 additions & 2 deletions README.md
@@ -34,8 +34,6 @@ We recommend using MacOS to work with the EDB Docs application.

1. Install all required packages by running `yarn`.

- 1. Pull the shared icon files down with `git submodule update --init`. This needs to be run inside of the project folder, if you have cloned the repo using GitHub Desktop, ensure that you have `cd` into the project.

1. And finally, you can start up the site locally with `yarn develop`, which should make it live at `http://localhost:8000/`. Huzzah!

### Installation of PDF / Doc Conversion Tools (optional)
12 changes: 4 additions & 8 deletions advocacy_docs/community/contributing/repo.mdx
@@ -60,19 +60,15 @@ Example:
```shell
yarn
```
- 12. Pull down shared icon files
- ```shell
- git submodule update --init
- ```
- 13. Select sources with yarn
+ 12. Select sources with yarn
```shell
yarn config-sources
```
You'll be prompted to choose the packages you'd like to access; you can use comma separated values to choose more than one
```shell
yarn develop
```
- 14. Go to the localhost port in your browser, e.g. http://localhost:8000/ (your url will be mentioned in Terminal)
+ 13. Go to the localhost port in your browser, e.g. http://localhost:8000/ (your url will be mentioned in Terminal)

### Tips to Use Terminal
- <kbd>⌘</kbd>+<kbd>T</kbd> (press the command key and T key together) opens a new tab - use this to execute terminal commands while <code>yarn develop</code> is running
@@ -81,10 +81,10 @@ Example:
## How to Make changes and Submit Pull Requests
You'll make edits and additions via your IDE (VS Code). We recommend using [Github Desktop](https://desktop.github.com/) unless you're familiar with git.

- 15. Get in the habit of pulling the latest changes from the `develop` branch of the Github repository on a regular basis, so that you don't have to manually merge your changes with those made by others. Use the "Fetch origin" and "Pull origin" buttons in Github Desktop at the start of your work day to update your local files with those from the server. You should also do this again before submitting a pull request. For details, see: [Syncing your branch
+ 14. Get in the habit of pulling the latest changes from the `develop` branch of the Github repository on a regular basis, so that you don't have to manually merge your changes with those made by others. Use the "Fetch origin" and "Pull origin" buttons in Github Desktop at the start of your work day to update your local files with those from the server. You should also do this again before submitting a pull request. For details, see: [Syncing your branch
](https://docs.github.com/en/desktop/contributing-and-collaborating-using-github-desktop/syncing-your-branch)

- 16. To submit a pull request
+ 15. To submit a pull request
- Make changes to the repository in VS code (or your IDE of choice)
- Save changes (<kbd>CTRL</kbd>+<kbd>S</kbd> in VS Code)
- Create a new branch and name it in Github Desktop
2 changes: 1 addition & 1 deletion package.json
@@ -16,7 +16,7 @@
"build": "gatsby build --prefix-paths",
"serve-build": "gatsby serve --prefix-paths",
"prepare": "husky install",
- "update-icons": "git submodule update --init --remote && node scripts/createIconTypes.js && node scripts/createIconNames.js",
+ "update-icons": "node scripts/createIconTypes.js && node scripts/createIconNames.js",
"build-pdf": "docker-compose -f docker/docker-compose.build-pdf.yaml run --rm --entrypoint scripts/pdf/generate_pdf.py pdf-builder",
"build-all-pdfs": "for i in product_docs/docs/**/*/ ; do echo \"$i\"; yarn build-pdf ${i%} || exit 1; done",
"build-all-pdfs-ci": "for i in product_docs/docs/**/*/ ; do echo \"$i\"; python3 scripts/pdf/generate_pdf.py ${i%} || exit 1; done",
@@ -36,17 +36,17 @@ There are three PostgreSQL backend nodes, one Primary and two Standby nodes. Con
backend_hostname0 = 'server1_IP'
backend_port0 = 5444
backend_weight0 = 1
- backend_flag0 = 'DISALLOW_TO_FAILOVER'
+ backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'server2_IP'
backend_port1 = 5444
backend_weight1 = 1
- backend_flag1 = 'DISALLOW_TO_FAILOVER'
+ backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname2 = 'server3_IP'
backend_port2 = 5444
backend_weight2 = 1
- backend_flag2 = 'DISALLOW_TO_FAILOVER'
+ backend_flag2 = 'ALLOW_TO_FAILOVER'
```
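In this integration, EFM rather than Pgpool owns failover, so each `backend_flag` ends up as `ALLOW_TO_FAILOVER`. The following sketch is a hypothetical sanity check (the temp file and its heredoc contents are illustrative only, not part of the EFM/Pgpool documentation) that counts any leftover `DISALLOW_TO_FAILOVER` entries:

```shell
# Hypothetical sanity check: after editing pgpool.conf as described,
# no backend_flag should still read DISALLOW_TO_FAILOVER.
conf=$(mktemp)                       # stand-in for the real pgpool.conf
cat > "$conf" <<'EOF'
backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_flag2 = 'ALLOW_TO_FAILOVER'
EOF
# grep -c prints the number of matching lines and exits non-zero when
# that count is 0, hence the `|| true`. Expect: 0
grep -c "DISALLOW_TO_FAILOVER" "$conf" || true
rm -f "$conf"
```

On a live node you would point `grep` at the actual configuration file instead (commonly `/etc/pgpool-II/pgpool.conf`, though the location varies by install).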

**Enable Load-balancing and streaming replication mode**
@@ -65,7 +65,7 @@ Health-checking and failover must be handled by EFM and hence, these must be dis

```text
health_check_period = 0
- fail_over_on_backend_error = off
+ failover_on_backend_error = off
failover_if_affected_tuples_mismatch = off
failover_command = ''
failback_command = ''
@@ -32,17 +32,17 @@ There are three PostgreSQL backend nodes, one Primary and two Standby nodes. Con
backend_hostname0 = 'server1_IP'
backend_port0 = 5444
backend_weight0 = 1
- backend_flag0 = 'DISALLOW_TO_FAILOVER'
+ backend_flag0 = 'ALLOW_TO_FAILOVER'
backend_hostname1 = 'server2_IP'
backend_port1 = 5444
backend_weight1 = 1
- backend_flag1 = 'DISALLOW_TO_FAILOVER'
+ backend_flag1 = 'ALLOW_TO_FAILOVER'
backend_hostname2 = 'server3_IP'
backend_port2 = 5444
backend_weight2 = 1
- backend_flag2 = 'DISALLOW_TO_FAILOVER'
+ backend_flag2 = 'ALLOW_TO_FAILOVER'
```

**Enable Load-balancing and streaming replication mode**
@@ -70,7 +70,7 @@ Health-checking and failover must be handled by EFM and hence, these must be dis

```text
health_check_period = 0
- fail_over_on_backend_error = off
+ failover_on_backend_error = off
failover_if_affected_tuples_mismatch = off
failover_command = ''
failback_command = ''
@@ -4,7 +4,7 @@ title: "EFM Pgpool Integration Using Azure Network Load Balancer"

<div id="appendix_a" class="registered_link"></div>

- This section describes a specific use case for EFM Pgpool integration, where the database, EFM, and Pgpool are installed on CentOS 8 Virtual Machines in Azure. For this specific use case, Azure Load Balancer (LNB) has been used to distribute the traffic amongst all the active Pgpool Instances instead of directing the traffic using Pgpool VIP.
+ This section describes a specific use case for EFM Pgpool integration, where the database, EFM, and Pgpool are installed on CentOS 8 Virtual Machines in Azure. For this specific use case, Azure Load Balancer (NLB) has been used to distribute the traffic amongst all the active Pgpool Instances instead of directing the traffic using Pgpool VIP.

![Architecture diagram for EFM and Pgpool integration using Azure Load Balancer](images/EFM_PgPool_Azure.png)

8 changes: 8 additions & 0 deletions product_docs/docs/efm/4.2/efm_user/03_installing_efm.mdx
@@ -6,6 +6,14 @@ title: "Installing Failover Manager"

<div id="linux_installation" class="registered_link"></div>

+ For information about the platforms and versions supported by Failover Manager, visit the EnterpriseDB website at:
+
+ <https://www.enterprisedb.com/services-support/edb-supported-products-and-platforms#efm>
+
+ !!! Note
+
+ A mixed mode for CPU architecture is supported where the Primary and Standby nodes are on Linux on IBM Power and the Witness node is on x86-64 Linux.

To request credentials that allow you to access an EnterpriseDB repository, visit the EDB website at:

<https://info.enterprisedb.com/rs/069-ALB-339/images/Repository%20Access%2004-09-2019.pdf>
6 changes: 1 addition & 5 deletions product_docs/docs/efm/4.2/efm_user/index.mdx
@@ -6,11 +6,7 @@ title: "EDB Failover Manager User Guide"

EDB Postgres Failover Manager (EFM) is a high-availability module from EnterpriseDB that enables a Postgres primary node to automatically failover to a Standby node in the event of a software or hardware failure on the primary.

- This guide provides information about installing, configuring, and using Failover Manager. For information about the platforms and versions supported by Failover Manager, visit the EnterpriseDB website at:
-
- <https://www.enterprisedb.com/services-support/edb-supported-products-and-platforms#efm>
-
- This document uses Postgres to mean either the PostgreSQL or EDB Postgres Advanced Server database.
+ This guide provides information about installing, configuring, and using Failover Manager. The document uses Postgres to mean either the PostgreSQL or EDB Postgres Advanced Server database.

<div class="toctree" maxdepth="3">

@@ -148,7 +148,7 @@ Where `ENV_VARIABLE` is the environment variable that is set to the directory pa
The `EDBLDR_ENV_STYLE` environment variable instructs Advanced Server to interpret environment variable references as Windows-styled references or Linux-styled references regardless of the operating system on which EDB\*Loader resides. You can use this environment variable to create portable control files for EDB\*Loader.

- On a Windows system, set `EDBLDR_ENV_STYLE` to `linux` or `unix` to instruct Advanced Server to recognize Linux-style references within the control file.
- - On a Linux system, set `EDBLDR_ENV_STYLE` to windows to instruct Advanced Server to recognize Windows-style references within the control file.
+ - On a Linux system, set `EDBLDR_ENV_STYLE` to `windows` to instruct Advanced Server to recognize Windows-style references within the control file.
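To make the two reference styles concrete, here is a minimal sketch (the `EDBLDR_DATA` variable and the `INFILE` clauses are invented for illustration; only the `EDBLDR_ENV_STYLE` values come from the text above). It merely prints what each style of control-file reference looks like; the actual interpretation is done by EDB\*Loader, not the shell:

```shell
# Hypothetical illustration: the same control-file clause written with
# Linux-style and Windows-style environment variable references.
export EDBLDR_ENV_STYLE=linux  # or: windows

# Linux-style reference ($VAR) -- parsed by EDB*Loader on any OS when
# EDBLDR_ENV_STYLE=linux:
printf '%s\n' "INFILE '\$EDBLDR_DATA/emp.dat'"

# Windows-style reference (%VAR%) -- parsed on any OS when
# EDBLDR_ENV_STYLE=windows:
printf '%s\n' "INFILE '%EDBLDR_DATA%\\emp.dat'"
```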

The operating system account `enterprisedb` must have read permission on the directory and file specified by `data_file`.

@@ -514,7 +514,7 @@ The following is the corresponding delimiter-separated data file:
9104,"JONES, JR.",MANAGER,7839,02-APR-09,7975.00,20
```

- The use of the `TRAILING NULLCOLS` clause allows the last field supplying the comm column to be omitted from the first and last records. The `comm` column is set to null for the rows inserted from these records.
+ The use of the `TRAILING NULLCOLS` clause allows the last field supplying the `comm` column to be omitted from the first and last records. The `comm` column is set to null for the rows inserted from these records.

The double quotation mark enclosure character surrounds the value `JONES, JR.` in the last record since the comma delimiter character is part of the field value.

@@ -747,7 +747,7 @@ SELECT * FROM emp WHERE empno > 9100;

**NULLIF Clause**

- The following example uses the `NULLIF` clause on the sal column to set it to null for employees of job `MANAGER` as well as on the comm column to set it to null if the employee is not a `SALESMAN` and is not in department `30`. In other words, a comm value is accepted if the employee is a `SALESMAN` or is a member of department `30`.
+ The following example uses the `NULLIF` clause on the `sal` column to set it to null for employees of job `MANAGER` as well as on the `comm` column to set it to null if the employee is not a `SALESMAN` and is not in department `30`. In other words, a `comm` value is accepted if the employee is a `SALESMAN` or is a member of department `30`.

The following is the control file:

@@ -830,7 +830,7 @@ The information displayed in the `DATA from pg_statio_all_tables` section includ
| `TIDX READ` | The number of toast index blocks read. |
| `TIDX HIT` | The number of toast index blocks hit. |

- ```Text
+ ```text
DATA from pg_stat_all_indexes

SCHEMA RELATION INDEX
@@ -1059,7 +1059,7 @@ function_name(<beginning_id>, <ending_id>, <top_n>, <scope>)

`scope`

- scope determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:
+ `scope` determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:

- `SYS` indicates that the function should return information about system defined tables. A table is considered a system table if it is stored in one of the following schemas: `pg_catalog`, `information_schema`, or `sys`.
- `USER` indicates that the function should return information about user-defined tables.
@@ -1186,7 +1186,7 @@ statio_tables_rpt(<beginning_id>, <ending_id>, <top_n>, <scope>)

`scope`

- scope determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:
+ `scope` determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:

- `SYS` indicates that the function should return information about system defined tables. A table is considered a system table if it is stored in one of the following schemas: `pg_catalog`, `information_schema`, or `sys`.
- `USER` indicates that the function should return information about user-defined tables.
@@ -1273,7 +1273,7 @@ stat_indexes_rpt(<beginning_id>, <ending_id>, <top_n>, <scope>)

`scope`

- scope determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:
+ `scope` determines which tables the function returns statistics about. Specify `SYS`, `USER` or `ALL`:

- `SYS` indicates that the function should return information about system defined tables. A table is considered a system table if it is stored in one of the following schemas: `pg_catalog, information_schema`, or `sys`.
- `USER` indicates that the function should return information about user-defined tables.
6 changes: 2 additions & 4 deletions product_docs/docs/epas/12/epas_compat_tools_guide/index.mdx
@@ -28,10 +28,8 @@ The EDB\*Plus command line client provides a user interface to Advanced Server t
- Execute OS commands
- Record output

- For detailed installation and usage information about EDB\*Plus, see the EDB\*Plus User's Guide, available from the EnterpriseDB website at:
-
- [https://www.enterprisedb.com/docs/p/edbplus](/epas/latest/edb_plus/)
+ For detailed installation and usage information about EDB\*Plus, see the [EDB\*Plus User's Guide](https://www.enterprisedb.com/docs/epas/12/edb_plus/), available from the EnterpriseDB website.

For detailed information about the features supported by Advanced Server, consult the complete library of Advanced Server guides available at:

- [https://www.enterprisedb.com/docs](/epas/latest/)
+ [https://www.enterprisedb.com/docs](/epas/12/)
@@ -229,7 +229,7 @@ The following table shows the summary of configuration parameters:
| `log_temp_files` | Session | Immediate | Superuser | Log the use of temporary files larger than this number of kilobytes. | |
| `log_timezone` | Cluster | Reload | EPAS service account | Sets the time zone to use in log messages. | |
| `log_truncate_on_rotation` | Cluster | Reload | EPAS service account | Truncate existing log files of same name during log rotation. | |
- | `logging_collector` | Cluster | Restart | EPAS service account | Start a subprocess to capture stderr output and/or csvlogs into log files. | |
+ | `logging_collector` | Cluster | Restart | EPAS service account | Start a subprocess to capture stderr output and/or csv logs into log files. | |
| `maintenance_work_mem` | Session | Immediate | User | Sets the maximum memory to be used for maintenance operations. | |
| `max_connections` | Cluster | Restart | EPAS service account | Sets the maximum number of concurrent connections. | |
| `max_files_per_process` | Cluster | Restart | EPAS service account | Sets the maximum number of simultaneously open files for each server process. | |
@@ -60,4 +60,7 @@ Use the following configuration parameters to control database auditing. See [Su

Set to `syslog` to use the syslog process and its location as configured in the `/etc/syslog.conf` file. The `syslog` setting is valid for Advanced Server running on a Linux host and is not supported on Windows systems. **Note:** In recent Linux versions, syslog has been replaced by `rsyslog` and the configuration file is in `/etc/rsyslog.conf`.
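For illustration only, a matching rule in `/etc/rsyslog.conf` might look like the fragment below. The `local0` facility and the log file path are assumptions for this sketch, not values taken from the Advanced Server documentation; check your server's actual syslog facility setting before copying anything.

```text
# Hypothetical rsyslog rule: route audit messages arriving on the
# (assumed) local0 facility to a dedicated file.
local0.*    /var/log/edb_audit.log
```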

+ !!! Note
+ Advanced Server allows users with administrative privileges to audit statements by any user, group, or role. By auditing specific users, you can minimize the number of audit records generated. For information, see the examples under [Selecting SQL Statements to Audit](../05_edb_audit_logging/02_selecting_sql_statements_to_audit/#selecting_sql_statements_to_audit).
+
The following section describes selection of specific SQL statements for auditing using the `edb_audit_statement` parameter.
