Merge pull request #1118 from EnterpriseDB/release/2021-03-24
Production Release 2021-03-24

Former-commit-id: e0ff728
epbarger authored Mar 24, 2021
2 parents 10bd9c8 + 3e696e4 commit 2f16df8
Showing 93 changed files with 807 additions and 475 deletions.
1 change: 0 additions & 1 deletion gatsby-config.js
@@ -156,7 +156,6 @@ module.exports = {
},
},
},
// 'gatsby-plugin-remove-fingerprints', // speeds up Netlify, see https://github.com/narative/gatsby-plugin-remove-fingerprints
'gatsby-plugin-sitemap',
{
resolve: `gatsby-plugin-manifest`,
1 change: 0 additions & 1 deletion package.json
@@ -44,7 +44,6 @@
"gatsby-plugin-nginx-redirect": "^0.0.11",
"gatsby-plugin-react-helmet": "^4.0.0",
"gatsby-plugin-react-svg": "^3.0.0",
"gatsby-plugin-remove-fingerprints": "^0.0.2",
"gatsby-plugin-sass": "^4.0.2",
"gatsby-plugin-sharp": "^3.0.1",
"gatsby-plugin-sitemap": "^3.0.0",
42 changes: 3 additions & 39 deletions product_docs/docs/bart/2.6.2/bart_inst/04_upgrading_bart.mdx
@@ -10,9 +10,6 @@ legacyRedirectsGenerated:

This section outlines the process of upgrading BART from an existing version to the latest version.

- [Upgrading from BART 2.0](#upgrading-from-bart-20) describes the upgrade process from BART 2.0 to the latest version.
- [Upgrading from Older Versions of BART (except 2.0)](#upgrading_from_older_versions_except_2_0_to_latest_versions_of_bart) describes the upgrade process from previous BART versions (except 2.0) to the latest version.

**Upgrade Restrictions**

The following restrictions apply with regard to previous BART versions.
@@ -22,9 +19,9 @@ The following restrictions apply with regard to previous BART versions.

<div id="upgrading_from_older_versions_except_2_0_to_latest_versions_of_bart" class="registered_link"></div>

## Upgrading from Older Versions of BART (except 2.0)
## Upgrading from Older Versions of BART

Perform the following steps to upgrade from older versions of BART (except 2.0) to the latest version:
Perform the following steps to upgrade from older versions of BART to the latest version:

**Step 1:** Assume the identity of the BART user account and invoke the following command to stop the BART WAL scanner program (`bart-scanner`):
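
A minimal sketch, assuming `enterprisedb` is the BART user account; the exact stop syntax can vary between releases, so consult the `bart-scanner` reference for your version:

```text
su - enterprisedb      # become the BART user
bart-scanner STOP      # stop the WAL scanner daemon
```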

@@ -62,37 +62,4 @@ The `PATH` setting should be the same as set for BART 2.6.2 since all versions u
!!! Note
After upgrading to the latest BART version, you must take a new full backup of your system before performing an incremental backup.

<div id="upgrading_from_bart_2_0" class="registered_link"></div>

## Upgrading from BART 2.0

Perform the following steps to upgrade BART 2.0 to the latest version of BART:

**Step 1:** Install the latest version of BART. For information about how to install, see [installing BART](02_installing_bart/#installing-bart).
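
On CentOS 7, for example, this is typically a `yum` install; the package name `edb-bart` is an assumption here and presumes the EDB repository is already configured:

```text
yum install edb-bart
```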

**Step 2:** Save a copy of your BART 2.0 configuration file. The default location of the BART 2.0 configuration file is `/usr/edb/bart2.0/etc/bart.cfg`.
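
For example, assuming the default location (the destination path is arbitrary):

```text
cp /usr/edb/bart2.0/etc/bart.cfg /tmp/bart.cfg.bak
```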

**Step 3:** Invoke the following command to remove BART 2.0:

On CentOS 7:

```text
yum remove edb-bart20
```

**Step 4:** Place the BART 2.0 configuration file (`bart.cfg`) that you saved in Step 2 in the newly created `/usr/edb/bart/etc` directory. You can use many of the same configuration parameters for BART 2.6.2, but note that you must use a new directory for the BART backup catalog. A new set of full backups and incremental backups taken using BART 2.6.2 must be stored in a new BART backup catalog.

To specify an alternative configuration file name or location, use the `-c` option with BART subcommands. For more information about the `-c` option, see the EDB Backup and Recovery User Guide available at the [EDB website](/bart/latest/bart_user/).
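
A hedged sketch of Step 4, reusing the copy saved in Step 2 and a hypothetical alternative location `/home/bartuser/bart.cfg`; `SHOW-SERVERS` is used only as an example subcommand:

```text
# place the saved BART 2.0 configuration file in the new default location
cp /tmp/bart.cfg.bak /usr/edb/bart/etc/bart.cfg

# or keep it elsewhere and point BART at it with -c
bart -c /home/bartuser/bart.cfg SHOW-SERVERS
```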

!!! Note
The `bart.cfg` configuration file is only required on the BART 2.6.2 host from which you will invoke BART subcommands. BART does not require the `bart.cfg` file on hosts on which an incremental backup will be restored.

**Step 5:** Adjust the setting of the `PATH` environment variable to include the location of the BART 2.6.2 executable (the `bin` subdirectory) in the `~/.bashrc` or `~/.bash_profile` files for the following user accounts (a sketch of the line to add follows this list):

- The BART user account on the BART host.
- The user account on the remote host to which incremental backups will be restored. For details, see the EDB Backup and Recovery User Guide available at the [EDB website](/bart/latest/bart_user/).
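
A minimal sketch of the line to add for Step 5, assuming BART 2.6.2 is installed under `/usr/edb/bart`:

```text
export PATH=/usr/edb/bart/bin:$PATH
```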

**Step 6:** Perform the BART 2.6.2 installation and BART 2.0 removal process on each remote host on which an incremental backup was restored using BART 2.0.

!!! Note
After upgrading to the latest BART version, you must take a new full backup of your system before performing an incremental backup.
<div id="upgrading_from_bart_2_0" class="registered_link"></div>
12 changes: 0 additions & 12 deletions product_docs/docs/efm/3.5/index.mdx

This file was deleted.

5 changes: 4 additions & 1 deletion product_docs/docs/eprs/6.2/01_introduction/01_whats_new.mdx
@@ -6,7 +6,10 @@ title: "What’s New"

The following features have been added to xDB Replication Server version 6.1 to create xDB Replication Server version 6.2:

> - Registering your xDB Replication Server product with an EnterpriseDB product license key is no longer required. Thus, all components related to registering the product have been removed. The following are the removed components: 1) the Product Registration dialog box accessed from the xDB Replication Console Help menu, 2) the `license_key` parameter located in the xDB Replication Configuration file, and 3) the xDB Replication Server CLI `registerkey` command.
> - Registering your xDB Replication Server product with an EnterpriseDB product license key is no longer required. Thus, all components related to registering the product have been removed. The following are the removed components:
>   - The Product Registration dialog box accessed from the xDB Replication Console Help menu
>   - The `license_key` parameter located in the xDB Replication Configuration file
>   - The xDB Replication Server CLI `registerkey` command
> - Partitioned tables created using the declarative partitioning feature of PostgreSQL and Advanced Server version 10 and later can now be replicated in a log-based single-master or multi-master replication system. For more information, see [Replicating Postgres Partitioned Tables](../07_common_operations/10_replicating_postgres_partitioned_tables/#replicating_postgres_partitioned_tables).
> - In a single-master replication system, removal of a table from a publication that has one or more existing subscriptions is now permitted as long as the table to be removed is not the parent referenced in a foreign key constraint from a child table that is not being removed as well. Previously, no tables from a publication in a single-master replication system could be removed if there are existing subscriptions. For more information, see [Removing Tables from a Publication](../07_common_operations/06_managing_publication/03_updating_pub/#remove_tables_from_pub).
> - Versions 11 and 12 of PostgreSQL and Advanced Server are now supported.
6 changes: 3 additions & 3 deletions product_docs/docs/eprs/6.2/01_introduction/index.mdx
@@ -2,8 +2,6 @@
title: "Introduction"
---

<div id="introduction" class="registered_link"></div>

This document describes the installation, configuration, architecture, and operation of the <span class="title-ref">EDB xDB Replication Server</span>. EDB xDB (cross database) Replication Server (referred to hereafter as <span class="title-ref">xDB Replication Server</span>) is an asynchronous replication system available for PostgreSQL and for EDB Postgres Advanced Server. The latter will be referred to simply as <span class="title-ref">Advanced Server</span>.

xDB Replication Server can be used to implement replication systems based on either of two different replication models – single-master (primary-to-secondary) replication or multi-master replication.
@@ -32,10 +30,12 @@ For multi-master replication, xDB Replication Server supports the following con

- Between PostgreSQL database servers
- Between PostgreSQL database servers and Advanced Servers in PostgreSQL compatible mode
- Between Advanced Servers in PostgreSQL compatible mode
- Between Advanced Servers in Oracle compatible mode

The reader is assumed to have basic SQL knowledge and basic Oracle, SQL Server, or PostgreSQL database administration skills (whichever are applicable) so that databases, users, schemas, and tables can be created and database object privileges assigned.

- The remainder of Chapter 1 describes conventions used throughout this user’s guide along with suggested sections to read based upon your purpose for using this guide.
- The remainder of Chapter [Introduction](../01_introduction/#introduction) describes conventions used throughout this user’s guide along with suggested sections to read based upon your purpose for using this guide.
- Chapter [Overview](../02_overview/#overview) provides an overview of xDB Replication Server including basic replication concepts and definitions, architecture and components of xDB Replication Server, and design guidelines for setting up a replication system.
- Chapter [Installation and Uninstallation](../03_installation/#installation) gives instructions for installing and uninstalling xDB Replication Server.
- Chapter [Introduction to the xDB Replication Console](../04_intro_xdb_console/#intro_xdb_console) provides an overview of the xDB Replication Console, the graphical user interface for using xDB Replication Server.
@@ -2,8 +2,6 @@
title: "Offloading Reporting and Business Intelligence Queries"
---

<div id="offloading_reporting_and_bi_queries" class="registered_link"></div>

In this use case, users take all or just a subset of data from a production OLTP system and replicate it to another database whose sole purpose is to support reporting queries. This can have multiple benefits:

1. Reporting loads are removed from the OLTP system, improving transaction processing performance.
@@ -2,10 +2,6 @@
title: "Comparison of Single-Master and Multi-Master Replication"
---

<div id="smr_mmr_comparison" class="registered_link"></div>

In write-intensive applications, multi-master replication allows you to utilize multiple database servers on separate hosts to process write transactions independently of each other on their own primary databases. Changes can then be reconciled across primary databases according to your chosen schedule.

There are two models of replication systems supported by xDB Replication Server:

- **Single-Master Replication (SMR).** Changes (inserts, updates, and deletions) to table rows are allowed to occur in a designated primary database. These changes are replicated to tables in one or more secondary databases. The replicated tables in the secondary databases are not permitted to accept any changes except from their designated primary database. (This is also known as primary-to-secondary replication.)
@@ -33,16 +33,28 @@ The preceding diagram illustrates that a table that has been created as a member

The following diagram illustrates a multi-master replication system with three primary nodes.

![Publications in one database replicating to subscriptions in another database](/../../images/image3.png)
![Publications in one database replicating to subscriptions in another database](../../images/image3.png)

![Publications replicating to two subscription databases](/../../images/image4.png)
**Publications in one database replicating to subscriptions in another database**

![*Publications in two databases replicating to one subscription database*](/../../images/image5.png)

![*Cascading Replication: Tables used in both a subscription and a publication*](/../../images/image6.png)

![Publications replicating to two subscription databases](../../images/image4.png)

**Publications replicating to two subscription databases**

![*Publications in two databases replicating to one subscription database*](../../images/image5.png)

**Publications in two databases replicating to one subscription database**

![*Cascading Replication: Tables used in both a subscription and a publication*](../../images/image6.png)

**Cascading Replication: Tables used in both a subscription and a publication**

The preceding diagram illustrates that a table that has been created as a member of a subscription can be used in a publication replicating to another subscription. This scenario is called cascading replication.

The following diagram illustrates a multi-master replication system with three primary nodes.

![*Multi-master replication system*](/../../images/image7.png)
![*Multi-master replication system*](../../images/image7.png)

**Multi-master replication system**
@@ -10,4 +10,6 @@ Generally, changes must not be made to the definitions of the publication tables

Changes must not be made to the rows of the subscription tables. Any such changes are not propagated back to the publication, the modified rows will likely no longer match their publication counterparts, and there is a risk that future replication attempts will fail.

![Single-Master (Primary-to-secondary) replication](/../../images/image8.png)
![Single-Master (Primary-to-secondary) replication](../../images/image8.png)

**Single-Master (Primary-to-secondary) replication**
@@ -23,4 +23,6 @@ Once the multi-master replication system is defined, changes (inserts, updates,

Generally, changes must not be made to the table definitions in any of the primary nodes including the primary definition node. If such changes are made, they are not propagated to other nodes in the multi-master replication system unless they are made using the DDL change replication feature described in [Replicating DDL Changes](../../07_common_operations/08_replicating_ddl_changes/#replicating_ddl_changes). If changes are made to tables without using the DDL change replication feature, there is a risk that future replication attempts may fail.

![In a multi-master replication system, table rows can be updated at any primary node](/../../images/image9.png)
![In a multi-master replication system, table rows can be updated at any primary node](../../images/image9.png)

**In a multi-master replication system, table rows can be updated at any primary node**
@@ -6,7 +6,13 @@ title: "Synchronization Replication with the Trigger-Based Method"

When a publication that will use the trigger-based method for synchronization replication is created in a single-master replication system, the publication server installs an insert trigger, an update trigger, and a delete trigger on each publication table. In a multi-master replication system, each replicated table in each primary node employing the trigger-based method has an insert trigger, an update trigger, and a delete trigger.

The publication server also creates a shadow table for each source table on which triggers have been created. A shadow table is a table used by xDB Replication Server to record the changes (inserts, updates, and deletions) made to a given source table. A shadow table records three types of record ../../images: For each row inserted into the source table, the shadow table records the image of the inserted row. For each existing row that is updated in the source table, the shadow table records the after image of the updated row. For each row deleted from the source table, the shadow table records the primary key value of the deleted row.
The publication server also creates a shadow table for each source table on which triggers have been created. A shadow table is a table used by xDB Replication Server to record the changes (inserts, updates, and deletions) made to a given source table. A shadow table records three types of record images:

- For each row inserted into the source table, the shadow table records the image of the inserted row.

- For each existing row that is updated in the source table, the shadow table records the after image of the updated row.

- For each row deleted from the source table, the shadow table records the primary key value of the deleted row.

!!! Note
In a multi-master replication system, the before image of an updated row is also stored in the shadow table in order to perform update conflict detection. See [Conflict Resolution](../../06_mmr_operation/06_conflict_resolution/#conflict_resolution) for information on conflict detection in a multi-master replication system.
@@ -40,14 +40,17 @@ The total number of primary nodes is six. Multiply the number of primary node d
The following table shows the required minimum settings for `max_replication_slots` and `max_wal_senders`.
**Table 2-1: Replication Origin Configuration Parameter Settings**
| **Postgres Database Server** | **max_wal_senders** | **max_replication_slots** |
| ---------------------------- | ------------------- | ------------------------- |
| Cluster #1 (3 primary nodes) | 3                   | 18                        |
| Cluster #2 (2 primary nodes) | 2                   | 12                        |
| Cluster #3 (1 primary node)  | 1                   | 6                         |
**Replication Origin Configuration Parameter Settings**



If the `max_replication_slots` parameter is not set to a high enough value, synchronization replication still succeeds, but without the replication origin performance advantage.
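
A hedged sketch of the corresponding `postgresql.conf` entries for the Cluster #1 server (three local primary node databases, six primary nodes in total); `wal_level = logical` is an assumption here, reflecting the logical decoding used by the log-based method:

```text
wal_level = logical           # assumed: logical decoding for the log-based method
max_wal_senders = 3           # one per primary node database hosted on this server
max_replication_slots = 18    # 6 total primary nodes x 3 local primary node databases
```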
