Merge pull request #2383 from EnterpriseDB/release/2022-02-24a
Release: 2022-02-24a
drothery-edb authored Feb 24, 2022
2 parents 62b0086 + 1083681 commit 6c2577b
Showing 10 changed files with 247 additions and 220 deletions.
157 changes: 157 additions & 0 deletions product_docs/docs/harp/2.0/01_release-notes.mdx
@@ -0,0 +1,157 @@
---
navTitle: Release Notes
title: Release Notes
---
## Release 2.0.2 (2022-02-24)

### Enhancements

* **Connection Throttling for Builtin Proxy**
You can now specify the maximum number of connections that can be used by the `builtin` proxy. The proxy adjusts the number of connections downward as needed to fit within your calculated system resource limits. (75406, 79250, HNG-489, HNG-498, HNG-503, HNG-508)
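A minimal sketch of where the cap might live in a bootstrap file; only `max_client_conn` appears in this release's configuration reference, and the surrounding nesting is an assumption:

```yaml
# Illustrative fragment; only max_client_conn is a documented setting name.
proxies:
  max_client_conn: 1000  # upper bound; HARP may adjust this downward to fit system limits
```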

* **CAMO disabled for BDR DCS**
HARP disables CAMO for its connection to the database to avoid performance degradation when using BDR for the Distributed Consensus System (DCS). (HNG-438)

* **Improved Security for HARP Proxy**
You can specify a user with read-only DCS permissions for the `builtin` proxy. (75406, HNG-452)

* **Start, Stop, Status hooks for managing Postgres**
You can provide start, stop, and status commands in the HARP Manager configuration for starting Postgres, stopping Postgres, and retrieving the status of Postgres. If you don't provide these commands, systemd is used by default. (HNG-492)
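A minimal sketch of the hooks, assuming systemd; the three setting names match this release's configuration reference, while the command values and their placement in the file are illustrative:

```yaml
# Setting names from the configuration reference; command values are examples only.
start_command: "systemctl start postgres"
stop_command: "systemctl stop postgres"
status_command: "systemctl status postgres"
```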

* **PgBouncer has been removed as a dependency for the HARP Proxy RPM and Debian packages**
PgBouncer is not installed unless you select `pgbouncer` as the `harp_proxy_mode`. (HNG-511)

* **HARP Manager has improved performance communicating with BDR DCS**
HARP Manager now communicates with the BDR DCS using only a local UNIX domain socket. (78516, HNG-494)

* **Builtin proxy is now the default proxy**
If PgBouncer was being used by default in a previous release, `harp_proxy_mode` must now be set to `pgbouncer` on upgrade to continue using it as the proxy, as sketched below. (HNG-511)
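For TPAExec-managed clusters, a hedged sketch of the override; the `cluster_vars` placement is an assumption:

```yaml
# Assumed TPAExec config.yml fragment: keeps PgBouncer as the proxy on upgrade.
cluster_vars:
  harp_proxy_mode: pgbouncer
```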

* **Binaries now match service names**
The `harp_manager` binary is now named `harp-manager`. Correspondingly, the `harp_proxy` binary is now named `harp-proxy`. Symlinks with the previous names are provided. (HNG-514)

* **Improved HARP Manager defaults**
Lag checks are now disabled by default. The leader lease duration now defaults to 6 seconds, and the lease renewal interval defaults to 2 seconds. (HNG-520)
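Written out as explicit configuration, assuming the lease duration setting is named `lease_duration` (`lease_refresh_interval` is documented in the configuration reference):

```yaml
# The new defaults, stated explicitly; equivalent to omitting both settings.
lease_duration: 6             # seconds before the Lead Master lease expires (name assumed)
lease_refresh_interval: 2000  # milliseconds between lease renewals
```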

### Bug Fixes

* HARP Manager now stops the database on exit. (HNG-497)
* HARP Proxy no longer leaks connections when using the `builtin` proxy. (75406, HNG-445)
* HARP Proxy no longer erroneously reports “splice: connection reset by peer” when connections are closed for the `builtin` proxy. (75406, HNG-445)
* Harpctl now returns a result when querying for `builtin` or `pgbouncer` proxy status. (HNG-499)
* `harpctl get cluster` output no longer contains leader and previous leader fields. (HNG-483)
* Harpctl now validates the proxy name when executing proxy-related commands. (HNG-471)
* Harpctl now reports the correct routing status for the leader. (HNG-441)
* HARP configuration files now contain the correct configuration parameters for the corresponding proxy (`builtin` or `pgbouncer`). (78516, HNG-456)
* TPAExec no longer creates a confusing DSN for the DCS endpoint with a duplicate user. (78516, HNG-495)
* The `request_timeout` configuration parameter no longer needs a unit specified; values are in milliseconds. (78363, HNG-504)
* The `listen_port` and `listen_host` settings can now be configured per proxy instance using TPAExec. (78848, HNG-456)
* Subscriber-only nodes, which cannot become leader nodes, are no longer considered for leadership. (78516, HNG-411)

### Known Issues

* When a previously isolated shadow node returns as an active cluster node, it triggers a Raft election and a leadership change.
* Promoting a node can cause a different node to be promoted. A race for leadership occurs between the eligible nodes, and the first eligible node becomes leader. Use the `--force` option with the promote command to make the desired node become leader, as in the sketch below.
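A hedged example of forcing the promotion; the exact subcommand syntax and the node name `node2` are assumptions:

```bash
# Assumed syntax: force a specific node to become leader.
harpctl promote node2 --force
```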
* Harpctl cannot return the HARP Proxy version if HARP Proxy is configured with read-only user access for the BDR DCS. Version information cannot be stored by a user with read-only permissions, so the proxy's version is missing when queried with harpctl.
* After fencing the database with the stop database option, if HARP Manager is restarted and BDR is configured as the DCS, the database is restarted but remains in a fenced state.
* `use_unix_sock` does not work when deploying EDB Postgres Advanced Server, because the default UNIX socket directory is not determined correctly for EDB Postgres Advanced Server.

## Release 2.0.1 (2022-01-31)

### Enhancements

* BDR consensus now generally available

HARP offers multiple options for the Distributed Consensus Service (DCS) source: etcd and BDR. The BDR consensus option can be used in deployments where etcd is not present. Use of the BDR consensus option is no longer considered beta and is now supported for use in production environments.
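As a sketch, selecting BDR as the DCS in `config.yml` might resemble the following; the field names follow the DCS configuration conventions and the endpoint DSN is illustrative:

```yaml
# Assumed fragment: use BDR rather than etcd as the DCS.
dcs:
  driver: bdr
  endpoints:
    - "host=node1 dbname=bdrdb user=harp_manager"
```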

* Transport layer proxy now generally available

HARP offers multiple proxy options for routing connections between the client application and database: application layer (L7) and transport layer (L4). The layer 4, or transport layer, proxy simply forwards network packets, whereas the layer 7 proxy terminates network traffic. The transport layer proxy, previously called simple proxy, is no longer considered beta and is now supported for use in production environments.

## Release 2.0.0 (2021-12-01)

### Engine

* Complete rewrite of the system in Go to optimize all operations
* Cluster state can now be bootstrapped or revised via YAML, as sketched below
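A hedged sketch of that workflow; the `harpctl apply` subcommand and the file name are assumptions based on the bootstrapping docs:

```bash
# Assumed command: load a YAML cluster definition into the DCS.
harpctl apply cluster.yml
```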

### Configuration

* Rewritten in YAML
* Configuration file changed from `harp.ini` to `config.yml`

### Enhancements

* HARP Proxy deprecates the need for HAProxy in the supported architecture.

The use of HARP Router to translate DCS contents into appropriate online or
offline states for HTTP-based URI requests meant a load balancer or HAProxy
was necessary to determine the Lead Master. HARP Proxy now does this
automatically without periodic iterative status checks.

* Utilizes DCS key subscription to respond directly to state changes.

The cluster responds immediately to relevant state changes, improving failover and switchover times.

* Compatibility with etcd SSL settings.

It is now possible to communicate with etcd through SSL encryption.
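A sketch only; every SSL key name below is a hypothetical placeholder, so consult the configuration reference for the exact settings:

```yaml
# Hypothetical field names, shown to illustrate an SSL-enabled etcd DCS.
dcs:
  driver: etcd
  ssl: on
  ssl_ca_file: /etc/harp/ca.pem
```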

* Zero transaction lag on switchover.

The new lead node will not have transactions routed to it until all replicated transactions are replayed, thereby reducing the potential for conflicts.

* Experimental BDR Consensus layer

Using BDR Consensus as the Distributed Consensus Service (DCS) reduces the amount of change needed for implementations.

* Experimental Proxy

Proxy implementation for increased session control.

## Release 1.0.1 (2021-06-23)

### Documentation

* Standardize resolution of the `HARP` acronym

### Bug fixes

* Fix CAMO lag check to accommodate cases where `maximum_camo_lag` is set to `0`

## Release 1.0 (2021-06-15)

### Enhancements

* `--dry-run` option added to `harpctl leader set`
* minimum configuration values will be enforced
* `lock_interval` parameter can be specified as fractions of a second
* logging and output improvements
* replication lag query updated to handle parallel apply

### Bug fixes

* `harpctl` returns an error code if `leader set` fails
* prevent corner-case failure when node peer progress not returned
* handle potentially empty node record
* catch unhandled exception when deciding the lead node candidate

## Release 0.2 (2021-02-23)

This is a maintenance release with the following changes:

* documentation available via the EnterpriseDB customer portal
* report non-availability of nodes other than the lead master
* when using BDR as a DCS layer, fix potential failure situations when a
BDR node is not running
* fix RPM packaging issue preventing a fresh installation from starting

## Release 0.1 (2020-08-13)

This is an initial beta release providing HARP support for BDR, including:

* Usage of native BDR (3.6.21 and later) as a consensus layer
* Usage of etcd as a consensus layer

Note that currently HARP does not support operation on a physical streaming
replica when BDR is used as a consensus layer.
7 changes: 1 addition & 6 deletions product_docs/docs/harp/2.0/02_overview.mdx
@@ -150,7 +150,7 @@ that node leads the cluster in that Location.

Once the role of the Lead Master is established, connections are handled
with a similar deterministic result as reflected by HARP Proxy. Consider a case
- where HAProxy needs to determine the connection target for a particular backend
+ where HARP Proxy needs to determine the connection target for a particular backend
resource:

1. HARP Proxy interrogates the Consensus layer for the current Lead Master in
@@ -256,8 +256,3 @@ And for BDR Nodes 3 and 4:
```sql
SELECT bdr.set_node_location('dcb');
```

- Afterwards, future versions of HARP Manager would derive the `location` field
- directly from BDR itself. This HARP functionality is not available yet, so we
- recommend using this and the setting in `config.yml` until HARP reports
- compatibility with this BDR API method.
4 changes: 2 additions & 2 deletions product_docs/docs/harp/2.0/03_installation.mdx
@@ -5,8 +5,8 @@ title: Installation

A standard installation of HARP includes two system services:

- * HARP Manager (`harp_manager`) on the node being managed
- * HARP Proxy (`harp_router`) elsewhere
+ * HARP Manager (`harp-manager`) on the node being managed
+ * HARP Proxy (`harp-proxy`) elsewhere

There are generally two ways to install and configure these services to manage
Postgres for proper Quorum-based connection routing.
35 changes: 19 additions & 16 deletions product_docs/docs/harp/2.0/04_configuration.mdx
@@ -138,6 +138,18 @@ needs at least one more setting:
`bdr.local_node_summary.node_name` view column. Alphanumeric characters
and underscores only.

- **`start_command`**: This command can be used instead of the information in
  the DCS for starting the database to be monitored. It is required when using
  `bdr` as the consensus layer.

- **`status_command`**: This command can be used instead of the information in
  the DCS for HARP Manager to determine whether the database is running. It is
  required when using `bdr` as the consensus layer.

- **`stop_command`**: This command can be used instead of the information in
  the DCS for stopping the database.


Thus a complete configuration example for HARP Manager could resemble this:

```yaml
@@ -288,8 +300,6 @@ modified with a `harpctl set node` command.
Postgres data directory itself. In these cases, this should be set to that
expected location.

- * Default `db_data_dir`

- **`db_log_file`**: Location of Postgres log file.

* Default `/tmp/pg_ctl.out`
@@ -303,15 +313,15 @@ modified with a `harpctl set node` command.
grace period to refresh the lock, before expiration allows another node to
obtain the Lead Master lock instead.

- * Default: 30
+ * Default: 6

- **`lease_refresh_interval`**: Amount of time in milliseconds between
refreshes of the Lead Master lease. This essentially controls the time
between each series of checks HARP Manager performs against its assigned
Postgres node, and when the status of the node is updated in the Consensus
layer.

- * Default: 5000
+ * Default: 2000
- **`max_dcs_failures`**: The number of DCS request failures before marking a node as fenced according to `fence_node_on_dcs_failure`. This prevents transient communication disruptions from shutting down database nodes.

* Default: 10
@@ -321,7 +331,7 @@ modified with a `harpctl set node` command.
take the Lead Master lock. This prevents nodes experiencing terminal amounts
of lag from taking the Lead Master lock. Set to -1 to disable this check.

- * Default: 1048576 (1MB)
+ * Default: -1

- **`maximum_camo_lag`**: Highest allowable variance (in bytes) between last
received LSN and applied LSN between this node and its CAMO partner(s).
@@ -331,7 +341,7 @@ modified with a `harpctl set node` command.
this very low, or even to 0 to avoid any unapplied CAMO transactions. Set to
-1 to disable this check.

- * Default: 1048576 (1MB)
+ * Default: -1

- **`ready_status_duration`**: Amount of time in seconds the node's readiness
status will persist if not refreshed. This is a failsafe that will remove a
@@ -351,14 +361,6 @@ modified with a `harpctl set node` command.

* Default: 100

- - **`safety_interval`**: Time in milliseconds required before allowing routing
-   to a newly promoted Lead Master. This is intended to allow automated checks
-   against HARP Router to fail across the cluster before transitioning new
-   connections to the promoted node. This helps enforce fully synchronized
-   routing targets. 0 to disable.

- * Default: 100

- **`stop_database_when_fenced`**: Rather than simply removing a node from all possible routing, stop the database on a node when it is fenced. This is an extra safeguard to prevent data from sources other than HARP Proxy from reaching the database, or in case proxies are unable to disconnect clients for some other reason.

* Default: False
@@ -392,7 +394,8 @@ without altering a configuration file. Many of these settings are direct
mappings to their PgBouncer equivalent, and we will note these where relevant.

Settings here should be set under a `proxies` YAML heading during bootstrap, or
- modified with a `harpctl set proxy` command.
+ modified with a `harpctl set proxy` command.
+ Properties set via `harpctl set proxy` require a restart of the proxy.
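An illustrative sequence under that rule; `proxy1` and `default_pool_size` come from the bootstrapping examples, and the restart mechanism depends on your deployment:

```bash
# Change a proxy property, then restart the proxy for it to take effect.
harpctl set proxy proxy1 default_pool_size=30
systemctl restart harp-proxy   # assumed unit name
```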

- **`auth_file`**: The full path to a PgBouncer-style `userlist.txt` file.
HARP Proxy will use this file to store a `pgbouncer` user which will have
@@ -529,5 +532,5 @@ When using `harpctl` to change any of these settings for all proxies, use the
`global` keyword in place of the proxy name. Example:

```bash
- harpctl set proxy global max_client_conn 1000
+ harpctl set proxy global max_client_conn=1000
```
8 changes: 4 additions & 4 deletions product_docs/docs/harp/2.0/05_bootstrapping.mdx
@@ -138,9 +138,9 @@ Once Nodes are bootstrapped, they should show up with a quick examination:
```bash
> harpctl get nodes
- Cluster Name Ready Role Type Location Fenced Lease Duration
- ------- ---- ----- ---- ---- -------- ------ --------------
- mycluster node1 true dc1 false 30
+ Cluster Name Location Ready Fenced Allow Routing Routing Status Role Type Lock Duration
+ ------- ---- -------- ----- ------ ------------- -------------- ---- ---- -------------
+ mycluster bdra1 dc1 true false true ok primary bdr 30
```

## Proxy Bootstrapping
@@ -166,7 +166,7 @@ cluster:
proxies:
monitor_interval: 5
default_pool_size: 20
- max_client_connections: 1000
+ max_client_conn: 1000
database_name: bdrdb
instances:
- name: proxy1
12 changes: 9 additions & 3 deletions product_docs/docs/harp/2.0/06_harp_manager.mdx
@@ -11,7 +11,13 @@ ineligible nodes from Leader consideration.

Every Postgres node in the cluster should have an associated HARP Manager.
Other nodes may exist, but they will not be able to participate as Lead or
- Shadow Master roles, or other functionality HARP supports in the future.
+ Shadow Master roles, or any other functionality that requires a HARP Manager.

!!! Important
    HARP Manager expects to be used to start and stop the database. Stopping
    HARP Manager stops the database, and starting HARP Manager starts the
    database if it isn't already started. If another method is used to stop
    the database, HARP Manager will try to restart it.
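Concretely, assuming the packaged systemd unit shares the `harp-manager` binary name:

```bash
# Stopping the manager also stops the Postgres instance it manages.
systemctl stop harp-manager
```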

## How it Works

@@ -92,14 +98,14 @@ See [Configuration](04_configuration) for further details.
This is the basic usage for HARP Manager:

```bash
- Usage of ./harp_manager:
+ Usage of ./harp-manager:
-f string
Optional path to config file (shorthand)
--config string
Optional path to config file
```

- Note that there are no arguments to launch `harp_manager` as a forked daemon.
+ Note that there are no arguments to launch `harp-manager` as a forked daemon.
This software is designed to be launched through systemd or within a container
as a top-level process. This also means output is directed to STDOUT and STDERR
for capture and access through journald or an attached container terminal.
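For example, assuming the unit name matches the binary, output could be followed with:

```bash
# Follow HARP Manager output captured by journald (assumed unit name).
journalctl -u harp-manager -f
```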