diff --git a/product_docs/docs/harp/2.0/01_release-notes.mdx b/product_docs/docs/harp/2.0/01_release-notes.mdx new file mode 100644 index 00000000000..1402803db96 --- /dev/null +++ b/product_docs/docs/harp/2.0/01_release-notes.mdx @@ -0,0 +1,157 @@ +--- +navTitle: Release Notes +title: Release Notes +--- +## Release 2.0.2 (2022-2-24) +### Enhancements + +* **Connection Throttling for Builtin Proxy** +You can now specify the maximum number of connections that can be used by the `builtin` proxy. The proxy adjusts the number of connections downward to fit within your calculated system resource limits. (75406, 79250, HNG-489, HNG-498, HNG-503, HNG-508) + +* **CAMO disabled for BDR DCS** +HARP disables CAMO for its connection to the database to avoid performance degradation when using BDR for the Distributed Consensus System (DCS). (HNG-438) + +* **Improved Security for HARP Proxy** +You can specify a user with read-only DCS permissions for the `builtin` proxy. (75406, HNG-452) + +* **Start, Stop, Status hooks for managing Postgres** +You can provide start, stop, and status commands in the HARP Manager configuration for starting Postgres, stopping Postgres, and retrieving the status of Postgres. If you do not provide commands, systemd is used by default. (HNG-492) + +* **Pgbouncer has been removed as a dependency for HARP Proxy rpm and deb packages.** +Pgbouncer will not be installed unless you select `pgbouncer` as the `harp_proxy_mode`. (HNG-511) + +* **HARP Manager has improved performance communicating with BDR DCS** +HARP Manager only communicates with BDR DCS using a local UNIX domain socket. (78516, HNG-494) + +* **Builtin proxy is now the default proxy** +If pgbouncer was used as the default proxy in a previous release, `harp_proxy_mode` must now be set to `pgbouncer` on upgrade to continue using it as the proxy. (HNG-511) + +* **Binaries now match service names** +The `harp_manager` binary is now named `harp-manager`. 
Correspondingly, the `harp_proxy` binary is now named `harp-proxy`. Symlinks with the previous names are provided. (HNG-514) + +* **Improved HARP Manager defaults** +Lag configuration is now off by default. The leader lease duration now defaults to 6 seconds, and the leader lease renewal interval defaults to 2 seconds. (HNG-520) + +### Bug Fixes + +* HARP Manager now stops the database on exit. (HNG-497) +* HARP Proxy no longer leaks connections when using the `builtin` proxy. (75406, HNG-445) +* HARP Proxy no longer erroneously reports "splice: connection reset by peer" when connections are closed for the `builtin` proxy. (75406, HNG-445) +* Harpctl now returns when querying for `builtin` or `pgbouncer` proxy status. (HNG-499) +* `harpctl get cluster` output no longer contains leader and previous leader fields. + (HNG-483) +* Harpctl now validates the proxy name when executing proxy-related commands. (HNG-471) +* Harpctl now reports the correct routing status for the leader. (HNG-441) +* HARP configuration files now contain the correct configuration parameters for the corresponding proxy: `builtin` or `pgbouncer`. (78516, HNG-456) +* TPAExec no longer creates a confusing DSN for the DCS endpoint with a duplicate user. (78516, HNG-495) +* The `request_timeout` configuration parameter no longer needs a unit specified; values are in milliseconds. (78363, HNG-504) +* The `listen_port` and `listen_host` settings can now be configured per proxy instance using TPAExec. (78848, HNG-456) +* Subscriber-only nodes, which cannot become leader nodes, are no longer considered for leadership. (78516, HNG-411) + +### Known Issues + +* When a previously isolated shadow node returns as an active cluster node, this triggers a raft election and leadership change. +* Promoting a node may cause a different node to be promoted. A race for leadership occurs between the eligible nodes. The first eligible node will become leader. 
Use the `--force` option with the promote command to have the desired node become leader. +* Harpctl cannot return the HARP Proxy version if HARP Proxy is configured with read-only user access for BDR DCS, because a user with read-only permissions cannot store the version information. As a result, version information for the proxy is missing when harpctl queries it. +* After fencing the database with the stop database option, if HARP Manager is restarted and BDR DCS is configured, the database will be restarted, but will be in a fenced state. +* `use_unix_sock` will not work when deploying EDB Postgres Advanced Server. The default UNIX socket directory is not determined correctly for EDB Postgres Advanced Server. + +## Release 2.0.1 (2022-1-31) + +### Enhancements + +* BDR consensus now generally available + + HARP offers multiple options for the Distributed Consensus Service (DCS) source: etcd and BDR. The BDR consensus option can be used in deployments where etcd is not present. Use of the BDR consensus option is no longer considered beta and is now supported for use in production environments. + +* Transport layer proxy now generally available + + HARP offers multiple proxy options for routing connections between the client application and database: application layer (L7) and transport layer (L4). The network layer 4 or transport layer proxy simply forwards network packets, whereas layer 7 terminates network traffic. The transport layer proxy, previously called simple proxy, is no longer considered beta and is now supported for use in production environments. + +## Release 2.0.0 (2021-12-01) + +### Engine + +* Complete rewrite of system in golang to optimize all operations +* Cluster state can now be bootstrapped or revised via YAML + +### Configuration + +* Rewritten in YAML +* Configuration file changed from `harp.ini` to `config.yml` + +### Enhancements + +* HARP Proxy removes the need for HAProxy in the supported architecture. 
+ + The use of HARP Router to translate DCS contents into appropriate online or + offline states for HTTP-based URI requests meant a load balancer or HAProxy + was necessary to determine the Lead Master. HARP Proxy now does this + automatically without periodic iterative status checks. + +* Utilizes DCS key subscription to respond directly to state changes. + + With relevant cluster state changes, the cluster will respond immediately, thereby resulting in improved failover and switchover times. + +* Compatibility with etcd SSL settings. + + It is now possible to communicate with etcd through SSL encryption. + +* Zero transaction lag on switchover. + + The new lead node will not have transactions routed to it until all replicated transactions are replayed, thereby reducing the potential for conflicts. + +* Experimental BDR Consensus layer + + Using BDR Consensus as the Distributed Consensus Service (DCS) reduces the amount of change needed for implementations. + +* Experimental Proxy + + Proxy implementation for increased session control. 
+ +## Release 1.0.1 (2021-06-23) + +### Documentation + +* Standardize resolution of the `HARP` acronym + +### Bug fixes + +* Fix CAMO lag check to accommodate cases where `maximum_camo_lag` is set to `0` + +## Release 1.0 (2021-06-15) + +### Enhancements + +* `--dry-run` option added to `harpctl leader set` +* minimum configuration values will be enforced +* `lock_interval` parameter can be specified as fractions of a second +* logging and output improvements +* replication lag query updated to handle parallel apply + +### Bug fixes + +* `harpctl` returns an error code if `leader set` fails +* prevent corner-case failure when node peer progress not returned +* handle potentially empty node record +* catch unhandled exception when deciding the lead node candidate + +## Release 0.2 (2021-02-23) + +This is a maintenance release with the following changes: + +* documentation available via the EnterpriseDB customer portal +* report non-availability of nodes other than the lead master +* when using BDR as a DCS layer, fix potential failure situations when a + BDR node is not running +* fix RPM packaging issue preventing a new start on fresh installations + +## Release 0.1 (2020-08-13) + +This is an initial beta release providing HARP support for BDR, including: + +* Usage of native BDR (3.6.21 and later) as a consensus layer +* Usage of etcd as a consensus layer + +Note that currently HARP does not support operation on a physical streaming +replica when BDR is used as a consensus layer. diff --git a/product_docs/docs/harp/2.0/02_overview.mdx b/product_docs/docs/harp/2.0/02_overview.mdx index 16d7726355d..b90b7845730 100644 --- a/product_docs/docs/harp/2.0/02_overview.mdx +++ b/product_docs/docs/harp/2.0/02_overview.mdx @@ -150,7 +150,7 @@ that node leads the cluster in that Location. Once the role of the Lead Master is established, connections are handled with a similar deterministic result as reflected by HARP Proxy. 
Consider a case -where HAProxy needs to determine the connection target for a particular backend +where HARP Proxy needs to determine the connection target for a particular backend resource: 1. HARP Proxy interrogates the Consensus layer for the current Lead Master in @@ -256,8 +256,3 @@ And for BDR Nodes 3 and 4: ```sql SELECT bdr.set_node_location('dcb'); ``` - -Afterwards, future versions of HARP Manager would derive the `location` field -directly from BDR itself. This HARP functionality is not available yet, so we -recommend using this and the setting in `config.yml` until HARP reports -compatibility with this BDR API method. diff --git a/product_docs/docs/harp/2.0/03_installation.mdx b/product_docs/docs/harp/2.0/03_installation.mdx index b7aacdfcc13..918105dc5dc 100644 --- a/product_docs/docs/harp/2.0/03_installation.mdx +++ b/product_docs/docs/harp/2.0/03_installation.mdx @@ -5,8 +5,8 @@ title: Installation A standard installation of HARP includes two system services: -* HARP Manager (`harp_manager`) on the node being managed -* HARP Proxy (`harp_router`) elsewhere +* HARP Manager (`harp-manager`) on the node being managed +* HARP Proxy (`harp-proxy`) elsewhere There are generally two ways to install and configure these services to manage Postgres for proper Quorum-based connection routing. diff --git a/product_docs/docs/harp/2.0/04_configuration.mdx b/product_docs/docs/harp/2.0/04_configuration.mdx index da37fa18f1a..6f98ce1ac16 100644 --- a/product_docs/docs/harp/2.0/04_configuration.mdx +++ b/product_docs/docs/harp/2.0/04_configuration.mdx @@ -138,6 +138,18 @@ needs at least one more setting: `bdr.local_node_summary.node_name` view column. Alphanumeric characters and underscores only. +- **`start_command`**: This can be used instead of the information in DCS for + starting the database to be monitored. This is required if using BDR as the + consensus layer. 
+ +- **`status_command`**: This can be used instead of the information in DCS for + HARP Manager to determine whether the database is running. This is + required if using BDR as the consensus layer. + +- **`stop_command`**: This can be used instead of the information in DCS for + stopping the database. + + Thus a complete configuration example for HARP Manager could resemble this: ```yaml @@ -288,8 +300,6 @@ modified with a `harpctl set node` command. Postgres data directory itself. In these cases, this should be set to that expected location. - - * Default `db_data_dir` - - **`db_log_file`**: Location of Postgres log file. * Default `/tmp/pg_ctl.out` @@ -303,7 +313,7 @@ modified with a `harpctl set node` command. grace period to refresh the lock, before expiration allows another node to obtain the Lead Master lock instead. - * Default: 30 + * Default: 6 - **`lease_refresh_interval`**: Amount of time in milliseconds between refreshes of the Lead Master lease. This essentially controls the time @@ -311,7 +321,7 @@ modified with a `harpctl set node` command. Postgres node, and when the status of the node is updated in the Consensus layer. - * Default: 5000 + * Default: 2000 - **`max_dcs_failures`**: The number of DCS request failures before marking a node as fenced according to `fence_node_on_dcs_failure`. This prevents transient communication disruptions from shutting down database nodes. * Default: 10 @@ -321,7 +331,7 @@ modified with a `harpctl set node` command. take the Lead Master lock. This prevents nodes experiencing terminal amounts of lag from taking the Lead Master lock. Set to -1 to disable this check. - * Default: 1048576 (1MB) + * Default: -1 - **`maximum_camo_lag`**: Highest allowable variance (in bytes) between last received LSN and applied LSN between this node and its CAMO partner(s). @@ -331,7 +341,7 @@ modified with a `harpctl set node` command. this very low, or even to 0 to avoid any unapplied CAMO transactions. 
Set to -1 to disable this check. - * Default: 1048576 (1MB) + * Default: -1 - **`ready_status_duration`**: Amount of time in seconds the node's readiness status will persist if not refreshed. This is a failsafe that will remove a @@ -351,14 +361,6 @@ modified with a `harpctl set node` command. * Default: 100 -- **`safety_interval`**: Time in milliseconds required before allowing routing - to a newly promoted Lead Master. This is intended to allow automated checks - against HARP Router to fail across the cluster before transitioning new - connections to the promoted node. This helps enforce fully synchronized - routing targets. 0 to disable. - - * Default: 100 - - **`stop_database_when_fenced`**: Rather than simply removing a node from all possible routing, stop the database on a node when it is fenced. This is an extra safeguard to prevent data from sources other than HARP Proxy from reaching the database, or in case proxies are unable to disconnect clients for some other reason. * Default: False @@ -392,7 +394,8 @@ without altering a configuration file. Many of these settings are direct mappings to their PgBouncer equivalent, and we will note these where relevant. Settings here should be set under a `proxies` YAML heading during bootstrap, or -modified with a `harpctl set proxy` command. +modified with a `harpctl set proxy` command. +Properties set via `harpctl set proxy` require a restart of the proxy. - **`auth_file`**: The full path to a PgBouncer-style `userlist.txt` file. HARP Proxy will use this file to store a `pgbouncer` user which will have @@ -529,5 +532,5 @@ When using `harpctl` to change any of these settings for all proxies, use the `global` keyword in place of the proxy name. 
Example: ```bash -harpctl set proxy global max_client_conn 1000 +harpctl set proxy global max_client_conn=1000 ``` diff --git a/product_docs/docs/harp/2.0/05_bootstrapping.mdx b/product_docs/docs/harp/2.0/05_bootstrapping.mdx index 5c285bbdf01..9808e5eb858 100644 --- a/product_docs/docs/harp/2.0/05_bootstrapping.mdx +++ b/product_docs/docs/harp/2.0/05_bootstrapping.mdx @@ -138,9 +138,9 @@ Once Nodes are bootstrapped, they should show up with a quick examination: ```bash > harpctl get nodes -Cluster Name Ready Role Type Location Fenced Lease Duration -------- ---- ----- ---- ---- -------- ------ -------------- -mycluster node1 true dc1 false 30 +Cluster Name Location Ready Fenced Allow Routing Routing Status Role Type Lock Duration +------- ---- -------- ----- ------ ------------- -------------- ---- ---- ------------- +mycluster bdra1 dc1 true false true ok primary bdr 30 ``` ## Proxy Bootstrapping @@ -166,7 +166,7 @@ cluster: proxies: monitor_interval: 5 default_pool_size: 20 - max_client_connections: 1000 + max_client_conn: 1000 database_name: bdrdb instances: - name: proxy1 diff --git a/product_docs/docs/harp/2.0/06_harp_manager.mdx b/product_docs/docs/harp/2.0/06_harp_manager.mdx index 1467c6ee639..797f8bc2f60 100644 @@ -11,7 +11,13 @@ ineligible nodes from Leader consideration. Every Postgres node in the cluster should have an associated HARP Manager. Other nodes may exist, but they will not be able to participate as Lead or -Shadow Master roles, or other functionality HARP supports in the future. +Shadow Master roles, or any other functionality that requires a HARP Manager. + +!!! Important + HARP Manager expects to be used to start and stop the database. Stopping HARP Manager + will stop the database. Starting HARP Manager will start the database if it + isn't already started. 
If another method is used to stop the database, then + HARP Manager will try to restart it. ## How it Works @@ -92,14 +98,14 @@ See [Configuration](04_configuration) for further details. This is the basic usage for HARP Manager: ```bash -Usage of ./harp_manager: +Usage of ./harp-manager: -f string Optional path to config file (shorthand) --config string Optional path to config file ``` -Note that there are no arguments to launch `harp_manager` as a forked daemon. +Note that there are no arguments to launch `harp-manager` as a forked daemon. This software is designed to be launched through systemd or within a container as a top-level process. This also means output is directed to STDOUT and STDERR for capture and access through journald or an attached container terminal. diff --git a/product_docs/docs/harp/2.0/07_harp_proxy.mdx b/product_docs/docs/harp/2.0/07_harp_proxy.mdx index 3e37f268a75..29307ac4041 100644 @@ -9,16 +9,43 @@ identity of the current Lead Master node and directs traffic to that location. In the event of a planned switchover or unplanned failover, it will automatically redirect to the new Lead Master node as dictated by the DCS. -You may select between pgbouncer or builtin for HARP Proxy. When using pgbouncer, -HARP Proxy is an interface layer between the DCS and PgBouncer. As such, PgBouncer -is a prerequisite and should be installed in addition, in order for HARP Proxy to +You may select between pgbouncer or builtin for HARP Proxy. If you do not specify +a proxy type, the default is `builtin`. When using pgbouncer, HARP Proxy is +an interface layer between the DCS and PgBouncer. As such, PgBouncer is a +prerequisite and should be installed in addition, in order for HARP Proxy to fully manage its activity. The builtin proxy does not require any additional software. When using builtin, HARP Proxy functions as a level 4 pass-through proxy. 
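Pulling the proxy-selection pieces together, here is a minimal sketch of a bootstrap `proxies` stanza that selects the built-in proxy. The structure and key names follow the bootstrap examples elsewhere in these docs; treat the exact values as illustrative only:

```yaml
cluster:
  name: mycluster

proxies:
  type: builtin          # or "pgbouncer", which additionally requires PgBouncer itself
  max_client_conn: 1000  # the only throttling option honored by the builtin proxy
  database_name: bdrdb
  instances:
    - name: proxy1
```

Omitting `type` entirely has the same effect as `type: builtin`, since that is the default.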
+ +# Builtin Proxy +## How it Works + +Upon starting, HARP Proxy will listen for incoming connections on the listening +address and listening port specified in the bootstrap file per proxy instance. +All application client traffic will then pass through Builtin Proxy into the +current Lead Master node for the Location where this proxy is operating. + +In the event the Lead Master lease is not set, HARP Proxy will disconnect all +connection traffic until a new Lead Master is established. This also applies +to circumstances when `harpctl promote` is used to invoke a planned transition +to a new Lead Master. The disconnect is immediate. + +## Configuration + +The built-in proxy is chosen by setting the proxy type to `builtin`. The only +other option applicable to the built-in proxy is `max_client_conn`, which +specifies the maximum allowed client connections. If `max_client_conn` is +higher than what the system can handle, it will be lowered to a setting +that is within the capability of the system the proxy is on. + # PgBouncer ## How it Works +!!! Note + If you need more PgBouncer configurability than HARP Proxy provides, + the recommended setup is to use the built-in proxy and have PgBouncer point to it. + Upon starting, HARP Proxy will launch PgBouncer if it is not already running, and leave client connections in a paused state. Afterwards, it will contact the DCS to determine the identity of the Lead Master, configure PgBouncer to use @@ -38,56 +65,18 @@ to a new Lead Master. It uses a PgBouncer `PAUSE` command for this, so existing sessions are allowed to complete any pending transactions before they are held in stasis. -## Configuration - -HARP Proxy expects the `dcs`, `cluster`, and `proxy` configuration stanzas. 
The -following is a functional example: - -```yaml -cluster: - name: mycluster - -dcs: - driver: etcd - endpoints: - - host1:2379 - - host2:2379 - - host3:2379 - -proxy: - name: proxy1 -``` - -## Usage - -This is the basic usage for HARP Proxy: - -```bash -Usage of ./harp_proxy: - -f string - Optional path to config file (shorthand) - --config string - Optional path to config file -``` - -Note that there are no arguments to launch `harp_proxy` as a forked daemon. -This software is designed to be launched through systemd or within a container -as a top-level process. This also means output is directed to STDOUT and STDERR -for capture and access through journald or an attached container terminal. - ## PgBouncer Configuration File -Since HARP Proxy currently utilizes PgBouncer for connection management and -redirection, a `pgbouncer.ini` file must exist. HARP Manager builds this file -based on various run-time directives as defined in the -[Proxy Directives](04_configuration) documentation. +When HARP Proxy utilizes PgBouncer for connection management and redirection, +a `pgbouncer.ini` file must exist. HARP Manager builds this file based on various +run-time directives as defined in the [Proxy Directives](04_configuration) documentation. This file will be located in the same folder as the `config.yml` used by HARP Proxy. Any PgBouncer process launched by HARP Proxy will use this configuration file, and it may be used for debugging or information purposes. Modifications to this automatically generated `pgbouncer.ini` file will be lost any time HARP Proxy is restarted, so use `harpctl set proxy` to alter these settings -instead. +instead. Calling `harpctl set proxy` does not update the `pgbouncer.ini` file until the proxy has been restarted. ## Disabling and Re-enabling HARP Proxy Node Management @@ -108,12 +97,12 @@ Proxy node management is enabled by default. 
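Because HARP Proxy regenerates `pgbouncer.ini` from the proxy directives stored in the DCS, inspecting the generated file can help when debugging. As a hypothetical sketch only, a generated file might contain entries along these lines, reusing directive values from the examples in this chapter (the host and port are placeholders, not values HARP is guaranteed to emit):

```ini
[databases]
; single entry pointing at the current Lead Master
bdrdb = host=192.0.2.10 port=5432

[pgbouncer]
auth_user = pgb_auth
auth_query = SELECT * FROM pg_catalog.pgbouncer_get_auth($1)
default_pool_size = 20
max_client_conn = 1000
```

Remember that any hand edits to this file are discarded on the next HARP Proxy restart; only changes made through `harpctl set proxy` persist.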
## Passthrough User Authentication -We strongly recommend configuring HARP Proxy to use the `auth_user` and +With pgbouncer, we strongly recommend configuring HARP Proxy to use the `auth_user` and `auth_query` run-time directives. If these are not set, the PgBouncer `userlist.txt` file must include username and password hash combinations for every user PgBouncer needs to authenticate on Postgres' behalf. -this should *not* be the `pgbouncer` user itself, as this is utilized by HARP +This should *not* be the `pgbouncer` user itself, as this is utilized by HARP Proxy as an admin-level user in order to operate the underlying PgBouncer service. @@ -154,8 +143,9 @@ cluster: proxies: monitor_interval: 5 default_pool_size: 20 - max_client_connections: 1000 + max_client_conn: 1000 auth_user: pgb_auth + type: pgbouncer auth_query: "SELECT * FROM pg_catalog.pgbouncer_get_auth($1)" database_name: bdrdb instances: @@ -174,19 +164,6 @@ harpctl set proxy global auth_user=pgb_auth Proxy will need a `.pgpass` file so that `auth_user` can authenticate against Postgres. -# Builtin Proxy -## How it Works - -Upon starting, HARP Proxy will listen for incoming connections on the listening -address and listening port specified in the bootstrap file per proxy instance. -All application client traffic will then pass through Builtin Proxy into the -current Lead Master node for the Location where this proxy is operating. - -In the event the Lead Master lease is not set, HARP Proxy will disconnect all -connection traffic until a new Lead Master is established. This also applies -to circumstances when `harpctl promote` is used to invoke a planned transition -to a new Lead Master. The disconnect is immediate. - ## Configuration HARP Proxy expects the `dcs`, `cluster`, and `proxy` configuration stanzas. 
The @@ -213,14 +190,14 @@ Each proxy will connect to the DCS to retrieve what hosts and ports to listen on This is the basic usage for HARP Proxy: ```bash -Usage of ./harp_proxy: +Usage of ./harp-proxy: -f string Optional path to config file (shorthand) --config string Optional path to config file ``` -Note that there are no arguments to launch `harp_proxy` as a forked daemon. +Note that there are no arguments to launch `harp-proxy` as a forked daemon. This software is designed to be launched through systemd or within a container as a top-level process. This also means output is directed to STDOUT and STDERR for capture and access through journald or an attached container terminal. diff --git a/product_docs/docs/harp/2.0/08_harpctl.mdx b/product_docs/docs/harp/2.0/08_harpctl.mdx index da01801e365..e8f52ed44dd 100644 --- a/product_docs/docs/harp/2.0/08_harpctl.mdx +++ b/product_docs/docs/harp/2.0/08_harpctl.mdx @@ -8,7 +8,7 @@ contents to fit desired cluster geometry. It can be used to e.g. examine node status, "promote" a node to Lead Master, disable/enable cluster management, bootstrap cluster settings, and so on. 
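As a quick orientation before the full synopsis, a few typical invocations, each taken verbatim from the command examples in this chapter:

```bash
# Examine cluster-wide and per-node status
harpctl get cluster
harpctl get nodes

# Change a proxy directive for every proxy (takes effect after a proxy restart)
harpctl set proxy global max_client_conn=1000

# Adjust attributes of a single node
harpctl set node mynode priority=500
```

Each command is covered in detail in the sections that follow.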
-## Synposis +## Synopsis ```bash $ harpctl --help @@ -167,9 +167,9 @@ Fetches information stored in the Consensus Layer for the current cluster: ```bash > harpctl get cluster -Name Leader Previous Leader Enabled Lock Duration ----- ------ --------------- ------- ------------- -mycluster true 30 +Name Enabled +---- ------- +mycluster true ``` ### `harpctl get leader` @@ -229,7 +229,7 @@ Example: > harpctl get node mynode Cluster Name Location Ready Fenced Allow Routing Routing Status Role Type Lock Duration ------- ---- -------- ----- ------ ------------- -------------- ---- ---- ------------- -mycluster mynode dc1 true false true primary bdr 30 +mycluster mynode dc1 true false true ok primary bdr 30 ``` @@ -242,11 +242,11 @@ Example: ```bash > harpctl get nodes -Cluster Name Location Ready Fenced Allow Routing Routing Status Role Type Lock Duration -------- ---- -------- ----- ------ ------------- -------------- ---- ---- ------------- -mycluster mynode dc1 true false true primary bdr 30 - -mycluster thatnode dc1 true false false primary bdr 30 +Cluster Name Location Ready Fenced Allow Routing Routing Status Role Type Lock Duration +------- ---- -------- ----- ------ ------------- -------------- ---- ---- ------------- +myclusters bdra1 dc1 true false true ok primary bdr 30 +myclusters bdra2 dc1 true false false N/A primary bdr 30 +myclusters bdra3 dc1 true false false N/A primary bdr 30 ``` ### `harpctl get proxy` @@ -355,8 +355,7 @@ object types: ### `harpctl set cluster` -Sets cluster-related attributes only. There's only one of these at the moment, -but future versions of HARP may add more. +Sets cluster-related attributes only. Example: @@ -364,14 +363,6 @@ Example: harpctl set cluster event_sync_interval=200 ``` -### `harpctl set location` - -Sets location-related attributes only. There are none of these at the moment, -and calling this command will result in an error regarding unrecognized options. - -!!! 
Note - This is a placeholder for future capabilities. - ### `harpctl set node` Sets node-related attributes for the named node. Any options mentioned in the @@ -388,7 +379,8 @@ harpctl set node mynode priority=500 Sets proxy-related attributes for the named proxy. Any options mentioned in the "Proxy Directives" section of the [Configuration](04_configuration) -documentation are valid here. +documentation are valid here. +Properties set this way require a restart of the proxy before the new value takes effect. Example: diff --git a/product_docs/docs/harp/2.0/10_security.mdx b/product_docs/docs/harp/2.0/10_security.mdx index 30001423fe1..1f36c98e47c 100644 --- a/product_docs/docs/harp/2.0/10_security.mdx +++ b/product_docs/docs/harp/2.0/10_security.mdx @@ -58,8 +58,8 @@ means the HARP-enabled user requires the following: GRANT bdr_superuser TO foobar; ``` -This may change in future versions of BDR, but currently access to the BDR -consensus model does require superuser equivalent permission. +Currently access to the BDR consensus model does require superuser equivalent +permission. !!! Important BDR Superusers *are not* Postgres superusers. The `bdr_superuser` role is diff --git a/product_docs/docs/harp/2.0/11_release-notes.mdx b/product_docs/docs/harp/2.0/11_release-notes.mdx deleted file mode 100644 index 61f8263289e..00000000000 --- a/product_docs/docs/harp/2.0/11_release-notes.mdx +++ /dev/null @@ -1,103 +0,0 @@ ---- -navTitle: Release Notes -title: Release Notes ---- -## Release 2.0.1 (2022-1-31) - -### Enhancements - -* BDR consensus now generally available - - HARP offers multiple options for Distributed Consensus Service (DCS) source: etcd and BDR. The BDR consensus option can be used in deployments where etcd is not present. Use of the BDR consensus option is no longer considered beta and is now supported for use in production environments. 
- -* Transport layer proxy now generally available - - HARP offers multiple proxy options for routing connections between the client application and database: application layer (L7) and transport layer (L4). The network layer 4 or transport layer proxy simply forwards network packets, whereas layer 7 terminates network traffic. The transport layer proxy, previously called simple proxy, is no longer considered beta and is now supported for use in production environments. - -## Release 2.0.0 (2021-12-01) - -### Engine - -* Complete rewrite of system in golang to optimize all operations -* Cluster state can now be bootstrapped or revised via YAML - -### Configuration - -* Rewritten in YAML -* Configuration file changed from `harp.ini` to `config.yml` - -### Enhancements - -* HARP Proxy deprecates need for HAProxy in supported architecture. - - The use of HARP Router to translate DCS contents into appropriate online or - offline states for HTTP-based URI requests meant a load balancer or HAProxy - was necessary to determine the Lead Master. HARP Proxy now does this - automatically without periodic iterative status checks. - -* Utilizes DCS key subscription to respond directly to state changes. - - With relevant cluster state changes, the cluster will respond immediately, thereby resulting in improved failover and switchover times. - -* Compatibility with etcd SSL settings. - - It is now possible to communicate with etcd through SSL encryption. - -* Zero transaction lag on switchover. - - The new lead node will not have transactions routed to it until all replicated transactions are replayed, thereby reducing the potential for conflicts. - -* Experimental BDR Consensus layer - - Using BDR Consensus as the Distributed Consensus Service (DCS) reduces amount of change needed for implementations. - -* Experimental Proxy - - Proxy implementation for increased session control. 
- -## Release 1.0.1 (2021-06-23) - -### Documentation - -* Standardize resolution of the `HARP` acronym - -### Bug fixes - -* Fix CAMO lag check to accommodate cases where `maximum_camo_lag` is set to `0` - -## Release 1.0 (2021-06-15) - -### Enhancements - -* `--dry-run` option added to `harpctl leader set` -* minimum configuration values will be enforced -* `lock_interval` parameter can be specified as fractions of a second -* logging and output improvements -* replication lag query updated to handle parallel apply - -### Bug fixes - -* `harpctl` returns an error code if `leader set` fails -* prevent corner-case failure when node peer progress not returned -* handle potentially empty node record -* catch unhandled exception when deciding the lead node candidate - -## Release 0.2 (2021-02-23) - -This is a maintenance release with following changes: - -* documentation available via the EnterpriseDB customer portal -* report non-availability of nodes other than the lead master -* when using BDR as a DCS layer, fix potential failure situations when a - BDR node is not running -* fixes RPM packaging issue preventing a new start on fresh installations - -## Release 0.1 (2020-08-13) - -This is an initial beta release providing HARP support for BDR, including: - -* Usage of native BDR (3.6.21 and later) as a consensus layer -* Usage of etcd as a consensus layer - -Note that currently HARP does not support operation on a physical streaming -replica when BDR is used as a consensus layer.