Merge pull request #2796 from EnterpriseDB/release/2022-06-14a
Release: 2022-06-14a
drothery-edb authored Jun 14, 2022
2 parents c18c31c + daf9c24 commit e139915
Showing 29 changed files with 3,633 additions and 76 deletions.
@@ -50,13 +50,13 @@ You can install all Replication Server components with a single install command,
To install all Replication Server components:

```shell
-dnf -y install edb-xdb
+zypper -y install edb-xdb
```

To install an individual component:

```shell
-dnf install package_name
+zypper install package_name
```

Where `package_name` is:
@@ -50,13 +50,13 @@ You can install all Replication Server components with a single install command,
To install all Replication Server components:

```shell
-dnf -y install edb-xdb
+zypper -y install edb-xdb
```

To install an individual component:

```shell
-dnf install package_name
+zypper install package_name
```

Where `package_name` is:
@@ -17,8 +17,10 @@ For platform-specific install instructions, see:
- [CentOS 7](x86_amd64/04_hadoop_centos7_x86)
- [SLES 15](x86_amd64/05_hadoop_sles15_x86)
- [SLES 12](x86_amd64/07_hadoop_sles12_x86)
-- [Ubuntu 20.04 or 18.04/Debian 10](x86_amd64/09_hadoop__ubuntu18_20_deb10_x86)
-- [Debian 9](x86_amd64/11_hadoop__deb9_x86)
+- [Ubuntu 20.04](x86_amd64/09_hadoop_ubuntu20_x86)
+- [Ubuntu 18.04](x86_amd64/09a_hadoop_ubuntu18_x86)
+- [Debian 10](x86_amd64/09b_hadoop_deb10_x86)
+- [Debian 9](x86_amd64/11_hadoop_deb9_x86)

- Linux on IBM Power (ppc64le):

@@ -1,6 +1,6 @@
---
-title: "Installing Hadoop Foreign Data Wrapper on Ubuntu 20.04 or 18.04/Debian 10 x86"
-navTitle: "Ubuntu 20.04 or 18.04/Debian 10"
+title: "Installing Hadoop Foreign Data Wrapper on Ubuntu 20.04 x86"
+navTitle: "Ubuntu 20.04"
---

To install the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, you must have credentials that allow access to the EDB repository. To request credentials for the repository, visit the [EDB website](https://www.enterprisedb.com/repository-access-request/).
@@ -0,0 +1,54 @@
---
title: "Installing Hadoop Foreign Data Wrapper on Ubuntu 18.04 x86"
navTitle: "Ubuntu 18.04"
---

To install the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, you must have credentials that allow access to the EDB repository. To request credentials for the repository, visit the [EDB website](https://www.enterprisedb.com/repository-access-request/).

The following steps walk you through using the EDB apt repository to install a Debian package. When running the commands, replace `username` and `password` with the credentials provided by EDB.

1. Assume superuser privileges:

```text
sudo su -
```

2. Set up the EDB repository:

```text
sh -c 'echo "deb [arch=amd64] https://apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
```

3. Substitute your EDB credentials for the `username` and `password` in the following command:

```text
sh -c 'echo "machine apt.enterprisedb.com login <username> password <password>" > /etc/apt/auth.conf.d/edb.conf'
```

4. Add support to your system for secure APT repositories:

```text
apt-get install apt-transport-https
```

5. Add the EDB signing key:

```text
wget -q -O - https://username:password@apt.enterprisedb.com/edb-deb.gpg.key | apt-key add -
```

6. Update the repository metadata:

```text
apt-get update
```

7. Install the package:

```text
apt-get install edb-as<xx>-hdfs-fdw
```

where `xx` is the server version number.
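Steps 2 and 3 above boil down to plain string construction. The following is a minimal sketch, assuming an Ubuntu 18.04 ("bionic") host, where `lsb_release -cs` would print the codename; `repo_user` and `repo_pass` are placeholder credentials, not real ones:

```shell
# Build the repository line from step 2 and the auth line from step 3.
# "bionic" stands in for the output of `lsb_release -cs` on Ubuntu 18.04.
codename=bionic
repo_line="deb [arch=amd64] https://apt.enterprisedb.com/${codename}-edb/ ${codename} main"
auth_line="machine apt.enterprisedb.com login repo_user password repo_pass"
echo "$repo_line"
echo "$auth_line"
```

On a real host you would redirect these lines, as root, into `/etc/apt/sources.list.d/edb-${codename}.list` and `/etc/apt/auth.conf.d/edb.conf` respectively, exactly as the numbered steps show.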

@@ -0,0 +1,54 @@
---
title: "Installing Hadoop Foreign Data Wrapper on Debian 10 x86"
navTitle: "Debian 10"
---

To install the Hadoop Foreign Data Wrapper on a Debian or Ubuntu host, you must have credentials that allow access to the EDB repository. To request credentials for the repository, visit the [EDB website](https://www.enterprisedb.com/repository-access-request/).

The following steps walk you through using the EDB apt repository to install a Debian package. When running the commands, replace `username` and `password` with the credentials provided by EDB.

1. Assume superuser privileges:

```text
sudo su -
```

2. Set up the EDB repository:

```text
sh -c 'echo "deb [arch=amd64] https://apt.enterprisedb.com/$(lsb_release -cs)-edb/ $(lsb_release -cs) main" > /etc/apt/sources.list.d/edb-$(lsb_release -cs).list'
```

3. Substitute your EDB credentials for the `username` and `password` in the following command:

```text
sh -c 'echo "machine apt.enterprisedb.com login <username> password <password>" > /etc/apt/auth.conf.d/edb.conf'
```

4. Add support to your system for secure APT repositories:

```text
apt-get install apt-transport-https
```

5. Add the EDB signing key:

```text
wget -q -O - https://username:password@apt.enterprisedb.com/edb-deb.gpg.key | apt-key add -
```

6. Update the repository metadata:

```text
apt-get update
```

7. Install the package:

```text
apt-get install edb-as<xx>-hdfs-fdw
```

where `xx` is the server version number.
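The file-writing steps above can be exercised without root by staging them under a scratch prefix. This is an illustrative sketch only: `DESTDIR` would be empty (that is, `/`) on a real Debian 10 host, and `repo_user`/`repo_pass` are placeholder credentials:

```shell
# Stage the two EDB apt config files under a throwaway prefix.
# "buster" is Debian 10's codename, as `lsb_release -cs` would print it.
DESTDIR=$(mktemp -d)
codename=buster
mkdir -p "$DESTDIR/etc/apt/sources.list.d" "$DESTDIR/etc/apt/auth.conf.d"
echo "deb [arch=amd64] https://apt.enterprisedb.com/${codename}-edb/ ${codename} main" \
    > "$DESTDIR/etc/apt/sources.list.d/edb-${codename}.list"
echo "machine apt.enterprisedb.com login repo_user password repo_pass" \
    > "$DESTDIR/etc/apt/auth.conf.d/edb.conf"
chmod 600 "$DESTDIR/etc/apt/auth.conf.d/edb.conf"   # keep credentials out of world-readable files
```

Restricting the permissions on the credentials file is a general apt `auth.conf` precaution rather than something the numbered steps require.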

@@ -12,7 +12,9 @@ For operating system-specific install instructions, see:
- [CentOS 7](04_hadoop_centos7_x86)
- [SLES 15](05_hadoop_sles15_x86)
- [SLES 12](07_hadoop_sles12_x86)
-- [Ubuntu 20.04 or 18.04/Debian 10](09_hadoop__ubuntu18_20_deb10_x86)
-- [Debian 9](11_hadoop__deb9_x86)
+- [Ubuntu 20.04](09_hadoop_ubuntu20_x86)
+- [Ubuntu 18.04](09a_hadoop_ubuntu18_x86)
+- [Debian 10](09b_hadoop_deb10_x86)
+- [Debian 9](11_hadoop_deb9_x86)

After you complete the installation, see [Configuring the Hadoop Foreign Data Wrapper](../../08_configuring_the_hadoop_data_adapter).
58 changes: 0 additions & 58 deletions product_docs/docs/harp/2/07_harp_proxy.mdx
@@ -104,64 +104,6 @@ Do *not* use the `pgbouncer` user, as this is used by HARP
Proxy as an admin-level user to operate the underlying PgBouncer
service.

In clusters administered by TPAexec, a function is created and installed
in the `pg_catalog` schema in the `template1` database during provisioning.
This means any databases created later also include the function,
and it's available to PgBouncer regardless of the database the user is
attempting to contact.

If TPAexec isn't used, we still recommend this function definition:

```sql
CREATE OR REPLACE FUNCTION pg_catalog.pgbouncer_get_auth(p_usename TEXT)
RETURNS TABLE(username TEXT, password TEXT) AS $$
BEGIN
RETURN QUERY
SELECT usename::TEXT, passwd::TEXT FROM pg_catalog.pg_shadow
WHERE usename = p_usename;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION pg_catalog.pgbouncer_get_auth(p_usename TEXT)
FROM PUBLIC;

GRANT EXECUTE ON FUNCTION pg_catalog.pgbouncer_get_auth(p_usename TEXT)
TO <auth_user>;
```

Replace `<auth_user>` with the `auth_user` value supplied to
HARP Proxy.

Then in the Bootstrap file, the following completes the configuration:

```yaml
cluster:
name: mycluster

proxies:
monitor_interval: 5
default_pool_size: 20
max_client_conn: 1000
auth_user: pgb_auth
type: pgbouncer
auth_query: "SELECT * FROM pg_catalog.pgbouncer_get_auth($1)"
database_name: bdrdb
instances:
- name: proxy1
- name: proxy2
```

You can also define these fields with `harpctl set proxy`:

```bash
harpctl set proxy global auth_user=pgb_auth
```

!!! Note
This means the `postgres` or `enterprisedb` OS user that launches HARP
Proxy needs a `.pgpass` file so that `auth_user` can authenticate
against Postgres.
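As a sketch of that note, the following appends a hypothetical `.pgpass` entry for the OS user that launches HARP Proxy. `pgb_auth` matches the `auth_user` in the example Bootstrap file above; the `bdrdb` database and `example_password` are placeholders:

```shell
# Append a libpq password-file entry: hostname:port:database:username:password.
# Wildcards for host and port; narrow them in production.
pgpass="${PGPASSFILE:-$HOME/.pgpass}"
echo '*:*:bdrdb:pgb_auth:example_password' >> "$pgpass"
chmod 600 "$pgpass"   # libpq ignores .pgpass unless only the owner can read it
```

libpq consults this file automatically whenever PgBouncer's `auth_user` connection omits an explicit password.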

### Configuration

HARP Proxy expects the `dcs`, `cluster`, and `proxy` configuration stanzas. The