From cec55a1bf96f49e02f9ae9e117e84b8477087215 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 20 Jul 2023 17:24:06 -0400 Subject: [PATCH 01/65] First edit of Hashicorp/Valut --- .../HashicorpVault/02-PartnerInformation.mdx | 10 +-- .../HashicorpVault/03-SolutionSummary.mdx | 6 +- .../04-ConfiguringHashicorpVault.mdx | 82 +++++++++++-------- .../HashicorpVault/05-UsingHashicorpVault.mdx | 68 +++++++-------- .../06-CertificationEnvironment.mdx | 6 +- .../HashicorpVault/07-SupportandLogging.mdx | 20 ++--- .../partner_docs/HashicorpVault/index.mdx | 10 +-- 7 files changed, 109 insertions(+), 93 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx index 4c6ff94a89e..b4d080e302a 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Hashicorp | -| **Web Site** | https://www.hashicorp.com/ | -| **Partner Product** | Vault | +| **Partner name** | Hashicorp | +| **Website** | https://www.hashicorp.com/ | +| **Partner product** | Vault | | **Version** | Vault v1.12.6+ent, v1.13.2+ent | -| **Product Description** | Hashicorp Vault is an identity-based secrets and encryption management system. Used in conjunction with EDB Postgres Advanced Server or EDB Postgres Extended Server, it allows users to control access to encryption keys and certificates, as well as perform key management. | \ No newline at end of file +| **Product description** | Hashicorp Vault is an identity-based secrets and encryption management system. 
Used with EDB Postgres Advanced Server or EDB Postgres Extended Server, it allows users to control access to encryption keys and certificates, as well as perform key management. | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx index 0e7d5e52a79..03fb10e1c11 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx @@ -1,10 +1,10 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Hashicorp Vault is an identity-based secrets and encryption management system. Used in conjunction with EDB Postgres Advanced Server versions 15.2 and above or EDB Postgres Extended Server versions 15.2 and above, it allows users to control access to encryption keys and certificates, as well as perform key management. Using Hashicorp Vault’s KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of KMIP managed objects. +Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server versions 15.2 and later or EDB Postgres Extended Server versions 15.2 and later, it allows users to control access to encryption keys and certificates, as well as perform key management. Using Hashicorp Vault’s KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of KMIP-managed objects. -Hashicorp Vault’s KMIP secrets engine manages its own listener to service any KMIP requests that operate on KMIP managed objects. The KMIP secrets engine determines the set of KMIP operations that the clients can perform based on roles that are assigned. +Hashicorp Vault’s KMIP secrets engine manages its own listener to service any KMIP requests that operate on KMIP-managed objects. 
The KMIP secrets engine determines the set of KMIP operations that the clients can perform based on roles that are assigned. ![Hashicorp Vault Architecture](Images/HashicorpVaultSolutionSummaryImage.png) diff --git a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx index 5c819cc989f..ae2c9aafa89 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx @@ -1,41 +1,45 @@ --- title: 'Configuration' -description: 'Walkthrough on configuring the integration' +description: 'Walkthrough of configuring the integration' --- -Implementing Hashicorp Vault with EDB Postgres Advanced Server version 15.2 and above and EDB Postgres Extended Server version 15.2 and above requires the following components: -!!! Note - The EDB Postgres Advanced Server version 15.2 and above and EDB Postgres Extended Server version 15.2 and above, products will be referred to as EDB Postgres distribution. The specific distribution type will be dependent upon customer need or preference. +Implementing Hashicorp Vault with EDB Postgres Advanced Server version 15.2 and later and EDB Postgres Extended Server version 15.2 and later requires the following components: - EDB Postgres Distribution (15.2 or later) - Hashicorp Vault Enterprise version 1.13.2+ent or 1.12.6+ent - [Pykmip](https://pypi.org/project/PyKMIP/#files) - Python +!!! Note + The EDB Postgres Advanced Server version 15.2 and later and EDB Postgres Extended Server version 15.2 and later products will be referred to as EDB Postgres distribution. The specific distribution type depends on customer need or preference. 
+

## Prerequisites

- A running EDB Postgres distribution with Python and pykmip installed
- Hashicorp Vault Enterprise edition with enterprise licensing installed and deployed per your VM environment

-## Check/Install Python on Server
+## Check/install Python on server
+
+Many Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default as it's included in a base installation.

-Many Unix-compatible operating systems such as macOS and some Linux distributions have Python installed by default as it is included in a base installation.
+To check your version of Python on your machine, or to see if it's installed, enter `python3`. The Python version is returned. You can also enter `ps -ef |grep python` to return a running Python process.

-To check your version of Python on your machine, or to see if it is installed, simply type `python3` and it will return the version. You can also type `ps -ef |grep python` to return a python running process.

```bash
root@ip-172-31-46-134:/home/ubuntu# python
Python 3.8.10 (default, May 26 2023, 14:05:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```

-If you run a check and find that your system does not have Python installed, you can follow the docs and download it from [Python.org](https://www.python.org/downloads/). Simply select your specific OS and download and install on your system.
+
+If you run a check and find that your system doesn't have Python installed, you can download it from [Python.org](https://www.python.org/downloads/). Select your specific OS and download and install it on your system.

## Install Pykmip

-Once you have your EDB Repository installed on your server, you can then install the Pykmip utility that is needed.
+Once your EDB Repository is installed on your server, you can then install the Pykmip utility.
+
+- As root user, issue the `install python3-pykmip` command. 
This example uses a RHEL8 server, so the command is `dnf install python3-pykmip`.

-The output should look something like:
+The output looks something like:
+
```bash
[root@ip-172-31-7-145 ec2-user]# dnf install python3-pykmip
Updating Subscription Management repositories.
@@ -88,46 +92,51 @@ Installed:
Complete!
```

-## Configure Hashicorp Vault KMIP Secrets Engine
+## Configure Hashicorp Vault KMIP secrets engine

!!! Note
- You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port. If you receive this error message “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client” you need to issue this in your command line export VAULT_ADDR="http://127.0.0.1:8200".
+ You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port. If you receive the error message “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client,” you need to issue this at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`.

-1. After your Hashicorp Vault configuration is installed and deployed per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you will then need to enable the KMIP capabilities.
+After your Hashicorp Vault configuration is installed and deployed per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you then need to enable the KMIP capabilities.

-2. Assume root user.
+1. Assume root user.
+
+1. As the root user, enter `vault secrets enable kmip`:

-3. 
When you are the root user, type `vault secrets enable kmip`.

```bash
root@ip-172-31-46-134:/home/ubuntu# vault secrets enable kmip
Success! Enabled the kmip secrets engine at: kmip/
```

-4. You will then need to configure the Hashicorp Vault secrets engine with the desired kmip listener address.
+ You then need to configure the Hashicorp Vault secrets engine with the desired kmip listener address.
+
+5. Enter `vault write kmip/config listen_addrs=0.0.0.0:5696`:

-5. Enter `vault write kmip/config listen_addrs=0.0.0.0:5696`.

```bash
root@ip-172-31-46-134:/home/ubuntu# vault write kmip/config listen_addrs=0.0.0.0:5696
Success! Data written to: kmip/config
```

-6. Enter `vault write -f kmip/scope/*scope_name*` to create the scope that will be used to define the allowed operations a role can perform.
+6. To create the scope to use to define the allowed operations a role can perform, enter `vault write -f kmip/scope/*scope_name*`:
+
```bash
root@ip-172-31-46-134:/home/ubuntu# vault write -f kmip/scope/edb
Success! Data written to: kmip/scope/edb
```

!!! Note
- To view your scopes you have created you can enter `vault list kmip/scope`.
+ To view the scopes you created, enter `vault list kmip/scope`.

-7. Enter `vault write kmip/scope/*scope_name*/role/*role_name* operation_all=true` to define the role for the scope. In our example the role of `admin` is for the scope `edb`.
+7. To define the role for the scope, enter `vault write kmip/scope/*scope_name*/role/*role_name* operation_all=true`. In this example, the role of `admin` is for the scope `edb`:

```bash
root@ip-172-31-46-134:/home/ubuntu# vault write kmip/scope/edb/role/admin operation_all=true
Success! Data written to: kmip/scope/edb/role/admin
```

-8. You can read your scope and role with this command `vault read kmip/scope/*scope_name*/role/*role_name*`
+8. 
You can read your scope and role with this command `vault read kmip/scope/*scope_name*/role/*role_name*`:
+
```bash
root@ip-172-31-46-134:/home/ubuntu# vault read kmip/scope/edb/role/admin
Key                     Value
@@ -138,30 +147,33 @@ tls_client_key_type    n/a
tls_client_ttl         0s
```

-## Generate Client Certificates
+## Generate client certificates

-After a scope and role have been created you will need to generate client certificates that will be used within your pykmip.conf file for key management. These certificates can be used to establish communication with Hashicorp Vault’s KMIP Server.
+After you create a scope and a role, you need to generate client certificates to use in your `pykmip.conf` file for key management. You can use these certificates to establish communication with Hashicorp Vault’s KMIP server.

-1. Generate the client certificate, this will provide the CA Chain, the private key and the certificate.
+1. Generate the client certificate, which provides the CA chain, the private key, and the certificate.

-2. Enter `vault write -f -field=certificate \ kmip/scope/*scope_name*/role/*role_name*/credential/generate > *certificate_name*.pem`.
+2. Enter `vault write -f -field=certificate \ kmip/scope/*scope_name*/role/*role_name*/credential/generate > *certificate_name*.pem`.

-In our example we used role: `edb`, scope: `admin` and certificate name: `kmip-cert.pem`.
+This example uses the scope `edb`, the role `admin`, and the certificate name `kmip-cert.pem`.

```bash
root@ip-172-31-46-134:/home/ubuntu# vault write -f -field=certificate \ kmip/scope/edb/role/admin/credential/generate > kmip-cert.pem
```

-3. To view your certificates type `cat *certificate_name*.pem*` and this will return the certificates from Hashicorp Vault.
+3. To view your certificates, enter `cat *certificate_name*.pem`, which returns the certificates from Hashicorp Vault.
+
```bash
root@ip-172-31-46-134:/home/ubuntu# cat kmip-cert.pem
```

-4. You will need to separate the individual certificates into `.pem` files so they can be used in your pykmip.conf file.
+4. 
You need to separate the individual certificates into `.pem` files so they can be used in your `pykmip.conf` file. + !!! Note - Make sure to include ----BEGIN ------ and ----END ------ in the .pem certificate files. + Make sure to include ----BEGIN ------ and ----END ------ in the `.pem` certificate files. + +5. Create a `key.pem` file that contains the private key in the certificate chain: -5. Create a `key.pem` file contains the private key in the certificate chain. ```bash ubuntu@ip-172-31-46-134:/tmp$ cat key.pem -----BEGIN EC PRIVATE KEY----- @@ -171,7 +183,8 @@ wmmW4klCuDzRdSBvtdcA5LguWrSBimKXDw== -----END EC PRIVATE KEY----- ``` -6. Create a `cert.pem` file contains the first certificate in the certificate chain. +6. Create a `cert.pem` file that contains the first certificate in the certificate chain: + ```bash ubuntu@ip-172-31-46-134:/tmp$ cat cert.pem -----BEGIN CERTIFICATE----- @@ -188,7 +201,8 @@ Xlg2U8LToGCBEvf1quZU7T8ZQkbQCA== -----END CERTIFICATE----- ``` -7. Create a `ca.pem` file contains the last two certificates in the certificate chain. +7. Create a `ca.pem` file that contains the last two certificates in the certificate chain: + ```bash ubuntu@ip-172-31-46-134:/tmp$ cat ca.pem -----BEGIN CERTIFICATE----- @@ -216,4 +230,4 @@ IgIhAMb3y3xRXwddt2ejaow1GytysRz4LoxC3B5dLn1LoCpI -----END CERTIFICATE----- ``` -Now that you have all of the required certificates you are ready to use Hashicorp Vault Secrets Engine with your EDB Postgres distribution with TDE. \ No newline at end of file +Now that you have all of the required certificates, you are ready to use Hashicorp Vault secrets engine with your EDB Postgres distribution with TDE. 
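Separating the bundle into `key.pem`, `cert.pem`, and `ca.pem` by hand is error-prone. As a sketch, assuming the bundle contains exactly one private key plus a certificate chain as in the example output (the `split_pem_chain` helper name and file layout are illustrative, not part of Vault or the EDB tooling):

```python
import re

def split_pem_chain(bundle_path, key_path="key.pem", cert_path="cert.pem", ca_path="ca.pem"):
    """Split a combined PEM bundle (private key + certificate chain) into the
    key.pem, cert.pem, and ca.pem files referenced by pykmip.conf, keeping the
    -----BEGIN/END----- markers intact."""
    with open(bundle_path) as f:
        text = f.read()
    # Collect every PEM block, markers included.
    blocks = re.findall(r"-----BEGIN [^-]+-----.*?-----END [^-]+-----", text, re.DOTALL)
    keys = [b for b in blocks if "PRIVATE KEY" in b]
    certs = [b for b in blocks if "PRIVATE KEY" not in b]
    if len(keys) != 1 or len(certs) < 2:
        raise ValueError("expected one private key and at least two certificates")
    with open(key_path, "w") as f:
        f.write(keys[0] + "\n")               # the private key
    with open(cert_path, "w") as f:
        f.write(certs[0] + "\n")              # first certificate in the chain
    with open(ca_path, "w") as f:
        f.write("\n".join(certs[1:]) + "\n")  # remaining CA certificates
```

Running, for example, `split_pem_chain('/tmp/kmip-cert.pem')` would then leave the three files ready for the paths used in `pykmip.conf`.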
\ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx index 41377f665c7..bad76683535 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx @@ -3,24 +3,26 @@ title: 'Using' description: 'Walkthrough of example usage scenarios' --- -After you have configured all of the Hashicorp Vault certificates, as stated in the Configuring section, you will be able to use them in conjunction with your EDB Postgres distribution. +After you have configured all of the Hashicorp Vault certificates, you can use them with your EDB Postgres distribution. !!! Note - It is important to note that this doc is intended for versions 15.2 and above of EDB Postgres Advanced Server and versions 15.2 and above of EDB Postgres Extended Server as these versions support Transparent Data Encryption (TDE). + This content is intended for versions 15.2 and later of EDB Postgres Advanced Server and versions 15.2 and later of EDB Postgres Extended Server, as only these versions support transparent data encryption (TDE). -To implement Hashicorp Vault Secrets Engine with your EDB Postgres distribution, you must ensure that you have the following downloaded to your system: +To implement Hashicorp Vault secrets engine with your EDB Postgres distribution, ensure that you have the following downloaded to your system: - Python - [pykmip](https://pypi.org/project/PyKMIP/#files) -- edb-tde-kmip-client downloaded from your EDB Repos access +- `edb-tde-kmip-client` downloaded from your EDB Repos access -All of the `.pem` files that you created in the Configuring section, `key.pem`, `cert.pem` and `ca.pem`, need to be copied to the system where your EDB Postgres distribution is installed. 
For our example, all of the `.pem` files and the `edb_tde_kmip_client.py` program are in the `/tmp/` directory.
+You need to copy all of the `.pem` files that you created in the Configuring section—`key.pem`, `cert.pem`, and `ca.pem`—to the system where your EDB Postgres distribution is installed. For this example, all of the `.pem` files and the `edb_tde_kmip_client.py` program are in the `/tmp/` directory.

-## Check Prerequisites and Download edb-tde-kmip-client
-Ensure that you have the prerequisite software (Python and Pykmip) installed on your system as stated in the Configuring section.
+## Check prerequisites and download edb-tde-kmip-client

Ensure that you have the prerequisite software (Python and Pykmip) installed on your system.

-To install the edb-tde-kmip-client on your system assume `root` user and issue the install command for `edb-tde-kmip-client`. For our example we installed it on a RHEL8 Server so it would be `dnf install edb-tde-kmip-client`.
+To install the `edb-tde-kmip-client` on your system, assume root user and issue the install command for `edb-tde-kmip-client`. This example installs it on a RHEL8 server: `dnf install edb-tde-kmip-client`.

+Some output is returned that looks like the following:

-You should receive some output that looks like the following:
```bash
[root@ip-172-31-7-145 ec2-user]# dnf install edb-tde-kmip-client
Updating Subscription Management repositories.
@@ -61,9 +63,9 @@ Complete!

## Create pykmip.conf File

-1. On your system where you have your EDB Postgres distribution, navigate to the directory where you have saved your `.pem` files and the `edb_tde_kmip_client.py` client.
+1. On your system where you have your EDB Postgres distribution, navigate to the directory where you saved your `.pem` files and the `edb_tde_kmip_client.py` client.

-2. In that directory create a file called `pykmip.conf` and input the following:
+2. 
In that directory, create a file called `pykmip.conf` and input the following: - Host - Port - Keyfile @@ -82,13 +84,13 @@ ca_certs=/tmp/ca.pem ``` !!! Note - For more information on the pykmip.conf file and the contents of it you can visit the [pykmip documentation](https://pykmip.readthedocs.io/en/latest/client.html). + For more information on the `pykmip.conf` file and its contents, see the [pykmip documentation](https://pykmip.readthedocs.io/en/latest/client.html). -## Create a Key on Hashicorp Vault Secrets Engine +## Create a key on Hashicorp Vault secrets engine 1. On your system where you have your EDB Postgres distribution, assume root user to create the key on the Hashicorp Vault Secrets Engine. -2. Type `python3` and then input the following, making adjustments per your system setup and directory paths: +2. Enter `python3` and then input the following, making adjustments per your system setup and directory paths: ```bash >>> from kmip.pie import client @@ -103,22 +105,22 @@ ca_certs=/tmp/ca.pem >>> c.close() ``` -3. If this runs without error then your key has been successfully created. You cannot view keys that you create in Hashicorp Vault. +3. If this runs without error, then your key was successfully created. You can't view keys that you create in Hashicorp Vault. -## Verify Encryption and Decryption +## Verify encryption and decryption -To ensure that your key you created will be able to encrypt and decrypt data, run the following two commands as the root user on your system with your EDB Postgres distribution. +To ensure that your key you created can encrypt and decrypt data, run the following two commands as the root user on your system with your EDB Postgres distribution: 1. 
`printf secret | python3 /tmp/edb_tde_kmip_client.py encrypt --out-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='key_output_here' --variant=pykmip`

- Location of the KMIP Client: `/tmp/edb_tde_kmip_client.py`
- Output file: `test.bin`
- Location of pykmip configuration file: `/tmp/pykmip.conf`
- Encrypted key output: TDE key output
- Variant: Allows for KMIP compatibility with HashiCorp Vault

2. `python3 /tmp/edb_tde_kmip_client.py decrypt --in-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='key_output_here' --variant=pykmip`

   If this command is successful, it produces the output of `secret`:

```bash
root@ip-172-31-46-134:/etc/vault.d# printf secret | python3 /tmp/edb_tde_kmip_client.py encrypt --out-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='nfTCV2Cp5sffhQuRrOVfgCUyu8qh9kwd' --variant=pykmip
Secret
root@ip-172-31-46-134:/etc/vault.d#
```

## Perform initdb for the database

-After you have completed the above steps you will be able to export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database.
+After you complete the previous steps, you can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database.

-1. Login to your EDB Postgres distribution as the Superuser. For our example: **enterprisedb user**, `sudo su - enterprisedb`.
+1. Log in to your EDB Postgres distribution as the superuser. For our example, use the enterprisedb user: `sudo su - enterprisedb`.

-2. Navigate to the `/bin` directory where your executables live. In our example it is `/usr/lib/edb-as/15/bin`.
+2. 
Navigate to the `/bin` directory where your executables are. In this example, it's `/usr/lib/edb-as/15/bin`.

-3. Type: `export PGDATAKEYWRAPCMD='python3 /tmp/edb_tde_kmip_client.py encrypt --pykmip-config-file=/tmp/pykmip.conf --key-uid=key_ouput_here --out-file=%p --variant=pykmip’`
+3. Enter `export PGDATAKEYWRAPCMD='python3 /tmp/edb_tde_kmip_client.py encrypt --pykmip-config-file=/tmp/pykmip.conf --key-uid=key_output_here --out-file=%p --variant=pykmip'`

-4. Type: `export PGDATAKEYUNWRAPCMD='python3 /tmp/edb_tde_kmip_client.py decrypt --pykmip-config-file=/tmp/pykmip.conf --key-uid=key_output_here --in-file=%p --variant=pykmip’`
+4. Enter `export PGDATAKEYUNWRAPCMD='python3 /tmp/edb_tde_kmip_client.py decrypt --pykmip-config-file=/tmp/pykmip.conf --key-uid=key_output_here --in-file=%p --variant=pykmip'`

```bash
enterprisedb@ip-172-31-46-134:/usr/lib/edb-as/15/bin$ export PGDATAKEYWRAPCMD='python3 /tmp/edb_tde_kmip_client.py encrypt --pykmip-config-file=/tmp/pykmip.conf --key-uid=nfTCV2Cp5sffhQuRrOVfgCUyu8qh9kwd --out-file=%p --variant=pykmip'
enterprisedb@ip-172-31-46-134:/usr/lib/edb-as/15/bin$ export PGDATAKEYUNWRAPCMD=
```

5. Perform your initdb per your database requirements, for example: `./initdb -D dd12 -y`.
-6. If all is successful you should get an output that looks like this:
+6. If all is successful, your output looks like this:
+
```bash

enterprisedb@ip-172-31-46-134:/usr/lib/edb-as/15/bin$ ./initdb -D /var/lib/edb-as/15/dd12 -y
@@ -280,7 +283,8 @@ Success. You can now start the database server using:
```

-7. Start your database and navigate to your `/data` directory to view the postgresql.conf file to ensure that your `data_encryption_key_unwrap_command`, which you set with your `export PGDATAUNWRAPCMD`, is present under the Authentication section.
+7. 
Start your database and navigate to your `/data` directory to view the `postgresql.conf` file to ensure that your `data_encryption_key_unwrap_command`, which you set with your `export PGDATAKEYUNWRAPCMD`, is present under the Authentication section.
+
```bash
# - Authentication -
@@ -315,6 +319,4 @@ data_encryption_key_unwrap_command = 'python3 /tmp/edb_tde_kmip_client.py decryp
```

-
-For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server visit the [EDB Transparent Data Encryption](https://www.enterprisedb.com/docs/tde/latest/) documentation.
-
+For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server, see the [EDB Transparent Data Encryption](/tde/latest/) documentation.
diff --git a/advocacy_docs/partner_docs/HashicorpVault/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/HashicorpVault/06-CertificationEnvironment.mdx
index 7fe7f9fb06a..557bb054e31 100644
--- a/advocacy_docs/partner_docs/HashicorpVault/06-CertificationEnvironment.mdx
+++ b/advocacy_docs/partner_docs/HashicorpVault/06-CertificationEnvironment.mdx
@@ -1,11 +1,11 @@
---
-title: 'Certification Environment'
-description: 'Overview of the Certification Environment'
+title: 'Certification environment'
+description: 'Overview of the certification environment'
---

|   |   |
| ----------- | ----------- |
-| **Certification Test Date** | May 3rd, 2023 |
+| **Certification test date** | May 3rd, 2023 |
| **EDB Postgres Advanced Server** | 15.2 |
| **EDB Postgres Extended Server** | 15.2 |
| **Thales CipherTrust Manager** | Vault v1.12.6+ent, Vault v1.13.2+ent |
\ No newline at end of file
diff --git a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx
index 423d58ebc43..036ca793c92 100644
--- a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx
+++ 
b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx @@ -1,26 +1,26 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Hashicorp. A proper support contract is required to be in place at both EDB and Hashicorp. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. +Technical support for the use of these products is provided by both EDB and Hashicorp. A support contract is required to be in place at both EDB and Hashicorp. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs:** +### EDB Postgres Advanced Server logs -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From here, you can navigate to `log`, `current_logfiles`, or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. 
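As an illustrative sketch only (the parameter values below are assumptions for demonstration, not recommendations, and the `edb_audit` settings apply to EDB Postgres Advanced Server), the relevant `postgresql.conf` entries might look like:

```ini
# Collect server messages into log files under the data directory
logging_collector = on
log_directory = 'log'

# EDB Postgres Advanced Server audit logging (csv or xml output)
edb_audit = 'csv'
edb_audit_directory = 'edb_audit'
```

Some of these settings take effect only after a server reload or restart.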
-**EDB Postgres Extended Server Logs**
+### EDB Postgres Extended Server logs

-Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance and from here you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs: `/var/lib/edb-pge/15/data/log`.
+Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From here, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs is `/var/lib/edb-pge/15/data/log`.

-** Hashicorp Vault Logs**
+### Hashicorp Vault logs

-Customers can use the `journalctl` function to call logs for Hashicorp Vault.
+You can use the `journalctl` function to call logs for Hashicorp Vault.

-If you just want to view the Vault logs you can do so by entering `journalctl -ex -u vault` in the command line.
+If you only want to view the Vault logs, you can do so by entering `journalctl -ex -u vault` at the command line.

-If you want to view logs for a specific day and output those results to a `.txt` file you can do so by entering `journalctl -u vault -S today > vaultlog.txt` in the command line, adjusting the date to your needed date and the text title.
+If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date and the file name as needed. 
\ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/index.mdx b/advocacy_docs/partner_docs/HashicorpVault/index.mdx index 357cf020714..1aece66f82f 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/index.mdx @@ -1,14 +1,14 @@ --- -title: 'Hashicorp Vault Implementation Guide' +title: 'Implementing Hashicorp Vault' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+ +![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Hashicorp Vault

-

This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.

\ No newline at end of file +

This document is intended to augment each vendor’s product documentation to guide the reader in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file From 9cef26957ada1235c2c668e1d60a7364ef53a64d Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Fri, 21 Jul 2023 11:10:24 -0400 Subject: [PATCH 02/65] Hashicorp vault second read --- .../HashicorpVault/02-PartnerInformation.mdx | 2 +- .../HashicorpVault/03-SolutionSummary.mdx | 4 +-- .../04-ConfiguringHashicorpVault.mdx | 30 ++++++++--------- .../HashicorpVault/05-UsingHashicorpVault.mdx | 32 +++++++++---------- .../HashicorpVault/07-SupportandLogging.mdx | 12 ++++--- .../partner_docs/HashicorpVault/index.mdx | 4 +-- 6 files changed, 43 insertions(+), 41 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx index b4d080e302a..227c43c00b8 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/02-PartnerInformation.mdx @@ -9,4 +9,4 @@ description: 'Details of the partner' | **Website** | https://www.hashicorp.com/ | | **Partner product** | Vault | | **Version** | Vault v1.12.6+ent, v1.13.2+ent | -| **Product description** | Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server or EDB Postgres Extended Server, it allows users to control access to encryption keys and certificates, as well as perform key management. | \ No newline at end of file +| **Product description** | Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server or EDB Postgres Extended Server, it allows you to control access to encryption keys and certificates and perform key management. 
| \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx index 03fb10e1c11..092b10e7cc1 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/03-SolutionSummary.mdx @@ -3,8 +3,8 @@ title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server versions 15.2 and later or EDB Postgres Extended Server versions 15.2 and later, it allows users to control access to encryption keys and certificates, as well as perform key management. Using Hashicorp Vault’s KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of KMIP-managed objects. +Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server versions 15.2 and later or EDB Postgres Extended Server versions 15.2 and later, it allows you to control access to encryption keys and certificates and perform key management. Using Hashicorp Vault’s KMIP secrets engine allows Vault to act as a KMIP server provider and handle the lifecycle of KMIP-managed objects. -Hashicorp Vault’s KMIP secrets engine manages its own listener to service any KMIP requests that operate on KMIP-managed objects. The KMIP secrets engine determines the set of KMIP operations that the clients can perform based on roles that are assigned. +Hashicorp Vault’s KMIP secrets engine manages its own listener to service any KMIP requests that operate on KMIP-managed objects. The KMIP secrets engine determines the set of KMIP operations that the clients can perform based on roles they are assigned. 
![Hashicorp Vault Architecture](Images/HashicorpVaultSolutionSummaryImage.png) diff --git a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx index ae2c9aafa89..0cf064b2914 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuration' +title: 'Configuring Hashicorp Vault' description: 'Walkthrough of configuring the integration' --- @@ -7,22 +7,22 @@ Implementing Hashicorp Vault with EDB Postgres Advanced Server version 15.2 and - EDB Postgres Distribution (15.2 or later) - Hashicorp Vault Enterprise version 1.13.2+ent or 1.12.6+ent -- [Pykmip](https://pypi.org/project/PyKMIP/#files) +- [PyKMIP](https://pypi.org/project/PyKMIP/#files) - Python !!! Note - The EDB Postgres Advanced Server version 15.2 and later and EDB Postgres Extended Server version 15.2 and later products will be referred to as EDB Postgres distribution. The specific distribution type depends on customer need or preference. + We refer to EDB Postgres Advanced Server version 15.2 and later and EDB Postgres Extended Server version 15.2 and later products as EDB Postgres distribution. The specific distribution type depends on your needs and preferences. ## Prerequisites -- A running EDB Postgres distribution with Python and pykmip installed +- A running EDB Postgres distribution with Python and PyKMIP installed - Hashicorp Vault Enterprise edition with enterprise licensing installed and deployed per your VM environment ## Check/install Python on server -Many Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default as it's included in a base installation. 
+Many Unix-compatible operating systems, such as macOS and some Linux distributions, have Python installed by default, as it's included in a base installation.
 
-To check your version of Python on your machine, or to see if it's installed, enter `python3`. The python version is returned. You can also enter `ps -ef |grep python` to return a python running process.
+To check your version of Python on your machine, or to see if it's installed, enter `python3`. The Python version is returned. You can also enter `ps -ef |grep python` to return a Python running process.
 
 ```bash
 root@ip-172-31-46-134:/home/ubuntu# python
@@ -31,10 +31,10 @@ Python 3.8.10 (default, May 26 2023, 14:05:08)
 Type "help", "copyright", "credits" or "license" for more information.
 ```
 
-If you run a check and find that your system doesn't have Python installed, you can download it from [Python.org](https://www.python.org/downloads/). Select your specific OS and download and install iton your system.
+If you run a check and find that your system doesn't have Python installed, you can download it from [Python.org](https://www.python.org/downloads/). Select your OS and download and install it on your system.
 
 ## Install Pykmip
-Once your EDB Repository is installed on your server, you can then install the Pykmip utility.
+Once your EDB Repository is installed on your server, you can then install the PyKMIP utility.
 
- As root user, issue the `install python3-pykmip` command. This example uses a RHEL8 server, so the command is `dnf install python3-pykmip`.
 
@@ -95,7 +95,7 @@ Complete!
 
 ## Configure Hashicorp Vault KMIP secrets engine
 
 !!! Note
-    You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port.
If you receive the error message “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client,” you need to issue this at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`. + You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port. If you receive the error message, “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client,” you need to issue this at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`. After your Hashicorp Vault configuration is installed and deployed per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you then need to enable the KMIP capabilities. @@ -108,7 +108,7 @@ root@ip-172-31-46-134:/home/ubuntu# vault secrets enable kmip Success! Enabled the kmip secrets engine at: kmip/ ``` - You then need to configure the Hashicorp Vault secrets engine with the desired kmip listener address. + You then need to configure the Hashicorp Vault secrets engine with the desired KMIP listener address. 5. Enter `vault write kmip/config listen_addrs=0.0.0.0:5696`: @@ -135,7 +135,7 @@ root@ip-172-31-46-134:/home/ubuntu# vault write kmip/scope/edb/role/admin operat Success! Data written to: kmip/scope/edb/role/admin ``` -8. You can read your scope and role with this command `vault read kmip/scope/*scope_name*/role/`: +8. You can read your scope and role with the command `vault read kmip/scope/*scope_name*/role/`: ```bash root@ip-172-31-46-134:/home/ubuntu# vault read kmip/scope/edb/role/admin @@ -151,11 +151,11 @@ tls_client_ttl 0s After you create a scope and a role, you need to generate client certificates to use in your `pykmip.conf` file for key management. You can use these certificates to establish communication with Hashicorp Vault’s KMIP server. -1. 
Generate the client certificate, which provides the CA Chain, the private key, and the certificate.
+1. Generate the client certificate, which provides the CA chain, the private key, and the certificate.
 
2. Enter `vault write -f -field=certificate \ kmip/scope//role//credential/generate > .pem`.
 
-This example uses the used edb, the scope `admin`, and the certificate name `kmip-cert.pem`.
+This example uses the scope `edb`, the role `admin`, and the certificate name `kmip-cert.pem`:
 
```bash
root@ip-172-31-46-134:/home/ubuntu# vault write -f -field=certificate \ kmip/scope/edb/role/admin/credential/generate > kmip-cert.pem
@@ -170,7 +170,7 @@ root@ip-172-31-46-134:/home/ubuntu# cat kmip-cert.pem
 
4. You need to separate the individual certificates into `.pem` files so they can be used in your `pykmip.conf` file.
 
 !!! Note
-    Make sure to include ----BEGIN ------ and ----END ------ in the `.pem` certificate files.
+    Make sure to include `----BEGIN ------` and `----END ------` in the `.pem` certificate files.
 
 5. Create a `key.pem` file that contains the private key in the certificate chain:
 
@@ -230,4 +230,4 @@ IgIhAMb3y3xRXwddt2ejaow1GytysRz4LoxC3B5dLn1LoCpI
 -----END CERTIFICATE-----
 ```
 
-Now that you have all of the required certificates, you are ready to use Hashicorp Vault secrets engine with your EDB Postgres distribution with TDE.
+Once you have all of the required certificates, you're ready to use the Hashicorp Vault secrets engine with your EDB Postgres distribution with TDE.
\ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx index bad76683535..67bd04580f9 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx @@ -1,19 +1,19 @@ --- -title: 'Using' +title: 'Using Hashicorp Vault' description: 'Walkthrough of example usage scenarios' --- -After you have configured all of the Hashicorp Vault certificates, you can use them with your EDB Postgres distribution. +After you configure all of the Hashicorp Vault certificates, you can use them with your EDB Postgres distribution. !!! Note - This content is intended for versions 15.2 and later of EDB Postgres Advanced Server and versions 15.2 and later of EDB Postgres Extended Server, as only these versions support transparent data encryption (TDE). + This content is intended for versions 15.2 and later of EDB Postgres Advanced Server and versions 15.2 and later of EDB Postgres Extended Server. Only these versions support transparent data encryption (TDE). -To implement Hashicorp Vault secrets engine with your EDB Postgres distribution, ensure that you have the following downloaded to your system: +To implement Hashicorp Vault secrets engine with your EDB Postgres distribution, make sure that you have the following downloaded to your system: - Python -- [pykmip](https://pypi.org/project/PyKMIP/#files) +- [PyKMIP](https://pypi.org/project/PyKMIP/#files) - `edb-tde-kmip-client` downloaded from your EDB Repos access -You need to copy all of the `.pem` files that you created in the Configuring section—`key.pem`, `cert.pem`, and `ca.pem`—to the system where your EDB Postgres distribution is installed. For this example, all of the `.pem` files and the `edb_tde_kmip_client.py` program are in the `/tmp/` directory. 
+You need to copy all of the `.pem` files that you created in [Configuring Hashicorp Vault](04-ConfiguringHashicorpVault)—`key.pem`, `cert.pem`, and `ca.pem`—to the system where your EDB Postgres distribution is installed. For this example, all of the `.pem` files and the `edb_tde_kmip_client.py` program are in the `/tmp/` directory. ## Check prerequisites and download edb-tde-kmip-client @@ -61,7 +61,7 @@ Installed: Complete! ``` -## Create pykmip.conf File +## Create the pykmip.conf file 1. On your system where you have your EDB Postgres distribution, navigate to the directory where you saved your `.pem` files and the `edb_tde_kmip_client.py` client. @@ -84,13 +84,13 @@ ca_certs=/tmp/ca.pem ``` !!! Note - For more information on the `pykmip.conf` file and its contents, see the [pykmip documentation](https://pykmip.readthedocs.io/en/latest/client.html). + For more information on the `pykmip.conf` file and its contents, see the [PyKMIP documentation](https://pykmip.readthedocs.io/en/latest/client.html). -## Create a key on Hashicorp Vault secrets engine +## Create a key on the Hashicorp Vault secrets engine -1. On your system where you have your EDB Postgres distribution, assume root user to create the key on the Hashicorp Vault Secrets Engine. +1. On your system where you have your EDB Postgres distribution, assume root user to create the key on the Hashicorp Vault secrets engine. -2. Enter `python3` and then input the following, making adjustments per your system setup and directory paths: +2. Enter `python3`, and then input the following, making adjustments per your system setup and directory paths: ```bash >>> from kmip.pie import client @@ -105,11 +105,11 @@ ca_certs=/tmp/ca.pem >>> c.close() ``` -3. If this runs without error, then your key was successfully created. You can't view keys that you create in Hashicorp Vault. +3. If this runs without error, then your key was successfully created. (You can't view keys that you create in Hashicorp Vault.) 
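Before opening a KMIP session like the one above, it can help to sanity-check that the `pykmip.conf` parses and points at the certificate files you copied. The sketch below is a hypothetical helper using only the Python standard library; the config body mirrors the example fields above, and the paths are the `/tmp/` examples, not requirements:

```python
import configparser

# Hypothetical config text mirroring the pykmip.conf fields shown above;
# adjust host/port and the certificate paths to your own setup.
SAMPLE_CONF = """
[client]
host=127.0.0.1
port=5696
keyfile=/tmp/key.pem
certfile=/tmp/cert.pem
ca_certs=/tmp/ca.pem
"""

def parse_kmip_client_config(text: str) -> dict:
    """Parse the [client] section that the PyKMIP client reads."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return dict(cfg["client"])

if __name__ == "__main__":
    settings = parse_kmip_client_config(SAMPLE_CONF)
    # Confirm the TLS material is where you expect before connecting
    print(settings["host"], settings["port"], settings["ca_certs"])
```

A quick check like this catches a mistyped path or section name before it surfaces as a TLS handshake failure against the Vault KMIP listener.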
## Verify encryption and decryption
 
-To ensure that your key you created can encrypt and decrypt data, run the following two commands as the root user on your system with your EDB Postgres distribution:
+To ensure that the key you created can encrypt and decrypt data, run the following commands as the root user on the system with your EDB Postgres distribution:
 
1. `printf secret | python3 /tmp/edb_tde_kmip_client.py encrypt --out-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='key_output_here’ --variant=pykmip`
- Location of the KMIP Client: `/tmp/edb_tde_kmip_client.py`
@@ -120,7 +120,7 @@ To ensure that your key you created can encrypt and decrypt data, run the follow
2. `python3 /tmp/edb_tde_kmip_client.py decrypt --in-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='key_output_here' --variant=pykmip`
 
-   If this command is successful, it produce the output of `secret`:
+   If this command is successful, it produces the output of `secret`:
 
```bash
root@ip-172-31-46-134:/etc/vault.d# printf secret | python3 /tmp/edb_tde_kmip_client.py encrypt --out-file=test.bin --pykmip-config-file=/tmp/pykmip.conf --key-uid='nfTCV2Cp5sffhQuRrOVfgCUyu8qh9kwd' --variant=pykmip
@@ -131,7 +131,7 @@ root@ip-172-31-46-134:/etc/vault.d#
 
## Perform initdb for the database
 
-After you complete the previosu steps, can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database.
+After you complete the previous steps, you can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database.
 
1. Log in to your EDB Postgres distribution as the superuser. For our example, use the enterprisedb user: `sudo su - enterprisedb`.
 
@@ -283,7 +283,7 @@ Success. You can now start the database server using:
 
```
 
-7.
Start your database and navigate to your `/data` directory to view the `postgresql.conf` file to ensure that your `data_encryption_key_unwrap_command`, which you set with your `export PGDATAUNWRAPCMD`, is present under the Authentication section.
+7. Start your database and navigate to your `/data` directory to view the `postgresql.conf` file. Make sure that your `data_encryption_key_unwrap_command`, which you set with your `export PGDATAKEYUNWRAPCMD`, is present under the Authentication section.
 
```bash
# - Authentication -
diff --git a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx
index 036ca793c92..1aef718bd5b 100644
--- a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx
+++ b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx
@@ -9,18 +9,20 @@ Technical support for the use of these products is provided by both EDB and Hash
 ## Logging
 
+The following logs are available.
+
 ### EDB Postgres Advanced Server logs
 
-Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From here, you can navigate to `log`, `current_logfiles`, or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs.
+Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs.
 
 ### EDB Postgres Extended Server logs
 
-Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From here, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs is `/var/lib/edb-pge/15/data/log`.
+Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended Server logs is `/var/lib/edb-pge/15/data/log`. -### Hashicorp Vault Logs +### Hashicorp Vault logs You can use the `journalctl` function to call logs for Hashicorp Vault. -If you only want to view the Vault logs, you can do so by entering `journalctl -ex -u vault` at the command line. +If you want to view only the Vault logs, you can do so by entering `journalctl -ex -u vault` at the command line. -If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date to your needed date and the text title. \ No newline at end of file +If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date as needed and the text title. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVault/index.mdx b/advocacy_docs/partner_docs/HashicorpVault/index.mdx index 1aece66f82f..85888dd7fe1 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/index.mdx @@ -6,9 +6,9 @@ directoryDefaults: --- -[Partner Program Logo]](Images/PartnerProgram.jpg.png) +[Partner Program Logo](Images/PartnerProgram.jpg.png)

EDB GlobalConnect Technology Partner Implementation Guide

Hashicorp Vault

-

This document is intended to augment each vendor’s product documentation to guide the reader in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file +

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file From a26b9407e4e275aa87c1cb8b66078e55f627df67 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Fri, 21 Jul 2023 12:36:32 -0400 Subject: [PATCH 03/65] Update 07-SupportandLogging.mdx --- .../partner_docs/HashicorpVault/07-SupportandLogging.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx index 1aef718bd5b..43af09b9763 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/07-SupportandLogging.mdx @@ -5,7 +5,7 @@ description: 'Details of the support process and logging information' ## Support -Technical support for the use of these products is provided by both EDB and Hashicorp. A support contract is required to be in place at both EDB and Hashicorp. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. +Technical support for the use of these products is provided by both EDB and Hashicorp. A support contract must be in place at both EDB and Hashicorp. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging @@ -25,4 +25,4 @@ You can use the `journalctl` function to call logs for Hashicorp Vault. If you want to view only the Vault logs, you can do so by entering `journalctl -ex -u vault` at the command line. 
-If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date as needed and the text title. \ No newline at end of file +If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date as needed and the text title. From ce5159e312c1470ea3a2cc228c224ee8cdbb5046 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Fri, 21 Jul 2023 12:52:27 -0400 Subject: [PATCH 04/65] Update 05-UsingHashicorpVault.mdx --- .../partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx index 67bd04580f9..ff657df94b8 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx @@ -319,4 +319,4 @@ data_encryption_key_unwrap_command = 'python3 /tmp/edb_tde_kmip_client.py decryp ``` -For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server, see the [EDB Transparent Data Encryption](/tde/latest/) documentation. +For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server, see the [EDB Transparent Data Encryption documentation](/tde/latest/). 
From c49d9f5e09aa2eddcc3c421720b77f3279c0bc23 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Fri, 21 Jul 2023 12:53:39 -0400 Subject: [PATCH 05/65] Edits to secrets engine doc --- .../02-PartnerInformation.mdx | 10 +++--- .../03-SolutionSummary.mdx | 8 ++--- .../04-ConfiguringTransitSecretsEngine.mdx | 28 +++++++++-------- .../05-UsingTransitSecretsEngine.mdx | 31 ++++++++++--------- .../06-CertificationEnvironment.mdx | 4 +-- .../07-Support.mdx | 22 +++++++------ .../index.mdx | 9 +++--- 7 files changed, 58 insertions(+), 54 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/02-PartnerInformation.mdx index 016eedac129..6711a9d665c 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Hashicorp | -| **Web Site** | https://www.hashicorp.com/ | -| **Partner Product** | Vault Transit Secrets Engine | +| **Partner name** | Hashicorp | +| **Website** | https://www.hashicorp.com/ | +| **Partner product** | Vault Transit Secrets Engine | | **Version** | Vault v1.13.3 | -| **Product Description** | Hashicorp Vault is an identity-based secrets and encryption management system. Used in conjunction with EDB Postgres Advanced Server and EDB Postgres Extended Server, Hashicorp Vault Transit secrets engine allows Vault to handle cryptographic functions on data in-transit. | +| **Product description** | Hashicorp Vault is an identity-based secrets and encryption management system. 
Used with EDB Postgres Advanced Server and EDB Postgres Extended Server, Hashicorp Vault transit secrets engine allows Vault to handle cryptographic functions on data in transit. | diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/03-SolutionSummary.mdx index 375f5732b2a..2763087399e 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/03-SolutionSummary.mdx @@ -1,11 +1,11 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Hashicorp Vault is an identity-based secrets and encryption management system. Used in conjunction with EDB Postgres Advanced Server versions 15.2 and above or EDB Postgres Extended Server versions 15.2 and above, it allows users to control access to encryption keys and certificates, as well as perform key management. Using Hashicorp Vault’s Transit secrets engine allows Vault to handle cryptographic functions on data in-transit. Hashicorp Vault Transit secrets engine can be referred to as "encryption as a service". +Hashicorp Vault is an identity-based secrets and encryption management system. Used with EDB Postgres Advanced Server versions 15.2 and later or EDB Postgres Extended Server versions 15.2 and later, it allows you to control access to encryption keys and certificates and perform key management. Using Hashicorp Vault’s transit secrets engine allows Vault to handle cryptographic functions on data in transit. Hashicorp Vault transit secrets engine can be referred to as "encryption as a service." -Hashicorp Vault’s primary use case for Transit secrets engine is to encrypt data from applications while simultaneously storing encrypted data in some primary data store. 
Hashicorp Vault Transit Secrets Engine can also generate hashes, sign and verify data and generate HMACs of data. Hashicorp Vault Transit Secrets Engine can work with EDB Postgres Advanced Server and EDB Postgres Extended Server by securely storing the data key that is generated by `initdb`. Normally the key, that lives in `pg_encryption/key.bin`, is stored in plaintext format, but using Hashicorp Vault Transit Secrets Engine as an external key store manages the data encryption key and provides further security to the key itself. +Hashicorp Vault’s primary use case for transit secrets engine is to encrypt data from applications while simultaneously storing encrypted data in some primary data store. Hashicorp Vault transit secrets engine can also generate hashes, sign and verify data, and generate HMACs of data. Hashicorp Vault transit secrets engine can work with EDB Postgres Advanced Server and EDB Postgres Extended Server by securely storing the data key that's generated by `initdb`. Normally the key, which lives in `pg_encryption/key.bin`, is stored in plaintext format. However, using Hashicorp Vault transit secrets engine as an external key store manages the data encryption key and provides further security to the key. -The below image shows how Hashicorp Vault Transit Secrets Engine works to encrypt and decrypt data. +The image shows how Hashicorp Vault transit secrets engine works to encrypt and decrypt data. 
![Hashicorp Vault Transit Secrets Engine Architecture](Images/HashicorpVaultTransitSecretsEngineArchitecture.png) diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx index 71822226255..8bae7473fa7 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx @@ -1,35 +1,36 @@ --- -title: 'Configuration' +title: 'Configuring Hashicorp Vault transit secrets engine' description: 'Walkthrough on configuring the integration' --- -Implementing Hashicorp Vault with EDB Postgres Advanced Server version 15.2 and above or EDB Postgres Extended Server version 15.2 and above, requires the following components: -!!! Note - The EDB Postgres Advanced Server version 15.2 and above and EDB Postgres Extended Server version 15.2 and above, products will be referred to as EDB Postgres distribution. The specific distribution type will be dependent upon customer need or preference. +Implementing Hashicorp Vault with EDB Postgres Advanced Server version 15.2 and later or EDB Postgres Extended Server version 15.2 and later requires the following components: - EDB Postgres distribution (15.2 or later) - Hashicorp Vault v1.13.3 +!!! Note + We refer to EDB Postgres Advanced Server version 15.2 and later and EDB Postgres Extended Server version 15.2 and later as the EDB Postgres distribution. The specific distribution type depends on your needs and preferences. + ## Prerequisites - A running EDB Postgres distribution - Hashicorp Vault installed and deployed per your VM environment -## Enable Hashicorp Vault Transit Secrets Engine +## Enable Hashicorp Vault transit secrets engine !!! Note - You have to set your environment variable with Hashicorp Vault. 
If you receive this error message “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client” you need to issue this in your command line `export VAULT_ADDR="http://127.0.0.1:8200`".
+    You must set your environment variable with Hashicorp Vault. If you receive the error message “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client”, you need to issue this command at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`.
 
-1. After your Hashicorp Vault configuration is installed and deployed per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you will then need to enable the transit secrets engine.
+After you install and deploy your Hashicorp Vault configuration per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you must then enable the transit secrets engine.
 
-2. Assume root user.
+1. Assume root user.
 
-3. First set your two variables, your API address and token you receieved during installation and setup.
+2. Set your two variables, your API address, and the token you received during installation and setup:
```bash
root@ip-172-31-50-151:/home/ubuntu# export VAULT_ADDR='http://127.0.0.1:8200'
root@ip-172-31-50-151:/home/ubuntu# export VAULT_TOKEN="hvs.D9lfoRBZYtdJY2t3lG3f6yUa"
```
 
-4. Before you enable the Transit Secrets Engine you can check your Vault Server status with `vault status`.
+3. Before you enable the transit secrets engine, you can check your Vault server status using `vault status`:
```bash
root@ip-172-31-50-151:/home/ubuntu# vault status
Key Value
Cluster ID 83012ee7-18f0-9480-e8b6-3ff02c285ba2
HA Enabled false
```
 
-5. Type `vault secrets enable transit`.
+4.
Enter `vault secrets enable transit`:
```bash
root@ip-172-31-50-151:/home/ubuntu# vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
```
 
-6. Next you will create your encryption key with an identifiable name. For example: `vault write -f transit/keys/pg-tde-master-1`
+5. Create your encryption key with an identifiable name, for example: `vault write -f transit/keys/pg-tde-master-1`
```bash
root@ip-172-31-50-151:/usr/lib/edb-pge/15/bin# vault write -f transit/keys/pg-tde-master-1
Success! Data written to: transit/keys/pg-tde-master-1
```
-7. You now have your encryption key set and are ready to export your WRAP and UNWRAP commands and initialize your database. \ No newline at end of file
+
+Your encryption key is now set, and you're ready to export your WRAP and UNWRAP commands and initialize your database. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx index 997ef875d20..1fcc522c91f 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx @@ -1,26 +1,26 @@ --- -title: 'Using' +title: 'Using Hashicorp Vault transit secrets engine' description: 'Walkthrough of example usage scenarios' --- -After you have configured Hashicorp Vault Transit Secrets Engine as stated in the Configuring section, you will be able to then encrypt your EDB Postgres distribution database. +After you configure Hashicorp Vault transit secrets engine, you can then encrypt your EDB Postgres distribution database. !!!
Note - It is important to note that this doc is intended for versions 15.2 and above of EDB Postgres Advanced Server or versions 15.2 and above of EDB Postgres Extended Server as these versions support Transparent Data Encryption (TDE). + This content is intended for versions 15.2 and later of EDB Postgres Advanced Server or versions 15.2 and later of EDB Postgres Extended Server, as these versions support transparent data encryption (TDE). -After the Hashicorp Vault Transit secrets engine is configured and a user/machine has a Vault token with the proper permissions, this was configured during your install and setup of Transit Secrets Engine, it can use this secrets engine to encrypt a key. +After you configure the Hashicorp Vault transit secrets engine and a user/machine has a Vault token with the proper permissions (configured during your install and setup of transit secrets engine), it can use this secrets engine to encrypt a key. -## Perform initdb for the Database +## Perform initdb for the database -After you have enabled Hashicorp Vault Transit Secrets Engine and created a key, you will be able to export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database. +After you enable Hashicorp Vault transit secrets engine and create a key, you can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database. -1. Login to your EDB Postgres distribution as the database superuser, for example `sudo su - enterprisedb`. +1. Log in to your EDB Postgres distribution as the database superuser, for example, `sudo su - enterprisedb`. -2. Navigate to the `/bin` directory where your executables live. In our example it is `/usr/lib/edb-as/15/bin`. +2. Navigate to the `/bin` directory where your executables are. In this example, it's `/usr/lib/edb-as/15/bin`. -3. 
Type: `export PGDATAKEYWRAPCMD='base64 | vault write -field=ciphertext transit/encrypt/pg-tde-master-1 plaintext=- > %p'` +3. Enter `export PGDATAKEYWRAPCMD='base64 | vault write -field=ciphertext transit/encrypt/pg-tde-master-1 plaintext=- > %p'` -4. Type: `export PGDATAKEYUNWRAPCMD='cat %p | vault write -field=plaintext transit/decrypt/pg-tde-master-1 ciphertext=- | base64 --decode'` +4. Enter `export PGDATAKEYUNWRAPCMD='cat %p | vault write -field=plaintext transit/decrypt/pg-tde-master-1 ciphertext=- | base64 --decode'` ```bash root@ip-172-31-50-151:/usr/lib/edb-pge/15/bin# su - enterprisedb @@ -31,7 +31,7 @@ enterprisedb@ip-172-31-50-151:~$ export PGDATAKEYUNWRAPCMD='cat %p | vault write ``` 5. Perform your initdb per your database requirements, for example: `./initdb -D dd12 -y`. -6. If all is successful you should get an output that looks like this: + If all is successful, the output looks like this: ```bash enterprisedb@ip-172-31-46-134:/usr/lib/edb-as/15/bin$ ./initdb -D /var/lib/edb-as/15/dd12 -y @@ -165,7 +165,7 @@ Success. You can now start the database server using: ``` -7. Start your database and navigate to your `/data` directory to view the postgresql.conf file to ensure that your `data_encryption_key_unwrap_command` that you set with your `export PGDATAUNWRAPCMD` is present under the Authentication section. +6. Start your database and navigate to your `/data` directory to view the `postgresql.conf` file. Make sure that the `data_encryption_key_unwrap_command` that you set with your `export PGDATAUNWRAPCMD` is present under the Authentication section. ```bash # - Authentication - @@ -201,7 +201,7 @@ data_encryption_key_unwrap_command = 'cat %p | vault write -field=plaintext tran ``` ## Encrypt Plaintext Data -Hashicorp Vault Transit Secrets Engine can also encrypt some plaintext data. However any plaintext data needs to be base64-encoded. 
This is a requirement as Hashicorp Vault does not require that the plaintext data is "text", it could also be another type of file.
+Hashicorp Vault transit secrets engine can also encrypt some plaintext data. However, any plaintext data needs to be base64-encoded. This is a requirement, as Hashicorp Vault doesn't require that the plaintext data is "text." It can also be another type of file.

```bash
enterprisedb@ip-172-31-50-151:~$ export VAULT_TOKEN="hvs.D9lfoRBZYtdJY2t3lG3f6yUa"
@@ -211,7 +211,8 @@
Key            Value
ciphertext     vault:v1:/laUa+i1RVs4kFDD+a6Dmm+mJvVuo8jW0JHWISlzEe/ur/nUlfswEyYShA==
key_version    1
```
-As an added note, Hashicorp Vault does not store any data, that is up to the database user. For any more information on Hashicorp Vault Transit Secrets Engine visit the [Hashicorp](https://developer.hashicorp.com/vault/docs/secrets/transit) documentation.
+!!! Note
+    Hashicorp Vault doesn't store any data. Storing data is up to the database user. For more information on Hashicorp Vault transit secrets engine, see the [Hashicorp documentation](https://developer.hashicorp.com/vault/docs/secrets/transit).

-For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server visit the [EDB Transparent Data Encryption](https://www.enterprisedb.com/docs/tde/latest/) documentation.
+For more information on how TDE is incorporated with EDB Postgres Advanced Server and EDB Postgres Extended Server, see the [EDB Transparent Data Encryption documentation](/tde/latest/).
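The base64 requirement described above can be sketched end to end without a running Vault server. This minimal example demonstrates only the encode/decode round trip that surrounds the transit calls; the `vault write` command itself (shown as a comment) assumes a reachable Vault server with the transit engine enabled and is included only for shape.

```shell
# Transit expects base64 input, so encode the plaintext first.
PLAINTEXT='my secret data'
ENCODED=$(printf '%s' "$PLAINTEXT" | base64)
echo "encoded: $ENCODED"

# With a live Vault server, the encoded value would be submitted like this
# (matching the form used in the steps above; not run here):
#   vault write transit/encrypt/pg-tde-master-1 plaintext="$ENCODED"

# Decoding reverses the wrap, as in the PGDATAKEYUNWRAPCMD pipeline.
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "decoded: $DECODED"
```

Because transit treats the payload as opaque base64, the same round trip works for binary files, not just text.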
diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/06-CertificationEnvironment.mdx index 11263baf41a..404d7446ada 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/06-CertificationEnvironment.mdx @@ -1,11 +1,11 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | June 12, 2023 | +| **Certification test date** | June 12, 2023 | | **EDB Postgres Advanced Server** | 15.2 | | **EDB Postgres Extended Server** | 15.2 | | **Hashicorp Vault** | v1.13.3 | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/07-Support.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/07-Support.mdx index 423d58ebc43..d8fafa9b9a3 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/07-Support.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/07-Support.mdx @@ -1,26 +1,28 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Hashicorp. A proper support contract is required to be in place at both EDB and Hashicorp. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. 
+Technical support for the use of these products is provided by both EDB and Hashicorp. A support contract must be in place at both EDB and Hashicorp. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, the customer should open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs:** +The following logs are available. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. +### EDB Postgres Advanced Server logs -**EDB Postgres Extended Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. -Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance and from here you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs: `/var/lib/edb-pge/15/data/log`. +### EDB Postgres Extended Server logs -** Hashicorp Vault Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended Server logs is `/var/lib/edb-pge/15/data/log`. -Customers can use the `journalctl` function to call logs for Hashicorp Vault. 
+### Hashicorp Vault logs -If you just want to view the Vault logs you can do so by entering `journalctl -ex -u vault` in the command line. +You can use the `journalctl` function to call logs for Hashicorp Vault. -If you want to view logs for a specific day and output those results to a `.txt` file you can do so by entering `journalctl -u vault -S today > vaultlog.txt` in the command line, adjusting the date to your needed date and the text title. \ No newline at end of file +If you want to view only the Vault logs, you can do so by entering `journalctl -ex -u vault` at the command line. + +If you want to view logs for a specific day and output those results to a `.txt` file, you can do so by entering `journalctl -u vault -S today > vaultlog.txt` at the command line. Adjust the date as needed and the text title. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx index 083cc5d871d..03052f25181 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx @@ -1,14 +1,13 @@ --- -title: 'Hashicorp Vault Transit Secrets Engine Implementation Guide' +title: 'Implementing Hashicorp Vault Transit Secrets Engine' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+[Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Hashicorp Vault Transit Secrets Engine

-

This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.

\ No newline at end of file +

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file From 88fe455090fb78df849a46e656c0addba61a56b2 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 24 Jul 2023 12:34:40 -0400 Subject: [PATCH 06/65] Edits to Imperva partner doc --- .../02-PartnerInformation.mdx | 12 ++-- .../03-SolutionSummary.mdx | 10 ++-- ...4-ConfiguringImpervaDataSecurityFabric.mdx | 34 +++++------ .../05-UsingImpervaDataSecurityFabric.mdx | 60 ++++++++++--------- .../06-CertificationEnvironment.mdx | 10 ++-- .../ImpervaDataSecurityFabric/07-Support.mdx | 26 ++++---- .../ImpervaDataSecurityFabric/index.mdx | 8 +-- 7 files changed, 81 insertions(+), 79 deletions(-) diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/02-PartnerInformation.mdx index 21881edaea9..a1f51002bf9 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Imperva | -| **Web Site** | https://www.imperva.com/ | -| **Partner Solution** | Imperva Data Security Fabric | -| **Partner Product** | Imperva Agent 14.6.0.20 | -| **Product Description** | Imperva Data Security Fabric assesses and helps mitigate database risks and detects compliance and security policy violations. Customers can use Imperva Data Security Fabric and the Imperva Agent and their EDB Postgres Advanced Server and/or PostgreSQL databases to see where their sensitive data resides, who is accessing the data, and determine whether the user data access activity is good or bad. 
| +| **Partner name** | Imperva | +| **Website** | https://www.imperva.com/ | +| **Partner solution** | Imperva Data Security Fabric | +| **Partner product** | Imperva Agent 14.6.0.20 | +| **Product description** | Imperva Data Security Fabric assesses and helps mitigate database risks and detects compliance and security policy violations. You can use Imperva Data Security Fabric and the Imperva Agent and your EDB Postgres Advanced Server or PostgreSQL databases to see where your sensitive data resides and who's accessing the data. You can also use them to determine whether the user data access activity is good or bad. | diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/03-SolutionSummary.mdx index be5e9e0c240..5dd7ee0a81d 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/03-SolutionSummary.mdx @@ -1,16 +1,16 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Imperva Data Security Fabric (DSF), previously known as Imperva SecureSphere, protects data with a hybrid security solution that is usable by all data types. Imperva Data Security Fabric can help organizations reduce security risks from inside and outside sources, whether it be unintended or malicious. +Imperva Data Security Fabric (DSF), previously known as Imperva SecureSphere, protects data with a hybrid security solution that all data types can use. Imperva Data Security Fabric can help organizations reduce security risks from inside and outside sources, whether they're unintended or malicious. -The Imperva Data Security Fabric Agent is an integral part of the Imperva Data Security Fabric solution, and requires minimal resources. 
This allows the database server to continue to function normally while providing constant, real-time monitoring of databases and database traffic.
+The Imperva Data Security Fabric Agent is an integral part of the Imperva Data Security Fabric solution and requires minimal resources. This integration allows the database server to continue to function normally while providing constant, real-time monitoring of databases and database traffic.

-The diagram below shows how you can integrate the Imperva Data Security Fabric Agent with your EDB Postgres Advanced Server and PostgreSQL deployments. The Imperva Data Security Fabric agent resides on your databases allowing it to monitor and continuously analyze all data access activity on both database user and privileged user accounts, in order to check for compliance or violations of security policy.
+The diagram shows how you can integrate the Imperva Data Security Fabric Agent with your EDB Postgres Advanced Server and PostgreSQL deployments. The Imperva Data Security Fabric agent resides on your databases, so it can monitor and continuously analyze all data access activity on both database user and privileged-user accounts. It checks for compliance with or violations of security policy.

![ImpervaDataSecurityFabricforPostgreSQL](Images/ImpervaDataSecurityFabricforPostgreSQL.png)

-**The list below shows the components of Imperva Data Security Fabric and the tasks they perform.**
+The table shows the components of Imperva Data Security Fabric and the tasks they perform.
![ImpervaDataSecurityFabricOverview](Images/ImpervaDataSecurityFabricOverview.png) \ No newline at end of file diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx index bd5e87b71a6..09e8ce33557 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx @@ -1,9 +1,9 @@ --- -title: 'Configuration' -description: 'Walkthrough on configuring the integration' +title: 'Configuring Imperva Data Security Fabric' +description: 'Walkthrough of configuring the integration' --- -Implementing Imperva Data Security Fabric, with EDB Postgres Advanced Server and PostgreSQL requires the following components: +Implementing Imperva Data Security Fabric with EDB Postgres Advanced Server and PostgreSQL requires the following components: - Imperva Agent - Imperva Database Security Gateway - Imperva Management Server @@ -11,32 +11,32 @@ Implementing Imperva Data Security Fabric, with EDB Postgres Advanced Server and ## Prerequisites -- A running Imperva Data Security Fabric environment. -- Register the Imperva Gateway to the Imperva Management server. -- Imperva Agent installed on the EDB database. -- The Imperva Agent registered to the Imperva Gateway. +- A running Imperva Data Security Fabric environment +- Imperva Gateway registered to the Imperva Management server +- Imperva Agent installed on the EDB database +- Imperva Agent registered to the Imperva Gateway ## Configure Imperva Data Security Fabric for EDB Postgres Advanced Server -The following steps show how to configure the Imperva Data Security Fabric Agent so it can be used to help monitor your EDB Postgres Advanced Server database or your PostgreSQL database. 
+Configure the Imperva Data Security Fabric Agent so you can use it to help monitor your EDB Postgres Advanced Server database or your PostgreSQL database. -## Basic Imperva Data Security Fabric Configuration Overview +## Basic Imperva Data Security Fabric configuration overview -The primary objective of the basic configuration is to identify what traffic you want to protect. This is accomplished by defining the basic building blocks that comprise the Imperva Data Security Fabric domain using a number of parameters including IP addresses, ports, protocols and more. The screen shots below illustrates basic configuration of these components. +The primary objective of the basic configuration is to identify the traffic you want to protect. You can accomplish this objective by defining the basic building blocks that comprise the Imperva Data Security Fabric domain. To do so, you use several parameters, including IP addresses, ports, protocols, and more. The screen shots show basic configurations of these components. ![ImpervaSitesTree](Images/ImpervaSitesTree.png) ![ImpervaAgentConfiguration](Images/ImpervaAgentConfiguration.png) -There are several configuration steps that can be taken with the Imperva Data Security Fabric Agent in your environment, as it is a highly configurable agent. These configuration steps provide custom protection and support for your EDB Postgres Advanced Server and PostgreSQL databases. This guide will list some basic configuration methods, but for a more complete list of configuration options, please see the Basic Imperva Data Security Fabric Configuration Doc at: -[Basic Imperva Data Security Fabric Configuration Doc](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/2383.htm) +You can take several configuration steps with the Imperva Data Security Fabric Agent in your environment, as it's a highly configurable agent. 
These configuration steps provide custom protection and support for your EDB Postgres Advanced Server and PostgreSQL databases. For a complete list of configuration options, see the
+[Basic Imperva Data Security Fabric configuration documentation](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/2383.htm).

-The basic components are configured in the following order and are defined below:
+Configure the basic components in the following order:

1. **Sites**: The primary container of all other objects.

-2. **Server Groups**: One or more servers at a specific location based on IP addresses. To create and manage server groups, see [Basic Server Group Configuration](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/454.htm) in the Imperva Docs.
+2. **Server groups**: One or more servers at a specific location based on IP addresses. To create and manage server groups, see [Basic Server Group Configuration](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/454.htm) in the Imperva documentation.

-3. **Services**: A service type hosted by a server group based on service. Represents a database service for example PostgreSQL Database. Inside a server group, there can be a number of services on different ports and
-   various database types.
+3. **Services**: A service type hosted by a server group based on service. Represents a database service, for example, PostgreSQL database. Inside a server group, there can be a number of services on different ports and
+various database types.

-4.
**Application**: Represents one or more databases and schemas operating on a database service. For example, inside a single PostgreSQL database server installation, two databases can be running. By assigning to different database applications, you can assign different policies to protect each database. diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx index 6864bc3e336..e800d4a785e 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx @@ -1,63 +1,65 @@ --- -title: 'Using' +title: 'Using Imperva Data Security Fabric' description: 'Walkthrough of example usage scenarios' --- -This section features use cases that show how Imperva Data Security Fabric will monitor an EDB Postgres Advanced Server Database. There are many different options and reports that can be configured with the Imperva Data Security Fabric Agent installed on your EDB Postgres Advanced Server or PostgreSQL database. +These use cases show how Imperva Data Security Fabric monitors an EDB Postgres Advanced Server database. You can configure many different options and reports with the Imperva Data Security Fabric Agent installed on your EDB Postgres Advanced Server or PostgreSQL database. 
+ Some of these options are: -- Scan and assess the servers -- Configure security policies -- Create and/or manage reports -- Configure and view audited traffic -- Configure or work with a Database Cluster -For in depth information on the many pieces of Database Activity monitoring with the Imperva Data Security Fabric Agent, please visit the Database Activity Monitoring User Guide on the Imperva Website: [Database Activity Monitoring](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/70414.htm) +- Scan and assess the servers. +- Configure security policies. +- Create and manage reports. +- Configure and view audited traffic. +- Configure or work with a database cluster. + +For in-depth information on the many pieces of database activity monitoring with the Imperva Data Security Fabric agent, see [Database Activity Monitoring User Guide](https://docs.imperva.com/bundle/v14.9-database-activity-monitoring-user-guide/page/70414.htm) on the Imperva website. -## Monitor EDB Postgres Advanced Server or PostgreSQL Database Traffic +## Monitor EDB Postgres Advanced Server or PostgreSQL database traffic -One of the uses of the Imperva Data Security Fabric Agent is monitoring database traffic for compliance and auditing. Many regulations and industry standards require organizations to monitor and track data access and changes for auditing purposes. This can include tracking access to sensitive data such as financial information or personal data of customers. +One of the uses of the Imperva Data Security Fabric agent is monitoring database traffic for compliance and auditing. Many regulations and industry standards require organizations to monitor and track data access and changes for auditing purposes. This can include tracking access to sensitive data, such as financial information or customers' personal data. 
-Imperva Agents within the Imperva Data Security Fabric solution can be used to track and review the actions of employees and 3rd parties to ensure they are complying with policies and procedures. +You can use Imperva agents within the Imperva Data Security Fabric solution to track and review the actions of employees and third parties to ensure they're complying with policies and procedures. -## Set-up to Monitor Database Traffic with Imperva Data Security Fabric Agent +### Setting up to monitor database traffic with Imperva Data Security Fabric agent -The following steps need to be taken in order to set-up the Imperva Data Security Fabric Agent to monitor EDB Postgres Advanced Server traffic. +Set up the Imperva Data Security Fabric agent to monitor EDB Postgres Advanced Server traffic. -1. Install Imperva Data Security Fabric Agent. +1. Install Imperva Data Security Fabric. 2. Run the basic management configuration. -3. Connect to the database using an external client and run queries. +3. Connect to the database using an external client, and run queries. 4. Check that the traffic was intercepted and displayed in the management audit screen. -Here is an example of a query to table name “my_test_table” and the DB is PostgreSQL. The table does not exist and therefore the SQL exception is marked as true. +This example shows a query to table name `my_test_table`. The database is PostgreSQL. The table doesn't exist, so the SQL exception is marked as true. ![ImpervaDBAudits](Images/ImpervaDBAudits.png) -## Secure an EDB Postgres Advanced Server or PostgreSQL Database +## Secure an EDB Postgres Advanced Server or PostgreSQL database -Imperva agents can also be used within an EDB Postgres Advanced Server or PostgreSQL server for security reasons. Imperva agents can be used to block transactions like unauthorized access or changes to data. This can help organizations protect against data breaches, cyber attacks and other threats. 
+You can also use Imperva agents within an EDB Postgres Advanced Server or PostgreSQL server for security reasons. You can use Imperva agents to block transactions like unauthorized access or changes to data, which can help to protect against data breaches, cyber attacks, and other threats. -## Imperva Data Security Fabric Agent Traffic Block Conditions +### Imperva Data Security Fabric agent traffic block conditions -An Imperva Agent, within the Imperva Data Security Fabric Solution, can block traffic when all the following conditions are met: +An Imperva agent within the Imperva Data Security Fabric Solution can block traffic when all the following conditions are met: 1. An applicable security policy blocks the traffic. -2. `Enable Blocking` is selected withing the Imperva Agent's settings tab. - -3. The server group is not in simulation mode. +2. **Enable Blocking** is selected on the Imperva Agent's **Settings** tab. -When `Enable Blocking` is selected, `Default Connection Mode` in the Imperva Agent’s Settings tab must be set to either `Sniffing` or `Inline`. In both cases, the Imperva Agent forwards the traffic to the Gateway. +3. The server group isn't in simulation mode. -If `Default Connection Mode` is set to `Sniffing`, then the Imperva Agent allows the traffic to pass to the database. If the Imperva Agent later receives a notification from the Gateway that the traffic must be blocked, it does so, but in the meantime, some undesirable traffic will have gotten through to the database. +When **Enable Blocking** is selected, **Default Connection Mode** in Imperva Agent’s **Settings** tab must be set to either **Sniffing** or **Inline**. In both cases, Imperva Agent forwards the traffic to the gateway. -The advantage of `Sniffing` is that no latency is introduced. Its disadvantage is that undesirable traffic can reach the database. +If **Default Connection Mode** is set to **Sniffing**, then Imperva Agent allows the traffic to pass to the database. 
If Imperva Agent later receives a notification from the gateway that the traffic must be blocked, it does so. However, in the meantime, some undesirable traffic can get through to the database. -If `Default Connection Mode` is set to `Inline`, then the Imperva Agent holds the traffic until it receives a notification from the Gateway indicating whether the traffic should be allowed or blocked. +The advantage of **Sniffing** is that it doesn't introduce latency. Its disadvantage is that undesirable traffic can reach the database. -The advantage of `Inline` is that no undesirable traffic can reach the database. Its disadvantage is that it introduces latency. +If **Default Connection Mode** is set to **Inline**, then Imperva Agent holds the traffic until it receives a notification from the gateway indicating whether to allow or block the traffic. -**For more information:** Please visit the Imperva Data Activity Monitoring User Guide on the Imperva website: [Database Activity Monitoring User Guide](https://docs.imperva.com/bundle/v14.10-database-activity-monitoring-user-guide/page/70414.htm) +The advantage of the **Inline** option is that no undesirable traffic can reach the database. The disadvantage is that it introduces latency. +!!! Note For more information + See the [Imperva Database Activity Monitoring User Guide](https://docs.imperva.com/bundle/v14.10-database-activity-monitoring-user-guide/page/70414.htm) on the Imperva website. 
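The Sniffing/Inline trade-off above can be illustrated with a small conceptual simulation. This is not Imperva code; the function names (`gateway_verdict`, `inline_forward`, `sniff_forward`) and the DROP-based rule are invented purely to show why inline mode blocks before the database is reached, while sniffing mode can only block after the fact.

```shell
# Toy gateway: flag any statement containing DROP (invented rule, for illustration only).
gateway_verdict() {
  case "$1" in
    *DROP*) echo block ;;
    *)      echo allow ;;
  esac
}

# Inline mode: hold the traffic until the verdict arrives, then forward or block.
inline_forward() {
  if [ "$(gateway_verdict "$1")" = allow ]; then
    echo "forwarded: $1"
  else
    echo "blocked before reaching the database: $1"
  fi
}

# Sniffing mode: forward immediately; the verdict can only apply afterward.
sniff_forward() {
  echo "forwarded: $1"
  if [ "$(gateway_verdict "$1")" = block ]; then
    echo "late block: $1 had already reached the database"
  fi
}

inline_forward "SELECT 1"
inline_forward "DROP TABLE accounts"
sniff_forward  "DROP TABLE accounts"
```

The extra round trip to `gateway_verdict` before forwarding is where inline mode's latency comes from, and the immediate forward in `sniff_forward` is why undesirable traffic can slip through in sniffing mode.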
diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/06-CertificationEnvironment.mdx index 5625155caaa..b6a06bda49f 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/06-CertificationEnvironment.mdx @@ -1,12 +1,12 @@ --- -title: 'Certification Environment' -description: 'Overview of the Certification Environment' +title: 'Certification environment' +description: 'Overview of the certification environment' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | December, 2022 | +| **Certification test date** | December, 2022 | | **EDB Postgres Advanced Server** | 11, 12, 13, 14 | | **PostgreSQL** | 11, 12, 13, 14 | -| **Imperva Data Security Fabric ** | 14.9 | -| **Imperva Agent ** | 14.6.0.20 | \ No newline at end of file +| **Imperva Data Security Fabric** | 14.9 | +| **Imperva Agent** | 14.6.0.20 | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx index 5ff6d10c95a..ad7156eb0b6 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx @@ -1,19 +1,21 @@ --- -title: 'Support and Logging Details' -description: 'Details of the Support process and logging information' +title: 'Support and logging details' +description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Imperva. A proper support contract is required to be in place at both EDB and Imperva. A support ticket can be opened on either side to start the process. 
If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed.
+Technical support for the use of these products is provided by both EDB and Imperva. A support contract must be in place at both EDB and Imperva. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed.
 
 ## Logging
 
-**EDB Postgres Advanced Server Logs:**
+The following logs are available.
 
-Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs.
+### EDB Postgres Advanced Server logs
 
-**PostgreSQL Logs**
+Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs.
+
+### PostgreSQL logs
 
 The default log directories for PostgreSQL logs vary depending on the operating system:
 
@@ -23,15 +25,15 @@ The default log directories for PostgreSQL logs vary depending on the operating
 
 - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log`
 
-**Imperva Logs**
+### Imperva logs
 
 To collect the logs from your Imperva instance:
 
-1. Click on `Setup` in your main dashboard area.
-2. Click on `Agents`.
-3. Right-Click on the agent that you want to collect logs for.
-4. Click `Log Retrieval`.
-5. Click `Get Agent Technical Help`.
+1. 
In your main dashboard area, select **Setup**. +2. Select **Agents**. +3. Right-click the agent that you want to collect logs for. +4. Select **Log Retrieval**. +5. Select **Get Agent Technical Help**. ![ImpervaLogRetrieval](Images/ImpervaLogRetrieval.png) \ No newline at end of file diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx index fef8ad43d67..4f53ae3f5df 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx @@ -1,14 +1,12 @@ --- -title: 'Imperva Data Security Fabric Implementation Guide' +title: 'Implementing Imperva Data Security Fabric' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+![Partner Program Logo](Images/PartnerProgram.jpg.png)

EDB GlobalConnect Technology Partner Implementation Guide

Imperva Data Security Fabric

-

This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.

\ No newline at end of file +

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file From 04813aa5b6197e7a329191f7ccf05a4cce400b10 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 24 Jul 2023 15:01:06 -0400 Subject: [PATCH 07/65] Liquibase Pro edits --- .../LiquibasePro/02-PartnerInformation.mdx | 12 ++-- .../LiquibasePro/03-SolutionSummary.mdx | 9 ++- .../04-ConfiguringLiquibasePro.mdx | 17 +++--- .../LiquibasePro/05-UsingLiquibasePro.mdx | 58 ++++++++----------- .../06-CertificationEnvironment.mdx | 4 +- .../partner_docs/LiquibasePro/07-Appendix.mdx | 4 +- .../partner_docs/LiquibasePro/index.mdx | 8 +-- 7 files changed, 49 insertions(+), 63 deletions(-) diff --git a/advocacy_docs/partner_docs/LiquibasePro/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/LiquibasePro/02-PartnerInformation.mdx index 9257dcb3fdb..24f8aa73b99 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details for Liquibase Pro' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Liquibase | -| **Partner Product** | Liquibase Pro | -| **Web Site** | https://www.liquibase.com | -| **Version & Platform** | Liquibase Pro 4.3.3: CentOS7 | -| **Product Description** | Liquibase is a database-independent library for tracking, managing and applying database schema changes. | +| **Partner name** | Liquibase | +| **Partner product** | Liquibase Pro | +| **Website** | https://www.liquibase.com | +| **Version & platform** | Liquibase Pro 4.3.3: CentOS7 | +| **Product description** | Liquibase is a database-independent library for tracking, managing, and applying database schema changes. 
| diff --git a/advocacy_docs/partner_docs/LiquibasePro/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/LiquibasePro/03-SolutionSummary.mdx index 999086ea754..d3e788f131c 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/03-SolutionSummary.mdx @@ -1,12 +1,11 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- Easily track, version, and deploy EDB Postgres Advanced Server schema changes with Liquibase. Liquibase enables your team to deploy safer, faster, automated database releases across all your environments. Liquibase integrates with most application build and deployment tools to help track, version, and deploy EDB Postgres Advanced Server database changes. -The desired changes are applied on EDB Postgres Advanced Server using Liquibase changesets. The details of the changes can be stored on the Liquibase Hub to provide analysis of the changes. +Apply the desired changes on EDB Postgres Advanced Server using Liquibase changesets. You can store the details of the changes on the Liquibase Hub to provide analysis of the changes. + +![Liquibase Pro configuration](Images/Configuration.png) -

- -

diff --git a/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx b/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx index 3db93f93263..81e3e7e49c3 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx @@ -1,6 +1,6 @@ --- title: 'Configuring Liquibase Pro' -description: 'Walkthrough on configuring Liquibase Pro' +description: 'Walkthrough of configuring Liquibase Pro' --- Implementing Liquibase with EDB Postgres Advanced Server requires the following components: @@ -16,17 +16,14 @@ Implementing Liquibase with EDB Postgres Advanced Server requires the following ## Configure Liquibase for EDB Postgres Advanced Server -1. Download the postgresql JAR from https://jdbc.postgresql.org/download.html. -2. Move the postgresql JAR to the Liquibase Pro directory. -3. Log in to your Liquibase Hub account and select the **Settings** icon on the left side of the page to access the API key. +1. Download the PostgreSQL JAR from the [PostgreSQL JDBC site](https://jdbc.postgresql.org). +2. Move the PostgreSQL JAR to the Liquibase Pro directory. +3. To access the API key, log in to your Liquibase Hub account and select the **Settings** icon on the left side of the page. 4. Copy the API key, which connects the information generated by your changelogs and other operations to your Liquibase Hub projects. -

- Configuration of API Key on Liquibase Hub -

+ ![Configuration of API Key on Liquibase Hub](Images/ConfigurationPrerequisites1.png) - -5. Create a liquibase.properties file in the Liquibase Pro directory to contain: +5. Create a `liquibase.properties` file in the Liquibase Pro directory to contain: - Driver class path @@ -38,4 +35,4 @@ Implementing Liquibase with EDB Postgres Advanced Server requires the following - Liquibase Hub API key -See a sample [liquibase.properties](07-Appendix.mdx) file in the appendix for configuring Liquibase for EDB Postgres Advanced Server. +See a sample [liquibase.properties](07-Appendix.mdx) file for configuring Liquibase for EDB Postgres Advanced Server. diff --git a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx index 23fe3a5052f..0a90a2dd02b 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx @@ -3,29 +3,25 @@ title: 'Using Liquibase Pro' description: 'Walkthroughs of multiple Liquibase Pro usage scenarios' --- -Liquibase is a development tool that allows changes to be applied to the EDB database using the Liquibase CLI and viewed on the Liquibase Hub. +Liquibase is a development tool that allows you to apply changes to the EDB database using the Liquibase CLI and view them on the Liquibase Hub. -### Creating a Project on Liquibase Hub +### Creating a project on Liquibase Hub -Use the following steps to create a project for the target database instance on the Liquibase Hub. All data related to the target database is stored in this project. +Create a project for the target database instance on the Liquibase Hub. All data related to the target database is stored in this project. 1. Log in to the Liquibase Hub console. -2. Select the **Projects** menu. +2. Create a Project on the Liquibase Hub. Select **Projects > Create Project**. -3. 
Create a Project on the Liquibase Hub by selecting **Create Project**. +3. On the Create Project page, enter the name and description of the project. -4. On the **Create Project** page, enter the name and description of the project. +4. Select **Create Project**. -5. Select **Create Project**. +![Create Project page](Images/IntegrationViews2.png) -

- Create Project page -

+### Applying catabase changes -### Applying Database Changes - -This section provides examples of applying database changes using Liquibase changesets including: +These examples show how to apply database changes using Liquibase changesets, including: - [Updating tables](#updating-tables) @@ -34,7 +30,7 @@ This section provides examples of applying database changes using Liquibase chan - [Viewing database changes on Liquibase Hub](#viewing-database-changes-on-liquibase-hub) -Refer to the [Liquibase documentation](https://docs.liquibase.com/change-types/home.html) for available change types. +See the [Liquibase documentation](https://docs.liquibase.com/change-types/home.html) for available change types. !!! Note All Liquibase commands in the examples are executed from the directory where Liquibase Pro is installed. @@ -62,10 +58,10 @@ INSERT INTO tp_sales_db VALUES (100,';Person 1';,';CITY 1';,DEFAULT,10); INSERT INTO tp_department_db VALUES (10,';Development';,';Pakistan';); ``` -#### Updating Tables +#### Updating tables 1. Create a changelog file using one of the following options for creating a changelog file: - - Create the file manually. See the following changelog file example. For detailed information on changelogs, refer to the [changelog documentation](https://docs.liquibase.com/concepts/basic/xml-format.html) from Liquibase. + - Create the file manually. See the following changelog file example. (For detailed information on changelogs, see the [changelog documentation](https://docs.liquibase.com/concepts/basic/xml-format.html) from Liquibase.) ```bash xml version="1.0" encoding="UTF-8"?> @@ -120,40 +116,34 @@ INSERT INTO public.tp_department_db (deptno, dname, location) VALUES ('10', `./liquibase --changeLogFile=edb_dbchangelog.xml generateChangeLog` !!! Note - Before the command is run Liquibase will take a snapshot of the database, which is an essential step in the process. 
+ Before the command runs, Liquibase takes a snapshot of the database, which is an essential step in the process. -2. For each database change, add a changeset entry to the changelog file. For detailed information on changesets, refer to the [changeset documentation](https://docs.liquibase.com/concepts/basic/changeset.html?__hstc=128893969.8ca9a9f8d7d5d8d684aac6cd417ffd04.1625139651397.1625246103494.1625257119630.4&__hssc=128893969.1.1625257119630&__hsfp=3718144884&_ga=2.245054725.1619392786.1625139438-28195040.1625139438&_gac=1.123026681.1625233402.Cj0KCQjw8vqGBhC_ARIsADMSd1AVyJsGu4_9-E-Pvh8OdNFqVt5qHR8FHhUyvRnYbA2ODKYlPHr3ujcaAijVEALw_wcB) from Liquibase. +2. For each database change, add a changeset entry to the changelog file. For detailed information on changesets, see the [changeset documentation](https://docs.liquibase.com/concepts/basic/changeset.html?__hstc=128893969.8ca9a9f8d7d5d8d684aac6cd417ffd04.1625139651397.1625246103494.1625257119630.4&__hssc=128893969.1.1625257119630&__hsfp=3718144884&_ga=2.245054725.1619392786.1625139438-28195040.1625139438&_gac=1.123026681.1625233402.Cj0KCQjw8vqGBhC_ARIsADMSd1AVyJsGu4_9-E-Pvh8OdNFqVt5qHR8FHhUyvRnYbA2ODKYlPHr3ujcaAijVEALw_wcB) from Liquibase. -3. To register the changelog with the Liquibase Hub and provide the project name, execute this command: +3. Register the changelog with the Liquibase Hub and provide the project name: `./liquibase registerChangeLog` -Sample output: + Sample output: -

- -

+ ![Sample Output from Registering Changelog](Images/IntegrationViews3.png) -4. To update the table in the target database, use this command: +4. Update the table in the target database: `./liquibase update` -#### Rolling Back Changes +#### Rolling back changes Roll back changes made to the table using the `rollbackCount` command. For example, this command uses the sample changelog file: `./liquibase rollbackCount 1` -#### Viewing Database Changes On Liquibase Hub +#### Viewing database changes on Liquibase Hub -To view the details of the changes made on the target database, select a ChangeLog link. For example, select dbchangelog.xml to see its details. +To view the details of the changes made on the target database, select a changelog link. For example, select `dbchangelog.xml` to see its details. -
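A changeset can also carry an explicit `<rollback>` block, which `rollbackCount` executes when it reverts that changeset. The following is an illustrative sketch only — the `id`, `author`, and column names are invented and are not part of the sample changelog above:

```xml
<!-- Illustrative changeset; id, author, and column names are invented -->
<changeSet id="2" author="example.user">
    <addColumn tableName="tp_sales_db">
        <column name="region" type="varchar(50)"/>
    </addColumn>
    <rollback>
        <dropColumn tableName="tp_sales_db" columnName="region"/>
    </rollback>
</changeSet>
```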

- Viewing Database Changes On Liquibase Hub -

+![Viewing Database Changes On Liquibase Hub](Images/IntegrationViews4.png) -The diagram below shows the details for the selected changeset. +The diagram shows the details for the selected changeset. -

- Changeset details -

+![Changeset details](Images/IntegrationViews5.png) diff --git a/advocacy_docs/partner_docs/LiquibasePro/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/LiquibasePro/06-CertificationEnvironment.mdx index 81343903fa8..4192e3e5828 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/06-CertificationEnvironment.mdx @@ -1,10 +1,10 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment used in the certification of Liquibase Pro' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | June 7, 2021 | +| **Certification test date** | June 7, 2021 | | **EDB Postgres Advanced Server** | 12, 13 | | **Liquibase Pro** | 4.3.3 | diff --git a/advocacy_docs/partner_docs/LiquibasePro/07-Appendix.mdx b/advocacy_docs/partner_docs/LiquibasePro/07-Appendix.mdx index 437e6ce02cc..877acde9c9a 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/07-Appendix.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/07-Appendix.mdx @@ -1,6 +1,6 @@ --- -title: 'Appendix' -description: 'Sample properties file' +title: 'Sample properties file' +description: 'Sample changelog properties file' --- Sample `liquibase.properties` file: diff --git a/advocacy_docs/partner_docs/LiquibasePro/index.mdx b/advocacy_docs/partner_docs/LiquibasePro/index.mdx index e78a6be42aa..8c8f5bbf831 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/index.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/index.mdx @@ -1,12 +1,12 @@ --- -title: 'Liquibase Pro Implementation Guide' +title: 'Implementing Liquibase Pro' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+ +![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Liquibase Pro
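The `liquibase.properties` file referenced in this patch typically gathers the driver, connection, and Hub settings in one place. The following is an illustrative sketch only — the URL, credentials, JAR name, and key are placeholders, and property names can vary by Liquibase version:

```ini
# Illustrative liquibase.properties; all values are placeholders
changeLogFile: dbchangelog.xml
driver: org.postgresql.Driver
classpath: postgresql-42.2.23.jar
url: jdbc:postgresql://localhost:5444/edb
username: enterprisedb
password: ********
liquibase.hub.apiKey: <your-hub-api-key>
```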

From c4208cb25bb43e2d4a6eabe13f9b7dc5151a45ca Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 24 Jul 2023 15:25:54 -0400 Subject: [PATCH 08/65] Update index.mdx --- .../partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx index 03052f25181..3a20f8cc2b6 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx @@ -5,9 +5,9 @@ directoryDefaults: iconName: handshake --- -[Partner Program Logo](Images/PartnerProgram.jpg.png) +![Partner Program Logo](Images/PartnerProgram.jpg.png)

EDB GlobalConnect Technology Partner Implementation Guide

Hashicorp Vault Transit Secrets Engine

-

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

\ No newline at end of file +

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

From 7b523377ebe458d92b06a593b3eb08c52ff54025 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 24 Jul 2023 16:13:12 -0400 Subject: [PATCH 09/65] Edits to Nutanix doc --- .../NutanixAHV/02-PartnerInformation.mdx | 10 ++-- .../NutanixAHV/03-SolutionSummary.mdx | 21 +++---- .../NutanixAHV/04-ConfiguringNutanixAHV.mdx | 57 ++++++++----------- .../NutanixAHV/05-UsingNutanixAHV.mdx | 12 ++-- .../06-CertificationEnvironment.mdx | 8 +-- .../partner_docs/NutanixAHV/07-Support.mdx | 20 ++++--- .../partner_docs/NutanixAHV/index.mdx | 8 +-- 7 files changed, 61 insertions(+), 75 deletions(-) diff --git a/advocacy_docs/partner_docs/NutanixAHV/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/NutanixAHV/02-PartnerInformation.mdx index eed86a5caae..bae2cb363c2 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Nutanix | -| **Web Site** | [www.nutanix.com](www.nutanix.com) | -| **Partner Product** | Nutanix AHV AOS 5.15.x, 6.5.x, ESXi AOS 5.15.x, 6.5.x | -| **Product Description** | Nutanix AHV is the native Nutanix hypervisor that offers virtualization capabilities needed to deploy and manage enterprise applications. Utilizing EDB Postgres Advanced Server, EDB Postgres Extended Server and PostgreSQL Server with Nutanix AHV allows for enterprise-level features to run on a secure, enterprise-grade virtulization platform. 
|
+| **Partner name** | Nutanix |
+| **Website** | [www.nutanix.com](https://www.nutanix.com) |
+| **Partner product** | Nutanix AHV AOS 5.15.x, 6.5.x, ESXi AOS 5.15.x, 6.5.x |
+| **Product description** | Nutanix AHV is the native Nutanix hypervisor that offers virtualization capabilities needed to deploy and manage enterprise applications. Using EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server with Nutanix AHV allows for enterprise-level features to run on a secure, enterprise-grade virtualization platform. |
diff --git a/advocacy_docs/partner_docs/NutanixAHV/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/NutanixAHV/03-SolutionSummary.mdx
index f371c4a2996..aceed68b5dc 100644
--- a/advocacy_docs/partner_docs/NutanixAHV/03-SolutionSummary.mdx
+++ b/advocacy_docs/partner_docs/NutanixAHV/03-SolutionSummary.mdx
@@ -1,19 +1,16 @@
 ---
-title: 'Solution Summary'
+title: 'Solution summary'
 description: 'Explanation of the solution and its purpose'
 ---
 
-EDB Postgres Advanced Server, EDB Postgres Extended Server, EDB Failover Manager, Postgres Enterprise Manager, and Barman can each be deployed on virtual machines created via the native Nutanix hypervisor,
-AHV or VMWare ESXi. AHV represents a unique approach to virtualization that offers powerful capabilities needed
-to deploy and manage enterprise applications. AHV complements the value of Hyper-converged infrastructure (HCI)
-by integrating native virtualization along with networking, infrastructure, and operations management within
-a single intuitive interface - Nutanix Prism.
+You can deploy EDB Postgres Advanced Server, EDB Postgres Extended Server, EDB Failover Manager, Postgres Enterprise Manager, and Barman on virtual machines created using the native Nutanix hypervisor,
+AHV, or VMWare ESXi. AHV represents a unique approach to virtualization that offers powerful capabilities needed
+to deploy and manage enterprise applications. 
AHV complements the value of hyper-converged infrastructure (HCI) +by integrating native virtualization along with networking, infrastructure, and operations management in +a single intuitive interface: Nutanix Prism. -The following diagram shows a high-level architecture of the Nutanix platform: +The following diagram shows a high-level architecture of the Nutanix platform. -

- Solution architecture -

+![Solution Architecture](Images/NutanixSolutionSummary.png)
 
-For all things Nutanix or if you need any further Nutanix guidance, refer to the [Nutanix Bible.](https://www.nutanixbible.com/) \ No newline at end of file
+For all things Nutanix or if you need any further Nutanix guidance, see [The Nutanix Bible](https://www.nutanixbible.com/). \ No newline at end of file
diff --git a/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx b/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx
index 3099591c9f5..7cbbb67ea99 100644
--- a/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx
+++ b/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx
@@ -1,44 +1,40 @@
 ---
-title: 'Configuration'
-description: 'Walkthrough on configuring the integration'
+title: 'Configuring Nutanix AHV'
+description: 'Walkthrough of configuring the integration'
 ---
 
-Implementing EDB software on Nutanix AHV requires the following components:
-!!! Note
-    The EDB Postgres Advanced Server, EDB Postgres Extended Server and PostgreSQL Server products will be referred to as Postgres Distribution. The specific Distribution type will be dependant upon customer need or preference.
+Implementing EDB software on Nutanix AHV requires the following components:
 
 - Postgres Distribution
 
 - Nutanix software
 
+!!! Note
+    We refer to the EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server products as a Postgres distribution. The specific distribution type depends on your needs and preferences.
 
-Sample deployment:
+The diagram shows a sample deployment.
 
-

- Sample deployment -

+The diagram shows a sample deployment. -## Prerequisites +![Sample Deployment](Images/SampleDeployment.png) -- A running Nutanix cluster with AHV. -- Access to the Prism web console. -

- Prism web console -

+## Prerequisites + +- A running Nutanix cluster with AHV +- Access to the Prism web console - For more details, read the [Prism Central Guide](https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-Prism-v5_19:Prism-Central-Guide-Prism-v5_19). +![Prism web console](Images/PrismWebConsole.png) -## Deploying VMs Using AHV +For more details, see the [Prism Central Guide](https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-Prism-v5_19:Prism-Central-Guide-Prism-v5_19). -To create a Virtual Machine (VM) via AHV: +## Deploying VMs using AHV +To create a virtual machine (VM) using AHV: 1. On Prism Central, select **Create VM**. Watch this [video](https://www.youtube.com/watch?v=q4wBewXfDs8) from Nutanix for more information. -

- Create a VM -

+ ![Create a VM](Images/CreateaVM2.png) + +2. Enter the appropriate values for your configuration. For example, these are the specifications for a test environment: -2. Enter the appropriate values for your configuration. For example, these are the specifications for a test environment: ``` vCPU(s): 2 Memory: 4 GiB @@ -46,20 +42,13 @@ To create a Virtual Machine (VM) via AHV: Guest OS: CentOS7 ``` + !!! Note + Mount the CD-ROM with CentOS7 ISO available using the image service. -!!! Note - Mount the CD-ROM with CentOS7 ISO available via the Image Service. - -

- Test environment specifics -

+ ![Test environment specifics](Images/VMConfiguration2.png) 3. Select **Save**. -2. Install your preferred Postgres Distribution. For example, for EDB Postgres Advanced Server refer to the [EDB Postgres Advanced Server documentation](https://www.enterprisedb.com/docs/epas/latest/). - -3. Install the other EDB tools, such as [EDB Failover Manager (EFM)](https://www.enterprisedb.com/docs/efm/latest/), [Postgres Enterprise Manager (PEM)](https://www.enterprisedb.com/docs/pem/latest/), or [Barman](https://www.enterprisedb.com/docs/supported-open-source/barman/), as needed for your configuration in the appropriate VMs. Refer to the [EDB documentation](https://www.enterprisedb.com/docs). - - - +2. Install your preferred Postgres Distribution. For example, for EDB Postgres Advanced Server, see the [EDB Postgres Advanced Server documentation](/epas/latest/). +3. Install the other EDB tools, such as [EDB Failover Manager (EFM)](/efm/latest/), [Postgres Enterprise Manager (PEM)](/pem/latest/), or [Barman](/supported-open-source/barman/), as needed for your configuration in the appropriate VMs. See the [EDB documentation site](https://www.enterprisedb.com/docs) for the complete documentation. diff --git a/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx b/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx index 1032b2bcd0a..4df10a5ec25 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx @@ -1,15 +1,15 @@ --- -title: 'Using' +title: 'Using Nutanix AHV' description: 'Walkthrough of example usage scenarios' --- -Nutanix AHV hosts the virtual machines created so that you can deploy / redeploy them as needed. +Nutanix AHV hosts the virtual machines created so that you can deploy and redeploy them as needed. To use Nutanix AHV: 1. Log in to Prism. -1. Go to the **Table** tab where you can view the virtual machines deployed via Nutanix AHV. 
For example, this screenshot shows VMs that host the following EDB products: +1. Select the **Table** tab, where you can view the virtual machines deployed using Nutanix AHV. For example, this screenshot shows VMs that host the following EDB products: - EDB Postgres Advanced Server - EDB Postgres Extended Server @@ -18,9 +18,7 @@ To use Nutanix AHV: - Barman -

- Viewing VMs on AHV -

+ ![Viewing VMs on AHV](Images/IntegrationViews.png) !!! Note - The screenshot contains information about our test environment and is not intended for a production environment. + The screenshot contains information about our test environment and isn't intended for a production environment. diff --git a/advocacy_docs/partner_docs/NutanixAHV/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/NutanixAHV/06-CertificationEnvironment.mdx index e61de8768c6..87593e41cd9 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/06-CertificationEnvironment.mdx @@ -1,12 +1,12 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment' --- -## Hypervisor (AHV) 20220304.342 Test Environment +## Hypervisor (AHV) 20220304.342 test environment |   |   | | ----------- | ----------- | -| **Certification Test Date** | March 24 2023 | +| **Certification test date** | March 24 2023 | | **EDB Postgres Advanced Server** | 12,13,14,15 | | **EDB Postgres Extended Server** | 12,13,14,15 | | **Postgres Enterprise Manager** | 9.1.0 | @@ -18,7 +18,7 @@ description: 'Overview of the certification environment' ## VMware ESXi 7.0U2 Test Environment |   |   | | ----------- | ----------- | -| **Certification Test Date** | March 24 2023 | +| **Certification test date** | March 24 2023 | | **EDB Postgres Advanced Server** | 12,13,14,15 | | **EDB Postgres Extended Server** | 12,13,14,15 | | **Postgres Enterprise Manager** | 9.1.0 | diff --git a/advocacy_docs/partner_docs/NutanixAHV/07-Support.mdx b/advocacy_docs/partner_docs/NutanixAHV/07-Support.mdx index 870b52f2d0c..c3440ccf28c 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/07-Support.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/07-Support.mdx @@ -1,23 +1,25 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and 
logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Nutanix. A proper support contract is required to be in place at both EDB and Nutanix. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. +Technical support for the use of these products is provided by both EDB and Nutanix. A support contract must be in place at both EDB and Nutanix. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs** +The following logs are available. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs: `/var/lib/edb/as15/data/log`. +### EDB Postgres Advanced Server logs -**EDB Postgres Extended Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs is `/var/lib/edb/as15/data/log`. 
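For reference, the logging-related lines in `postgresql.conf` typically look like the following sketch. The values shown are common defaults, not necessarily yours — confirm them in your own file:

```ini
# Illustrative postgresql.conf excerpt; your settings may differ
logging_collector = on
log_directory = 'log'                 # relative to the data directory
log_filename = 'postgresql-%a.log'
edb_audit = 'csv'                     # EDB Postgres Advanced Server only
edb_audit_directory = 'edb_audit'
```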
-Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance and from here you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs: `/var/lib/edb-pge/15/data/log`. +### EDB Postgres Extended Server logs -**PostgreSQL Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended Server logs is `/var/lib/edb-pge/15/data/log`. + +### PostgreSQL Server logs The default log directories for PostgreSQL logs vary depending on the operating system: @@ -27,6 +29,6 @@ The default log directories for PostgreSQL logs vary depending on the operating - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log` -**Nutanix Logs** +### Nutanix logs -For Nutanix logging and support, please contact the Nutanix Support team to assist you. \ No newline at end of file +For Nutanix logging and support, contact the Nutanix Support team to assist you. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/NutanixAHV/index.mdx b/advocacy_docs/partner_docs/NutanixAHV/index.mdx index d43fbd12281..472cd377779 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/index.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/index.mdx @@ -1,13 +1,13 @@ --- -title: 'Nutanix AHV Implementation Guide' +title: 'Implementing Nutanix AHV' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+ +![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Nutanix AHV

From 2ac208ade97eb08f87ed94529a8a0a1fcf82ba59 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 24 Jul 2023 17:10:48 -0400 Subject: [PATCH 10/65] beginning edits of Precisely content --- .../02-PartnerInformation.mdx | 10 +- .../03-SolutionSummary.mdx | 10 +- .../04-Configuratingpreciselyconnectcdc.mdx | 30 ++-- .../05-Usingpreciselyconnectcdc.mdx | 154 +++++++++--------- 4 files changed, 102 insertions(+), 102 deletions(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx index 4e1aa84fc81..1f790e0bfed 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Precisely | -| **Partner Product** | Precisely Connect CDC (Change Data Capture) | -| **Web Site** | https://www.precisely.com/ | +| **Partner name** | Precisely | +| **Partner product** | Precisely Connect CDC (Change Data Capture) | +| **Website** | https://www.precisely.com/ | | **Version** | Precisely Connect CDC V5.8 | -| **Product Description** | Precisely Connect CDC (Change Data Capture) provides real-time data replication and change data capture functionality. Integrate data seamlessly from legacy systems into next-gen cloud and data platforms with one solution. Seamless data access and collection, Precisely Connect CDC helps you take control of your data from mainframe to cloud. Integrate data through batch and real-time ingestion for advanced analytics, comprehensive machine learning and seamless data migration. 
Precisely Connect CDC leverages the expertise Precisely has built over decades as a leader in mainframe sort and IBM i data availability and security. Access to all your enterprise data for the most critical business projects is ensured by support for a wide range of sources and targets for all your ETL and change data capture (CDC) needs including EDB Postgres Advanced Server and PostgreSQL. | \ No newline at end of file +| **Product description** | Precisely Connect CDC (Change Data Capture) provides real-time data replication and change data capture functionality. Integrate data seamlessly from legacy systems into next-gen cloud and data platforms with one solution. Using seamless data access and collection, Precisely Connect CDC helps you take control of your data from mainframe to cloud. Integrate data through batch and real-time ingestion for advanced analytics, comprehensive machine learning, and seamless data migration. Precisely Connect CDC leverages the expertise Precisely has built over decades as a leader in mainframe sort and IBM i data availability and security. Access to all your enterprise data for the most critical business projects is ensured by support for a wide range of sources and targets for all your ETL and CDC needs, including EDB Postgres Advanced Server and PostgreSQL. | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/03-SolutionSummary.mdx index 59089ef5405..59b9f85bcc8 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/03-SolutionSummary.mdx @@ -1,14 +1,12 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Precisely Connect CDC (Change Data Capture) provides real-time data replication and change data capture functionality. 
Precisely Connect CDC real-time replication ensures that databases are in-sync for reporting, analytics, and data warehousing. You can replicate changes as they happen across hierarchical data stores (IMS, VSAM), relational databases, streaming frameworks, and the cloud. +Precisely Connect CDC (Change Data Capture) provides real-time data replication and change data capture functionality. Precisely Connect CDC real-time replication ensures that databases are in sync for reporting, analytics, and data warehousing. You can replicate changes as they happen across hierarchical data stores (IMS, VSAM), relational databases, streaming frameworks, and the cloud. Precisely Connect CDC supports EDB Postgres Advanced Server and PostgreSQL databases in different modes as either a source or target database for real-time data replication and change data capture functionality. -

- -

+![Precisely Connect CDC Solution Summary](Images/ConnectCDCSolutionSummary.png) -EDB Postgres Advanced Server and PostgreSQL can be either the Source Database or Target Database in the diagram dependent on your configuration. \ No newline at end of file +In the diagram, EDB Postgres Advanced Server and PostgreSQL can be either the source database or target database, depending on your configuration. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx index fb221639d72..99e868c3dc7 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx @@ -7,32 +7,32 @@ description: 'Walkthrough on configuring the integration' Implementing Precisely Connect CDC with EDB Postgres Advanced Server requires the following components: -!!! Note - The EDB Postgres Advanced Server, EDB Postgres Extended Server and PostgreSQL Server products will be referred to as Postgres Distribution. The specific Distribution type will be dependent upon customer need or preference. - -- Precisely supported source and target systems. The source, target or both can be your Postgres Distribution(s). In our example we will be using EDB Postgres Advanced Server for both the source and target database. +- Precisely supported source and target systems. The source, target, or both can be your Postgres distributions. The example uses EDB Postgres Advanced Server for both the source and target database. - Precisely Connect CDC software. -- PostgreSQL management tool such as pgAdmin or Postgres Enterprise Manager. In our example we will use pgAdmin. This tool will be used to verify that the data is being replicated as expected. +- PostgreSQL management tool, such as pgAdmin or Postgres Enterprise Manager. 
The example uses pgAdmin to verify that the data is being replicated as expected. + +!!! Note + We refer to the EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server products as the Postgres distribution. The specific distribution type depends on your needs and preferences. -## Configuring your PostgreSQL Distribution +## Configuring your PostgreSQL distribution -These components are needed before integrating PostgreSQL Distribution with Precisely Connect CDC: +You need these components before integrating PostgreSQL Distribution with Precisely Connect CDC: -1. Two running instances of EDB Postgres Advanced Server. +- Two running instances of EDB Postgres Advanced Server -2. pgAdmin must be installed on the Source and Target EDB Postgres Advanced Server instances. +- pgAdmin installed on the source and target EDB Postgres Advanced Server instances -3. A running instance of Precisely Connect CDC. +- A running instance of Precisely Connect CDC ![Configuration of PostgreSQL Distribution](Images/configureepas.png) -## Enable WAL Archiving +## Enable WAL archiving -The following steps demonstrate how to enable WAL Archiving on the Source EDB Postgres Advanced Server instance, which is required for Precisely Connect CDC’s replication functionality. +Enable WAL archiving on the source EDB Postgres Advanced Server instance, which is required for Precisely Connect CDC’s replication functionality. -1. Go to the installation directory of EDB Postgres Advanced Server (e.g `C:\Program Files\edb\as14\`). +1. Go to the installation directory of EDB Postgres Advanced Server (for example, `C:\Program Files\edb\as14\`). -2. Go to the data directory and open the postgresql.conf file. +2. In the data directory, open the `postgresql.conf` file. 3. Modify the following settings as per the given values. @@ -47,4 +47,4 @@ max_replication_slots = 20 4. Restart the EDB Postgres Advanced Server Source instance. !!! 
Note - No configuration e.g WAL Archiving is required on Target EDB Postgres Advanced Server instance. + No configuration, such as WAL archiving, is required on the target EDB Postgres Advanced Server instance. diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx index 759f1aaf89b..434de6bb15a 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx @@ -3,9 +3,11 @@ title: 'Using' description: 'Walkthrough of example usage scenarios' --- -After you have configured your EDB Postgres Distribution, as stated in the Configuring section, you will be able to then use Precisely CDC's Replication functionality. For the examples in this guide, the replication functionality is demonstrated using EDB Postgres Advanced Server. +After you configure your EDB Postgres distribution, you can use Precisely CDC's replication functionality. These examples show the replication functionality using EDB Postgres Advanced Server. + + -- Configure Postgres Distribution as a Source Instance in Connect CDC Director +- Configure Postgres Distribution as a source instance in Connect CDC Director - Setup and configuration of EDB Postgres Advanced Server as a source instance for Connect CDC Director. - Configure Postgres Distribution as a Target Instance in Connect CDC Director - Setup and configuration of EDB Postgres Advanced Server as a target instance to Connect CDC Director. @@ -20,70 +22,70 @@ After you have configured your EDB Postgres Distribution, as stated in the Confi - Execute Replication using Connect CDC MonCon - Replication distribution will replicate all the changes done on the Source Tables to the Target Tables.
-With Precisely Connect CDC you also have the ability to “Transform” your data as you replicate it from a source to target using Data Transformation. See the section on “Data Transformation” on how to enable this feature. +With Precisely Connect CDC, you can also “transform” your data as you replicate it from a source to target using Data Transformation. See the section on “Data Transformation” on how to enable this feature. - Data Transformation - - Modify the source data you are distributing or even create or derive the data you distribute to the target system. + - Modify the source data you're distributing or even create or derive the data you distribute to the target system. -## Configure Postgres Distribution as a Source Instance in Connect CDC Director +## Configure Postgres distribution as a source instance in Connect CDC Director -The following steps demonstrate the setup and configuration of EDB Postgres Advanced Server as a source instance for Connect CDC Director. +Set up and configure EDB Postgres Advanced Server as a source instance for Connect CDC Director. 1. Open Connect CDC Director. -2. Click `New` option. +2. Select **New**. -3. Right click `Hosts, Servers, Tables` and select `New Host`. +3. Right-click **Hosts, Servers, Tables** and select **New Host**. -4. Enter information for the `Host`, `Host` is the machine where Connect CDC Director is running. +4. Enter information for the host. The host is the machine where Connect CDC Director is running. ![Create New Host](Images/newhost.png) -5. Right Click on the newly added `Host` and select `Test Connection`, a success message will be displayed. +5. Right-click the newly added host and select **Test Connection**. A success message appears. -6. Right Click on the newly added `Host` and select `New Server`. +6. Right-click the newly added host and select **New Server**. -7.
Enter Connection Information for EDB Postgres Advanced Server as `Source` Instance (Server name is where the database name is given). +7. Enter connection information for EDB Postgres Advanced Server as the source instance. (**Server Name** is where you give the database name.) ![Create New Server](Images/newserver.png) -8. Right Click on the newly added `Server` and select `Install Source Metabase`. +8. Right-click the newly added server and select **Install Source Metabase**. -9. Provide EDB Postgres Advanced Server Source Instance User and Password on `User ID and Password` Screen. +9. On the User ID and Password screen, provide the EDB Postgres Advanced Server source instance user name and password. -10. Provide password of `rpuser` (refer to the user name and password given on `New Server` Screen) when prompted on the User ID and Password screen. +10. When prompted on the User ID and Password screen, provide the password of rpuser. Refer to the user name and password given on the New Server screen. !!! Note - `rpuser` is the replication user added by default by Precisely Connect CDC. + rpuser is the replication user added by default by Precisely Connect CDC. -11. Click Ok on `Install Source Metabase` screen. +11. On the Install Source Metabase screen, select **OK**. -12. A success message will be displayed once the `Source Metabase` is created on the EDB Postgres Advanced Server `Source` Instance. + A success message appears once the source metabase is created on the EDB Postgres Advanced Server source instance. -13. Open pgAdmin and connect to your EDB Postgres Advanced Server Source instance, navigate to Schemas and under the Schemas `rpuser` Schema has been created. +13. Open pgAdmin and connect to your EDB Postgres Advanced Server Source instance. Navigate to Schemas. Under the schemas, the rpuser schema is created. ![pgAdmin](Images/pgadming.png) -14. Navigate back to Connect CDC Director and right Click on the newly added Server and select `Test Connection`.
A success message will be displayed. +14. Navigate back to Connect CDC Director. Right-click the newly added server and select **Test Connection**. A success message appears. -15. Right Click on the newly added `Server` and select `Prepare User Database`, this will add the replication user to the public database. +15. To add the replication user to the public database, right-click the newly added server and select **Prepare User Database**. -16. On `Database/Schema name` Screen provide the schema name for the schema you wish to replicate. In our example we are using schema `public`. +16. On the Database/Schema name screen, provide the schema name for the schema you want to replicate. This example uses the schema `public`. -17. On the `User ID and Password` Screen, provide EDB Postgres Advanced Server Source Instance `User` and `Password`. +17. On the User ID and Password screen, provide the EDB Postgres Advanced Server source instance user name and password. -18. Click Ok on `Prepare User Database` Screen. +18. On the Prepare User Database screen, select **OK**. -19. A success message will be displayed once the `Prepare User Database` operation is successful. + After the Prepare User Database operation is successful, a success message appears. -20. Right Click on the newly added `Server` and select `Refresh Available Tables`. +20. Right-click the newly added server and select **Refresh Available Tables**. -21. Click on the Refresh list and then select schema `public` and click ok. +21. Select the Refresh list. Select the schema `public` and select **OK**. ![ Refresh Available Tables](Images/refreshtables.png) -22. To display the available tables click newly added `Server` —> `Tables` —> `Available Tables` —> `Public` (Schema name), a list of available tables will be displayed. +22. To display the available tables, select the newly added server and select **Tables > Available Tables > Public** (schema name). A list of available tables appears.
![Available Tables](Images/availabletables.png) @@ -91,35 +93,35 @@ The following steps demonstrate the setup and configuration of EDB Postgres Adva -1. Right click `DBMS Servers` and `New Server`. +1. Right-click **DBMS Servers** and select **New Server**. 2. Provide Connection Information for the EDB Postgres Advanced Server Target Instance. ![Target Server](Images/targetserver.png) -3. Right Click on the newly added `Server` and select `Install Target Only Metabase`. +3. Right-click the newly added server and select **Install Target Only Metabase**. -4. Provide EDB Postgres Advanced Server Target Instance User and Password on `User ID and Password` Screen. +4. On the User ID and Password screen, provide the EDB Postgres Advanced Server target instance user name and password. -5. Provide password of `rpuser` (refer to the user name and password given on `New Server` Screen) when prompted on the `User ID and Password` screen. +5. When prompted on the User ID and Password screen, provide the password of rpuser. Refer to the user name and password given on the New Server screen. -6. Click Ok on `Install Target Only Metabase` screen. +6. On the Install Target Only Metabase screen, select **OK**. -7. A success message will be displayed once the `Target Only Metabase` is created. +7. A success message appears once the Target Only Metabase is created. -8. Open pgAdmin and connect to EDB Postgres Advanced Server `Target` Instance, and under Schemas `rpuser` Schema is created. +8. Open pgAdmin and connect to the EDB Postgres Advanced Server target instance. Under Schemas, the rpuser schema is created. ![pgAdmin](Images/targetpgadmin.png) -9. Right Click on the newly added `Server` and select `Prepare User Database`, this will add the replication user to EDB Postgres Advanced Server Target Instance. +9.
Right-click the newly added server and select **Prepare User Database**. This adds the replication user to the EDB Postgres Advanced Server target instance. -10. On `Database/Schema Name` provide schema name `public`. +10. On **Database/Schema Name**, provide the schema name `public`. -11. On `User ID and Password` Screen, provide EDB Postgres Advanced Server Target Instance user and password. +11. On the User ID and Password screen, provide the EDB Postgres Advanced Server target instance user and password. -12. Click Ok on the `Prepare User Database` screen. +12. Select **OK** on the Prepare User Database screen. -13. A success message will be displayed once the `Prepare User Database` is successful. +13. A success message appears once the Prepare User Database operation is successful. ## Create Distribution on Connect CDC Director @@ -127,35 +129,35 @@ The following steps demonstrate the creation of Distribution which will define h 1. Open Connect CDC Director. -2. Right click on `Distributions` and select `New Distribution`. +2. Right-click **Distributions** and select **New Distribution**. -3. Enter Distribution information e.g `distribution name`, `description` etc and click ok. +3. Enter distribution information, for example, distribution name and description, and select **OK**. ![New Distribution](Images/newdistribution.png) -4. Go to Source DBMS `Server` —> `Tables` —> `Available Tables` —> `Public` (Schema name). +4. Go to Source DBMS `Server` > `Tables` > `Available Tables` > `Public` (Schema name). -5. Select the table(s) to replicate using the checking box in front of them and right click and select `Select for Distribution` —> `All Checked Tables`. +5. Select the table(s) to replicate using the check box in front of them. Then right-click and select **Select for Distribution > All Checked Tables**. ![Select Tables for Replication](Images/selecttableforreplication.png) -6.
On the `Distributed Tables` Screen, select the `Target Server` under `Select one or more target servers` and select `Create tables on target` option and select the distribution under `Select a Distribution` option and click ok. +6. On the Distributed Tables screen, under **Select one or more target servers**, select the target server. Select the **Create tables on target** option, select the distribution under **Select a Distribution**, and select **OK**. ![Distributed Tables](Images/distributetables.png) -7. On Connect CDC Director Screen, select Yes. +7. On the Connect CDC Director screen, select **Yes**. -8. On the `Target Tables` Screen, enter the Target Schema against each table and Click Next. +8. On the Target Tables screen, enter the target schema for each table and select **Next**. -9. On `Target Server Login Details` Screen, enter EDB Postgres Advanced Server Target Instance user and password and click test connect. Once successful, click Add and then click `Create`. +9. On the Target Server Login Details screen, enter the EDB Postgres Advanced Server target instance user and password and select **Test Connect**. Once successful, select **Add** and then select **Create**. ![Target Server Login Details Screen](Images/targetserverlogin.png) -10. Once done, click `Finish`. +10. Once done, select **Finish**. -11. To confirm the tables are created on `Target Server`, click the `Target DBMS Server` —> `Tables` —> `Receiving Tables`, the list of the tables will be displayed which are available on `Source DBMS Server`. +11. To confirm the tables are created on the target server, select **Target DBMS Server > Tables > Receiving Tables**. The tables available on the source DBMS server are listed.
![Confirm Tables Are Created on Target Server](Images/confirmtablesontarget.png) @@ -163,13 +165,13 @@ ![pgAdmin](Images/pgAdmin.png) -13. Mappings of tables setup for replication from EDB Postgres Advanced Server Source Instance to EDB Postgres Advanced Server Target Instance will be created under `Distributions` —> `Mappings`. +13. Mappings of tables set up for replication from the EDB Postgres Advanced Server source instance to the EDB Postgres Advanced Server target instance are created under **Distributions > Mappings**. ![Mappings of Tables](Images/tablemappings.png) -14. Open `Distributions` —> `Newly Created Distribution` —> `Requests` —> `New Request` —> `Copy`. Copy will copy/replicate all the data in tables from EDB Postgres Advanced Server Source Instance to EDB Postgres Advanced Server Target Instance. +14. Open **Distributions > Newly Created Distribution > Requests > New Request > Copy**. Copy replicates all the data in tables from the EDB Postgres Advanced Server source instance to the EDB Postgres Advanced Server target instance. -15. Provide required information and click Ok on `Copy Request Properties` Screen. +15. Provide the required information and select **OK** on the Copy Request Properties screen. ![Copy Request](Images/copyrequest.png) @@ -177,19 +179,19 @@ The Model created in the above steps needs to be saved, validated and committed before the Replication can be performed. -1. To save the Model, click `File` and select `Save` option and provide the name for the `Model`. +1. To save the Model, select **File > Save** and provide a name for the Model. -2. Right Click `Enterprise Data Movement Model` and select `Validate` to validate the Model. +2. Right-click **Enterprise Data Movement Model** and select **Validate** to validate the Model. 3.
No error will be displayed if the Model is valid. ![Model Validation](Images/validmodel.png) -4. Right Click `Enterprise Data Movement Model` and select `Commit` —> `Full`. +4. Right-click **Enterprise Data Movement Model** and select **Commit > Full**. ![Model Commit](Images/modelcommit.png) -5. Click Ok on `Commit Model` Screen. This will create the `Model` files. +5. Select **OK** on the Commit Model screen. This creates the Model files. ## Execute Copy Replication using Connect CDC MonCon @@ -197,17 +199,17 @@ Connect CDC MonCon is a GUI application, separate from the Connect CDC Director, 1. Open Connect CDC MonCon. -2. Click `Model` —> `New Model` from Menu. +2. Select **Model > New Model** from the menu. -3. Provide `Hostname/IP` and select the saved `Model` from the drop down list. +3. Provide the Hostname/IP and select the saved Model from the drop-down list. ![New Model](Images/newmodel.png) -4. In the `Request` Section, select the `Distribution Model` that was created for copying data created in the above steps and right click and select `start`. +4. In the **Request** section, select the Distribution Model created for copying data in the above steps. Right-click and select **Start**. ![Distribution Model](Images/distributionmodel.png) -5. Click on `Process` to see the progress. +5. Select **Process** to see the progress. 6. Open pgAdmin once the copy operation is completed successfully and connect the EDB Postgres Advanced Server Target Instance and check the tables and the data is copied to the tables. @@ -221,21 +223,21 @@ Once the Copy replication operation is successful then a Replication option will 1. Open Connect CDC Director. -2. Create the `Replication Request` from `Distributions` —> `Newly Created Distribution` —> `Requests` —> `New Request` —> `Replication`. +2. Create the Replication Request from **Distributions > Newly Created Distribution > Requests > New Request > Replication**. -3.
Enter the name and select **OK**. ![Replication Request](Images/replicationrequest.png) ![Replication Request](Images/replicationrequest2.png) -4. Save the changes in the `Model`. +4. Save the changes in the Model. -5. Right Click `Enterprise Data Movement Model` and select `Validate` to validate the Model. +5. Right-click **Enterprise Data Movement Model** and select **Validate** to validate the Model. -6. Right Click `Enterprise Data Movement Model` and select `Commit` —> `Full`. +6. Right-click **Enterprise Data Movement Model** and select **Commit > Full**. -7. Click Ok on `Commit Model` Screen. This will create the Model files. +7. Select **OK** on the Commit Model screen. This creates the Model files. ## Execute Replication using Connect CDC MonCon @@ -243,17 +245,17 @@ Replication distribution will replicate all the changes done on the Source Table 1. Open Connect CDC MonCon. -2. As the Model is updated in the Connect CDC Director to add the `Replication Distribution`, click on the `Model Update`, this will add the `Replication Distribution` to the Connect CDC MonCon Interface. +2. As the Model is updated in the Connect CDC Director to add the Replication Distribution, select **Model Update**. This adds the Replication Distribution to the Connect CDC MonCon interface. -3. The replication request will be displayed once the `Model` is updated successfully. +3. The replication request appears once the Model is updated successfully. -4. In the `Request` Section, select the `Distribution Model` created for Replication and right click and select `Start`. +4. In the **Request** section, select the Distribution Model created for replication. Right-click and select **Start**. ![Distribution Model](Images/distrmodel.png) -5. On the `Source Instance`, update the data to be replicated in the `Source Tables`. +5. On the source instance, update the data to be replicated in the source tables. -6.
On the `Target Instance`, check data in the `Target Tables`, data will be updated there. +6. On the target instance, check the data in the target tables. The data is updated there. ![pgAdmin](Images/pgAdmintargettables.png) @@ -273,11 +275,11 @@ The following steps demonstrate data transformations. 1. Open Connect CDC Director. -2. Go to `Source DBMS Server` —> `Tables` —> `Sending Tables`. +2. Go to **Source DBMS Server > Tables > Sending Tables**. -3. Select the table(s) to perform data transformation using the checking box in front of them and right click and select `Properties` —> `Mappings`. +3. Select the table(s) to perform data transformation on using the check box in front of them. Then right-click and select **Properties > Mappings**. -4. On the `Mapping` tab, you define the column mappings for each of your target tables with respect to the source tables. The `Mapping` tab displays the corresponding columns in the `Target Column` and `Source Data` columns in each row. Connect CDC Director automatically mapped target columns to source columns that have the same name. +4. On the **Mapping** tab, you define the column mappings for each of your target tables with respect to the source tables. The **Mapping** tab displays the corresponding columns in the **Target Column** and **Source Data** columns in each row. Connect CDC Director automatically maps target columns to source columns that have the same name. When the Connect CDC Director maps a source table to a same-named target table, it also associates individual source table columns with same-named target table columns. You then select which of these column pairings you want to include in your data distribution, which additional column pairings you want to arrange, and what type of data transformation, if any, you want to assign to each column pairing. @@ -294,7 +296,7 @@ The Mapping tab grid displays each target table column and its datatype as well - DKey : This column identifies the Distribution Key.
By default, DKey is marked for the Primary key. You can deselect the default and add as many check marks as necessary. - Method : It contains a list of data transformation functions. -5. In our example, click on `Method` drop down list for the target column loc and select lower case, this data transformation will convert the data in loc column of target table to lower case after the replication is done. +5. In our example, from the **Method** drop-down list for the target column loc, select lower case. This data transformation converts the data in the loc column of the target table to lowercase after the replication is done. ![Data Transformation](Images/datatransformation1.png) From d7cd1ff825dbdbf7157b928ac80d0874b425cdcc Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 25 Jul 2023 13:25:14 -0400 Subject: [PATCH 11/65] major cleanup of Precisely Connect CDC --- .../02-PartnerInformation.mdx | 2 +- .../04-Configuratingpreciselyconnectcdc.mdx | 18 +- .../05-Usingpreciselyconnectcdc.mdx | 238 +++++++++--------- .../06-CertificationEnvironment.mdx | 6 +- .../PreciselyConnectCDC/07-Support.mdx | 26 +- .../PreciselyConnectCDC/index.mdx | 9 +- 6 files changed, 151 insertions(+), 148 deletions(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx index 1f790e0bfed..fedfe75ce9c 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/02-PartnerInformation.mdx @@ -9,4 +9,4 @@ description: 'Details of the partner' | **Partner product** | Precisely Connect CDC (Change Data Capture) | | **Website** | https://www.precisely.com/ | | **Version** | Precisely Connect CDC V5.8 | -| **Product description** | Precisely Connect CDC (Change Data Capture) provides real-time data replication and change data capture functionality.
Integrate data seamlessly from legacy systems into next-gen cloud and data platforms with one solution. Using seamless data access and collection, Precisely Connect CDC helps you take control of your data from mainframe to cloud. Integrate data through batch and real-time ingestion for advanced analytics, comprehensive machine learning, and seamless data migration. Precisely Connect CDC leverages the expertise Precisely has built over decades as a leader in mainframe sort and IBM i data availability and security. Access to all your enterprise data for the most critical business projects is ensured by support for a wide range of sources and targets for all your ETL and CDC needs, including EDB Postgres Advanced Server and PostgreSQL.
| \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx index 99e868c3dc7..61e3023c600 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx @@ -1,22 +1,22 @@ --- -title: 'Configuration' -description: 'Walkthrough on configuring the integration' +title: 'Configuring Precisely Connect CDC' +description: 'Walkthrough of configuring the integration' --- ## Prerequisites +!!! Note + We refer to the EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server products as the Postgres distribution. The specific distribution type depends on your needs and preferences. + Implementing Precisely Connect CDC with EDB Postgres Advanced Server requires the following components: - Precisely supported source and target systems. The source, target, or both can be your Postgres distributions. The example uses EDB Postgres Advanced Server for both the source and target database. - Precisely Connect CDC software. - PostgreSQL management tool, such as pgAdmin or Postgres Enterprise Manager. The example uses pgAdmin to verify that the data is being replicated as expected. -!!! Note - We refer to the EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server products as the Postgres distribution. The specific distribution type depends on your needs and preferences. 
-

 ## Configuring your PostgreSQL distribution

-You need these components before integrating PostgreSQL Distribution with Precisely Connect CDC:
+You need these components before integrating your PostgreSQL distribution with Precisely Connect CDC:

 - Two running instances of EDB Postgres Advanced Server

@@ -28,9 +28,9 @@ You need these components before integrating PostgreSQL Distribution with Precis

 ## Enable WAL archiving

-Enable WAL archiving on the source EDB Postgres Advanced Server instance, which is required for Precisely Connect CDC’s replication functionality.
+Enable WAL archiving on the source EDB Postgres Advanced Server instance. WAL archiving is required for Precisely Connect CDC’s replication functionality.

-1. Go to the installation directory of EDB Postgres Advanced Server (for example, `C:\Program Files\edb\as14\`).
+1. Go to the installation directory of EDB Postgres Advanced Server, for example, `C:\Program Files\edb\as14\`.

 2. In the data directory, open the `postgresql.conf` file.

@@ -47,4 +47,4 @@ max_replication_slots = 20

 4. Restart the EDB Postgres Advanced Server Source instance.

 !!! Note
-    No configuration, such as WAL archiving, is required on the target EDB Postgres Advanced Server instance.
+    Configuration, such as WAL archiving, isn't required on the target EDB Postgres Advanced Server instance.
diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx
index 434de6bb15a..a7a56212a95 100644
--- a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx
+++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx
@@ -1,31 +1,29 @@
 ---
-title: 'Using'
+title: 'Using Precisely Connect CDC'
 description: 'Walkthrough of example usage scenarios'
 ---

-After you configure your EDB Postgres distribution, you can use Precisely CDC's replication functionality. 
Thes examples show the replication functionality using EDB Postgres Advanced Server. +After you configure your EDB Postgres distribution, you can use Precisely Connect CDC's replication functionality. These examples show the replication functionality using EDB Postgres Advanced Server. - +- Configure Postgres Distribution as a source instance in Connect CDC Director. + - Set up and configure EDB Postgres Advanced Server as a source instance for Connect CDC Director. +- Configure Postgres Distribution as a target instance in Connect CDC Director. + - Set up and configure EDB Postgres Advanced Server as a target instance to Connect CDC Director. +- Create distribution on Connect CDC Director. + - Define how to replicate data from source to target in a model. +- Save the model. + - Save, validate, and commit the model created for replication. +- Execute **Copy Replication** using Connect CDC MonCon. + - Execute **Copy Replication** to copy all the data from the source tables to the target tables. +- Create replication distribution using Connect CDC Director. + - Replicate all the changes made on the source tables to the target tables. +- Execute replication using Connect CDC MonCon. + - Replication distribution replicates all the changes done on the source tables to the target tables. -- Configure Postgres Distribution as a source instance in Connect CDC Director - - Setup and configuration of EDB Postgres Advanced Server as a source instance for Connect CDC Director. -- Configure Postgres Distribution as a Target Instance in Connect CDC Director - - Setup and configuration of EDB Postgres Advanced Server as a target instance to Connect CDC Director. -- Create Distribution on Connect CDC Director - - Define how data will be replicated from source to target in a Model. -- Save the Model - - Save, validate and commit the Model created for Replication. 
-- Execute Copy Replication using Connect CDC MonCon
-  - Execute the Copy Replication to copy all the data from the Source Tables to the Target Tables.
-- Create Replication Distribution using Connect CDC Director
-  - Replicate all the changes made on the Source Tables to the Target Tables.
-- Execute Replication using Connect CDC MonCon
-  - Replication distribution will replicate all the changes done on the Source Tables to the Target Tables.
+With Precisely Connect CDC, you can also transform your data as you replicate it from a source to target using data transformation. For information on how to enable this feature, see [Data transformation](#data-transformation).

-With Precisely Connect CDC, you can also have “transform” your data as you replicate it from a source to target using Data Transformation. See the section on “Data Transformation” on how to enable this feature.

-- Data Transformation
-  - Modify the source data you're distributing or even create or derive the data you distribute to the target system.
+- Data transformation
+  - Modify the source data you're distributing or create or derive the data you distribute to the target system.

 ## Configure Postgres distribution as a source instance in Connect CDC Director

@@ -36,42 +34,42 @@ Set up and configure EDB Postgres Advanced Server as a source instance for Conne

 2. Select **New**.

-3. Right-click **Hosts, Servers, Tables** and select **New Host**.
+3. Right-click **Hosts, Servers, Tables**, and select **New Host**.

-4. Enter information for the host, Host is the machine where Connect CDC Director is running.
+4. Enter information for the host. The host is the machine where Connect CDC Director is running.

 ![Create New Host](Images/newhost.png)

-5. Right-click the newly added host and select **Test Connection**. A success message appears.
+5. Right-click the newly added host, and select **Test Connection**. A success message appears.

-6. Right-click the newly added host and select **New Server**.
+6. 
Right-click the newly added host, and select **New Server**. -7. Enter connection information for EDB Postgres Advanced Server as source instance. (**Server Name** is where you give the database name.) +7. Enter connection information for EDB Postgres Advanced Server as the source instance. (**Server Name** is where you give the database name.) ![Create New Server](Images/newserver.png) -8. Right-click the newly added server and select **Install Source Metabase**. +8. Right-click the newly added server, and select **Install Source Metabase**. 9. On the User ID and Password screen, provide the EDB Postgres Advanced Server source instance user name and password. -10. when prompted on the User ID and Password screen, provide the password of rpuser. Refer to the user name and password given on New Server screen. +10. When prompted on the User ID and Password screen, provide the password of rpuser. Refer to the user name and password given on the New Server screen. !!! Note rpuser is the replication user added by default by Precisely Connect CDC. 11. On the Install Source Metabase screen, select **OK**. - A success message appears once the source metabase is created on the EDB Postgres Advanced Server source instance. + After the source metabase is created on the EDB Postgres Advanced Server source instance, a success message appears. -13. Open pgAdmin and connect to your EDB Postgres Advanced Server Source instance. Navigate to Schemas. Under the schemas, the rpuser schema was created. +13. Open pgAdmin and connect to your EDB Postgres Advanced Server source instance. Navigate to Schemas. The rpuser schema was created there. ![pgAdmin](Images/pgadming.png) -14. Navigate back to Connect CDC Director. Right-click the newly added server and select **Test Connection**. A success message appears. +14. Navigate back to Connect CDC Director. Right-click the newly added server, and select **Test Connection**. A success message appears. -15. 
To add the replication user to the public database, right-click the newly added server, and select **Prepare User Database**.

-16. On Database/Schema name screen, provide the schema name for the schema you want to replicate. This example uses the schema `public`.
+16. On the Database/Schema Name screen, provide the schema name for the schema you want to replicate. This example uses the schema `public`.

 17. On the User ID and Password screen, provide the EDB Postgres Advanced Server source instance user name and password.

 After the Prepare User Database operation is successful, a success message appears.

-20. Right-click the newly added server and select **Refresh Available Tables**.
+20. Right-click the newly added server, and select **Refresh Available Tables**.

-21. Select the Refresh list. Select the schema `public` and click **OK**.
+21. Select **Refresh list**. Select the schema `public`, and select **OK**.

-![ Refresh Available Tables](Images/refreshtables.png)
+![Refresh Available Tables](Images/refreshtables.png)

-22. To display the available table, select the newly added Server and select **Tables > Available Tables > Public > (Schema name). A list of available tables appears.
+22. To display the available tables, select the newly added server, and select **Tables > Available Tables > Public > (Schema name)**. A list of available tables appears.

 ![Available Tables](Images/availabletables.png)

-## Configure Postgres Distribution as a Target Instance in Connect CDC Director
+## Configure Postgres distribution as a target instance in Connect CDC Director

-
+Set up and configure EDB Postgres Advanced Server as a target instance to Connect CDC Director. -1. Right click **DBMS Servers** and select **New Server**. +1. Right-click **DBMS Servers**, and select **New Server**. -2. Provide Connection Information for the EDB Postgres Advanced Server Target Instance. +2. Provide connection information for the EDB Postgres Advanced Server target instance. ![Target Server](Images/targetserver.png) -3. Right-click the newly added Server and select **Install Target Only Metabase**. +3. Right-click the newly added server, and select **Install Target Only Metabase**. -4. Provide EDB Postgres Advanced Server Target Instance User and Password on User ID and Password screen. +4. On the User ID and Password screen, provide the EDB Postgres Advanced Server target instance user name and password. -5. Provide password of rpuser (refer to the user name and password given on New Server screen) when prompted on the User ID and Password screen. +5. When prompted on the User ID and Password screen, provide the password of rpuser. (Refer to the user name and password given on New Server screen.) -6. Select **OK** on Install Target Only Metabase screen. +6. On the Install Target Only Metabase screen, select **OK**. -7. A success message will be displayed once the Target Only Metabase is created. + After the target-only metabase is created, a success message appears. -8. Open pgAdmin and connect to EDB Postgres Advanced Server Target Instance, and under Schemas rpuser Schema is created. +7. Open pgAdmin and connect to the EDB Postgres Advanced Server target instance. Under Schemas, the `rpuser` schema was created. ![pgAdmin](Images/targetpgadmin.png) -9. Right-click the newly added Server and select **Prepare User Database**, this will add the replication user to EDB Postgres Advanced Server Target Instance. +8. 
Right-click the newly added server, and select **Prepare User Database**, which adds the replication user to the EDB Postgres Advanced Server target instance.

-10. On **Database/Schema Name** provide schema name `public`.
+9. In **Database/Schema Name**, provide the schema name `public`.

-11. On User ID and Password screen, provide EDB Postgres Advanced Server Target Instance user and password.
+10. On the User ID and Password screen, provide the EDB Postgres Advanced Server target instance user and password.

-12. Select **OK** on the Prepare User Database screen.
+11. On the Prepare User Database screen, select **OK**.

-13. A success message will be displayed once the Prepare User Database is successful.
+    After the prepare user database operation completes, a success message appears.

-## Create Distribution on Connect CDC Director
+## Create distribution on Connect CDC Director

-The following steps demonstrate the creation of Distribution which will define how data will be replicated from source to target. In our case from EDB Postgres Advanced Server source to EDB Postgres Advanced Server target.
+Create a distribution that defines how to replicate data from source to target. This example replicates from an EDB Postgres Advanced Server source to an EDB Postgres Advanced Server target.

 1. Open Connect CDC Director.

-2. Right click **Distributions** and select **New Distribution**.
+2. Right-click **Distributions**, and select **New Distribution**.

-3. Enter Distribution information e.g distribution name, description etc and click **OK**.
+3. Enter the distribution information, such as distribution name and description. Select **OK**.

 ![New Distribution](Images/newdistribution.png)

-4. Go to Source DBMS `Server` > `Tables` > `Available Tables` > `Public` (Schema name).
+4. Go to the source DBMS server **Tables > Available Tables > Public > (Schema name)**.

-5. 
Select the table(s) to replicate using the checking box in front of them and right click and select **Select for Distribution > All Checked Tables**.
+5. Using the check boxes, select the tables to replicate. Right-click, and select **Select for Distribution > All Checked Tables**.

 ![Select Tables for Replication](Images/selecttableforreplication.png)

-6. On the Distributed Tables screen, select the Target Server under **Select one or more target servers** and select **Create tables on target** option and select the distribution under **Select a Distribution** option and click **OK**.
+6. On the Distributed Tables screen, under **Select one or more target servers**, select the target server. Select the **Create tables on target** option, select the distribution under **Select a Distribution**, and select **OK**.

 ![Distributed Tables](Images/distributetables.png)

-7. On Connect CDC Director screen, select Yes.
+7. On the Connect CDC Director screen, select **Yes**.

-8. On the Target Tables screen, enter the Target Schema against each table and Select Next.
+8. On the Target Tables screen, enter the target schema for each table, and select **Next**.

 ![Target Tables Screen](Images/targettablescreen.png)

-9. On Target Server Login Details screen, enter EDB Postgres Advanced Server Target Instance user and password and click test connect. Once successful, click Add and then click **Create**.
+9. On the Target Server Login Details screen, enter the EDB Postgres Advanced Server target instance user name and password, and select **Test Connect**. Once successful, select **Add**, and then select **Create**.

 ![Target Server Login Details Screen](Images/targetserverlogin.png)

-10. Once done, click **Finish**.
+10. Select **Finish**.

-11. To confirm the tables are created on Target Server, click the `Target DBMS Server` > `Tables` > `Receiving Tables`, the list of the tables will be displayed which are available on Source DBMS Server.
+11. 
To confirm the tables were created on the target server, select **Target DBMS Server > Tables > Receiving Tables**. The list of the tables that are available on the source DBMS server is displayed.

 ![Confirm Tables Are Created on Target Server](Images/confirmtablesontarget.png)

-12. Open pgAdmin and connect to EDB Postgres Advanced Server Target Instance, and under Schema `public`, the tables will be created.
+12. Open pgAdmin, and connect to the EDB Postgres Advanced Server target instance. Under the schema `public`, the tables were created.

 ![pgAdmin](Images/pgAdmin.png)

-13. Mappings of tables setup for replication from EDB Postgres Advanced Server Source Instance to EDB Postgres Advanced Server Target Instance will be created under **Distributions > Mappings**.
+    Mappings of tables set up for replication from the EDB Postgres Advanced Server source instance to the EDB Postgres Advanced Server target instance were created under **Distributions > Mappings**.

 ![Mappings of Tables](Images/tablemappings.png)

-14. Open **Distributions > Newly Created Distribution > Requests > New Request > Copy**. Copy will copy/replicate all the data in tables from EDB Postgres Advanced Server Source Instance to EDB Postgres Advanced Server Target Instance.
+13. Open **Distributions > Newly Created Distribution > Requests > New Request > Copy**. Copy copies all the data in tables from the EDB Postgres Advanced Server source instance to the EDB Postgres Advanced Server target instance.

-15. Provide required information and click **OK** on Copy Request Properties screen.
+14. On the Copy Request Properties screen, provide the required information, and select **OK**.

 ![Copy Request](Images/copyrequest.png)

-## Save the Model
+## Save the model

-The Model created in the above steps needs to be saved, validated and committed before the Replication can be performed.
+You need to save, validate, and commit the model you created before you can perform the replication.

-1. 
To save the Model, click **File > Save** option and provide the name for the Model. +1. To save the model, select **File > Save**, and provide the name for the model. -2. Right-click **Enterprise Data Movement Model** and select **Validate** to validate the Model. +2. To validate the model, right-click **Enterprise Data Movement Model**, and select **Validate**. -3. No error will be displayed if the Model is valid. + If the model is valid, no error occurs. ![Model Validation](Images/validmodel.png) -4. Right-click **Enterprise Data Movement Model** and select **Commit > Full**. +4. Right-click **Enterprise Data Movement Model**, and select **Commit > Full**. ![Model Commit](Images/modelcommit.png) -5. Select **OK** on Commit Model screen. This will create the Model files. +5. On the Commit Model screen, select **OK**, which creates the model files. -## Execute Copy Replication using Connect CDC MonCon +## Execute copy replication using Connect CDC MonCon -Connect CDC MonCon is a GUI application, separate from the Connect CDC Director, that you use for monitoring and control functions. We will use Connect CDC MonCon to execute the Copy Replication. Copy Replication will copy all the data from the Source Tables to the Target Tables. +Connect CDC MonCon is a GUI application, separate from the Connect CDC Director, that you use for monitoring and control functions. This example uses Connect CDC MonCon to execute the copy replication. Copy replication copies all the data from the source tables to the target tables. 1. Open Connect CDC MonCon. -2. Select **Model > New Model** from Menu. +2. Select **Model > New Model**. -3. Provide Hostname/IP and select the saved Model from the drop down list. +3. Provide the hostname/IP, and select the saved model from the list. ![New Model](Images/newmodel.png) -4. In the **Request** Section, select the Distribution Model that was created for copying data created in the above steps and right click and select **start**. +4. 
In the **Request** section, select the distribution model that was created for copying data. Right-click, and select **Start**.

-![Distribution Model](Images/distributionmodel.png)
+![Distribution model](Images/distributionmodel.png)

-5. Select **Process** to see the progress.
+5. To see the progress, select **Process**.

-6. Open pgAdmin once the copy operation is completed successfully and connect the EDB Postgres Advanced Server Target Instance and check the tables and the data is copied to the tables.
+6. After the copy operation has completed successfully, open pgAdmin and connect to the EDB Postgres Advanced Server target instance. Check the tables to see whether the data was copied to them.

 ![pgAdmin](Images/pgAdmincopydone.png)

 ![pgAdmin](Images/pgAdmincopydone2.png)

-## Create Replication Distribution using Connect CDC Director
+## Create replication distribution using Connect CDC Director

-Once the Copy replication operation is successful then a Replication option will be executed which will replicate all the changes made on the Source Tables to the Target Tables.
+Once the copy replication operation is successful, execute a replication option to replicate all the changes made on the source tables to the target tables.

 1. Open Connect CDC Director.

-2. Create the Replication Request from **Distributions > Newly Created Distribution > Requests > New Request > Replication**.
+2. Create the replication request using **Distributions > Newly Created Distribution > Requests > New Request > Replication**.

-3. Enter the name and select **OK**.
+3. Enter the name, and select **OK**.

 ![Replication Request](Images/replicationrequest.png)

 ![Replication Request](Images/replicationrequest2.png)

-4. Save the changes in the Model.
+4. Save the changes in the model.

-5. Right-click **Enterprise Data Movement Model** and select **Validate** to validate the Model.
+5. Right-click **Enterprise Data Movement Model**, and select **Validate** to validate the model.

-6. Right-click **Enterprise Data Movement Model** and select **Commit > Full**.
+6. 
Right-click **Enterprise Data Movement Model**, and select **Commit > Full**.

-7. Select **OK** on Commit Model screen. This will create the Model files.
+7. To create the model files, on the Commit Model screen, select **OK**.

-## Execute Replication using Connect CDC MonCon
+## Execute replication using Connect CDC MonCon

-Replication distribution will replicate all the changes done on the Source Tables to the Target Tables.
+Replication distribution replicates all the changes done on the source tables to the target tables.

 1. Open Connect CDC MonCon.

-2. As the Model is updated in the Connect CDC Director to add the Replication Distribution, click the Model Update, this will add the Replication Distribution to the Connect CDC MonCon Interface.
+2. Because the model was updated in the Connect CDC Director to add the replication distribution, select **Model Update**. This adds the replication distribution to the Connect CDC MonCon interface.

-3. The replication request will be displayed once the Model is updated successfully.
+3. After the model updates successfully, the replication request is displayed.

-4. In the **Request** Section, select the Distribution Model created for Replication and right click and select **Start**.
+4. In the **Request** section, select the distribution model created for replication. Right-click, and select **Start**.

-![Distribution Model](Images/distrmodel.png)
+![Distribution model](Images/distrmodel.png)

-5. On the Source Instance, update the data to be replicated in the Source Tables.
+5. On the source instance, update the data to be replicated in the source tables.

-6. On the Target Instance, check data in the Target Tables, data will be updated there.
+6. On the target instance, check the data in the target tables. The data is updated there.
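+
+   As a rough sketch of this check in SQL rather than pgAdmin (run with psql against each instance; the `dept` table, its `deptno` and `loc` columns, and the row values are illustrative assumptions, not part of the Connect CDC configuration):
+
+   ```sql
+   -- On the source instance: make a change for Connect CDC to replicate.
+   UPDATE dept SET loc = 'BOSTON' WHERE deptno = 10;
+
+   -- On the target instance, after the replication request has processed
+   -- the change, the same row shows the new value.
+   SELECT deptno, loc FROM dept WHERE deptno = 10;
+   ```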
![pgAdmin](Images/pgAdmintargettables.png) -## Data Transformation +## Data transformation -Precisely Connect CDC has options that let you modify the source data you are distributing or even create or derive the data you distribute. You can use predefined functions to accomplish these data transformations or you can construct your own operations or functions. +Precisely Connect CDC has options that let you modify the source data you're distributing or even create or derive the data you distribute. You can use predefined functions to accomplish these data transformations, or you can construct your own operations or functions. For example, to express source currency values in terms of a different currency, you specify the transformation algorithm to use for that column mapping. For some common transformations, the algorithm is supplied as a predefined Connect CDC function. -The algorithms you specify are known in the Connect CDC Director as expressions. The Connect CDC Expression Handler parses them and then creates runtime code that calculates the result for the expressions. +The algorithms you specify are known in the Connect CDC Director as *expressions*. The Connect CDC Expression Handler parses them and then creates runtime code that calculates the result for the expressions. -In addition to or instead of calls to predefined functions, expressions may have an arbitrary number of constants, column references, and calculations on those constants and columns and functions. Their results must be compatible in type and length with the target column. +In addition to or instead of calls to predefined functions, expressions can have an arbitrary number of constants, column references, and calculations on those constants, columns, and functions. Their results must be compatible with the target column in type and length. ### Steps to demonstrate data transformations -The following steps demonstrate data transformations. +To demonstrate data transformations: 1. 
Open Connect CDC Director. 2. Go to **Source DBMS Server > Tables > Sending Tables**. -3. Select the table(s) to perform data transformation using the checking box in front of them and right click and select **Properties > Mappings**. +3. Using the check boxes, select the tables to perform data transformation. Right-click, and select **Properties > Mapping**. + +4. On the **Mapping** tab, you define the column mappings for each of your target tables with respect to the source tables. The **Mapping** tab displays the corresponding columns in the **Target Column** and **Source Data** columns in each row. Connect CDC Director maps target columns to source columns that have the same name. -4. On the **Mapping** tab, you define the column mappings for each of your target tables with respect to the source tables. The **Mapping** tab displays the corresponding columns in the **Target Column** and **Source Data** columns in each row. Connect CDC Director automatically mapped target columns to source columns that have the same name. + When the Connect CDC Director maps a source table to a same-named target table, it also associates individual source table columns with same-named target table columns. You then select: -When the Connect CDC Director maps a source table to a same-named target table, it also associates individual source table columns with same-named target table columns. You then select which of these column pairings you want to include in your data distribution, which additional column pairings you want to arrange, and what type of data transformation, if any, you want to assign to each column pairing. 
+ - The column pairings you want to include in your data distribution + - The additional column pairings you want to arrange + - The type of data transformation, if any, you want to assign to each column pairing -Mappings Tab has the following options + The **Mapping** tab has the following options: - - Receiving server list : The name of the target server is displayed. - - Receiving table list : The name of the target table is displayed. If it does not (for example, because its name does not match your source table’s), select the target table from the Receiving table list. The list includes all tables in the list of Available tables for the target server. Any table name with [M]preceding it is already mapped to a source table. + - **Receiving server list**: The name of the target server is displayed. + - **Receiving table list**: The name of the target table is displayed. If it doesn't, for example, because its name doesn't match your source table’s, select the target table from the **Receiving table** list. The list includes all tables in the list of available tables for the target server. Any table name with [M]preceding it is already mapped to a source table. -The Mapping tab grid displays each target table column and its datatype as well as its default corresponding source column, if any. Copy column under Method in the grid means the source column value is to be distributed without any special handling or transformation. + The **Mapping** tab grid displays each target table column and its datatype as well as its default corresponding source column, if any. Copy column under Method in the grid means the source column value is to be distributed without any special handling or transformation. - - Target Column : Contains all the columns in the target table. - - Datatype : Datatype of the column for both Source and Target tables as per the mapping defined between source and target table. - - Source Data : Contains all the columns in the Source table. 
- DKey : This column identifies the Distribution Key. By default, DKey is marked for the Primary key. You can deselect the default and add as many check marks as necessary.
-  - Method : It contains a list of data transformation functions.
+   - **Target Column**: Contains all the columns in the target table.
+   - **Datatype**: Datatype of the column for both source and target tables as per the mapping defined between source and target table.
+   - **Source Data**: Contains all the columns in the source table.
+   - **DKey**: Identifies the distribution key. By default, **DKey** is marked for the primary key. You can clear the default and add as many check marks as necessary.
+   - **Method**: Contains a list of data transformation functions.

-5. In our example, click on **Method** drop down list for the target column loc and select lower case, this data transformation will convert the data in loc column of target table to lower case after the replication is done.
+5. For this example, from the **Method** list for the target column `loc`, select lower case. This data transformation converts the data in the `loc` column of the target table to lower case after the replication is done.

 ![Data Transformation](Images/datatransformation1.png)

 6. Perform the replication from Connect CDC MonCon.

-7. On the Target Instance, check data in the Target Table dept, data will be transformed for column loc and it will be stored as lower case.
+7. On the target instance, check the data in the target table `dept`. The data is transformed for the column `loc` and is stored as lower case.
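+
+   To spot-check the transformation in SQL (a sketch; it assumes psql access to the target instance and the `dept` table and `loc` column used in this example):
+
+   ```sql
+   -- After replication, every replicated loc value on the target
+   -- should already be lower case, so this query returns no rows.
+   SELECT * FROM dept WHERE loc <> lower(loc);
+   ```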
![Data Transformation](Images/datatransformation2.png) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/06-CertificationEnvironment.mdx index e7eb95e39b3..50897972a07 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/06-CertificationEnvironment.mdx @@ -1,10 +1,10 @@ --- -title: 'Certification Environment' -description: 'Overview of the Certification Environment' +title: 'Certification environment' +description: 'Overview of the certification environment' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | June 12, 2023 | +| **Certification test date** | June 12, 2023 | | **EDB Postgres Advanced Server** | 15,14,13,12 | | **Precisely Connect CDC** | 5.8 | diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx index d61d02a3bfc..764aa871b4f 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx @@ -1,19 +1,21 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the Support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Precisely. A proper support contract is required to be in place at both EDB and Precisely. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. +Technical support for the use of these products is provided by both EDB and Precisely. 
A support contract must be in place at both EDB and Precisely. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs:** +The following log files are available. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. +### EDB Postgres Advanced Server logs -**PostgreSQL Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. + +### PostgreSQL logs The default log directories for PostgreSQL logs vary depending on the operating system: @@ -23,20 +25,20 @@ The default log directories for PostgreSQL logs vary depending on the operating - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log` -**Precisely Logs** +### Precisely logs -To collect the logs from your Precisely Connect CDC instance: +Collect the logs from your Precisely Connect CDC instance. -**Get logs with Connect CDC Director** +To get logs with Connect CDC Director: 1. Open Connect CDC Director. -2. Click the Tools button. +2. Select **Tools**. -3. Log File name and location will be displayed under Log File. + The log file name and location appear under **Log File**. -**Get logs with Connect CDC MonCon** +To get logs with Connect CDC MonCon: 1. Open Connect CDC MonCon. -2. The Connect CDC MonCon log file is located in the Connect CDC program directory in the kernel subdirectory. 
The name of the file is moncon.log. +2. The Connect CDC MonCon log file is located in the Connect CDC program directory in the kernel subdirectory. The name of the file is `moncon.log`. diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx index b2f757e2f42..cb012f4d309 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx @@ -1,14 +1,13 @@ --- -title: 'Precisely Connect CDC Implementation Guide' +title: 'Implementing Precisely Connect CDC' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+[Partner Program Logo](Images/PartnerProgram.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Precisely Connect CDC

-

This document is intended to augment each vendor’s product documentation in order to guide the reader in getting the products working together. It is not intended to show the optimal configuration for the certified integration.

+

This document is intended to augment each vendor’s product documentation to guide you in getting the products working together. It isn't intended to show the optimal configuration for the certified integration.

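The "lower case" method edited in the Connect CDC mapping walkthrough above is one of the per-column data transformations applied during replication. As a rough sketch of that behavior: the `apply_mapping` helper below is hypothetical (Connect CDC applies the method internally), and the `dept`/`loc` names are simply taken from the example in the patch.

```python
# Illustrative sketch only: Connect CDC applies the per-column "Method"
# during replication; apply_mapping() is a hypothetical stand-in.

def apply_mapping(row, methods):
    """Copy a source row to the target, applying each column's method."""
    transforms = {
        "copy column": lambda v: v,          # default: distribute unchanged
        "lower case": lambda v: v.lower(),   # the method chosen for loc
    }
    return {col: transforms[methods.get(col, "copy column")](val)
            for col, val in row.items()}

# Mapping for the dept example: only loc has a non-default method.
source_row = {"deptno": 10, "dname": "ACCOUNTING", "loc": "NEW YORK"}
target_row = apply_mapping(source_row, {"loc": "lower case"})
print(target_row)  # {'deptno': 10, 'dname': 'ACCOUNTING', 'loc': 'new york'}
```

Checking the target table after replication, as in step 7 of the walkthrough, should show the same result: `loc` lowercased, other columns copied as-is.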
From b7da7a6831a7950f6a80543df6be2aa403b75325 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 25 Jul 2023 14:00:30 -0400 Subject: [PATCH 12/65] Pure Storage edits --- .../02-PartnerInformation.mdx | 12 ++--- .../03-SolutionSummary.mdx | 8 +-- .../04-ConfiguringPureStorageFlashArray.mdx | 53 +++++++++---------- .../05-UsingPureStorageFlashArray.mdx | 22 ++++---- .../06-CertificationEnvironment.mdx | 6 +-- .../07-SupportandLoggingDetails.mdx | 20 +++---- .../PureStorageFlashArray/index.mdx | 7 ++- 7 files changed, 64 insertions(+), 64 deletions(-) diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/02-PartnerInformation.mdx index 96924568d4f..0d685a036a7 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' -description: 'Details of the Partner' +title: 'Partner information' +description: 'Details of the partner' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Pure Storage | -| **Partner Product** | FlashArray | -| **Web Site** | https://www.purestorage.com/ | +| **Partner name** | Pure Storage | +| **Partner product** | FlashArray | +| **Website** | https://www.purestorage.com/ | | **Version** | FlashArray//X, FlashArray//XL, FlashArray//C, Purity 6.2+ | -| **Product Description** | Pure Storage FlashArray is an enterprise class storage array that runs exclusively on the nonvolatile memory express protocol for memory access and storage. You can implement Pure Storage FlashArray with your EDB Postgres Advanced Server, Postgres Extended Server and PostgreSQL Server instances for a simple storage solution. 
| +| **Product description** | Pure Storage FlashArray is an enterprise-class storage array that runs exclusively on the nonvolatile memory express protocol for memory access and storage. You can implement Pure Storage FlashArray with your EDB Postgres Advanced Server, Postgres Extended Server, and PostgreSQL Server instances for a simple storage solution. | diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/03-SolutionSummary.mdx index bd87a9082c0..2ad0f4ef017 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/03-SolutionSummary.mdx @@ -1,13 +1,13 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Explanation of the solution and its purpose' --- -Pure Storage FlashArray is an enterprise-ready flash storage solution that provides enterprise performance, reliability and availability for your enterprise databases. FlashArray provides faster transactions via the protocols it deploys, even with big demands on applications. FlashArray runs exclusively on the nonvolatile memory express (NVMe) protocol for memory access and storage. FlashArray also utilizes High Availability to allow for less downtime in your databases, should a failure occur. +Pure Storage FlashArray is an enterprise-ready, flash-storage solution that provides enterprise performance, reliability, and availability for your enterprise databases. FlashArray provides faster transactions by way of the protocols it deploys, even with big demands on applications. FlashArray runs exclusively on the nonvolatile memory express (NVMe) protocol for memory access and storage. FlashArray also uses high availability to allow for less downtime in your databases if a failure occurs. -Pure Storage provides customers with enterprise-grade flash storage arrays to address storage needs. 
You can utilize these flash storage arrays with your EDB Postgres Advanced Server, EDB Postgres Extended Server or PostgreSQL Server databases for your data storage. FlashArray//X is a performance-optimized, all-flash storage array that provides block storage for Tier 0 and Tier 1 applications, while FlashArray//C is capacity-optimized all-flash storage for Tier 2 applications. +Pure Storage provides you with enterprise-grade flash storage arrays to address storage needs. You can use these flash storage arrays with your EDB Postgres Advanced Server, EDB Postgres Extended Server, or PostgreSQL Server databases for your data storage. FlashArray//X is a performance-optimized, all-flash storage array that provides block storage for Tier 0 and Tier 1 applications. FlashArray//C is capacity-optimized, all-flash storage for Tier 2 applications. -The following image shows how Pure Storage Flash Array integrates with a customer’s hardware server and an EDB Postgres Advanced Server, EDB Postgres Extended Server or PostgreSQL Server to provide the servers with Flash Array storage capabilities. +The following image shows how Pure Storage Flash Array integrates with your hardware server and an EDB Postgres Advanced Server, EDB Postgres Extended Server, or PostgreSQL Server to provide the servers with FlashArray storage capabilities. 
![PureStorageSolutionSummary](Images/PureStorageSolutionSummary.png) diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx index 8e4d5f98cb5..6b70f144d8f 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx @@ -1,48 +1,47 @@ --- title: 'Configuration' -description: 'Walkthrough on configuring the integration' +description: 'Walkthrough of configuring the integration' --- -Implementing Pure Storage FlashArray with EDB Postgres Advanced Server, EDB Postgres Extended Server or PostgreSQL Server requires the following components: !!! Note - The EDB Postgres Advanced Server, EDB Postgres Extended Server and PostgreSQL Server products will be referred to as Postgres Distribution. The specific Distribution type will be dependant upon customer need or preference. + We refer to the EDB Postgres Advanced Server, EDB Postgres Extended Server, and PostgreSQL Server products as a Postgres distribution. The specific distribution type depends on your needs and preferences. -- An active Postgres Distribution. -- Pure Storage FlashArray subscription. -- Additional hardware as described in the [Hardware Environment Requirements](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) section of the Pure Storage Best Practices document. 
+Implementing Pure Storage FlashArray with EDB Postgres Advanced Server, EDB Postgres Extended Server, or PostgreSQL Server requires the following components: -## Prerequisites +- An active Postgres distribution +- Pure Storage FlashArray subscription +- Additional hardware as described in the [Hardware Environment Requirements](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) section of the Pure Storage Best Practices document -- A running Postgres Distribution. -- Linux or Windows File System Configuration as defined in the [FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) documentation. -- The hardware protocol for your instance. -- The layout defined for where the FlashArray volumes will be mounted. +## Prerequisites -## Configure Pure Storage FlashArray for Postgres Distribution +- A running Postgres distribution +- Linux or Windows file system configuration as defined in the [FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) documentation +- The hardware protocol for your instance +- The layout defined for mounting the FlashArray volumes -Configuring Pure Storage FlashArray for your Postgres Distribution requires the following steps to be taken. +## Configuring Pure Storage FlashArray for Postgres distribution -1. First you must select your OS (either Windows or Linux) and configure to your specifications according to the [PostgreSQL on FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) document. +1. 
Select your OS (either Windows or Linux) and configure it to your specifications according to the [PostgreSQL on FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) document. -2. Install [your preferred Postgres Distribution](https://www.enterprisedb.com/docs/epas/latest/). +2. Install [your preferred Postgres distribution](/epas/latest/). -3. Initialize your Postgres Distribution. +3. Initialize your Postgres distribution. -There are best practices for both Linux and Windows servers for Flash Array and these can be found at [Linux Recommended Settings](https://support.purestorage.com/Solutions/Linux/Linux_Reference/Linux_Recommended_Settings) and [Validate Windows Server with Test-WindowsBestPractices Cmdlet](https://support.purestorage.com/Solutions/Microsoft_Platform_Guide/FlashArray_Connectivity/aaa11_Validate_Windows_Server_with_Test-WindowsBestPractices_Cmdlet) respectively. +You can find best practices for Linux and Windows servers for FlashArray at [Linux Recommended Settings](https://support.purestorage.com/Solutions/Linux/Linux_Reference/Linux_Recommended_Settings) and [Validate Windows Server with Test-WindowsBestPractices Cmdlet](https://support.purestorage.com/Solutions/Microsoft_Platform_Guide/FlashArray_Connectivity/aaa11_Validate_Windows_Server_with_Test-WindowsBestPractices_Cmdlet), respectively. -## Selecting Pure Storage FlashArray Hardware Environment Requirements for Postgres Distribution +## Selecting Pure Storage FlashArray hardware environment requirements for Postgres distribution -In order to access your block storage on Pure Storage FlashArray, you must first define which protocol is best for your environment and then set up that protocol. Listed below are the different types of protocols for FlashArray and their requirements. 
+To access your block storage on Pure Storage FlashArray, you must first define the protocol that's best for your environment and then set up that protocol. The following are the different types of protocols for FlashArray and their requirements. ![FlashArrayHardwareRequirements](Images/FlashArrayHardwareRequirements.png) -More information on these protocols and their requirements can be found in the [PostgreSQL on FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) document. +For more information on these protocols and their requirements, see the [PostgreSQL on FlashArray Implementation and Best Practices](https://support.purestorage.com/Solutions/PostgreSQL/Getting_Started/PostgreSQL_on_FlashArray_Implementation_and_Best_Practices) document. -!!! Note - Note from Pure Storage: The recommendation for volumes from FlashArray is to mount a volume to the following locations with the correct permissions set before installing PostgreSQL: - **Microsoft Windows** - `C:\Program Files\PostgreSQL\` Permissions - PostgreSQL service account - READ Permissions on all directories leading up to the service directory, WRITE permissions are required only on the data directory - **Linux** - `/var/lib/pgsql` (Permissions - user postgres, group postgres, drwx------) +!!! Note Note from Pure Storage + The recommendation for volumes from FlashArray is to mount a volume to the following locations with the correct permissions set before installing PostgreSQL: + **Microsoft Windows** — `C:\Program Files\PostgreSQL\` Permissions - PostgreSQL service account - READ permissions on all directories leading up to the service directory. WRITE permissions are required only on the data directory. 
+ **Linux** — `/var/lib/pgsql` (Permissions - user postgres, group postgres, drwx------) -**For EDB Postgres Advanced Server instances the volumes mount points are as follows:** - *Windows*- `C:\Program Files/edb/as-"version"` - *Linux*- `/var/lib/edb/"version"` \ No newline at end of file +For EDB Postgres Advanced Server instances, the volume mount points are as follows: + Windows — `C:\Program Files/edb/as-"version"` + Linux — `/var/lib/edb/"version"` \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/05-UsingPureStorageFlashArray.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/05-UsingPureStorageFlashArray.mdx index 5f1a56cd75e..fc7fc07c008 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/05-UsingPureStorageFlashArray.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/05-UsingPureStorageFlashArray.mdx @@ -3,19 +3,19 @@ title: 'Using' description: 'Walkthrough of example usage scenarios' --- -This section features some use cases that show how Pure Storage Flash Array can integrate with your Postgres Distribution. +These use cases show how Pure Storage FlashArray can integrate with your Postgres distribution. -## Use Pure Storage FlashArray with Postgres Distribution +## Use Pure Storage FlashArray with Postgres distribution -Pure Storage’s FlashArray provides enterprise grade availability with all-flash technology. This can be used with your Postgres Distribution for uses such as: -- Volume Snapshots -- Continuous Storage Replication -- Synchronous Replication, which can provide workload mobility and High Availability +Pure Storage’s FlashArray provides enterprise-grade availability with all-flash technology. 
You can use this with your Postgres distribution for uses such as: -- Volume Snapshots -- Continuous Storage Replication -- Synchronous Replication, which can provide workload mobility and High Availability +- Volume snapshots +- Continuous storage replication +- Synchronous replication, which can provide workload mobility and high availability What these all provide for your Postgres Distribution are: -- Database cloning for copies or replication topologies. -- Snapshots for data protection (B&R) with asynchronous replication. -- Continuous Replication for near-0 RTO disaster recovery. -- High availability or storage mobility (moving from one array to another non-disruptively) with ActiveCluster. +- Database cloning for copies or replication topologies +- Snapshots for data protection (B&R) with asynchronous replication +- Continuous replication for near-0 RTO disaster recovery +- High availability or storage mobility (moving from one array to another non-disruptively) with ActiveCluster -Pure Storage FlashArray can help your Postgres Distribution databases maintain a storage High-Availability state with its Synchronous Replication using the ActiveCluster replication solution. ActiveCluster replication is part of the Purity Solution, which is the software heart of FlashArray. Purity ActiveCluster protects your Postgres Distribution databases using an active-active implementation, meaning that it will distribute the workload across several nodes in a cluster to keep your data safe and available in case of a failure. +Pure Storage FlashArray can help your Postgres distribution databases maintain a storage high-availability state with its synchronous replication using the ActiveCluster replication solution. ActiveCluster replication is part of the Purity Solution, which is the software heart of FlashArray. Purity ActiveCluster protects your Postgres distribution databases using an active-active implementation, meaning that it distributes the workload across several nodes in a cluster to keep your data safe and available in case of a failure.
\ No newline at end of file diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/06-CertificationEnvironment.mdx index 6fb93d23c9e..35d91b40df7 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/06-CertificationEnvironment.mdx @@ -1,10 +1,10 @@ --- -title: 'Certification Environment' -description: 'Overview of the Certification Environment' +title: 'Certification environment' +description: 'Overview of the certification environment' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | May 2022 | +| **Certification test date** | May 2022 | | **EDB Postgres Advanced Server** | 14 | | **Pure Storage FlashArray** | //X, //XL, //C, Purity 6.2+ | \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/07-SupportandLoggingDetails.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/07-SupportandLoggingDetails.mdx index b73546204a7..bdae379075b 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/07-SupportandLoggingDetails.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/07-SupportandLoggingDetails.mdx @@ -1,23 +1,25 @@ --- -title: 'Support and Logging Details' +title: 'Support and logging details' description: 'Details of the support process and logging information' --- ## Support -Technical support for the use of these products is provided by both EDB and Pure Storage. A proper support contract is required to be in place at both EDB and Pure Storage. A support ticket can be opened on either side to start the process. If it is determined through the support ticket that resources from the other vendor is required, the customer should open a support ticket with that vendor through normal support channels. This will allow both companies to work together to help the customer as needed. 
+Technical support for the use of these products is provided by both EDB and Pure Storage. A support contract must be in place at both EDB and Pure Storage. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging -**EDB Postgres Advanced Server Logs** +The following logs are available. -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance and from here you can navigate to `log`, `current_logfiles` or you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs: `/var/lib/edb/as15/data/log`. +### EDB Postgres Advanced Server logs -**EDB Postgres Extended Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file, where you can customize logging options or enable `edb_audit` logs. An example of the full path to view EDB Postgres Advanced Server logs is `/var/lib/edb/as15/data/log`. -Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance and from here you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. An example of the full path to view EDB Postgres Extended logs: `/var/lib/edb-pge/15/data/log`. +### EDB Postgres Extended Server logs -**PostgreSQL Server Logs** +Navigate to the `Data` directory in your chosen EDB Postgres Extended Server instance. From there, you can navigate to `log`, or you can navigate to the `postgresql.conf` file where you can customize logging options. 
An example of the full path to view EDB Postgres Extended logs is `/var/lib/edb-pge/15/data/log`. + +### PostgreSQL Server logs The default log directories for PostgreSQL logs vary depending on the operating system: @@ -27,6 +29,6 @@ The default log directories for PostgreSQL logs vary depending on the operating - Windows: `C:\Program Files\PostgreSQL\9.3\data\pg_log` -**Pure Storage Logs** +### Pure Storage logs -For Pure Storage logging and support, please contact the Pure Storage Support team to assist you. \ No newline at end of file +For Pure Storage logging and support, contact the Pure Storage Support team to assist you. \ No newline at end of file diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx index d904070a3d3..acc7083b9d1 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx @@ -1,13 +1,12 @@ --- -title: 'Pure Storage FlashArray Implementation Guide' +title: 'Implementing Pure Storage FlashArray' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Pure Storage FlashArray

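The Pure Storage patch above quotes example log locations for each Postgres distribution (`/var/lib/edb/as15/data/log` for EDB Postgres Advanced Server, `/var/lib/edb-pge/15/data/log` for EDB Postgres Extended Server). A small sketch of how those path patterns generalize across versions — the helper itself is hypothetical, and actual locations depend on how the instance was initialized:

```python
# Hypothetical helper: builds the default server-log directory from the
# path patterns quoted in the logging sections above. Real deployments
# may relocate the data directory (e.g. onto a FlashArray volume).

def default_log_dir(distribution, version):
    patterns = {
        "epas": "/var/lib/edb/as{v}/data/log",   # EDB Postgres Advanced Server
        "pge": "/var/lib/edb-pge/{v}/data/log",  # EDB Postgres Extended Server
    }
    return patterns[distribution].format(v=version)

print(default_log_dir("epas", 15))  # /var/lib/edb/as15/data/log
print(default_log_dir("pge", 15))   # /var/lib/edb-pge/15/data/log
```

If logging has been customized in `postgresql.conf`, the configured `log_directory` takes precedence over these defaults.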
From cf9f99b7d2cce12720736f0226802baf75453005 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 25 Jul 2023 14:01:45 -0400 Subject: [PATCH 13/65] Update index.mdx --- advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx index cb012f4d309..b12e32c91ef 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx @@ -5,7 +5,7 @@ directoryDefaults: iconName: handshake --- -[Partner Program Logo](Images/PartnerProgram.png) +![Partner Program Logo](Images/PartnerProgram.png)

EDB GlobalConnect Technology Partner Implementation Guide

Precisely Connect CDC

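The one-character fix in the commit above turns a Markdown link into an image: `![alt](path)` renders the file, while `[alt](path)` only links to it. A throwaway check — illustrative only, not part of any tooling used in this repository — makes the difference concrete:

```python
import re

# A Markdown image is link syntax preceded by "!".
IMAGE = re.compile(r"!\[[^\]]*\]\([^)]*\)")

before = "[Partner Program Logo](Images/PartnerProgram.png)"   # link only
after = "![Partner Program Logo](Images/PartnerProgram.png)"   # renders image

print(bool(IMAGE.search(before)))  # False
print(bool(IMAGE.search(after)))   # True
```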
From 031da170e6f674de9d4d78ef595ac8529097136f Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 25 Jul 2023 14:02:28 -0400 Subject: [PATCH 14/65] Update 07-Support.mdx --- advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx index 764aa871b4f..6c26032ba7e 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/07-Support.mdx @@ -1,6 +1,6 @@ --- title: 'Support and logging details' -description: 'Details of the Support process and logging information' +description: 'Details of the support process and logging information' --- ## Support @@ -13,7 +13,7 @@ The following log files are available. ### EDB Postgres Advanced Server logs -Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file where you can customize logging options or enable `edb_audit` logs. +Navigate to the `Data` directory in your chosen EDB Postgres Advanced Server instance. From there, you can navigate to `log` or `current_logfiles`. Or, you can navigate to the `postgresql.conf` file, where you can customize logging options or enable `edb_audit` logs. 
### PostgreSQL logs From 025d4837d9e87b5f4950eba72037008aeb4ff99b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 27 Jul 2023 11:24:54 -0400 Subject: [PATCH 15/65] Edits to Quest Toad Edge partner doc --- .../QuestToadEdge/02-PartnerInformation.mdx | 10 ++-- .../QuestToadEdge/03-SolutionSummary.mdx | 8 +-- .../04-ConfiguringQuestToadEdge.mdx | 57 +++++++++---------- .../QuestToadEdge/05-UsingQuestToadEdge.mdx | 51 +++++++---------- .../06-CertificationEnvironment.mdx | 8 +-- .../partner_docs/QuestToadEdge/index.mdx | 7 +-- 6 files changed, 61 insertions(+), 80 deletions(-) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/QuestToadEdge/02-PartnerInformation.mdx index b704f3bd317..273342c48a9 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details for Quest Toad Edge' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Quest | -| **Partner Product** | Toad Edge | -| **Web Site** | https://www.quest.com/products/toad-edge/ | +| **Partner name** | Quest | +| **Partner product** | Toad Edge | +| **Website** | https://www.quest.com/products/toad-edge/ | | **Version** | Toad Edge 2.4.1 | -| **Product Description** | Quest Toad Edge is a lightweight, desktop toolset that simplifies the development and management of open source relational databases. Quest Toad Edge makes it easy for you to ramp-up quickly, with support for coding, editing, schema compare and sync, and DevOps CI processes. | +| **Product description** | Quest Toad Edge is a lightweight desktop toolset that simplifies developing and managing open-source relational databases. 
Quest Toad Edge makes it easy for you to ramp up quickly, with support for coding, editing, schema compare and sync, and DevOps CI processes. | diff --git a/advocacy_docs/partner_docs/QuestToadEdge/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/QuestToadEdge/03-SolutionSummary.mdx index ca43f1f92af..ed76f85dc91 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/03-SolutionSummary.mdx @@ -1,10 +1,8 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- -Quest Toad Edge is a lightweight and reliable desktop database toolset that streamlines development and management tasks for EDB Postgres Advanced Server and EDB Postgres Extended Server. Its flexibility lies in it being built on Java and its ability to work with both Windows and Mac operating systems. Toad Edge supports coding, editing, schema compare and sync and DevOps CI processes, so you can manage EDB Postgres Advanced Server and EDB Postgres Extended Server. +Quest Toad Edge is a lightweight and reliable desktop database toolset that streamlines development and management tasks for EDB Postgres Advanced Server and EDB Postgres Extended Server. Its flexibility lies in being built on Java and its ability to work with both Windows and Mac operating systems. Toad Edge supports coding, editing, schema compare and sync, and DevOps CI processes, so you can manage EDB Postgres Advanced Server and EDB Postgres Extended Server. -

- -

+![Quest Toad Edge Solution Summary](Images/ToadEdgeUpdatedSolutionSummary.png) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx index aae4ea06e63..09f6c5ac4db 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx @@ -1,47 +1,44 @@ --- title: 'Configuring Quest Toad Edge' -description: 'Walkthrough on configuring Quest Toad Edge' +description: 'Walkthrough of configuring Quest Toad Edge' --- Implementing Quest Toad Edge with EDB Postgres Advanced Server or EDB Postgres Extended Server requires the following components: -- A running EDB Postgres Advanced Server or EDB Postgres Extended Server instance. -- Quest Toad Edge application installed on a system where you will manage the running database. (EDB JDBC Driver included) +- A running EDB Postgres Advanced Server or EDB Postgres Extended Server instance +- Quest Toad Edge application installed on a system where you'll manage the running database (EDB JDBC Driver included) -The following steps show how to configure Quest Toad Edge for EDB Postgres Advanced Server and EDB Postgres Extended Server: +## EDB Postgres Advanced Server configuration -## EDB Postgres Advanced Server Configuration +To configure Quest Toad Edge for EDB Postgres Advanced Server and EDB Postgres Extended Server: 1. Launch the Toad Edge application. -2. On the main menu, go to the `Connect` dropdown and select `New Connection`. -3. In the `New Connection` screen, enter the following values: `Hostname`,`Port`,`Database`,`Username`,`Password`. -4. Select `EDB Postgres Advanced Server` on the left pane. -5. Click `Test Connection` to verify connectivity to the database server. -6. If the connection is successful, click `Connect`. Otherwise verify your database information and try again. +2. 
On the main menu, select **Connect > New Connection**. +3. In the New Connection screen, enter the following values: **Hostname**, **Port**, **Database**, **Username**, **Password**. +4. On the left pane, select **EDB Postgres Advanced Server**. +5. To verify connectivity to the database server, select **Test Connection**. +6. If the connection is successful, select **Connect**. Otherwise, verify your database information and try again.

- -

+![EDB Postgres Advanced Server Configuration](Images/EPASConfiguration.png) -### Update the Driver, If Required -If it is required, you can update the EDB JDBC driver by downloading the latest version from EDB and then replacing the default EDB JDBC jar. -1. Download the new Driver from EDB. -2. Go to the drivers directory for your Toad Edge installation. For example: `C:\Program Files\Quest Software\Toad Edge\lib\drivers`. -3. Rename or backup the existing EDB JDBC Driver so that you have a copy to restore if needed. -4. Copy the new EDB JDBC Driver to the `drivers` folder. -5. Launch Toad Edge and create a new connection for EDB Postgres Advanced Server as shown above in the [EDB Postgres Advanced Server Configuration](#edb-postgres-advanced-server-configuration) section. +### Update the driver, if required +If needed, you can update the EDB JDBC driver by downloading the latest version from EDB and then replacing the default EDB JDBC jar. +1. Download the new driver from EDB. +2. Go to the `drivers` directory for your Toad Edge installation, for example, `C:\Program Files\Quest Software\Toad Edge\lib\drivers`. +3. Rename or back up the existing EDB JDBC driver so that you have a copy to restore if needed. +4. Copy the new EDB JDBC driver to the `drivers` folder. +5. Launch Toad Edge, and create a new connection for EDB Postgres Advanced Server as shown in [EDB Postgres Advanced Server configuration](#edb-postgres-advanced-server-configuration). -## EDB Postgres Extended Server Configuration + +## EDB Postgres Extended Server configuration 1. Launch the Toad Edge application. -2. On the main menu, go to the `Connect` dropdown and select `New Connection`. -3. In the `New Connection` screen, enter the following values: `Hostname`,`Port`,`Database`,`Username`,`Password`. -4. Select `PostgreSQL` on the left pane. -5. Click `Test Connection` to verify connectivity to the database server. -6. If the connection is successful, click `Connect`. 
Otherwise verify your database information and try again. - -

- -

+2. On the main menu, select **Connect > New Connection**. +3. In the New Connection screen, enter the following values: **Hostname**, **Port**, **Database**, **Username**, **Password**. +4. On the left pane, select **PostgreSQL**. +5. To verify connectivity to the database server, select **Test Connection**. +6. If the connection is successful, select **Connect**. Otherwise, verify your database information and try again. + +![EDB Postgres Extended Server Configuration](Images/ExtendedConfiguration.png) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx b/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx index 60f653f093e..0b5fb37aa69 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx @@ -3,55 +3,42 @@ title: 'Using Quest Toad Edge' description: 'Walkthroughs of multiple Quest Toad Edge usage scenarios' --- -Once an instance of EDB Postgres Advanced Server or EDB Postgres Extended is connected to Quest Toad Edge, you can access the capabilities of Toad Edge. +After you connect an instance of EDB Postgres Advanced Server or EDB Postgres Extended to Quest Toad Edge, you can access the capabilities of Toad Edge. -# Sample User Scenarios +# Sample user scenarios !!! Note - This section is provided as an example of Quest Toad Edge and EDB Postgres Advanced Server or EDB Postgres Extended Server. It is not intended to show all functionality. + These examples show some sample uses of Quest Toad Edge with EDB Postgres Advanced Server or EDB Postgres Extended Server. They aren't intended to show all functionality. - - -## Connecting to the Database +## Connecting to the database 1. Launch Quest Toad Edge. -2. Right click the database connection you want to open and select `Connect`. You can also select the `Connect` button and select your database. +2. Right-click the database connection you want to open, and select **Connect**.
Alternatively, select **Connect**, and then select your database. + + ![Connect Button](Images/ConnectButton.png) -

- -

+3. Enter the username and password, and select **OK**. -3. Enter the username and password and click `OK`. -4. You are now connected to your database instance. + You're now connected to your database instance. -## Viewing Tables and Data +## Viewing tables and data Using the appropriate SQL code, three tables (`AGENTS`, `CUSTOMERS`, and `ORDERS`) were created in the `edb` database and data was inserted to help visualize some basic functionality. -
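The guide doesn't reproduce the SQL it used to build these tables, so the following is only a hedged sketch of DDL and data that would produce a comparable setup — all column names and types here are illustrative assumptions, not the actual statements used:

```sql
-- Illustrative sketch only: the guide doesn't show its actual DDL,
-- so these column definitions are assumed for demonstration.
CREATE TABLE agents (
    agent_id   serial PRIMARY KEY,
    agent_name text NOT NULL
);

CREATE TABLE customers (
    customer_id   serial PRIMARY KEY,
    customer_name text NOT NULL,
    agent_id      integer REFERENCES agents (agent_id)
);

CREATE TABLE orders (
    order_id    serial PRIMARY KEY,
    customer_id integer REFERENCES customers (customer_id),
    order_date  date DEFAULT current_date
);

-- A sample row so the Data tab has something to display
INSERT INTO agents (agent_name) VALUES ('Sample Agent');
```

Statements like these can be run from the Toad Edge SQL Editor against the connected `edb` database.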

- -

+![Basic Functionality](Images/BasicFunctionality.png) -1. When you connect to the database, a SQL Editor area is open for your database. In this example that is `edb`. If it isn't open, you can click on the `SQL Editor` button at the top to open a new one. +1. When you connect to the database, a SQL Editor area is open for your database. In this example, it's `edb`. If it isn't open, you can select **SQL Editor** at the top to open a new one. -

- -

+ ![Open SQL Editor](Images/OpenSQLEditor.png) -2. Once the tables have been created, double click on the table name to look at the table details. In this case we selected the `Customers` table. +2. After the tables are created, double-click the table name to look at the table details. The figure shows the `Customers` table. -

- -

+ ![Customers Table](Images/CustomersTable.png) -3. Click on the `Data` tab to view the data in the table. +3. To view the data in the table, select the **Data** tab. -

- -

+ ![Data Tab](Images/DataTab.png) -4. Click on the `Script` tab to see the SQL code for the table. +4. To see the SQL code for the table, select the **Script** tab. -

- -

+ ![Script Tab](Images/ScriptTab.png) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/QuestToadEdge/06-CertificationEnvironment.mdx index c7ab5d81cfa..0e431fb1f47 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/06-CertificationEnvironment.mdx @@ -1,14 +1,14 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment used in the certification of Quest Toad Edge' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | May 1, 2022 | +| **Certification test date** | May 1, 2022 | | **EDB Postgres Advanced Server** | 11, 12, 13, 14 | | **EDB Postgres Extended** | 13 | | **Quest Toad Edge** | 2.4.1 | -| **EDB JDBC Included Driver** | 42.2.5.2 | -| **EDB JDBC Updated Driver** | 42.3.3.1 | +| **EDB JDBC included driver** | 42.2.5.2 | +| **EDB JDBC update driver** | 42.3.3.1 | diff --git a/advocacy_docs/partner_docs/QuestToadEdge/index.mdx b/advocacy_docs/partner_docs/QuestToadEdge/index.mdx index 4c5c8537eed..46d610c43f3 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/index.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/index.mdx @@ -1,12 +1,11 @@ --- -title: 'Quest Toad Edge Implementation Guide' +title: 'Implementing Quest Toad Edge' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+![Partner Program Logo](Images/EDBPartnerProgram.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Quest Toad Edge

From b6b2bc02e6f18acd74fd2e84a7d9e12eab88304d Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 27 Jul 2023 15:18:17 -0400 Subject: [PATCH 16/65] edits to sib visions partner doc --- .../02-PartnerInformation.mdx | 10 +- .../SIBVisionsVisionX/03-SolutionSummary.mdx | 15 +- .../04-ConfiguringSIBVisionsVisionX.mdx | 4 +- .../05-UsingSIBVisionsVisionX.mdx | 226 +++++++++--------- .../06-CertificationEnvironment.mdx | 6 +- .../partner_docs/SIBVisionsVisionX/index.mdx | 7 +- 6 files changed, 131 insertions(+), 137 deletions(-) diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/02-PartnerInformation.mdx index f93b30273d7..2f238a1e9fc 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details for SIB Visions VisionX' --- |   |   | | ----------- | ----------- | -| **Partner Name** | SIB Visions | -| **Partner Product** | VisionX | -| **Web Site** | https://visionx.sibvisions.com/ | +| **Partner name** | SIB Visions | +| **Partner product** | VisionX | +| **Website** | https://visionx.sibvisions.com/ | | **Version** | VisionX 5.6.925 | -| **Product Description** | SIB Visions VisionX is a low-code platform for creating web and mobile applications. SIB Visions VisionX and its Oracle Forms Migration Extension are an efficient solution for semi-automated migration of Oracle Forms and Oracle APEX into modern web and mobile applications. SIB Visions VisionX, together with an EDB Postgres Advanced Server migration, can help provide an exit strategy from Oracle to Postgres. | +| **Product description** | SIB Visions VisionX is a low-code platform for creating web and mobile applications. 
SIB Visions VisionX and its Oracle Forms Migration Extension are an efficient solution for semi-automated migration of Oracle Forms and Oracle APEX into modern web and mobile applications. SIB Visions VisionX, together with an EDB Postgres Advanced Server migration, can help provide an exit strategy from Oracle to Postgres. | diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/03-SolutionSummary.mdx index c82564ec433..2a99907a904 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/03-SolutionSummary.mdx @@ -1,17 +1,16 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- -SIB Visions VisionX is a flexible and independent low-code platform, enabling both business users and professional developers to visually develop web, desktop and native mobile applications quickly. These can be very simple applications that replace paper processes or Excel sheets, easy-to-use forms on ERP systems, dashboards, mobile apps, and even highly complex billing applications, customer portals or trading systems. +SIB Visions VisionX is a flexible and independent low-code platform, enabling both business users and professional developers to visually develop web, desktop, and native mobile applications quickly. These can be very simple applications that replace paper processes or Excel sheets, easy-to-use forms on ERP systems, dashboards, mobile apps, and even highly complex billing applications, customer portals, or trading systems. -The SIB Visions VisionX low-code platform supports major databases, such as Postgres, EDB Postgres Advanced Server and EDB Postgres Extended Server. SIB Visions VisionX is automatically bundled with a Postgres database and is a low-code platform for Postgres. 
+The SIB Visions VisionX low-code platform supports major databases, such as Postgres, EDB Postgres Advanced Server, and EDB Postgres Extended Server. SIB Visions VisionX is bundled with a Postgres database and is a low-code platform for Postgres. SIB Visions VisionX and its Oracle Forms Migration Extension are an efficient solution for semi-automated migration of Oracle Forms and Oracle APEX into modern web and mobile applications. Together with an Oracle database to an EDB Postgres Advanced Server migration, it provides a fast, smooth, and reliable Oracle exit strategy. -With SIB Visions VisionX, you are vendor independent and can create web application, no matter how complex. You own the generated Java code, which only uses open source libraries. The created applications can be modified independently of SIB Visions VisionX in any Java IDE and run independently of SIB Visions VisionX in any cloud or on-premise environment. +With SIB Visions VisionX, you're vendor independent and can create a web application, no matter how complex. You own the generated Java code, which uses only open source libraries. The created applications can be modified independently of SIB Visions VisionX in any Java IDE and run independently of SIB Visions VisionX in any cloud or on-premises environment. -When you change code outside of SIB Visions VisionX in Eclipse, the changes are pushed back into the SIB Visions VisionX visual development environment in real time. This two-way synchronization enables unlimited app development for Citizen Developers, Business Users, and Pro Developers. +When you change code outside of SIB Visions VisionX in Eclipse, the changes are pushed back into the SIB Visions VisionX visual development environment in real time. This two-way synchronization enables unlimited app development for citizen developers, business users, and pro developers. -

- -

+ + ![System Architecture](Images/system-architecture-application.png) \ No newline at end of file diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx index 24b87994616..9959fef5c39 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx @@ -1,6 +1,6 @@ --- title: 'Configuring SIB Visions VisionX' -description: 'Walkthrough on configuring SIB Visions VisionX' +description: 'Walkthrough of configuring SIB Visions VisionX' --- Implementing SIB Visions VisionX with EDB Postgres Advanced Server or EDB Postgres Extended Server requires the following components: @@ -9,7 +9,7 @@ Implementing SIB Visions VisionX with EDB Postgr - SIB Visions VisionX low-code platform installed - Eclipse with SIB Visions VisionX ePlug installed -The following will also be required if migrating an Oracle Forms or APEX application: +The following are also required if migrating an Oracle Forms or APEX application: - A running Oracle database instance - An Oracle Forms or Oracle APEX application diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx index c099500a63b..2bbbf3f8c42 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx @@ -3,215 +3,211 @@ title: 'Using SIB Visions VisionX' description: 'Walkthroughs of multiple SIB Visions VisionX usage scenarios' --- -The examples below will walk you though some of the common usage scenarios that can be used with EDB Postgres Advanced Server or EDB Postgres Extended Server.
These are examples to help get you started and show how the products can work together. +These examples walk you through some of the common usage scenarios that can be used with EDB Postgres Advanced Server or EDB Postgres Extended Server. These are examples to help get you started and show how the products can work together. -1. FMB-based migration of an Oracle Forms application to an EDB Postgres Advanced Server / SIB Visions VisionX Low Code application - - Part 1: FMB based application migration +1. FMB-based migration of an Oracle Forms application to an EDB Postgres Advanced Server/SIB Visions VisionX Low Code application - Part 1: FMB-based application migration - Part 2: Oracle database to EDB Postgres Advanced Server migration - Part 3: Switch database connection to EDB Postgres Advanced Server -2. Data model-based migration of an Oracle Forms / APEX application to an EDB Postgres / SIB Visions VisionX Low Code application - Part 1: Data model based application migration +2. Data model-based migration of an Oracle Forms/APEX application to an EDB Postgres/SIB Visions VisionX Low Code application - Part 1: Data-model-based application migration - Part 2: Oracle database to EDB Postgres Advanced Server migration - Part 3: Switch database connection to EDB Postgres Advanced Server 3.
Create a SIB Visions VisionX application that uses an EDB Postgres Advanced Server or EDB Postgres Extended Server instance as a data source - Create a screen that connects to an existing table from an EDB Postgres Advanced Server or EDB Postgres Extended Server instance - Create a screen that creates a table in the connected EDB Postgres Advanced Server or EDB Postgres Extended Server instance -## FMB-based migration of an Oracle Forms application to an EDB Postgres Advanced Server / SIB Visions VisionX Low Code application +## FMB-based migration of an Oracle Forms application to an EDB Postgres Advanced Server/SIB Visions VisionX low-code application -This example shows how an Oracle Forms application is migrated to a SIB Visions VisionX low-code application using an FMB file (Oracle Forms source file). The first step is to migrate the application. In the second step, the Oracle database is migrated to EDB Postgres Advanced Server or EDB Postgres Extended Server using the usual tools such as EDB's Migration Toolkit. Then the migrated application is connected to EDB Postgres Advanced Server or EDB Postgres Extended Server. +This example shows how an Oracle Forms application is migrated to a SIB Visions VisionX low-code application using an FMB file (Oracle Forms source file). The first step is to migrate the application. In the second step, the Oracle database is migrated to EDB Postgres Advanced Server or EDB Postgres Extended Server using the usual tools, such as EDB's Migration Toolkit. Then the migrated application is connected to EDB Postgres Advanced Server or EDB Postgres Extended Server. -### FMB based application migration +### FMB-based application migration -1. Open SIB Visions VisionX and create an application. The `Create a new Application` screen will then be opened. +1. Open SIB Visions VisionX and create an application. The Create a New Application screen opens. 
- ![Create a new Application](Images/FMBMigration1.png) + ![Create a New Application](Images/FMBMigration1.png) -2. At the `Choose Database` step, select `New Database User`. The default settings create a new database user in the Postgres database bundled with SIB Visions VisionX. Alternatively, Postgres, EDB Postgres Advanced Server, or EDB Postgres Extended Server can be used here. The database user is used to create the tables for the migrated application’s standard SIB Visions VisionX user management. This is used instead of the Oracle Forms user management. Optionally, other authentication systems such as Windows AD or Open ID system can easily be integrated. +2. At the Choose Database step, select **New Database User**. The default settings create a database user in the Postgres database bundled with SIB Visions VisionX. Alternatively, you can use Postgres, EDB Postgres Advanced Server, or EDB Postgres Extended Server here. The database user is used to create the tables for the migrated application’s standard SIB Visions VisionX user management instead of the Oracle Forms user management. Optionally, other authentication systems, such as Windows AD or Open ID system, can easily be integrated. - ![Create a new Application](Images/FMBMigration2.png) + ![Create a New Application](Images/FMBMigration2.png) - ![Create a new Application](Images/FMBMigration3.png) + ![Create a New Application](Images/FMBMigration3.png) -3. At the `Create Admin User` step, provide the user name and password and click `Finish`. +3. At the Create Admin User step, provide the user name and password and select **Finish**. - ![Create a new Application](Images/FMBMigration4.png) + ![Create a New Application](Images/FMBMigration4.png) -4. A `Task List` screen will display the Application creation progress. +4. A Task List screen shows the application creation progress. - ![Create a new Application](Images/FMBMigration5.png) + ![Create a New Application](Images/FMBMigration5.png) -5. 
Once the application is created, the `New Application Screen` is opened, which allows the user to create a screen. +5. Once the application is created, the New Application Screen screen opens, which allows you to create a screen. - ![Create a new Application](Images/FMBMigration6.png) + ![Create a New Application](Images/FMBMigration6.png) -6. At the `Creation Mode` step, choose `Import an Oracle Forms (.fmb) module`. +6. At the Creation Mode step, select **Import an Oracle Forms (.fmb) module**. - ![Create a new Application](Images/FMBMigration7.png) + ![Create a New Application](Images/FMBMigration7.png) -7. At the next step (`Choose Database`), select `Existing Database User` and provide the credentials to the Oracle Database used by Oracle Forms. +7. At the next step (Choose Database), select **Existing Database User** and provide the credentials to the Oracle Database used by Oracle Forms. - ![Create a new Application](Images/FMBMigration8.png) + ![Create a New Application](Images/FMBMigration8.png) -8. At the `Choose Module` step, select the FMB file. +8. At the Choose Module step, select the FMB file. - ![Create a new Application](Images/FMBMigration9.png) + ![Create a New Application](Images/FMBMigration9.png) - In this example, SIB Visions VisionX analyzes the orders.fmb file and shows all included windows/canvases for the selection. + In this example, SIB Visions VisionX analyzes the `orders.fmb` file and shows all included windows/canvases for the selection. - The following screen shot shows the orders.fmb file in Oracle Forms Builder. + The following screenshot shows the `orders.fmb` file in Oracle Forms Builder. - ![Create a new Application](Images/FMBMigration10.png) + ![Create a New Application](Images/FMBMigration10.png) -9. After the .fmb files is selected, a `Task List` screen displays the Oracle Forms migration progress. +9. After the `.fmb` file is selected, a Task List screen displays the Oracle Forms migration progress.
- ![Create a new Application](Images/FMBMigration11.png) + ![Create a New Application](Images/FMBMigration11.png) - After the migrated screen is created, it is displayed in Design Mode in SIB Visions VisionX, where the user can adjust it according to the requirements. + After the migrated screen is created, it's displayed in design mode in SIB Visions VisionX, where you can adjust it according to the requirements. - ![Create a new Application](Images/FMBMigration12.png) + ![Create a New Application](Images/FMBMigration12.png) - The SIB Visions VisionX Oracle Forms Migration Extension can migrate FMBs semi-automatically. All user interface elements (Windows/Canvas/Items) and the Oracle Forms Persistence (Blocks) can be migrated automatically. In a manual step, the Oracle Forms application logic (PL/SQL) is compiled into the database. The remaining UI Logic is then created in SIB Visions VisionX visually or manually with Java Code. + The SIB Visions VisionX Oracle Forms Migration Extension can migrate FMBs semi-automatically. All user interface elements (Windows/Canvas/Items) and the Oracle Forms Persistence (Blocks) can be migrated automatically. In a manual step, the Oracle Forms application logic (PL/SQL) is compiled into the database. The remaining UI logic is then created in SIB Visions VisionX visually or manually with Java Code. #### Add logic visually with actions in SIB Visions VisionX -![Create a new Application](Images/FMBMigration13.png) +![Create a New Application](Images/FMBMigration13.png) #### Modify or add any Java code using Eclipse -1. Import the application in Eclipse by clicking `shift+alt` in the SIB Visions VisionX main view and clicking the icon next to the application. +1. Import the application in Eclipse by pressing **shift+alt** in the SIB Visions VisionX main view and selecting the icon next to the application. - ![Create a new Application](Images/FMBMigration14.png) + ![Create a New Application](Images/FMBMigration14.png) -2. 
Select an UI element in SIB Visions VisionX and click the purple Eclipse button. +2. Select a UI element in SIB Visions VisionX and select the purple Eclipse button. - ![Create a new Application](Images/FMBMigration15.png) + ![Create a New Application](Images/FMBMigration15.png) -3. Eclipse navigates to the relevant Java Code. + Eclipse navigates to the relevant Java code. - ![Create a new Application](Images/FMBMigration16.png) + ![Create a New Application](Images/FMBMigration16.png) -4. Change the `Order Information` label in Eclipse. +3. Change the **Order Information** label in Eclipse. - ![Create a new Application](Images/FMBMigration17.png) + ![Create a New Application](Images/FMBMigration17.png) - ![Create a new Application](Images/FMBMigration18.png) + ![Create a New Application](Images/FMBMigration18.png) - If you change the code outside of SIB Visions VisionX in Eclipse, the changes are pushed back into the VisionX visual development environment in real time. This enables unlimited app development for Citizen Developers / Business Users and Pro Developers. + If you change the code outside of SIB Visions VisionX in Eclipse, the changes are pushed back into the VisionX visual development environment in real time. This capability enables unlimited app development for Citizen Developers/Business Users and Pro Developers. -5. Open the screen in the SIB Visions VisionX Live Preview and change some of the data. The data will be updated in the Oracle Database. +4. Open the screen in the SIB Visions VisionX Live Preview and change some of the data. The data is updated in the Oracle database. - ![Create a new Application](Images/FMBMigration19.png) + ![Create a New Application](Images/FMBMigration19.png) ### Oracle database to EDB Postgres Advanced Server migration In the second step, the Oracle database is migrated to EDB Postgres Advanced Server/EDB Postgres Extended Server using EDB's Migration Toolkit.
-The migration steps in connection with an Oracle Forms migration are not different from a pure Oracle to EDB database migration. Therefore, we do not discuss this topic in detail here. Here is the link to Migration Toolkit: - -Refer to the [Migration Toolkit documentation](https://www.enterprisedb.com/docs/migration_toolkit/latest/) +The migration steps in connection with an Oracle Forms migration aren't different from a pure Oracle-to-EDB database migration. Therefore, this topic isn't discussed in detail here. Refer to the [Migration Toolkit documentation](/migration_toolkit/latest/). ### Switch database connection to EDB Postgres Advanced Server -Switching from the Oracle database is a simple step in SIB Visions VisionX. In the migrated SIB Visions VisionX application, simply click on the SIB Visions VisionX Settings screen and choose the `Datasources` tab. Click `Edit` to change the database connection from Oracle to PostgresSQL. +Switching from the Oracle database is a simple step in SIB Visions VisionX. In the migrated SIB Visions VisionX application, select the SIB Visions VisionX Settings screen and select the **Datasources** tab. Select **Edit** to change the database connection from Oracle to PostgreSQL. -![Create a new Application](Images/FMBMigration20.png) +![Create a New Application](Images/FMBMigration20.png) -## Data model-based migration of an Oracle Forms / APEX application to an EDB Postgres Advanced Server / SIB Visions VisionX Low Code application +## Data-model-based migration of an Oracle Forms/APEX application to an EDB Postgres Advanced Server/SIB Visions VisionX low-code application -This example shows how an Oracle Forms / APEX application is migrated to a SIB Visions VisionX low-code application using the existing data model. The first step is to migrate the application. In the second step, the Oracle database is migrated to EDB Postgres Advanced Server/EDB Postgres Extended using the usual tools such as EDBs Migration Toolkit.
+This example shows how an Oracle Forms/APEX application is migrated to a SIB Visions VisionX low-code application using the existing data model. The first step is to migrate the application. In the second step, the Oracle database is migrated to EDB Postgres Advanced Server/EDB Postgres Extended using the usual tools, such as EDB's Migration Toolkit. Then the migrated application is connected to EDB Postgres Advanced Server/EDB Postgres Extended Server. ### Data model based application migration -Follow Steps 1 to 5 under section FMB-based application migration. and then go to step 1. +First follow Steps 1 to 5 under [FMB-based application migration](#fmb-based-application-migration) before proceeding to step 1. -1. At the `Screen Infos` step, provide the required information for the screen. +1. At the Screen Infos step, provide the required information for the screen. - ![Create a new Application](Images/FMBMigration21.png) + ![Create a New Application](Images/FMBMigration21.png) -2. At the `Choose Layout` step, select the required layout for the screen. +2. At the Choose Layout step, select the required layout for the screen. - ![Create a new Application](Images/FMBMigration22.png) + ![Create a New Application](Images/FMBMigration22.png) -3. At the next step (`Select Data Source`), select the `Use existing data from database tables` option, which allows the user to select the tables used in the original Oracle Forms / APEX screen. The screen will be migrated based on these tables. +3. At the next step (Select Data Source), select the **Use existing data from database tables** option, which allows you to select the tables used in the original Oracle Forms/APEX screen. The screen is migrated based on these tables. - ![Create a new Application](Images/FMBMigration23.png) + ![Create a New Application](Images/FMBMigration23.png) -4.
At the `Choose Database` step, provide the connection information for the Oracle database that is used in the Oracle Forms / APEX application. +4. At the Choose Database step, provide the connection information for the Oracle database that's used in the Oracle Forms/APEX application. - ![Create a new Application](Images/FMBMigration24.png) + ![Create a New Application](Images/FMBMigration24.png) -5. On the next screen, select the tables used in the original Oracle Forms / APEX screen. The screen will be migrated based on these tables. +5. On the next screen, select the tables used in the original Oracle Forms/APEX screen. The screen is migrated based on these tables. - Much of the typically manually created logic in Oracle Forms and APEX is automatically recognized in SIB Visions VisionX. All drop-down lists are recognized based on the Foreign Keys to the master data. If these are missing, they can be defined manually in SIB Visions VisionX. Furthermore, all relevant detail tables are offered for each master table. This makes it very easy to define the master detail relations in the screen. We have selected the appropriate layout of the screen in the previous step. This layout can be changed in the in the UI Designer later. + Much of the typically manually created logic in Oracle Forms and APEX is automatically recognized in SIB Visions VisionX. All drop-down lists are recognized based on the foreign keys to the master data. If these are missing, they can be defined manually in SIB Visions VisionX. Furthermore, all relevant detail tables are offered for each master table. This makes it very easy to define the master detail relations in the screen. The appropriate layout of the screen was selected in the previous step. You can change this layout in the UI Designer later. 
- ![Create a new Application](Images/FMBMigration25.png) + ![Create a New Application](Images/FMBMigration25.png) - We choose `s_Ord` in this example because it is the master table of the Oracle Forms screen. + `s_Ord` was chosen in this example because it's the master table of the Oracle Forms screen. - ![Create a new Application](Images/FMBMigration26.png) + ![Create a New Application](Images/FMBMigration26.png) - The dropdown lists to `S_Customer`, `A_Payment Type`, and `S_Emp` for the sales rep of the order are detected automatically. + The dropdown lists for `S_Customer`, `A_Payment Type`, and `S_Emp` for the sales rep of the order are detected automatically. - In the next step, we click on `More` and add the Master/Detail relationship to the `S_Item` table. + In the next step, select **More** and add the Master/Detail relationship to the `S_Item` table. - ![Create a new Application](Images/FMBMigration27.png) + ![Create a New Application](Images/FMBMigration27.png) -6. Once the screen is created, it will be displayed in Design Mode where the user can adjust it according to the requirements. +6. Once the screen is created, it's displayed in design mode where you can adjust it according to the requirements. - ![Create a new Application](Images/FMBMigration28.png) + ![Create a New Application](Images/FMBMigration28.png) - To get the image for each product as in the original Oracle Form, we add the product table in the lower area using the `New Table` tab. + To get the image for each product as in the original Oracle Form, add the product table in the lower area using the **New Table** tab. - Here we use the same steps as for the selection of the `S_Ord` table, except that we select the `S_Product` table instead. We choose `S_Product` because it is a detail to the `S_Item` table. + Here, use the same steps as for the selection of the `S_Ord` table, except that you select the `S_Product` table instead. 
`S_Product` was chosen because it's a detail to the `S_Item` table. - ![Create a new Application](Images/FMBMigration29.png) + ![Create a New Application](Images/FMBMigration29.png) - Repeat the same for the `S_Image` table, because `S_Image` is a detail to the `S_Product` table. + Repeat the same for the `S_Image` table because `S_Image` is a detail to the `S_Product` table. - ![Create a new Application](Images/FMBMigration30.png) + ![Create a New Application](Images/FMBMigration30.png) - We then define the `Image` column as `Image` datatype by clicking on the datatype dropdown, so we can later position it on the screen as an image. + Then define the `Image` column as `Image` datatype by selecting the datatype dropdown. You can later position it on the screen as an image. - ![Create a new Application](Images/FMBMigration31.png) + ![Create a New Application](Images/FMBMigration31.png) - Now we connect the `S_Product` table as a detail to the `S_Item` table. To do this, we select the `S_Product` table at the bottom of the editor and click on the `Edit` icon. Then we click on `More` and select the `S_Item` as Master Table. + Next, connect the `S_Product` table as a detail to the `S_Item` table. To do this, select the `S_Product` table at the bottom of the editor and select the **Edit** icon. Then select **More** and select the `S_Item` as **Master Table**. - ![Create a new Application](Images/FMBMigration32.png) + ![Create a New Application](Images/FMBMigration32.png) - Repeat the same for the `S_Image` table, because `S_Image` is a detail to the `S_Product`. + Repeat the same for the `S_Image` table because `S_Image` is a detail to the `S_Product`. - ![Create a new Application](Images/FMBMigration33.png) + ![Create a New Application](Images/FMBMigration33.png) - Select the screen’s detail area by clicking `Details` on the orange bar at the top of the `Orders` screen. 
Then select the `S_Image` table in the lower area and position the image editor on the top right corner of the screen using drag & drop. + Select the screen’s detail area by selecting **Details** on the orange bar at the top of the Orders screen. Then select the `S_Image` table in the lower area and position the image editor on the top-right corner of the screen by dragging it. - ![Create a new Application](Images/FMBMigration34.png) + ![Create a New Application](Images/FMBMigration34.png) -7. Open the screen in the Live Preview. With a few steps we created the Oracle Forms screen in SIB Visions VisionX based on the current data model. We even created an additional list view with a search function. +7. Open the screen in the Live Preview. Just a few steps were needed to create the Oracle Forms screen in SIB Visions VisionX based on the current data model. An additional list view with a search function was also created. - ![Create a new Application](Images/FMBMigration35.png) + ![Create a New Application](Images/FMBMigration35.png) - ![Create a new Application](Images/FMBMigration36.png) + ![Create a New Application](Images/FMBMigration36.png) ### Oracle database to EDB Postgres Advanced Server migration In the second step, the Oracle database is migrated to EDB Postgres Advanced Server/EDB Postgres Extended Server using EDB's Migration Toolkit. -The migration steps in connection with an Oracle Forms migration are not different from a pure Oracle to EDB Postgres Advanced Server database migration. Therefore, we do not discuss this topic in detail here. Here is the link to Migration Toolkit: - -Refer to the [Migration Toolkit documentation](https://www.enterprisedb.com/docs/migration_toolkit/latest/) +The migration steps in connection with an Oracle Forms migration aren't different from a pure Oracle-to-EDB Postgres Advanced Server database migration. Therefore, this topic isn't discussed in detail here. 
Refer to the [Migration Toolkit documentation](/migration_toolkit/latest/). ### Switch database connection to EDB Postgres -Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated SIB Visions VisionX application, simply click on the SIB Visions VisionX Settings screen and choose the `Datasources` tab. Click on `Edit` to change the database connection from Oracle to EDB Postgres Advanced Server. +Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated SIB Visions VisionX application, select the SIB Visions VisionX Settings screen and choose the **Datasources** tab. Select **Edit** to change the database connection from Oracle to EDB Postgres Advanced Server. -![Create a new Application](Images/FMBMigration37.png) +![Create a New Application](Images/FMBMigration37.png) -## Create a SIB Visions VisionX application that uses an EDB Postgres Advanced Server / EDB Postgres Extended instance as a data source +## Create a SIB Visions VisionX application that uses an EDB Postgres Advanced Server/EDB Postgres Extended instance as a data source -### Create a screen that connects to an existing table from an EDB Postgres Advanced Server / EDB Postgres Extended instance +### Create a screen that connects to an existing table from an EDB Postgres Advanced Server/EDB Postgres Extended instance 1. For this example, the data was set up in an EDB Postgres Advanced Server 13 instance. @@ -260,29 +256,29 @@ Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated S ``` -2. Open SIB Visions VisionX and create an application. The `Create a new Application` screen will be opened. +2. Open SIB Visions VisionX and create an application. The Create a New Application screen opens. - ![Create a new Application](Images/ImportExport24.png) + ![Create a New Application](Images/ImportExport24.png) -3. At the `Choose Database` step, enter the connection information for the EDB Postgres Advanced Server instance. +3. 
At the Choose Database step, enter the connection information for the EDB Postgres Advanced Server instance. ![Choose Database](Images/ImportExport25.png) ![Choose Database](Images/ImportExport26.png) -4. At the `Create Admin User` step, provide the user name and password. +4. At the Create Admin User step, provide the user name and password. ![Create Admin User](Images/ImportExport4.png) -5. A `Task List` screen will display the application creation progress. +5. A Task List screen shows the application creation progress. ![Task List](Images/ImportExport5.png) -6. Once the application is created, the `New Application Screen` is opened, which allows the user to create a screen that is connected to the EDB Postgres Advanced Server 13. +6. Once the application is created, the New Application Screen opens, which allows you to create a screen that's connected to the EDB Postgres Advanced Server 13. ![New Application Screen](Images/ImportExport6.png) -7. At the `Screen Infos` step, provide the required information for the screen. +7. At the Screen Infos step, provide the required information for the screen. ![Screen Infos](Images/ImportExport27.png) @@ -290,29 +286,29 @@ Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated S ![Choose Layout](Images/ImportExport8.png) -9. At the `Select Data Source` step, select the `Use existing data from database tables` option, which allows the user to select the required table from the EDB Postgres Advanced Server 13 instance. +9. At the Select Data Source step, select the **Use existing data from database tables** option, which allows you to select the required table from the EDB Postgres Advanced Server 13 instance. ![Select Data Source](Images/ImportExport28.png) -10. At the `Choose Database` step, provide the connection information for the EDB Postgres Advanced Server 13 instance. +10. At the Choose Database step, provide the connection information for the EDB Postgres Advanced Server 13 instance. 
![Choose Database](Images/ImportExport29.png) ![Choose Database](Images/ImportExport30.png) -11. At the next step, select the table that will be used in the screen from the drop down list. +11. At the next step, from the list, select the table to use in the screen. ![Choose Table](Images/ImportExport31.png) -12. A `Task List` screen will display the screen creation progress. +12. A Task List screen shows the screen creation progress. ![Task List](Images/ImportExport11.png) -13. Once the screen is created, it will be displayed in Design Mode where the user can adjust it according to his/her requirements. +13. Once the screen is created, it appears in design mode, where you can adjust it according to your requirements. ![Screen Design Mode](Images/ImportExport32.png) -14. Open the screen in the Live Preview and enter a few records. The data will be updated on the EDB Postgres Advanced Server 13 instance table. +14. Open the screen in the Live Preview and enter a few records. The data is updated on the EDB Postgres Advanced Server 13 instance table. ![Preview Mode](Images/ImportExport33.png) @@ -328,9 +324,9 @@ Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated S ``` -### Create a screen that creates a table in the connected EDB Postgres Advanced Server / EDB Postgres Extended instance +### Create a screen that creates a table in the connected EDB Postgres Advanced Server/EDB Postgres Extended instance -1. In the application created in the previous section, click the `New Screen` button. +1. In the application created in the previous example, select **New Screen**. ![New Application Screen](Images/ImportExport6.png) @@ -338,11 +334,11 @@ Switching from Oracle is a simple step in SIB Visions VisionX. In the migrated S ![Screen Infos](Images/ImportExport34.png) -3. At the `Choose Layout` step, select the required layout for the screen. +3. At the Choose Layout step, select the required layout for the screen. 
 ![Choose Layout](Images/ImportExport8.png)
 
-4. At the `Select Data Source` step, select the `Define the information you want to manage` option, which allows the user to define the structure of the table that is created on Oracle.
+4. At the Select Data Source step, select the **Define the information you want to manage** option, which allows you to define the structure of the table that's created in the connected EDB Postgres Advanced Server instance.
 
 ![Select Data Source](Images/ImportExport35.png)
 
@@ -350,15 +346,15 @@
 
 ![Data/Informations](Images/ImportExport36.png)
 
-6. A Task List screen will display the screen creation progress.
+6. A Task List screen shows the screen creation progress.
 
 ![Task List](Images/ImportExport11.png)
 
-7. Once the screen is created, it will displayed in Design Mode where the user can adjust it according to his/her requirements.
+7. Once the screen is created, it appears in design mode, where you can adjust it according to your requirements.
 
 ![Screen Design Mode](Images/ImportExport37.png)
 
-8. Open the screen in the Live Preview and enter a few records. The data will be updated on the EDB Postgres Advanced Server 13 instance table.
+8. Open the screen in the Live Preview and enter a few records. The data is updated on the EDB Postgres Advanced Server 13 instance table. 
![Screen Design Mode](Images/ImportExport38.png) diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/06-CertificationEnvironment.mdx index 76ce4e1980e..1e32fc978a5 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/06-CertificationEnvironment.mdx @@ -1,11 +1,11 @@ --- -title: 'Certification Environment' -description: 'Overview of the certification environment used in the certification of SIB Visions VisionX' +title: 'Certification environment' +description: 'Overview of the certification environment used for certifying SIB Visions VisionX' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | March 31, 2022 | +| **Certification test date** | March 31, 2022 | | **EDB Postgres Advanced Server** | 11,12,13,14 | | **EDB Postgres Extended Server** | 12,13 | | **SIB Visions VisionX** | 5.6.925 | diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx index 347e0a19f6a..2ce8294c6d4 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx @@ -1,12 +1,11 @@ --- -title: 'SIB Visions VisionX Implementation Guide' +title: 'Implementing SIB Visions VisionX' indexCards: simple directoryDefaults: iconName: handshake --- -


+![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

SIB Visions VisionX

From e3359ed9b1084ba98d19812343b3c655183ac5d1 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 31 Jul 2023 14:55:32 -0400 Subject: [PATCH 17/65] Edits to Thales Transparent Encryption doc --- .../02-PartnerInformation.mdx | 12 +- .../03-SolutionSummary.mdx | 23 ++-- ...ThalesCipherTrustTransparentEncryption.mdx | 111 +++++++----------- ...ThalesCipherTrustTransparentEncryption.mdx | 60 ++++------ .../06-CertificationEnvironment.mdx | 8 +- .../07-Appendix.mdx | 4 +- .../index.mdx | 7 +- 7 files changed, 93 insertions(+), 132 deletions(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/02-PartnerInformation.mdx index 78506172921..1432e2264c4 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/02-PartnerInformation.mdx @@ -1,13 +1,13 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details for Thales CipherTrust Transparent Encryption (CTE)' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Thales | -| **Partner Product** | CipherTrust Transparent Encryption | -| **Web Site** | https://cpl.thalesgroup.com/encryption | -| **Version & Platform** | 7.1.0, Available platforms: Windows, Linux | -| **Product Description** | Thales CipherTrust Transparent Encryption (CTE) delivers data-at-rest encryption with centralized key management, privileged user access control, and detailed data access audit logging. This protects data wherever it resides: on-premises, across multiple clouds and within big data, and container environments. 
| +| **Partner name** | Thales | +| **Partner product** | CipherTrust Transparent Encryption | +| **Website** | https://cpl.thalesgroup.com/encryption | +| **Version & platform** | 7.1.0, Available platforms: Windows, Linux | +| **Product description** | Thales CipherTrust Transparent Encryption (CTE) delivers data-at-rest encryption with centralized key management, privileged user access control, and detailed data access audit logging. This approach protects data wherever it resides: on premises, across multiple clouds and within big data, and in container environments. | diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/03-SolutionSummary.mdx index ff93ba2705f..b834a7d25a1 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/03-SolutionSummary.mdx @@ -1,21 +1,16 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- +Thales CipherTrust Transparent Encryption secures data at rest for Postgres databases and backups. +It uses file-system-level encryption backed by centralized key management, privileged user-access controls, and +detailed data-access audit logging. CipherTrust Transparent Encryption allows you to adopt Postgres +for highly sensitive and regulated data both on premises and in the cloud while also meeting your compliance +obligations. CipherTrust Transparent Encryption is certified with EDB Postgres Advanced Server +and with EDB Postgres Extended Server as part of a bi-directional replication cluster and with Barman. 
-Thales’ CipherTrust Transparent Encryption secures data at-rest for Postgres databases and backups -with file system-level encryption backed by centralized key management, privileged user access controls, and -detailed data access audit logging. CipherTrust Transparent Encryption allows customers to adopt Postgres -for highly-sensitive and regulated data both on-premises and in the cloud while also meeting their compliance -obligations. CipherTrust Transparent Encryption has been certified with EDB Postgres Advanced Server, -and with EDB Postgres Extended Server as part of a BDR (bi-directional replication) cluster, and with Barman. - -


- - +![Solution Summary](Images/SolutionSummary.jpg.png) !!! Note - EDB Postgres Extended Server represents EDB Postgres Extended Server* with BDR (Bi-Directional Replication) and Barman. + EDB Postgres Extended Server represents EDB Postgres Extended Server with BDR (Bi-Directional Replication) and Barman. diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 0022cf7c5a6..e5cb9e87b76 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -1,24 +1,24 @@ --- title: 'Configuring Thales CipherTrust Transparent Encryption (CTE)' -description: 'Walkthrough on configuring Thales CipherTrust Transparent Encryption (CTE)' +description: 'Walkthrough of configuring Thales CipherTrust Transparent Encryption (CTE)' --- -**Implementing the CipherTrust Transparent Encryption (CTE) solution requires the following components:** - - Postgres server installed and operational. - - CipherTrust Manager installed and operational. - - A CTE agent installed on the Postgres host registered to the CipherTrust Manager. +The following diagram shows the basic flow of the CTE solution. +![Basic CTE Solution Implementation](Images/ImplementingCTE.png) -**The following diagram shows the basic flow of the CTE solution:** +## Prerequisites -


+Implementing the CipherTrust Transparent Encryption (CTE) solution requires the following components: -### Prerequisites -#### Postgres Host -- Ensure that the Postgres server is installed and running. + - Postgres server installed and operational + - CipherTrust Manager installed and operational + - A CTE agent installed on the Postgres host registered to the CipherTrust Manager + +### Postgres host + +- Make sure that the Postgres server is installed and running. - For CentOS 7, you need to install the following repository: @@ -26,91 +26,70 @@ description: 'Walkthrough on configuring Thales CipherTrust Transparent Encrypti sudo yum install -y lsof ``` -#### CipherTrust Manager -1. Ensure CipherTrust Manager is installed and running. +### CipherTrust Manager -


+Make sure that CipherTrust Manager is installed and running. -### Configuring CipherTrust Manager -Logon to the CipherTrust Manager (CM) Web GUI and perform the following steps: +![CypherTrust Manager](Images/CipherTrustManager.png) -1. Create a registration token. +## Configuring CipherTrust Manager - a. Navigate to **Key and Access Management** and select **Registration Tokens**. This token is used for the CTE agent enrollment to CM. +Log in to the CipherTrust Manager (CM) web UI. Then: - b. Select **New Registration Token** to create a new registration token. - +1. Create a registration token. -The following screenshot shows a registration token created with the name **edb**. + 1. Navigate to **Key and Access Management** and select **Registration Tokens**. This token is used for the CTE agent enrollment to CM. + 1. To create a registration token, select **New Registration Token**. -


+ The screenshot shows a registration token created with the name **edb**. + +![Registration Token](Images/ConfiguringCipherTrustManager.png) 2. Create user sets. - a. Navigate to CTE and select Policies, Policy Elements and then User Sets. + 1. Navigate to **CTE** and select **Policies > Policy Elements > User Sets**. - b. Select Create User Set to create a new user set. + 1. To create the user set, select **Create User Set**. -Create the Postgres, EnterpriseDB and Barman user sets as shown in the following screenshots. + 1. Create the Postgres, EnterpriseDB, and Barman user sets as shown in the following screenshots. -






+![Create User Sets1](Images/CreateUserSets1.png) -3. **Create Policies** +![Create User Sets2](Images/CreateUserSets2.png) - a. Navigate back to **Policies** and select **Create Policy**. +![Create User Sets2](Images/CreateUserSets3.png) - -**The following screenshots show Live Data Transformation (LDT) policies postgres-policy, epas-policy and barman-policy.** +3. Create the policies. Navigate back to **Policies** and select **Create Policy**. + +The following screenshots show the live data transformation (LDT) policies postgres-policy, epas-policy, and barman-policy. +![postgres-policy Screenshot](Images/CreatePolicies1.png) +![epas-policy Screenshot](Images/CreatePolicies2.png) +![barman-policy Screenshot](Images/CreatePolicies3.png) -






!!! Note - The policies include the User Sets **Postgres** and **EnterpriseDB** respectively created in Step 2 and the same Key Rule for the policies: + The policies include the user sets Postgres and EnterpriseDB created in Step 2 and the same key rule for the policies: -


+![Policy User Sets and Key Rule](Images/CreatePolicies4.png) -### Installing CTE Agent +### Installing CTE agent Refer to the following guides from Thales for installing the CTE agent on the Postgres host: -[CTE Agent Quick Start Guide](https://thalesdocs.com/ctp/cte/Books/Online-Files/7.0.0/CTE_Agent_Linux_Quick_Start_Guide_v7.0.0_Doc_v1.pdf) +- [CTE Agent Quick Start Guide](https://thalesdocs.com/ctp/cte/Books/Online-Files/7.0.0/CTE_Agent_Linux_Quick_Start_Guide_v7.0.0_Doc_v1.pdf) -[*CTE Agent Advanced Installation Guide*](https://thalesdocs.com/ctp/cte/Books/Online-Files/7.0.0/CTE_Agent_Linux_Adv_Config_Integration_Guide_v7.0.0_Doc_v6.pdf) +- [CTE Agent Advanced Installation Guide](https://thalesdocs.com/ctp/cte/Books/Online-Files/7.0.0/CTE_Agent_Linux_Adv_Config_Integration_Guide_v7.0.0_Doc_v6.pdf) !!! Note - You will need the Registration Token and host address of the CipherTrust Manager during the installation. + You need the registration token and host address of the CipherTrust Manager during the installation. After the CTE agent is successfully installed, verify the Postgres host is registered with CM. -1. Log on to the CM Web GUI and navigate to **CTE**. -2. Select **Clients**. The client status should appear as **Healthy** as shown below (you may have to wait a few seconds for the status to get updated). +1. Log in to the CM web UI and navigate to **CTE**. +2. Select **Clients**. The client status appears as Healthy. (You might have to wait a few seconds for the status to update). -The following screenshot shows clients registered with the CipherTrust Manager. +The screenshot shows clients registered with the CipherTrust Manager. -


+![CipherTrust Manager Registered Clients](Images/InstallingCTEAgent.png) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx index 167448bbd17..afb60144854 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx @@ -3,73 +3,61 @@ title: 'Using Thales CipherTrust Transparent Encryption (CTE)' description: 'Walkthroughs of multiple Thales CipherTrust Transparent Encryption (CTE) usage scenarios' --- -CTE protects data either at the file level or at the storage device level. A CTE Agent running on the (Postgres) host manages the files behind a GuardPoint by enforcing the policy associated with it, and communicates data access events to the CipherTrust Manager for logging. A GuardPoint is usually associated with a Linux mount point or a Windows volume, but may also be associated with a directory subtree. +CTE protects data either at the file level or at the storage device level. A CTE agent running on the Postgres host manages the files behind a GuardPoint by enforcing the policy associated with it and communicates data access events to the CipherTrust Manager for logging. A GuardPoint is usually associated with a Linux mount point or a Windows volume but can also be associated with a directory subtree. -**The following diagram shows the CTE architecture.** +The following diagram shows the CTE architecture. -


+![CTE Architecture](Images/UsingCTE.png) -### Sample User Scenarios +## Sample user scenarios -This section describes sample user scenarios of deploying CTE solutions on EDB Postgres Advanced Server and EDB Postgres Extended Server with BDR hosts. -- **EDB Postgres Advanced Server** -- **EDB Postgres Extended Server with BDR** +These sample user scenarios show deploying CTE solutions on EDB Postgres Advanced Server and EDB Postgres Extended Server with BDR hosts. -**EDB Postgres Advanced Server (Single Instance)** +### EDB Postgres Advanced Server (single instance) 1. Install CTE agent on the Postgres host. -2. Login to the Postgres host and stop the postgres server. -3. Create the GuardPoints via the CM Web GUI using the **epas-policy** Policy on the postgres host. Set the following directories as the **Protected Path** on the EDB Postgres Advanced Server host (assuming PGDATA is set /var/lib/edb/as13/data on the host): +2. Log in to the Postgres host and stop the Postgres server. +3. Create the GuardPoints with the CM web UI using the epas-policy policy on the Postgres host. Set the following directories as the protected path on the EDB Postgres Advanced Server host, assuming PGDATA is set to `/var/lib/edb/as13/data` on the host: -


+![Single Instance Use Case](Images/SampleUserScenarios1.png) -4. Restart the Postgres server on the Postgres host as the user **enterprisedb**. Make sure you are logged in using ssh (not sudo). +4. Restart the Postgres server on the Postgres host as the user enterprisedb. Make sure you're logged in using ssh, not sudo. -**EDB Postgres Extended Server with BDR-Always-ON** +### EDB Postgres Extended Server with BDR-Always-ON The following diagram shows the BDR-Always-ON architecture. For more details, refer to the [BDR-Always-ON Architecture](https://documentation.2ndquadrant.com/tpa/release/21.1-1/architecture-BDR-Always-ON/) documentation. !!! Note The documentation requires EDB access credentials. -


+![Extended Server Configuration](Images/EDBPostgresExtendedwithBDRAlwaysOn.png) 1. Install CTE agents on all the postgres and barman nodes. -2. Create a GuardPoint via the CM Web GUI using the `barman-policy` Policy on the directory `/var/lib/barman/` on the barman node in data center A (DC A). The following screenshot shows a GuardPoint created for the barman node. +2. Create a GuardPoint with the CM web UI using the barman-policy policy on the directory `/var/lib/barman/` on the barman node in data center A (DC A). The following screenshot shows a GuardPoint created for the barman node. -


+![Extended Server User Scenario](Images/SampleUserScenarios3.png) -3. Login to the Standby node in data center A and stop the postgres server. +3. Log in to the standby node in data center A and stop the Postgres server. -4. Create a GuardPoint on the Standby node via the CM Web GUI using the postgres-policy Policy on the PGDATA directory `/opt/postgres/data`. +4. Create a GuardPoint on the standby node with the CM web UI using the postgres-policy policy on the PGDATA directory `/opt/postgres/data`. -5. Restart the Postgres server on the Standby node as the user **postgres**. Make sure you are logged in using ssh (not sudo). +5. Restart the Postgres server on the standby node as the user postgres. Make sure you're logged in using ssh, not sudo. -6. Login to the Shadow Master node in data center A and stop the postgres server. +6. Log in to the Shadow Master node in data center A and stop the Postgres server. -7. Create a GuardPoint on the Shadow Master node via the CM Web GUI using the postgres-policy Policy on the PGDATA directory `/opt/postgres/data`. +7. Create a GuardPoint on the Shadow Master node with the CM web UI using the postgres-policy policy on the PGDATA directory `/opt/postgres/data`. -8. Restart the Postgres server on the Shadow Master node as the user **postgres**. Make sure you are logged in using ssh (not sudo). +8. Restart the Postgres server on the Shadow Master node as the user postgres. Make sure you're logged in using ssh, not sudo. -9. Login to the Lead Master node in data center A and stop the postgres server. +9. Log in to the Lead Master node in data center A and stop the Postgres server. -10. Create a GuardPoint on the Lead Master node via the CM Web GUI using the `postgres-policy` Policy on the PGDATA directory `/opt/postgres/data`. +10. Create a GuardPoint on the Lead Master node with the CM web UI using the postgres-policy policy on the PGDATA directory `/opt/postgres/data`. -11. 
Restart the Postgres server on the Lead Master node as the user `postgres`. Make sure you are logged in using ssh (not sudo). +11. Restart the Postgres server on the Lead Master node as the user postgres. Make sure you're logged in using ssh, not sudo. The following screenshot shows a GuardPoint created for Lead Master in data center A. -

- -

+![Guardpoint Created for Lead Master](Images/SampleUserScenarios4.png) 12. Repeat steps 2 through 11 for postgres and barman nodes in data center B (DC B). diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/06-CertificationEnvironment.mdx index 474a4364bc0..13966b47a9e 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/06-CertificationEnvironment.mdx @@ -1,19 +1,19 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment used in the certification of Thales CipherTrust Transparent Encryption (CTE)' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | May 19 2021 | +| **Certification test date** | May 19 2021 | | **EDB Postgres Advanced Server** | 13.2.5 | | **CipherTrust Transparent Encryption** | 7.0.0.99 | |   |   | | ----------- | ----------- | -| **Certification Test Date**| May 19 2021 | +| **Certification test date**| May 19 2021 | | **EDB Postgres Extended Server** | 11 | | **EDB Postgres Distributed** | 3.6.25 | !!! Note - Refer to the [sample config.yml](07-Appendix.mdx) file in the Appendix for deployment details. + Refer to the [sample config.yml](07-Appendix.mdx) file for deployment details. 
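The per-node cycle edited above (steps 3 through 11) can be dry-run as a short shell plan. The `pg_ctl` invocations and node names here are illustrative assumptions, not CTE requirements; GuardPoint creation itself happens in the CipherTrust Manager web UI, so it appears only as a manual step.

```shell
# Dry-run plan for guarding each Postgres node: stop the server, create the
# GuardPoint on PGDATA in the CM web UI, then restart as the postgres user
# over a direct ssh login (not sudo). Commands are printed, not executed.
plan="$(mktemp)"
PGDATA=/opt/postgres/data
for node in standby shadow-master lead-master; do
  {
    echo "ssh postgres@${node}   # log in as postgres directly, not through sudo"
    echo "  pg_ctl stop -D ${PGDATA}"
    echo "  (create a GuardPoint on ${PGDATA} with postgres-policy in the CM web UI)"
    echo "  pg_ctl start -D ${PGDATA}"
  } >> "$plan"
done
cat "$plan"
```

Repeating the same plan against the data center B nodes covers step 12.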
diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/07-Appendix.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/07-Appendix.mdx index 17106dfb09d..6e906c52cc4 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/07-Appendix.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/07-Appendix.mdx @@ -1,6 +1,6 @@ --- -title: 'Appendix' -description: 'Sample properties file' +title: 'Sample properties file' +description: 'Sample config.yml properties file' --- ### Sample `config.yml` file diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx index fd739703b59..a9b147f23f7 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx @@ -1,12 +1,11 @@ --- -title: 'Thales CipherTrust Transparent Encryption Implementation Guide' +title: 'Implementing Thales CipherTrust Transparent Encryption' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+![Partner Program Logo](Images/PartnerProgram.jpg.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Thales CipherTrust Transparent Encryption

From 52743b7691c0b299ec74c8c53d3033b7ea5e7077 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Mon, 31 Jul 2023 16:08:14 -0400 Subject: [PATCH 18/65] Edits to Veritas doc --- .../02-PartnerInformation.mdx | 10 +- .../03-SolutionSummary.mdx | 8 +- ...nfiguringVeritasNetBackupforPostgreSQL.mdx | 98 ++++++++----------- .../05-UsingVeritasNetBackupForPostgreSQL.mdx | 52 +++++----- .../06-CertificationEnvironment.mdx | 8 +- .../VeritasNetBackupforPostgreSQL/index.mdx | 8 +- 6 files changed, 82 insertions(+), 102 deletions(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/02-PartnerInformation.mdx index ca673a9b1f1..067384164cc 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/02-PartnerInformation.mdx @@ -1,12 +1,12 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 'Details for Veritas NetBackup for PostgreSQL' --- |   |   | | ----------- | ----------- | -| **Partner Name** | Veritas | +| **Partner name** | Veritas | | **Partner Product** | NetBackup for PostgreSQL | -| **Web Site** | https://www.veritas.com/ | -| **Version & Platform** | NetBackup for PostgreSQL 9.1: Linux, Windows | -| **Product Description** | Veritas NetBackup gives enterprise IT a simple and powerful way to ensure the integrity and availability of their data – from edge to core to cloud. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. 
| +| **Website** | https://www.veritas.com/ | +| **Version & platform** | NetBackup for PostgreSQL 9.1: Linux, Windows | +| **Product description** | Veritas NetBackup gives enterprise IT a simple and powerful way to ensure the integrity and availability of their data—from edge to core to cloud. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. | diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx index 75460d1343a..4f7702515a6 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx @@ -1,9 +1,7 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- -NetBackup provides a non-distruptive way of validating your resiliency plan for assurance and compliance through automated recovery and rehearsal of business-critical applications. Moving data and spinning up applications when and where you need to without risking data loss requires business-level resiliency. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. If a NetBackup environment is operational within an organization, then users can backup and restore EDB Postgres Advanced Server and EDB Postgres Extended Server with the help of Veritas NetBackup for PostgreSQL Agent. +NetBackup provides a nondisruptive way of validating your resiliency plan for assurance and compliance through automated recovery and rehearsal of business-critical applications. Moving data and spinning up applications when and where you need to without risking data loss requires business-level resiliency.
Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. If a NetBackup environment is operational within an organization, then users can back up and restore EDB Postgres Advanced Server and EDB Postgres Extended Server with the help of Veritas NetBackup for PostgreSQL Agent. -

- -

+![Veritas NetBackup for PostgreSQL Architecture](Images/ArchitectureUpdate4.png) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx index b9efab15b15..253d23c0d6d 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx @@ -1,84 +1,69 @@ --- title: 'Configuring Veritas NetBackup for PostgreSQL' -description: 'Walkthrough on configuring Veritas NetBackup for PostgreSQL' +description: 'Walkthrough of configuring Veritas NetBackup for PostgreSQL' --- -**Implementing Veritas NetBackup solution for backup/restore of PostgreSQL databases requires the following components:** +Implementing Veritas NetBackup solution for backup/restore of PostgreSQL databases requires the following components: -- EDB Postgres Advanced Server. -- Veritas NetBackup Server. -- Veritas NetBackup Client. -- Veritas NetBackup Agent for PostgreSQL. +- EDB Postgres Advanced Server +- Veritas NetBackup server +- Veritas NetBackup client +- Veritas NetBackup Agent for PostgreSQL -### Prerequisites +## Prerequisites -- A running EDB Postgres Advanced Server. -- A running Veritas NetBackup Server. -- Veritas NetBackup Client installed on the EDB Postgres Advanced Server host. -- Veritas NetBackup PostgreSQL Agent installed on the EDB Postgres Advanced Server host. +- A running EDB Postgres Advanced Server +- A running Veritas NetBackup server +- Veritas NetBackup client installed on the EDB Postgres Advanced Server host +- Veritas NetBackup PostgreSQL Agent installed on the EDB Postgres Advanced Server host Configuring Veritas NetBackup for PostgreSQL consists of configuring the following components: -- Veritas NetBackup Agent for PostgreSQL. -- PostgreSQL server.
+- Veritas NetBackup Agent for PostgreSQL +- PostgreSQL server -The steps below show an example of how to configure Veritas NetBackup for PostgreSQL for EDB Postgres Advanced Server. +The example shows how to configure Veritas NetBackup for PostgreSQL for EDB Postgres Advanced Server. -### Configuring Veritas NetBackup for PostgreSQL +## Configuring Veritas NetBackup for PostgreSQL -1. Log on to the NetBackup Administration Console: +1. Log in to the NetBackup Administration console: - a. Use the credentials for the **root** user + 1. Use the credentials for the root user. - b. Select the hostname for the NetBackup Master Server you want to administer + 1. Select the hostname for the NetBackup master server you want to administer. -

- -

+ ![NetBackup Admin Console](Images/NetBackupAdminConsole.png) 2. Create a policy for EDB Postgres Advanced Server: !!! Note - Refer to the [Veritas NetBackup Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/18716246-126559472-0/v42176014-126559472) for detailed information on policies. + See the [Veritas NetBackup Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/18716246-126559472-0/v42176014-126559472) for detailed information on policies. - a. Click on **NetBackup Management > Policies**, select the NetBackup server + 1. Select **NetBackup Management > Policies**. - b. Right mouse click and select **New Policy** + 1. Select the NetBackup server. Right-click and select **New Policy**. -

- -

+ ![New Policy](Images/NetBackupDataPolicy1.png) - c. Enter the policy name in the **Add New Policy** dialog box and click OK + 1. In the Add New Policy dialog box, enter the policy name and select **OK**. -

- -

+ ![Adding a Policy](Images/NetBackupDataPolicy2.png) - d. Select the **Clients** tab + 1. On the **Clients** tab, select **New**. - e. Click on **New** - - f. Enter the NetBackup client name in the **Add Client** dialog box, and click **OK** + 1. In the Add Client dialog box, enter the NetBackup client name and select **OK**. -

- -

- - g. Select the **Attributes** tab + ![Adding a Policy](Images/NetBackupDataPolicy3.png) - h. Select **DataStore** for **Policy type** + 1. On the **Attributes** tab, set **Policy type** to **DataStore**. - i. Select **Policy storage** from available values + 1. From available values, select **Policy storage**. - j. Set any other parameters you require for your policy and then click **OK** + 1. Set any other parameters you require for your policy, and then select **OK**. -

- -

+ ![Finish Adding a Policy](Images/NetBackupDataPolicy4.png) -3. On the NetBackup Client, update the agent configuration file `/usr/NBPostgreSQLAgent/nbpgsql.conf` to set the necessary parameters to make the agent work successfully with EDB Postgres Advanced Server: +3. On the NetBackup client, update the agent configuration file `/usr/NBPostgreSQLAgent/nbpgsql.conf`. Set the necessary parameters to make the agent work successfully with EDB Postgres Advanced Server: ``` DB_USER= enterprisedb DB_PORT=5444 @@ -91,21 +76,21 @@ The steps below show an example of how to configure Veritas NetBackup for Postgr ``` !!! Note - Value of **PGSQL_LIB_INSTALL_PATH** will be dependent on the version of EDB Postgres Advanced Server installed. - Values of **MASTER_SERVER_NAME** and **POLICY_NAME** parameters must match the names of your NetBackup Master Server and DataStore Policy respectively. + The value of `PGSQL_LIB_INSTALL_PATH` depends on the version of EDB Postgres Advanced Server installed. + Values of the `MASTER_SERVER_NAME` and `POLICY_NAME` parameters must match the names of your NetBackup master server and datastore policy, respectively. -In the sample configuration file above, the values for **DB_USER**, **DB_PORT**, **DB_INSTANCE_NAME**, and **PGSQL_LIB_INSTALL_PATH** have been substituted for EDB Postgres Advanced Server as the default values are for Postgres. -Refer to the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129276458-150015228) for detailed description of the parameters. +In the sample configuration file, the values for `DB_USER`, `DB_PORT`, `DB_INSTANCE_NAME`, and `PGSQL_LIB_INSTALL_PATH` were substituted for EDB Postgres Advanced Server because the default values are for Postgres. +See the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129276458-150015228) for a detailed description of the parameters.
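Before pointing the agent at a real master server, the step 3 file can be staged and sanity-checked with a small shell sketch. `DB_USER` and `DB_PORT` follow the sample in the walkthrough; the instance name, library path, master server, and policy name are placeholders to replace with your own values.

```shell
# Stage a candidate nbpgsql.conf in a scratch directory and verify that the
# keys this walkthrough relies on are present and non-empty before copying it
# over /usr/NBPostgreSQLAgent/nbpgsql.conf.
conf="$(mktemp -d)/nbpgsql.conf"
cat > "$conf" <<'EOF'
DB_USER=enterprisedb
DB_PORT=5444
DB_INSTANCE_NAME=main
PGSQL_LIB_INSTALL_PATH=/usr/edb/as14/lib/libpq.so
MASTER_SERVER_NAME=nbu-master.example.com
POLICY_NAME=epas-datastore-policy
EOF
for key in DB_USER DB_PORT DB_INSTANCE_NAME PGSQL_LIB_INSTALL_PATH \
           MASTER_SERVER_NAME POLICY_NAME; do
  grep -q "^${key}=." "$conf" || { echo "missing or empty: $key" >&2; exit 1; }
done
echo "nbpgsql.conf looks complete"
```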
-### Configuring EDB Postgres Advanced Server +## Configuring EDB Postgres Advanced Server -Set up WAL archiving on the EDB Postgres Advanced Server server by using the steps below. WAL archiving prepares Postgresql/EDB Postgres Advanced Server database servers for backup/recovery operations and is a precondition for any backup/recovery tool to work with the database server. +Set up WAL archiving on the EDB Postgres Advanced Server server. WAL archiving prepares PostgreSQL/EDB Postgres Advanced Server database servers for backup/recovery operations and is a precondition for any backup/recovery tool to work with the database server. 1. Create a writeable `` directory at your desired location. -2. Set the required parameters in the `postgresql.conf` file to turn on WAL archiving: +2. To turn on WAL archiving, set the required parameters in the `postgresql.conf` file: ``` wal_level = archive archive_mode = on @@ -119,5 +104,4 @@ Set up WAL archiving on the EDB Postgres Advanced Server server by using the ste 3. Restart the PostgreSQL server. -Refer to the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129903049-150015228) for detailed information on how to configure EDB Postgres Advanced Server for Veritas NetBackup Agent for PostgreSQL. - +See the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129903049-150015228) for detailed information on how to configure EDB Postgres Advanced Server for Veritas NetBackup Agent for PostgreSQL. 
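The WAL-archiving settings can be staged the same way. The `cp`-based `archive_command` is a common minimal pattern and an assumption here, since this section leaves the actual archiving command to you.

```shell
# Append the WAL-archiving settings from this section to a scratch
# postgresql.conf, pointing archive_command at a freshly created, writeable
# archive directory. Swap pgdata for your real data directory when applying.
pgdata="$(mktemp -d)"
archive_dir="$pgdata/wal_archive"
mkdir -p "$archive_dir"
cat >> "$pgdata/postgresql.conf" <<EOF
wal_level = archive
archive_mode = on
archive_command = 'cp %p $archive_dir/%f'
EOF
grep '^archive' "$pgdata/postgresql.conf"
```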
diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx index b4ff290b59a..d25c0cdc6e5 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx @@ -4,48 +4,46 @@ description: 'Walkthroughs of multiple Veritas NetBackup for PostgreSQL usage sc --- Common backup/restore operations for PostgreSQL databases using Veritas NetBackup for PostgreSQL are: -- Performing backups - takes backup of the database and stores it in a predetermined location. -- Querying backups - lists available database backups. -- Performing restores - restores the database from a backup previously taken: - - Local restore - database files are restored to the same host as the client. - - Redirected restore - database files are restored to a different host than the client. +- Performing backups — Takes backup of the database and stores it in a predetermined location. +- Querying backups — Lists available database backups. +- Performing restores — Restores the database from a backup previously taken: + - Local restore — Database files are restored to the same host as the client. + - Redirected restore — Database files are restored to a host different from the client. -### Performing Backups +## Performing backups -To take a backup of the database, enter the command on the Veritas NetBackup Client: +To take a backup of the database, enter the command on the Veritas NetBackup client: `/usr/NBPostgreSQLAgent/nbpgsql -o backup` -### Querying Backups +## Querying backups -To list available database backups, enter the command on the Veritas NetBackup Client: +To list available database backups, enter the command on the Veritas NetBackup client: `/usr/NBPostgreSQLAgent/nbpgsql -o query` -

- -

-### Performing Restores +![Query Backup](Images/BackupQuery.png) -Database restores can be performed in the following two scenarios: +## Performing restores -- Local Restore. -- Redirected Restore. +You can perform database restores in the following two scenarios: +- Local restore +- Redirected restore -#### Local Restore -In this scenario, the database files are restored on the original (source) database server host (default -option). +### Local restore -To perform a local restore, use the steps below: +In this scenario, the database files are restored on the original (source) database server host. This is the default option. + +To perform a local restore: 1. Stop the database server. -2. Create the target directory to store the database files to be used for the restore operation. +2. Create the target directory to store the database files to use for the restore operation. 3. Determine the database backup id you want to use for the restore by querying available backups: @@ -55,7 +53,7 @@ To perform a local restore, use the steps below: `/usr/NBPostgreSQLAgent/nbpgsql -o restore -t -id ` -5. Once the restore operation is completed, replace the data directory (PGDATA) with the contents +5. Once the restore operation is complete, replace the data directory `PGDATA` with the contents of the target directory. 6. Set the `restore_command` parameter in the `postgresql.conf` file: @@ -67,15 +65,15 @@ of the target directory. -#### Redirected Restore +### Redirected restore -In this scenario, the database files are restored on a (target) database server host which is different from the original (source) database server. +In this scenario, the database files are restored on a target database server host that's different from the original (source) database server. -To perform a redirected restore, use the steps below: +To perform a redirected restore: 1. Stop the database server on the target host. -2. 
Create the target directory on the target host to store the database files to be used for the restore operation. +2. Create the target directory on the target host to store the database files to use for the restore operation. 3. Determine the database backup id you want to use for the restore by querying available backups on the source host: @@ -85,7 +83,7 @@ To perform a redirected restore, use the steps below: `/usr/NBPostgreSQLAgent/nbpgsql -o restore -t -id -C ` -5. Once the restore operation is completed, replace the data directory `PGDATA` on the target host with the contents of the target directory. +5. Once the restore operation is complete, replace the data directory `PGDATA` on the target host with the contents of the target directory. 6. Set the `restore_command` parameter in the `postgresql.conf` file on the target host: ``` diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/06-CertificationEnvironment.mdx index a6c2ff14d22..1539573025d 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/06-CertificationEnvironment.mdx @@ -1,13 +1,13 @@ --- -title: 'Certification Environment' +title: 'Certification environment' description: 'Overview of the certification environment used in the certification of NetBackup for PostgreSQL' --- |   |   | | ----------- | ----------- | -| **Certification Test Date** | December 28, 2021 | +| **Certification test date** | December 28, 2021 | | **EDB Postgres Advanced Server** | 11,12,13,14 | | **Veritas NetBackup for PostgreSQL** | 9.1 | -| **Veritas NetBackup Server** | 9.1 | -| **Veritas NetBackup Client** | 9.1 | +| **Veritas NetBackup server** | 9.1 | +| **Veritas NetBackup client** | 9.1 | diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx 
b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx index 348df79824d..b7330e5b89e 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx @@ -1,12 +1,12 @@ --- -title: 'Veritas Implementation Guide' +title: 'Implementing Veritas' indexCards: simple directoryDefaults: iconName: handshake --- -

- -

+ +![Partner Program Logo](Images/EDBPartnerProgram.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Veritas NetBackup for PostgreSQL
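Putting the backup, query, and restore commands from this patch together, a guarded sketch of the local-restore flow looks like the following. The backup id is a placeholder, and the agent calls are skipped on hosts where NetBackup isn't installed.

```shell
# Query for a backup id, restore it into a target directory, then swap it
# into PGDATA. On a host without the agent this prints a dry-run note only.
NBPGSQL=/usr/NBPostgreSQLAgent/nbpgsql
target="$(mktemp -d)/restore_target"
mkdir -p "$target"
if [ -x "$NBPGSQL" ]; then
  "$NBPGSQL" -o query                          # note the backup id you want
  backup_id="<backup-id-from-query>"           # placeholder: paste the real id
  "$NBPGSQL" -o restore -t "$target" -id "$backup_id"
else
  echo "nbpgsql not installed; dry run only (target: $target)"
fi
# After a successful restore: stop the server, replace PGDATA with the
# target directory's contents, set restore_command in postgresql.conf,
# and start the server again.
```

For a redirected restore, the same flow runs with the additional `-C` host option shown in the patch.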

From 11290a700f887b9358518ce85d43b24c6dcf0d17 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 09:39:31 -0400 Subject: [PATCH 19/65] Updates per Jennifer's feedback --- .../04-ConfiguringVeritasNetBackupforPostgreSQL.mdx | 2 +- .../05-UsingVeritasNetBackupForPostgreSQL.mdx | 2 +- .../partner_docs/VeritasNetBackupforPostgreSQL/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx index 253d23c0d6d..c678de43e70 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Veritas NetBackup for PostgreSQL' +title: 'Configuring' description: 'Walkthrough of configuring Veritas NetBackup for PostgreSQL' --- diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx index d25c0cdc6e5..2fbb3b67d4a 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/05-UsingVeritasNetBackupForPostgreSQL.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Veritas NetBackup for PostgreSQL' +title: 'Using' description: 'Walkthroughs of multiple Veritas NetBackup for PostgreSQL usage scenarios' --- diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx index b7330e5b89e..d8e90d8b500 100644 --- 
a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Veritas' +title: 'Veritas Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From a59e7467626ab3886e3e5e1b760585f7fa0ebcb6 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 09:41:23 -0400 Subject: [PATCH 20/65] Updates per Jennifer's feedback --- .../SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx | 2 +- .../SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx | 2 +- advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx index 9959fef5c39..559f6487c74 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/04-ConfiguringSIBVisionsVisionX.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring SIB Visions VisionX' +title: 'Configuring' description: 'Walkthrough of configuring SIB Visions VisionX' --- diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx index 2bbbf3f8c42..459f669fc14 100644 --- a/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/05-UsingSIBVisionsVisionX.mdx @@ -1,5 +1,5 @@ --- -title: 'Using SIB Visions VisionX' +title: 'Using' description: 'Walkthroughs of multiple SIB Visions VisionX usage scenarios' --- diff --git a/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx b/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx index 2ce8294c6d4..86f1b7671dc 100644 --- 
a/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx +++ b/advocacy_docs/partner_docs/SIBVisionsVisionX/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing SIB Visions VisionX' +title: 'SIB Visions VisionX Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 0d15cabff801d1b6f6d5430a8958917e02ab7d8d Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 11:19:25 -0400 Subject: [PATCH 21/65] Corrections per Jennifer's comments --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- .../05-UsingThalesCipherTrustTransparentEncryption.mdx | 2 +- .../ThalesCipherTrustTransparentEncryption/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index e5cb9e87b76..3cf43e70356 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Thales CipherTrust Transparent Encryption (CTE)' +title: 'Configuring' description: 'Walkthrough of configuring Thales CipherTrust Transparent Encryption (CTE)' --- diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx index afb60144854..a61583ac745 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx +++ 
b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/05-UsingThalesCipherTrustTransparentEncryption.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Thales CipherTrust Transparent Encryption (CTE)' +title: 'Using' description: 'Walkthroughs of multiple Thales CipherTrust Transparent Encryption (CTE) usage scenarios' --- diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx index a9b147f23f7..64a1b4c899c 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Thales CipherTrust Transparent Encryption' +title: 'Thales CipherTrust Transparent Encryption Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 8d65417c135f2ab198b6c38aa30e458d4c2382c4 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 11:54:54 -0400 Subject: [PATCH 22/65] Updates per Jennifer's comments --- .../partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx | 2 +- .../partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx | 2 +- advocacy_docs/partner_docs/QuestToadEdge/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx index 09f6c5ac4db..fcfe05ebeca 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Quest Toad Edge' +title: 'Configuring' description: 'Walkthrough of configuring Quest Toad Edge' --- diff --git a/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx 
b/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx index 0b5fb37aa69..2f4b607cefe 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/05-UsingQuestToadEdge.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Quest Toad Edge' +title: 'Using' description: 'Walkthroughs of multiple Quest Toad Edge usage scenarios' --- diff --git a/advocacy_docs/partner_docs/QuestToadEdge/index.mdx b/advocacy_docs/partner_docs/QuestToadEdge/index.mdx index 46d610c43f3..e22032bbb48 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/index.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Quest Toad Edge' +title: 'Quest Toad Edge Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From f18946af08a8e4d2ced9301635464f425a8f1871 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:17:00 -0400 Subject: [PATCH 23/65] Changes per Jennifer's feedback --- .../04-ConfiguringPureStorageFlashArray.mdx | 2 +- advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx index 6b70f144d8f..9ba7ffc5c37 100644 --- a/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/04-ConfiguringPureStorageFlashArray.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuration' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx b/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx index acc7083b9d1..022899cfb7e 100644 --- 
a/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx +++ b/advocacy_docs/partner_docs/PureStorageFlashArray/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implmenting Pure Storage FlashArray' +title: 'Pure Storage FlashArray Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 93edd75c9d4297a54a28d2df71302d6ff3aaa3ae Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:18:46 -0400 Subject: [PATCH 24/65] Updates per Jennifer's feedback --- .../PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx | 2 +- .../PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx | 2 +- advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx index 61e3023c600..b13f42abc62 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Precisely Connect CDC' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx index a7a56212a95..0dd0f65801c 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Precisely Connect CDC' +title: 'Using' description: 'Walkthrough of example usage scenarios' --- diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx index
b12e32c91ef..2f76ee861f1 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Precisely Connect CDC' +title: 'Precisely Connect CDC Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From c8ff3effd1caeb3425cc22003671cf53f934d3e2 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:20:19 -0400 Subject: [PATCH 25/65] Updates per Jennifer's feedback --- .../partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx | 2 +- advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx | 2 +- advocacy_docs/partner_docs/NutanixAHV/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx b/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx index 7cbbb67ea99..8fc0c10df71 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/04-ConfiguringNutanixAHV.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Nutanix AHV' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx b/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx index 4df10a5ec25..c61f0612ddf 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/05-UsingNutanixAHV.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Nutanix AHV' +title: 'Using' description: 'Walkthrough of example usage scenarios' --- diff --git a/advocacy_docs/partner_docs/NutanixAHV/index.mdx b/advocacy_docs/partner_docs/NutanixAHV/index.mdx index 472cd377779..d67f2967f1a 100644 --- a/advocacy_docs/partner_docs/NutanixAHV/index.mdx +++ b/advocacy_docs/partner_docs/NutanixAHV/index.mdx @@ -1,5 +1,5 @@ --- -title: 
'Implementing Nutanix AHV' +title: 'Nutanix AHV Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From f2c13e56bfb22620e61a91abddd6a8783d2bdc2e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:21:46 -0400 Subject: [PATCH 26/65] Updates per Jennifer's feedback --- .../partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx | 2 +- .../partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx | 2 +- advocacy_docs/partner_docs/LiquibasePro/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx b/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx index 81e3e7e49c3..e7b6d8e6e62 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/04-ConfiguringLiquibasePro.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Liquibase Pro' +title: 'Configuring' description: 'Walkthrough of configuring Liquibase Pro' --- diff --git a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx index 0a90a2dd02b..3d53886284d 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Liquibase Pro' +title: 'Using' description: 'Walkthroughs of multiple Liquibase Pro usage scenarios' --- diff --git a/advocacy_docs/partner_docs/LiquibasePro/index.mdx b/advocacy_docs/partner_docs/LiquibasePro/index.mdx index 8c8f5bbf831..ae1e7ee6a69 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/index.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Liquibase Pro' +title: 'Liquibase Pro Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 
4959674e100fa3383af4fb5c87f673e89648436b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:23:21 -0400 Subject: [PATCH 27/65] Updates per Jennifer's feedback --- .../04-ConfiguringImpervaDataSecurityFabric.mdx | 2 +- .../05-UsingImpervaDataSecurityFabric.mdx | 2 +- advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx index 09e8ce33557..a25c619a8e0 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/04-ConfiguringImpervaDataSecurityFabric.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Imperva Data Security Fabric' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx index e800d4a785e..9be4a1cede2 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Imperva Data Security Fabric' +title: 'Using' description: 'Walkthrough of example usage scenarios' --- diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx index 4f53ae3f5df..ae4269e96ea 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Imperva 
Data Security Fabric' +title: 'Imperva Data Security Fabric Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 36d7d9c318bd67ea2d2b17fa07c822f3dbd409b1 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:26:11 -0400 Subject: [PATCH 28/65] Updates per Jennifer's feedback --- .../04-ConfiguringTransitSecretsEngine.mdx | 2 +- .../05-UsingTransitSecretsEngine.mdx | 2 +- .../partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx index 8bae7473fa7..4243810090a 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Hashicorp Vault transit secrets engine' +title: 'Configuring' description: 'Walkthrough on configuring the integration' --- diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx index 1fcc522c91f..043ea35db0e 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/05-UsingTransitSecretsEngine.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Hashicorp Vault transit secrets engine' +title: 'Using' description: 'Walkthrough of example usage scenarios' --- diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx index 
3a20f8cc2b6..5348e80bfc2 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Hashicorp Vault Transit Secrets Engine' +title: 'Hashicorp Vault Transit Secrets Engine Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From d0a1e0aa469d5822a89f23fb3422b4a1e7683623 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:28:00 -0400 Subject: [PATCH 29/65] Updates per Jennifer's comments --- .../HashicorpVault/04-ConfiguringHashicorpVault.mdx | 2 +- .../partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx | 2 +- advocacy_docs/partner_docs/HashicorpVault/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx index 0cf064b2914..1024aff4756 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Hashicorp Vault' +title: 'Configuring' description: 'Walkthrough of configuring the integration' --- diff --git a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx index ff657df94b8..2660b97d8ea 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Hashicorp Vault' +title: 'Using' description: 'Walkthrough of example usage scenarios' --- diff --git a/advocacy_docs/partner_docs/HashicorpVault/index.mdx b/advocacy_docs/partner_docs/HashicorpVault/index.mdx index 85888dd7fe1..69c4276def1 100644 
--- a/advocacy_docs/partner_docs/HashicorpVault/index.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Hashicorp Vault' +title: 'Hashicorp Vault Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From c934e3e0b0cdbaaffdaecbc73c1ac6ab0db85a8b Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 12:57:40 -0400 Subject: [PATCH 30/65] Edits to PR4502 --- .../upgrading_pem_installation_linux_rpm.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx index bbac8961638..32a22e6d05d 100644 --- a/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx +++ b/product_docs/docs/pem/9/upgrading/upgrading_pem_installation/upgrading_pem_installation_linux_rpm.mdx @@ -21,7 +21,7 @@ During an installation, the component installation detects an existing installat ## Upgrading a PEM server installation -If you want to upgrade a PEM server that is installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information, see [Creating an EDB repository +If you want to upgrade a PEM server that's installed on a machine in an isolated network, you need to create a PEM repository on that machine before you upgrade the PEM server. For more information, see [Creating an EDB repository on an isolated network](../../installing/creating_pem_repository_in_isolated_network/). To upgrade a PEM server installation: @@ -41,7 +41,7 @@ Where `` is the package manager used with your operating system !!! 
Note - If you're doing a fresh installation of the PEM server on CentOS or RHEL 7.x host, the installer installs the `edb-python3-mod_wsgi` package along with the installation. The package is a requirement of the operating system. If you are upgrading the PEM server on CentOS or RHEL 7.x host, the `edb-python3-mod_wsgi` packages replaces the `mod_wsgi package` package to meet the requirements of the operating system. + If you're doing a fresh installation of the PEM server on CentOS or RHEL 7.x host, the installer installs the `edb-python3-mod_wsgi` package with the installation. The package is a requirement of the operating system. If you're upgrading the PEM server on CentOS or RHEL 7.x host, the `edb-python3-mod_wsgi` package replaces the `mod_wsgi package` package to meet the requirements of the operating system. After upgrading the PEM server, you must configure the PEM server. For detailed information, see [Configuring the PEM server](#configuring-the-pem-server). From 752ac5e26cd7dc91932ec60e77779c4ba24f50ee Mon Sep 17 00:00:00 2001 From: jkitchens32 Date: Thu, 10 Aug 2023 11:03:41 -0400 Subject: [PATCH 31/65] Archive Notes Update --- .../04-ConfiguringCohesityDataProtectforPostgreSQL.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx b/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx index c1261432f0b..8d42ce2c09c 100644 --- a/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx @@ -74,6 +74,9 @@ Implementing Cohesity DataProtect for PostgreSQL with EDB Postgres Advanced Serv ### Configuring EDB Postgres Advanced Server +!!! 
Note When you run your first backup on the database, Cohesity sets up a file called postgresql.auto.conf containing its archive command, sets archive_mode=on in postgresql.conf, and restarts the database if you haven't already set archive_mode=on. +!!! + Set up WAL archiving on the EDB Postgres Advanced Server server by using the steps below. WAL archiving prepares Postgresql/EDB Postgres Advanced Server database servers for backup/recovery operations and is a precondition for any backup/recovery tool to work with the database server. 1. Create a writeable `` directory at your desired location. @@ -90,7 +93,4 @@ Set up WAL archiving on the EDB Postgres Advanced Server server by using the ste !!! Note Replace `` in the `archive_command` parameter with the location of the directory created in Step 1. !!! -3. Restart the PostgreSQL server. - -!!! Note When you run your first backup on the database, Cohesity will set archive_mode=on in postgresql.conf and restart the database if you have not already set archive_mode=on. -!!! \ No newline at end of file +3. Restart the PostgreSQL server. \ No newline at end of file From 23d328bde306f7082a4a673efcb84bf5021b7c19 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Mon, 14 Aug 2023 07:15:57 -0400 Subject: [PATCH 32/65] Home page What's new tab updates --- src/pages/index.js | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/src/pages/index.js b/src/pages/index.js index 5ed097441ce..0cf2d826617 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -57,22 +57,22 @@ const Page = () => (

- BigAnimal's no-commitment free trial + Installing PostgreSQL made easy

- You can now get $300 of credits to try out fully managed - Postgres with BigAnimal's cloud account and no commitment. - Then you can move your concept to production with just the - swipe of a credit card. + EDB's PostgreSQL installers and installation packages simplify + the process of installing PostgreSQL. Check out recent + improvements to our install instructions including new + instructions for installing our Linux packages.

Find out more → From be82b3470575b630da34426ec36b8d0ad15d1eb0 Mon Sep 17 00:00:00 2001 From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> Date: Mon, 14 Aug 2023 10:33:20 -0400 Subject: [PATCH 33/65] fixed typo --- .../VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx index 4f7702515a6..882f3cf7d13 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx @@ -2,6 +2,6 @@ title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- -NetBackup provides a nondistruptive way of validating your resiliency plan for assurance and compliance through automated recovery and rehearsal of business-critical applications. Moving data and spinning up applications when and where you need to without risking data loss requires business-level resiliency. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. If a NetBackup environment is operational within an organization, then users can back up and restore EDB Postgres Advanced Server and EDB Postgres Extended Server with the help of Veritas NetBackup for PostgreSQL Agent. +NetBackup provides a nondisruptive way of validating your resiliency plan for assurance and compliance through automated recovery and rehearsal of business-critical applications. Moving data and spinning up applications when and where you need to without risking data loss requires business-level resiliency. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. 
If a NetBackup environment is operational within an organization, then users can back up and restore EDB Postgres Advanced Server and EDB Postgres Extended Server with the help of Veritas NetBackup for PostgreSQL Agent. ![Veritas NetBackup for PostreSQL Achitecture](Images/ArchitectureUpdate4.png) From 324bcf973111d0af029fcc4484dd627e5a258e39 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 14 Aug 2023 11:29:17 -0400 Subject: [PATCH 34/65] Fixed typo --- .../VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx index 882f3cf7d13..54f47aab426 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/03-SolutionSummary.mdx @@ -4,4 +4,4 @@ description: 'Brief explanation of the solution and its purpose' --- NetBackup provides a nondisruptive way of validating your resiliency plan for assurance and compliance through automated recovery and rehearsal of business-critical applications. Moving data and spinning up applications when and where you need to without risking data loss requires business-level resiliency. Veritas NetBackup for PostgreSQL Agent extends the capabilities of NetBackup to include backup and restore of PostgreSQL databases. If a NetBackup environment is operational within an organization, then users can back up and restore EDB Postgres Advanced Server and EDB Postgres Extended Server with the help of Veritas NetBackup for PostgreSQL Agent.
-![Veritas NetBackup for PostreSQL Achitecture](Images/ArchitectureUpdate4.png) +![Veritas NetBackup for PostgreSQL Architecture](Images/ArchitectureUpdate4.png) From 3a2245ae2a7322ccf0a1a8d5b030783dff061c26 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 14 Aug 2023 11:30:03 -0400 Subject: [PATCH 35/65] Small copy edit --- .../04-ConfiguringVeritasNetBackupforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx index c678de43e70..3a1e112bad9 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx @@ -28,7 +28,7 @@ The example shows how to configure Veritas NetBackup for PostgreSQL for EDB Post 1. Log in to the NetBackup Administration console: - 1. Use the credentials for the root user + 1. Use the credentials for the root user. 1. Select the hostname for the NetBackup master server you want to administer.
From a661fb9a7ed83187a60cb3a1bd922e2cabc8b34f Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Mon, 14 Aug 2023 11:30:43 -0400 Subject: [PATCH 36/65] Small copy edit --- .../04-ConfiguringVeritasNetBackupforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx index 3a1e112bad9..56715945bea 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx @@ -80,7 +80,7 @@ The example shows how to configure Veritas NetBackup for PostgreSQL for EDB Post Values of the `MASTER_SERVER_NAME` and `POLICY_NAME` parameters must match the names of your NetBackup master server and datastore policy, respectively. In the sample configuration file, the values for `DB_USER`, `DB_PORT`, `DB_INSTANCE_NAME`, and `PGSQL_LIB_INSTALL_PATH` were for EDB Postgres Advanced Server as the default values are for Postgres. -See the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129276458-150015228) for detailed description of the parameters. +See the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129276458-150015228) for detailed descriptions of the parameters. From f8b6145e4f4b997d2b889de850f2195d542282d4 Mon Sep 17 00:00:00 2001 From: jingjingliu20 <86595232+jingjingliu20@users.noreply.github.com> Date: Tue, 15 Aug 2023 17:00:59 +0800 Subject: [PATCH 37/65] Update index.mdx Fix the grant command error at the example. 
--- .../release/using_cluster/01_postgres_access/index.mdx | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx index 9a448bc5eae..83780e88c56 100644 --- a/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access/index.mdx @@ -58,7 +58,7 @@ For one database hosting a single application, replace `app1` with your preferre 2. Assign the new role to your edb_admin user. Assigning this role allows you to assign ownership to the new user in the next step. For example: ``` - edb_admin=# grant app1 to edb_admin; + edb_admin=# grant edb_admin to app1; ``` 3. Create a new database to store application data. For example: @@ -87,10 +87,10 @@ If you use a single database to host multiple schemas, create a database owner a ``` prod1=# create user app1 with password 'app1_pwd'; - prod1=# grant app1 to edb_admin; + prod1=# grant edb_admin to app1; prod1=# create user app2 with password 'app2_pwd'; - prod1=# grant app2 to edb_admin; + prod1=# grant edb_admin to app2; ``` 4. Create a new schema for each application with the `AUTHORIZATION` clause for the application owner. 
For example: From 3bd5c4e9718932da5b54699f21771b506a7b4a77 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 15 Aug 2023 09:50:56 -0400 Subject: [PATCH 38/65] Update 04-ConfiguringVeritasNetBackupforPostgreSQL.mdx --- .../04-ConfiguringVeritasNetBackupforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx index 56715945bea..b135cc2f49f 100644 --- a/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/VeritasNetBackupforPostgreSQL/04-ConfiguringVeritasNetBackupforPostgreSQL.mdx @@ -79,7 +79,7 @@ The example shows how to configure Veritas NetBackup for PostgreSQL for EDB Post The value of `PGSQL_LIB_INSTALL_PATH` depends on the version of EDB Postgres Advanced Server installed. Values of the `MASTER_SERVER_NAME` and `POLICY_NAME` parameters must match the names of your NetBackup master server and datastore policy, respectively. -In the sample configuration file, the values for `DB_USER`, `DB_PORT`, `DB_INSTANCE_NAME`, and `PGSQL_LIB_INSTALL_PATH` were for EDB Postgres Advanced Server as the default values are for Postgres. +The sample configuration file shows values for `DB_USER`, `DB_PORT`, `DB_INSTANCE_NAME`, and `PGSQL_LIB_INSTALL_PATH` set for EDB Postgres Advanced Server. The default values are for Postgres. See the [Veritas NetBackup for PostgreSQL Administrator's Guide](https://www.veritas.com/content/support/en_US/doc/129277259-150015228-0/v129276458-150015228) for detailed descriptions of the parameters. 
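The grant-command fix in [PATCH 37/65] above hinges on the direction of `GRANT`: `GRANT role_a TO role_b` makes `role_b` a member of `role_a` (so `role_b` can use `role_a`'s privileges), not the other way around. A minimal sketch of the patched sequence, using the `app1` and `edb_admin` names from the example (the password is illustrative, and this assumes a session with rights to create roles and grant `edb_admin` membership):

```sql
-- Create the application role, then apply the patched grant direction.
-- GRANT edb_admin TO app1 makes app1 a member of edb_admin;
-- the removed form (GRANT app1 TO edb_admin) made edb_admin
-- a member of app1 instead.
create user app1 with password 'app1_pwd';
grant edb_admin to app1;
```

Which membership direction you need depends on which role must later act with the other's privileges (for example, when assigning ownership in the following steps), so check the surrounding steps in the guide before adopting either form.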
From a70c94f95d908846a9e6a8699024e7f3b1e4ac81 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:07:47 -0400 Subject: [PATCH 39/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 3cf43e70356..64257e3beb7 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -38,7 +38,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: 1. Create a registration token. - 1. Navigate to **Key and Access Management** and select **Registration Tokens**. This token is used for the CTE agent enrollment to CM. + 1. Navigate to **Key and Access Management** and select **Registration Tokens**. This token is used for the CTE agent enrollment to CM. 1. To create a registration token, select **New Registration Token**. 
From 0aae1cc7fdb497458ac7cb8e0e2a9b0464eac85f Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:08:06 -0400 Subject: [PATCH 40/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 64257e3beb7..ced6f633097 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -40,7 +40,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: 1. Navigate to **Key and Access Management** and select **Registration Tokens**. This token is used for the CTE agent enrollment to CM. - 1. To create a registration token, select **New Registration Token**. + 1. To create a registration token, select **New Registration Token**. The screenshot shows a registration token created with the name **edb**. 
From 14a19321da49d75c0e738ddeda3cbed1e93676fa Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:08:17 -0400 Subject: [PATCH 41/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index ced6f633097..e353c9534d5 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -42,7 +42,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: 1. To create a registration token, select **New Registration Token**. - The screenshot shows a registration token created with the name **edb**. + The screenshot shows a registration token created with the name **edb**. 
![Registration Token](Images/ConfiguringCipherTrustManager.png) From 453baaffa120af1f351df16f756ca8eac608536c Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:08:27 -0400 Subject: [PATCH 42/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index e353c9534d5..90e1d355115 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -44,7 +44,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: The screenshot shows a registration token created with the name **edb**. -![Registration Token](Images/ConfiguringCipherTrustManager.png) + ![Registration Token](Images/ConfiguringCipherTrustManager.png) 2. Create user sets. 
From 437b29063442d2e5f1191a221992112c0177da28 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:08:55 -0400 Subject: [PATCH 43/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 90e1d355115..7b6d3556992 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -48,7 +48,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: 2. Create user sets. - 1. Navigate to **CTE** and select **Policies > Policy Elements > User Sets**. + 1. Navigate to **CTE** and select **Policies > Policy Elements > User Sets**. 1. To create the user set, select **Create User Set**. 
From 35684e8fff923416e8c6d4d158646aa34f61ab12 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:09:04 -0400 Subject: [PATCH 44/65] Trying to fix subordination of steps --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 7b6d3556992..8474a406f88 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -50,7 +50,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: 1. Navigate to **CTE** and select **Policies > Policy Elements > User Sets**. - 1. To create the user set, select **Create User Set**. + 1. To create the user set, select **Create User Set**. 1. Create the Postgres, EnterpriseDB, and Barman user sets as shown in the following screenshots. 
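The three "subordination of steps" patches above all make the same mechanical change: adding leading spaces. In Markdown/MDX, a line that belongs to a numbered-list item (an image, a note, or a nested step) must be indented to the column of the item's text, or the renderer closes the list and the nesting is lost. A minimal sketch of the shape these patches produce, with content abridged from the Thales page being edited:

```
1. Create a registration token.

   The screenshot shows a registration token created with the name **edb**.

   ![Registration Token](Images/ConfiguringCipherTrustManager.png)

2. Create user sets.

   1. Navigate to **CTE** and select **Policies > Policy Elements > User Sets**.
   1. To create the user set, select **Create User Set**.
```

Without the three-space indent, the image and the substeps would render as top-level content and the numbering would restart at 1.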
From 30d25f473a9aa0b80983107535310a9bd474b23e Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 15 Aug 2023 10:29:10 -0400 Subject: [PATCH 45/65] Small improvements based on UI doc --- .../connecting_from_gcp/index.mdx | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx index 6c5ad5c0dc4..14db3add6a8 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx @@ -72,7 +72,7 @@ In the Google Cloud project connected to BigAnimal, to provide access to your cl 1. In the Filter area, under **Load Balancers**, select **Addresses** and filter for the host IP (`10.247.200.9`). Note the load balancer name (`a58262cd80b234a3aa917b719e69843f`). -1. Go to **Private Service Connect > Published Services > + Publish Service**. +1. Go to **Private Service Connect > Published Services**. 1. Select **+ Publish Service**. 1. Under **Load Balancer Type**: @@ -103,9 +103,9 @@ In the Google Cloud project connected to BigAnimal, to provide access to your cl 1. From the Google Cloud console, switch over to the project where your VM client/application resides (`test`). -1. To get the VPC of your VM (`client-app-vpc`), go to **Compute Engine > VM Instances > Network Interface > Network**. +1. To get the VPC of your VM (`client-app-vpc`), go to **Compute Engine > VM Instances**. Under **Network Interface**, note the network information. -1. To create an endpoint with the VPC, go to **Network Services > Private Service Connect - Connected Endpoints > +Connect Endpoint**. +1. 
To create an endpoint with the VPC, go to **Network Services > Private Service Connect**. Under **Connected Endpoints**, select **+ Connect Endpoint**. 1. For the target, select **Published service**, and use the service attachment captured earlier (`projects/development-001/regions/us-central1/serviceAttachments/p-mckwlbakq5`). 1. For the endpoint name, use the name of your VM client/application (`test-app-1`). @@ -128,4 +128,3 @@ In the Google Cloud project connected to BigAnimal, to provide access to your cl #### Step 3: (Optional) Set up a private DNS zone Setting up a private DNS zone in your Google Cloud project allows you to connect BigAnimal with the host. For instructions on setting up a private DNS zone, see [this knowledge base article](https://support.biganimal.com/hc/en-us/articles/20383247227801-GCP-Connect-to-BigAnimal-private-cluster-using-GCP-Private-Service-Connect#h_01H4QMHF1DJGKW5ED2BQ6YCT29). - From f230621be10748e5695cf15af73754c2b8327cbe Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 27 Jul 2023 14:12:46 -0400 Subject: [PATCH 46/65] Edits to Repostor partner doc --- .../02-PartnerInformation.mdx | 12 +-- .../03-SolutionSummary.mdx | 17 ++-- ...ringRepostorDataProtectorforPostgreSQL.mdx | 79 +++++++++---------- ...singRepostorDataProtectorforPostgreSQL.mdx | 51 ++++-------- .../06-CertificationEnvironment.mdx | 8 +- .../index.mdx | 7 +- 6 files changed, 76 insertions(+), 98 deletions(-) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/02-PartnerInformation.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/02-PartnerInformation.mdx index 294f393d4b6..19f05b129e3 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/02-PartnerInformation.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/02-PartnerInformation.mdx @@ -1,5 +1,5 @@ --- -title: 'Partner Information' +title: 'Partner information' description: 
'Details for Repostor Data Protector for PostgreSQL' redirects: - /partner_docs/RepostorGuide/02-PartnerInformation/ @@ -7,8 +7,8 @@ redirects: |   |   | | ----------- | ----------- | -| **Partner Name** | Repostor | -| **Partner Product** | Data Protector for PostgreSQL | -| **Web Site** | www.repostor.com | -| **Version & Platform** | 5.0.0.0-96 https://www.repostor.com/downloadpage/ https://www.repostor.com/products/#data | -| **Product Description** | Repostor Data Protector allows you to do fast, hot online backups in conjecture with IBM Spectrum Protect. After a full backup has been taken, an incremental backup handles the changes that have occurred to the database. | +| **Partner name** | Repostor | +| **Partner product** | Data Protector for PostgreSQL | +| **Website** | www.repostor.com | +| **Version & platform** | 5.0.0.0-96 https://www.repostor.com/downloadpage/ https://www.repostor.com/products/#data | +| **Product description** | Repostor Data Protector allows you to do fast, hot, online backups with IBM Spectrum Protect. After you take a full backup, an incremental backup handles the changes to the database. | diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/03-SolutionSummary.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/03-SolutionSummary.mdx index 1bb8232034e..6f457624453 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/03-SolutionSummary.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/03-SolutionSummary.mdx @@ -1,18 +1,17 @@ --- -title: 'Solution Summary' +title: 'Solution summary' description: 'Brief explanation of the solution and its purpose' --- The Repostor Data Protector for PostgreSQL (RDP) is an integration between EDB Postgres Advanced Server and IBM Spectrum -Protect. It enables backup and restore to Spectrum Protect and to have WAL logs archived instantly to IBM +Protect. 
It enables backup and restore to Spectrum Protect and instant archiving of WAL logs to IBM
Spectrum Protect. It also enables backup and restore for the database administrators without the need for them
-to have Spectrum Protect knowledge. The current version of the RDP uses high level calls to standard PostgreSQL
-tools such as psql, pg_dump, pg_restore, pg_start_backup, pg_stop_backup.
+to have Spectrum Protect knowledge. The current version of the RDP uses high-level calls to standard PostgreSQL
+tools, such as psql, pg_dump, pg_restore, pg_start_backup, and pg_stop_backup.

-The RDP supports backup on two levels, database level and instance level. Backup on the database level uses the pg_dump tool.
+The RDP supports backup on two levels: database level and instance level. Backup on the database level uses the pg_dump tool.

-The backup on the instance level uses file backup of the data_directory and any external tablespace locations. This requires activation of WAL archiving and the `archive_command` set to run the RDP tool logwriter.
+The backup on the instance level uses file backup of the data_directory and any external tablespace locations. This capability requires activating WAL archiving and setting the `archive_command` to run the RDP tool logwriter.
+
+![Repostor Solution Summary](Images/RepostorSolutionSummary.png)
-
diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx index 1a3e83a9310..0ad4ec6bbe1 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx @@ -7,55 +7,55 @@ description: 'Walkthrough of configuring Repostor Data Protector for PostgreSQL' - The database host with the EDB Postgres Advanced Server environment needs the following components: - - IBM supported operating system for Spectrum Protect clients. + - IBM supported operating system for Spectrum Protect clients - - IBM Spectrum Protect BA client (used for regular file backup). + - IBM Spectrum Protect BA client (used for regular file backup) - - IBM Spectrum Protect API client. + - IBM Spectrum Protect API client - - EDB Postgres Advanced Server version 10 or above. + - EDB Postgres Advanced Server version 10 or later - - Repostor Data Protector client. + - Repostor Data Protector client -- A PostgreSQL user needs to be defined for use with the RDP. +- A PostgreSQL user must be defined for use with the RDP. - - This user needs to be able to connect to EDB Postgres Advanced Server and have sufficient permissions for + - This user must be able to connect to EDB Postgres Advanced Server and have sufficient permissions for database backup and restore. - This user needs access to local Spectrum Protect files to read configuration files and to write to log files. !!! Note - If the user is not the same user that owns the PostgreSQL server process, make sure that the server process owner has the correct SP client file access so they can execute `archive_command`. 
+ If the user isn't the same user that owns the PostgreSQL server process, make sure that the server process owner has the correct SP client file access so they can execute `archive_command`. -- A Spectrum Protect node needs to be defined in the Spectrum Protect server environment in a +- A Spectrum Protect node must be defined in the Spectrum Protect server environment in a management class that suits the needs of the DBA team. -- The Spectrum Protect client details for setting up connection to Spectrum Protect server needs to be -available including the client Spectrum Protect password. +- The Spectrum Protect client details for setting up connection to Spectrum Protect server must be +available, including the client Spectrum Protect password. -## Installation and configuration of Repostor Data Protector for PostgreSQL +## Installing and configuring Repostor Data Protector for PostgreSQL The high-level steps for installing and configuring the integration are: 1. [Configure local Spectrum Protect configuration files.](/#configure-local-spectrum-protect-configuration-files) 1. [Install Repostor Data Protector client.](#install-repostor-data-protector-client) -1. [Verify the Connection to PostgreSQL psql.](/#verify-the-connection-to-postgresql-psql) +1. [Verify the connection to PostgreSQL psql.](/#verify-the-connection-to-postgresql-psql) 1. [Configure PostgreSQL `archive_command` to use logwriter.](#configure-postgresql-archive_command-to-use-logwriter) 1. [Set up a backup script.](#set-up-a-backup-script) -### Configure the Local Spectrum Protect Configuration Files +### Configure the local Spectrum Protect Configuration files Configure the local Spectrum Protect configuration files with details needed to connect to -Spectrum Protect server: +Spectrum Protect server. -1. Create Spectrum Protect options file with the logical servername. +1. Create a Spectrum Protect options file with the logical server name. -2. 
Edit the `dsm.sys` file by adding a section for the new logical servername for this Spectrum Protect node with +2. Edit the `dsm.sys` file by adding a section for the new logical server name for this Spectrum Protect node with connection details. -3. Create soft links from the API directory to the ba clients bin directory, for `dsm.sys` and `dsm.opt`. This example uses `dsm.postgres.opt` Spectrum Protect options filename: +3. Create soft links from the API directory to the ba clients `bin` directory for `dsm.sys` and `dsm.opt`. This example uses `dsm.postgres.opt` Spectrum Protect options for the filename: ``` ln -s /opt/tivoli/tsm/client/ba/bin/dsm.sys @@ -64,26 +64,26 @@ connection details. /opt/tivoli/tsm/client/api/bin64/dsm.opt ``` - Here is the listing (`ls -l` command) from API directory for this example showing the links: + Here's the listing (`ls -l` command) from the API directory for this example showing the links: ``` dsm.opt -> /opt/tivoli/tsm/client/ba/bin/dsm.postgres.opt dsm.sys -> /opt/tivoli/tsm/client/ba/bin/dsm.sys ``` -4. Set the `DSMI_CONFIG` variable for the OS user that runs the RDP tools. This is preferably the same user +4. Set the `DSMI_CONFIG` variable for the OS user that runs the RDP tools, preferably the same user that owns the PostgreSQL server process. -5. Verify that the OS user can connect to Spectrum Protect server. This also verifies the correct file access to local SP files. +5. Verify that the OS user can connect to Spectrum Protect server and the correct file access to local SP files: ``` dsmc q session -se=XXX ``` - where ‘XXX’ is your logical servername. + Where ‘XXX’ is your logical server name. -### Install the Repostor Data Protector Client +### Install the Repostor Data Protector client 1. Verify that IBM Spectrum Protect clients for API and BA are already installed. For example: @@ -93,11 +93,11 @@ that owns the PostgreSQL server process. TIVsm-BA-8.1.8-0.x86_64 ``` -2. 
Install RDP (verify that you have latest version on www.repostor.com): +2. Install RDP. (Verify that you have latest version on www.repostor.com.) ``` rpm -ivh rdp4Postgres-4.4.4.0-31.x86_64 ``` - If you are on Ubuntu, you need to prepare the package with the `alien` tool: + If you're on Ubuntu, you need to prepare the package with the `alien` tool: ``` sudo apt-get install alien @@ -107,26 +107,26 @@ that owns the PostgreSQL server process. 3. Install the license file. The `license.dat` file should be placed in the Repostor `/opt/repostor/rdp4Postgres/etc` directory. If no -license is available yet a trial license is automatically generated the first time a backup is run. This +license is available yet, a trial license is generated the first time you run a backup. This trial license needs to be cleared with a special `UNLOCK` key before changing to a contract license. 4. Add the Repostor `bin` directory to the PATH. All users that run RDP commands need to have the PATH set to include the Repostor `bin` directory. The path is `/opt/repostor/rdp4Postgres/bin`. -### Verify the Connection to PostgreSQL psql +### Verify the connection to PostgreSQL psql -The user used with the `-u` option to the RDP commands needs to be able to connect to PostgreSQL and be -allowed to backup/restore databases. +The user used with the `-u` option to the RDP commands must be able to connect to PostgreSQL and be +allowed to back up/restore databases. 
-For example, to verify the connection for user `enterprisedb`: +For example, to verify the connection for user enterprisedb: ``` psql -U enterprisedb -l ``` -### Configure PostgreSQL `archive_command` to Use logwriter +### Configure PostgreSQL `archive_command` to use logwriter -If your PostgreSQL environment has WAL activated and you plan to backup PostgreSQL on the instance -level (`-f` option with RDP), then you need to configure the PostgreSQL ‘archive_command’ to run the +If your PostgreSQL environment has WAL activated and you plan to back up PostgreSQL on the instance +level (`-f` option with RDP), then you need to configure the PostgreSQL `archive_command` to run the RDP `logwriter.script`. For example, this is a sample specification of the `archive_command` in the `postgresql.conf` file: @@ -136,15 +136,15 @@ For example, this is a sample specification of the `archive_command` in the `pos -s “%p” -d %f’ ``` !!! note - The instance name you specify with the `-S` option needs to be the same as the one you use with the RDP `postgresbackup` - command. The `logwriter.script` is a script that calls the logwriter binary. It is a script to allow for local configuration if you want + The instance name you specify with the `-S` option must be the same as the one you use with the RDP `postgresbackup` + command. The `logwriter.script` is a script that calls the logwriter binary. It's a script to allow for local configuration if you want to set a specific environment before running the logwriter. -### Set up a Backup Script +### Set up a backup script -If you run `postgresbackup` from a script that initiated from Spectrum Protect scheduler, you should set `LOGNAME`. +If you run `postgresbackup` from a script that initiated from Spectrum Protect scheduler, set `LOGNAME`. 
-Note that the PATHs and filenames are unique to each installation in this exampple: +In this example, the paths and filenames are unique to each installation: ``` #!/bin/bash @@ -160,6 +160,3 @@ Note that the PATHs and filenames are unique to each installation in this exampp ### Run postgresbackup command, send output to logfile under /tmp postgresbackup -u enterprisedb -f -z -v >/tmp/postgresbackup.log 2>&1 ``` - - - diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx index 9f187ecef9a..f194616cc76 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx @@ -2,54 +2,37 @@ title: 'Using Repostor Data Protector for PostgreSQL' description: 'Walkthroughs of multiple Repostor Data Protector for PostgreSQL usage scenarios' --- -The current RDP version is a command-line client. There are three commands that the user runs -(`postgresbackup`, `postgresquery` and `postgresrestore`) and there are the logwriter and logreader tools that are -automatically called by PostgreSQL during execution of `archive_command` and `restore_command`. +The current RDP version is a command-line client. You run three commands: +`postgresbackup`, `postgresquery`, and `postgresrestore`. The logwriter and logreader tools are called by PostgreSQL during execution of `archive_command` and `restore_command`. -The follwong screenshots are examples of daily operation tasks. For details and more examples, see chapter 5 in the *RDP User Guide*. +The screenshots that follow are examples of daily operation tasks. For details and more examples, see chapter 5 in the *RDP User Guide*. 
+## Instance-level backup example +![Instance-Level Backup](Images/InstanceLevelBackup.png) -## Instance Level Backup Example +## Query of available instance backups on the Spectrum Protect Server example -
+![Query Available Instance Backups](Images/QueryofAvailableInstanceBackups.png) -## Query of Available Instance Backups on the Spectrum Protect Server Example +## Instance restore example -
+![Instance Restore](Images/InstanceRestore.png) -## Instance Restore Example +## Database-level backup example -
+![Database-Level Backup](Images/DatabaseLevelBackup.png) -## Database Level Backup Example +## Query of available backups on the Spectrum Protect Server example -
+![Query of Available Backups](Images/AvailableBackups.png) -## Query of Available Backups on the Spectrum Protect Server Example - -
- -## Restore of Database Level Backup Example +## Restore of database-level backup example Dropping database for visibility only. -
+![Database-Level Backup](Images/RestoreofDatabaseLevelBackup.png) -## Redirected Restore of Database level Backup Example +## Redirected restore of database-level backup example -
+![Redirected Restore](Images/RedirectedRestore.png) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/06-CertificationEnvironment.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/06-CertificationEnvironment.mdx index 4ad09d78124..dd700ca1c67 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/06-CertificationEnvironment.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/06-CertificationEnvironment.mdx @@ -1,16 +1,16 @@ --- -title: 'Certification Environment' -description: 'Overview of the certification environment used in the certification of Repostor Data Protector for PostgreSQL' +title: 'Certification environment' +description: 'Overview of the certification environment used for certifying Repostor Data Protector for PostgreSQL' redirects: - /partner_docs/RepostorGuide/06-CertificationEnvironment/ --- |   |   | | ----------- | ----------- | -| **Certification Test Date**| April 6, 2021 | +| **Certification test date**| April 6, 2021 | | **EDB Postgres Advanced Server** | 13 | | **Repostor Data Protector for PostgreSQL**| 5.0.0.0-96 | | **OS** | CentOS Linux 7.9.2009 | | **IBM Tivoli Storage Manager client API**| TIVsm-API64-8.1.8-0.x86_64 | | **IBM Tivoli Storage Manager client BA**| TIVsm-BA-8.1.8-0.x86_64 | -| **IBM Tivoli Storage Manager Server**| 8.1.8.000 | +| **IBM Tivoli Storage Manager server**| 8.1.8.000 | diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx index 512a9c9bb47..9f9acccc6cc 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Repostor Data Protector for PostgresSQL Implementation Guide' +title: 'Implementing Repostor Data Protector for PostgresSQL' indexCards: simple directoryDefaults: iconName: handshake @@ 
-7,8 +7,7 @@ redirects: - /partner_docs/RepostorGuide/ --- -
+![Partner Program Logo](Images/PartnerProgram.png) +

EDB GlobalConnect Technology Partner Implementation Guide

Repostor Data Protector for PostgresSQL

From 537ba6c10521abd670f6c42882045718d349bcf5 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Tue, 1 Aug 2023 11:11:50 -0400 Subject: [PATCH 47/65] Update 04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx --- .../04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx index 0ad4ec6bbe1..5991aedc544 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx @@ -93,7 +93,7 @@ that owns the PostgreSQL server process. TIVsm-BA-8.1.8-0.x86_64 ``` -2. Install RDP. (Verify that you have latest version on www.repostor.com.) +2. Install RDP. (Verify that you have the latest version on www.repostor.com.) 
``` rpm -ivh rdp4Postgres-4.4.4.0-31.x86_64 ``` From 9a04fe658a050581bd5bf7df9d75ff286276b1a5 Mon Sep 17 00:00:00 2001 From: Betsy Gitelman <93718720+ebgitelman@users.noreply.github.com> Date: Thu, 3 Aug 2023 09:42:38 -0400 Subject: [PATCH 48/65] Updates per Jennifer's feedback --- .../04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx | 2 +- .../05-UsingRepostorDataProtectorforPostgreSQL.mdx | 2 +- .../partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx index 5991aedc544..e5585e4f7f0 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx @@ -1,5 +1,5 @@ --- -title: 'Configuring Repostor Data Protector for PostgreSQL' +title: 'Configuring' description: 'Walkthrough of configuring Repostor Data Protector for PostgreSQL' --- diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx index f194616cc76..45ef071fca7 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/05-UsingRepostorDataProtectorforPostgreSQL.mdx @@ -1,5 +1,5 @@ --- -title: 'Using Repostor Data Protector for PostgreSQL' +title: 'Using' description: 'Walkthroughs of multiple Repostor Data Protector for PostgreSQL usage scenarios' --- The current RDP version is a command-line client. 
You run three commands: diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx index 9f9acccc6cc..1d38e281eeb 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/index.mdx @@ -1,5 +1,5 @@ --- -title: 'Implementing Repostor Data Protector for PostgresSQL' +title: 'Repostor Data Protector for PostgresSQL Implementation Guide' indexCards: simple directoryDefaults: iconName: handshake From 2b27020133440c749a2a9592d1fe4a35a7810f42 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 11:24:05 -0400 Subject: [PATCH 49/65] Fixed capitalization issue --- .../04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx index e5585e4f7f0..58a1a6527a8 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx @@ -45,7 +45,7 @@ The high-level steps for installing and configuring the integration are: 1. [Configure PostgreSQL `archive_command` to use logwriter.](#configure-postgresql-archive_command-to-use-logwriter) 1. [Set up a backup script.](#set-up-a-backup-script) -### Configure the local Spectrum Protect Configuration files +### Configure the local Spectrum Protect configuration files Configure the local Spectrum Protect configuration files with details needed to connect to Spectrum Protect server. 
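The local Spectrum Protect wiring that these Repostor configuration steps describe — an options file per logical server name, a matching stanza in `dsm.sys`, symlinks from the API directory, and `DSMI_CONFIG` — can be sketched in one place. Every value below is a placeholder: the logical server name `postgres`, the node name, and the TCP address must come from your own Spectrum Protect environment.

```
# /opt/tivoli/tsm/client/ba/bin/dsm.postgres.opt
SERVERNAME postgres

# Stanza added to /opt/tivoli/tsm/client/ba/bin/dsm.sys
SERVERNAME         postgres
  COMMMETHOD       TCPip
  TCPSERVERADDRESS sp-server.example.com
  TCPPORT          1500
  NODENAME         pg-node01
  PASSWORDACCESS   GENERATE
```

With the symlinks from the API directory in place, set `DSMI_CONFIG` for the OS user that runs the RDP tools (for example, `export DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin64/dsm.opt`) and confirm the stanza resolves with `dsmc q session -se=postgres`.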
From 3b23fcfa3207ead2b4a6dcee3d8c571d8f7353c8 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 11:24:30 -0400 Subject: [PATCH 50/65] minor rewrite for clarity --- .../04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx index 58a1a6527a8..6e386a32fc9 100644 --- a/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/RepostorDataProtectorforPostgreSQL/04-ConfiguringRepostorDataProtectorforPostgreSQL.mdx @@ -74,7 +74,7 @@ connection details. 4. Set the `DSMI_CONFIG` variable for the OS user that runs the RDP tools, preferably the same user that owns the PostgreSQL server process. -5. Verify that the OS user can connect to Spectrum Protect server and the correct file access to local SP files: +5. 
Verify that the OS user can connect to Spectrum Protect server, which also verifies correct file access to local SP files: ``` dsmc q session -se=XXX From 4c2fa73fc2ef072a8d13f3c49ee0da79e9f10cf8 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 11:44:55 -0400 Subject: [PATCH 51/65] Added spaces between option names for clarity --- .../partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx index fcfe05ebeca..b067bb683ba 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx @@ -14,7 +14,7 @@ To configure Quest Toad Edge for EDB Postgres Advanced Server and EDB Postgres E 1. Launch the Toad Edge application. 2. On the main menu, select **Connect > New Connection**. -3. In the New Connection screen, enter the following values: **Hostname**,**Port**,**Database**,**Username**,**Password**. +3. In the New Connection screen, enter the following values: **Hostname**, **Port**, **Database**, **Username**, and **Password**. 4. On the left pane, select **EDB Postgres Advanced Server**. 5. To verify connectivity to the database server, select **Test Connection**. 6. If the connection is successful, select **Connect**. Otherwise, verify your database information and try again. 
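When **Test Connection** fails in Toad Edge, it can help to confirm the same five values from the command line first. This is only a sketch with placeholder values — `5444` is the usual EDB Postgres Advanced Server listener port, while community PostgreSQL defaults to `5432`:

```shell
# Placeholder connection details -- use the same values you enter in Toad Edge.
HOST=localhost
PORT=5444           # EDB Postgres Advanced Server default; PostgreSQL uses 5432
DB=edb
DBUSER=enterprisedb

# Build the equivalent psql invocation for a quick out-of-band check.
CONN="psql -h $HOST -p $PORT -d $DB -U $DBUSER"
echo "$CONN"
# prints: psql -h localhost -p 5444 -d edb -U enterprisedb
```

If running `$CONN -c 'SELECT version();'` succeeds with those values, the server side is reachable and only the Toad Edge entries need rechecking.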
From ed557fa09481e522d1829a5b35b0563e3654b5d3 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Tue, 15 Aug 2023 11:45:05 -0400 Subject: [PATCH 52/65] Added spaces between option names for clarity --- .../partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx index b067bb683ba..1c2cd26138f 100644 --- a/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx +++ b/advocacy_docs/partner_docs/QuestToadEdge/04-ConfiguringQuestToadEdge.mdx @@ -36,7 +36,7 @@ If needed, you can update the EDB JDBC driver by downloading the latest version 1. Launch the Toad Edge application. 2. On the main menu, select **Connect > New Connection**. -3. In the New Connection screen, enter the following values: **Hostname**,**Port**,**Database**,**Username**,**Password**. +3. In the New Connection screen, enter the following values: **Hostname**, **Port**, **Database**, **Username**, and **Password**. 4. On the left pane, select **PostgreSQL**. 5. To verify connectivity to the database server, select **Test Connection**. 6. If the connection is successful, select **Connect**. Otherwise, verify your database information and try again. 
From 56fdb12178d77bbb5a4d91987711dc271fd80a0e Mon Sep 17 00:00:00 2001 From: Dee Dee Rothery <83650384+drothery-edb@users.noreply.github.com> Date: Tue, 15 Aug 2023 14:07:18 -0400 Subject: [PATCH 53/65] incorporated Dave'd suggestion --- .../04-ConfiguringThalesCipherTrustTransparentEncryption.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx index 8474a406f88..bfe650e1e87 100644 --- a/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx +++ b/advocacy_docs/partner_docs/ThalesCipherTrustTransparentEncryption/04-ConfiguringThalesCipherTrustTransparentEncryption.mdx @@ -61,7 +61,7 @@ Log in to the CipherTrust Manager (CM) web UI. Then: ![Create User Sets2](Images/CreateUserSets3.png) -3. Create the policies. Navigate back to **Policies** and select **Create Policy**. +3. Create a policy by navigating back to **Policies** and selecting **Create Policy**. The following screenshots show the live data transformation (LDT) policies postgres-policy, epas-policy, and barman-policy. 
From a59b0d692e7ae5a6e67915abf3ff38c16bffdacd Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 15 Aug 2023 23:00:12 +0000 Subject: [PATCH 54/65] Add images to whats-new, fix warning --- src/components/table-of-contents.js | 2 +- src/pages/index.js | 8 ++++++++ 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/src/components/table-of-contents.js b/src/components/table-of-contents.js index 241c6dd6e50..6af04b6f869 100644 --- a/src/components/table-of-contents.js +++ b/src/components/table-of-contents.js @@ -31,7 +31,7 @@ const TableOfContents = ({ toc, deepToC }) => { > {item.title} - {deepToC && item.items != undefined && ( + {deepToC && item.items?.filter && (
    {item.items .filter((subitem) => subitem.title) diff --git a/src/pages/index.js b/src/pages/index.js index 0cf2d826617..3691f60b775 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -63,6 +63,10 @@ const Page = () => (

    + EDB's PostgreSQL installers and installation packages simplify the process of installing PostgreSQL. Check out recent improvements to our install instructions including new @@ -99,6 +103,10 @@ const Page = () => (

    + Use the new reference section in EDB Postgres Distributed to quickly look up views, catalogs, functions, and variables. It's a new view of the documentation designed to centralize From e7c233020388cd1bd902603bf75e8e86a0557eb3 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 09:11:17 -0400 Subject: [PATCH 55/65] Fixed type --- .../PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx index b13f42abc62..5d900d58895 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/04-Configuratingpreciselyconnectcdc.mdx @@ -16,7 +16,7 @@ Implementing Precisely Connect CDC with EDB Postgres Advanced Server requires th ## Configuring your PostgreSQL distribution -You need these components before integrating you PostgreSQL distribution with Precisely Connect CDC: +You need these components before integrating your PostgreSQL distribution with Precisely Connect CDC: - Two running instances of EDB Postgres Advanced Server From de06df5801e7e85d6961f4647b7ef70af2cddecc Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 09:11:35 -0400 Subject: [PATCH 56/65] Fixed typo --- .../PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx index 0dd0f65801c..1bfa16ece80 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx +++ 
b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx @@ -20,7 +20,7 @@ After you configure your EDB Postgres distribution, you can use Precisely Connec - Execute replication using Connect CDC MonCon. - Replication distribution replicates all the changes done on the source tables to the target tables. -With Precisely Connect CDC, you can also transform your data as you replicate it from a source to target using data transformation. For inoformation on how to enable this feature, see [Data transformation](#data-transformation). +With Precisely Connect CDC, you can also transform your data as you replicate it from a source to target using data transformation. For information on how to enable this feature, see [Data transformation](#data-transformation). - Data transformation - Modify the source data you're distributing or create or derive the data you distribute to the target system. From 9119bbc9c0894deeb5439615cc8d8d01f8eba8f9 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 09:12:04 -0400 Subject: [PATCH 57/65] Very minor wording change --- .../PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx index 1bfa16ece80..8733a67eba3 100644 --- a/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx +++ b/advocacy_docs/partner_docs/PreciselyConnectCDC/05-Usingpreciselyconnectcdc.mdx @@ -111,7 +111,7 @@ Set up and configure EDB Postgres Advanced Server as a target instance to Connec ![pgAdmin](Images/targetpgadmin.png) -8. Right-click the newly added server, and select **Prepare User Database**, which adds the replication user to EDB Postgres Advanced Server target instance. +8. 
Right-click the newly added server, and select **Prepare User Database**, which adds the replication user to the EDB Postgres Advanced Server target instance. 9. In **Database/Schema Name**, provide the schema name `public`. From efdcf3fa0a7fbf2a58d9f3beda10d3fc7e2f4589 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 09:51:06 -0400 Subject: [PATCH 58/65] Fixed typo --- .../partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx index 3d53886284d..30b0d520f29 100644 --- a/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx +++ b/advocacy_docs/partner_docs/LiquibasePro/05-UsingLiquibasePro.mdx @@ -19,7 +19,7 @@ Create a project for the target database instance on the Liquibase Hub. All data ![Create Project page](Images/IntegrationViews2.png) -### Applying catabase changes +### Applying database changes These examples show how to apply database changes using Liquibase changesets, including: From c1b22908a2a1c6924f1182f2a7dd934a0bf2b834 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 10:28:05 -0400 Subject: [PATCH 59/65] Minor edit for clarity --- .../04-ConfiguringTransitSecretsEngine.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx index 4243810090a..5d6357b42ca 100644 --- a/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx +++ b/advocacy_docs/partner_docs/HashicorpVaultTransitSecretsEngine/04-ConfiguringTransitSecretsEngine.mdx @@ -60,4 +60,4 @@ 
root@ip-172-31-50-151:/usr/lib/edb-pge/15/bin# vault write -f transit/keys/pg-td Success! Data written to: transit/keys/pg-tde-master-1 ``` -Your encryption key is now set and are ready to export your WRAP and UNWRAP commands and initialize your database. \ No newline at end of file +Your encryption key is now set and you are ready to export your WRAP and UNWRAP commands and initialize your database. \ No newline at end of file From cd1b733e80e8db844d92c69ee32af0e8587d7193 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 10:33:16 -0400 Subject: [PATCH 60/65] Fixed minor wording issue --- .../partner_docs/ImpervaDataSecurityFabric/07-Support.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx index ad7156eb0b6..552becba4dc 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/07-Support.mdx @@ -5,7 +5,7 @@ description: 'Details of the support process and logging information' ## Support -Technical support for the use of these products is provided by both EDB and Imperva. A support contract is must in place at both EDB and Imperva. You can open a support ticket with either company to start the process. If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. +Technical support for the use of these products is provided by both EDB and Imperva. A support contract must be in place at both EDB and Imperva. You can open a support ticket with either company to start the process. 
If it's determined through the support ticket that resources from the other vendor are required, open a support ticket with that vendor through normal support channels. This approach allows both companies to work together to help you as needed. ## Logging From 5577079e54a0230644a9bd02a45add1f486500b7 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 10:33:38 -0400 Subject: [PATCH 61/65] Added the word "agent" for clarity --- .../05-UsingImpervaDataSecurityFabric.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx index 9be4a1cede2..507516292b9 100644 --- a/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx +++ b/advocacy_docs/partner_docs/ImpervaDataSecurityFabric/05-UsingImpervaDataSecurityFabric.mdx @@ -25,7 +25,7 @@ You can use Imperva agents within the Imperva Data Security Fabric solution to t Set up the Imperva Data Security Fabric agent to monitor EDB Postgres Advanced Server traffic. -1. Install Imperva Data Security Fabric. +1. Install Imperva Data Security Fabric agent. 2. Run the basic management configuration. 
From 66a358d434d404de496d1cadda72005b3321b5c2 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 11:12:05 -0400 Subject: [PATCH 62/65] minor edit to improve clarity --- .../HashicorpVault/04-ConfiguringHashicorpVault.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx index 1024aff4756..05dbcc0fa05 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx @@ -117,7 +117,7 @@ root@ip-172-31-46-134:/home/ubuntu# vault write kmip/config listen_addrs=0.0.0.0 Success! Data written to: kmip/config ``` -6. To create the scope to use to define the allowed operations a role can perform, enter `vault write -f kmip/scope/`: +6. To create the scope for defining allowed operations a role can perform, enter `vault write -f kmip/scope/`: ```bash root@ip-172-31-46-134:/home/ubuntu# vault write -f kmip/scope/edb From 65fac43395763759e1560e6207dd3192df646613 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 11:12:26 -0400 Subject: [PATCH 63/65] minor edit to improve clarity --- .../HashicorpVault/04-ConfiguringHashicorpVault.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx index 05dbcc0fa05..88222b36acf 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/04-ConfiguringHashicorpVault.mdx @@ -95,7 +95,7 @@ Complete! ## Configure Hashicorp Vault KMIP secrets engine !!! 
Note - You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port. If you receive the error message, “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client,” you need to issue this at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`. + You have to set your environment variable with Hashicorp Vault before you can configure the Hashicorp Vault server using the API IP address and port. If you receive the error message, “Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client,” enter this command at your command line: `export VAULT_ADDR="http://127.0.0.1:8200"`. After your Hashicorp Vault configuration is installed and deployed per the guidelines in the [Hashicorp documentation](https://developer.hashicorp.com/vault/tutorials/getting-started/getting-started-install), you then need to enable the KMIP capabilities. From f72249424098fa817c6154b1592f3304e24ad6c1 Mon Sep 17 00:00:00 2001 From: David Wicinas <93669463+dwicinas@users.noreply.github.com> Date: Wed, 16 Aug 2023 11:12:43 -0400 Subject: [PATCH 64/65] fixed typo --- .../partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx index 2660b97d8ea..cef616604be 100644 --- a/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx +++ b/advocacy_docs/partner_docs/HashicorpVault/05-UsingHashicorpVault.mdx @@ -131,7 +131,7 @@ root@ip-172-31-46-134:/etc/vault.d# ## Perform initdb for the database -After you complete the previous steps, can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database. 
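The workaround in the note above amounts to pointing the Vault CLI at the plain-HTTP listener before running any `vault` commands. A sketch of the session setup, using the same address the note gives (a TLS-enabled deployment would use an `https://` address instead):

```shell
# Dev-mode address from the note above; every vault CLI call in this
# shell session will now target the HTTP listener instead of HTTPS.
export VAULT_ADDR="http://127.0.0.1:8200"
echo "Vault CLI will target: $VAULT_ADDR"
```

Because the variable is per-session, putting the `export` in the shell profile of whichever OS user runs the `vault` commands avoids hitting the HTTPS-client error again after a new login.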
+After you complete the previous steps, you can export the PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD to wrap and unwrap your encryption key and initialize your database. 1. Log in to your EDB Postgres distribution as the superuser. For our example, use the enterprisedb user: `sudo su - enterprisedb`. From fa15228b787934613ec1b45586ccd380af09eb76 Mon Sep 17 00:00:00 2001 From: drothery-edb Date: Wed, 16 Aug 2023 11:39:36 -0400 Subject: [PATCH 65/65] adjusted the note formatting --- ...04-ConfiguringCohesityDataProtectforPostgreSQL.mdx | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx b/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx index 8d42ce2c09c..5efe6f2a445 100644 --- a/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx +++ b/advocacy_docs/partner_docs/CohesityDataProtectforPostgreSQL/04-ConfiguringCohesityDataProtectforPostgreSQL.mdx @@ -52,7 +52,8 @@ Implementing Cohesity DataProtect for PostgreSQL with EDB Postgres Advanced Serv - In the **App Authentication** section, enter the admin username and password for the user who has admin privileges on your database to perform a backup. -!!! Note Instead of password-based authentication, if you want to use kerberos authentication, then leave the username and password fields blank. +!!!note +Instead of password-based authentication, if you want to use kerberos authentication, then leave the username and password fields blank. !!! ![Cohesity Universal Data Adapter Information](Images/CohesityUniversalDataAdapterInformation.png) @@ -74,7 +75,8 @@ Implementing Cohesity DataProtect for PostgreSQL with EDB Postgres Advanced Serv ### Configuring EDB Postgres Advanced Server -!!! 
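PGDATAKEYWRAPCMD and PGDATAKEYUNWRAPCMD form a contract: whatever the wrap command emits, the unwrap command must turn back into the exact original data key, or initdb and later server startups fail. A stand-in round trip that illustrates only that contract — base64 is used here purely as a placeholder for the real Vault transit encrypt/decrypt calls, and the key value is invented:

```shell
# Stand-in for PGDATAKEYWRAPCMD / PGDATAKEYUNWRAPCMD: base64 replaces the
# real Vault transit call so the round trip can run anywhere.
DATA_KEY="0123456789abcdef"            # invented example data key

WRAPPED=$(printf '%s' "$DATA_KEY" | base64)      # "wrap" step
UNWRAPPED=$(printf '%s' "$WRAPPED" | base64 -d)  # "unwrap" step

# initdb only succeeds if unwrap returns exactly what wrap consumed.
[ "$UNWRAPPED" = "$DATA_KEY" ] && echo "wrap/unwrap round trip ok"
# → wrap/unwrap round trip ok
```

Verifying the pair round-trips cleanly from the command line, before pointing initdb at the real Vault-backed commands, catches quoting and encoding mistakes early.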
Note When you run your first backup on the database, Cohesity will set up a file called postgresql.auto.conf with their archive command and they will set archive_mode=on in postgresql.conf and restart the database if you have not already set archive_mode=on. +!!!note +When you run your first backup on the database, Cohesity will set up a file called postgresql.auto.conf with their archive command and they will set archive_mode=on in postgresql.conf and restart the database if you have not already set archive_mode=on. !!! Set up WAL archiving on the EDB Postgres Advanced Server server by using the steps below. WAL archiving prepares Postgresql/EDB Postgres Advanced Server database servers for backup/recovery operations and is a precondition for any backup/recovery tool to work with the database server. @@ -90,7 +92,8 @@ Set up WAL archiving on the EDB Postgres Advanced Server server by using the ste archive_command = test ! -f /%f && cp %p /%f ``` -!!! Note Replace `` in the `archive_command` parameter with the location of the directory created in Step 1. -!!! + !!!note + Replace `` in the `archive_command` parameter with the location of the directory created in Step 1. + !!! 3. Restart the PostgreSQL server. \ No newline at end of file
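The `archive_command` shown in step 2 of the Cohesity patch has two parts: `test ! -f <archive_dir>/%f` makes the command fail rather than overwrite an already-archived segment, and `cp` performs the copy; Postgres treats a nonzero exit as "archiving failed, retry later." A self-contained sketch of that behavior, with a temporary directory standing in for the archive location and an invented segment name:

```shell
# Temporary stand-ins for the WAL segment (%p) and the archive directory.
ARCHIVE_DIR=$(mktemp -d)
SEGMENT=$(mktemp)
printf 'wal bytes' > "$SEGMENT"
NAME=000000010000000000000001   # what Postgres substitutes for %f

# First archival succeeds: the target does not exist yet.
test ! -f "$ARCHIVE_DIR/$NAME" && cp "$SEGMENT" "$ARCHIVE_DIR/$NAME" \
  && echo "archived $NAME"

# A second attempt exits nonzero, so an existing segment is never clobbered.
if test ! -f "$ARCHIVE_DIR/$NAME" && cp "$SEGMENT" "$ARCHIVE_DIR/$NAME"; then
  echo "overwrote (should not happen)"
else
  echo "duplicate rejected"
fi
```

The refuse-to-overwrite guard matters for backup tools like Cohesity: a WAL segment that silently changed after being archived would corrupt point-in-time recovery, so failing loudly is the safe behavior.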