diff --git a/product_docs/docs/biganimal/release/known_issues/index.mdx b/product_docs/docs/biganimal/release/known_issues/index.mdx index b2c56283d93..b97c9b2da00 100644 --- a/product_docs/docs/biganimal/release/known_issues/index.mdx +++ b/product_docs/docs/biganimal/release/known_issues/index.mdx @@ -1,8 +1,8 @@ --- -title: Known Issues and Limitations -navTitle: Known Issues +title: Known issues and limitations +navTitle: Known issues --- -This section lists known issues and/or limitations in the current release of BigAnimal and the Postgres deployments it supports: +These known issues and/or limitations are in the current release of BigAnimal and the Postgres deployments it supports: -* [Known Issues with Distributed High Availability](known_issues_dha) \ No newline at end of file +* [Known issues with distributed high availability](known_issues_dha) \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx index e9183280f77..fbfa2c9e110 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_dha.mdx @@ -1,21 +1,21 @@ --- -title: Known Issues with Distributed High Availability/PGD -navTitle: Distributed High Availability/PGD Known Issues +title: Known issues with distributed high availability/PGD +navTitle: Distributed high availability/PGD known issues deepToC: true --- -These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in Distributed High Availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. +These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. -## Management/Administration +## Management/administration -### Deleting a PGD Data Group may not fully reconcile -When deleting a PGD Data Group, the target group resources will be physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD Groups. It’s recommended to avoid use of this feature until this is fixed and removed from the known issues list. +### Deleting a PGD data group may not fully reconcile +When deleting a PGD data group, the target group resources are physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD groups. We recommend avoiding use of this feature until this is fixed and removed from the known issues list. ### Adjusting PGD cluster architecture may not fully reconcile -In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change has not taken effect in 1 hour, reach out to Support. +In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change hasn't taken effect in 1 hour, reach out to Support. -### PGD Cluster may fail to create due to Azure SKU issue -In some cases, although a regional quota check may have passed initially when the PGD cluster is created, it may fail if an SKU critical for the Witness Nodes is unavailable across three availability zones. 
+### PGD cluster may fail to create due to Azure SKU issue +In some cases, although the regional quota check passes initially when the PGD cluster is created, cluster creation may fail if an SKU critical for the witness nodes is unavailable across three availability zones. To check for this issue at the time of a region quota check, run: ``` biganimal-csp-preflight --onboard -i d2s_v3 -x eha @@ -25,13 +25,13 @@ biganimal-csp-preflight --onboard -i d2s_v3 -x eha If you have already encountered this issue, reach out to Azure support: ```plaintext -We're going to be provisioning a number of instances of in and need to be able to provision these instances in all AZs, can you please ensure that subscription is able to provision this VM type in all AZs of . Thank you! +We're going to be provisioning a number of instances of in and need to be able to provision these instances in all AZs. Can you please ensure that subscription is able to provision this VM type in all AZs of . Thank you! ``` ## Replication ### A PGD replication slot may fail to transition cleanly from disconnect to catch up -As part of fault injection testing with PGD on BigAnimal, you may decide to delete VMs. Your cluster will recover if you do so, as expected. However, if you are testing in a Bring Your Own Account (BYOA) deployment, in some cases, as the cluster is recovering, a replication slot may remain disconnected. This will persist for a few hours until the replication slot recovers automatically. +As part of fault injection testing with PGD on BigAnimal, you may decide to delete VMs. Your cluster will recover if you do so, as expected. However, if you're testing in a bring-your-own-account (BYOA) deployment, in some cases, as the cluster is recovering, a replication slot may remain disconnected. This will persist for a few hours until the replication slot recovers automatically. ### Replication speed is slow during a large data migration During a large data migration, when migrating to a PGD cluster, you may experience a replication rate of 20 MBps. @@ -42,13 +42,12 @@ PGD clusters that are in a healthy state may experience a change in PGD node lea ## Migration ### Connection interruption disrupts migration via Migration Toolkit -When using Migration Toolkit (MTK), if the session is interrupted, the migration will error out. To resolve, you will need to restart the migration from the beginning. The recommended path to avoid this is to migrate on a per-table basis when using MTK so that if this issue does occur, you retry the migration with a table rather than the whole database. +When using Migration Toolkit (MTK), if the session is interrupted, the migration errors out. To resolve, you need to restart the migration from the beginning. The recommended path to avoid this is to migrate on a per-table basis when using MTK so that if this issue does occur, you retry the migration with a table rather than the whole database. ### Ensure loaderCount is less than 1 in Migration ToolKit -When using Migration Toolkit to migrate a PGD cluster, if you have adjusted the loaderCount to be greater than 1 in order to speed up migration, you may see an error in the MTK CLI that says “pgsql_tmp/': No such file or directory.” If you see this, reduce your loaderCount to 1 in MTK. +When using Migration Toolkit to migrate a PGD cluster, if you adjusted the loaderCount to be greater than 1 to speed up migration, you may see an error in the MTK CLI that says “pgsql_tmp/': No such file or directory.” If you see this, reduce your loaderCount to 1 in MTK. 
## Tools ### Verify-settings command via PGD CLI provides false negative for PGD on BigAnimal clusters -The command verify-settings in the PGD CLI will display that a “node is unreachable” when used with PGD on BigAnimal clusters. - +When used with PGD on BigAnimal clusters, the verify-settings command in the PGD CLI displays a “node is unreachable” message. diff --git a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx index 9917b0dd0df..c228ef6a142 100644 --- a/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx +++ b/product_docs/docs/biganimal/release/migration/dha_bulk_migration.mdx @@ -1,6 +1,6 @@ --- -title: Bulk loading data into DHA/PGD clusters -navTITLE: Bulk loading into DHA/PGD clusters +title: Bulk loading data into PGD clusters +navTitle: Bulk loading into PGD clusters description: This guidance is specifically for environments where there's no direct access to the PGD nodes, only PGD Proxy endpoints, such as BigAnimal's distributed high availability deployments of PGD. deepToC: true --- diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx index 17a6d570f16..b82fe22f401 100644 --- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -180,7 +180,7 @@ You have successfully built a tunnel between your client VM's virtual network an ```shell NICID=$(az network private-endpoint show -n vnet-client-private-endpoint -g rg-client --query "networkInterfaces[0].id" -o tsv) -az network nic show -n ${NICID##*/} -g rg-client --query "ipConfigurations[0].privateIpAddress" -o tsv +az network nic show -n ${NICID##*/} -g rg-client --query "ipConfigurations[0].privateIPAddress" -o tsv __OUTPUT__ 100.64.111.5 ``` diff --git a/product_docs/docs/efm/4/05_using_efm.mdx b/product_docs/docs/efm/4/05_using_efm.mdx index 4ff307ea1f2..b7040a1dbec 100644 --- a/product_docs/docs/efm/4/05_using_efm.mdx +++ b/product_docs/docs/efm/4/05_using_efm.mdx @@ -39,6 +39,8 @@ To start the Failover Manager cluster on RHEL/CentOS 7.x or RHEL/Rocky Linux/Alm `systemctl start edb-efm-4.` +!!! Note + If the agent fails to start, see the startup log `/var/log/efm-4./startup-efm.log` for more information. If the cluster properties file for the node specifies that `is.witness` is `true`, the node starts as a witness node. diff --git a/product_docs/docs/efm/4/13_troubleshooting.mdx b/product_docs/docs/efm/4/13_troubleshooting.mdx index 74fff13d648..1aab4fc531d 100644 --- a/product_docs/docs/efm/4/13_troubleshooting.mdx +++ b/product_docs/docs/efm/4/13_troubleshooting.mdx @@ -10,6 +10,10 @@ legacyRedirectsGenerated: +## The Failover Manager agent fails to start + +If an agent fails to start, see the startup log `/var/log/efm-/startup-.log` for more information. + ## Authorization file not found. Is the local agent running? 
If you invoke a Failover Manager cluster management command and Failover Manager isn't running on the node, the `efm` command displays an error: diff --git a/product_docs/docs/pgd/5/admin-tpa/installing.mdx b/product_docs/docs/pgd/5/admin-tpa/installing.mdx index c33d86e50aa..a351d3e1923 100644 --- a/product_docs/docs/pgd/5/admin-tpa/installing.mdx +++ b/product_docs/docs/pgd/5/admin-tpa/installing.mdx @@ -4,6 +4,7 @@ navTitle: Deploying with TPA description: > Detailed reference and examples for using TPA to configure and deploy PGD redirects: + - /pgd/latest/tpa/ - /pgd/latest/deployments/tpaexec/using_tpaexec/ - /pgd/latest/tpa/using_tpa/ - ../deployments/tpaexec