From 9aab5593baa7abe875aca52449a78971c3b37ef6 Mon Sep 17 00:00:00 2001 From: drothery-edb <83650384+drothery-edb@users.noreply.github.com> Date: Tue, 2 Nov 2021 00:04:09 -0400 Subject: [PATCH 1/2] Release/2021 11 1 (#1984) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * initial import and style guide changes through the first example initial import and style guide changes through the first example * Cleaning up text in the private networking topic * Cleaning text * consolidated pros and cons Moved the consolidated pros and cons to the Connect from External Azure Resources section * Cleaning up the text and defining the structure for the steps. * updated intro to examples updated intro to examples trying out adding a header for the examples and just using bold for the high level steps * Attempting to merge with Dee Dee's updates * Cleaning Text * implemented structural changes - minor edits Added a section to the Connecting to Your Cluster topic Created separate topics for the two main approaches Changed heading levels and added an intro to the walkthrough and links to the walkthrough for the first solution * Addes screenshots. Cleaned the low-level steps. Added screenshots. Added missing steps. * more restructuring more restructuring * Reviewed and updated the content in the private networking pages * edits and comparison table Did some random cleanup Took a pass at the On-premises topic Created a draft of a Pros and Cons table * Dee Dee's final edits prior to review Updated the topic intro to mention private networking and other small edits * last set of changes before the review last set of changes before the review * more reviewer notes * Move private_networking to cluster_networking * cluster networking whitespace cleanup * make cluster_networking more abstract and pub/priv * hack private endpoint example * clean up code examples in vnet_vnet * fixup connecting-from-azure * Update product_docs/docs/edbcloud/beta/getting_started/creating_a_cluster/01_cluster_networking.mdx * Update product_docs/docs/edbcloud/beta/getting_started/creating_a_cluster/01_cluster_networking.mdx * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/index.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * addressed a few of Ben's comments * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Addressing comments from Zane and Ben * Addressing comments * Addressed comments, regarding terminology and several other changes. * Updated screenshot. Addressed comments. * Added the content for monitoring Postgres instance on Azure using PEM Added the content for monitoring Postgres instance on Azure using PEM * Updated the content referencing EDB Cloud Updated the content referencing EDB Cloud * Update 03a_pem_define_azure_instance_connection.mdx Missing "account". * Update 03a_pem_define_azure_instance_connection.mdx * added PEM monitoring content to Cloud doc * Updated the Remote Monitoring of EDB Cloud content as per the inputs from DeeDee, Kelly and Anthony. Updated the Remote Monitoring of EDB Cloud content as per the inputs from DeeDee, Kelly and Anthony. 
* cleaning up after merging develop resolved conflicts made a few minor edits updating case of headings * Updated the content as per the dicussion with Dee Dee Updated the content as per the dicussion with Dee Dee * Updated the content as per dicussion with Anthony and Dee Dee. Updated the content as per dicussion with Anthony and Dee Dee. * fixed link plus one minor edit * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Update product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * incorporating new comments * Update 
product_docs/docs/edbcloud/beta/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> * Changing the positioning of the PEM solution per Adam * Incorporated inputs from Zane * minor edits * redid rebranding changes * removed video link * more rebranding * Added the content for monitoring Postgres instance on Azure using PEM Added the content for monitoring Postgres instance on Azure using PEM * Updated the content referencing EDB Cloud Updated the content referencing EDB Cloud * Update 03a_pem_define_azure_instance_connection.mdx Missing "account". * Update 03a_pem_define_azure_instance_connection.mdx * added PEM monitoring content to Cloud doc * Updated the Remote Monitoring of EDB Cloud content as per the inputs from DeeDee, Kelly and Anthony. Updated the Remote Monitoring of EDB Cloud content as per the inputs from DeeDee, Kelly and Anthony. * Updated the content as per the dicussion with Dee Dee Updated the content as per the dicussion with Dee Dee * Updated the content as per dicussion with Anthony and Dee Dee. Updated the content as per dicussion with Anthony and Dee Dee. * fixed link plus one minor edit * rebranding changes * fixed label * One copy-paste too many Co-authored-by: Moiz Nalwalla Co-authored-by: Francisco González Co-authored-by: Benjamin Anderson Co-authored-by: Benjamin Anderson <79652654+ba-edb@users.noreply.github.com> Co-authored-by: moiznalwalla <90263457+moiznalwalla@users.noreply.github.com> Co-authored-by: nidhibhammar <59045594+nidhibhammar@users.noreply.github.com> Co-authored-by: Anthony Waite <70154584+alwaite@users.noreply.github.com> Co-authored-by: Jon Ericson --- gatsby-config.js | 2 +- .../01_portal_access.mdx | 0 .../03_account_activity.mdx | 0 .../administering_cluster/images/map1.puml | 0 .../administering_cluster/images/org-map.png | 0 .../release}/administering_cluster/index.mdx | 0 .../01_check_resource_limits.mdx | 5 +- .../02_connect_cloud_account.mdx | 0 .../01_cluster_networking.mdx | 63 ++++++ .../creating_a_cluster/index.mdx} | 9 +- .../getting_started/images/assign-roles.png | 0 .../getting_started/images/clusters.png | 0 .../images/create-cluster-1.png | 0 .../images/create-cluster-2.png | 0 .../getting_started/images/edbcloud.png | 0 .../images/marketplace-preview.png | 0 .../images/raising-limits-video.png | 0 .../release}/getting_started/images/roles.png | 0 .../getting_started/images/setup-edbcloud.png | 0 .../images/subscribe-to-dbaas.png | 0 .../images/subscription-process.png | 0 .../release}/getting_started/images/users.png | 0 .../release}/getting_started/index.mdx | 0 .../beta => biganimal/release}/index.mdx | 6 +- .../overview/02_high_availibility.mdx | 6 +- .../release}/overview/03_security.mdx | 6 +- .../overview/04_responsibility_model.mdx | 4 +- .../overview/05_database_version_policy.mdx | 2 +- .../release}/overview/06_support.mdx | 10 +- .../overview/images/ha-not-enabled.png | 0 .../overview/images/high-availability.png | 0 .../overview/images/high-availibility.puml | 0 .../release}/overview/index.mdx | 2 +- .../release}/pricing_and_billing/index.mdx | 4 +- .../release}/reference/index.mdx | 52 ++--- .../using_cluster/01_postgres_access.mdx | 8 +- .../05_db_configuration_parameters.mdx | 2 +- .../03_modifying_your_cluster/index.mdx | 4 +- .../using_cluster/04_backup_and_restore.mdx | 10 +- .../05_monitoring_and_logging.mdx | 15 +- .../01_private_endpoint.mdx | 207 ++++++++++++++++++ 
.../02_virtual_network_peering.mdx | 78 +++++++ .../01_connecting_from_azure/03_vnet_vnet.mdx | 122 +++++++++++ .../01_connecting_from_azure/index.mdx | 51 +++++ .../connecting_your_cluster/images/image1.png | 3 + .../images/image10.png | 3 + .../images/image11.png | 3 + .../images/image12.png | 3 + .../images/image13.png | 3 + .../images/image14.png | 3 + .../images/image15.png | 3 + .../images/image16.png | 3 + .../images/image17.png | 3 + .../images/image18.png | 3 + .../images/image19.png | 3 + .../images/image20.png | 3 + .../images/image21.png | 3 + .../images/image22.png | 3 + .../images/image24.png | 3 + .../images/image25.png | 3 + .../connecting_your_cluster/images/image3.png | 3 + .../connecting_your_cluster/images/image4.png | 3 + .../connecting_your_cluster/images/image5.png | 3 + .../connecting_your_cluster/images/image6.png | 3 + .../connecting_your_cluster/images/image7.png | 3 + .../connecting_your_cluster/images/image8.png | 3 + .../connecting_your_cluster/images/image9.png | 3 + .../images/point-to-site-download.png | 3 + .../images/point-to-site.png | 3 + .../connecting_your_cluster/index.mdx} | 16 +- .../biganimal/release/using_cluster/index.mdx | 7 + .../edbcloud/beta/using_cluster/index.mdx | 7 - .../8/pem_admin/02a_pem_remote_monitoring.mdx | 30 +++ .../03_pem_define_aws_instance_connection.mdx | 53 ++--- scripts/source/config_sources.py | 4 +- src/pages/index.js | 4 +- static/_redirects | 3 + 77 files changed, 745 insertions(+), 122 deletions(-) rename product_docs/docs/{edbcloud/beta => biganimal/release}/administering_cluster/01_portal_access.mdx (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/administering_cluster/03_account_activity.mdx (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/administering_cluster/images/map1.puml (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/administering_cluster/images/org-map.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/administering_cluster/index.mdx (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/01_check_resource_limits.mdx (89%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/02_connect_cloud_account.mdx (100%) create mode 100644 product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx rename product_docs/docs/{edbcloud/beta/getting_started/03_create_cluster.mdx => biganimal/release/getting_started/creating_a_cluster/index.mdx} (87%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/assign-roles.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/clusters.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/create-cluster-1.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/create-cluster-2.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/edbcloud.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/marketplace-preview.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/raising-limits-video.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/roles.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/setup-edbcloud.png (100%) rename 
product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/subscribe-to-dbaas.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/subscription-process.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/images/users.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/getting_started/index.mdx (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/index.mdx (71%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/02_high_availibility.mdx (85%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/03_security.mdx (92%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/04_responsibility_model.mdx (91%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/05_database_version_policy.mdx (93%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/06_support.mdx (73%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/images/ha-not-enabled.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/images/high-availability.png (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/images/high-availibility.puml (100%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/overview/index.mdx (76%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/pricing_and_billing/index.mdx (92%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/reference/index.mdx (86%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/using_cluster/01_postgres_access.mdx (93%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx (93%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/using_cluster/03_modifying_your_cluster/index.mdx (85%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/using_cluster/04_backup_and_restore.mdx (85%) rename product_docs/docs/{edbcloud/beta => biganimal/release}/using_cluster/05_monitoring_and_logging.mdx (72%) create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/index.mdx create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image1.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image10.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image11.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image12.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image13.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image14.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image15.png create mode 100644 
product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image16.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image17.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image18.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image19.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image20.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image21.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image22.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image24.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image25.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image3.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image4.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image5.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image6.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image7.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image8.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image9.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site-download.png create mode 100644 product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site.png rename product_docs/docs/{edbcloud/beta/using_cluster/02_connect_to_cluster.mdx => biganimal/release/using_cluster/connecting_your_cluster/index.mdx} (69%) create mode 100644 product_docs/docs/biganimal/release/using_cluster/index.mdx delete mode 100644 product_docs/docs/edbcloud/beta/using_cluster/index.mdx create mode 100644 product_docs/docs/pem/8/pem_admin/02a_pem_remote_monitoring.mdx diff --git a/gatsby-config.js b/gatsby-config.js index d1adc0d6c88..84a4498d479 100644 --- a/gatsby-config.js +++ b/gatsby-config.js @@ -20,7 +20,7 @@ const sourceToPluginConfig = { bart: { name: "bart", path: "product_docs/docs/bart" }, bdr: { name: "bdr", path: "product_docs/docs/bdr" }, harp: { name: "harp", path: "product_docs/docs/harp" }, - edbcloud: { name: "edbcloud", path: "product_docs/docs/edbcloud" }, + biganimal: { name: "biganimal", path: "product_docs/docs/biganimal" }, efm: { name: "efm", path: "product_docs/docs/efm" }, epas: { name: "epas", path: "product_docs/docs/epas" }, eprs: { name: "eprs", path: "product_docs/docs/eprs" }, diff --git a/product_docs/docs/edbcloud/beta/administering_cluster/01_portal_access.mdx b/product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx similarity index 100% rename from product_docs/docs/edbcloud/beta/administering_cluster/01_portal_access.mdx rename to product_docs/docs/biganimal/release/administering_cluster/01_portal_access.mdx diff --git a/product_docs/docs/edbcloud/beta/administering_cluster/03_account_activity.mdx 
b/product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx similarity index 100% rename from product_docs/docs/edbcloud/beta/administering_cluster/03_account_activity.mdx rename to product_docs/docs/biganimal/release/administering_cluster/03_account_activity.mdx diff --git a/product_docs/docs/edbcloud/beta/administering_cluster/images/map1.puml b/product_docs/docs/biganimal/release/administering_cluster/images/map1.puml similarity index 100% rename from product_docs/docs/edbcloud/beta/administering_cluster/images/map1.puml rename to product_docs/docs/biganimal/release/administering_cluster/images/map1.puml diff --git a/product_docs/docs/edbcloud/beta/administering_cluster/images/org-map.png b/product_docs/docs/biganimal/release/administering_cluster/images/org-map.png similarity index 100% rename from product_docs/docs/edbcloud/beta/administering_cluster/images/org-map.png rename to product_docs/docs/biganimal/release/administering_cluster/images/org-map.png diff --git a/product_docs/docs/edbcloud/beta/administering_cluster/index.mdx b/product_docs/docs/biganimal/release/administering_cluster/index.mdx similarity index 100% rename from product_docs/docs/edbcloud/beta/administering_cluster/index.mdx rename to product_docs/docs/biganimal/release/administering_cluster/index.mdx diff --git a/product_docs/docs/edbcloud/beta/getting_started/01_check_resource_limits.mdx b/product_docs/docs/biganimal/release/getting_started/01_check_resource_limits.mdx similarity index 89% rename from product_docs/docs/edbcloud/beta/getting_started/01_check_resource_limits.mdx rename to product_docs/docs/biganimal/release/getting_started/01_check_resource_limits.mdx index 1a874eacf62..d3e6b17700d 100644 --- a/product_docs/docs/edbcloud/beta/getting_started/01_check_resource_limits.mdx +++ b/product_docs/docs/biganimal/release/getting_started/01_check_resource_limits.mdx @@ -8,10 +8,9 @@ The default number of total cores per subscription per region is 20. See [Virtua The default Public IP address limits for Public IP Addresses Basic and Public IP Addresses Standards is set to 10. See [Public IP address limits](https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/azure-subscription-service-limits#publicip-address) for more information. You need to increase the limit of `Public IP Addresses - Basic` and `Public IP Addresses - Standard` for the regions where you plan to deploy your EDB clusters with the total number of EDB clusters you plan to use. - ## Virtual machine quota requirements -In each region, EDB Cloud uses six ESv3 and six DSv2 virtual machine cores to manage your EDB Cloud infrastructure. +In each region, BigAnimal uses six ESv3 and six DSv2 virtual machine cores to manage your BigAnimal infrastructure. Your Postgres clusters deployed in the region use separate ESv3 virtual machine cores. The number of cores depends on the Instance Type and High Availability options of the clusters you provision. You can calculate the number of ESv3 cores required for your cluster based on the following: @@ -19,7 +18,7 @@ The number of cores depends on the Instance Type and High Availability options o * Cluster running on an E{N}Sv3 instance with high availability not enabled uses exactly {N} ESv3 cores. * Cluster running on an E{N}Sv3 instance with high availability enabled uses 3 * {N} ESv3 cores. -As an example, if you provision the largest virtual machine E64Sv3 with high availability enabled, it requires (3 * 64)=192 ESv3 cores per region. 
EDB Cloud infrastructure requires an additional six ESv3 and six DSv2 virtual machine cores per region. +As an example, if you provision the largest virtual machine E64Sv3 with high availability enabled, it requires (3 * 64)=192 ESv3 cores per region. BigAnimal infrastructure requires an additional six ESv3 and six DSv2 virtual machine cores per region. ## Checking current utilization diff --git a/product_docs/docs/edbcloud/beta/getting_started/02_connect_cloud_account.mdx b/product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/02_connect_cloud_account.mdx rename to product_docs/docs/biganimal/release/getting_started/02_connect_cloud_account.mdx diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx new file mode 100644 index 00000000000..a893a407e74 --- /dev/null +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/01_cluster_networking.mdx @@ -0,0 +1,63 @@ +--- +title: Cluster networking architecture +--- + +BigAnimal clusters can be exposed to client applications in two ways: +- Public - where the cluster is available on the Internet, and +- Private - where access to the cluster is restricted to specific +sources and network traffic never leaves the Azure network. + +## Basic architecture + +When you create your first cluster in a region, BigAnimal deploys a +dedicated virtual network (VNet) in Azure to host all clusters and +supporting management services. + +This VNet is named after the region where the cluster is deployed. +For example, if the cluster is deployed in the East US region, it is +named `vnet-eastus`. This VNet uses IP addresses in the +`10.240.0.0/16` space. + +## Public cluster load balancing + +When a cluster is created with public network access, an Azure +Standard SKU Load Balancer is created and configured with a public IP +address that always routes to the leader of your cluster. Once +assigned, this IP address does not change unless you change the +networking configuration for your cluster. + +Only one Azure Load Balancer is typically deployed in your Azure +subscription per BigAnimal region; subsequent public clusters add +additional IP addresses to the existing load balancer. + +## Private cluster load balancing + +When a cluster is created with private network access, an Azure +Standard SKU Load Balancer is created and configured with an IP +address that always routes to the leader of your cluster. Once +assigned, this IP address does not change unless you change the +networking configuration for your cluster. + +This IP address is private to the VNet hosting your BigAnimal +services; by default it is not routable from other networks, even in +your Azure account. See [Setting up Azure infrastructure to connect to a private network cluster](../../using_cluster/connecting_your_cluster/#setting-up-azure-infrastructure-to-connect-to-a-private-network-cluster) for details and instructions +on how to properly configure routing for private clusters. + +Only one Azure Internal Load Balancer is typically deployed in your +Azure subscription per BigAnimal region; subsequent private clusters add additional IP addresses to the existing load balancer.
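+
+A quick way to confirm how a cluster is exposed is to resolve its DNS name (described in the next section) and attempt a connection from a suitable location. The following is a minimal sketch: the hostname is a placeholder, and for a private cluster the commands must run from a host that can reach the cluster's VNet, as described in [Setting up Azure infrastructure to connect to a private network cluster](../../using_cluster/connecting_your_cluster/#setting-up-azure-infrastructure-to-connect-to-a-private-network-cluster).
+
+```
+# Resolve the cluster's DNS name (placeholder shown). A private cluster
+# typically resolves to an address in the 10.240.0.0/16 VNet space, while
+# a public cluster resolves to a public IP address.
+dig +short xxxxxxxxx.xxxxx.biganimal.io
+
+# Test a connection to the Postgres leader behind the load balancer.
+psql -W "postgres://edb_admin@xxxxxxxxx.xxxxx.biganimal.io:5432/edb_admin?sslmode=require" -c "SELECT version();"
+```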
+ +## DNS + +Every BigAnimal cluster, regardless of public or private networking +status, is assigned a single DNS zone that maps to its exposed IP +address, either public or private. When toggling between public and +private, wait up to 120 seconds for DNS caches to flush. + +## Toggling between public and private + +Clusters can be changed from public to private and vice versa at any +time. When this happens, the IP address previously assigned to the +cluster is de-allocated, a new one is assigned, and DNS is updated +accordingly. + + diff --git a/product_docs/docs/edbcloud/beta/getting_started/03_create_cluster.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx similarity index 87% rename from product_docs/docs/edbcloud/beta/getting_started/03_create_cluster.mdx rename to product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index f6da75f8b49..a321dae91f4 100644 --- a/product_docs/docs/edbcloud/beta/getting_started/03_create_cluster.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -1,6 +1,11 @@ --- title: "Creating a cluster" + +redirects: + #adding hierarchy to the structure (Creating a Cluster topic nows has a child topic) so created a folder and moved the contents from 03_create_cluster to index.mdx + - ../03_create_cluster/ --- + !!! Note Prior to creating your cluster, make sure you have adequate Azure resources or your request to create a cluster will fail. See [Raising your Azure resource limits](01_check_resource_limits). !!! @@ -44,8 +49,8 @@ To create a cluster: 5. In the **Storage** section, select **Volume Type**, and in **Volume Properties** the type and amount of storage needed for your cluster. !!! Note EDB Cloud currently supports Azure Premium SSD storage types. See [the Azure documentation](https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd) for more information. -6. In the **Networking** section, Whether to use private or public networking using the **Networking** slide button. Networking is set to Public by default. Public means that any client can connect to your cluster’s public IP address over the internet. -Private networking allows only IP addresses within your private network to connect to your cluster. +6. In the **Networking** section, you specify whether to use private or public networking. Networking is set to Public by default. Public means that any client can connect to your cluster’s public IP address over the internet. +Private networking allows only IP addresses within your private network to connect to your cluster. See [Cluster networking architecture](01_cluster_networking) for more information. 7. To optionally make updates to your database configuration parameters, select **Next: DB Configuration**. 
## DB Configuration tab diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/assign-roles.png b/product_docs/docs/biganimal/release/getting_started/images/assign-roles.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/assign-roles.png rename to product_docs/docs/biganimal/release/getting_started/images/assign-roles.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/clusters.png b/product_docs/docs/biganimal/release/getting_started/images/clusters.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/clusters.png rename to product_docs/docs/biganimal/release/getting_started/images/clusters.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/create-cluster-1.png b/product_docs/docs/biganimal/release/getting_started/images/create-cluster-1.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/create-cluster-1.png rename to product_docs/docs/biganimal/release/getting_started/images/create-cluster-1.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/create-cluster-2.png b/product_docs/docs/biganimal/release/getting_started/images/create-cluster-2.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/create-cluster-2.png rename to product_docs/docs/biganimal/release/getting_started/images/create-cluster-2.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/edbcloud.png b/product_docs/docs/biganimal/release/getting_started/images/edbcloud.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/edbcloud.png rename to product_docs/docs/biganimal/release/getting_started/images/edbcloud.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/marketplace-preview.png b/product_docs/docs/biganimal/release/getting_started/images/marketplace-preview.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/marketplace-preview.png rename to product_docs/docs/biganimal/release/getting_started/images/marketplace-preview.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/raising-limits-video.png b/product_docs/docs/biganimal/release/getting_started/images/raising-limits-video.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/raising-limits-video.png rename to product_docs/docs/biganimal/release/getting_started/images/raising-limits-video.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/roles.png b/product_docs/docs/biganimal/release/getting_started/images/roles.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/roles.png rename to product_docs/docs/biganimal/release/getting_started/images/roles.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/setup-edbcloud.png b/product_docs/docs/biganimal/release/getting_started/images/setup-edbcloud.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/setup-edbcloud.png rename to product_docs/docs/biganimal/release/getting_started/images/setup-edbcloud.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/subscribe-to-dbaas.png b/product_docs/docs/biganimal/release/getting_started/images/subscribe-to-dbaas.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/subscribe-to-dbaas.png rename 
to product_docs/docs/biganimal/release/getting_started/images/subscribe-to-dbaas.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/subscription-process.png b/product_docs/docs/biganimal/release/getting_started/images/subscription-process.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/subscription-process.png rename to product_docs/docs/biganimal/release/getting_started/images/subscription-process.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/images/users.png b/product_docs/docs/biganimal/release/getting_started/images/users.png similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/images/users.png rename to product_docs/docs/biganimal/release/getting_started/images/users.png diff --git a/product_docs/docs/edbcloud/beta/getting_started/index.mdx b/product_docs/docs/biganimal/release/getting_started/index.mdx similarity index 100% rename from product_docs/docs/edbcloud/beta/getting_started/index.mdx rename to product_docs/docs/biganimal/release/getting_started/index.mdx diff --git a/product_docs/docs/edbcloud/beta/index.mdx b/product_docs/docs/biganimal/release/index.mdx similarity index 71% rename from product_docs/docs/edbcloud/beta/index.mdx rename to product_docs/docs/biganimal/release/index.mdx index 91f59209b85..da1cf901549 100644 --- a/product_docs/docs/edbcloud/beta/index.mdx +++ b/product_docs/docs/biganimal/release/index.mdx @@ -1,6 +1,6 @@ --- -title: EDB Cloud -description: "EDB Cloud: DBaaS for PostgresSQL " +title: BigAnimal +description: "BigAnimal: DBaaS for PostgresSQL " indexCards: simple hideVersion: true directoryDefaults: @@ -12,5 +12,5 @@ navigation: - administering_cluster - pricing_and_billing - reference - - release_notes + --- diff --git a/product_docs/docs/edbcloud/beta/overview/02_high_availibility.mdx b/product_docs/docs/biganimal/release/overview/02_high_availibility.mdx similarity index 85% rename from product_docs/docs/edbcloud/beta/overview/02_high_availibility.mdx rename to product_docs/docs/biganimal/release/overview/02_high_availibility.mdx index 7eb081f6a7d..d8145362875 100644 --- a/product_docs/docs/edbcloud/beta/overview/02_high_availibility.mdx +++ b/product_docs/docs/biganimal/release/overview/02_high_availibility.mdx @@ -2,7 +2,7 @@ title: "Supported architectures" --- -EDB Cloud enables deploying a cluster with or without high availability. The option is controlled with the **High Availablity** slide button on the [Create Cluster](https://portal.edbcloud.com/create-cluster) page in the [EDB Cloud](https://portal.edbcloud.com) portal. +BigAnimal enables deploying a cluster with or without high availability. The option is controlled with the **High Availablity** slide button on the [Create Cluster](https://portal.biganimal.com/create-cluster) page in the [BigAnimal](https://portal.biganimal.com) portal. ## High availability - enabled @@ -12,7 +12,7 @@ The high availability option is provided to minimize downtime in cases of failur * Replicas are usually called _standby servers_ and can also be used for read-only workloads. * In case of temporary or permanent unavailability of the primary, a standby replica will become the primary. -![*EDB Cloud Cluster4*](images/high-availability.png) +![*BigAnimal Cluster4*](images/high-availability.png) Incoming client connections are always routed to the current primary. 
In case of failure of the primary, a standby replica will automatically be promoted to primary and new connections will be routed to the new primary. When the old primary recovers, it will re-join the cluster as a replica. @@ -24,4 +24,4 @@ For non-production use cases where high availability is not a primary concern, a In case of permanent unavailability of the primary, a restore from a backup is required. -![*EDB Cloud Cluster4*](images/ha-not-enabled.png ) +![*BigAnimal Cluster4*](images/ha-not-enabled.png ) diff --git a/product_docs/docs/edbcloud/beta/overview/03_security.mdx b/product_docs/docs/biganimal/release/overview/03_security.mdx similarity index 92% rename from product_docs/docs/edbcloud/beta/overview/03_security.mdx rename to product_docs/docs/biganimal/release/overview/03_security.mdx index 00a191c3f87..8293fef0688 100644 --- a/product_docs/docs/edbcloud/beta/overview/03_security.mdx +++ b/product_docs/docs/biganimal/release/overview/03_security.mdx @@ -2,11 +2,11 @@ title: "Security" --- -EDB Cloud runs in your own cloud account, isolates your data from other users, and gives you control over our access to it. The key security features are: -- **Data isolation:** Clusters are installed and managed in your cloud environment. Complete segregation of your data is ensured: your data never leaves your cloud account, and compromise of another EDB Cloud customer's systems does not put your data at risk. +BigAnimal runs in your own cloud account, isolates your data from other users, and gives you control over our access to it. The key security features are: +- **Data isolation:** Clusters are installed and managed in your cloud environment. Complete segregation of your data is ensured: your data never leaves your cloud account, and compromise of another BigAnimal customer's systems does not put your data at risk. - **Granular access control:** You can use Single Sign On (SSO) and define your own sets of roles and Role Based Access Control (RBAC) policies to manage your individual cloud environments. See [Managing portal access](../administering_cluster/01_user_access) for more information. -- **Data encryption:** All data in EDB Cloud is encrypted in motion and at rest. Network traffic is encrypted using Transport Layer Security (TLS) v1.2 or greater, where applicable. Data at rest is encrypted using AES with 256 bit keys. Data encryption keys are envelope encrypted and the wrapped data encryption keys are securely stored in an Azure Key Vault instance in your account. Encryption keys never leave your environment. +- **Data encryption:** All data in BigAnimal is encrypted in motion and at rest. Network traffic is encrypted using Transport Layer Security (TLS) v1.2 or greater, where applicable. Data at rest is encrypted using AES with 256 bit keys. Data encryption keys are envelope encrypted and the wrapped data encryption keys are securely stored in an Azure Key Vault instance in your account. Encryption keys never leave your environment. - **Portal audit logging:** Activities in the portal, such as those related to user roles, organization updates, and cluster creation and deletion are tracked automatically and viewed in the activity log. - **Database logging and auditing:** Functionality to track and analyze database activities is enabled automatically. For PostgreSQL, the PostgreSQL Audit Extension (pgAudit) is enabled automatically for you when deploying a Postgres cluster. For EDB Postgres Advanced Server, the EDB Audit extension (edbAudit) is enabled automatically for you. 
- **pgAudit:** The classes of statements being logged for pgAudit are set globally on a cluster with `pgaudit.log = 'write,ddl'`. The following statements made on tables will be logged by default when the cluster type is PostgreSQL: `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, AND `COPY`. All `DDL` will be logged. diff --git a/product_docs/docs/edbcloud/beta/overview/04_responsibility_model.mdx b/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx similarity index 91% rename from product_docs/docs/edbcloud/beta/overview/04_responsibility_model.mdx rename to product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx index 1ec47fe7c9e..22fccd6eb05 100644 --- a/product_docs/docs/edbcloud/beta/overview/04_responsibility_model.mdx +++ b/product_docs/docs/biganimal/release/overview/04_responsibility_model.mdx @@ -2,7 +2,7 @@ title: "Responsibility model" --- -Security and confidentiality is a shared responsibility between you and EDB. EDB provides a secure platform that enables you to create and maintain secure database clusters deployed on EDB Cloud. You have numerous responsibilities around the security of your clusters and data held within them. +Security and confidentiality is a shared responsibility between you and EDB. EDB provides a secure platform that enables you to create and maintain secure database clusters deployed on BigAnimal. You have numerous responsibilities around the security of your clusters and data held within them. The following responsibility model describes the distribution of specific responsibilities between you and EDB. @@ -30,4 +30,4 @@ The following responsibility model describes the distribution of specific respon ## Credential management - EDB is responsible for making credentials available to customers. -- You are responsible for managing and securing your passwords, both for EDB Cloud and your database passwords. +- You are responsible for managing and securing your passwords, both for BigAnimal and your database passwords. diff --git a/product_docs/docs/edbcloud/beta/overview/05_database_version_policy.mdx b/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx similarity index 93% rename from product_docs/docs/edbcloud/beta/overview/05_database_version_policy.mdx rename to product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx index 4b24abf71d2..ca3b0be05c8 100644 --- a/product_docs/docs/edbcloud/beta/overview/05_database_version_policy.mdx +++ b/product_docs/docs/biganimal/release/overview/05_database_version_policy.mdx @@ -19,7 +19,7 @@ PostgreSQL and EDB Postgres Advanced Server major versions are supported from th ## Minor version support -EDB performs periodic maintenance to ensure stability and security. EDB automatically performs minor version upgrades and patch updates as part of periodic maintenance. Customers are notified within the EDB Cloud portal prior to maintenance occurring. Minor versions are not user configurable. +EDB performs periodic maintenance to ensure stability and security. EDB automatically performs minor version upgrades and patch updates as part of periodic maintenance. Customers are notified within the BigAnimal portal prior to maintenance occurring. Minor versions are not user configurable. EDB reserves the right to upgrade customers to the latest minor version without prior notice in an extraordinary circumstance. 
diff --git a/product_docs/docs/edbcloud/beta/overview/06_support.mdx b/product_docs/docs/biganimal/release/overview/06_support.mdx similarity index 73% rename from product_docs/docs/edbcloud/beta/overview/06_support.mdx rename to product_docs/docs/biganimal/release/overview/06_support.mdx index b9b49307879..b0ff65d0e11 100644 --- a/product_docs/docs/edbcloud/beta/overview/06_support.mdx +++ b/product_docs/docs/biganimal/release/overview/06_support.mdx @@ -2,7 +2,7 @@ title: "Support options" --- -If you experience problems with EDB Cloud, you have several options to engage with EDB's Support team to get help. If you have an EDB Cloud account, you can go directly to the Support portal or the EDB Cloud portal to open a support case, or you can leave Support a message using the Support Case widget. +If you experience problems with BigAnimal, you have several options to engage with EDB's Support team to get help. If you have an BigAnimal account, you can go directly to the Support portal or the BigAnimal portal to open a support case, or you can leave Support a message using the Support Case widget. If you can’t log in to your account, send us an email to [cloudsupport@enterprisedb.com](mailto:cloudsupport@enterprisedb.com). @@ -10,10 +10,10 @@ If you can’t log in to your account, send us an email to [cloudsupport@enterpr 1. Initiate a support case using any one of these options: - - Go to the [Support portal](https://support.edbcloud.com/hc/en-us) and select **Submit a request** at the top right of the page. + - Go to the [Support portal](https://support.biganimal.com/hc/en-us) and select **Submit a request** at the top right of the page. - - Log in to [EDB Cloud](https://portal.edbcloud.com), select the question mark (?) at the top right of the page, select **Support Portal**, and select **Submit a request** at the top right of the page. - - Log in to [EDB Cloud](https://portal.edbcloud.com), select the question mark (?) at the top right of the page, select **Create Support ticket**. + - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, select **Support Portal**, and select **Submit a request** at the top right of the page. + - Log in to [BigAnimal](https://portal.biganimal.com), select the question mark (?) at the top right of the page, select **Create Support ticket**. 1. Enter a description in the **Subject** field. 1. (Optional) Select the cluster name from the **Cluster name** list. @@ -23,7 +23,7 @@ If you can’t log in to your account, send us an email to [cloudsupport@enterpr ## Creating a support case from the **Support** widget -1. Log in to EDB Cloud and select **Support** on the bottom of the left navigation pane. +1. Log in to BigAnimal and select **Support** on the bottom of the left navigation pane. 1. Fill in the **Leave us a message** form. 1. (Optional) The **Your Name** field is pre-filled, but you can edit it. 
diff --git a/product_docs/docs/edbcloud/beta/overview/images/ha-not-enabled.png b/product_docs/docs/biganimal/release/overview/images/ha-not-enabled.png similarity index 100% rename from product_docs/docs/edbcloud/beta/overview/images/ha-not-enabled.png rename to product_docs/docs/biganimal/release/overview/images/ha-not-enabled.png diff --git a/product_docs/docs/edbcloud/beta/overview/images/high-availability.png b/product_docs/docs/biganimal/release/overview/images/high-availability.png similarity index 100% rename from product_docs/docs/edbcloud/beta/overview/images/high-availability.png rename to product_docs/docs/biganimal/release/overview/images/high-availability.png diff --git a/product_docs/docs/edbcloud/beta/overview/images/high-availibility.puml b/product_docs/docs/biganimal/release/overview/images/high-availibility.puml similarity index 100% rename from product_docs/docs/edbcloud/beta/overview/images/high-availibility.puml rename to product_docs/docs/biganimal/release/overview/images/high-availibility.puml diff --git a/product_docs/docs/edbcloud/beta/overview/index.mdx b/product_docs/docs/biganimal/release/overview/index.mdx similarity index 76% rename from product_docs/docs/edbcloud/beta/overview/index.mdx rename to product_docs/docs/biganimal/release/overview/index.mdx index 313409ac6a5..f1b16b656e5 100644 --- a/product_docs/docs/edbcloud/beta/overview/index.mdx +++ b/product_docs/docs/biganimal/release/overview/index.mdx @@ -3,7 +3,7 @@ title: "Overview of service" --- -EDB Cloud is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account, and operated by the Postgres experts. EDB Cloud makes it easy to set up, manage, and scale your databases. Provision [PostgreSQL](https://www.enterprisedb.com/docs/supported-open-source/postgresql/) or [EDB Postgres Advanced Server](https://www.enterprisedb.com/docs/epas/latest/) with Oracle compatibility. +BigAnimal is a fully managed database-as-a-service with built-in Oracle compatibility, running in your cloud account, and operated by the Postgres experts. BigAnimal makes it easy to set up, manage, and scale your databases. Provision [PostgreSQL](https://www.enterprisedb.com/docs/supported-open-source/postgresql/) or [EDB Postgres Advanced Server](https://www.enterprisedb.com/docs/epas/latest/) with Oracle compatibility. diff --git a/product_docs/docs/edbcloud/beta/pricing_and_billing/index.mdx b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx similarity index 92% rename from product_docs/docs/edbcloud/beta/pricing_and_billing/index.mdx rename to product_docs/docs/biganimal/release/pricing_and_billing/index.mdx index b0a09961412..04e0886a0e0 100644 --- a/product_docs/docs/edbcloud/beta/pricing_and_billing/index.mdx +++ b/product_docs/docs/biganimal/release/pricing_and_billing/index.mdx @@ -2,7 +2,7 @@ title: "Pricing and billing " --- -This section covers the pricing breakdown for EDB Cloud as well as how to view invoices and infrastructure usage through Microsoft Azure. +This section covers the pricing breakdown for BigAnimal as well as how to view invoices and infrastructure usage through Microsoft Azure. ## Pricing Pricing is based on the number of Virtual Central Processing Units (vCPUs) provisioned for the database software offering. Consumption of vCPUs is metered hourly. A deployment is comprised of either one instance or one primary and two replica instances of either PostgreSQL or EDB Postgres Advanced Server. 
When high availability is enabled, the number of vCPU per instance should be multiplied by three to calculate the full price for all resources used. See the full cost breakdown below: @@ -18,4 +18,4 @@ Pricing is based on the number of Virtual Central Processing Units (vCPUs) provi All billing is handled directly by Microsoft Azure. Invoices and usage can be viewed on the Azure Portal billing page. [Learn more](https://docs.microsoft.com/en-us/azure/cost-management-billing/) ## Cloud infrastructure costs -EDB does not bill you for cloud infrastructure such as compute, storage, data transfer, monitoring, and logging. EDB Cloud clusters run in your Microsoft Azure account. Azure bills you directly for the cloud infrastructure provisioned according to the terms of your account agreement. Invoices and usage can be viewed on the Azure Portal billing page. [Learn more](https://docs.microsoft.com/en-us/azure/cost-management-billing/) +EDB does not bill you for cloud infrastructure such as compute, storage, data transfer, monitoring, and logging. BigAnimal clusters run in your Microsoft Azure account. Azure bills you directly for the cloud infrastructure provisioned according to the terms of your account agreement. Invoices and usage can be viewed on the Azure Portal billing page. [Learn more](https://docs.microsoft.com/en-us/azure/cost-management-billing/) diff --git a/product_docs/docs/edbcloud/beta/reference/index.mdx b/product_docs/docs/biganimal/release/reference/index.mdx similarity index 86% rename from product_docs/docs/edbcloud/beta/reference/index.mdx rename to product_docs/docs/biganimal/release/reference/index.mdx index e0639d92680..1ed5de542d4 100644 --- a/product_docs/docs/edbcloud/beta/reference/index.mdx +++ b/product_docs/docs/biganimal/release/reference/index.mdx @@ -1,17 +1,17 @@ --- -title: Using the EDB Cloud API +title: Using the BigAnimal API --- -Use the EDB Cloud API to integrate directly with EDB Cloud for management activities such as cluster provisioning, de-provisioning, and scaling. +Use the BigAnimal API to integrate directly with BigAnimal for management activities such as cluster provisioning, de-provisioning, and scaling. -The API reference documentation is available from the [EDB Cloud portal](https://portal.edbcloud.com). The direct documentation link is [https://portal.edbcloud.com/api/docs/](https://portal.edbcloud.com/api/docs/). +The API reference documentation is available from the [BigAnimal portal](https://portal.biganimal.com). The direct documentation link is [https://portal.biganimal.com/api/docs/](https://portal.biganimal.com/api/docs/). To access the API, you need a token. The high-level steps to obtain a token are: 1. [Query the authentication endpoint](#query-the-authentication-endpoint). 2. [Request the device code](#request-the-device-code-using-curl). 3. [Authorize as a user](#authorize-as-a-user). 4. [Request the raw token](#request-the-raw-token-using-curl). -5. [Exchange for the raw token for the EDB Cloud token](#exchange-the-edbcloud-token-using-curl). +5. [Exchange for the raw token for the BigAnimal token](#exchange-the-biganimal-token-using-curl). EDB provides an optional script to simplify getting your device code and getting and refreshing your tokens. See [Using the `get-token` script](#using-the-get-token-script) for details. 
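+
+If you prefer to script the first step, the following minimal sketch queries the authentication endpoint and exports the returned values as the environment variables used in the examples that follow. It assumes `jq` is available to parse the JSON response; you can instead copy the values into the variables by hand, as shown in the next section.
+
+```
+# Query the authentication endpoint and capture the JSON response.
+AUTH_PROVIDER=$(curl -s https://portal.biganimal.com/api/v1/auth/provider)
+
+# Export the fields used by the device-code and token requests that follow.
+export CLIENT_ID=$(echo "$AUTH_PROVIDER" | jq -r .clientId)
+export ISSUER_URL=$(echo "$AUTH_PROVIDER" | jq -r .issuerUri)
+export SCOPE=$(echo "$AUTH_PROVIDER" | jq -r .scope)
+export AUDIENCE=$(echo "$AUTH_PROVIDER" | jq -r .audience)
+```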
@@ -26,16 +26,16 @@ This call returns the information that either: ``` -curl https://portal.edbcloud.com/api/v1/auth/provider +curl https://portal.biganimal.com/api/v1/auth/provider ``` The response returns the `clientId`, `issuerUri`, `scope`, and `audience`. For example: ``` { "clientId": "pM8PRguGtW9yVnrsvrvpaPyyeS9fVvFh", - "issuerUri": "https://auth.edbcloud.com", + "issuerUri": "https://auth.biganimal.com", "scope": "openid profile email offline_access", - "audience": "https://portal.edbcloud.com/api" + "audience": "https://portal.biganimal.com/api" } ``` @@ -43,9 +43,9 @@ EDB recommends you store the output in environment variables to make including t ``` CLIENT_ID=pM8PRguGtW9yVnrsvrvpaPyyeS9fVvFh -ISSUER_URL=https://auth.edbcloud.com +ISSUER_URL=https://auth.biganimal.com SCOPE="openid profile email offline_access" -AUDIENCE="https://portal.edbcloud.com/api" +AUDIENCE="https://portal.biganimal.com/api" ``` The following example calls use these environment variables. @@ -77,10 +77,10 @@ For example: { "device_code": "KEOY2_5YjuVsRuIrrR-aq5gs", "user_code": "HHHJ-MMSZ", - "verification_uri": "https://auth.edbcloud.com/activate", + "verification_uri": "https://auth.biganimal.com/activate", "expires_in": 900, "interval": 5, - "verification_uri_complete": "https://auth.edbcloud.com/activate?user_code=HHHJ-MMSZ" + "verification_uri_complete": "https://auth.biganimal.com/activate?user_code=HHHJ-MMSZ" } ``` @@ -98,7 +98,7 @@ To authorize as a user: 2. Select **Confirm** on the Device Confirmation dialog. -3. Select **Continue with Microsoft Azure AD** on the EDB Cloud Welcome screen. +3. Select **Continue with Microsoft Azure AD** on the BigAnimal Welcome screen. 4. Log in with your Azure AD credentials. @@ -118,7 +118,7 @@ curl --request POST \ --data "client_id=$CLIENT_ID" ``` If successful, the call returns: -- `access_token` - use to exchange for the token to access EDB Cloud API. +- `access_token` - use to exchange for the token to access BigAnimal API. - `refresh_token` - use to obtain a new access token or ID token after the previous one has expired. (See [Refresh tokens](https://auth0.com/docs/tokens/refresh-tokens) for more information.) Refresh tokens expire after 30 days. @@ -144,7 +144,7 @@ REFRESH_TOKEN="v1.MTvuZpu.......sbiionEhtTw" ``` !!!note -The access token obtained at this step is only used in the next step to exchange the EDB Cloud token. +The access token obtained at this step is only used in the next step to exchange the BigAnimal token. !!! If not successful, you receive one of the following errors: @@ -154,22 +154,22 @@ If not successful, you receive one of the following errors: - `expired_token` - you have not authorized the device quickly enough, so the `device_code` has expired. Your application should notify you that it has expired and to restore it. - `access_denied` -## Exchange the EDB Cloud token using `curl` +## Exchange the BigAnimal token using `curl` !!!note The `get-token` script executes this step. You don't need to make this call if you are using the script. !!! 
-Use the raw token you obtained in the previous step [Request the raw token using `curl`](#request-the-raw-token-using-curl) to get the EDB Cloud token: +Use the raw token you obtained in the previous step [Request the raw token using `curl`](#request-the-raw-token-using-curl) to get the BigAnimal token: ``` curl -s --request POST \ - --url "https://portal.edbcloud.com/api/v1/auth/token" \ + --url "https://portal.biganimal.com/api/v1/auth/token" \ --header "content-type: application/json" \ --data "{\"token\":\"$RAW_ACCESS_TOKEN\"}" ``` If successful, the call returns: -- `token` - The bearer token used to access the EDB Cloud API. +- `token` - The bearer token used to access the BigAnimal API. For example: ``` @@ -178,7 +178,7 @@ For example: } ``` -This token, as opposed to the raw access token, is recognized by the EDB Cloud API. +This token, as opposed to the raw access token, is recognized by the BigAnimal API. Store this token in environment variables for future use. For example: ``` @@ -186,12 +186,12 @@ ACCESS_TOKEN="eyJhbGc.......0HFkr_19Vr7w" ``` !!! Tip -Contact [Customer Support](../overview/06_support) if you have trouble obtaining a valid access token to access EDB Cloud API. +Contact [Customer Support](../overview/06_support) if you have trouble obtaining a valid access token to access BigAnimal API. !!! ## Calling the API -To call the EDB Cloud API, your application must pass the retrieved access token as a bearer token in the Authorization header of your HTTP request. For example: +To call the BigAnimal API, your application must pass the retrieved access token as a bearer token in the Authorization header of your HTTP request. For example: ``` curl --request GET \ @@ -262,7 +262,7 @@ RAW_ACCESS_TOKEN="eyJhbGc.......1Qtkaw2fyho" REFRESH_TOKEN="v1.MTvuZpu.......sbiionEhtTw" ``` -The token you obtain from this step is the raw access token, you need to exchange this token for an EDB Cloud token. See [Exchange for EDB Cloud token](#exchange-the-edbcloud-token-using-curl) for more information. +The token you obtain from this step is the raw access token, you need to exchange this token for an BigAnimal token. See [Exchange for BigAnimal token](#exchange-the-biganimal-token-using-curl) for more information. !!! Note You need to save the refresh token retrieved from this response for the next refresh call. The refresh token in the response when you originally [requested the token](#request-the-token) is obsoleted once it has been used. @@ -281,7 +281,7 @@ Before running the script, [query the authentication endpoint](#query-the-authen ### get-token usage ``` -Get Tokens for EDB Cloud API +Get Tokens for BigAnimal API Usage: ./get-token.sh [flags] [options] @@ -293,7 +293,7 @@ Usage: the next use -h, --help show this help message -Reference: https://www.enterprisedb.com/docs/edbcloud/latest/reference/ +Reference: https://www.enterprisedb.com/docs/biganimal/latest/reference/ ``` ### Request your token using `get-token` @@ -302,8 +302,8 @@ To use the `get-token` script to get your tokens, use the script without the `-- ``` ./get-token.sh -o plain Please login to -https://edbcloud.us.auth0.com/activate?user_code=ZMNX-VVJT -with your EDB Cloud account +https://biganimal.us.auth0.com/activate?user_code=ZMNX-VVJT +with your BigAnimal account Have you finished the login successfully? 
(y/N) y ####### Access Token ################ diff --git a/product_docs/docs/edbcloud/beta/using_cluster/01_postgres_access.mdx b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access.mdx similarity index 93% rename from product_docs/docs/edbcloud/beta/using_cluster/01_postgres_access.mdx rename to product_docs/docs/biganimal/release/using_cluster/01_postgres_access.mdx index 85c414e34df..2d25254cdf0 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/01_postgres_access.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/01_postgres_access.mdx @@ -7,7 +7,7 @@ The `edb_admin` database role and `edb_admin` database created during the _Creat To create a new role and database, first connect using `psql`: ``` -psql -W "postgres://edb_admin@xxxxxxxxx.xxxxx.edbcloud.io:5432/edb_admin?sslmode=require" +psql -W "postgres://edb_admin@xxxxxxxxx.xxxxx.biganimal.io:5432/edb_admin?sslmode=require" ``` ## Notes on the edb_admin role @@ -15,10 +15,10 @@ The `edb_admin` role does not have superuser priviledges by default. You should superuser setting changes are not retained after a restart. To avoid issues, do not run the system out of superuser connections. -You have to remember your `edb_admin` password as EDB does not have access to it. If you forget it, you can set a new one in the EDB Cloud portal on the **Edit Cluster** page. +You have to remember your `edb_admin` password as EDB does not have access to it. If you forget it, you can set a new one in the BigAnimal portal on the **Edit Cluster** page. Don't use the `edb_admin` user or the `edb_admin` database in your applications. Instead, use `CREATE USER; GRANT; CREATE DATABASE.` -EDB Cloud stores all database-level authentication securely and directly in PostgreSQL. The `edb_admin` user password is SCRAM-SHA-256 hashed prior to storage. This hash, even if compromised, cannot be replayed by an attacker to gain access to the system. +BigAnimal stores all database-level authentication securely and directly in PostgreSQL. The `edb_admin` user password is SCRAM-SHA-256 hashed prior to storage. This hash, even if compromised, cannot be replayed by an attacker to gain access to the system. ## One database with one application @@ -43,7 +43,7 @@ Using this example, the username and database in your connection string would be ## One database with multiple schemas -If a single database is used to host multiple schemas, create a database owner and then roles and schemas for each application. The example in the following steps shows creating two database roles and two schemas. The default `search_path` for database roles in EDB Cloud is `"$user",public`. If the role name and schema match, then objects in that schema will match first, and no `search_path` changes or fully qualifying of objects are needed. The [PostgreSQL documentation](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH) covers the schema search path in detail. +If a single database is used to host multiple schemas, create a database owner and then roles and schemas for each application. The example in the following steps shows creating two database roles and two schemas. The default `search_path` for database roles in BigAnimal is `"$user",public`. If the role name and schema match, then objects in that schema will match first, and no `search_path` changes or fully qualifying of objects are needed. The [PostgreSQL documentation](https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH) covers the schema search path in detail. 
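The numbered steps that follow walk through this in detail. As a compressed, hypothetical sketch of the end state they produce (the role, schema, and database names here are illustrative only, and the hostname is the placeholder from the connection example above):

```
# Hypothetical sketch: create a database owner, two application roles, and the database.
# Membership grants let app_owner create schemas owned by the application roles.
psql "postgres://edb_admin@xxxxxxxxx.xxxxx.biganimal.io:5432/edb_admin?sslmode=require" <<'SQL'
CREATE USER app_owner PASSWORD 'change_me';
CREATE USER app1 PASSWORD 'change_me';
CREATE USER app2 PASSWORD 'change_me';
GRANT app1 TO app_owner;
GRANT app2 TO app_owner;
CREATE DATABASE app_db OWNER app_owner;
SQL

# Connect as the database owner and create one schema per role. Because each
# schema name matches its role name, the default search_path needs no changes.
psql "postgres://app_owner@xxxxxxxxx.xxxxx.biganimal.io:5432/app_db?sslmode=require" <<'SQL'
CREATE SCHEMA app1 AUTHORIZATION app1;
CREATE SCHEMA app2 AUTHORIZATION app2;
SQL
```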
1. Create a database owner and new database. For example, ``` diff --git a/product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx similarity index 93% rename from product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx rename to product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx index b280604d1ec..15a4ecb2a97 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/05_db_configuration_parameters.mdx @@ -13,7 +13,7 @@ The list of parameters is populated based on the type of database you selected o - For additional information on parameters, see [postgresqlco.nf](https://postgresqlco.nf/). !!!note -Not all database configuration parameters are supported by EDB Cloud. Some parameters, such as `wal_level` and `restore_command`, are reserved for EDB to provide the managed database features of EDB Cloud. +Not all database configuration parameters are supported by BigAnimal. Some parameters, such as `wal_level` and `restore_command`, are reserved for EDB to provide the managed database features of BigAnimal. !!! To modify a parameter, diff --git a/product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx similarity index 85% rename from product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/index.mdx rename to product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx index 29752e31f71..dec153db91a 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/03_modifying_your_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/03_modifying_your_cluster/index.mdx @@ -4,9 +4,9 @@ title: Modifying your cluster redirects: - ../03_modify_and_scale_cluster --- -1. Sign in to the [EDB Cloud](https://portal.edbcloud.com) portal. +1. Sign in to the [BigAnimal](https://portal.biganimal.com) portal. -1. From the [**Clusters**](https://portal.edbcloud.com/clusters) page, select the name of the cluster you want to edit. +1. From the [**Clusters**](https://portal.biganimal.com/clusters) page, select the name of the cluster you want to edit. 2. Select **Edit Cluster** from the top right corner of the **Cluster Info** panel. diff --git a/product_docs/docs/edbcloud/beta/using_cluster/04_backup_and_restore.mdx b/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx similarity index 85% rename from product_docs/docs/edbcloud/beta/using_cluster/04_backup_and_restore.mdx rename to product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx index 079498cd5be..501d77a1eb9 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/04_backup_and_restore.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/04_backup_and_restore.mdx @@ -4,13 +4,13 @@ title: "Backing up and restoring" ### Backups -EDB Cloud automatically backs up the data in your PostgreSQL clusters. Backups are stored in Azure Blob Storage, in the "hot" access tier with geo-zone-redundant storage (GZRS). 
You are responsible for the charges associated with backup storage; see [Azure Blob Storage documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/) for more information. +BigAnimal automatically backs up the data in your PostgreSQL clusters. Backups are stored in Azure Blob Storage, in the "hot" access tier with geo-zone-redundant storage (GZRS). You are responsible for the charges associated with backup storage; see [Azure Blob Storage documentation](https://docs.microsoft.com/en-us/azure/storage/blobs/) for more information. -PostgreSQL clusters in EDB Cloud are continuously backed up through a combination of base backups and transaction log (WAL) archiving. When a new cluster is created, an initial "base" backup is taken; thereafter, every time a WAL file is closed - by default, every 5 minutes - it is automatically uploaded to Azure Blob Storage. +PostgreSQL clusters in BigAnimal are continuously backed up through a combination of base backups and transaction log (WAL) archiving. When a new cluster is created, an initial "base" backup is taken; thereafter, every time a WAL file is closed - by default, every 5 minutes - it is automatically uploaded to Azure Blob Storage. ### Restores -In the event a restore is necessary - for example, in case of an accidental `DROP TABLE` statement - clusters can be restored to any point in time as long as backups are retained in Azure Blob Storage. Currently EDB Cloud does not age out backups, so clusters can be restored to any time since cluster creation. +In the event a restore is necessary - for example, in case of an accidental `DROP TABLE` statement - clusters can be restored to any point in time as long as backups are retained in Azure Blob Storage. Currently BigAnimal does not age out backups, so clusters can be restored to any time since cluster creation. Cluster restores are not performed "in-place" on an existing cluster. Instead, a new cluster is created and initialized with data from the backup archive. Restores must re-play the transaction logs between the most recent full database backup and the target restore point. Thus restore times (i.e., "RTO") are dependent on the write activity in the source cluster. @@ -18,7 +18,7 @@ You can restore backups into a new cluster in the same region. #### Performing a cluster restore -1. Select the cluster you wish to restore on the [**Clusters**](https://portal.edbcloud.com/clusters) page in the [EDB Cloud](https://portal.edbcloud.com) portal. +1. Select the cluster you wish to restore on the [**Clusters**](https://portal.biganimal.com/clusters) page in the [BigAnimal](https://portal.biganimal.com) portal. 2. From **Quick Actions**, select **Restore**. @@ -30,6 +30,6 @@ You can restore backups into a new cluster in the same region. 4. Review your selections in the **Cluster Summary** and select **Restore Cluster** to begin the restore process. -5. The new cluster is now available on the [**Clusters**](https://portal.edbcloud.com/clusters) page. +5. The new cluster is now available on the [**Clusters**](https://portal.biganimal.com/clusters) page. 
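Once the new cluster is listed, a quick check from `psql` can confirm that the restore reached the point in time you expected. This is only a sketch: the hostname below is a placeholder for the value shown on the new cluster's **Connect** tab, and the follow-up query is whatever check makes sense for your own data.

```
$ psql -W "postgres://edb_admin@<new-cluster-hostname>:5432/edb_admin?sslmode=require" -c "\l"
```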
diff --git a/product_docs/docs/edbcloud/beta/using_cluster/05_monitoring_and_logging.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx similarity index 72% rename from product_docs/docs/edbcloud/beta/using_cluster/05_monitoring_and_logging.mdx rename to product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx index bd46bbe05f1..0dadd3f074c 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/05_monitoring_and_logging.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx @@ -2,13 +2,18 @@ title: "Monitoring and logging" --- +You can monitor your Postgres clusters by viewing the metrics and logs from Azure. Existing Postgres Enterprise Manager (PEM) users who want to monitor BigAnimal clusters alongside self-managed Postgres clusters can use the Remote Monitoring capability of PEM. For more information on using PEM to monitor your clusters, see [Remote Monitoring](../../../../../pem/latest/pem_admin/02a_pem_remote_monitoring). -EDB Cloud sends all metrics and logs from PostgreSQL clusters to Azure. This topic describes what metrics and logs are sent and how to view them. +The following sections describe viewing metrics and logs directly from Azure. + +## Viewing metrics and logs from Azure + +BigAnimal sends all metrics and logs from PostgreSQL clusters to Azure. The following describes which metrics and logs are sent and how to view them. ### Azure log analytics -When EDB Cloud deploys workloads on Azure, the logs from the PostgreSQL clusters are forwarded to the Azure Log Workspace. -To query EDB Cloud logs, you must use [Azure Log Analytics](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview) and [Kusto Query language](https://azure-training.com/azure-data-science/the-kusto-query-language/). +When BigAnimal deploys workloads on Azure, the logs from the PostgreSQL clusters are forwarded to the Azure Log Workspace. +To query BigAnimal logs, you must use [Azure Log Analytics](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview) and [Kusto Query language](https://azure-training.com/azure-data-science/the-kusto-query-language/). @@ -21,7 +26,7 @@ All logs from your PostgreSQL clusters are stored in the _Customer Log Analytics 2. Select **Resource Groups**. -2. Select the Resource Group corresponding to the region where you choose to deploy your EDB Cloud cluster. You will see resources included in that Resource Group. +2. Select the Resource Group corresponding to the region where you chose to deploy your BigAnimal cluster. You will see resources included in that Resource Group. 3. Select the resource of type _Log Analytics workspace_ with the suffix -customer. @@ -62,7 +67,7 @@ To view logs from your PostgreSQL clusters using Shared Dashboard: 2. Select **Resource Groups**. -2. Select the Resource Group corresponding to the region where you choose to deploy your EDB Cloud cluster. You will see resources included in that Resource Group. +2. Select the Resource Group corresponding to the region where you chose to deploy your BigAnimal cluster. You will see resources included in that Resource Group. 3. Select the resource of type _Shared Dashboard_ with the suffix -customer.
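If you prefer to query the Log Analytics workspace from a terminal instead of the portal, the Azure CLI can run Kusto queries directly. The sketch below carries some assumptions: it requires the `log-analytics` CLI extension, `<workspace-id>` stands for the workspace ID (GUID) of the *-customer* workspace found above, and the query is a generic placeholder rather than a BigAnimal-specific table name.

```
$ az extension add --name log-analytics
$ az monitor log-analytics query \
    --workspace <workspace-id> \
    --analytics-query "search * | take 10" \
    -o table
```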
diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx new file mode 100644 index 00000000000..2d4ec92b1e5 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/01_private_endpoint.mdx @@ -0,0 +1,207 @@ +--- +title: Private Endpoint example +--- + +These are the steps to connect using Azure Private Endpoint. + +Assume that your cluster is on a subscription called `development` and is being accessed from a Linux client VM on another subscription called `test` with the following properties: + +- Cluster: + - Subscription: `development` + - Cluster ID: `p-c4j0jfcmp3af2ieok5eg` + - Account ID: `brcxzr08qr7rbei1` + - Organization's domain name: `biganimal.io` + + +- Linux client VM called `vm-client`: + - Subscription: `test` + - Resource group: `rg-client` + - Virtual network: `vnet-client` + - Virtual network subnet: `snet-client` + +#### Prerequisites + +To walk through an example in your own environment, you need: + +- Your cluster URL. You can find the URL in the **Connect** tab of your cluster instance in the BigAnimal portal. + +- The IP address of your cluster. You can find the IP address of your cluster using the following command: + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.218 + ``` + +- A Postgresql client, such as [psql](https://www.postgresql.org/download/), installed on your client VM. + +#### Step 1: Create an Azure Private Link service for your cluster + + In this example, you will create an [Azure Private Link service](https://docs.microsoft.com/en-us/azure/private-link/private-link-service-overview) in your cluster's resource group. You must perform this procedure for every cluster that you want to connect to in Azure. + + 1. Get the resource group details from the Azure CLI or the Azure portal and note down the resource group name. For example, if the cluster's virtual network is `vnet-japaneast`, use the following command: + + ``` + $ az network vnet list --query "[?name==\'vnet-japaneast\'].resourceGroup" -o json + + ``` + + 1. On the upper-left part of the page in the Azure portal, select **Create a resource**. + + 1. Search for **Private Link** in the **Search the Marketplace** box. + + 1. Select **Create**. + + 1. Enter the details for the Azure Private Link. Use a unique name for the Azure Private Link. + + For example, `p-c4j0jfcmp3af2ieok5eg-service-private-link`, where `p-c4j0jfcmp3af2ieok5eg` is the cluster ID. + + ![Create private link service](../images/image5.png) + + 1. Enter the resource group name obtained in step 1. + + 3. In the **Outbound settings** page, select the `kubernetes-internal` load balancer + and select the IP address of your cluster in the **Load balancer frontend IP + address** field. + + You can get the IP address of your cluster with the following command: + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.218 + ``` + + ![Outbound settings](../images/image1.png) + + 4. On the **Access security** page, configure the level of access for the private link service. See [control service exposure](https://docs.microsoft.com/en-us/azure/private-link/private-link-service-overview#control-service-exposure) for details. + + ![](../images/image21.png) + + !!! 
Note + If the required access is not provided to the account or subscription accessing the cluster, you must manually approve the connection request from the **Pending connections** page in **Private Link Center**. + + + 5. After the private link service is created, note down its alias. The alias is the unique ID for your private service, which can be shared with the service consumers. Obtain the alias either from the Azure portal or by using the following CLI command: + + ``` + $ az network private-link-service list --query "[?name=='p-c4j0jfcmp3af2ieok5eg-service-private-link'].alias" -o tsv + p-c4j0jfcmp3af2ieok5eg-service-private-link.48f26b42-45dc-4e80-8e3d-307d58d7d274.japaneast.azure.privatelinkservice + ``` + + 6. Select **Review + Create**. + + 7. Select **Create**. + +#### Step 2: Create an Azure Private Endpoint in each client virtual network + + In this example, you will create an Azure Private Endpoint in your client VM's virtual network. After you have created the private endpoint, you can use its private IP address to access the cluster. You must perform this procedure for every virtual network you wish to connect from. + + 1. On the upper-left side of the screen in the Azure portal, select **Create a resource > Networking > Private Link** or in the search box enter "Private Link". + + 1. Select **Create**. + + 1. In Private Link Center, select **Private endpoints** in the menu on the left. + + 1. In Private endpoints, Select **Add**. + + 1. Enter the details for the private endpoint as shown in the following screenshot. Use a unique name for the private endpoint. + + For example, `vnet-client-private-pg-service`, where `vnet-client` is the client VNet ID. + + !!! Note + In a later step, you will require the private endpoint's name to get its private IP address. + + ![](../images/image12.png) + + 1. Connect the private endpoint to the private link service that we created by entering its alias. + + ![](../images/image17.png) + + 1. In the **Configuration** page, enter the client VM's Virtual Network `vnet-client`. + + 1. Select **Review + Create**. + + 7. Select **Create**. + + !!! Note + If the private endpoint's status appears as **Pending**, your account or subscription might not be authorized to access the private link service. + + To resolve this issue, the connection must be manually approved from the **Pending connections** page in **Private Link Center**, from the BigAnimal Azure subscription. + + ![](../images/image3.png) + + +11. You have now successfully built a tunnel between your client VM's virtual network and the cluster. You can now access the cluster from the private endpoint in your client VM. The private endpoint's private IP address is associated with an independent virtual network NIC. Get the private endpoint's private IP address using the following commands: + + ``` + $ NICID=$(az network private-endpoint show -n vnet-client-private-pg-service -g rg-client --query "networkInterfaces[0].id" -o tsv) + $ az network nic show -n ${NICID##*/} -g rg-client --query "ipConfigurations[0].privateIpAddress" -o tsv + 100.64.111.5 + ``` + +12. From the client VM `vm-client`, access the cluster by using the private IP address: + + ``` + $ psql -h 100.64.111.5 -U edb_admin + + Password for user edb_admin : + + psql (13.4 (Ubuntu 13.4-1.pgdg20.04+1), server 13.4.8 (Debian 13.4.8-1+deb10)) + WARNING : psql major version 13, server major version 13. Some psql features might not work. 
+ SSL connection (protocol : TLSV1.3, cipher : TLS_AES_256_GCM_SHA384, bits : 256, compression : off) Type "help" for help. + + edb_admin=> + + ``` + +#### Step 3: Create an Azure Private DNS Zone for the private endpoint + +EDB strongly recommends using an [Azure Private DNS Zone](https://docs.microsoft.com/en-us/azure/dns/private-dns-privatednszone) with the private endpoint to establish a connection with a cluster, because it is not possible to validate TLS certificates using `verify-full` when connecting to an IP address. + +With a Private DNS Zone you configure a DNS entry for your cluster's public hostname and Azure DNS ensures that all requests to that domain name from your VNet resolve to the private endpoint's IP address instead of the cluster's IP address. + +!!! Note + You need to create a single Azure Private DNS Zone for each VNet, even if you are connecting to multiple clusters. If you've already created a DNS Zone for this VNet, you can skip to step 6. + + +1. In the Azure portal search for "Private DNS Zones". + +1. Select **Private DNS zone**. + +1. Select **Create private DNS zone**. + +1. Create a private DNS zone using your organization's domain name as an apex domain. The organization's domain name must be unique to your BigAnimal organization. For example, `biganimal.io`. + + ![](../images/image6.png) + +1. Select the **Virtual network** link on the **Private DNS Zone** page of `brcxzr08qr7rbei1.biganimal.io` and link the private DNS Zone to the client VM's virtual network +`vnet-client`. + + ![](../images/image10.png) + +1. Add a new record for the private endpoint. The address is a private IP address - the one created with the private endpoint in the previous step. + + ![](../images/image4.png) + +1. You can now access your cluster with this private domain name. + + ``` + $ dig +short p-c4iabjleig40jngmac40.brcxzr08qr7rbei1.biganimal.io + 10.240.1.123 + + $ psql -h p-c4iabjleig40jngmac40.brcxzr08qr7rbei1.biganimal.io -U edb_admin + Password for user edb_admin: + + psql (13.4 (Ubuntu 13.4-1.pgdg28.84+1), server 13.4.8 (Debian 13.4.8-1+deb10)) + WARNING : psql major version 13, server major version 13. Some psql features might not work. + SSL connection (protocol : TLSV1.3cipherTLS_AES_256_GCM_SHA384, bits : 256, compression : off) Type "help" for help. + + edb_admin=> + + ``` + + !!! Tip + You might need to flush your local DNS cache to resolve your domain name to the new private IP address after adding the private endpoint. For example, on Ubuntu run the following command: + + ``` + $ sudo systemd-resolve --flush-caches + ``` diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx new file mode 100644 index 00000000000..0c1d788df1c --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/02_virtual_network_peering.mdx @@ -0,0 +1,78 @@ +--- +title: Virtual network peering example +--- + +These are the steps to connect using virtual network peering. + +!!! Note + The IP ranges of two peered virtual networks cannot overlap. BigAnimal VNets use the 10.240.0.0/16 address space and cannot be peered with VNets utilizing this same space. 
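Before adding the peering, it can be worth confirming that your own virtual network's address space does not overlap with 10.240.0.0/16. A quick check with the Azure CLI (using the example VNet and resource group names from below; the output shown is hypothetical) might look like:

```
# List the client VNet's address prefixes and confirm none fall inside 10.240.0.0/16.
$ az network vnet show -n vnet-client -g rg-client \
    --query "addressSpace.addressPrefixes" -o tsv
10.20.0.0/16
```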
+ +Assume that your cluster is on a subscription called `development` and is being accessed from a Linux client VM on another subscription called `test` with the following properties: + +- Cluster: + - Subscription: `development` + - Cluster ID: `p-c4j0jfcmp3af2ieok5eg` + - Account ID: `brcxzr08qr7rbei1` + - Organization's domain name: `biganimal.io` + + +- Linux client VM called `vm-client`: + - Subscription: `test` + - Resource group: `rg-client` + - Virtual network: `vnet-client` + - Virtual network subnet: `snet-client` + +#### Prerequisites + +To walk through an example in your own environment, you need: + +- Your cluster URL. You can find the URL in the **Connect** tab of your cluster instance in the BigAnimal portal. + +- The IP address of your cluster. You can find the IP address of your cluster using the following command: + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.218 + ``` + +- A Postgresql client, such as [psql](https://www.postgresql.org/download/), installed on your client VM. + +#### Step 1: Create a virtual network peering link + +You need to add two peering links, one from the client VM's VNet `vnet-client` and the other from your cluster's VNet `vnet-japaneast`. + +!!! Note + In this example, you create virtual network peering for virtual networks that belong to subscriptions in the same Azure Active Directory tenants. For steps to create virtual network peering for virtual networks that belong to subscriptions in different Azure Active Directory tenants, see [peering virtual networks from different Azure Active Directory tenants](https://docs.microsoft.com/azure/virtual-network/create-peering-different-subscriptions). + +1. In the Azure portal, search for "Virtual networks". When "Virtual networks" appear in the search results, select it. Don't select "Virtual networks (classic)", as you can't create a peering from a virtual network deployed through the classic deployment model. + +1. Select the client VM's Virtual Network `vnet-client` from the list that you want to create a peering for. + +1. Select **Peerings** under Settings and then select **+ Add**. + +1. From the **Peerings** page of the client VM's Virtual Network `vnet-client`, add two peering links called `peer-client-edb` and `peer-edb-client`, to join the address space of two virtual networks together. + + To simplify the process, Azure creates both peering links for you when you add peering from either side. + + ![](../images/image25.png) + + ![](../images/image7.png) + +#### Step 2: Access the cluster + +Access the cluster with its domain name from your cluster's connection string. It is accessible from `vnet-client` after peering. + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.123 + + $ psql -h p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io -U edb_admin + Password for user edb_admin: + + psql (13.4 (Ubuntu 13.4-1.pgdg28.84+1), server 13.4.8 (Debian 13.4.8-1+deb10)) + WARNING : psql major version 13, server major version 13. Some psql features might not work. + SSL connection (protocol : TLSV1.3cipherTLS_AES_256_GCM_SHA384, bits : 256, compression : off) Type "help" for help. 
+ + edb_admin=> + + ``` diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx new file mode 100644 index 00000000000..3beb1b65582 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/03_vnet_vnet.mdx @@ -0,0 +1,122 @@ +--- +title: VNet-VNet example +--- + + +These are the steps to connect using VNet-VNet connections. + +To use this method, you need to create [Azure VPN Gateways](https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways) in each connected virtual network. + +!!! Note + VPN gateway creation can take up to 45 minutes. + +Assume that your cluster is on a subscription called `development` and is being accessed from a Linux client VM on another subscription called `test` with the following properties: + +- Cluster: + - Subscription: `development` + - Cluster ID: `p-c4j0jfcmp3af2ieok5eg` + - Account ID: `brcxzr08qr7rbei1` + - Organization's domain name: `biganimal.io` + + +- Linux client VM called `vm-client`: + - Subscription: `test` + - Resource group: `rg-client` + - Virtual network: `vnet-client` + - Virtual network subnet: `snet-client` + +#### Prerequisites + +To walk through an example in your own environment, you need: + +- Your cluster URL. You can find the URL in the **Connect** tab of your cluster instance in the BigAnimal portal. + +- The IP address of your cluster. You can find the IP address of your cluster using the following command: + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.218 + ``` + +- A Postgresql client, such as [psql](https://www.postgresql.org/download/), installed on your client VM. + +#### Step 1: Create a VPN gateway for the cluster's virtual network +1. In the Azure portal, enter "Virtual network gateway" in the search box. Locate "Virtual network gateways" in the search results and select it. + +1. On the Virtual network gateways page, select **+ Create**. This opens the **Create virtual network gateway** page. + +1. On the **Create virtual network gateway** page, create the VPN gateway for the cluster's resource virtual network `vnet-japaneast`. Name the VPN gateway `vpng-biganimal`. + + ![](../images/image8.png) + +!!! Note + The VPN gateway automatically creates a dedicated subnet to accommodate its gateway VMs. Ensure that your cluster's virtual network address space has sufficient IP range for the subnet to prevent errors in the virtual network. For more information, see [Plan virtual networks](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-vnet-plan-design-arm#subnets). + +#### Step 2: Create a VPN gateway for the client VM virtual network + +In the same way, create the gateway for your client VM virtual network `vnet-client`. Name the client VPN gateway `vpng-client`. + +#### Step 3: Add a gateway connection between the two VPN gateways + +Use the Azure CLI (or [PowerShell](https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps)) to add a VPN gateway connection from `vpng-biganimal`, as follows: + +!!! Note + The Azure portal only allows you to create VPN gateway connections between virtual networks belonging to the same subscription. + +1. Get the VPN gateway ID of `vpng-client`.
+ + **From the BigAnimal subscription**: + + ``` + $ az network vnet-gateway show -n vpng-biganimal -g brCxzr08qr7RBEi1-rg-japaneast-management --query "[id]" -otsv + subscriptions/.../vpng-biganimal + ``` + + **From the client VM's subscription**: + + ``` + $ az network vnet-gateway show -n vpng-client -g rg-client --query "[id]" -o tsv + /subscriptions/.../vpng-client + ``` + +2. From the BigAnimal subscription, create a connection from `vpng-biganimal` to `vpng-client`. + + ``` + $ az network vpn-connection create -n vpnc-biganimal-client -g brCxzr08qr7RBEi1-rg-japaneast-management --vnet-gateway1 /subscriptions/.../vpng-biganimal -l japaneast --shared-key "a_very_long_and_complex_psk" \--vnet-gateway2 /subscriptions/.../vpng-client + + ``` + + Note down the value for `--shared-key`. It is the pre-shared key (PSK) that authenticates both sides of the connection; you need the same value in the next step. + +3. From the client VM's subscription, create another connection from `vpng-client` to `vpng-biganimal`. + + ``` + $ az network vpn-connection create -n vpnc-client-biganimal -g rg-client --vnet-gateway1 /subscriptions/.../vpng-client -l japaneast --shared-key "a_very_long_and_complex_psk" --vnet-gateway2 /subscriptions/.../vpng-biganimal + ``` + +#### Step 4: Verify the connection + +1. After a few minutes, verify the gateway connection status from either virtual network with the following command: + + ``` + $ az network vpn-connection show --name vpnc-client-biganimal -g rg-client --query "[connectionStatus]" -o tsv +Connected + ``` + +2. Verify the connectivity to the cluster: + + ``` + $ dig +short p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io + 10.240.1.123 + + $ psql -h p-c4j0jfcmp3af2ieok5eg.brcxzr08qr7rbei1.biganimal.io -U edb_admin + Password for user edb_admin: + + psql (13.4 (Ubuntu 13.4-1.pgdg28.84+1), server 13.4.8 (Debian 13.4.8-1+deb10)) + WARNING : psql major version 13, server major version 13. Some psql features might not work. + SSL connection (protocol : TLSV1.3cipherTLS_AES_256_GCM_SHA384, bits : 256, compression : off) Type "help" for help. + + edb_admin=> + + ``` + diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/index.mdx new file mode 100644 index 00000000000..50649630f16 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/01_connecting_from_azure/index.mdx @@ -0,0 +1,51 @@ +--- +title: Connecting from Azure +--- + +There are three different methods to connect to your cluster from your application's virtual network in Azure. Each method offers different levels of accessibility and security. The Azure Private Endpoint method is recommended and is most commonly used; however, you can implement either of the other solutions depending on your organization's requirements. + +- [Azure Private Endpoint (recommended)](#azure-private-endpoint-recommended) +- [Virtual network peering](#virtual-network-peering) +- [Azure VNet-VNet connection](#azure-vnet-vnet-connection) + +## Azure Private Endpoint (recommended) + + Azure Private Endpoint is a network interface that securely connects a private IP + address from your Azure Virtual Network (VNet) to an external service. You only grant access to a single cluster instead of the entire BigAnimal resource virtual network, thus ensuring maximum network isolation.
Private Endpoints are the same mechanism used by first-party Azure services such as CosmosDB for private VNet connectivity. For more information, see [What is an Azure Private Endpoint?](https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-overview). + +**Pros** + - The Private Link service only needs to be configured once; multiple Private Endpoints can then be used to connect applications from many different VNets. + - There is no risk of IP address conflicts. + +**Cons** + - Private Links (required by Private Endpoints) are not free. See [Azure Private Link pricing](https://azure.microsoft.com/en-us/pricing/details/private-link/#pricing). + +See the [Private Endpoint Example](01_private_endpoint) for the steps to connect using this method. + +## Virtual network peering + +Virtual network peering connects two Azure Virtual Networks, allowing traffic to be freely routed between the two. Once peered, the two virtual networks act as one with respect to connectivity. Network Security Group rules are still observed. + +**Pros** +- Simple and easy to set up. + +**Cons** +- There is an associated cost. See [pricing for virtual network peering](https://azure.microsoft.com/en-us/pricing/details/virtual-network/#pricing) for details. +- The IP ranges of two peered virtual networks cannot overlap. BigAnimal VNets use the 10.240.0.0/16 address space and cannot be peered with VNets utilizing this same space. + +See the [Virtual Network Peering Example](02_virtual_network_peering) for the steps to connect using virtual network peering. + +## Azure VNet-VNet connection + +VNet-VNet connections use VPN gateways to send encrypted traffic between Azure virtual networks. + +**Pros** +- Cluster domain name is directly accessible without a NAT. +- VNets from different subscriptions do not need to be associated with the same Active Directory tenant. + +**Cons** +- Bandwidth is limited; see the [virtual network gateway planning table](https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways#planningtable). +- Configuration is complicated. +- There is an associated cost; see the [virtual network gateway planning table](https://docs.microsoft.com/en-us/azure/vpn-gateway/vpn-gateway-about-vpngateways#planningtable). + +See the [VNet-VNet Example](03_vnet_vnet) for the steps to connect using a VNet-VNet connection.
diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image1.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image1.png new file mode 100644 index 00000000000..b5452bd2484 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b47bbc0d70d9bf1a7ae3163414fef650cdfc9469b38a6cb78488403c6e220841 +size 201346 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image10.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image10.png new file mode 100644 index 00000000000..b08fa351449 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image10.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f4ad42ae4b709330c333352d61b755b66138919e922f6d0813bca0135204e66 +size 101250 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image11.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image11.png new file mode 100644 index 00000000000..67399577b2a --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image11.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:58adc7d628f2243adb12c7c09f619315948cccd5e5dd0761c10afc690dbc721d +size 120181 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image12.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image12.png new file mode 100644 index 00000000000..f38df9bb43a --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image12.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5616e7e35acf6e5ca68a7c67b40ab61bf1f0479c6ed633bb70edf1c4258d634b +size 52016 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image13.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image13.png new file mode 100644 index 00000000000..1af0ba466ed --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image13.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77632225eb040d6bd7714c257db4c1fc7c3acdf17bdfc60ab4cdba5d605c8671 +size 52393 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image14.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image14.png new file mode 100644 index 00000000000..17f56d9c9cf --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image14.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:89fff9cd8bfe708c79e9c79246dba6d054c8f5dce927ad8e556a1e502bf8c195 +size 14453 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image15.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image15.png new file mode 100644 index 00000000000..72e0b4a14b9 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image15.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:b12d933ddc8c757deecb3d5d27f0bfb86a3b566acbc4371a9fc295daf1aab4a8 +size 348099 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image16.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image16.png new file mode 100644 index 00000000000..f65e3ffb69d --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image16.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:baa4cadb2ec0b7f0c3e68e00433a468a23524bb491beeca5b9e83df0bfcd72b0 +size 59115 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image17.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image17.png new file mode 100644 index 00000000000..15b63229877 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image17.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c601e6c6edc8969d2dc8576d08825ffddcd089ee2e361e7bbe807779d830dae2 +size 51029 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image18.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image18.png new file mode 100644 index 00000000000..f2e1a7c8655 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image18.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc3eaa18032353cb49db2d784de8975b889db2f55c7869b689c1dc6a140b6a27 +size 32633 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image19.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image19.png new file mode 100644 index 00000000000..3076ccc6956 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image19.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b7c27554a9470a375182cabc4dbb50f30c312430770d840db0762215893da4a +size 74313 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image20.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image20.png new file mode 100644 index 00000000000..ce7c7bf4f88 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image20.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d392c2e2c119b3efd7e0f739da9fb53c01af3ed063611c344995b6d72988ace8 +size 73731 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image21.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image21.png new file mode 100644 index 00000000000..fe208c6743c --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image21.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:951d1787fb5bfb456c2585dc446d5df0398b8f4df4bb58b2c4839764f09c4069 +size 189161 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image22.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image22.png new file mode 100644 index 00000000000..ed57ad222e5 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image22.png @@ -0,0 +1,3 
@@ +version https://git-lfs.github.com/spec/v1 +oid sha256:60913654ffb3b250edd40e059037efc81f9cb7c06e60b709b4ced491ec4e3020 +size 25468 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image24.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image24.png new file mode 100644 index 00000000000..97e21518406 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image24.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bd9bdd8296691033bf3dfe18ab7f6896c7357eccce5fef1f519186f64048ffb3 +size 158854 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image25.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image25.png new file mode 100644 index 00000000000..02d309024b7 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image25.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a68aecb56a588c51d31da34b9518d5ea529cff30f4225b06baa1eb1bb0a5b186 +size 128624 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image3.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image3.png new file mode 100644 index 00000000000..a33efdc3088 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3779cf2de554d15a684d8f0a705a7d85e7afc605e4b465d9867251e0abcee73d +size 93088 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image4.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image4.png new file mode 100644 index 00000000000..f08c994f81a --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image4.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:27fd2bfdb99bd21ed96c071a9882a8980f934c301a5f6a62d7657ca821ce0926 +size 55549 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image5.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image5.png new file mode 100644 index 00000000000..130df25d4e7 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image5.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2d5f214d29bb85e78ab6179d186cca1a26bd97e2825eb6602849fa2e68cc1414 +size 45010 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image6.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image6.png new file mode 100644 index 00000000000..97c8a340f46 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image6.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0b737d6e0a1bf6da7210c7fd8f995efd20275827836cf0c50385067d675ab112 +size 70442 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image7.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image7.png new file mode 100644 index 00000000000..c359a38c8b5 --- /dev/null +++ 
b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image7.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e57dfe30346e3ea1e45c0b692cb5027bd55f4618a67e8759fc4a2ddc43a077fd +size 142887 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image8.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image8.png new file mode 100644 index 00000000000..95fe657d978 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image8.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dcdd5055d3d21edfc25313dee1cefd736ca6edfbe1d91bf97266cba25ee7137b +size 175663 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image9.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image9.png new file mode 100644 index 00000000000..6dd38d36e2e --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/image9.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0efe293c857043126d9580a79c2eca0482af2a297207eea2876ea21d3bf73fa5 +size 18168 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site-download.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site-download.png new file mode 100644 index 00000000000..3cd955c7528 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site-download.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1fc452195361f3e3daa553f8cc87e38add5eb9ec2f4ee664b66339c147875c4c +size 110990 diff --git a/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site.png b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site.png new file mode 100644 index 00000000000..997950e095f --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/images/point-to-site.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cdf63b79ef05ba054d590a06407986ee67a5364d3cb4d328282afb23bda50b87 +size 233535 diff --git a/product_docs/docs/edbcloud/beta/using_cluster/02_connect_to_cluster.mdx b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/index.mdx similarity index 69% rename from product_docs/docs/edbcloud/beta/using_cluster/02_connect_to_cluster.mdx rename to product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/index.mdx index 32ea1a3fb9d..47ce260e249 100644 --- a/product_docs/docs/edbcloud/beta/using_cluster/02_connect_to_cluster.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/connecting_your_cluster/index.mdx @@ -1,8 +1,13 @@ --- title: "Connecting to your cluster" +redirects: + #adding hierarchy to the structure (Creating a Cluster topic nows has a child topic) so created a folder and moved the contents from 03_create_cluster to index.mdx + - ../02_connect_to_cluster/ --- -You can connect to your cluster using [`psql`](http://postgresguide.com/utilities/psql.html), the terminal-based client for Postgres, or another client. See [Recommended settings for SSL mode](#recommended-settings-for-ssl-mode) for EDB's recommendations for secure connections. 
+You can connect to your cluster using [`psql`](http://postgresguide.com/utilities/psql.html), the terminal-based client for Postgres, or another client. For additional security measures see: +- [Recommendations for Settings for SSL Mode](#recommended-settings-for-ssl-mode) +- [Using a private network to connect to your cluster](#setting-up-azure-infrastructure-to-connect-to-a-private-network-cluster) ## Using `psql` To connect to your cluster using `psql`: @@ -40,3 +45,12 @@ edb_admin=> \conninfo You are connected to database "edb_admin" as user "edb_admin" on host "xxxxxxxxx.xxxxx.edbcloud.io" at port "5432". SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) ``` + +## Setting up Azure infrastructure to connect to a private network cluster + +The Private Networking option offers a higher level of isolation and security by moving your cluster out of the public Internet. Clusters with Private Networking enabled, are by default not accessible from outside of your cluster's resource virtual network. You need to perform additional configuration steps to connect your applications in other parts of your Azure infrastructure to your clusters via private network links. + +!!! Note + EDB strongly discourages users from provisioning additional resources in the cluster's resource virtual network. + +You can connect to the private cluster from an application in Azure, see [Connecting from Azure](01_connecting_from_azure). This section also contains walkthrough examples to guide you through the different methods to connect to your cluster. \ No newline at end of file diff --git a/product_docs/docs/biganimal/release/using_cluster/index.mdx b/product_docs/docs/biganimal/release/using_cluster/index.mdx new file mode 100644 index 00000000000..464be68121b --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/index.mdx @@ -0,0 +1,7 @@ +--- +title: "Using your cluster" +--- + +In this section, account owners and contributors can learn how to connect, edit, scale, and monitor clusters through the BigAnimal portal. This section also provides information on how BigAnimal handles backup and restore. + + diff --git a/product_docs/docs/edbcloud/beta/using_cluster/index.mdx b/product_docs/docs/edbcloud/beta/using_cluster/index.mdx deleted file mode 100644 index 2454948ba59..00000000000 --- a/product_docs/docs/edbcloud/beta/using_cluster/index.mdx +++ /dev/null @@ -1,7 +0,0 @@ ---- -title: "Using your cluster" ---- - -In this section, account owners and contributors can learn how to connect, edit, scale, and monitor clusters through the EDB Cloud portal. This section also provides information on how EDB Cloud handles backup and restore. - - diff --git a/product_docs/docs/pem/8/pem_admin/02a_pem_remote_monitoring.mdx b/product_docs/docs/pem/8/pem_admin/02a_pem_remote_monitoring.mdx new file mode 100644 index 00000000000..9d558476d7e --- /dev/null +++ b/product_docs/docs/pem/8/pem_admin/02a_pem_remote_monitoring.mdx @@ -0,0 +1,30 @@ +--- +title: "Remote Monitoring" +--- + +Remote monitoring is monitoring your Postgres cluster using a PEM Agent residing on a different host. + +To remotely monitor a Postgres cluster with PEM, you must register the cluster with PEM, and bind a PEM agent. See [Registering a Server](02_registering_server/#registering_server) for more information. 
+ +The following scenarios require remote monitoring using PEM: + +- [Postgres cluster running on AWS RDS](../pem_admin/03_pem_define_aws_instance_connection/#monitoring-a-postgres-cluster-running-on-aws-rds) +- [Postgres cluster running on BigAnimal](../../../biganimal/latest/using_cluster/05_monitoring_and_logging/) + +PEM remote monitoring supports: + +| Feature Name | Remote Monitoring Supported? | Comments | +| ------------ | ---------------------------- | -------- | +| [Manage Charts](../pem_ent_feat/05_performance_monitoring_and_management/#using-the-manage-charts-tab) | Yes | | +| [System Reports](../pem_ent_feat/12_reports/#system-configuration-report) | Yes | | +| [Capacity Manager](../pem_ent_feat/06_capacity_manager) | Limited | There is no correlation between the Postgres cluster and operating system metrics. | +| [Manage Alerts](../pem_ent_feat/05_performance_monitoring_and_management/#alerting) | Limited | When you run an alert script on the Postgres cluster, it runs on the machine where the bound PEM Agent is running, and not on the actual Postgres cluster machine. | +| [Manage Dashboards](../pem_ent_feat/05_performance_monitoring_and_management/#using-dashboards-to-view-performance-information) | Limited | Some dashboards may not be able to show complete data. For example, the operating system information of the host where the Postgres cluster is running is not displayed as it is not available. | +| [Manage Probes](../pem_ent_feat/05_performance_monitoring_and_management/#probes) | Limited | Some of the PEM probes do not return information, and some of the functionality may be affected. For details about probe functionality, see the [PEM Agent Privileges](../pem_agent/03_managing_pem_agent/#agent-privileges). | +| [Postgres Expert](../pem_ent_feat/11_postgres_expert/) | Limited | The Postgres Expert provides partial information as operating system information is not available. | +| [Scheduled Tasks](../pem_online_help/04_toc_pem_features/15_pem_scheduled_task_tab/) | Limited | Scheduled tasks work only for Postgres clusters; scripts run on a remote Agent. | +| [Core Usage Reports](../pem_ent_feat/12_reports/#core-usage-report) | Limited | The Core Usage report doesn't show complete information. For example, the platform, number of cores, and total RAM aren't displayed. | +| [Audit Manager](../pem_ent_feat/07_audit_manager) | No | | +| [Log Manager](../pem_ent_feat/08_log_manager) | No | | +| [Postgres Log Analysis Expert](../pem_ent_feat/08_log_manager/#postgres-log-analysis-expert) | No | | +| [Tuning Wizard](../pem_ent_feat/10_tuning_wizard/) | No | | \ No newline at end of file diff --git a/product_docs/docs/pem/8/pem_admin/03_pem_define_aws_instance_connection.mdx b/product_docs/docs/pem/8/pem_admin/03_pem_define_aws_instance_connection.mdx index f8a2b5e9577..2bfd57f92ab 100644 --- a/product_docs/docs/pem/8/pem_admin/03_pem_define_aws_instance_connection.mdx +++ b/product_docs/docs/pem/8/pem_admin/03_pem_define_aws_instance_connection.mdx @@ -1,57 +1,30 @@ --- -title: "Defining and Monitoring Postgres instances on AWS" +title: "Defining and Monitoring Postgres clusters on AWS" legacyRedirectsGenerated: # This list is generated by a script.
If you need add entries, use the `legacyRedirects` key. - "/edb-docs/d/edb-postgres-enterprise-manager/user-guides/administrators-guide/8.0/pem_define_aws_instance_connection.html" --- -There are two scenarios in which you can monitor a Postgres instance on an AWS host with PEM. You can monitor a: +There are two scenarios in which you can monitor a Postgres cluster on an AWS host using PEM. You can monitor a: -- Postgres Instance running on AWS EC2 -- Postgres Instance running on AWS RDS +- Postgres cluster running on AWS EC2 +- Postgres cluster running on AWS RDS -## Monitoring a Postgres Instance Running on AWS EC2 +## Monitoring a Postgres Cluster Running on AWS EC2 -After creating a Postgres instance on AWS EC2, you can use the PEM server to register and monitor your instance. The following scenarios are currently supported: +After creating a Postgres cluster on AWS EC2, you can use the PEM server to register and monitor your cluster. The following scenarios are currently supported: -- Postgres instance and PEM Agent running on the same AWS EC2 and a PEM Server running on your local machine. -- Postgres instance and PEM Agent running on the same local machine and a PEM Server running on AWS EC2. -- Postgres instance and PEM Agent running on the same AWS EC2 and a PEM Server running in different AWS EC2. +- Postgres cluster and PEM Agent running on the same AWS EC2 instance and a PEM Server running on your local machine. +- Postgres cluster and PEM Agent running on the same local machine and a PEM Server running on AWS EC2. +- Postgres cluster and PEM Agent running on the same AWS EC2 instance and a PEM Server running in a different AWS EC2 instance. !!! Note In the first two scenarios, you must configure the VPN on AWS EC2 , so the AWS EC2 instance can access the `pem` database. Please contact your network administrator to setup the VPN if needed. -The PEM Agent running on AWS EC2 or on your local machine should be registered to the PEM Server. Please note that when registering the PEM Agent with the PEM Server you should use the hostname of AWS EC2 instance. For more details on registering the PEM Agent see, [PEM Self Registration](02_registering_server/#registering_server). +Since the PEM Agent is on a different host from the PEM Server, register the PEM Agent with the PEM Server first. Also, make sure to use the AWS EC2 instance hostname when registering the PEM Agent with the PEM Server. For more details on registering the PEM Agent, see [Registering an Agent](../pem_agent/02_registering_agent/). -You can register the Postgres instance running on AWS EC2 on PEM Server using the `Create - Server` dialog. For more details on registering the server using `Create - Server` dialog see, [Registering a Server](02_registering_server/#registering_server). Use the `PEM Agent` tab on the `Create - Server` dialog to bind the registered PEM Agent with the Postgres instance. +After you register the PEM Agent with the PEM Server and bind it to the Postgres cluster when adding the cluster to the PEM Server, you can monitor your Postgres cluster using PEM. -When the PEM Agent is registered to the PEM Server and your Postgres instance that is running on AWS EC2 is registered to the PEM Server, you can monitor your instance with PEM. +## Monitoring a Postgres Cluster Running on AWS RDS -## Monitoring a Postgres Instance Running on AWS RDS - -While creating an AWS RDS database, choose `PostgreSQL` when prompted for `Engine options`.
After creating a `Postgres(RDS)` instance on AWS, use `Create - Server` dialog to add the `Postgres(RDS)` instance to the PEM Server. Using this dialog you can describe a new server connection, bind the server to a PEM Agent, and display the server to the PEM browser tree control. - -For detailed information on the `Create - Server` dialog and configuration details for each tab, see [Registering a Server](02_registering_server/#registering_server). - -The `PEM Agent` tab in the `Create - Server` dialog must have the `Remote Monitoring` field set to `Yes` to monitor the `Postgres(RDS)` instance on AWS instance using PEM Server. - -![Create Server dialog - PEM Agent tab](../images/create_server_pem_agent_tab_remote_monitoring.png) - -As the PEM Agent will be monitoring the Postgres(RDS) AWS instance remotely, the functionality will be limited as described below: - -| Feature Name | Works with remote PEM Agent | Comments | -| ---------------------------- | --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Audit Manager | No | | -| Capacity Manager | Limited | There will be no correlation between the database server and operating system metrices. | -| Log Manager | No | | -| Manage Alerts | Limited | When you run an alert script on the database server, it will run on the machine where the bound PEM Agent is running, and not on the actual database server machine. | -| Manage Charts | Yes | | -| Manage Dashboards | Limited | Some dashboards may not be able to show complete data. For example, the operating system information of the database server will not be displayed as it is not available. | -| Manage Probes | Limited | Some of the PEM probes will not return information, and some of the functionalities may be affected. For details about probe functionality, see the [PEM Agent Guide](../pem_agent/). | -| Postgres Expert | Limited | The Postgres Expert will provide partial information as operating system information is not available. | -| Postgres Log Analysis Expert | No | The Postgres Log Analysis Expert will not be able to perform an analysis as it is dependent on the logs imported by log manager, which will not work as required. | -| Scheduled Tasks | Limited | Scheduled tasks will work only for database server; scripts will run on a remote Agent. | -| Tuning Wizard | No | | -| System Reports | Yes | | -| Core Usage Reports | Limited | The Core Usage report will not show complete information. For example, the platform, number of cores, and total RAM will not be displayed. | -| Managing BART | No | BART requires password less authentication between two machines, where database server and BART are installed. An AWS RDS instance doesn't allow to use host access. | +While creating an AWS RDS database, choose `PostgreSQL` when prompted for `Engine options`. See [Remote Monitoring](../pem_admin/02a_pem_remote_monitoring) for next steps. 
\ No newline at end of file diff --git a/scripts/source/config_sources.py b/scripts/source/config_sources.py index d8f23853510..af9930796c0 100644 --- a/scripts/source/config_sources.py +++ b/scripts/source/config_sources.py @@ -26,7 +26,7 @@ 'pgpool', 'postgis', 'slony', - 'edbcloud', + 'biganimal', ] BASE_OUTPUT = {} @@ -54,7 +54,7 @@ { 'index': '1q', 'name': 'Mongo Data Adapter', 'key': 'mongo_data_adapter', 'indent': True }, { 'index': '1r', 'name': 'MySQL Data Adapter', 'key': 'mysql_data_adapter', 'indent': True }, { 'index': '1s', 'name': 'Replication Server', 'key': 'eprs', 'indent': True }, - { 'index': '1t', 'name': 'EDB Cloud', 'key': 'edbcloud', 'indent': True }, + { 'index': '1t', 'name': 'BigAnimal', 'key': 'biganimal', 'indent': True }, ] print('Which sources would you like loaded when you run `npm run develop`?') diff --git a/src/pages/index.js b/src/pages/index.js index 7ebf92f8bb5..bcae160b7e6 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -108,9 +108,7 @@ const Page = () => ( - - EDB Cloud Database Service - + BigAnimal diff --git a/static/_redirects b/static/_redirects index 17f40dd6ee9..44d17e3ba28 100644 --- a/static/_redirects +++ b/static/_redirects @@ -91,6 +91,9 @@ /docs/odbc_connector/12.0.0.1/* /docs/odbc_connector/latest/ 301 /docs/odbc_connector/12.2.0.1/* /docs/odbc_connector/latest/ 301 +# BigAnimal +/docs/edbcloud/* /docs/biganimal/:splat 301 + # Super legacy redirects (Docs 0.5 -> 1.0) /docs/en/1.0/EDB_HA_SCALABILITY/* https://www.enterprisedb.com/edb-docs/d/edb-postgres-failover-manager/user-guides/high-availability-scalability-guide/3.2/:splat 301 /docs/en/1.0/EDB_Migration_Portal_v1.0/* https://www.enterprisedb.com/edb-docs/d/edb-postgres-migration-portal/user-guides/user-guide/1.0/:splat 301 From 9198a51a4fb6df4daacd7f921d4b36b5b04e8e33 Mon Sep 17 00:00:00 2001 From: Craig Ringer Date: Tue, 2 Nov 2021 13:49:51 +0800 Subject: [PATCH 2/2] doc(UPM-2321): Document CNP metrics exposed by BigAnimal List the CNP metrics exposed by BigAnimal Also provide some guidance on using those metrics and on the structure of metrics and logs entries. Note that this documentation change contains a section that is generated by a script. The script is indended to be hosted in the upm-substrate repo. It doesn't seem practical to add the script here and have it re-generate the automatically generated section on every run, so updating it is expected to be part of the BigAnimal release process for now. A comment in the Markdown tries to direct the reader to where the script lives. --- .../05_monitoring_and_logging.mdx | 108 +++- .../release/using_cluster/06_metrics.mdx | 598 ++++++++++++++++++ 2 files changed, 678 insertions(+), 28 deletions(-) create mode 100644 product_docs/docs/biganimal/release/using_cluster/06_metrics.mdx diff --git a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx index 0dadd3f074c..679b2670707 100644 --- a/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/05_monitoring_and_logging.mdx @@ -2,46 +2,96 @@ title: "Monitoring and logging" --- -You can monitor your Postgres clusters by viewing the metrics and logs from Azure. For existing Postgres Enterprise Manager (PEM) users who wish to monitor EDB Cloud clusters alongside self-managed Postgres clusters, you can use the remote Remote Monitoring capability of PEM. 
For more information on using PEM to monitor your clusters see [Remote Monitoring](../../../../../pem/latest/pem_admin/02a_pem_remote_monitoring). +You can monitor your Postgres clusters by viewing the metrics and logs from +Azure. -The following sections describe viewing metrics and logs directly from Azure. +For existing Postgres Enterprise Manager (PEM) users who wish to monitor +BigAnimal clusters alongside self-managed Postgres clusters, you can use the +Remote Monitoring capability of PEM. For more information on using PEM +to monitor your clusters see +[Remote Monitoring](../../../../../pem/latest/pem_admin/02a_pem_remote_monitoring). -## Viewing metrics and logs from Azure +The following sections describe how to access logs and metrics directly in the +Azure portal. -EDB Cloud sends all metrics and logs from PostgreSQL clusters to Azure. The following describes what metrics and logs are sent and how to view them. +Because every customer's needs are different, you can apply Azure +Monitor features to the supplied data streams to create insights tailored to +your workloads and business goals. -### Azure log analytics +Pre-defined dashboards and metrics queries are provided in Azure Monitor as a +starting point for exploring the available data. -When BigAnimal deploys workloads on Azure, the logs from the PostgreSQL clusters are forwarded to the Azure Log Workspace. -To query BigAnimal logs, you must use [Azure Log Analytics](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview) and [Kusto Query language](https://azure-training.com/azure-data-science/the-kusto-query-language/). +## Viewing metrics and logs from Azure +BigAnimal sends all metrics and logs from Postgres clusters to Azure. The +following describes what metrics and logs are sent and how to view them. +### Azure Log Analytics -### Querying PostgreSQL cluster logs +When BigAnimal deploys workloads on Azure, the logs from the postgres +clusters are forwarded to the Azure Log Workspace. +A pre-defined shared dashboard panel in the Azure Portal shows recent postgres +logs. To query BigAnimal logs in more detail, you must use +[Azure Log Analytics](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-analytics-overview) +and the +[Kusto Query language](https://azure-training.com/azure-data-science/the-kusto-query-language/). -All logs from your PostgreSQL clusters are stored in the _Customer Log Analytics workspace_. To find your _Customer Log Analytics workspace_: ### Using shared dashboards to view PostgreSQL cluster logs and metrics -1. Sign in to the [Azure portal](https://portal.azure.com). +To view logs and selected metrics summaries from your PostgreSQL clusters using +the Shared Dashboard: +1. Sign in to the [Azure portal](https://portal.azure.com). 2. Select **Resource Groups**. +2. Select the Resource Group corresponding to the region where you choose to + deploy your BigAnimal cluster. You will see resources included in that + Resource Group. +3. Select the resource of type _Shared Dashboard_ with the suffix -customer. +4. Select the **Go to dashboard** link located at the top of the page. -2. Select the Resource Group corresponding to the region where you choose to deploy your BigAnimal cluster. You will see resources included in that Resource Group. +The default shared dashboard provided by BigAnimal will be extended and +enhanced over time.
It includes panels for monitoring and diagnostic +information like: -3. Select the resource of type _Log Analytics workspace_ with the suffix -customer. +* Recent log entries for all clusters +* Row insert, update, and delete rates per database +* Query deadlock rates +* Connection counts over time +* Temporary storage use trend +* Replication lag +* Age of longest running transaction + +!!! Important +Changes you make to the shared dashboard will be overwritten when +BigAnimal updates are deployed. Create your own custom dashboard if you wish to +modify or extend the provided dashboard. You can start with the BigAnimal dashboard +by using the "Clone" button in the dashboard view. +!!! + +### Querying PostgreSQL cluster logs and metrics +All logs from your PostgreSQL clusters are stored in the _Customer Log +Analytics workspace_. To find your _Customer Log Analytics workspace_: + +1. Sign in to the [Azure portal](https://portal.azure.com). +2. Select **Resource Groups**. +2. Select the Resource Group corresponding to the region where you choose to + deploy your BigAnimal cluster. You will see resources included in that Resource + Group. +3. Select the resource of type _Log Analytics workspace_ with the suffix -customer. 4. Select the Logs in the menu on the left in the General section. +5. Close the dashboard with pre-built queries. This will bring you to the KQL Editor. -5. Close the dashboard with prebuilt queries. This will bring you to the KQL Editor. +#### Available Logs and Metrics -The following tables are available in the _Customer Log Analytic workspace_. +See the next section [Metrics Details](#metrics-details-list) for a listing of +available metrics and details on the structure of log entries. -| Table name | Description | Logger | -| ---------- | ----------- | ------ | -| PostgresLogs_CL | Logs of the Customer clusters databases (all postgres related logs) | `logger = postgres` | -| PostgresAuditLogs_CL | Audit Logs of the Customer clusters databases | `logger = pgaudit or edb_audit` | +#### Example Log Queries -You can use the KQL Query editor to compose your queries over these tables. For example, +For example, ``` PostgresLogs_CL @@ -59,18 +109,20 @@ PostgresAuditLogs_CL | sort by record_log_time_s desc ``` -### Using shared dashboards to view PostgreSQL cluster logs - -To view logs from your PostgreSQL clusters using Shared Dashboard: - -1. Sign in to the [Azure portal](https://portal.azure.com). -2. Select **Resource Groups**. +#### Example Metrics Queries -2. Select the Resource Group corresponding to the region where you choose to deploy your BigAnimal cluster. You will see resources included in that Resource Group. +To list the metrics from BigAnimal presently available in the `InsightsMetrics` +table use this query: -3. Select the resource of type _Shared Dashboard_ with the suffix -customer. +``` +InsightsMetrics +| where Namespace == "prometheus" +| distinct Name +``` -4. Select the **Go to dashboard** link located at the top of the page. +(Or just use Metrics Explorer). 
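+
+As a further, hand-written illustration (not a pre-built query; adjust the metric name, time range, and tag filters for your own clusters), the following sketch charts recent per-database commit activity on primary instances using one of the Postgres metrics listed in the Metrics Details section:
+
+```
+// Chart the per-database commit counter for primary instances over the last hour.
+// The metric is a cumulative counter, so the chart shows a running total.
+InsightsMetrics
+| where TimeGenerated > ago(1h)
+| where Namespace == "prometheus" and Name == "cnp_pg_stat_database_xact_commit"
+| extend t = todynamic(Tags)
+| where tostring(t.role) == "primary"
+| summarize commits = max(Val) by bin(TimeGenerated, 5m), datname = tostring(t.datname)
+| render timechart
+```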
+### See also +* [Azure Monitor Metrics Overview](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics) diff --git a/product_docs/docs/biganimal/release/using_cluster/06_metrics.mdx b/product_docs/docs/biganimal/release/using_cluster/06_metrics.mdx new file mode 100644 index 00000000000..dce8e228c51 --- /dev/null +++ b/product_docs/docs/biganimal/release/using_cluster/06_metrics.mdx @@ -0,0 +1,598 @@ +--- +title: "Metrics Details" +--- + +A variety of metrics are collected by the BigAnimal instance and made available +to the customer's Azure subscription for dashboarding, alerting, querying and +other analytics. + +See [Monitoring and Logging](#monitoring-and-logging) for an introduction to +the available monitoring capabilities. + +This section explains how to find and interpret the available metrics and logs. +It also lists and describes the individual metrics provided. + +## Understanding BigAnimal Logs and Metrics + +You can see example queries over these metrics by editing the predefined +dashboard panels in the default shared dashboard. Some pre-defined queries +and/or functions may also be available in the Log Analytics queries panel. +The Azure Monitor Metrics Explorer provides a useful entry point for +discovering the available metrics. + +In-depth advice on the details of querying these metrics is beyond the scope of +this documentation. Refer to The Azure Log Analytics and Azure Monitor +documentation and to the documentation on the Kusto query language used by +Azure Monitor. A wide variety of analytics capabilities are available including +time-series functions, seasonally adjusted statistics, alert generation and +more. + +## Available Logs and Metrics + +The following tables in the _Customer Log Analytic workspace_ contain entries +specific to BigAnimal: + +| Table name | Description | +| ---------- | ----------- | +| PostgresLogs_CL | Logs of the Customer clusters databases (all postgres related logs) | +| PostgresAuditLogs_CL | Audit Logs of the Customer clusters databases, if enabled | +| InsightsMetrics | Metrics streams from BigAnimal Prometheus and Azure Monitor. BigAnimal metrics have `namespace == "prometheus"` | + +You can use the KQL Query editor in the Log Workspace view to compose queries +over these tables. + +## Logs + +Postgres logs are added to the `PostgresLogs_CL` table. + +Logs are split into structured fields matching those of the Postgres +[csvlog format](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-CSVLOG) +with a `record_` prefix and a type-suffix. For example the `application_name` +is in the `record_application_name_s` log field. + +The `pg_cluster_id_s` field identifies the specific postgres cluster +that originated the log message. + +## Metrics Overview + +BigAnimal collects a wide set of metrics about postgres instances into the +`InsightsMetrics` log analytics table. Most of these metrics are acquired +directly from postgres system tables, views, and functions. The postgres +documentation serves as the main reference for these metrics. + +KQL can be used to analyze time-series metrics, report latest samples of +metrics, etc by querying the `InsightsMetrics` table. + +Some data from postgres monitoring system views, tables and functions are +transformed to be easier to consume in Prometheus metrics format. For example, +timestamp fields are generally converted to unix epoch time and/or accompanied +by a relative time-interval metric. 
Other metrics are aggregated into +categories by label dimensions to limit the number of very specific and +narrowly scoped individual metrics emitted. It would not be very useful to +report the inactivity period of every single backend, for example, so backend +statistics are aggregated by database, user, `application_name` and backend +state. + +Prometheus [Labels](https://prometheus.io/docs/practices/naming/#labels) +are mapped to Azure metrics +[Dimensions](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). +Dimensions vary depending on the individual metric, and are documented +separately for each group of related metrics. + +The forwarded Prometheus metrics use structured json fields, particularly for +the `Tags` field. Effective use of them will require use of the +[`todynamic()`](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/parsejsonfunction) +function in queries. + +The available set of metrics is subject to change. Metrics may be added, +removed, or renamed. Where feasible, an effort will be made not to change the +meaning or type of existing metrics without also changing the metric name. + +At the time of writing, all metrics forwarded from Prometheus are in the +`prometheus` namespace. This may change in a future release. + +Effective use of the available metrics will require an understanding of Azure +time-series data, metrics dimensions, and of the tagging conventions used in +the metrics streams. + +### Metrics tags + +All postgres metrics share a common tagging scheme. Entries will generally +have at least the following tags: + +| Name | Description | +|--------------------------------|-------------| +| address | IP address of the host the metric originated from | +| postgresql | BigAnimal postgres cluster identifier e.g. `p-abcdef012345` | +| role | Postgres instance role, "primary" or "replica" | +| datname | Postgres database name (where applicable) | +| pod_name | k8s pod name | +| hostName | AKS node host name | +| container.azm.ms/clusterName | AKS cluster name | + +When querying for tags, best performance is achieved when any filters that do not +require inspection of tags (e.g. filters by metric name) are applied before any +tag-based filters. + +The `Tags` field of a metrics entry is a json-typed field that may be queried +for individual values with `todynamic(Tags).keyname` in KQL. Some uses of values +may require explicit casts to another type e.g. `tostring(...)`. + +Example usage: + +``` +InsightsMetrics +| where Namespace == "prometheus" and Name startswith "cnp_" +| extend t = todynamic(Tags) +| where t.role == "primary" +| project postgres_cluster_id = tostring(t.postgresql), dbname = tostring(t.datname) +| where not (dbname has_any("template0", "template1")) +| distinct postgres_cluster_id, dbname +``` + +[comment1]: # (Generated content see upm-substrate repo config monitoring dir) + +#### Group `cnp_backends` + +Backend counts from `pg_stat_activity` aggregated by the listed label +dimensions. Useful for identifying busy applications, excessive idle +backends, etc. + +Derived from the `pg_stat_activity` view.
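+
+For example, a minimal hand-written KQL sketch (not part of the generated reference; the cluster identifier `p-abcdef012345` is a placeholder) that charts backend counts by state for one cluster's primary:
+
+```
+// Backend counts by state for a single cluster's primary instance.
+InsightsMetrics
+| where Namespace == "prometheus" and Name == "cnp_backends_total"
+| extend t = todynamic(Tags)
+| where tostring(t.postgresql) == "p-abcdef012345" and tostring(t.role) == "primary"
+| summarize backends = sum(Val) by bin(TimeGenerated, 5m), state = tostring(t.state)
+| render timechart
+```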
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_backends_total` | GAUGE | Number of backends | +| `cnp_backends_max_tx_duration_seconds` | GAUGE | Maximum duration of a transaction in seconds | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `datname` | Name of the database | +| `usename` | Name of the user | +| `application_name` | Name of the application | +| `state` | State of the backend | + +#### Group `cnp_backends_waiting` + +Postgres-instance-level aggregate information on backends that are blocked +waiting for locks. Does not count I/O waits or other reasons backends might +wait or be blocked. + +Derived from the `pg_locks` view. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_backends_waiting_total` | GAUGE | Total number of backends that are currently waiting on other queries | + +#### Group `cnp_pg_database` + +Per-database metrics for each database in the postgres instance. +Includes per-database vacuum progress information. + +Derived from the `pg_database` catalog. + +See also `cnp_pg_stat_database`. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_database_size_bytes` | GAUGE | Disk space used by the database | +| `cnp_pg_database_xid_age` | GAUGE | Number of transactions from the frozen XID to the current one | +| `cnp_pg_database_mxid_age` | GAUGE | Number of multiple transactions (Multixact) from the frozen XID to the current one | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `datname` | Name of the database | + +#### Group `cnp_pg_postmaster` + +Data on the postgres instance's managing "postmaster" process. + +Derived from the `pg_postmaster_start_time()` function. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_postmaster_start_time` | GAUGE | Time at which postgres started (based on epoch) | + +#### Group `cnp_pg_replication` + +Physical replication details for a standby postgres instance +as captured from the standby itself. + +Derived from the `pg_last_xact_replay_timestamp()` function. + +Only relevant on standby servers. + +See also `cnp_pg_stat_replication`, `cnp_pg_replication_slots`. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_replication_lag` | GAUGE | Replication lag behind primary in seconds | +| `cnp_pg_replication_in_recovery` | GAUGE | Whether the instance is in recovery | + +#### Group `cnp_pg_replication_slots` + +Details about replication slots on a postgres instance. In most +configurations only the primary server will have active replication clients, +but other nodes may still have replication slots. + +Note that logical replication slots are specific to a database, whereas +physical replication slots will have an empty "database" label as they +apply to the postgres instance as a whole. + +Derived from the `pg_replication_slots` view. + +See also `cnp_pg_stat_replication`, `cnp_pg_replication`. 
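+
+As an illustrative, hand-written sketch (not part of the generated reference), retained-WAL lag per slot can be charted from the metric below:
+
+```
+// Approximate replication slot lag, in bytes, per slot over time.
+InsightsMetrics
+| where Namespace == "prometheus" and Name == "cnp_pg_replication_slots_pg_wal_lsn_diff"
+| extend t = todynamic(Tags)
+| summarize lag_bytes = max(Val) by bin(TimeGenerated, 5m), slot_name = tostring(t.slot_name)
+| render timechart
+```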
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_replication_slots_active` | GAUGE | Flag indicating if the slot is active | +| `cnp_pg_replication_slots_pg_wal_lsn_diff` | GAUGE | Replication lag in bytes | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `slot_name` | Name of the replication slot | +| `database` | Name of the database | + +#### Group `cnp_pg_stat_archiver` + +Progress information about WAL archiving. Only the currently active primary +server will generally be performing WAL archiving. + +WAL archiving is important for backup and restore. If WAL archiving is +delayed or failing for too long, the point-in-time recovery backups for +a postgres cluster will not be up to date. This has disaster recovery +implications and can potentially also affect failover. + +Occasional WAL archiving failures are normal, but a growing delay in the time +since the last successful WAL archiving operation should be taken seriously. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Derived from the `pg_stat_archiver` view. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_archiver_archived_count` | COUNTER | Number of WAL files that have been successfully archived | +| `cnp_pg_stat_archiver_failed_count` | COUNTER | Number of failed attempts for archiving WAL files | +| `cnp_pg_stat_archiver_seconds_since_last_archival` | GAUGE | Seconds since the last successful archival operation | +| `cnp_pg_stat_archiver_seconds_since_last_failure` | GAUGE | Seconds since the last failed archival operation | +| `cnp_pg_stat_archiver_last_archived_time` | GAUGE | Epoch of the last time WAL archiving succeeded | +| `cnp_pg_stat_archiver_last_failed_time` | GAUGE | Epoch of the last time WAL archiving failed | +| `cnp_pg_stat_archiver_last_archived_wal_start_lsn` | GAUGE | Archived WAL start LSN | +| `cnp_pg_stat_archiver_last_failed_wal_start_lsn` | GAUGE | Last failed WAL LSN | +| `cnp_pg_stat_archiver_stats_reset_time` | GAUGE | Time at which these statistics were last reset | + +#### Group `cnp_pg_stat_bgwriter` + +Stats for the postgres background writer and checkpointer processes, which +are instance-wide and shared across all databases in a postgres instance. + +Very long delays between checkpoints on a busy system will increase the time +taken for it to return to read/write availability if crash recovery is +required. Excessively frequent checkpoints can increase I/O load and the size +of the WAL stream for backup and replication. + +The postgres documentation discusses checkpoints, dirty writeback, and +checkpoint tuning in detail. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Derived from the `pg_stat_bgwriter` catalog. 
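+
+For instance, a hand-written sketch (not part of the generated reference) that charts the two checkpoint counters listed below; because these are cumulative counters, the chart shows running totals and a stats reset appears as a drop:
+
+```
+// Scheduled vs. requested checkpoint counters for primary instances, last 24 hours.
+InsightsMetrics
+| where TimeGenerated > ago(24h)
+| where Namespace == "prometheus"
+| where Name in ("cnp_pg_stat_bgwriter_checkpoints_timed", "cnp_pg_stat_bgwriter_checkpoints_req")
+| extend t = todynamic(Tags)
+| where tostring(t.role) == "primary"
+| summarize checkpoints = max(Val) by bin(TimeGenerated, 15m), Name
+| render timechart
+```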
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_bgwriter_checkpoints_timed` | COUNTER | Number of scheduled checkpoints that have been performed | +| `cnp_pg_stat_bgwriter_checkpoints_req` | COUNTER | Number of requested checkpoints that have been performed | +| `cnp_pg_stat_bgwriter_checkpoint_write_time` | COUNTER | Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds | +| `cnp_pg_stat_bgwriter_checkpoint_sync_time` | COUNTER | Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds | +| `cnp_pg_stat_bgwriter_buffers_checkpoint` | COUNTER | Number of buffers written during checkpoints | +| `cnp_pg_stat_bgwriter_buffers_clean` | COUNTER | Number of buffers written by the background writer | +| `cnp_pg_stat_bgwriter_maxwritten_clean` | COUNTER | Number of times the background writer stopped a cleaning scan because it had written too many buffers | +| `cnp_pg_stat_bgwriter_buffers_backend` | COUNTER | Number of buffers written directly by a backend | +| `cnp_pg_stat_bgwriter_buffers_backend_fsync` | COUNTER | Number of times a backend had to execute its own fsync call (normally the background writer handles those even when the backend does its own write) | +| `cnp_pg_stat_bgwriter_buffers_alloc` | COUNTER | Number of buffers allocated | + +#### Group `cnp_pg_stat_database` + +This metrics group directly exposes the summary data postgres collects in its +own `pg_stat_database` view. It contains statistical counters maintained by +postgres itself for database activity. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Derived from the `pg_stat_database` catalog. + +See also `cnp_pg_database`. 
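+
+As a hand-written illustration (not part of the generated reference), an approximate shared-buffer cache hit ratio per database can be derived from the block counters listed below; the counters are cumulative since the last stats reset:
+
+```
+// Approximate cache hit ratio per database on primary instances.
+InsightsMetrics
+| where TimeGenerated > ago(1h)
+| where Namespace == "prometheus"
+| where Name in ("cnp_pg_stat_database_blks_hit", "cnp_pg_stat_database_blks_read")
+| extend t = todynamic(Tags)
+| where tostring(t.role) == "primary"
+| summarize latest = max(Val) by Name, datname = tostring(t.datname)
+| summarize hit = sumif(latest, Name == "cnp_pg_stat_database_blks_hit"),
+            read = sumif(latest, Name == "cnp_pg_stat_database_blks_read") by datname
+| extend cache_hit_percent = iff(hit + read > 0, 100.0 * hit / (hit + read), real(null))
+| project datname, cache_hit_percent
+```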
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_database_xact_commit` | COUNTER | Number of transactions in this database that have been committed | +| `cnp_pg_stat_database_xact_rollback` | COUNTER | Number of transactions in this database that have been rolled back | +| `cnp_pg_stat_database_blks_read` | COUNTER | Number of disk blocks read in this database | +| `cnp_pg_stat_database_blks_hit` | COUNTER | Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache) | +| `cnp_pg_stat_database_tup_returned` | COUNTER | Number of rows returned by queries in this database | +| `cnp_pg_stat_database_tup_fetched` | COUNTER | Number of rows fetched by queries in this database | +| `cnp_pg_stat_database_tup_inserted` | COUNTER | Number of rows inserted by queries in this database | +| `cnp_pg_stat_database_tup_updated` | COUNTER | Number of rows updated by queries in this database | +| `cnp_pg_stat_database_tup_deleted` | COUNTER | Number of rows deleted by queries in this database | +| `cnp_pg_stat_database_conflicts` | COUNTER | Number of queries canceled due to conflicts with recovery in this database | +| `cnp_pg_stat_database_temp_files` | COUNTER | Number of temporary files created by queries in this database | +| `cnp_pg_stat_database_temp_bytes` | COUNTER | Total amount of data written to temporary files by queries in this database | +| `cnp_pg_stat_database_deadlocks` | COUNTER | Number of deadlocks detected in this database | +| `cnp_pg_stat_database_blk_read_time` | COUNTER | Time spent reading data file blocks by backends in this database, in milliseconds | +| `cnp_pg_stat_database_blk_write_time` | COUNTER | Time spent writing data file blocks by backends in this database, in milliseconds | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `datname` | Name of this database | + +#### Group `cnp_pg_stat_database_conflicts` + +These metrics provide information on conflicts between queries on a standby +and the standby's replay of the change-stream from the primary. These are +called recovery conflicts. + +These metrics are unrelated to "INSERT ... ON CONFLICT" conflicts, or +multi-master replication row conflicts. They are only relevant on standby +servers. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Only defined on standby servers. + +Derived from the `pg_stat_database_conflicts` view. 
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_database_conflicts_confl_tablespace` | COUNTER | Number of queries in this database that have been canceled due to dropped tablespaces | +| `cnp_pg_stat_database_conflicts_confl_lock` | COUNTER | Number of queries in this database that have been canceled due to lock timeouts | +| `cnp_pg_stat_database_conflicts_confl_snapshot` | COUNTER | Number of queries in this database that have been canceled due to old snapshots | +| `cnp_pg_stat_database_conflicts_confl_bufferpin` | COUNTER | Number of queries in this database that have been canceled due to pinned buffers | +| `cnp_pg_stat_database_conflicts_confl_deadlock` | COUNTER | Number of queries in this database that have been canceled due to deadlocks | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `datname` | Name of the database | + +#### Group `cnp_pg_stat_user_tables` + +Access and usage statistics maintained by postgres on non-system tables. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Derived from the `pg_stat_user_tables` view. + +See also `cnp_pg_statio_user_tables`. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_user_tables_seq_scan` | COUNTER | Number of sequential scans initiated on this table | +| `cnp_pg_stat_user_tables_seq_tup_read` | COUNTER | Number of live rows fetched by sequential scans | +| `cnp_pg_stat_user_tables_idx_scan` | COUNTER | Number of index scans initiated on this table | +| `cnp_pg_stat_user_tables_idx_tup_fetch` | COUNTER | Number of live rows fetched by index scans | +| `cnp_pg_stat_user_tables_n_tup_ins` | COUNTER | Number of rows inserted | +| `cnp_pg_stat_user_tables_n_tup_upd` | COUNTER | Number of rows updated | +| `cnp_pg_stat_user_tables_n_tup_del` | COUNTER | Number of rows deleted | +| `cnp_pg_stat_user_tables_n_tup_hot_upd` | COUNTER | Number of rows HOT updated (i.e., with no separate index update required) | +| `cnp_pg_stat_user_tables_n_live_tup` | GAUGE | Estimated number of live rows | +| `cnp_pg_stat_user_tables_n_dead_tup` | GAUGE | Estimated number of dead rows | +| `cnp_pg_stat_user_tables_n_mod_since_analyze` | GAUGE | Estimated number of rows changed since last analyze | +| `cnp_pg_stat_user_tables_last_vacuum` | GAUGE | Last time at which this table was manually vacuumed (not counting VACUUM FULL) | +| `cnp_pg_stat_user_tables_last_autovacuum` | GAUGE | Last time at which this table was vacuumed by the autovacuum daemon | +| `cnp_pg_stat_user_tables_last_analyze` | GAUGE | Last time at which this table was manually analyzed | +| `cnp_pg_stat_user_tables_last_autoanalyze` | GAUGE | Last time at which this table was analyzed by the autovacuum daemon | +| `cnp_pg_stat_user_tables_vacuum_count` | COUNTER | Number of times this table has been manually vacuumed (not counting VACUUM FULL) | +| `cnp_pg_stat_user_tables_autovacuum_count` | COUNTER | Number of times this table has been vacuumed by the autovacuum daemon | +| `cnp_pg_stat_user_tables_analyze_count` | COUNTER | Number of times this table has been manually analyzed | +| `cnp_pg_stat_user_tables_autoanalyze_count` | COUNTER | Number of times this table has been analyzed by the autovacuum daemon | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| 
Label | Description | +|-------|-------------| +| `datname` | Name of current database | +| `schemaname` | Name of the schema that this table is in | +| `relname` | Name of this table | + +#### Group `cnp_pg_stat_replication` + +Realtime information about replication connections to this postgres instance, +their progress and activity. + +Metrics in this section are not reset when a postgres stats reset is issued +on the db server. The "stat" in the name is a historic artefact from postgres +development. + +Derived from the `pg_stat_replication` view. + +See also `cnp_pg_replication_slots`, `cnp_pg_replication`. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_stat_replication_backend_start` | COUNTER | Time when this process was started | +| `cnp_pg_stat_replication_backend_xmin_age` | COUNTER | The age of this standby's xmin horizon | +| `cnp_pg_stat_replication_sent_diff_bytes` | GAUGE | Difference in bytes from the last write-ahead log location sent on this connection | +| `cnp_pg_stat_replication_write_diff_bytes` | GAUGE | Difference in bytes from the last write-ahead log location written to disk by this standby server | +| `cnp_pg_stat_replication_flush_diff_bytes` | GAUGE | Difference in bytes from the last write-ahead log location flushed to disk by this standby server | +| `cnp_pg_stat_replication_replay_diff_bytes` | GAUGE | Difference in bytes from the last write-ahead log location replayed into the database on this standby server | +| `cnp_pg_stat_replication_write_lag_seconds` | GAUGE | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it | +| `cnp_pg_stat_replication_flush_lag_seconds` | GAUGE | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it | +| `cnp_pg_stat_replication_replay_lag_seconds` | GAUGE | Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `usename` | Name of the replication user | +| `application_name` | Name of the application | +| `client_addr` | Client IP address | + +#### Group `cnp_pg_statio_user_tables` + +I/O activity statistics maintained by postgres on non-system tables. + +Metrics in this section are reset when a postgres stats reset is issued +on the db server. + +Derived from the `pg_statio_user_tables` view. + +See also `cnp_pg_stat_user_tables`. 
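+
+For example, a hand-written sketch (not part of the generated reference) that lists the user tables with the most disk block reads, based on the counters below:
+
+```
+// Top 10 user tables by cumulative heap block reads (latest counter values).
+InsightsMetrics
+| where TimeGenerated > ago(1h)
+| where Namespace == "prometheus" and Name == "cnp_pg_statio_user_tables_heap_blks_read"
+| extend t = todynamic(Tags)
+| summarize heap_blks_read = max(Val) by datname = tostring(t.datname), schemaname = tostring(t.schemaname), relname = tostring(t.relname)
+| top 10 by heap_blks_read desc
+```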
+ +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_statio_user_tables_heap_blks_read` | COUNTER | Number of disk blocks read from this table | +| `cnp_pg_statio_user_tables_heap_blks_hit` | COUNTER | Number of buffer hits in this table | +| `cnp_pg_statio_user_tables_idx_blks_read` | COUNTER | Number of disk blocks read from all indexes on this table | +| `cnp_pg_statio_user_tables_idx_blks_hit` | COUNTER | Number of buffer hits in all indexes on this table | +| `cnp_pg_statio_user_tables_toast_blks_read` | COUNTER | Number of disk blocks read from this table's TOAST table (if any) | +| `cnp_pg_statio_user_tables_toast_blks_hit` | COUNTER | Number of buffer hits in this table's TOAST table (if any) | +| `cnp_pg_statio_user_tables_tidx_blks_read` | COUNTER | Number of disk blocks read from this table's TOAST table indexes (if any) | +| `cnp_pg_statio_user_tables_tidx_blks_hit` | COUNTER | Number of buffer hits in this table's TOAST table indexes (if any) | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `datname` | Name of current database | +| `schemaname` | Name of the schema that this table is in | +| `relname` | Name of this table | + +#### Group `cnp_pg_settings` + +Expose the subset of postgres server settings that can be represented as +Prometheus compatible metrics - any integer, boolean or real number. +Text-format settings, list-valued settings and enumeration-typed settings are +not captured or reported. + +This set of metrics does not expose per-database settings assigned with +`ALTER DATABASE ... SET ...`, per-user settings assigned with `ALTER USER ... +SET ...`, or per-session values. It only shows the database-system-wide +global values. You can explore other settings interactively using postgres +system views. + +Derived from the `pg_settings` view. + +##### Metrics + +| Metric | Usage | Description | +|----------|-------|-------------| +| `cnp_pg_settings_setting` | GAUGE | Setting value | + + +##### Labels + +The above metrics may have these labels, represented +as dimensions in Azure Monitor: + +| Label | Description | +|-------|-------------| +| `name` | Name of the setting | + +[comment2]: # (End generated content) + +### Other metrics streams + +In addition to postgres metrics from the Cloud Native PostgreSQL operator that +manages databases in BigAnimal, additional metrics about Kubernetes cluster +state and other details may be streamed to the Log Workspace. Any such metrics +are generally well-known metrics from widely used tools, documented by the +upstream vendor of the component. + +Details on individual metrics from such sources will not be listed in this +document. Refer to the documentation of the tool or project that defines the +metrics. + +See also: + +* [Kubernetes cluster metrics](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/). + +Additional streams of metrics may be supplied by the cloud platform itself +directly to the customer's metrics, analytics and dashboarding endpoint. + +### Dive Deeper + +The capabilities available in the Azure portal are too broad to fully cover in this +documentation. 
They include the ability to: + +* Discover metrics in the Azure Monitor Metrics Explorer (Monitor -> Metrics) +* Query logs and metrics from the Azure Monitor Logs view (Monitor -> Logs) +* Create dashboards backed by metrics queries in the Portal +* Define alerting rules to trigger notifications based on queries +* Use AI-assisted analytics assistant capabilities ("Metrics Advisers") to find + patterns in metrics +* Apply complex analytic tools for time-series data in Application Insights, + including seasonally adjusted statistics to discover patterns, anomalies and + trends.