diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx
index 59e54f5a9e4..0a10fe86383 100644
--- a/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx
@@ -3,15 +3,16 @@ title: EDB Postgres AI - AI/ML
navTitle: AI/ML
indexCards: simple
iconName: BrainCircuit
+description: How to use EDB Postgres AI for AI/ML workloads with the pgai extension.
navigation:
- overview
- install-tech-preview
- using-tech-preview
---
-EDB Postgres® AI Database is designed to solve all AI data management needs, including storing, searching, and retrieving of AI data. This uplevels Postgres to a database that manages and serves all types of data modalities directly and combines it with its battle-proof strengths as an established Enterprise system of record that manages high-value business data.
+EDB Postgres® AI Database is designed to solve all AI data management needs, including storing, searching, and retrieving AI data. This up-levels Postgres to a database that manages and serves all types of data modalities directly, combining that with its battle-proven strengths as an established enterprise system of record for high-value business data.
-In this tech preview, you will be able to use the pgai extension to build a simple retrieval augmented generation (RAG) application in Postgres.
+In this tech preview, you can use the pgai extension to build a simple retrieval augmented generation (RAG) application in Postgres.
An [overview](overview) of the pgai extension gives you a high-level understanding of the major functionality available to date.
diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx
index e3de4b43a57..cd579a41a89 100644
--- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx
+++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx
@@ -4,11 +4,35 @@ navTitle: Working with AI data in S3
description: How to work with AI data stored in S3-compatible object storage using the pgai extension.
---
-We recommend you to prepare your own S3 compatible object storage bucket with some test data and try the steps in this section with that. But it is possible to simply use the example S3 bucket data as is in the examples here even with your custom access key and secret key credentials because these have been configured for public access.
+The following examples demonstrate how to use the pgai functions with S3-compatible object storage. You can use the examples as is, because they use a publicly accessible example S3 bucket. Or you can prepare your own S3-compatible object storage bucket with some test data and try the steps in this section with that data.
-In addition we use image data and an according image encoder LLM in this example instead of text data. But you could also use plain text data on object storage similar to the examples in the previous section.
+These examples also use image data and an appropriate image encoder LLM instead of text data. You could, though, use plain text data on object storage similar to the examples in [Working with AI data in Postgres](working-with-ai-data-in-postgres).
-First let's create a retriever for images stored on s3-compatible object storage as the source. We specify torsten as the bucket name and an endpoint URL where the bucket is created. We specify an empty string as prefix because we want all the objects in that bucket. We use the [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) open encoder model for image data from HuggingFace. We provide a name for the retriever so that we can identify and reference it subsequent operations:
+### Creating a retriever
+
+Start by creating a retriever for images stored on S3-compatible object storage as the source, using the `pgai.create_s3_retriever` function.
+
+```sql
+pgai.create_s3_retriever(
+ retriever_name text,
+ schema_name text,
+ model_name text,
+ data_type text,
+ bucket_name text,
+ prefix text,
+ endpoint_url text
+)
+```
+
+* The `retriever_name` is used to identify and reference the retriever; set it to `image_embeddings` for this example.
+* The `schema_name` is the schema where the source table is located.
+* The `model_name` is the name of the embeddings encoder model for similarity data; set it to [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) to use the open encoder model for image data from HuggingFace.
+* The `data_type` is the type of data in the source table, which can be either `img` or `text`; set it to `img`.
+* The `bucket_name` is the name of the S3 bucket where the data is stored; set this to `torsten`.
+* The `prefix` is the prefix of the objects in the bucket; set this to an empty string to include all the objects in the bucket.
+* The `endpoint_url` is the URL of the S3 endpoint; set it to `https://s3.us-south.cloud-object-storage.appdomain.cloud` to access the public example bucket.
+
+This gives the following SQL command:
```sql
SELECT pgai.create_s3_retriever(
@@ -27,8 +51,9 @@ __OUTPUT__
(1 row)
```
+### Refreshing the retriever
-Next, run the refresh_retriever function.
+Next, run the `pgai.refresh_retriever` function.
```sql
SELECT pgai.refresh_retriever('image_embeddings');
@@ -38,8 +63,13 @@ __OUTPUT__
(1 row)
```
-
-Finally, run the retrieve_via_s3 function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files.
+
+### Retrieving data
+
+Finally, run the `pgai.retrieve_via_s3` function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files.
+
```sql
SELECT data from pgai.retrieve_via_s3(
diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx
index 863ea18a1ec..b67c93cbc52 100644
--- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx
+++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx
@@ -4,11 +4,11 @@ navTitle: Working with AI data in Postgres
description: How to work with AI data stored in Postgres tables using the pgai extension.
---
-We will first look at working with AI data stored in columns in the Postgres table.
+The examples on this page show how to work with AI data stored in columns of a Postgres table.
-To see how to use AI data stored in S3-compatible object storage, skip to the next section.
+To see how to use AI data stored in S3-compatible object storage, skip to [Working with AI data in S3](working-with-ai-data-in-S3).
-First let's create a Postgres table for some test AI data:
+Begin by creating a Postgres table for some test AI data:
```sql
CREATE TABLE products (
@@ -21,8 +21,33 @@ __OUTPUT__
CREATE TABLE
```
+## Working with auto embedding
-Now let's create a retriever with the just created products table as the source. We specify product_id as the unique key column to and we define the product_name and description columns to use for the similarity search by the retriever. We use the `all-MiniLM-L6-v2` open encoder model from HuggingFace. We set `auto_embedding` to True so that any future insert, update or delete to the source table will automatically generate, update or delete also the corresponding embedding. We provide a name for the retriever so that we can identify and reference it subsequent operations:
+Next, create a retriever that uses the newly created products table as its source, using the `pgai.create_pg_retriever` function, which has this syntax:
+
+```sql
+pgai.create_pg_retriever(
+ retriever_name text,
+ schema_name text,
+ primary_key text,
+ model_name text,
+ data_type text,
+ source_table text,
+ columns text[],
+ auto_embedding boolean
+)
+```
+
+* The `retriever_name` is used to identify and reference the retriever; set it to `product_embeddings_auto` for this example.
+* The `schema_name` is the schema where the source table is located; set this to `public`.
+* The `primary_key` is the primary key column of the source table; set it to `product_id`.
+* The `model_name` is the name of the embeddings encoder model for similarity data; set it to `all-MiniLM-L6-v2` to use the open encoder model for text data from HuggingFace.
+* The `data_type` is the type of data in the source table, which could be either `img` or `text`. Set it to `text`.
+* The `source_table` is the name of the source table; set it to `products`, the table created previously.
+* The `columns` is an array of columns to use for the similarity search by the retriever. Set this to `ARRAY['product_name', 'description']` to use the `product_name` and `description` columns.
+* The `auto_embedding` is a boolean value that sets a trigger for auto embedding. Set it to `TRUE` so that any future insert, update, or delete on the source table automatically generates, updates, or deletes the corresponding embedding.
+
+This gives the following SQL command:
```sql
SELECT pgai.create_pg_retriever(
@@ -42,9 +67,8 @@ __OUTPUT__
(1 row)
```
-
-
-Now let's insert some AI data records into the products table. Since we have set auto_embedding to True, the retriever will automatically generate all embeddings in real-time for each inserted record:
+You have now created a retriever for the products table. The next step is to insert some AI data records into it.
+Since you set `auto_embedding` to true, the retriever automatically generates embeddings in real time for each inserted record:
```sql
INSERT INTO products (product_name, description) VALUES
@@ -61,7 +85,21 @@ __OUTPUT__
INSERT 0 9
```
-Now we can directly use the retriever (specifying the retriever name) for a similarity retrieval of the top K most relevant (most similar) AI data items:
+Now you can use the retriever, by specifying its name, to perform a similarity retrieval of the top K most relevant (most similar) AI data items. You do this by running the `pgai.retrieve` function with the required parameters:
+
+```sql
+pgai.retrieve(
+ query text,
+ top_k integer,
+ retriever_name text
+)
+```
+
+* The `query` is the text to use to retrieve the top similar data; set it to `I like it`.
+* The `top_k` is the number of top similar data items to retrieve; set this to `5`.
+* The `retriever_name` is the name of the retriever to use; set it to `product_embeddings_auto`.
+
+This gives the following SQL command:
```sql
SELECT data FROM pgai.retrieve(
@@ -80,7 +118,9 @@ __OUTPUT__
(5 rows)
```
-Now let's try a retriever without auto embedding. This means that the application has control over when the embeddings are computed in a bulk fashion. For demonstration we can simply create a second retriever for the same products table that we just created above:
+## Working without auto embedding
+
+You can now create a retriever without auto embedding. This means the application controls when the embeddings are computed, as a bulk operation. To demonstrate, create a second retriever for the same products table, this time setting `auto_embedding` to false.
```sql
SELECT pgai.create_pg_retriever(
@@ -100,8 +140,7 @@ __OUTPUT__
(1 row)
```
-
-We created this second retriever on the products table after we have inserted the AI records there. If we run a retrieve operation now we would not get back any results:
+The AI records are already in the table, but because this second retriever is newly created, it hasn't generated any embeddings yet. Running `pgai.retrieve` with this retriever now returns no results:
```sql
SELECT data FROM pgai.retrieve(
@@ -115,7 +154,15 @@ __OUTPUT__
(0 rows)
```
-That's why we first need to run a bulk generation of embeddings. This is achieved via the `refresh_retriever()` function:
+You need to run a bulk generation of embeddings before performing any retrieval. You can do this using the `pgai.refresh_retriever` function:
+
+```sql
+pgai.refresh_retriever(
+ retriever_name text
+)
+```
+
+The `retriever_name` is the name of the retriever; set it to `product_embeddings_bulk`. This gives the following SQL command:
```sql
SELECT pgai.refresh_retriever(
@@ -129,7 +176,7 @@ INFO: inserted table name public._pgai_embeddings_product_embeddings_bulk
(1 row)
```
-Now we can run the same retrieve operation with the second retriever as above:
+You can now run that retrieve operation using the second retriever and get the same results as with the first retriever:
```sql
SELECT data FROM pgai.retrieve(
@@ -148,7 +195,7 @@ __OUTPUT__
(5 rows)
```
-Now let's see what happens if we add additional AI data records:
+The next step is to see what happens when you add more AI data records:
```sql
INSERT INTO products (product_name, description) VALUES
@@ -177,7 +224,7 @@ __OUTPUT__
(5 rows)
```
-At the same time the second retriever without auto embedding does not reflect the new data until there is another explicit refresh_retriever() run:
+The second retriever without auto embedding doesn't reflect the new data until there's another explicit call to `pgai.refresh_retriever`. Until then, the results don't change:
```sql
SELECT data FROM pgai.retrieve(
@@ -196,7 +243,7 @@ __OUTPUT__
(5 rows)
```
-If we now call `refresh_retriever()` again, the new data is picked up:
+If you now call `pgai.refresh_retriever` again, the refresh picks up the new data and updates the embeddings:
```sql
SELECT pgai.refresh_retriever(
@@ -208,7 +255,7 @@ INFO: inserted table name public._pgai_embeddings_product_embeddings_bulk
-------------------
```
-And will be returned when we run the retrieve operation again:
+And the new data shows up in the results of the query when you call the `pgai.retrieve` function again:
```sql
SELECT data FROM pgai.retrieve(
@@ -227,6 +274,8 @@ __OUTPUT__
(5 rows)
```
-We used the two different retrievers for the same source data just to demonstrate the workings of auto embedding compared to explicit `refresh_retriever()`. In practice you may want to combine auto embedding and refresh_retriever() in a single retriever to conduct an initial embedding of data that existed before you created the retriever and then rely on auto embedding for any future data that is ingested, updated or deleted.
+You used two different retrievers for the same source data only to demonstrate how auto embedding compares with explicit calls to `pgai.refresh_retriever`.
+
+In practice, you may want to combine auto embedding and `pgai.refresh_retriever` in a single retriever: run an initial embedding of the data that existed before you created the retriever, then rely on auto embedding for any future data that's ingested, updated, or deleted.
-You should consider relying on `refresh_retriever()` only, without auto embedding, if you typically ingest a lot of AI data at once in a batched manner.
+You should consider relying on `pgai.refresh_retriever`, and not using auto embedding, if you typically ingest a lot of AI data at once as a batch.
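+
+As a sketch of that combined approach, using the `pgai.create_pg_retriever` syntax shown previously (the retriever name `product_embeddings_combined` is hypothetical):
+
+```sql
+-- Auto embedding (TRUE) keeps future rows in sync...
+SELECT pgai.create_pg_retriever(
+    'product_embeddings_combined',
+    'public',
+    'product_id',
+    'all-MiniLM-L6-v2',
+    'text',
+    'products',
+    ARRAY['product_name', 'description'],
+    TRUE
+);
+
+-- ...and one explicit refresh covers rows that existed before the retriever.
+SELECT pgai.refresh_retriever('product_embeddings_combined');
+```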
diff --git a/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx b/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx
index 0a606979e87..0c7d202d059 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/concepts.mdx
@@ -2,6 +2,7 @@
title: Concepts - EDB Postgres Lakehouse
navTitle: Concepts
description: Learn about the ideas and terminology behind EDB Postgres Lakehouse for Analytics workloads.
+deepToC: true
---
EDB Postgres Lakehouse is the solution for running Rapid Analytics against
@@ -121,4 +122,4 @@ Here's a slightly more comprehensive diagram of how these services fit together:
Here's the more detailed, zoomed-in view of "what's in the box":
-[![Level 200 Architecture](images/level-300.svg)](images/level-300.svg)
+[![Level 300 Architecture](images/level-300.svg)](images/level-300.svg)
diff --git a/advocacy_docs/edb-postgres-ai/analytics/how_to_lakehouse_sync.mdx b/advocacy_docs/edb-postgres-ai/analytics/how_to_lakehouse_sync.mdx
index d39de52069b..823df0d7943 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/how_to_lakehouse_sync.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/how_to_lakehouse_sync.mdx
@@ -19,9 +19,9 @@ The Lakehouse sync process organizes the transactional database data into Lakeho
### Navigate to Lakehouse Sync
-1. Go to the [EDB Postgres AI Console]().
+1. Go to the [EDB Postgres AI Console](https://portal.biganimal.com/beacon).
-2. From the landing page, select the project with the database instance you want to sync. If it is not shown on the landing page, select the **View Projects** link in the **Projects** section and select your project from there.
+2. From the landing page, select the project with the database instance you want to sync. If it's not shown on the landing page, select the **View Projects** link in the **Projects** section and select your project from there.
3. Select the **Migrate** dropdown in the left navigation bar and then select **Migrations**.
@@ -49,8 +49,9 @@ The Lakehouse sync process organizes the transactional database data into Lakeho
11. Select the **Start Lakehouse Sync** button.
-12. If successful, you will see your Lakehouse sync with the 'Creating' status under 'MOST RECENT' migrations on the Migrations page. The time taken to perform a sync can depend upon how much data is being synchronized and may take several hours.
+12. If successful, you'll see your Lakehouse sync with the 'Creating' status under 'MOST RECENT' migrations on the Migrations page. The time taken to perform a sync can depend upon how much data is being synchronized and may take several hours.
-!!! Warning
+!!! Note
The first sync in a project will take a couple of hours due to the provisioning of the required infrastructure.
-!!!
\ No newline at end of file
+!!!
+
diff --git a/advocacy_docs/edb-postgres-ai/analytics/images/create-cluster-dropdown.svg b/advocacy_docs/edb-postgres-ai/analytics/images/create-cluster-dropdown.svg
new file mode 100644
index 00000000000..6925d6e00e3
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/analytics/images/create-cluster-dropdown.svg
@@ -0,0 +1,5 @@
+
+
+
diff --git a/advocacy_docs/edb-postgres-ai/analytics/index.mdx b/advocacy_docs/edb-postgres-ai/analytics/index.mdx
index 2d4bff1e9d4..425fefca3e4 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/index.mdx
@@ -3,6 +3,7 @@ title: Lakehouse analytics
navTitle: Lakehouse analytics
indexCards: simple
iconName: Improve
+description: How EDB Postgres Lakehouse extends the power of Postgres by adding a vectorized query engine and separating storage from compute, to handle analytical workloads.
navigation:
- concepts
- quick_start
diff --git a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx
index 4e177f4bdf1..a9e1a8d2bc4 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/quick_start.mdx
@@ -2,6 +2,7 @@
title: Quick Start - EDB Postgres Lakehouse
navTitle: Quick Start
description: Launch a Lakehouse node and query sample data.
+deepToC: true
---
In this guide, you will:
@@ -43,7 +44,7 @@ in object storage.
Here's what's in the box of a Lakehouse node:
-![Level 300 Architecture of Postgres Lakehouse node](./images/level-300-architecture.png)
+[![Level 300 Architecture](images/level-300.svg)](images/level-300.svg)
## Getting started
@@ -55,9 +56,11 @@ a project, you can create a cluster.
You will see a “Lakehouse Analytics” option under the “Create New” dropdown
on your project page:
-![Create Lakehouse Node Dropdown](./images/create-cluster-dropdown.png)
+
+
+
-Clicking this button will start a configuration wizard that looks like this:
+Selecting the **Lakehouse Analytics** option starts a configuration wizard that looks like this:
![Create Lakehouse Node Wizard Step 1](./images/create-cluster-wizard.png)
@@ -97,7 +100,7 @@ cluster). Then you can copy the connection string and use it as an argument to
In general, you should be able to connect to the database with any Postgres
client. We expect all introspection queries to work, and if you find one that
-does not, then that is a bug.
+doesn't, then that's a bug.
### Understand the constraints
@@ -125,11 +128,11 @@ see [Reference - Bring your own data](./reference/#advanced-bring-your-own-data)
## Inspect the benchmark datasets
-Inspect the Benchmark Datasets. Every cluster has some benchmarking data
+Inspect the Benchmark Datasets. Every cluster has some benchmarking data
available out of the box. If you are using pgcli, you can run `\dn` to see
the available tables.
-The available benchmarking datsets are:
+The available benchmarking datasets are:
* TPC-H, at scale factors 1, 10, 100 and 1000
* TPC-DS, at scale factors 1, 10, 100 and 1000
diff --git a/advocacy_docs/edb-postgres-ai/analytics/reference.mdx b/advocacy_docs/edb-postgres-ai/analytics/reference.mdx
index 40632349eef..848981e983f 100644
--- a/advocacy_docs/edb-postgres-ai/analytics/reference.mdx
+++ b/advocacy_docs/edb-postgres-ai/analytics/reference.mdx
@@ -2,6 +2,7 @@
title: Reference - EDB Postgres Lakehouse
navTitle: Reference
description: Things to know about EDB Postgres Lakehouse
+deepToC: true
---
Postgres Lakehouse is an early product. Eventually, it will support deployment
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/deployment.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/deployment.mdx
new file mode 100644
index 00000000000..40cf2408cf3
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/deployment.mdx
@@ -0,0 +1,40 @@
+---
+title: Deployment Options on EDB Postgres AI Cloud Service
+navTitle: Deployment Options
+description: EDB Postgres AI Cloud Service offers a variety of deployment options for EDB Postgres databases for high availability and fault tolerance.
+deepToC: true
+---
+
+
+## Availability options
+
+### Single instance
+
+Single instance databases are great for development and testing, but for
+production workloads, you need to consider high availability and fault
+tolerance.
+
+### Primary/secondary replication
+
+Primary/secondary replication is a common high-availability solution for
+databases. In this configuration, a primary database server is responsible for
+processing read and write requests. A secondary database server is set up to
+replicate the primary database server. If the primary database server fails, the
+secondary database server can take over and become the primary database server.
+
+This configuration provides fault tolerance and high availability in a
+particular location. This is available with EDB Postgres® Advanced Server (EPAS)
+and EDB Postgres Extended Server (PGE).
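+
+As a minimal, hedged sketch of how such a primary/secondary pair is typically wired in Postgres (the hostname and user are placeholders, not values from this service):
+
+```
+# postgresql.conf on the primary: allow WAL streaming to standbys
+wal_level = replica
+max_wal_senders = 5
+
+# on the standby (standby.signal file present): follow the primary
+primary_conninfo = 'host=primary.example.com user=replicator'
+hot_standby = on          # permit read-only queries while replicating
+```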
+
+
+### Distributed high availability
+
+High availability is a critical requirement for mission-critical workloads. EDB
+Postgres Distributed (PGD) provides a distributed database environment designed
+to ensure high availability and fault tolerance for mission-critical workloads.
+PGD can use EPAS, PGE, or PostgreSQL databases as the underlying replicated
+database. PGD is available for self-managed deployment and on the EDB Postgres
+AI Cloud Service (as the Distributed High Availability option).
+
+
+
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/hosted.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/hosted.mdx
new file mode 100644
index 00000000000..49a64ea4321
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/hosted.mdx
@@ -0,0 +1,25 @@
+---
+title: Fully Hosted Databases on EDB Postgres AI Cloud Service
+navTitle: Hosted Databases
+description: EDB Postgres AI Cloud Service is able to host a fully managed EDB Postgres database in the EDB cloud.
+---
+
+If you are looking for a fully hosted database solution, the Hosted option of the EDB Postgres AI Cloud Service provides a fully managed database service that allows you to focus on your applications and data while EDB manages the database infrastructure. All you need to provide is the data.
+
+The advantage of the Hosted option is that the database deploys within the EDB cloud infrastructure, so you have full control over the data. EDB's cloud infrastructure globally spans the major cloud service providers making it easy to co-locate the database with other services.
+
+EDB manages the database infrastructure, including backups, monitoring, and updates. Through the [EDB Postgres AI Console](../console), it provides a single pane of glass for managing your databases, monitoring performance, and accessing logs and metrics, with a unified view of your fully managed databases, self-managed databases (through the [EDB Postgres AI Agent](../console/agent)), analytics lakehouses, S3 stores, and machine learning services.
+
+Hosted, fully managed databases are also quick to deploy and easy to scale. You can start with a small database and scale up as your data grows. EDB manages the scaling and performance of the database, so you can focus on your applications and data.
+
+The Hosted option is ideal for organizations that want a fully managed database solution without the complexity of managing the database infrastructure or that want to quickly deploy a database and start building applications without having to provision and manage the database infrastructure.
+
+The Hosted option is available for EDB Postgres Advanced Server (EPAS) and EDB Postgres Extended Server (PGE) databases. To get a fully hosted cluster, select EDB Postgres AI Cloud Service as the deployment option when you create a new database:
+
+
+
+
+
+You can deploy EDB's Postgres Advanced Server (EPAS) or Postgres Extended Server (PGE) databases as hosted databases, with a range of [deployment options](deployment) for high availability and fault tolerance.
+
+
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/images/fullyhosted.svg b/advocacy_docs/edb-postgres-ai/cloud-service/images/fullyhosted.svg
new file mode 100644
index 00000000000..ecad5cdb384
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/images/fullyhosted.svg
@@ -0,0 +1,5 @@
+
+
+
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/images/fullymanaged.svg b/advocacy_docs/edb-postgres-ai/cloud-service/images/fullymanaged.svg
new file mode 100644
index 00000000000..b53ec10cccc
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/images/fullymanaged.svg
@@ -0,0 +1,5 @@
+
+
+
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/index.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/index.mdx
new file mode 100644
index 00000000000..4b8fb25a57b
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/index.mdx
@@ -0,0 +1,31 @@
+---
+title: EDB Postgres AI Cloud Service
+navTitle: Cloud Service
+description: An introduction to the EDB Postgres AI Cloud Service and its features.
+navigation:
+- hosted
+- managed
+- deployment
+---
+
+The EDB Postgres® AI Cloud Service, formerly known as [BigAnimal](/biganimal/latest/), is an evolution of that service into a holistic platform offering hybrid data estate management, observability, analytics, and AI capabilities.
+
+!!! info
+EDB is currently updating the documentation to reflect the new EDB Postgres AI Cloud Service. Consult the [BigAnimal documentation](/biganimal/latest/) during the transition.
+!!!
+
+## Overview
+
+The EDB Postgres AI Cloud Service itself is a fully managed cloud service that provides a high-performance, scalable, and secure database platform for analytics, AI, and machine learning workloads. It also provides the platform for [EDB Postgres AI Analytics](../analytics/) and [EDB Postgres AI Machine Learning](../ai-ml/) services.
+
+Cloud Service builds on the [EDB Postgres Advanced Server](../databases/epas) and [EDB Postgres Extended](../databases/pge) databases and it's designed to help organizations accelerate the development and deployment of AI and machine learning applications.
+
+Databases in the EDB Postgres AI Cloud Service can run in EDB's own cloud accounts or be managed by EDB in your own cloud on your behalf.
+
+## EDB Postgres AI Cloud Service, Console, and Estate
+
+You get full visibility of the databases from the [EDB Postgres AI Console](../console), which is a web-based interface that provides a single pane of glass for managing your databases, monitoring performance, and accessing logs and metrics.
+
+The Console view isn't limited to managed or hosted database deployments. You can also deploy the databases yourself in your own cloud or on-premises, but still have them visible in your EDB Postgres AI Console as part of your [Estate](../console/estate) using the [Beacon Agent](../console/agent/).
+
+
diff --git a/advocacy_docs/edb-postgres-ai/cloud-service/managed.mdx b/advocacy_docs/edb-postgres-ai/cloud-service/managed.mdx
new file mode 100644
index 00000000000..dee14838140
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/cloud-service/managed.mdx
@@ -0,0 +1,15 @@
+---
+title: Managed Databases on EDB Postgres AI Cloud Service
+navTitle: Managed Databases
+description: EDB Postgres AI Cloud Service is able to manage an EDB Postgres database in your cloud.
+---
+
+For many organizations, managing databases can be a complex and time-consuming task. The Managed option of the EDB Postgres AI Cloud Service provides a fully managed database service that allows you to focus on your applications and data, while EDB manages the database infrastructure. All you need to provide is the cloud account for EDB to deploy into.
+
+
+
+
+
+The advantage of the Managed option is that the database can deploy directly into your cloud infrastructure, so you have full control over the cloud account and the data. EDB manages the database infrastructure, including backups, monitoring, and updates, and provides a single pane of glass for managing your databases, monitoring performance, and accessing logs and metrics.
+
+
diff --git a/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx b/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx
index 3ea57971429..44bc0a90c8a 100644
--- a/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/agent/agent-as-a-service.mdx
@@ -1,6 +1,6 @@
---
title: Running Beacon Agent as a service
-description: How to run Beacon Agent as a service on Ubuntu 22.04
+description: How to configure Beacon Agent to run as a service on Ubuntu 22.04
---
## Running Beacon Agent as a service
@@ -11,12 +11,12 @@ To have Beacon Agent run automatically on startup and restart after error, you n
Future versions of the agent package may set this up automatically.
!!!
-What follows is an example of how to run Beacon Agent as a service, specifically on an Ubuntu 22.04 machine. Follow the instructions above for setting up a machine user and then installing, configuring, testing, and running Beacon Agent before moving on to set up Beacon as a service. These instructions assume you have completed the last two sections and installed Beacon Agent on a Ubuntu 22.04 machine.
+What follows is an example of how to run Beacon Agent as a service, specifically on an Ubuntu 22.04 machine. Follow the instructions for [setting up a machine user](create-machine-user) and then [installing, configuring, testing, and running Beacon Agent](install-agent) before moving on to set up Beacon Agent as a service. These instructions assume you have completed the previous two sections and have Beacon Agent installed on an Ubuntu 22.04 machine.
-1. We are running the agent on the same server as the Postgres instance we're monitoring. So it's faster and more secure to have Beacon agent use Postgres local auth rather than set up password auth over TCP/IP.
+1. In this example, we're running the agent on the same server as the Postgres instance we're monitoring, so it's faster and more secure to have Beacon Agent use Postgres local auth rather than setting up password auth over TCP/IP.
!!! Note
- In this example, we use the 'ubuntu' user that is created by default on the EC2 with a default Ubuntu (22.04) machine image. In production, you'd want to use a minimally privileged user explicitly created for the purposes of running Beacon Agent on the server.
+ In this example, we use the 'ubuntu' user created by default on an AWS EC2 instance with a default Ubuntu 22.04 machine image. In production, you'd want to use a minimally privileged user explicitly created for the purposes of running Beacon Agent on the server.
!!!
To configure for local authentication, add the user, `ubuntu`, to Postgres using `psql` and then exit:
@@ -27,7 +27,7 @@ What follows is an example of how to run Beacon Agent as a service, specifically
exit
```
- To complete the setup for local authentication with Postgres, you need to ensure your `pg_hba.conf` file is configured to allow Unix-domain socket connections. Verify or update the following line in your `pg_hba.conf` file:
+    To complete the setup for local authentication with Postgres, you need to configure your `pg_hba.conf` file to allow Unix-domain socket connections. Ensure the following line is present in your `pg_hba.conf` file:
```
local all all peer
@@ -35,9 +35,9 @@ What follows is an example of how to run Beacon Agent as a service, specifically
This configuration allows any local system user to connect to any database without a password, provided that a corresponding PostgreSQL role with the same name as the system user exists (in this case, user `ubuntu`).
- Now Postgres is configured to allow local authentication so that Beacon Agent can access it as a service.
+    Now Postgres allows local authentication without a password, so Beacon Agent can access it as a service.
-1. Create a new file using your text editor of choice (superuser level permissions are necessary here). vi is used in this example:
+1. Create a new file using your text editor of choice (superuser level permissions are necessary here). This example command uses the vi editor:
```
sudo vi /etc/systemd/beacon-agent.service
@@ -53,7 +53,7 @@ What follows is an example of how to run Beacon Agent as a service, specifically
After=network.target
[Service]
- # Simple services don't do any forking / background nonsense
+    # Simple services don't do any forking or backgrounding
Type=simple
# User with which to run the service
diff --git a/advocacy_docs/edb-postgres-ai/console/agent/beacon-agent-ref.mdx b/advocacy_docs/edb-postgres-ai/console/agent/beacon-agent-ref.mdx
index b4cbcc9227d..8a501c7b4a3 100644
--- a/advocacy_docs/edb-postgres-ai/console/agent/beacon-agent-ref.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/agent/beacon-agent-ref.mdx
@@ -1,71 +1,171 @@
---
-title: Reference documentation for beacon-agent
+title: beacon-agent
navTitle: Reference
description: Reference documentation for the beacon-agent.
+deepToC: true
---
-## `beacon-agent` command
-### Synopsis
+## Synopsis
`beacon-agent` runs Beacon Agent or sets up Beacon agent through its subcommand `setup`.
-### Usage
+## Usage
```
-beacon-agent [command] [options]
+beacon-agent [subcommand] [options]
```
-### Commands
+### Global options
-1. **(no subcommand)** - runs Beacon agent.
+| Option | Description |
+|----------|--------------------------------------------|
+| `--help` | Provides information about optional flags. |
- - **Usage**
+## Subcommands
- ```
- beacon-agent
- ```
- - **Description**
+### (no subcommand)
- This default mode manages the Beacon Agent process, which sends data ingestions to the Beacon Server and maintains a local log of its activities.
-
- - **Examples**
+#### Description
+
+Runs Beacon Agent.
+
+In default mode, with no subcommand, this runs the Beacon Agent process, which sends data ingestions to the Beacon Server and maintains a local log of its activities.
+
+With no configuration file specified, the agent looks for a `beacon_agent.yaml` file in `/etc/beacon`, `$HOME/.beacon`, and the current directory.
+
+#### Usage
+
+```
+beacon-agent [-file=]
+```
+
+#### Options
- - Start Beacon Agent:
- ```
- beacon-agent
- ```
-2. **setup** - creates the configuration file and authenticates with the server.
+| Option | Description |
+|------------------|---------------------------------------------------------------------------------------|
+| `-file=` | Sets the filename (and path) of the configuration file. (default `"beacon_agent.yaml"`) |
- - **Usage**
- ```
- beacon-agent setup
- ```
+#### Examples
- - **Description**
+Start Beacon Agent:
- This command creates the Beacon configuration file and, by default, authenticates the Beacon project with the Beacon server.
+```
+beacon-agent
+```
+
+### `setup`
+
+#### Description
+
+Creates the Beacon Agent configuration file and, by default, authenticates Beacon Agent with the EDB Postgres AI control plane.
- - **Options**
+Verification of credentials requires the environment variables `BEACON_AGENT_ACCESS_KEY` and `BEACON_AGENT_PROJECT_ID` to be set.
- - `-file string=`: Sets the filename of the generated configuration file. (default "beacon_agent.yaml").
- - `-verify=`: Verifies the project's credentials with the Beacon server.
- - `--help`: Provides information about optional flags.
+```shell
+export BEACON_AGENT_ACCESS_KEY=
+export BEACON_AGENT_PROJECT_ID=
+```
+
+#### Usage
+
+```shell
+beacon-agent setup [-file=]
+```
+
+#### Options
+
+| Option | Description |
+|---------------------|---------------------------------------------------------------------------------------------------|
+| `-file=` | Sets the filename (and path) of the generated configuration file. (default `"beacon_agent.yaml"`) |
+| `-verify=` | Verifies the project's credentials with the Beacon server. |
- - **Examples**
- - Create the configuration file, but don't authenticate the Beacon project with the Beacon Server.
+#### Examples
+
+##### Creating a configuration file without authentication
- ```
- beacon-agent setup -verify=false
- ```
+```
+beacon-agent setup -verify=false
+```
- - Create the configuration file with a different name than the default (beacon_agent.yaml).
+##### Creating a configuration file with a different name
- ```
- beacon-agent setup -file my_beacon_config.yaml
- ```
-
+```
+beacon-agent setup -file my_beacon_config.yaml
+```
+
+### `version`
+
+#### Description
+
+Prints the version of the agent and exits.
+
+#### Usage
+
+```
+beacon-agent version
+```
+
+
+## Configuration file format
+
+The configuration file is a YAML file that contains the following fields:
+
+| Key | Description |
+|------------------------------|-------------------------------------------|
+| agent.access_key | The access key for the Beacon Agent. |
+| agent.access_key_grpc_header | The header key for the access key. |
+| agent.batch_size | The number of records to send in a batch. |
+| agent.beacon_server | The URL of the Beacon Server. |
+| agent.feature_flag_interval | The interval to check for feature flags. |
+| agent.project_id | The project ID for the Beacon Agent. |
+| agent.providers | An array of names of database providers. |
+| provider | An object containing provider configurations. |
+
+The `provider` object contains provider configurations for each provider. The key is the provider name.
+
+| Key | Description |
+|----------------------------------|----------------------------------------------------------------------|
+| _provider_name_.databases | An object containing named database configurations for the provider. |
+| _provider_name_.host | The host for the provider. |
+| _provider_name_.poll_interval | The polling interval for the provider. |
+| _provider_name_.host.resource_id | The resource ID for the host. |
+| _provider_name_.host.tags | An array of tags for the host. |
+
+The `databases` object contains database configurations for each database. The key is the database name:
+
+| Key | Description |
+|---------------------|------------------------------------|
+| _databasename_.dsn | The DSN for the database. |
+| _databasename_.tags | An array of tags for the database. |
+
+### Example configuration file
+
+```yaml
+agent:
+ access_key: "$BEACON_AGENT_ACCESS_KEY"
+ access_key_grpc_header: "x-access-key"
+ batch_size: 100
+ beacon_server: "beacon.biganimal.com:443"
+ feature_flag_interval: 10m0s
+ project_id: ""
+ providers:
+ - "onprem"
+provider:
+ onprem:
+ databases:
+ sales_reporting:
+ dsn: "$DSN"
+ tags:
+ - "foo"
+ - "bar"
+ host:
+ resource_id: "postgresql.lan"
+ tags: []
+ poll_interval: 5m0s
+```
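Because the agent expands environment variables such as `$BEACON_AGENT_ACCESS_KEY` and `$DSN` when it reads the configuration, an unset variable is an easy mistake to make. As a quick pre-flight check, you can scan a configuration for variable references and flag any that aren't set. This helper is a sketch, not part of `beacon-agent`; the inline configuration text stands in for your real file:

```shell
# Sketch of a pre-flight check: find $VARIABLE references in an agent
# configuration and report any that aren't set in the environment.
# The inline config below stands in for $HOME/.beacon/beacon_agent.yaml.
config_text='
agent:
  access_key: "$BEACON_AGENT_ACCESS_KEY"
provider:
  onprem:
    databases:
      sales_reporting:
        dsn: "$DSN1"
'
unset_vars=""
for var in $(printf '%s' "$config_text" | grep -o '\$[A-Z_][A-Z0-9_]*' | tr -d '$' | sort -u); do
    # printenv exits non-zero when the variable isn't set
    if ! printenv "$var" >/dev/null; then
        unset_vars="$unset_vars$var "
    fi
done
echo "unset variables: $unset_vars"
```

To run it against your real configuration, replace the inline text with `config_text=$(cat "$HOME/.beacon/beacon_agent.yaml")`.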
diff --git a/advocacy_docs/edb-postgres-ai/console/agent/create-machine-user.mdx b/advocacy_docs/edb-postgres-ai/console/agent/create-machine-user.mdx
index 31196551535..b55927b03f3 100644
--- a/advocacy_docs/edb-postgres-ai/console/agent/create-machine-user.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/agent/create-machine-user.mdx
@@ -4,18 +4,47 @@ description: Learn how to create a machine user in EDB Postgres AI Console to en
---
-1. In the EDB Postgres® AI Console, using your avatar's dropdown, select the **User Management** page.
+In the EDB Postgres® AI Console, using your avatar's user menu (at the top right of the display), select the **User Management** page.
-2. Select the **Add New User** button.
+This takes you to the User Management page. Here, you can create a machine user that Beacon Agent can use to ingest data into your Estate.
-3. On the **Add New User** page, select *Machine User*, give the user a name, check the **Create Access Key** checkbox, give the key a name, and finally, select the **Add User** button.
+## Creating a machine user
-4. After you get your access key, store it somewhere securely, as EDB does only lets you view access keys at creation time. If you lose your record of an access key, you will have to get a new/replacement key by regenerating it.
+Select the **Add New User** button.
-5. Next, go to the project from which you want to monitor the database, select the **Users** tab, and locate the machine user you just created.
+On the **Add New User** page, for **User Type**, select **Machine User**.
-6. Select the edit button to the right of the user entry, then select the **Assign Roles** button.
+The form changes when you make that selection, asking for a name and an optional email address. Enter a name for the machine user.
-7. In the **Assign Project Roles** modal, select the "estate ingester" option and then select the **Submit** button.
+## Creating an access key
+
+Check the **Create Access Key** checkbox on the **Add New User** page. The form expands to ask for an access key name and an expiry time.
+
+Give the key a name in the **Access Key Name** field, and in the **Expires In** field, enter a value from 1 to 365 for the number of days from now that you want the key to remain valid. The date on which the key will expire is shown underneath the field.
+
+Select the **Add User** button.
+
+The form changes again to show a field containing the access key for the new user. Copy the contents of this field to a secure location. This is the only time you can see this key. You can use the copy icon on the right of the field to copy the key to your clipboard.
+
+After you have copied your access key, store it somewhere securely, as EDB only lets you view access keys at creation time. If you lose your record of an access key, you need to regenerate a replacement key.
+
+Once you have securely stored your access key, select the **Key Stored Safely** button.
+
+## Assigning roles
+
+Select the **Projects** tab to view all your projects.
+
+Select the project from which you want to monitor the database.
+
+This takes you to the project's overview. Select the **Users** tab, and locate the machine user you just created.
+
+Select the edit button (a pen icon) on the far right of the user entry. This takes you to the user editing page.
+
+Select the **Assign Roles** button on the right of the page.
+
+An **Assign Project Roles** dialog appears with a selection of roles that can be assigned.
+
+Select the "estate ingester" role and then select the **Submit** button.
+
+Your new machine user is ready to ingest data from Beacon Agent.
-8. Your machine user is ready for estate ingestions from Beacon Agent.
\ No newline at end of file
diff --git a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx
index ec2763cbff2..c10c8f73d72 100644
--- a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx
@@ -3,158 +3,173 @@ title: Installing Beacon Agent
description: Installing, configuring, testing, and running Beacon Agent
---
-The following steps walk you through how to install and configure Beacon Agent, test locally, and then run the agent to see the results in your Estates page in the EDB Postgres® AI Console.
+The following steps walk you through how to install and configure Beacon Agent, test it locally, and then run the agent to see the results in your Estates page in the EDB Postgres® AI Console.
Before you begin, you need to have the following:
* The access key for a machine user with the `estate ingester` role assigned to it. For more information, see [Creating a machine user](create-machine-user).
* The project ID for the project you want to monitor. You can find this in the URL when you are in the project in the EDB Postgres AI Console.
-1. Establish a connection between your system and the Cloudsmith repository for EDB Postgres AI.
- First, find your EnterpriseDB Repos 2.0 token [here](https://www.enterprisedb.com/repos-downloads).
- Next, set environmental variables for your repos subscription token and your EDB subscription type (standard or enterprise):
+## Enable your system to download packages
- ```
- export EDB_SUBSCRIPTION_TOKEN=
- export EBD_SUBSCRIPTION_TYPE=
- ```
+You need to enable the system where you plan to run the agent to download packages from the EDB repositories for EDB Postgres AI.
- Then, download and run a script to configure your system to access the `beacon-agent` package from the repository. Be sure to replace `` with your EDB Repos 2.0 token and replace `` with your subscription type (`standard` or `enterprise`):
+Locate your EDB subscription token on the [EnterpriseDB repos download page](https://www.enterprisedb.com/repos-downloads).
- For RHEL-like or SLES:
+Using the retrieved subscription token, set an environmental variable (`EDB_SUBSCRIPTION_TOKEN`) to the token's value. Also set an environmental variable for your EDB subscription type (`EDB_SUBSCRIPTION_TYPE`) to either `standard` or `enterprise`, depending on which plan you are signed up to:
- ```
- curl -1sSLf 'https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_TYPE/setup.rpm.sh' | sudo -E bash
- ```
+```shell
+export EDB_SUBSCRIPTION_TOKEN=
+export EDB_SUBSCRIPTION_TYPE=
+```
+
+Depending on your operating system, run the appropriate command to enable package downloads from the EDB repositories:
+
+For RHEL-like or SLES:
+
+```shell
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_TYPE/setup.rpm.sh" | sudo -E bash
+```
+
+For Debian or Ubuntu:
+
+```shell
+curl -1sSLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_TYPE/setup.deb.sh" | sudo -E bash
+```
+
+## Install the beacon-agent package
+
+You can now install packages from EDB's repositories. Install the `beacon-agent` package:
+
+For RHEL-like:
- For Debian or Ubuntu:
+```shell
+dnf install beacon-agent
+```
- ```
- curl -1sSLf 'https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/$EDB_SUBSCRIPTION_TYPE/setup.deb.sh' | sudo -E bash
- ```
+Or if dnf isn't available:
-2. Install the `beacon-agent` package:
+```shell
+yum install beacon-agent
+```
- For RHEL-like:
+For SLES:
- ```
- dnf install beacon-agent
- ```
+```shell
+zypper install beacon-agent
+```
- Or if dnf isn't available:
+For Debian or Ubuntu:
- ```
- yum install beacon-agent
- ```
+```shell
+apt install beacon-agent
+```
- For SLES:
+## Configure Beacon Agent
- ```
- zypper install beacon-agent
- ```
+Create a Beacon configuration directory in your home directory:
- For Debian or Ubuntu:
+```
+mkdir ${HOME}/.beacon
+```
+
+Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create-machine-user)) and project ID:
+
+```
+export BEACON_AGENT_ACCESS_KEY=
+export BEACON_AGENT_PROJECT_ID=
+```
- ```
- apt install beacon-agent
- ```
+Running the `beacon-agent setup` command creates a configuration file in the Beacon configuration directory, using those environment variables.
-3. Configure Beacon Agent and database connections.
+You also need to specify the Beacon configuration directory for storing the configuration file and the name of the configuration file to generate there. The `$HOME/.beacon/` directory is one of the default locations in which `beacon-agent` searches for `beacon_agent.yaml` when it starts. Using the `-file` flag tells the agent setup process to create its configuration file in a specific location.
- First, create a Beacon config in your home directory:
+```
+beacon-agent setup -file="$HOME/.beacon/beacon_agent.yaml"
+```
- ```
- mkdir ${HOME}/.beacon
- ```
+During the `beacon-agent setup` process, an authentication attempt occurs, using the access key and project ID you provided. This authentication is necessary for Beacon Agent to communicate with the Beacon server and to register with the project.
- Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create_machine_user)) and project ID:
-
- ```
- export BEACON_AGENT_ACCESS_KEY=
- export BEACON_AGENT_PROJECT_ID=
- ```
-
-
- Then, specify the Beacon config directory for storing the configuration file and the name of the configuration file to be created there.
+Upon a successful registration, you should see a message indicating that you have authenticated successfully to your EDB Postgres AI project.
- ```
- beacon-agent setup -file="$HOME/.beacon/beacon_agent.yaml"
- ```
+## Configure database connections
- During the `beacon-agent setup` process, an authentication attempt occurs, utilizing the provided access key and project ID. This authentication is necessary for Beacon Agent to communicate with the Beacon server and register with the project successfully.
+Create DSN/connection strings for each database you want to monitor. Each DSN should include the database name, the user, the password, the host, and the port. For example:
- If the commands execute as expected, you should see a message indicating that you have authenticated successfully to your EDB Postgres AI project.
+
+```
+"user=postgres password=postgres dbname=postgres host=localhost port=5432"
+```
+
+You can also use a DSN in the URL format `postgres://user:password@host:port/dbname`.
-4. For security best practices, create a `DSN` environmental variable for each database (`DSN1`, `DSN2`, and so on) you want to monitor and set each to the correct corresponding dsn.
+As DSNs can contain sensitive information, such as the password, it's best practice to create an environmental variable for each database you want to monitor and set each to the corresponding DSN.
- ```
- export DSN1=
- ```
+```shell
+export DSN1=
+```
-5. Open your `$HOME/.beacon/beacon_agent.yaml` configuration file and add an entry for each database you want to connect to, including the environmental variable for each database's dsn.
+Edit your `$HOME/.beacon/beacon_agent.yaml` configuration file and add an entry for each database you want to connect to, referencing the environmental variable for each database's DSN. Precede the variable name with a `$` to indicate that it's an environmental variable. Each database entry should also include tags to help you identify the database in the EDB Postgres AI Console.
- Entries under `databases` utilize the following format:
-
- ```
- databases:
- :
- dsn: "$DSN1"
- tags:
- - ""
- - ""
- :
- dsn: "$DSN2"
- tags:
- - ""
- - ""
- ```
+Entries under `databases` use the following format:
- Here is an example with a database named `sales_reporting`:
+```yaml
+databases:
+ :
+ dsn: "$DSN1"
+ tags:
+ - ""
+ - ""
+ :
+ dsn: "$DSN2"
+ tags:
+ - ""
+ - ""
```
+
+Here is an example `beacon_agent.yaml` file configured for a database named `sales_reporting`:
+
+```yaml
agent:
- access_key: $BEACON_AGENT_ACCESS_KEY
- access_key_grpc_header: "x-access-key"
- batch_size: 100
- beacon_server: "beacon.biganimal.com:443"
- feature_flag_interval: 10m0s
- project_id: ""
- providers:
- - "onprem"
+  access_key: $BEACON_AGENT_ACCESS_KEY
+  access_key_grpc_header: "x-access-key"
+  batch_size: 100
+  beacon_server: "beacon.biganimal.com:443"
+  feature_flag_interval: 10m0s
+  project_id: ""
+  providers:
+  - "onprem"
provider:
- onprem:
- databases:
- sales_reporting:
- dsn: “$DSN”
- tags:
- - "foo"
- - "bar"
- host:
- resource_id: "postgresql.lan"
- tags: []
- poll_interval: 5m0s
+  onprem:
+    databases:
+      sales_reporting:
+        dsn: "$DSN1"
+        tags:
+        - "sales"
+        - "reports"
+    host:
+      resource_id: "postgresql.lan"
+      tags: []
+    poll_interval: 5m0s
```
-6. Test Beacon Agent locally.
+## Test Beacon Agent locally
- As an initial smoke test of the agent, it can send the ingestions that it would send back to the Beacon server to stdout instead. This allows you to quickly confirm if the agent is successfully able to gather ingestions and what those ingestions look like.
+For an initial test of the agent, you can have it send the data that it would normally send to the EDB Postgres AI control plane to standard output (your terminal session) instead. This allows you to quickly confirm that the agent can gather data and see what that data looks like.
- You can run the agent in stdout mode by modifying the `beacon_agent.yaml` file generated previously to have an `agent.beacon_server` value of `"stdout"`. A truncated example of this would be:
+You can run the agent in standard output mode by modifying the `beacon_agent.yaml` file generated previously to have an `agent.beacon_server` value of `"stdout"`. A truncated example of this would be:
- ```
- agent:
- beacon_server: "stdout"
- ```
+```yaml
+agent:
+ beacon_server: "stdout"
+```
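Rather than editing the file by hand, you can flip the value with `sed`. This is a sketch: it operates on a temporary copy so you can see the effect safely; point `config` at `$HOME/.beacon/beacon_agent.yaml` to change your real file, with the `.bak` backup guarding against mistakes:

```shell
# Sketch: switch agent.beacon_server to "stdout" for a local test run.
# Uses a temporary copy; set config="$HOME/.beacon/beacon_agent.yaml"
# to modify the real configuration instead.
config=$(mktemp)
cat > "$config" <<'EOF'
agent:
  beacon_server: "beacon.biganimal.com:443"
EOF
# Keep a .bak backup, then rewrite the beacon_server value
sed -i.bak 's|beacon_server: ".*"|beacon_server: "stdout"|' "$config"
grep beacon_server "$config"
```

The same one-liner, with the substitution reversed, switches the agent back to the real endpoint after testing.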
- Next, run the agent in this mode using the following command:
+Next, run the agent in this mode using the following command:
- ```
- beacon-agent
- ```
+```shell
+beacon-agent
+```
- You should see output similar to the following eventually (it can take around 5 minutes to see the last few lines):
+Eventually, you should see output similar to the following (it can take around 5 minutes for the last few lines to appear):
```
{"level":"debug","data":"$BEACON_AGENT_ACCESS_KEY","time":"2024-05-08T18:40:34Z","message":"expanding environment variable in configuration"}
@@ -179,17 +194,33 @@ provider:
```
!!! Note
-The message in the second to last line of the logs above specifies that we are using the log client and not actually sending ingestions.
+The message in the second-to-last line of the preceding log confirms that the gathered data is being output to stdout and isn't being sent to the control plane.
!!!
-7. Run Beacon Agent.
+## Configure Beacon Agent to send data
+
+The next step is to configure Beacon Agent to send data to the EDB Postgres AI control plane.
+
+In the `beacon_agent.yaml` file, on the `agent.beacon_server` line, replace `"stdout"` with `"beacon.biganimal.com:443"`:
+
+```yaml
+agent:
+  beacon_server: "beacon.biganimal.com:443"
+```
+
+## Run Beacon Agent
+
+Run the agent using the following command:
+
+```shell
+beacon-agent
+```
+
+The agent starts sending data to the EDB Postgres AI control plane. Follow the logs to monitor the data ingestion process.
- In the `beacon_agent.yaml` file, replace `"stdout" with `"beacon.biganimal.com:443"`:
+## Check the EDB Postgres AI Console
- ```
- agent:
- beacon_server: "beacon.biganimal.com:443"
+After a few minutes, the data from the agent appears in the EDB Postgres AI Console. Navigate to the Estates page to see it.
- ```
-8. Follow the logs to monitor the ingestions.
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/index.mdx b/advocacy_docs/edb-postgres-ai/console/estate/index.mdx
index 46434c02703..143b925cee7 100644
--- a/advocacy_docs/edb-postgres-ai/console/estate/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/estate/index.mdx
@@ -1,9 +1,28 @@
---
title: EDB Postgres AI Console - Estate
navTitle: Estate
-description: How to manage and integrate EDB Postgres AI databases and more with EDB Postgres AI Console's single pane of glass.
+description: How to monitor and integrate EDB Postgres AI databases and more with EDB Postgres AI Console's single pane of glass.
---
-## What is EDB Postgres AI Estate?
+## About Estate
+
+EDB Postgres® AI manages and hosts many services for many users, gathering metrics, orchestrating services, and more.
+
+In EDB Postgres AI, the Estate refers to all the resources you have access to on EDB Postgres AI. Whatever resource you add becomes part of your Estate.
+
+Within your Estate are Projects. Projects group your resources and services, allowing you to organize your Estate. You can create a project for each of your teams, each specific application or project, or each environment, and you can control access to each project. When you add a resource to your Estate, you can choose which project it belongs to.
+
+## Viewing your estate
+
+The Console provides a centralized location for managing the lifecycle of EDB Postgres AI databases and EDB Postgres AI agents, including provisioning, scaling, and monitoring. It has three views: an overview, a Projects view, and an Estate view.
+
+The Projects view shows all the projects in your Estate. Selecting a project gives you a view of all resources and services in the project.
+
+The Estate view shows all the resources and services in your Estate. You can filter this view by resource type, project, or status. The Estate view is a powerful way to see everything that is happening in your Estate.
+
-The EDB Postgres® AI Estate is a component of the EDB Postgres AI Console that provides a single pane of glass for managing and integrating EDB Postgres AI Databases and EDB Postgres AI Agents. The Estate provides a centralized location for managing the lifecycle of EDB Postgres AI Databases and EDB Postgres AI Agents, including provisioning, scaling, and monitoring. The Estate also provides a centralized location for managing the integration of EDB Postgres AI Databases and EDB Postgres AI Agents with the EDB Postgres AI Console's single pane of glass.
\ No newline at end of file
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/index.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/index.mdx
new file mode 100644
index 00000000000..2b457c410fc
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/index.mdx
@@ -0,0 +1,28 @@
+---
+title: Integrating resources into your EDB Postgres AI Estate
+navTitle: Integrating
+description: How to integrate resources into your EDB Postgres AI Estate so they appear as part of the EDB Postgres AI Console's single pane of glass.
+---
+
+There are three ways to integrate resources into your Estate on EDB Postgres AI.
+
+* Automatically, using the [EDB Postgres AI Cloud Service](../../../cloud-service/). Clusters created or managed there automatically deliver metrics to your Estate and appear in your Estate's single pane of glass. Users can also opt for schema ingestion, which brings in metadata from the database.
+* Directly, by configuring your Estate to gather resource metrics from a source.
+* Indirectly, by using the [EDB Postgres AI Agent](../../agent/).
+
+## Automatic integration
+
+When you use the EDB Postgres AI Cloud Service to create or manage clusters, EDB Postgres AI automatically gathers metrics from those clusters and presents them to you as part of your Estate. Selecting **Schema Ingestion** during cluster creation also brings in metadata from the database. Details of this process are available in the [Schema Ingestion](integrate_schema) section.
+
+
+## Direct integration
+
+You can configure EDB Postgres AI to gather resource metrics directly from a source and present them as part of your Estate. One example of this is the Estate's [AWS integration](integrate_aws), which, once configured, allows you to see information about your AWS resources in the EDB Postgres AI Console.
+
+## Agent integration
+
+You can bring bare metal or virtualized resources into your Estate by installing the EDB Postgres AI Agent on them. The Agent gathers metrics from the resources and sends them to EDB Postgres AI where they are visible in your Estate view in the EDB Postgres AI Console. Details of this process are available in the [Agent](../../agent/) section.
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/01_create_policy.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/01_create_policy.mdx
new file mode 100644
index 00000000000..08921aa17b7
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/01_create_policy.mdx
@@ -0,0 +1,46 @@
+---
+title: Step 1 - Create a custom policy
+navTitle: Create a custom policy
+description: Create a custom policy in AWS to allow a role to query the metadata of your AWS RDS and S3 services.
+---
+
+The process of integrating AWS resources into your Estate starts from the EDB Postgres AI Console.
+
+1. Go to [**EDB Postgres AI Console**](https://portal.biganimal.com/beacon).
+
+2. Scroll down to the **Cloud Hosted Databases** section, select the **Manage Access** button, and choose your project.
+
+3. The **Cloud Hosted Databases** UI shows **Step 1 - Create custom policy**.
+
+4. Go to the console of your AWS account with the RDS instances and S3 buckets you want to monitor.
+
+5. Navigate to IAM, and in the navigation pane on the left side of the AWS console, select **Policies**.
+
+6. On the **Policies** dashboard page, select the **Create policy** button.
+
+7. In the **Policy editor** section, choose the JSON option.
+
+8. Type or paste the following JSON policy document into the JSON editor:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": [
+ "rds:DescribeDBInstances",
+ "s3:ListAllMyBuckets",
+ "rds:DescribeDBClusters"
+ ],
+ "Resource": "*"
+ }
+ ]
+ }
+ ```
+
+9. Select **Next**, give the policy a name (for example, `edb-postgres-ai-addon-policy`), and select **Create Policy**. This policy allows the EDB Postgres AI server to query the metadata of your AWS RDS and S3 services.
+
+10. Next, in the Cloud Hosted Databases UI, select the **Next: Create a Role** button. The Cloud Hosted Databases UI should now show [**Step 2 - Create a Role**](02_create_role).
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/02_create_role.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/02_create_role.mdx
new file mode 100644
index 00000000000..6a5af1abe35
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/02_create_role.mdx
@@ -0,0 +1,55 @@
+---
+title: Step 2 - Create a role
+navTitle: Create a role
+description: Create a role in AWS that uses the custom policy and inform EDB Postgres AI of the role.
+---
+
+Following on from [Creating a policy](01_create_policy), you now need to create a role in AWS that uses that policy. You then need to inform EDB Postgres AI of the role ARN so that it can access the RDS and S3 metadata in your AWS account.
+
+1. Go to the AWS console UI, and in the left-hand navigation pane, choose **Roles** and then select the **Create role** button.
+
+2. Select **Custom trust policy** role type.
+
+3. In the **Custom trust policy** section, paste the trust policy you obtained from **Step 2** in the Cloud Hosted Databases UI. It looks similar to this:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::292478331082:root"
+ },
+ "Action": "sts:AssumeRole",
+ "Condition": {
+ "StringEquals": {
+ "sts:ExternalId": ""
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+ !!! Note
+    The EDB Postgres AI Cloud Hosted Databases UI shows a snippet like the one above but with the external ID already specified.
+ !!!
+
+4. Select the **Next** button.
+
+5. Select the policy you created earlier. In this example, we used `edb-postgres-ai-addon-policy`.
+
+6. Select the **Next** button.
+
+7. Give the role a name. The name must start with `biganimal-role`, such as `biganimal-role-beacon`.
+
+8. Select the **Create role** button.
+
+9. Still in the AWS console, select the **View role** button in the green banner at the top of the **Roles** dashboard.
+
+10. Copy the role ARN from the Summary section of the Role page in the AWS console and paste it into the form at the bottom of the Cloud Hosted Databases UI labeled **Role ARN**.
+
+11. Select the **Next: Regions and Services** button in the Cloud Hosted Databases UI to move to [**Step 3 - Regions and Services**](03_set_regions).
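+
+As a command-line alternative, the role can also be created and attached with the AWS CLI. This is a sketch: `trust-policy.json` is assumed to hold the trust policy copied from the Cloud Hosted Databases UI (with the external ID filled in), and the account ID in the policy ARN is a placeholder for your own.
+
+```shell
+# Create the role from the trust policy supplied by the UI.
+# The role name must start with "biganimal-role".
+aws iam create-role \
+  --role-name biganimal-role-beacon \
+  --assume-role-policy-document file://trust-policy.json
+
+# Attach the custom policy created in step 1.
+aws iam attach-role-policy \
+  --role-name biganimal-role-beacon \
+  --policy-arn arn:aws:iam::123456789012:policy/edb-postgres-ai-addon-policy
+
+# Print the role ARN to paste into the Role ARN field in the UI.
+aws iam get-role --role-name biganimal-role-beacon \
+  --query 'Role.Arn' --output text
+```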
+
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/03_set_regions.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/03_set_regions.mdx
new file mode 100644
index 00000000000..e31448b2eea
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/03_set_regions.mdx
@@ -0,0 +1,20 @@
+---
+title: Step 3 - Set regions and services
+navTitle: Set regions and services
+description: Set the regions and services you want to monitor in your AWS account.
+---
+
+The final step after setting up a policy and [creating a role](02_create_role) in AWS is to set the regions and services you want to monitor in your AWS account. In the Cloud Hosted Databases UI, you should now see the **Step 3 - Regions and Services** page.
+
+1. On the page, select the regions that you want to monitor.
+
+2. Below the regions, select the services you want to monitor in those regions.
+
+3. Select the **Next: Review and submit** button.
+
+4. Review your regions and services selections, then select the **Submit** button. If you notice a mistake, you can always use the **Prev: Regions and Services** button and go back a step.
+
+5. Upon success, a notification appears at the top of the Estate page saying, "The configuration has been submitted successfully."
+
+6. You should start to see the **Cloud Hosted Databases** section of your **Estate** page populate with the available S3 buckets and RDS instances.
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/index.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/index.mdx
new file mode 100644
index 00000000000..f2f14b8b576
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_aws/index.mdx
@@ -0,0 +1,16 @@
+---
+title: Integrating AWS resources into your Estate
+navTitle: AWS resources
+description: How to ingest and monitor AWS resources with EDB Postgres AI.
+deepToC: true
+---
+
+Setting up the EDB Postgres® AI Console to monitor your RDS instances and S3 buckets on AWS involves adding a specific policy and role in AWS. Once the policy and role are in place, you need to enter the role ARN of the newly created role into the **Cloud Hosted Databases** UI, accessible via the **Estate** page in the EDB Postgres AI Console.
+
+Using this role ARN and a custom policy, the EDB Postgres AI server can access the RDS and S3 information in your AWS account.
+
+After providing the role ARN in the Cloud Hosted Databases UI, you can see the selected AWS resources (RDS instances and/or S3 buckets) in the chosen AWS regions on your **Estate** page in the **Cloud Hosted Databases** section.
+
+Follow the steps below to integrate your AWS resources into your Estate, starting with creating a custom policy:
+
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_schema.mdx b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_schema.mdx
new file mode 100644
index 00000000000..19b4ae842f3
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/integrating/integrate_schema.mdx
@@ -0,0 +1,28 @@
+---
+title: Schema Ingestion from EDB Postgres AI Cloud Service
+navTitle: Schema Ingestion
+description: How to ingest schemas from the EDB Postgres AI Cloud Service.
+---
+
+By default, clusters on the EDB Postgres AI Console Cloud Service don't have schemas ingested by EDB Postgres AI; only metrics are ingested.
+
+Schema ingestion is the process of collecting metadata from a database, such as schema names, table names, column names, and column types. This metadata is then used to provide additional context and information about the database objects in the EDB Postgres AI Console and Analytics. The metadata is securely transferred and stored within the Estate control plane.
+
+You can enable schema ingestion from the EDB Postgres AI Console Cloud Service by selecting the **Schema Ingestion** option when creating a cluster on the EDB Postgres AI Cloud Service.
+
+When you enable schema ingestion from the EDB Postgres AI Console Cloud Service, the following information gets sent to the Estate control plane:
+
+* Schema names
+* Table names
+* Table types
+* Column names
+* Column types
+* Column default values
+* Column nullability
+* Table description (from comments)
+* Column description (from comments)
+
+The schema ingestion process is secure and doesn't collect any data from the tables themselves. The process only collects metadata about the tables and columns. Planned future developments will allow for more detailed metadata collection, such as indexes, constraints, and triggers.
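+
+As a rough illustration of the kind of metadata involved, a query like the following returns similar information from a Postgres database. This is only a sketch for exploring your own schemas; it isn't the query the ingestion process actually runs, and "mydb" is a placeholder database name:
+
+```shell
+# List user-defined columns with their types, defaults, and nullability.
+psql -d mydb -c "SELECT table_schema, table_name, column_name, data_type,
+                        column_default, is_nullable
+                 FROM information_schema.columns
+                 WHERE table_schema NOT IN ('pg_catalog', 'information_schema');"
+```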
+
+The full schema ingestion schema details are in [Schema Ingestion - Reference](../reference/schema_ingestion).
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/monitor_aws.mdx b/advocacy_docs/edb-postgres-ai/console/estate/monitor_aws.mdx
deleted file mode 100644
index b5b360e0976..00000000000
--- a/advocacy_docs/edb-postgres-ai/console/estate/monitor_aws.mdx
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Monitoring AWS resources in EDB Postgres AI
-navTitle: Cloud Hosted Databases - AWS resources
-description: How to monitor AWS resources in EDB Postgres AI Estate.
-deepToC: true
----
-
-## Overview
-
-Setting up the EDB Postgres® AI Console to monitor your RDS instances and S3 buckets on AWS involves adding a specific policiy and role in AWS. Once these are configured, you need to enter the role ARN of the newly created role into the **Cloud Hosted Databases** UI, accessible via the **Estate** page in the EDB Postgres AI Console.
-
-Using this role ARN and a custom policy, the EDB Postgres AI server will have access to the RDS and S3 information in your AWS account.
-
-After providing the role ARN in the Cloud Hosted Databases UI, you will see the selected AWS resources (RDS instances and/or S3 buckets) in the chosen AWS regions on your **Estate** page in the **Cloud Hosted Databases** section.
-
-## Setting up monitoring of AWS resources in EDB Postgres AI Estate
-
-### Starting the Cloud Hosted Databases UI
-
-1. Go to **EDB Postgres AI Console**.
-
-2. Scroll down to the **Cloud Hosted Databases** section, select the **Manage Access** button, and choose your project.
-
-3. The **Cloud Hosted Databases** UI shows **Step 1 - Create custom policy**.
-
-### Creating the AWS custom policy
-
-4. Go to the console of your AWS account with the RDS instances and S3 buckets you want to monitor.
-
-5. Navigate to IAM, and in the navigation pane on the left side of the AWS console, select **Policies**.
-
-6. On the **Policies** dashboard page, select the **Create policy** button.
-
-7. In the **Policy editor** section, choose the JSON option.
-
-8. Type or paste the following JSON policy document into the JSON editor:
-
- ```json
- {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "VisualEditor0",
- "Effect": "Allow",
- "Action": [
- "rds:DescribeDBInstances",
- "s3:ListAllMyBuckets",
- "rds:DescribeDBClusters"
- ],
- "Resource": "*"
- }
- ]
- }
- ```
-
-9. Select **Next**, give the policy a name, for example, `edb-postgres-ai-addon-policy` and select **Create Policy**. This policy allows EDB Postgres AI server to query metadata of your AWS RDS and S3 services.
-
-
-### Creating the AWS role
-
-10. Next, in the Cloud Hosted Databases UI, select the **Next: Create a Role** button. The Cloud Hosted Databases UI should now show **Step 2 - Create a Role**.
-
-11. Go to the AWS console UI, and in the left-hand navigation pane, choose **Roles** and then select the **Create role** button.
-
-12. Select **Custom trust policy** role type.
-
-13. In the **Custom trust policy** section, paste the trust policy you obtained from **Step 2** in the Cloud Hosted Databases UI. It looks similar to this:
-
- ```json
- {
- "Version": "2012-10-17",
- "Statement": [
- {
- "Effect": "Allow",
- "Principal": {
- "AWS": "arn:aws:iam::292478331082:root"
- },
- "Action": "sts:AssumeRole",
- "Condition": {
- "StringEquals": {
- "sts:ExternalId": ""
- }
- }
- }
- ]
- }
- ```
-
- !!! Note
- The EDB Postgres AI Cloud Hosted Databases UI shows a snippet like the one above but with the `` already specified.
- !!!
-
-14. Select the **Next** button.
-
-15. Select the policy you created earlier. In this example, we used `edb-postgres-ai-addon-policy`.
-
-16. Select the **Next** button.
-
-17. Give the role a name. Note that you must give the role a name that starts with `biganimal-role`, such as `biganimal-role-beacon`.
-
-18. Select the **Create role** button.
-
-### Entering the role ARN into the EDB Postgres AI UI
-
-19. Still in the AWS console, select the **View role** button in the green banner at the top of the **Roles** dashboard in the AWS console.
-
-20. Copy the role ARN from the Summary section of the Role page in AWS console and paste it into the form at the bottom of the Cloud Hosted Databases UI labeled **Role ARN**.
-
-21. Select the **Next: Regions and Services** button in the Cloud Hosted Databases UI to move to the next step.
-
-### Selecting the scope of regions and services
-
-22. For **Step 3 - Regions and Services**, select the regions that you want to monitor and the services you want to monitor in those regions.
-
-23. Select the **Next: Review and submit** button.
-
-24. Review your regions and services selections, then select the **Submit** button. If you notice a mistake, you can always use the **Prev: Regions and Services** button and go back a step.
-
-25. Upon success, you will see a notification at the top of the Estate page saying, "The configuration has been submitted successfully."
-
-26. Within a moment, you should start to see the **Cloud Hosted Databases** section of your **Estate** page populate with the available S3 buckets and RDS instances.
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/monitoring/index.mdx b/advocacy_docs/edb-postgres-ai/console/estate/monitoring/index.mdx
new file mode 100644
index 00000000000..db415b28d9c
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/monitoring/index.mdx
@@ -0,0 +1,14 @@
+---
+title: Monitoring resources in your Estate
+navTitle: Monitoring
+description: How to monitor resources in your Estate through the console by navigating the Projects and Clusters views.
+---
+
+To start monitoring resources in your Estate, go to the [EDB Postgres AI Console](https://portal.biganimal.com/beacon).
+
+Select a Project in the *Projects* panel.
+
+This takes you to the Overview page for the selected Project. By default this displays the top three most recently active resources in the project, with statistics for Memory, CPU, Storage, and Disk IOPS usage along with Transactions rate and Database size.
+
+To see all the Clusters in a Project, select the *Clusters* tab. This shows a list of all the Clusters in the Project. To view a Cluster's detailed statistics, select the Cluster's name in the view. This takes you to the Cluster view. Select the *Monitoring* tab to see the detailed statistics for the Cluster.
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/reference/index.mdx b/advocacy_docs/edb-postgres-ai/console/estate/reference/index.mdx
new file mode 100644
index 00000000000..72181a1fbc5
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/reference/index.mdx
@@ -0,0 +1,9 @@
+---
+title: EDB Postgres AI Console - control plane and Estate - Reference
+navTitle: Reference
+description: Reference documentation for the EDB Postgres AI control plane and Estate.
+---
+
+This section covers the EDB Postgres AI control plane, the metrics and schema information it can ingest, and other reference information about the Estate.
+
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/reference/metrics_ingestion.mdx b/advocacy_docs/edb-postgres-ai/console/estate/reference/metrics_ingestion.mdx
new file mode 100644
index 00000000000..641cbf277b6
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/reference/metrics_ingestion.mdx
@@ -0,0 +1,22 @@
+---
+title: Metrics Ingestion into the control plane
+navTitle: Metrics Ingestion
+description: The metrics that can be ingested into the EDB Postgres AI control plane and your Estate.
+---
+
+All agents in the EDB Postgres AI Estate send metrics to the EDB Postgres AI control plane.
+
+Currently, ingestion brings the following metrics into the Estate:
+
+* Database size - derived from `pg_database_size()` and presented in MiB.
+* Recovery status - derived from `pg_is_in_recovery()`.
+* Postgres version - derived from `version()`.
+* Host name[^1] - derived from `gopsutil.Hostname()`.
+* Operating system[^1] - derived from `gopsutil.OS()`, for example "linux".
+* Platform[^1] - derived from `gopsutil.Platform()`, for example "ubuntu".
+* Platform version[^1] - derived from `gopsutil.PlatformVersion()`, for example "20.04".
+* CPU count[^1] - derived from `gopsutil.CPUs.Counts()`.
+
+[^1]: The agent acquires these metrics from the host where it's running; they aren't currently collected from the database itself. They're also not currently displayed in the EDB Postgres AI Console.
+
+The metrics ingestion process is secure and doesn't collect any data from the tables themselves.
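+
+To see the database-level values the agent derives, you can run the same functions manually. A minimal sketch, using psql against a placeholder database named "mydb":
+
+```shell
+# Database size (in MiB), recovery status, and Postgres version,
+# from the same functions listed above.
+psql -d mydb -c "SELECT pg_database_size(current_database()) / 1024 / 1024 AS size_mib,
+                        pg_is_in_recovery() AS in_recovery,
+                        version();"
+```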
diff --git a/advocacy_docs/edb-postgres-ai/console/estate/reference/schema_ingestion.mdx b/advocacy_docs/edb-postgres-ai/console/estate/reference/schema_ingestion.mdx
new file mode 100644
index 00000000000..29d73b04649
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/estate/reference/schema_ingestion.mdx
@@ -0,0 +1,23 @@
+---
+title: Schema Ingestion into the control plane
+navTitle: Schema Ingestion
+description: The schema information that can be ingested into the EDB Postgres AI control plane and your Estate.
+---
+
+
+With Schema Ingestion enabled, the following information gets sent to the EDB Postgres AI control plane and included within your Estate:
+
+* Schema names
+* Table names
+* Table types
+* Column names
+* Column types
+* Column default values
+* Column nullability
+* Table description (from comments)
+* Column description (from comments)
+
+!!! Note
+The schema ingestion process is secure and doesn't collect any data from the tables themselves. However, if the schema information contains personally identifiable information (PII), such as a customer name used as a column name, that PII is ingested into the Estate.
+!!!
+
diff --git a/advocacy_docs/edb-postgres-ai/console/getstarted.mdx b/advocacy_docs/edb-postgres-ai/console/getstarted.mdx
deleted file mode 100644
index a04cde879ed..00000000000
--- a/advocacy_docs/edb-postgres-ai/console/getstarted.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: EDB Postgres AI Console - Get started
-navTitle: Get started
-description: Get started with the EDB Postgres AI Console.
----
-
-The EDB Postgres® AI Console is a web-based user interface that provides a single pane of glass for managing and monitoring EDB Postgres AI Database Cloud Service and EDB Postgres AI databases. The EDB Postgres AI Console provides a unified view of the EDB Postgres AI Database Cloud Service and EDB Postgres AI databases, allowing users to manage and monitor their databases, users, and resources from a single interface.
-
-## Accessing the EDB Postgres AI Console
-
-To access the EDB Postgres AI Console, you will need to have an account with the EDB Postgres AI Database Cloud Service. If you do not have an account, you can sign up for a free trial at [https://www.enterprisedb.com/edb-postgres-ai](https://www.enterprisedb.com/edb-postgres-ai).
-
-Once you have an account, you can access the EDB Postgres AI Console by navigating to [https://portal.biganimal.com](https://portal.biganimal.com) and logging in with your EDB Postgres AI Database Cloud Service credentials.
diff --git a/advocacy_docs/edb-postgres-ai/console/getstarted/images/overview.svg b/advocacy_docs/edb-postgres-ai/console/getstarted/images/overview.svg
new file mode 100644
index 00000000000..88b1827dd15
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/getstarted/images/overview.svg
@@ -0,0 +1,5 @@
+
+
+
diff --git a/advocacy_docs/edb-postgres-ai/console/getstarted/index.mdx b/advocacy_docs/edb-postgres-ai/console/getstarted/index.mdx
new file mode 100644
index 00000000000..5baba1d60bd
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/console/getstarted/index.mdx
@@ -0,0 +1,82 @@
+---
+title: EDB Postgres AI Console - Get started
+navTitle: Get started
+description: Get started with the EDB Postgres AI Console.
+deepToC: true
+---
+
+The EDB Postgres® AI Console is a web-based user interface that provides a single pane of glass for managing and monitoring EDB Postgres AI Database Cloud Service, EDB Postgres AI databases, non-EDB Postgres databases such as AWS RDS, and any other Postgres installation. The EDB Postgres AI Console provides a unified view of these resources, allowing users to manage and monitor their databases, users, and resources from a single interface.
+
+## Accessing the EDB Postgres AI Console
+
+To access the Console, you need to have an account with the EDB Postgres AI Database Cloud Service. If you don't have an account, you can sign up for one at [https://www.enterprisedb.com/accounts/register/biganimal](https://www.enterprisedb.com/accounts/register/biganimal).
+
+Once you have your account, you can access the Console by navigating to [https://portal.biganimal.com](https://portal.biganimal.com) and logging in with your EDB Postgres AI Database Cloud Service credentials.
+
+## Start using the Console
+
+Once you have logged in to your EDB account, you’ll be able to go to the EDB Postgres AI Console.
+
+The Console opens by default on the overview page for your account, the “single pane of glass” for all your EDB Postgres AI services. You can get back to this overview page by selecting the EDB logo in the top right-hand corner.
+
+[![Console Overview](images/overview.svg)](images/overview.svg)
+
+## Exploring the Overview
+
+In the body of the overview page, you can see two summaries of the [__Project__](#projects) and [__Estate__](#estate) views.
+
+You can directly access the full __Project__ and __Estate__ views by selecting the **Projects** or **Estate** buttons next to the EDB logo.
+
+## Projects
+
+To create and manage resources, you need a project. A project associates organizationally related resources, which enables simpler administration and access.
+
+In the overview's view of projects, you can see the name of each project, an option to add tags, the number of clusters within the project and the number of users who have access to that project.
+
+Selecting **View Projects** takes you to the Projects view.
+
+If you are an owner of the organization, you can create new projects by selecting the **Create New Project** button.
+
+## Estate
+
+The Estate view is your everything view of every resource in every project: hosted and managed clusters, self-managed clusters, analytics lakehouses, and managed storage locations. It cuts through the Projects demarcation to give a single unified view of all your resources.
+
+Rather than being grouped into projects, the Estate overview is grouped by resource type.
+
+Each resource shows its type, the number of instances of that resource and, on the right of the pane, a graphical representation or breakdown of those instances.
+
+### EDB Postgres AI Clusters
+
+This is any Postgres cluster created by and managed by EDB Postgres AI. EDB Postgres AI can host these clusters on any supported cloud service provider (AWS, Azure, GCP), or it can host them using your own account on any cloud service provider.
+
+Also included in this view are EDB Postgres AI Lakehouse analytics clusters.
+
+The graphical view shows a color-coded snapshot of the clusters, along with a textual description of which cloud service providers are in use and how many clusters are Cloud Service Hosted by EDB or Managed but not hosted by EDB.
+
+The **Create New** button allows you to create a new Database cluster or Lakehouse analytics cluster in any of your projects. First, select between creating a Database cluster or a Lakehouse analytics node. If you only have one project, the creation operation automatically uses it. If you have more than one project, a menu of available projects pops up for you to select which project the cluster should appear in.
+
+Selecting the **EDB Postgres AI Clusters** title takes you to the __EDB Postgres AI Cluster__ pane of the full Estate view.
+
+### Self Managed Postgres
+
+Using an agent, you can include self-managed Postgres clusters installed both on-premises and in the cloud as part of your EDB Estate view. The agent collects metrics from an associated cluster and feeds them to the EDB Estate, and it's in this pane that the information appears.
+
+The **Configure Agent** button takes you through the steps needed to configure the Estate to receive data from an agent. See the [Agent](../agent) documentation for more details and, in particular, [Install Agent](../agent/install-agent) for how to install the agent on your platform.
+
+Selecting the **Self Managed Postgres** title takes you to the __Self Managed Postgres__ pane of the full Estate view.
+
+### Cloud Hosted Databases
+
+Cloud Hosted Databases currently displays all the AWS S3 buckets and RDS instances that are available in selected AWS accounts.
+
+The **Manage Access** button takes you through the steps required to enable the Estate to collect this information from AWS. See [Integrating AWS](../estate/integrating/integrate_aws) for more details.
+
+Selecting the **Cloud Hosted Databases** title takes you to the __Cloud Hosted Databases__ pane of the full Estate view.
+
+### Storage Locations
+
+Storage Locations, also known as Managed Storage Locations, are data repositories for EDB Postgres AI Analytics. You sync and migrate data to these locations for analysis from Postgres databases or S3 storage. The data is then optimized for fast query, aggregation, and analysis.
+
+The **Manage Locations** button takes you to the __Storage Locations__ view where you can search for, view, and add storage locations.
+
+Selecting the **Storage Locations** title takes you to the __Storage Locations__ pane of the full Estate view.
diff --git a/advocacy_docs/edb-postgres-ai/console/index.mdx b/advocacy_docs/edb-postgres-ai/console/index.mdx
index c94a6b16df0..4d074c88fc2 100644
--- a/advocacy_docs/edb-postgres-ai/console/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/console/index.mdx
@@ -3,6 +3,7 @@ title: EDB Postgres AI Console
navTitle: Console
indexCards: simple
iconName: Control
+description: How to manage your EDB Postgres AI environment with the EDB Postgres AI Console, Estate and Agent.
navigation:
- getstarted
- estate
diff --git a/advocacy_docs/edb-postgres-ai/databases/cloudservice.mdx b/advocacy_docs/edb-postgres-ai/databases/cloudservice.mdx
deleted file mode 100644
index 835212ecd5c..00000000000
--- a/advocacy_docs/edb-postgres-ai/databases/cloudservice.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: EDB Postgres AI Cloud Service
-navTitle: Cloud Service
-description: An introduction to the EDB Postgres AI Cloud Service and its features.
----
-
-The EDB Postgres® AI Cloud Service, formerly known as [BigAnimal](/biganimal/latest/), is an evolution of the service to offer a holistic platform which offers hybrid data estate management, observability, and AI capabilities.
-
-The EDB Postgres AI Cloud Service itself is a fully managed cloud service that provides a high-performance, scalable, and secure database platform for AI and machine learning workloads. It also allows provides the platform for EDB Postgres AI Analytics and EDB Postgres AI Machine Learning services.
-
-The service is built on the EDB Postgres Advanced Server and EDB Postgres Extended databases and is designed to help organizations accelerate the development and deployment of AI and machine learning applications.
-
diff --git a/advocacy_docs/edb-postgres-ai/databases/databases.mdx b/advocacy_docs/edb-postgres-ai/databases/databases.mdx
deleted file mode 100644
index 171adf5f6ed..00000000000
--- a/advocacy_docs/edb-postgres-ai/databases/databases.mdx
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: EDB Postgres AI databases
-navTitle: Databases
-description: Deploy EDB Postgres AI Databases on-premises with the EDB Postgres AI Estate and Agent components.
----
-
-EDB Postgres® databases are the core of the EDB Postgres AI platform. EDB Postgres Databases are available for self-managed deployment and on the EDB Postgres AI Cloud Service. Self-managed EDB Postgres databases can be integrated with the EDB Postgres AI Cloud Service estate and managed through a single pane of glass by installing the [EDB Postgres AI Agent](/edb-postgres-ai/console/agent).
-
-## EDB Postgres Advanced Server (EPAS)
-
-EDB Postgres Advanced Server is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. EDB Postgres Advanced Server is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. These include Oracle compatibility and transparent data encryption. EDB Postgres Advanced Server is available for self-managed deployment and on the EDB Postgres AI Cloud Service.
-
-* Read more about [EDB Postgres Advanced Server](/epas/latest/).
-
-## EDB Postgres Extended Server (PGE)
-
-EDB Postgres Extended Server is an enhanced version of PostgreSQL that is designed to meet the needs of large-scale, mission-critical enterprise workloads. PGE is built on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. This includes transparent data encryption. PGE is available for self-managed deployment and on the EDB Postgres AI Cloud Service.
-
-* Read more about [EDB Postgres Extended Server](/pge/latest/).
-
-## EDB Postgres Distributed (PGD)
-
-EDB Postgres Distributed is a high availability solution for EDB Postgres databases. PGD provides a distributed database environment that is designed to ensure high availability and fault tolerance for mission-critical workloads. PGD can be used with EPAS, PGE or PostgreSQL databases. PGD is available for self-managed deployment and on the EDB Postgres AI Cloud Service (as the Distributed High Availability option).
-
-* Read more about [EDB Postgres Distributed](/pgd/latest/).
diff --git a/advocacy_docs/edb-postgres-ai/databases/epas.mdx b/advocacy_docs/edb-postgres-ai/databases/epas.mdx
new file mode 100644
index 00000000000..6e0c7f7d5c1
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/databases/epas.mdx
@@ -0,0 +1,12 @@
+---
+title: EDB Postgres Advanced Server (EPAS)
+navTitle: EDB Postgres Advanced Server
+description: EDB Postgres Advanced Server is an enhanced version of PostgreSQL designed to meet the needs of large-scale, mission-critical enterprise workloads.
+---
+
+EDB Postgres Advanced Server is an enhanced version of PostgreSQL designed to meet the needs of large-scale, mission-critical enterprise workloads. EDB Postgres Advanced Server builds on the open source PostgreSQL database, and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments. These include Oracle compatibility and transparent data encryption.
+
+EDB Postgres Advanced Server is available for self-managed deployment and on the [EDB Postgres AI Cloud Service](../cloud-service/).
+
+* Read more about [EDB Postgres Advanced Server](/epas/latest/).
+
diff --git a/advocacy_docs/edb-postgres-ai/databases/index.mdx b/advocacy_docs/edb-postgres-ai/databases/index.mdx
index ff20568cef7..245363ff4dc 100644
--- a/advocacy_docs/edb-postgres-ai/databases/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/databases/index.mdx
@@ -3,17 +3,24 @@ title: EDB Postgres AI databases
navTitle: Databases
indexCards: simple
iconName: Database
+description: About the Postgres databases that are the core of the EDB Postgres AI platform.
navigation:
- - databases
- - cloudservice
+ - epas
+ - pge
+ - pgd
- options
---
-Building on decades of Postgres expertise, the EDB Postgres® databases are the core of the EDB Postgres AI platform. EDB Postgres Advanced Server can take on Oracle workloads, while EDB Postgres Extended Server is designed for large-scale, mission-critical enterprise workloads. EDB Postgres Distributed provides high availability and fault tolerance for mission-critical workloads.
-From here you can read more about the [databases](databases) that power EDB Postgres AI, and how they can be deployed on-premises with the EDB Postgres AI Estate and Agent components.
+Building on decades of Postgres expertise, the EDB Postgres® databases are the core of the EDB Postgres AI platform.
+* [EDB Postgres Advanced Server](epas) can take on Oracle workloads.
+* [EDB Postgres Extended Server](pge) is designed for large-scale, mission-critical enterprise workloads.
+* [EDB Postgres Distributed](pgd) provides distributed high availability and fault tolerance for mission-critical workloads using EPAS or PGE.
-You can also learn about the [EDB Postgres AI Cloud Service](cloudservice) and how it can be used to manage your database estate.
+There's also detailed information on [deployment options](options) for high availability and fault tolerance.
-Finally, there's an outline of some of the options available when deploying EDB Postgres databases.
+You can read more about where you can deploy them:
+ * hosted databases with the [EDB Postgres AI Cloud Service](../cloud-service)
+ * managed databases with the [EDB Postgres AI Cloud Service](../cloud-service)
+ * or self-managed in your own cloud or on-premises but visible in your [EDB Postgres AI Console](../console) using the [EDB Postgres AI Agent](../console/agent).
diff --git a/advocacy_docs/edb-postgres-ai/databases/options.mdx b/advocacy_docs/edb-postgres-ai/databases/options.mdx
index 3aaccfb577f..fdb2e66f880 100644
--- a/advocacy_docs/edb-postgres-ai/databases/options.mdx
+++ b/advocacy_docs/edb-postgres-ai/databases/options.mdx
@@ -1,5 +1,5 @@
---
-title: EDB Postgres AI databases - Deployment options
+title: EDB Postgres AI Databases - Deployment options
navTitle: Deployment options
description: High availability and other options available for EDB Postgres AI databases and on EDB Postgres AI Cloud Service.
deepToC: true
@@ -9,19 +9,36 @@ deepToC: true
### Single instance
-Single instance databases are great for development and testing, but for production workloads, you need to consider high availability and fault tolerance.
+Single instance databases are great for development and testing, but for
+production workloads, you need to consider high availability and fault
+tolerance. You can include single instance databases for test and development
+alongside your production high availability databases in the [EDB Postgres AI
+Cloud Service](../cloud-service).
### Primary/secondary replication
-Primary/Secondary replication is a common high availability solution for databases. In this configuration, a primary database server is responsible for processing read and write requests. A secondary database server is configured to replicate the primary database server. If the primary database server fails, the secondary database server can take over and become the primary database server.
+Primary/Secondary replication is a common high availability solution for
+databases. In this configuration, a primary database server is responsible for
+processing read and write requests. A secondary database server is set up to
+replicate the primary database server. If the primary database server fails, the
+secondary database server can take over and become the primary database server.
-This configuration provides fault tolerance and high availability in a particular location. This can be used with EDB Postgres® Advanced Server (EPAS) and EDB Postgres Extended Server (PGE).
+This configuration provides fault tolerance and high availability in a
+particular location. This is available with EDB Postgres® Advanced Server (EPAS)
+and EDB Postgres Extended Server (PGE).
-This is a standard configuration option on EDB Postgres AI Cloud Service.
+This is a standard configuration option on [EDB Postgres AI Cloud
+Service](../cloud-service).
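+The mechanics behind this option are standard Postgres physical streaming replication. As a rough sketch only (the hostnames and values here are illustrative, not EDB-specific settings), the pair is wired up along these lines:
+
+```ini
+# Primary server: postgresql.conf
+wal_level = replica          # emit enough WAL for a physical standby
+max_wal_senders = 5          # allow replication connections
+
+# Secondary server: postgresql.auto.conf
+# (pg_basebackup -R writes this, plus an empty standby.signal file)
+primary_conninfo = 'host=primary.example.com user=replicator'
+```
+
+With `standby.signal` present, the secondary starts in read-only recovery and continuously applies WAL from the primary; promoting it after a primary failure makes it the new primary.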
### Distributed high availability
-High availability is a critical requirement for mission-critical workloads. EDB Postgres Distributed (PGD) provides a distributed database environment that is designed to ensure high availability and fault tolerance for mission-critical workloads. PGD can be used with EPAS, PGE or PostgreSQL databases. PGD is available for self-managed deployment and on the EDB Postgres AI Cloud Service (as the Distributed High Availability option).
+High availability is a critical requirement for mission-critical workloads. EDB
+Postgres Distributed (PGD) provides a distributed database environment designed
+to ensure high availability and fault tolerance for mission-critical workloads.
+PGD can use EPAS, PGE, or PostgreSQL databases as the underlying replicated
+database. PGD is available for self-managed deployment and on the EDB Postgres
+AI Cloud Service (as the Distributed High Availability option).
-This is also a standard configuration option on EDB Postgres AI Cloud Service.
+This is also a standard configuration option on [EDB Postgres AI Cloud
+Service](../cloud-service).
diff --git a/advocacy_docs/edb-postgres-ai/databases/pgd.mdx b/advocacy_docs/edb-postgres-ai/databases/pgd.mdx
new file mode 100644
index 00000000000..c16047ef439
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/databases/pgd.mdx
@@ -0,0 +1,12 @@
+---
+title: EDB Postgres Distributed (PGD)
+navTitle: EDB Postgres Distributed
+description: EDB Postgres Distributed is a high availability solution for Postgres and EDB Postgres databases.
+---
+
+EDB Postgres Distributed is a high availability solution for EDB Postgres databases. PGD provides a distributed database environment designed to ensure high availability and fault tolerance for mission-critical workloads. PGD works with EPAS, PGE, or PostgreSQL databases.
+
+PGD is available for self-managed deployment and on the [EDB Postgres AI Cloud Service](../cloud-service) (as the Distributed High Availability option).
+
+* Read more about [EDB Postgres Distributed](/pgd/latest/).
+
diff --git a/advocacy_docs/edb-postgres-ai/databases/pge.mdx b/advocacy_docs/edb-postgres-ai/databases/pge.mdx
new file mode 100644
index 00000000000..efb31cdc987
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/databases/pge.mdx
@@ -0,0 +1,12 @@
+---
+title: EDB Postgres Extended Server (PGE)
+navTitle: EDB Postgres Extended Server
+description: EDB Postgres Extended Server is an enhanced version of PostgreSQL designed to meet the needs of large-scale, mission-critical enterprise workloads.
+---
+
+EDB Postgres Extended Server (PGE) is an enhanced version of PostgreSQL designed to meet the needs of large-scale, mission-critical enterprise workloads. PGE builds on the open source PostgreSQL database and includes additional enterprise-class features and capabilities that are critical for enterprise database deployments, such as transparent data encryption.
+
+EDB Postgres Extended Server is available for self-managed deployment and on the [EDB Postgres AI Cloud Service](../cloud-service/).
+
+* Read more about [EDB Postgres Extended Server](/pge/latest/).
+
diff --git a/advocacy_docs/edb-postgres-ai/index.mdx b/advocacy_docs/edb-postgres-ai/index.mdx
index 653809b878f..6f0682b5f22 100644
--- a/advocacy_docs/edb-postgres-ai/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/index.mdx
@@ -9,6 +9,7 @@ directoryDefaults:
navigation:
- overview
- console
+- cloud-service
- databases
- analytics
- ai-ml
diff --git a/advocacy_docs/edb-postgres-ai/overview/concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/concepts.mdx
deleted file mode 100644
index 712a5d5d54f..00000000000
--- a/advocacy_docs/edb-postgres-ai/overview/concepts.mdx
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: EDB Postgres AI overview - Concepts
-navTitle: Concepts
-description: A look at the concepts that underpin EDB Postgres AI.
----
-
-EDB Postgres® AI takes EDB's leading expertise in Postgres and expands the scope of Postgres to address modern challenges. From simplifying your database estate management to infusing AI deep into Postgres and putting it to work to bring all your data under one analytical eye.
-
-EDB Postgres AI is composed of multiple elements which come together to deliver a unified and powerful experience:
-
-## [EDB Postgres AI - Console](/edb-postgres-ai/console)
-* Providing the single pane of glass onto all your EDB Postgres AI operations, Console is there to manage your database landscape.
- * ### [Hybrid estate management](/edb-postgres-ai/console/estate)
- * EDB Postgres AI Cloud databases are automatically managed on the Console.
- * ### [EDB Postgres AI Agent](/edb-postgres-ai/console/agent)
- * On premises databases can be brought under one manageable Console view with the Agent enabling an unprecedented view of diverse deployments.
-## [EDB Postgres AI - Databases](/edb-postgres-ai/databases)
-* All of EDB's database expertise can be found in EDB Postgres Advanced Server and EDB Postgres Extended Server.
-* Oracle compatibility, transparent data encryption and more. They provide the data fabric on which EDB Postgres AI operates.
-* Combined with EDB Postgres Distributed, they can also provide a high availability environment for your data.
-* All of these components are available on the EDB Postgres AI Cloud Service, and managed through the EDB Postgres AI Console.
- * ### [EDB Postgres Advanced Server and EDB Postgres Extended Server](/edb-postgres-ai/databases/databases)
- * EDB Postgres Advanced Server and EDB Postgres Extended Server both provide mission critical capabilities for your data, with EDB Postgres Advanced Server also providing Oracle compatibility.
- * ### [EDB Postgres Distributed](/edb-postgres-ai/databases/options/)
- * High availability with an active-active mesh of Postgres instances, EDB Postgres Distributed provides a robust and scalable environment for your data.
- * ### [EDB Postgres AI Cloud Service](/edb-postgres-ai/databases/cloudservice)
- * Not just databases, but driven by databases, Cloud Service provides a global platform for delivering new elements of EDB Postgres AI efficiently and effectively.
-## [EDB Postgres AI - Lakehouse analytics](/edb-postgres-ai/analytics)
-* Filtering out the data noise and revealing insights and value, Lakehouse analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration. At the heart of Analytics is a custom built store for this data:
-* Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring your data in from the cold, ready to be analyzed.
-## [EDB Postgres AI - AI/ML](/edb-postgres-ai/ai-ml)
-* Postgres has proven its capability as a flexible data environment, and Vector data, the core of generative AI, is already infused into EDB Postgres AI providing a platform for a range of practical and effective AI/ML solutions. A technical preview of this capability is available for the pgai extension.
-## [EDB Postgres AI - Platforms and tools](/edb-postgres-ai/tools)
-* Postgres extensions are a source of its power and popularity, and are one of the categories that fall within this element of EDB Postgres AI.
-* Extensions sit alongside existing applications like Postgres Enterprise Manager, Barman, and Query Advisor as tools that allow you to leverage Postgres's capabilities.
-* Also within this element are EDB's Migration tools, Migration Toolkit and Migration Portal. The Migration Portal is among the first EDB tools to include embedded AI with an AI copilot that can assist users in developing migration strategies.
-
diff --git a/advocacy_docs/edb-postgres-ai/overview/guide.mdx b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx
similarity index 93%
rename from advocacy_docs/edb-postgres-ai/overview/guide.mdx
rename to advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx
index 287b2076acd..95d88aebbf6 100644
--- a/advocacy_docs/edb-postgres-ai/overview/guide.mdx
+++ b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx
@@ -1,6 +1,6 @@
---
-title: EDB Postgres AI overview - Guide
-navTitle: Guide
+title: EDB Postgres AI overview - Guide and Getting Started
+navTitle: Guide and Getting Started
description: What do you want to use EDB Postgres AI for? Start navigating the documentation here.
---
diff --git a/advocacy_docs/edb-postgres-ai/overview/index.mdx b/advocacy_docs/edb-postgres-ai/overview/index.mdx
index 0f5bfc7368c..d7401ea7adc 100644
--- a/advocacy_docs/edb-postgres-ai/overview/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/overview/index.mdx
@@ -3,13 +3,18 @@ title: EDB Postgres AI overview
navTitle: Overview
indexCards: simple
iconName: Earth
-deepToC: true
+description: An overview of the EDB Postgres AI platform and tools
+navigation:
+- overview-and-concepts
+- guide-and-getting-started
+- latest-release-news
---
-EDB Postgres® AI is a new era for EDB. With EDB Postgres AI, customers can now leverage EDB's enterprise-grade Postgres offerings to support not just their mission critical transactional workloads, but also their analytical and AI applications. This also means that, in addition to the core transactional database releases you have come to expect from EDB, we will now be delivering regular updates to our analytics, AI, and platform capabilities.
+EDB Postgres® AI is a new era for EDB. With EDB Postgres AI, customers can now leverage EDB's enterprise-grade Postgres offerings to support not just their mission-critical transactional workloads, but also their analytical and AI applications. This also means that, in addition to the core transactional database releases you have come to expect from EDB, EDB will now be delivering regular updates to the analytics, AI, and platform capabilities.
-In this overview section we will:
+This overview section will:
-* [Introduce the concepts that underpin EDB Postgres AI](concepts)
-* [Provide a guide to help you navigate the documentation](guide)
-* [Share the latest features released and updated in EDB Postgres AI](releasenotes)
+* [Introduce the concepts that underpin EDB Postgres AI](overview-and-concepts)
+* [Provide a guide to help you navigate the documentation](guide-and-getting-started)
+* [Share the latest features released and updated in EDB Postgres AI](latest-release-news)
+
diff --git a/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx b/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx
similarity index 74%
rename from advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx
rename to advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx
index fb885564e59..34fa7cdab9c 100644
--- a/advocacy_docs/edb-postgres-ai/overview/releasenotes.mdx
+++ b/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx
@@ -1,10 +1,15 @@
---
-title: EDB Postgres AI Overview - Release notes
-navTitle: Release notes
-description: The current features released and updated in EDB Postgres AI.
+title: EDB Postgres AI Overview - Latest release news
+navTitle: Latest release news
+description: The latest features released and updated in EDB Postgres AI.
+deepToC: true
---
-EDB Postgres® AI is a a new era for EDB. With EDB Postgres AI, customers can now
+**May 23, 2024**
+
+## Introducing EDB Postgres AI
+
+EDB Postgres® AI is a new era for EDB. With EDB Postgres AI, customers can now
leverage EDB's enterprise-grade Postgres offerings to support not just their
mission critical transactional workloads, but also their analytical and AI
applications. This also means that, in addition to the core transactional
@@ -53,23 +58,19 @@ more and enroll in the tech preview [here](https://info.enterprisedb.com/pgai-pr
## EDB platform updates
-### [EDB Postgres AI Platform Agent](/edb-postgres-ai/console) release and platform support
+### [EDB Postgres AI Platform Agent](/edb-postgres-ai/console/agent) release and platform support
+
+As a hybrid solution, EDB Postgres AI enables unified management and observability over both cloud-managed and self-managed Postgres databases. You can enable this by using the EDB Postgres AI Agent, which allows users to connect self-managed Postgres deployments to the EDB Postgres AI Console.
-As part of its initial release, the EDB Postgres AI agent enables users to
-connect self-managed Postgres deployments to the platform, to enable unified
-observability and management over hybrid data estates. Additionally, users will
-be provided with the Postgres database version and size (in MB) in the EDB
-Postgres AI Platform interface, with data collected from each database at a
-configurable level. Additionally, EDB Postgres All Platform is available on
-EDB-supported x86 Linux distros.
+Customers now have visibility of their self-managed Postgres databases, Amazon RDS Postgres databases, and EDB Postgres AI Cloud Service database clusters from a “single pane of glass” using the EDB Postgres AI Console, where they can observe metadata including OS and PG versions. Learn how to get started with the Agent in the [documentation](/edb-postgres-ai/console/agent).
## [EDB Postgres AI database](/edb-postgres-ai/databases) updates
### EDB database server updates
As part of EDB's support for the open source community's quarterly release
-schedule, we completed PGE and EDB Postgres Advanced Server merge updates from the latest upstream
-PostgreSQL, including the following:
+schedule, EDB has completed PGE and EDB Postgres Advanced Server merge updates from
+the latest upstream PostgreSQL, including the following:
| Database distributions | Versions supported |
|------------------------------|------------------------------------------------|
@@ -88,10 +89,10 @@ Distributed process, where client applications could only use the lead node to
route their application traffic via PGD Proxy, for both reads and writes,
potentially impacting performance during peak times.
-#### Adding DETACH CONCURRENTLY commands
-We now offer support for DETACH CONCURRENTLY commands for EDB Postgres
+#### Adding `DETACH CONCURRENTLY` commands
+Support is now available for `DETACH CONCURRENTLY` commands for EDB Postgres
Distributed (and all EDB database version types), which enables other SELECT
-queries to be executed on the parent table while the DETACH operation is
+queries to execute on the parent table while the DETACH operation is
underway.
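As an illustrative sketch of what this enables (the table and partition names here are hypothetical), a partition can now be detached without blocking reads of the parent:

```sql
-- Hypothetical partitioned table; names are illustrative only.
-- CONCURRENTLY runs outside a transaction block, waits for existing
-- queries using the partition, and doesn't block new SELECTs on the
-- parent while the detach is in progress.
ALTER TABLE measurements DETACH PARTITION measurements_2023 CONCURRENTLY;

-- Concurrent sessions can still read the parent during the operation.
SELECT count(*) FROM measurements;
```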
For all the Q2 EDB announcements, visit the [EDB blog](https://www.enterprisedb.com/blog/edb-postgres-ai-q2-release-highlights).
diff --git a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx
new file mode 100644
index 00000000000..1dedbbcf6e3
--- /dev/null
+++ b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx
@@ -0,0 +1,51 @@
+---
+title: EDB Postgres AI - Overview and Concepts
+navTitle: Overview and Concepts
+description: An overview of EDB Postgres AI and the concepts that underpin it
+deepToC: true
+---
+
+EDB Postgres® AI takes EDB's leading expertise in Postgres and expands the scope of Postgres to address modern challenges. From simplifying your database estate management to infusing AI deep into Postgres and putting it to work to bring all your data under one analytical eye.
+
+EDB Postgres AI is composed of multiple elements which come together to deliver a unified and powerful experience.
+
+
+## [EDB Postgres AI Console](/edb-postgres-ai/console)
+The Console is the gateway to EDB Postgres AI. Starting with a single pane of glass onto all your EDB Postgres AI operations, Console is there to manage your database landscape. It also lets you create and manage your databases, and provides a view of your databases' health and performance. Add to that the ability to bring up Lakehouse analytics and AI/ML capabilities, and you have the power of EDB Postgres AI at your fingertips.
+* **[EDB Postgres AI Estate](/edb-postgres-ai/console/estate)**
+ * EDB Postgres AI Cloud Service databases are automatically managed on the Console. The Estate can also monitor the health of AWS RDS and S3 resources, and with the Agent, on-premises databases appear in the same view.
+* **[EDB Postgres AI Agent](/edb-postgres-ai/console/agent)**
+ * Include on-premises databases under the Console's single pane of glass view with the Agent. The Agent is part of what gives EDB Postgres AI an unprecedented view of diverse deployments.
+
+## [EDB Postgres AI Cloud Service](/edb-postgres-ai/cloud-service)
+Not just databases, but driven by databases, Cloud Service provides a global platform for delivering new elements of EDB Postgres AI efficiently and effectively.
+* **[Cloud Service Hosted](/edb-postgres-ai/cloud-service/hosted)**
+ * The Cloud Service Hosted environment is where you can experience the full power of EDB Postgres AI, with all the elements of the platform available to you.
+* **[Cloud Service Managed](/edb-postgres-ai/cloud-service/managed)**
+ * Bring your own cloud account to EDB and let the Managed environment take care of deploying and managing your Postgres databases in your cloud account.
+
+## [EDB Postgres AI Databases](/edb-postgres-ai/databases)
+All of EDB's database expertise can be found in EDB Postgres Advanced Server and EDB Postgres Extended Server. Oracle compatibility, transparent data encryption and more. They provide the data fabric on which EDB Postgres AI operates. Combined with EDB Postgres Distributed, they can also provide a high availability environment for your data.
+
+All of these components are available on the EDB Postgres AI Cloud Service, and managed through the EDB Postgres AI Console.
+* **[EDB Postgres Advanced Server](/edb-postgres-ai/databases/epas)**
+ * EDB Postgres Advanced Server provides mission-critical capabilities for your data, along with Oracle compatibility.
+* **[EDB Postgres Extended Server](/edb-postgres-ai/databases/pge)**
+ * EDB Postgres Extended Server is built for large-scale, mission-critical enterprise workloads, providing a robust and scalable environment for your data.
+* **[EDB Postgres Distributed](/edb-postgres-ai/databases/pgd/)**
+ * Providing high availability through an active-active mesh of Postgres instances, EDB Postgres Distributed delivers a dynamic, resilient, and scalable environment for your data.
+
+## [EDB Postgres AI Lakehouse analytics](/edb-postgres-ai/analytics)
+Filtering out the data noise and revealing insights and value, Lakehouse analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration.
+
+* **[Lakehouse nodes](/edb-postgres-ai/analytics/lakehouse)**
+ * At the heart of Analytics is a custom built store for this data. Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring your data in from the cold, ready for analysis.
+
+## [EDB Postgres AI AI/ML](/edb-postgres-ai/ai-ml)
+* Postgres has proven its capability as a flexible data environment, and vector data, the core of generative AI, is already infused into EDB Postgres AI, providing a platform for a range of practical and effective AI/ML solutions. A technical preview of this capability is available for the pgai extension.
+
+## [EDB Postgres AI Platforms and tools](/edb-postgres-ai/tools)
+* Postgres extensions are a source of its power and popularity, and are one of the categories that fall within this element of EDB Postgres AI.
+* Extensions sit alongside existing applications like Postgres Enterprise Manager, Barman, and Query Advisor as tools that allow you to leverage Postgres's capabilities.
+* Also within this element are EDB's Migration tools, Migration Toolkit and Migration Portal. The Migration Portal is among the first EDB tools to include embedded AI with an AI copilot that can assist users in developing migration strategies.
+
diff --git a/advocacy_docs/edb-postgres-ai/tools/index.mdx b/advocacy_docs/edb-postgres-ai/tools/index.mdx
index 5f3a5094991..74b745e1ba2 100644
--- a/advocacy_docs/edb-postgres-ai/tools/index.mdx
+++ b/advocacy_docs/edb-postgres-ai/tools/index.mdx
@@ -1,8 +1,9 @@
---
-title: EDB Postgres AI - Tools
-navTitle: Tools
+title: EDB Postgres AI - Platform and tools
+navTitle: Platform and tools
indexCards: simple
iconName: Toolbox
+description: The EDB Postgres AI platform and tools that help you manage your organization's databases, wherever they are.
navigation:
- migration-and-ai
- management
@@ -11,5 +12,5 @@ navigation:
Tools - Everything you need to manage your EDB Postgres® AI databases, from migration to backup and recovery.
-EDB Postgres AI Tools is a set of tools, utilities and extensions that are designed to help you manage your EDB Postgres AI databases. From migration to backup and recovery, EDB Postgres AI Tools has you covered.
+EDB Postgres AI Tools is a set of tools, utilities and extensions designed to help you manage your EDB Postgres AI databases. From migration to backup and recovery, EDB Postgres AI Tools has you covered.
diff --git a/advocacy_docs/edb-postgres-ai/tools/management.mdx b/advocacy_docs/edb-postgres-ai/tools/management.mdx
index 5eeb0f6fbf4..bb4b256c94c 100644
--- a/advocacy_docs/edb-postgres-ai/tools/management.mdx
+++ b/advocacy_docs/edb-postgres-ai/tools/management.mdx
@@ -8,4 +8,4 @@ An introduction to the management tools of EDB Postgres® AI such as Postgres En
[PEM](/pem/latest/) is a comprehensive management tool for EDB Postgres Advanced Server and PostgreSQL databases. PEM provides database administrators with a graphical view of the server, allowing them to easily monitor and manage their databases. PEM also provides tools for database design, monitoring, and tuning, as well as tools for managing database objects, users, and roles.
-PEM is a web-based application that can be [accessed from any web browser](/pem/latest/pem_web_interface/). PEM provides a single pane of glass for managing multiple database servers, allowing administrators to easily monitor and manage their databases from a centralized location.
\ No newline at end of file
+PEM is a web-based application that you can [access from any web browser](/pem/latest/pem_web_interface/). PEM provides a single pane of glass for managing multiple database servers, allowing administrators to easily monitor and manage their databases from a centralized location.
\ No newline at end of file
diff --git a/src/components/left-nav.js b/src/components/left-nav.js
index 1bffeb3d8dc..1b985385cc1 100644
--- a/src/components/left-nav.js
+++ b/src/components/left-nav.js
@@ -14,8 +14,11 @@ const SectionHeading = ({ navTree, path, iconName }) => {
let myIconName = iconName || productIcon(path) || iconNames.DOTTED_BOX;
let className = "fill-orange me-3";
if (myIconName && myIconName.startsWith("edb_postgres_ai")) {
- className = "fill-black me-3";
+ className = "fill-aquamarine me-3";
+ } else if (myIconName && path.startsWith("/edb-postgres-ai/")) {
+ className = "fill-aquamarine me-3";
}
return (