diff --git a/docs/serverless/assistant/connect-to-openai.mdx b/docs/serverless/assistant/connect-to-openai.mdx
index 9946730295..57a24d97b1 100644
--- a/docs/serverless/assistant/connect-to-openai.mdx
+++ b/docs/serverless/assistant/connect-to-openai.mdx
@@ -36,7 +36,7 @@ The following video demonstrates these steps.
## Configure the OpenAI connector
-To integrate with ((kib)):
+Finally, configure the connector in ((kib)):
1. Log in to ((kib)).
2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**.
diff --git a/docs/serverless/assistant/connect-to-vertex.mdx b/docs/serverless/assistant/connect-to-vertex.mdx
new file mode 100644
index 0000000000..5374db1439
--- /dev/null
+++ b/docs/serverless/assistant/connect-to-vertex.mdx
@@ -0,0 +1,67 @@
+---
+slug: /serverless/security/connect-to-google-vertex
+title: Connect to Google Vertex AI
+description: Set up a Google Vertex AI LLM connector.
+tags: ["security", "overview", "get-started"]
+status: in review
+---
+
+This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to use Vertex AI's large language models (LLMs) within ((elastic-sec)). You'll first need to enable the Vertex AI API, then create a service account and generate an API key, and finally configure the connector in your ((elastic-sec)) project.
+
+
+Before continuing, you should have an active project in one of Google Vertex AI's [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
+
+
+## Enable the Vertex AI API
+
+1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**.
+2. If you're new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**.
+
+The following video demonstrates these steps.
+
+
+
+
+For more information about enabling the Vertex AI API, refer to [Google's documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).
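+
+If you prefer the command line, the same API can also be enabled with the `gcloud` CLI (this assumes the CLI is installed and authenticated, with your project selected):
+
+```shell
+# Enable the Vertex AI API for the currently active project
+gcloud services enable aiplatform.googleapis.com
+```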
+
+
+## Create a Vertex AI service account
+
+1. In the GCP console, navigate to **APIs & Services → Library**.
+2. Search for **Vertex AI API**, select it, and click **MANAGE**.
+3. In the left menu, navigate to **Credentials**, then click **+ CREATE CREDENTIALS** and select **Service account**.
+4. Name the new service account, then click **CREATE AND CONTINUE**.
+5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**.
+6. Click **Done**.
+
+The following video demonstrates these steps.
+
+
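+The same service account can be created with the `gcloud` CLI; the project and account names below are placeholders, not values from this guide:
+
+```shell
+# Create the service account (replace vertex-sa and my-project with your own names)
+gcloud iam service-accounts create vertex-sa --project=my-project
+
+# Grant the Vertex AI User role on the project
+gcloud projects add-iam-policy-binding my-project \
+  --member="serviceAccount:vertex-sa@my-project.iam.gserviceaccount.com" \
+  --role="roles/aiplatform.user"
+```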
+
+## Generate an API key
+
+1. Return to Vertex AI's **Credentials** menu and click **Manage service accounts**.
+2. Search for the service account you just created, select it, then click the link that appears under **Email**.
+3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**.
+4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure.
+
+The following video demonstrates these steps.
+
+
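+If you use the `gcloud` CLI, the JSON key can also be created from the command line (placeholder names again):
+
+```shell
+# Create and download a JSON key for the service account
+gcloud iam service-accounts keys create vertex-key.json \
+  --iam-account=vertex-sa@my-project.iam.gserviceaccount.com
+```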
+
+## Configure the Google Gemini connector
+
+Finally, configure the connector in ((kib)):
+
+1. Log in to ((kib)).
+2. Navigate to **Stack Management → Connectors → Create Connector → Google Gemini**.
+3. Name your connector to help keep track of the model version you are using.
+4. Under **URL**, enter the URL for your region.
+5. Enter your **GCP Region** and **GCP Project ID**.
+6. Under **Default model**, specify either `gemini-1.5-pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models).
+7. Under **Authentication**, enter your API key.
+8. Click **Save**.
+
+The following video demonstrates these steps.
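+
+Connectors can also be created through the Kibana connector HTTP API instead of the UI. The request below is a sketch only; the exact `config` and `secrets` field names for the Google Gemini connector type are assumptions and should be verified against the Kibana connector API reference before use:
+
+```shell
+# Create a Google Gemini connector via the Kibana API
+# (connector field names here are assumptions to verify against the API docs)
+curl -X POST "${KIBANA_URL}/api/actions/connector" \
+  -H "kbn-xsrf: true" \
+  -H "Content-Type: application/json" \
+  -u "${KIBANA_USER}:${KIBANA_PASSWORD}" \
+  -d '{
+    "name": "Google Gemini (gemini-1.5-pro)",
+    "connector_type_id": ".gemini",
+    "config": {
+      "apiUrl": "https://us-central1-aiplatform.googleapis.com",
+      "gcpRegion": "us-central1",
+      "gcpProjectID": "my-project",
+      "defaultModel": "gemini-1.5-pro"
+    },
+    "secrets": {
+      "credentialsJson": "<contents of the downloaded JSON key>"
+    }
+  }'
+```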
+
+
\ No newline at end of file
diff --git a/docs/serverless/assistant/llm-connector-guides.mdx b/docs/serverless/assistant/llm-connector-guides.mdx
index 38e9406727..fcdedd575a 100644
--- a/docs/serverless/assistant/llm-connector-guides.mdx
+++ b/docs/serverless/assistant/llm-connector-guides.mdx
@@ -12,4 +12,6 @@ Setup guides are available for the following LLM providers:
*
*
-*
\ No newline at end of file
+*
+*
+
diff --git a/docs/serverless/assistant/usecase-attack-disc-ai-assistant-incident-reporting.mdx b/docs/serverless/assistant/usecase-attack-disc-ai-assistant-incident-reporting.mdx
index 65d0639692..b9104ee812 100644
--- a/docs/serverless/assistant/usecase-attack-disc-ai-assistant-incident-reporting.mdx
+++ b/docs/serverless/assistant/usecase-attack-disc-ai-assistant-incident-reporting.mdx
@@ -33,6 +33,7 @@ From a discovery on the Attack discovery page, click **View in AI Assistant** to
AI Assistant can quickly compile essential data and provide suggestions to help you generate an incident report and plan an effective response. You can ask it to provide relevant data or answer questions, such as “How can I remediate this threat?” or “What ((esql)) query would isolate actions taken by this user?”
+
The image above shows an ((esql)) query generated by AI Assistant in response to a user prompt. Learn more about .
diff --git a/docs/serverless/serverless-security.docnav.json b/docs/serverless/serverless-security.docnav.json
index 3f493de8db..d56e4e5b22 100644
--- a/docs/serverless/serverless-security.docnav.json
+++ b/docs/serverless/serverless-security.docnav.json
@@ -38,6 +38,9 @@
},
{
"slug": "/serverless/security/connect-to-openai"
+ },
+ {
+ "slug": "/serverless/security/connect-to-google-vertex"
}
]
},