From 00df0454019f7639c803b7ba35c8756188d39036 Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein <91905639+benironside@users.noreply.github.com> Date: Mon, 10 Jun 2024 17:58:18 -0400 Subject: [PATCH 1/2] Update OpenAI and Azure OpenAI connector setup guides (#5358) * Update OpenAI connector setup guide * Updates Azure OpenAI guide * adds serverless changes * updates verbiage * minor update * Update docs/serverless/assistant/connect-to-openai.mdx Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> * Update docs/assistant/connect-to-openai.asciidoc Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> * Update docs/assistant/azure-openai-setup.asciidoc Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> * Update docs/serverless/assistant/connect-to-azure-openai.mdx Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> --------- Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> (cherry picked from commit ee612dca171a96701aa7c50dd43c77af5fe83198) # Conflicts: # docs/serverless/assistant/connect-to-azure-openai.mdx # docs/serverless/assistant/connect-to-openai.mdx --- docs/assistant/azure-openai-setup.asciidoc | 2 +- docs/assistant/connect-to-openai.asciidoc | 3 +- .../assistant/connect-to-azure-openai.mdx | 84 +++++++++++++++++++ .../assistant/connect-to-openai.mdx | 54 ++++++++++++ 4 files changed, 141 insertions(+), 2 deletions(-) create mode 100644 docs/serverless/assistant/connect-to-azure-openai.mdx create mode 100644 docs/serverless/assistant/connect-to-openai.mdx diff --git a/docs/assistant/azure-openai-setup.asciidoc b/docs/assistant/azure-openai-setup.asciidoc index 873428a645..658f237b7b 100644 --- a/docs/assistant/azure-openai-setup.asciidoc +++ b/docs/assistant/azure-openai-setup.asciidoc @@ -72,7 +72,7 @@ Now, set up the Azure OpenAI model: ** If you select `gpt-4`, set the **Model 
version** to `0125-Preview`. ** If you select `gpt-4-32k`, set the **Model version** to `default`. + -IMPORTANT: The models available to you will depend on https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability[region availability]. For best results, use `GPT 4 Turbo version 0125-preview` or `GPT 4-32k` with the maximum Tokens-Per-Minute (TPM) capacity. In most regions, the GPT 4 Turbo model offers the largest supported context window. +IMPORTANT: The models available to you depend on https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability[region availability]. For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the <>. + . Under **Deployment type**, select **Standard**. . Name your deployment. diff --git a/docs/assistant/connect-to-openai.asciidoc b/docs/assistant/connect-to-openai.asciidoc index 8a0dbd003f..830f657d23 100644 --- a/docs/assistant/connect-to-openai.asciidoc +++ b/docs/assistant/connect-to-openai.asciidoc @@ -12,7 +12,7 @@ This page provides step-by-step instructions for setting up an OpenAI connector Before creating an API key, you must choose a model. Refer to the https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4[OpenAI docs] to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you'll need it when configuring {kib}. -NOTE: `GPT-4 Turbo` offers increased performance. `GPT-4` and `GPT-3.5` are also supported. +NOTE: `GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the <>. [discrete] === Create an API key @@ -51,6 +51,7 @@ To integrate with {kib}: . 
Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. . Under **Select an OpenAI provider**, choose **OpenAI**. . The **URL** field can be left as default. +. Under **Default model**, specify which https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4[model] you want to use. . Paste the API key that you created into the corresponding field. . Click **Save**. diff --git a/docs/serverless/assistant/connect-to-azure-openai.mdx b/docs/serverless/assistant/connect-to-azure-openai.mdx new file mode 100644 index 0000000000..ea3505d895 --- /dev/null +++ b/docs/serverless/assistant/connect-to-azure-openai.mdx @@ -0,0 +1,84 @@ +--- +id: serverlessSecurityConnectAzureOpenAI +slug: /serverless/security/connect-to-azure-openai +title: Connect to Azure OpenAI +description: Set up an Azure OpenAI LLM connector. +tags: ["security", "overview", "get-started"] +status: in review +--- + +# Connect to Azure OpenAI + +This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within ((kib)). You'll first need to configure Azure, then configure the connector in ((kib)). + +## Configure Azure + +### Configure a deployment + +First, set up an Azure OpenAI deployment: + +1. Log in to the Azure console and search for Azure OpenAI. +2. In **Azure AI services**, select **Create**. +3. For the **Project Details**, select your subscription and resource group. If you don't have a resource group, select **Create new** to make one. +4. For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`. +5. Select the **Standard** pricing tier, then click **Next**. +6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. +7. Review your deployment settings, then click **Create**. 
When complete, select **Go to resource**. + +The following video demonstrates these steps. + + + + +### Configure keys + +Next, create access keys for the deployment: + +1. From within your Azure OpenAI deployment, select **Click here to manage keys**. +2. Store your keys in a secure location. + +The following video demonstrates these steps. + + + + +### Configure a model + +Now, set up the Azure OpenAI model: + +1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**. +2. On the **Deployments** page, select **Create new deployment**. +3. Under **Select a model**, choose `gpt-4` or `gpt-4-32k`. +4. Set the **Model version** to `0125-Preview` for `gpt-4` or `default` for `gpt-4-32k`. +5. Under **Deployment type**, select **Standard**. +6. Name your deployment. +7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits. +8. Click **Create**. + + +The models available to you will depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the . + + +The following video demonstrates these steps. + + + +## Configure Elastic AI Assistant + +Finally, configure the connector in ((kib)): + +1. Log in to ((kib)). +2. Go to **Stack Management → Connectors → Create connector → OpenAI**. +3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`. +4. For **Select an OpenAI provider**, choose **Azure OpenAI**. +5. Update the **URL** field. We recommend doing the following: + - Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays. 
+ - Select **View code**, then from the drop-down, change the **Sample code** to `Curl`. + - Highlight and copy the URL without the quotes, then paste it into the **URL** field in ((kib)). + - (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually. +6. Under **API key**, enter one of your API keys. +7. Click **Save & test**, then click **Run**. + +The following video demonstrates these steps. + + diff --git a/docs/serverless/assistant/connect-to-openai.mdx b/docs/serverless/assistant/connect-to-openai.mdx new file mode 100644 index 0000000000..8fcfbdcb6a --- /dev/null +++ b/docs/serverless/assistant/connect-to-openai.mdx @@ -0,0 +1,54 @@ +--- +id: serverlessSecurityConnectOpenAI +slug: /serverless/security/connect-to-openai +title: Connect to OpenAI +description: Set up an OpenAI LLM connector. +tags: ["security", "overview", "get-started"] +status: in review +--- + +# Connect to OpenAI + +This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI's large language models (LLMs) within ((kib)). You'll first need to create an OpenAI API key, then configure the connector in ((kib)). + +## Configure OpenAI + +### Select a model + +Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you'll need it when configuring ((kib)). + + +`GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the . + + +### Create an API key + +To generate an API key: + +1. Log in to the OpenAI platform and navigate to **API keys**. +2. Select **Create new secret key**. +3. 
Name your key, select an OpenAI project, and set the desired permissions. +4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen. + +The following video demonstrates these steps. + + + + +## Configure the OpenAI connector + +To integrate with ((kib)): + +1. Log in to ((kib)). +2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**. +3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. +4. Under **Select an OpenAI provider**, choose **OpenAI**. +5. The **URL** field can be left as default. +6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use. +7. Paste the API key that you created into the corresponding field. +8. Click **Save**. + +The following video demonstrates these steps. + + + From 0d6d5e120fd6fadb18c5ecb10f0c5c82afce8333 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Mon, 10 Jun 2024 21:59:41 +0000 Subject: [PATCH 2/2] Delete docs/serverless directory and its contents --- .../assistant/connect-to-azure-openai.mdx | 84 ------------------- .../assistant/connect-to-openai.mdx | 54 ------------ 2 files changed, 138 deletions(-) delete mode 100644 docs/serverless/assistant/connect-to-azure-openai.mdx delete mode 100644 docs/serverless/assistant/connect-to-openai.mdx diff --git a/docs/serverless/assistant/connect-to-azure-openai.mdx b/docs/serverless/assistant/connect-to-azure-openai.mdx deleted file mode 100644 index ea3505d895..0000000000 --- a/docs/serverless/assistant/connect-to-azure-openai.mdx +++ /dev/null @@ -1,84 +0,0 @@ ---- -id: serverlessSecurityConnectAzureOpenAI -slug: /serverless/security/connect-to-azure-openai -title: Connect to Azure OpenAI -description: Set up an Azure OpenAI LLM connector. 
-tags: ["security", "overview", "get-started"] -status: in review ---- - -# Connect to Azure OpenAI - -This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within ((kib)). You'll first need to configure Azure, then configure the connector in ((kib)). - -## Configure Azure - -### Configure a deployment - -First, set up an Azure OpenAI deployment: - -1. Log in to the Azure console and search for Azure OpenAI. -2. In **Azure AI services**, select **Create**. -3. For the **Project Details**, select your subscription and resource group. If you don't have a resource group, select **Create new** to make one. -4. For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`. -5. Select the **Standard** pricing tier, then click **Next**. -6. Configure your network settings, click **Next**, optionally add tags, then click **Next**. -7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**. - -The following video demonstrates these steps. - - - - -### Configure keys - -Next, create access keys for the deployment: - -1. From within your Azure OpenAI deployment, select **Click here to manage keys**. -2. Store your keys in a secure location. - -The following video demonstrates these steps. - - - - -### Configure a model - -Now, set up the Azure OpenAI model: - -1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**. -2. On the **Deployments** page, select **Create new deployment**. -3. Under **Select a model**, choose `gpt-4` or `gpt-4-32k`. -4. Set the **Model version** to `0125-Preview` for `gpt-4` or `default` for `gpt-4-32k`. -5. Under **Deployment type**, select **Standard**. -6. Name your deployment. -7. Slide the **Tokens per Minute Rate Limit** to the maximum. 
The following example supports 80,000 TPM, but other regions might support higher limits. -8. Click **Create**. - - -The models available to you will depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the . - - -The following video demonstrates these steps. - - - -## Configure Elastic AI Assistant - -Finally, configure the connector in ((kib)): - -1. Log in to ((kib)). -2. Go to **Stack Management → Connectors → Create connector → OpenAI**. -3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`. -4. For **Select an OpenAI provider**, choose **Azure OpenAI**. -5. Update the **URL** field. We recommend doing the following: - - Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays. - - Select **View code**, then from the drop-down, change the **Sample code** to `Curl`. - - Highlight and copy the URL without the quotes, then paste it into the **URL** field in ((kib)). - - (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually. -6. Under **API key**, enter one of your API keys. -7. Click **Save & test**, then click **Run**. - -The following video demonstrates these steps. 
- - diff --git a/docs/serverless/assistant/connect-to-openai.mdx b/docs/serverless/assistant/connect-to-openai.mdx deleted file mode 100644 index 8fcfbdcb6a..0000000000 --- a/docs/serverless/assistant/connect-to-openai.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -id: serverlessSecurityConnectOpenAI -slug: /serverless/security/connect-to-openai -title: Connect to OpenAI -description: Set up an OpenAI LLM connector. -tags: ["security", "overview", "get-started"] -status: in review ---- - -# Connect to OpenAI - -This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI's large language models (LLMs) within ((kib)). You'll first need to create an OpenAI API key, then configure the connector in ((kib)). - -## Configure OpenAI - -### Select a model - -Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you'll need it when configuring ((kib)). - - -`GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the . - - -### Create an API key - -To generate an API key: - -1. Log in to the OpenAI platform and navigate to **API keys**. -2. Select **Create new secret key**. -3. Name your key, select an OpenAI project, and set the desired permissions. -4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen. - -The following video demonstrates these steps. - - - - -## Configure the OpenAI connector - -To integrate with ((kib)): - -1. Log in to ((kib)). -2. Navigate to **Stack Management → Connectors → Create Connector → OpenAI**. -3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using. 
-4. Under **Select an OpenAI provider**, choose **OpenAI**. -5. The **URL** field can be left as default. -6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use. -7. Paste the API key that you created into the corresponding field. -8. Click **Save**. - -The following video demonstrates these steps. - - -
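A note on the **URL** step covered in the Azure OpenAI guide above: the value copied from Azure AI Studio's **View code** sample and pasted into the connector's **URL** field generally follows Azure OpenAI's chat-completions pattern. A minimal sketch of that pattern follows; the resource and deployment names are hypothetical placeholders, and the `api-version` value is an assumption — always copy the exact URL from the **View code** sample as the guide describes.

```python
# Sketch of the Azure OpenAI chat-completions URL pattern that the Kibana
# connector's URL field expects. All names below are hypothetical examples;
# the api-version is an assumption and should match the one shown in the
# "View code" sample for your deployment.
def azure_openai_url(resource: str, deployment: str,
                     api_version: str = "2024-02-01") -> str:
    # https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<version>
    return (
        f"https://{resource}.openai.azure.com"
        f"/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

print(azure_openai_url("example-deployment-openai", "my-gpt-4o"))
```

If the URL you copied from the playground does not match this shape, prefer the copied URL; regions and deployment types can change the endpoint details.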