diff --git a/docs/reference-docs/ai-tasks/llm-chat-complete.md b/docs/reference-docs/ai-tasks/llm-chat-complete.md
new file mode 100644
index 00000000..c7bda7ae
--- /dev/null
+++ b/docs/reference-docs/ai-tasks/llm-chat-complete.md
@@ -0,0 +1,106 @@
+---
+sidebar_position: 10
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# LLM Chat Complete
+
+A system task to complete a chat query. It is designed to direct the model's behavior accurately, preventing any deviation from the objective.
+
+## Definitions
+
+```json
+ {
+ "name": "llm_chat_complete",
+ "taskReferenceName": "llm_chat_complete_ref",
+ "inputParameters": {
+ "llmProvider": "openai",
+ "model": "gpt-4",
+ "instructions": "your-prompt-template",
+ "messages": [
+ {
+ "role": "user",
+ "message": "${workflow.input.text}"
+ }
+ ],
+ "temperature": 0.1,
+ "topP": 0.2,
+ "maxTokens": 4,
+ "stopWords": "and"
+ },
+ "type": "LLM_CHAT_COMPLETE"
+ }
+```
+
+## Input Parameters
+
+| Parameter | Description |
+| --------- | ----------- |
+| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.<br/>**Note:** If you haven’t configured your AI/LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](https://orkes.io/content/category/integrations/ai-llm). |
+| model | Choose from the available language models for the chosen LLM provider. You can only choose models to which you have access.<br/>For example, if your LLM provider is Azure OpenAI and you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
+| instructions | Set the ground rules/instructions for the chat so the model responds only to specific queries and does not deviate from the objective.<br/>Under this field, choose the AI prompt created. You can only use prompts to which you have access.<br/>**Note:** If you haven’t created an AI prompt for your language model, refer to this documentation on [how to create AI Prompts in Orkes Conductor and provide access to required groups](https://orkes.io/content/reference-docs/ai-tasks/prompt-template). |
+| messages | Choose the role and message to complete the chat query.<ul><li>Under *Role*, choose the required role for the chat completion. It can take values such as *user*, *assistant*, *system*, or *human*.<ul><li>The roles *user* and *human* represent the user asking questions or initiating the conversation.</li><li>The roles *assistant* and *system* refer to the model responding to the user queries.</li></ul></li><li>Under *Message*, provide the corresponding input. It can also be [passed as variables](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor).</li></ul> |
+| temperature | A parameter to control the randomness of the model’s output. Higher temperatures, such as 1.0, make the output more random and creative, while a lower value makes the output more deterministic and focused.<br/>Example: If you’re using a text blurb as input and want to categorize it based on its content type, opt for a lower temperature setting. Conversely, if you’re providing text inputs and intend to generate content such as emails or blogs, use a higher temperature setting. |
+| stopWords | Provide the stop words to be omitted during the text generation process.<br/>In LLMs, stop words may be filtered out or given less importance during the text generation process to ensure that the generated text is coherent and contextually relevant. |
+| topP | Another parameter to control the randomness of the model’s output. This parameter defines a probability threshold and then chooses tokens whose cumulative probability exceeds this threshold.<br/>For example, imagine you want to complete the sentence: “She walked into the room and saw a ______.” The top four words the LLM would consider, based on the highest probabilities, would be:<ul><li>Cat - 35%</li><li>Dog - 25%</li><li>Book - 15%</li><li>Chair - 10%</li></ul>If you set the top-p parameter to 0.70, the AI considers tokens until their cumulative probability reaches or exceeds 70%. Here’s how it works:<ul><li>Adding “Cat” (35%) brings the cumulative probability to 35%.</li><li>Adding “Dog” (25%) brings it to 60%.</li><li>Adding “Book” (15%) brings it to 75%.</li></ul>At this point, the cumulative probability is 75%, exceeding the set top-p value of 70%. Therefore, the AI randomly selects one of the tokens “Cat,” “Dog,” or “Book” to complete the sentence, because these tokens collectively account for approximately 75% of the likelihood. |
+| maxTokens | The maximum number of tokens to be generated by the LLM and returned as part of the result. A token is approximately four characters. |
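+
+For multi-turn conversations, the *messages* array can carry alternating roles, with each message [passed as a variable](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor). A minimal sketch (the workflow input names here are placeholders for illustration):
+
+```json
+"messages": [
+  {
+    "role": "user",
+    "message": "${workflow.input.firstQuestion}"
+  },
+  {
+    "role": "assistant",
+    "message": "${workflow.input.previousAnswer}"
+  },
+  {
+    "role": "user",
+    "message": "${workflow.input.followUpQuestion}"
+  }
+]
+```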
+
+## Output Parameters
+
+The task output displays the chat completion generated by the LLM.
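+
+As a rough sketch, the completion text is returned in the task output; the exact field name may vary by Conductor version, so treat the shape below as an assumption rather than a guarantee:
+
+```json
+{
+  "result": "The chat completion text generated by the LLM."
+}
+```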
+
+## Examples
+
+1. Add task type **LLM Chat Complete**.
+2. Choose the LLM provider, model & prompt template.
+3. Provide the input parameters.
+
+```json
+{
+ "name": "llm_chat_complete",
+ "taskReferenceName": "llm_chat_complete_ref",
+ "inputParameters": {
+ "llmProvider": "openai",
+ "model": "gpt-4",
+ "instructions": "your-prompt-template",
+ "messages": [
+ {
+ "role": "user",
+ "message": "${workflow.input.text}"
+ }
+ ],
+ "temperature": 0.1,
+ "topP": 0.2,
+ "maxTokens": 4,
+ "stopWords": "and"
+ },
+ "type": "LLM_CHAT_COMPLETE"
+ }
+```
+
+
\ No newline at end of file
diff --git a/docs/reference-docs/ai-tasks/llm-generate-embeddings.md b/docs/reference-docs/ai-tasks/llm-generate-embeddings.md
index 690fb32e..2eb40d61 100644
--- a/docs/reference-docs/ai-tasks/llm-generate-embeddings.md
+++ b/docs/reference-docs/ai-tasks/llm-generate-embeddings.md
@@ -29,7 +29,7 @@ A system task to generate embeddings from the input data provided. Embeddings ar
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.
**Note**:If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](/content/category/integrations/ai-llm).|
| model | Choose from the available language model for the chosen LLM provider. You can only choose models for which you have access.
For example, If your LLM provider is Azure Open AI & you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
diff --git a/docs/reference-docs/ai-tasks/llm-get-document.md b/docs/reference-docs/ai-tasks/llm-get-document.md
index 0bb5c0b9..8cd4ebe2 100644
--- a/docs/reference-docs/ai-tasks/llm-get-document.md
+++ b/docs/reference-docs/ai-tasks/llm-get-document.md
@@ -24,7 +24,7 @@ A system task to retrieve the content of the document provided and use it for fu
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| url | Provide the URL of the document to be retrieved.
Check out our documentation on [how to pass parameters to tasks](https://orkes.io/content/developer-guides/passing-inputs-to-task-in-conductor). |
| mediaType | Select the media type of the file to be retrieved. Currently, supported media types include:- application/pdf
- text/html
- text/plain
- json
|
diff --git a/docs/reference-docs/ai-tasks/llm-get-embeddings.md b/docs/reference-docs/ai-tasks/llm-get-embeddings.md
index 8994dcde..acd444a9 100644
--- a/docs/reference-docs/ai-tasks/llm-get-embeddings.md
+++ b/docs/reference-docs/ai-tasks/llm-get-embeddings.md
@@ -26,7 +26,7 @@ A system task to get the numerical vector representations of words, phrases, sen
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.
**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.
Namespaces are separate isolated environments within the database to manage and organize vector data effectively.
**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
diff --git a/docs/reference-docs/ai-tasks/llm-index-document.md b/docs/reference-docs/ai-tasks/llm-index-document.md
index 037c4afb..0d1a2ff7 100644
--- a/docs/reference-docs/ai-tasks/llm-index-document.md
+++ b/docs/reference-docs/ai-tasks/llm-index-document.md
@@ -31,7 +31,7 @@ A system task to index the provided document into a vector database that can be
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.
**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.
Namespaces are separate isolated environments within the database to manage and organize vector data effectively.
**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
diff --git a/docs/reference-docs/ai-tasks/llm-index-text.md b/docs/reference-docs/ai-tasks/llm-index-text.md
index 985c7d20..3277de4f 100644
--- a/docs/reference-docs/ai-tasks/llm-index-text.md
+++ b/docs/reference-docs/ai-tasks/llm-index-text.md
@@ -29,7 +29,7 @@ A system task to index the provided text into a vector space that can be efficie
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.
**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.
Namespaces are separate isolated environments within the database to manage and organize vector data effectively.
**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
diff --git a/docs/reference-docs/ai-tasks/llm-search-index.md b/docs/reference-docs/ai-tasks/llm-search-index.md
index e27887b6..943e9222 100644
--- a/docs/reference-docs/ai-tasks/llm-search-index.md
+++ b/docs/reference-docs/ai-tasks/llm-search-index.md
@@ -30,7 +30,7 @@ For example, in a recommendation system, a user might issue a query to find prod
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| vectorDB | Choose the required vector database.
**Note**:If you haven’t configured the vector database on your Orkes console, navigate to the Integrations tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](/content/category/integrations/vector-databases). |
| namespace | Choose from the available namespace configured within the chosen vector database.
Namespaces are separate isolated environments within the database to manage and organize vector data effectively.
**Note**:Namespace field is applicable only for Pinecone integration and is not applicable to Weaviate integration.|
diff --git a/docs/reference-docs/ai-tasks/llm-store-embeddings.md b/docs/reference-docs/ai-tasks/llm-store-embeddings.md
index 5d6eb3af..62b455f1 100644
--- a/docs/reference-docs/ai-tasks/llm-store-embeddings.md
+++ b/docs/reference-docs/ai-tasks/llm-store-embeddings.md
@@ -28,7 +28,7 @@ A system task responsible for storing the generated embeddings produced by the [
## Input Parameters
-| Attribute | Decsription |
+| Parameter | Description |
| ---------- | ----------- |
| vectorDB | Choose the vector database to which the data is to be stored.
**Note**: If you haven’t configured the vector database on your Orkes console, navigate to the **_Integrations_** tab and configure your required provider. Refer to this doc on [how to integrate Vector Databases with Orkes console](https://orkes.io/content/category/integrations/vector-databases). |
| index | Choose the index in your vector database where the text or data is to be stored.
**Note**: For Weaviate integration, this field refers to the class name, while in Pinecone integration, it denotes the index name itself. |
diff --git a/docs/reference-docs/ai-tasks/llm-text-complete.md b/docs/reference-docs/ai-tasks/llm-text-complete.md
index fad03124..cb64e3e6 100644
--- a/docs/reference-docs/ai-tasks/llm-text-complete.md
+++ b/docs/reference-docs/ai-tasks/llm-text-complete.md
@@ -37,7 +37,7 @@ A system task to predict or generate the next phrase or words in a given text ba
## Input Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| llmProvider | Choose the required LLM provider. You can only choose providers to which you have access for at least one model from that provider.
**Note**:If you haven’t configured your AI / LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to this doc on [how to integrate the LLM providers with Orkes console and provide access to required groups](/content/category/integrations/ai-llm).|
| model | Choose from the available language model for the chosen LLM provider. You can only choose models for which you have access.
For example, If your LLM provider is Azure Open AI & you’ve configured *text-davinci-003* as the language model, you can choose it under this field. |
diff --git a/docs/reference-docs/ai-tasks/prompt-template.md b/docs/reference-docs/ai-tasks/prompt-template.md
index f3fff8d6..46c34dbd 100644
--- a/docs/reference-docs/ai-tasks/prompt-template.md
+++ b/docs/reference-docs/ai-tasks/prompt-template.md
@@ -10,7 +10,7 @@ The AI prompts can be created in the Orkes Conductor cluster and can be used in
## Parameters
-| Attribute | Description |
+| Parameter | Description |
| --------- | ----------- |
| Prompt Name | A name for the prompt. |
| Description | A description for the prompt. |
diff --git a/static/img/llm-chat-complete-messages.png b/static/img/llm-chat-complete-messages.png
new file mode 100644
index 00000000..8e6bf3db
Binary files /dev/null and b/static/img/llm-chat-complete-messages.png differ
diff --git a/static/img/llm-chat-complete-ui-method.png b/static/img/llm-chat-complete-ui-method.png
new file mode 100644
index 00000000..5243374c
Binary files /dev/null and b/static/img/llm-chat-complete-ui-method.png differ