diff --git a/kendra_retriever_samples/README.md b/kendra_retriever_samples/README.md
index 6b061c5..0ea4687 100644
--- a/kendra_retriever_samples/README.md
+++ b/kendra_retriever_samples/README.md
@@ -39,25 +39,25 @@ pip install --force-reinstall "boto3>=1.28.57"
 ```
 
 ## Running samples
-Before you run the sample, you need to deploy a Large Language Model (or get an API key if you using Anthropic or OPENAI). The samples in this repository have been tested on models deployed using SageMaker Jumpstart. The model id for the LLMS are specified in the table below.
+Before you run the sample, you need to deploy a Large Language Model (or get an API key if you are using Anthropic or OpenAI). The samples in this repository have been tested on models deployed using SageMaker JumpStart.
 
-With the latest sagemaker release each endpoint can hold multiple models (called InferenceComponent). For jumpstart models, optionally specify the INFERENCE_COMPONENT_NAME as well as an environment varialbe
+With the latest SageMaker release, each endpoint can host multiple models (called InferenceComponents). For JumpStart models, you may also need to set the INFERENCE_COMPONENT_NAME environment variable: it is required when you deploy a JumpStart model from the new Studio console, but not when you deploy from the Studio Classic console or through the SDK.
+The models and their corresponding environment variable names are listed in the table below.
 
-| Model name | env var name | Endpoint Name | Inference component name (optional) |streamlit provider name |
-| -----------| -------- | ------------------ | ----------------- |
-| Falcon 40B instruct | FALCON_40B_ENDPOINT, INFERENCE_COMPONENT_NAME | | |falcon40b |
-| Llama2 70B instruct | LLAMA_2_ENDPOINT, INFERENCE_COMPONENT_NAME | | | llama2 |
-| Bedrock Titan | None | | | bedrock_titan|
-| Bedrock Claude | None | | | bedrock_claude|
-| Bedrock Claude V2 | None | | | bedrock_claudev2|
+| Model Name | Env Var Name | Endpoint Name | Inference Component Name (Optional) | Streamlit Provider Name |
+| -----------| -------- | ------------------ | ----------------- |----------------- |
+| Falcon 40B instruct | FALCON_40B_ENDPOINT, INFERENCE_COMPONENT_NAME | | | falcon40b |
+| Llama2 70B instruct | LLAMA_2_ENDPOINT, INFERENCE_COMPONENT_NAME | | | llama2 |
+| Bedrock Titan | None | | | bedrock_titan |
+| Bedrock Claude | None | | | bedrock_claude |
+| Bedrock Claude V2 | None | | | bedrock_claudev2 |
 
-after deploying the LLM, set up environment variables for kendra id, aws_region endpoint name (or the API key for an external provider) and optionally the inference component name
+After deploying the LLM, set up environment variables for the Kendra index ID, the AWS Region, and the endpoint name (or the API key for an external provider), and optionally the inference component name.
 
-For example, for running the `kendra_chat_llama_2.py` sample, these environment variables must be set: AWS_REGION, KENDRA_INDEX_ID, LLAMA_2_ENDPOINT and INFERENCE_COMPONENT_NAME. INFERENCE_COMPONENT_NAME is only required when deploying the jumpstart through the console or if you explicitely create an inference component using code. It is also possible to create an endpoint without and inference component in which case, do not set the INFERENCE_COMPONENT_FIELD.
-
-You can use commands as below to set the environment variables. Only set the environment variable for the provider that you are using. For example, if you are using Flan-xl only set the FLAN_XXL_ENDPOINT. There is no need to set the other Endpoints and keys.
+For example, to run the `kendra_chat_llama_2.py` sample, these environment variables must be set: AWS_REGION, KENDRA_INDEX_ID, LLAMA_2_ENDPOINT, and INFERENCE_COMPONENT_NAME (only if you deployed the JumpStart model from the new Studio console).
+You can use the commands below to set the environment variables. Only set the environment variables for the provider that you are using. For example, if you are using Falcon 40B, only set FALCON_40B_ENDPOINT. There is no need to set the other endpoints and keys.
 
 ```bash
 export AWS_REGION=
@@ -66,7 +66,7 @@ export KENDRA_INDEX_ID=
 export FALCON_40B_ENDPOINT= # only if you are using falcon as the endpoint
 export LLAMA_2_ENDPOINT= #only if you are using llama2 as the endpoint
-export INFERENCE_COMPONENT_NAME= # if you are deploying the FM via the JumpStart console.
+export INFERENCE_COMPONENT_NAME= # only if you are deploying the FM via the new Studio console.
 export OPENAI_API_KEY= # only if you are using OPENAI as the endpoint
 export ANTHROPIC_API_KEY= # only if you are using Anthropic as the endpoint
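+
+# A minimal sketch of a complete setup for the Llama2 sample. The values below
+# are illustrative placeholders (not real resources) -- substitute your own
+# Region, Kendra index ID, and endpoint/component names. Invoking the script
+# directly with `python` assumes it runs as a standalone entry point.
+#   export AWS_REGION=us-east-1
+#   export KENDRA_INDEX_ID=<your-kendra-index-id>
+#   export LLAMA_2_ENDPOINT=<your-llama2-endpoint-name>
+#   export INFERENCE_COMPONENT_NAME=<your-inference-component-name>  # new Studio console deployments only
+#   python kendra_chat_llama_2.py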