terraform-aws-bedrock-agent

Terraform module for Amazon Bedrock Agent resources


Providers

| Name | Version |
|------|---------|
| aws | ~> 5.73 |
| opensearch | ~> 2.2 |
| time | ~> 0.12 |

Requirements

| Name | Version |
|------|---------|
| terraform | ~> 1.3 |
| aws | ~> 5.73 |
| opensearch | ~> 2.2 |
| time | ~> 0.12 |
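A root module that satisfies these constraints can pin the versions explicitly. As a sketch, the provider sources below are the usual registry addresses (`hashicorp/aws`, `opensearch-project/opensearch`, `hashicorp/time`); verify them against the module's own `required_providers` block:

```hcl
terraform {
  required_version = "~> 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.73"
    }
    opensearch = {
      source  = "opensearch-project/opensearch"
      version = "~> 2.2"
    }
    time = {
      source  = "hashicorp/time"
      version = "~> 0.12"
    }
  }
}
```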

Required Inputs

The following input variables are required:

Description: Name for the agent.

Type: string

Description: Name for the agent alias.

Type: string

Description: Model identifier for agent.

Type: string

Description: Name for the knowledgebase.

Type: string

Description: Description for the knowledgebase.

Type: string

Description: ARN of the S3 bucket with the data.

Type:

```hcl
object({
  bucket_arn              = string
  bucket_owner_account_id = optional(string)
  inclusion_prefixes      = optional(set(string))
})
```

Description: Name of OpenSearch Serverless Collection.

Type: string
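Taken together, the required inputs might be wired up as below. This listing omits the variable names, so every argument name in the sketch is an assumption; check the module's `variables.tf` for the real interface:

```hcl
module "bedrock_agent" {
  source = "Flaconi/bedrock-agent/aws"

  # All argument names below are illustrative, not the documented interface.
  agent_name                = "support-agent"
  agent_alias_name          = "prod"
  agent_model_id            = "anthropic.claude-v2"
  knowledgebase_name        = "support-kb"
  knowledgebase_description = "Product documentation for the support agent"
  oss_collection_name       = "support-kb-collection"

  # The required S3 object input shown above.
  s3_configuration = {
    bucket_arn         = "arn:aws:s3:::example-kb-documents"
    inclusion_prefixes = ["docs/"]
  }
}
```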

Optional Inputs

The following input variables are optional (have default values):

Description: Description for the agent alias.

Type: string

Default: null

Description: Model identifier for agent.

Type: string

Default: "anthropic.claude-v2"

Description: Model identifier for Knowledgebase.

Type: string

Default: "amazon.titan-embed-text-v1"

Description: Data deletion policy for a data source. Valid values: RETAIN, DELETE

Type: string

Default: "RETAIN"

Description: Vector ingestion configuration for the data source (chunking strategy and optional custom transformation).

Type:

```hcl
object({
  chunking_configuration = object({
    chunking_strategy = string
    fixed_size_chunking_configuration = optional(object({
      max_tokens         = number
      overlap_percentage = optional(number)
    }))
    hierarchical_chunking_configuration = optional(object({
      overlap_tokens = number
      level_1        = object({ max_tokens = number })
      level_2        = object({ max_tokens = number })
    }))
    semantic_chunking_configuration = optional(object({
      breakpoint_percentile_threshold = number
      buffer_size                     = number
      max_token                       = number
    }))
  })
  custom_transformation_configuration = optional(object({
    intermediate_storage    = string
    transformation_function = string
  }))
})
```

Default:

```json
{
  "chunking_configuration": {
    "chunking_strategy": "FIXED_SIZE",
    "fixed_size_chunking_configuration": {
      "max_tokens": 300,
      "overlap_percentage": 20
    },
    "hierarchical_chunking_configuration": null,
    "semantic_chunking_configuration": null
  }
}
```
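To use a different chunking strategy, the whole object can be overridden. The attribute names come from the type shown above; the variable name in the sketch is an assumption, since this listing omits it:

```hcl
# Hypothetical variable name; the object shape matches the type above.
vector_ingestion_configuration = {
  chunking_configuration = {
    chunking_strategy = "SEMANTIC"
    semantic_chunking_configuration = {
      breakpoint_percentile_threshold = 95
      buffer_size                     = 0
      max_token                       = 300
    }
  }
}
```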

Description: Additional ARNs of roles to access OpenSearch

Type: list(string)

Default: []

Description: Prompt template for knowledge base response generation.

Type: string

Default: " You are a helpful assistant. Answer the following question using the context provided:\n Question: {question}\n Context: {context}\n Your response should be thoughtful, detailed, and relevant to the provided context.\n"

Description: Parser mode for knowledge base response generation.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for knowledge base response generation.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for knowledge base response generation.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default: ["END"]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for pre-processing.

Type: string

Default: " You are preparing the input. Extract relevant context and pre-process the following question:\n Question: {question}\n Context: {context}\n Pre-processing should focus on extracting the core information.\n"

Description: Parser mode for pre-processing.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for pre-processing.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for pre-processing.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default: ["END"]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for orchestration.

Type: string

Default: " You are orchestrating the flow of the agent. Based on the question and context, determine the next steps in the process:\n Question: {question}\n Context: {context}\n Plan the next steps to follow the best strategy.\n"

Description: Parser mode for orchestration.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for orchestration.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for orchestration.

Type: string

Default: "ENABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default: ["END"]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9

Description: Prompt template for post-processing.

Type: string

Default: "You are performing post-processing. Review the agent's output and refine the response for clarity and relevance:\nResponse: {response}\nContext: {context}\nEnsure the output is polished and aligns with the context.\n"

Description: Parser mode for post-processing.

Type: string

Default: "DEFAULT"

Description: Prompt creation mode for post-processing.

Type: string

Default: "OVERRIDDEN"

Description: Prompt state for post-processing.

Type: string

Default: "DISABLED"

Description: Maximum number of tokens to allow in the generated response.

Type: number

Default: 512

Description: List of stop sequences that will stop generation.

Type: list(string)

Default: ["END"]

Description: Likelihood of the model selecting higher-probability options while generating a response.

Type: number

Default: 0.7

Description: Number of top most-likely candidates from which the model chooses the next token.

Type: number

Default: 50

Description: Top percentage of the probability distribution of next tokens, from which the model chooses the next token.

Type: number

Default: 0.9
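Each prompt stage above exposes the same inference parameters (maximum response length, stop sequences, temperature, top-k, top-p). Tuning the orchestration stage toward more deterministic planning might look like the sketch below; the variable names are assumptions, since this listing omits them:

```hcl
# Hypothetical variable names, one per orchestration inference parameter.
orchestration_temperature    = 0.2     # lower = more deterministic token choices
orchestration_top_k          = 25      # consider fewer candidate tokens
orchestration_top_p          = 0.9     # keep the default nucleus-sampling cutoff
orchestration_max_length     = 1024    # allow longer planning output
orchestration_stop_sequences = ["END"]
```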

Description: A map of tags to assign to the created resources.

Type: map(string)

Default: {}

Outputs

| Name | Description |
|------|-------------|
| agent | Information about the created Bedrock Agent |
| agent_alias | Information about the created Bedrock Agent Alias |
| knowledge_base | Information about the created Bedrock Knowledgebase |
| oss_collection | Information about the created OpenSearch Serverless collection |
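The outputs are objects describing the created resources and can be re-exported by the calling configuration, for example:

```hcl
# Assumes a module block named "bedrock_agent"; the `agent` output object's
# attributes are not listed in this README, so the object is passed through whole.
output "bedrock_agent" {
  description = "The Bedrock Agent created by the module"
  value       = module.bedrock_agent.agent
}
```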

License

MIT License

Copyright (c) 2024 Flaconi GmbH