LangChain gives you the building blocks to interface with any language model.
- (script-chain.py)
- (script-retrieval_chain.py)
  - A retriever can be backed by anything: a DB, a data store, the internet, or a vector store
- (script-conversational_retrieval_chain.py)
  - The chains above answer only a single, history-independent question
  - This one can handle follow-up questions as well (see the sketch after this list)
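A hedged sketch of how the conversational variant can be wired up, assuming a recent LangChain release (with `create_history_aware_retriever` / `create_retrieval_chain`), the `langchain-openai` and `faiss-cpu` packages, and an API key; the model name and documents are illustrative:

```python
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
docs = [Document(page_content="LangChain supports retrieval over vector stores.")]
retriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever()

# 1. Rewrite the follow-up question into a standalone query using the chat history.
rephrase_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
    ("human", "Rephrase the question above as a standalone search query."),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, rephrase_prompt)

# 2. Answer the rewritten question from the retrieved documents.
answer_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer using only this context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])
document_chain = create_stuff_documents_chain(llm, answer_prompt)

conversational_retrieval_chain = create_retrieval_chain(history_aware_retriever, document_chain)

result = conversational_retrieval_chain.invoke({
    "chat_history": [HumanMessage(content="What is LangChain?"),
                     AIMessage(content="A framework for building LLM applications.")],
    "input": "Does it support retrieval?",  # a follow-up question
})
print(result["answer"])
```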
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
Chains go beyond a single LLM call and involve sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
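A minimal sketch of a chain built with the pipe (LCEL) syntax; the model name and prompt wording are illustrative, and the `langchain-openai` package plus an API key are assumed:

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name
parser = StrOutputParser()             # AIMessage -> plain string

# Each component implements the same Runnable interface, so they compose with `|`.
chain = prompt | llm | parser

print(chain.invoke({"topic": "retrieval augmented generation"}))
```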
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
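A hedged sketch of a tool-calling agent, assuming a recent LangChain release and `langchain-openai`; the `word_count` tool and model name are made up for illustration:

```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.agents import create_tool_calling_agent, AgentExecutor

@tool
def word_count(text: str) -> int:
    """Count the number of words in a piece of text."""
    return len(text.split())

tools = [word_count]
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),  # intermediate Actions/Observations go here
])

agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

print(executor.invoke({"input": "How many words are in 'LangChain makes agents easy'?"}))
```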
Memory refers to persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
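A small sketch of the memory interface using `ConversationBufferMemory`, one of the stock implementations; the keys shown are its defaults:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)

# Persist one exchange from a chain/agent run...
memory.save_context({"input": "Hi, I'm Alice."}, {"output": "Hello Alice!"})

# ...and load it back on the next call, so the model sees the prior turns.
print(memory.load_memory_variables({}))
# {'history': [HumanMessage(content="Hi, I'm Alice."), AIMessage(content='Hello Alice!')]}
```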
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- Retrieval: a retriever fetches only the relevant context (data that was not included in the model's training data) and passes it to the LLM
- Prompt
- Chat Model
- Prompt: The inputs to language models are often called prompts
  - Prompts generate structured messages from user input. They are cast into `Messages` for ChatModels and `str` for LanguageModels. `PromptValue` is the interface-like common class.
  - The conversion from the user's input string to the final structured input for the model is handled by `PromptTemplates`.
  - Types:
    - `MessagePromptTemplate`: consists of a `role` and a `PromptTemplate`. Can be a `HumanMessagePromptTemplate`, `AIMessagePromptTemplate`, or `SystemMessagePromptTemplate`, depending on the `role`.
    - `ChatPromptTemplate`: consists of a list of MessagePromptTemplates covering a variety of roles. Think of it as the conversation history, i.e. a list of messages from the AI, User, System, or other roles. (See the sketch below.)
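A minimal sketch of building and invoking a `ChatPromptTemplate` (assumes a recent `langchain_core` install; older versions expose the same class under `langchain.prompts`):

```python
from langchain_core.prompts import ChatPromptTemplate

# A ChatPromptTemplate is a list of (role, template) pairs; the placeholders
# are filled in from user input when the prompt is invoked.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant that answers in {language}."),
    ("human", "{question}"),
])

# .invoke() returns a PromptValue, which can be converted to chat messages
# (for ChatModels) or to a plain string (for LLMs).
prompt_value = prompt.invoke({"language": "French", "question": "What is LangChain?"})
print(prompt_value.to_messages())  # list of SystemMessage / HumanMessage
print(prompt_value.to_string())    # single formatted string
```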
- Model
  - Types: LLMs and ChatModels (built upon LLMs and tuned for conversation)
  - ChatModels take input as `Message`s with attrs `role` and `content`; different message classes are present in LangChain for different roles. `content` is either a string or a list of dictionaries (for multi-modal input).
    - A message can be a `HumanMessage` or an `AIMessage`
    - `SystemMessage` tells the model how to behave
    - `FunctionMessage` represents the output of a function call
    - `ToolMessage` represents the output of a tool call
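A minimal sketch of calling a ChatModel with explicit message objects (assumes the `langchain-openai` package and an `OPENAI_API_KEY`; the model name is an example):

```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

chat = ChatOpenAI(model="gpt-4o-mini")  # example model name

messages = [
    SystemMessage(content="You are a terse assistant."),       # how to behave
    HumanMessage(content="Name one use case for LangChain."),  # user turn
]

response = chat.invoke(messages)  # returns an AIMessage
print(response.content)
```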
- Output Parser: formats an `AIMessage` into a human-readable string
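For example, `StrOutputParser` (one of the built-in parsers) simply pulls the string content out of the message:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.messages import AIMessage

parser = StrOutputParser()
# Works on raw strings and on chat messages; here it extracts the content field.
print(parser.invoke(AIMessage(content="42 is the answer.")))  # -> "42 is the answer."
```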
- Tools
- Document Loader: e.g. `WebBaseLoader` for loading pages from the internet
- Text Splitter
- Embeddings: e.g. `OpenAIEmbeddings`
- Vector Store: documents are embedded into this store for indexing and similarity search (see the indexing sketch below)
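A hedged sketch of the indexing pipeline these pieces form (the `langchain-community`, `langchain-text-splitters`, `langchain-openai`, and `faiss-cpu` packages are assumed to be installed; the URL and chunk sizes are just examples):

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# 1. Load raw documents from the web.
docs = WebBaseLoader("https://python.langchain.com/docs/introduction/").load()

# 2. Split them into chunks small enough to embed and retrieve individually.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks and index them in a vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 4. Expose the store as a retriever for use inside a chain.
retriever = vectorstore.as_retriever()
print(retriever.invoke("What is LangChain?")[:2])  # top matching chunks
```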
- Document Loader
- Prompt
- Model
- Output Parser
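These pieces (plus the retriever from the indexing step) compose into the retrieval chain itself. A minimal LCEL sketch, assuming the `retriever` and `llm` objects built in the earlier snippets; `format_docs` is an illustrative helper, not a LangChain API:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

def format_docs(docs):
    # Illustrative helper: join the retrieved chunks into one context block.
    return "\n\n".join(doc.page_content for doc in docs)

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)

# retriever and llm are assumed to be the objects from the sketches above.
retrieval_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(retrieval_chain.invoke("What does LangChain provide for chains?"))
```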