diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/404.html b/404.html new file mode 100644 index 0000000..ddb9bd5 --- /dev/null +++ b/404.html @@ -0,0 +1,1403 @@ + + + +
+ + + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +This chapter presents the high level concepts and constructs of the Owl Agent framework and how it is used in the context of a custom solution.
+An agent is a deployable unit built by choreographing one or more LLMs, each with its own workflow, which can leverage external tools and is guided by prompts.
+A typical solution includes: 1/ a front end to let a human interact with the system, 2/ a backend to manage the life cycle of agents integrated with an LLM running as a service, with dynamic integration of tools and functions, vector store retrievers, and decision services, and 3/ a decision service with inference rules to control the decisions to be made.
+ +4/ The conversation is stateful, persisted and integrated with the different agents defined in the solution.
+The Owl Agent backend may manage multiple concurrent conversations and multiple agent instances. It can also scale horizontally.
+The main concepts the framework defines and uses are:
+System prompts
in the Generative AI context define what the LLM should do.
The following diagram presents one agent that integrates one LLM with a set of tools. One tool retrieves a client record, given the client's name, by querying a database; another computes the next best action by sending structured data to a decision service, to get a better decision; and a third is a retriever that accesses collections in a vector database to support Retrieval-Augmented Generation (RAG) use cases.
+ +Tools can be any Python function, or proxies to remote business services.
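To make this concrete, here is a sketch of what such a tool could look like as a plain Python function; the client lookup, names, and data are hypothetical, purely for illustration and not part of the framework:

```python
# A tool is just a plain Python function the agent can call.
# The client "database" here is an in-memory dict standing in for a real DB.

CLIENTS = {
    "Acme Corp": {"id": 42, "segment": "enterprise"},
    "Globex": {"id": 7, "segment": "smb"},
}

def get_client_by_name(name: str) -> dict:
    """Return the client record for the given client name, or an empty dict."""
    return CLIENTS.get(name, {})

print(get_client_by_name("Acme Corp")["id"])  # → 42
```

A proxy to a remote business service would be wrapped the same way: the function body would issue the remote call instead of the dictionary lookup.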
+An agent is an interactive application or solution that supports a specific business use case, like helping a worker perform a specific task of a business process. The execution of the agent involves the coordination of one or more LLMs. Agents may be stateful to preserve the state of a conversation, using snapshot capabilities.
+Agent management has two parts: 1/ the management of the OwlAgent entity definitions with REST resources and a persistence repository, and 2/ the agent runner instance which manages conversations:
+ +In any solution, the conversation manager service uses one agent runner.
+Coming soon
+This chapter addresses how to develop a custom solution and how to maintain or add new features to the current backend.
+LLMs are very good at providing linguistic capabilities such as:
+On the other hand, LLMs are completely unable to reason in a logical, trustworthy, predictable, or explainable manner.
+There are situations where it is key to rely on robust symbolic reasoning capabilities in order to:
+If you need to make business decisions that must be trusted from a risk and compliance perspective, it would be completely unreasonable to place a statistical bet on a linguistic parrot finding the right answer.
+The right approach is to use the right technology for the right use case. The Athena Owl Agent framework makes it easy to integrate various approaches to build a hybrid AI solution.
+The following table shows various features used by three different applications built with the Athena Owl Agent framework.
+Depending on the capabilities we want to provide to the employees who will interact with the agent, we leverage different features.
| | Simple agent with RAG | Insurance complaint handling | Tax assistant |
|---|---|---|---|
| LLM | general Q&A | extract data, detect intention | extract data, detect intention, generate summary of decision outcome |
| RAG | queries on company documents | queries on company policies | |
| Fetch data (tool calling to data APIs) | | fetch customer and claim data | fetch taxpayer and vehicle data |
| Decide based on company policies (tool calling to rule service) | | determine next best action | determine eligibility for a tax discount |
| Human in the loop, open/closed conversation | | | ask specific questions needed to decide |
Run the owl-solution-create
script.
Next, we need to install the Python library dependencies and set the Python path.
+
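As a sketch, assuming a conventional requirements.txt at the project root (your generated solution's layout may differ), these two steps could look like:

```shell
# Install the Python library dependencies (file name is an assumption)
pip install -r requirements.txt

# Add the project root to the Python path so its packages are importable
export PYTHONPATH=$PWD:$PYTHONPATH
```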
+Edit the YAML files (or use the admin console)
+This is how we can declare a hook to override the default agent runner class:
+
ibu_tax_agent:
+ agent_id: ibu_tax_agent
+ name: Tax Agent
+ description: OpenAI based agent with tool to call tax reduction eligibility service
+ runner_class_name: ibu.llm.agents.IBU_TaxAgent.IBU_TaxAgent # custom runner class
+ class_name: athena.llm.agents.base_chain_agent.OwlAgent
+ modelClassName: langchain_openai.ChatOpenAI
+ modelName: gpt-3.5-turbo-0125
+ prompt_ref: tax_first_intent_prompt
+ temperature: 0
+ top_k: 1
+ top_p: 1
+
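The runner_class_name and class_name values above are dotted class paths that the framework resolves dynamically. That resolution pattern can be sketched as follows; the standard-library class used in the demonstration is only a stand-in, since the real agent classes are not importable here:

```python
import importlib

def load_class(dotted_path: str):
    """Resolve 'pkg.module.ClassName' into the class object it names."""
    module_path, class_name = dotted_path.rsplit(".", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# The framework would do the equivalent with e.g.
# 'ibu.llm.agents.IBU_TaxAgent.IBU_TaxAgent'; we demonstrate with the stdlib.
cls = load_class("collections.OrderedDict")
print(cls.__name__)  # → OrderedDict
```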
To run all the unit tests: +pytest -s tests/ut
+To run the integration tests: +...
We create a file ibu/llm/agents/IBU_TaxAgent.py to implement the IBU_TaxAgent class:
+
from athena.llm.agents.agent_mgr import OwlAgentDefaultRunner, OwlAgent, get_agent_manager
+
+class IBU_TaxAgent(OwlAgentDefaultRunner):
+    # your custom code here
+    pass  # placeholder so the class body is valid Python
+
Implement an agentic workflow using LangGraph
+There are two possible ways to run the out-of-the-box frontend webapp:
Use the docker compose up -d
script.
If needed, install node:
+
+If needed, install yarn: npm install --global yarn
To run all the unit tests: +
+Before running the integration tests, we need to start the Docker containers:
+
We can then run all the integration tests: +
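Assuming the same pytest layout as the unit tests shown earlier (the tests/it path is an assumption), the integration-test flow might look like:

```shell
# Start the backend containers first
docker compose up -d

# Then run the integration tests
pytest -s tests/it
```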
+You will need Docker and Docker Desktop on your machine.
+Create a folder that will be the placeholder for your first Athena project:
Recommendation: place the 'my-athena-ai-app' folder at the same level as the 'athena-owl-core' folder on your disk; this makes it easier to reference files using relative paths.
+Copy the skeleton app SkeletonAppHelloLLM into your app folder and rename it:
+In this demo, we use OpenAI and Tavily. So, the .env file should look like this: +
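Based on the key names used elsewhere in this documentation, the .env file would contain entries along these lines:

```
OPENAI_API_KEY=USE_YOUR_OPENAI_API_KEY
TAVILY_API_KEY=USE_YOUR_TAVILY_API_KEY
```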
+Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +This guide provides step-by-step instructions to set up this demo on macOS.
+Homebrew is a package manager for macOS that simplifies the installation of software.
+To install Homebrew, open your Terminal and run the following command:
+ +After the installation is complete, verify that Homebrew is installed:
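The two commands referenced above are, at the time of writing (check brew.sh for the current official install one-liner):

```shell
# Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Verify the installation
brew --version
```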
+ +Colima is a lightweight container runtime for macOS that works with Docker.
+To install Colima, run:
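With Homebrew available, the install command is:

```shell
brew install colima
```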
+ +If you have an Apple Silicon (M1/M2) Mac, you need to install Rosetta:
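Rosetta can be installed from the command line:

```shell
softwareupdate --install-rosetta
```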
+ +Docker is essential for containerization, and Docker Compose helps in managing multi-container applications.
+To install Docker and Docker Compose, run:
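Via Homebrew, that is:

```shell
brew install docker docker-compose
```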
+ +Now, you can start Colima with the desired configuration. If you have an Intel Mac, run:
+ +For Apple Silicon (M1/M2) Macs, run:
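The exact flags used by this demo are not shown here; a plausible configuration (adjust CPU and memory to your machine, and see the Colima documentation for the architecture and VM-type flags) is:

```shell
# Intel Mac
colima start --cpu 4 --memory 8

# Apple Silicon (M1/M2) Mac, using Rosetta to run x86_64 images
colima start --cpu 4 --memory 8 --vm-type vz --vz-rosetta
```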
+ +Now that your environment is set up, you can clone the athena-owl-demos
repository. If you don't have Git installed, you can install it by running:
Then, clone the repository and navigate into the demo directory:
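A sketch of those steps (the GitHub organization is not specified in this guide, so it is left as a placeholder):

```shell
# Install Git if needed
brew install git

# Clone the repository and enter it (replace <org> with the actual organization)
git clone https://github.com/<org>/athena-owl-demos.git
cd athena-owl-demos
```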
+ +Create an IBM watsonx.ai account
+ Visit the IBM watsonx.ai page and follow the links to set up a Cloud instance.
Generate an API key
+ In your IBM Cloud account, go to Profile and settings
and generate an API key.
Instantiate a watsonx.ai service
+ From your IBM Cloud account, create a new watsonx.ai service instance.
Copy your watsonx.ai configuration parameters
+WATSONX_URL=<your watsonx.ai URL> # example value = https://us-south.ml.cloud.ibm.com/
+WATSONX_APIKEY=<your watsonx.ai API Key>
+WATSONX_PROJECT_ID=<your watsonx.ai Project ID>
+
Navigate to the IBU Insurance demo directory:
+ +Create your .env
file. It contains some configuration parameters. Paste your watsonx.ai configuration parameters:
WATSONX_URL=<your watsonx.ai URL> # example value = https://us-south.ml.cloud.ibm.com/
+WATSONX_APIKEY=<your watsonx.ai API Key>
+WATSONX_PROJECT_ID=<your watsonx.ai Project ID>
+
If you plan to use other LLMs for which a key is needed, paste them here:
+# Define the IBM KEY to access models deployed on watsonx.ai
+IBM_API_KEY=USE_YOUR_IBM_API_KEY
+
+# Only when you want to use one of the OpenAI models
+OPENAI_API_KEY=USE_YOUR_OPENAI_API_KEY
+
+# Only when you want to use one of the Mistral AI models
+MISTRAL_API_KEY=USE_YOUR_MISTRAL_API_KEY
+
+# Use Tavily to search recent news; useful for validating tool calling
+TAVILY_API_KEY=USE_YOUR_TAVILY_API_KEY
+
+# If you want to get traces with LangChain tracing - this is optional
+LANGCHAIN_TRACING_V2=false
+LANGCHAIN_API_KEY=USE_YOUR_LANGCHAIN_KEY
+
+# If you want to use one of the Anthropic Claude models
+ANTHROPIC_API_KEY=USE_YOUR_ANTHROPIC_KEY
+
Once everything is set up, you can run the project by executing:
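Given the Docker Compose setup used throughout this guide, the command is most likely:

```shell
docker compose up -d
```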
+ +After the project is running, you can access the Owl Agent at:
+ +Your environment should now be fully set up and ready for development and testing. If you encounter any issues, please refer to the documentation or raise an issue in the GitHub repository.
+ + + + + + + + + + + + + +Coming soon...
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +Coming soon
+ + + + + + + + + + + + + +