---
title: 'Ollama Example'
description: 'Using Ollama with AgentOps'
mode: "wide"
---

{/* SOURCE_FILE: examples/ollama_examples/ollama_examples.ipynb */}

# AgentOps Ollama Integration

This example demonstrates how to use AgentOps to monitor your Ollama LLM calls.

First, let's install the required packages:

> ⚠️ **Important**: Make sure you have Ollama installed and running locally before running this notebook. You can install it from [ollama.com](https://ollama.com).

```python
%pip install -U ollama
%pip install -U agentops
%pip install -U python-dotenv
```

Then import them:

```python
import ollama
import agentops
import os
from dotenv import load_dotenv
```

Next, we'll set our API keys. Ollama itself doesn't need one; we just need to make sure Ollama is running locally.

[Get an AgentOps API key](https://agentops.ai/settings/projects)

1. Create an environment variable in a `.env` file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
2. Replace `<your_agentops_key>` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!

```python
# Let's load our environment variables
load_dotenv()

AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or "<your_agentops_key>"
```

```python
# Initialize AgentOps with some default tags
agentops.init(AGENTOPS_API_KEY, default_tags=["ollama-example"])
```

Now let's make some basic calls to Ollama. Make sure you have pulled the model first; run the following, or replace `mistral` with whichever model you want to use.

```python
ollama.pull("mistral")
```
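
As an optional sanity check, you can confirm the model is now available locally. Note that the exact return shape of `ollama.list()` varies across `ollama-python` versions (a plain dict in older releases, a typed response object in newer ones), so we simply print it here:

```python
# Optional: verify the pulled model shows up among the local models.
# The return shape of ollama.list() varies by ollama-python version,
# so we just print the whole listing.
print(ollama.list())
```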

```python
# Basic completion
response = ollama.chat(
    model='mistral',
    messages=[{
        'role': 'user',
        'content': 'What are the benefits of using AgentOps for monitoring LLMs?',
    }]
)
print(response['message']['content'])
```
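
Beyond the message text, the non-streamed response also carries basic generation stats, which pair nicely with the monitoring theme here. The field names below follow the Ollama chat API; availability can vary by version:

```python
# Generation stats from the Ollama chat response.
# Field names follow the Ollama chat API and may vary by version.
print(response['eval_count'])      # number of tokens generated
print(response['total_duration'])  # total request time, in nanoseconds
```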

Let's try streaming responses as well:

```python
# Streaming Example
stream = ollama.chat(
    model='mistral',
    messages=[{
        'role': 'user',
        'content': 'Write a haiku about monitoring AI agents',
    }],
    stream=True
)

for chunk in stream:
    print(chunk['message']['content'], end='')
```
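
If you also need the complete response text after streaming (for logging or post-processing), accumulate the chunks as you print them. A minimal sketch, issuing a fresh request since the stream above has already been consumed:

```python
# Accumulate streamed chunks into one string while printing them
stream = ollama.chat(
    model='mistral',
    messages=[{'role': 'user', 'content': 'Write a haiku about monitoring AI agents'}],
    stream=True,
)

full_text = ""
for chunk in stream:
    piece = chunk['message']['content']
    full_text += piece
    print(piece, end='')
```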

```python
# Conversation Example
messages = [
    {
        'role': 'user',
        'content': 'What is AgentOps?'
    },
    {
        'role': 'assistant',
        'content': 'AgentOps is a monitoring and observability platform for LLM applications.'
    },
    {
        'role': 'user',
        'content': 'Can you give me 3 key features?'
    }
]

response = ollama.chat(
    model='mistral',
    messages=messages
)
print(response['message']['content'])
```
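
You can keep a multi-turn conversation going by appending the assistant's reply, plus your next question, to `messages` before calling `ollama.chat` again. A small sketch (the follow-up question is illustrative):

```python
# Continue the conversation: append the assistant's reply and a new user turn
messages.append(response['message'])
messages.append({'role': 'user', 'content': 'Which of those features helps most with debugging?'})

followup = ollama.chat(model='mistral', messages=messages)
print(followup['message']['content'])
```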

> 💡 **Note**: In production environments, you should add proper error handling around the Ollama calls and use `agentops.end_session("Error")` when exceptions occur.
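
For example, a minimal sketch of that pattern (reusing the `mistral` model pulled above; illustrative only) might look like:

```python
# Sketch of production-style error handling around an Ollama call
try:
    response = ollama.chat(
        model='mistral',
        messages=[{'role': 'user', 'content': 'Ping'}],
    )
    print(response['message']['content'])
except Exception:
    # Mark the session as errored so the failure is visible in AgentOps
    agentops.end_session("Error")
    raise
```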

Finally, let's end our AgentOps session:

```python
agentops.end_session("Success")
```