
separated llm tracker by model #346

Merged: 29 commits, Aug 20, 2024

Commits:
bc9ab5f  separated llm tracker by model (bboynton97, Aug 9, 2024)
8313a69  llm refactor progress (bboynton97, Aug 12, 2024)
cae048b  working for openai (bboynton97, Aug 13, 2024)
0d0919a  groq provider (bboynton97, Aug 13, 2024)
67ddeba  cohere provider (bboynton97, Aug 13, 2024)
61c14ee  ollama provider (bboynton97, Aug 13, 2024)
9f24b7d  remove support for openai v0 (bboynton97, Aug 13, 2024)
7d1eb30  cohere support (bboynton97, Aug 14, 2024)
4bf8ba1  Merge branch 'refs/heads/main' into llm-handler-refactor (bboynton97, Aug 14, 2024)
fa6f1e5  test import fix (bboynton97, Aug 14, 2024)
0b643c5  test import fix (bboynton97, Aug 14, 2024)
efe14fa  Merge branch 'refs/heads/main' into llm-handler-refactor (bboynton97, Aug 14, 2024)
ad0ce68  Merge branch 'refs/heads/main' into llm-handler-refactor (bboynton97, Aug 14, 2024)
edc2bea  groq test fix (bboynton97, Aug 14, 2024)
f036043  ollama tests (bboynton97, Aug 14, 2024)
8a772fb  litellm tests (bboynton97, Aug 14, 2024)
7e4d4c3  Merge branch 'refs/heads/main' into llm-handler-refactor (bboynton97, Aug 14, 2024)
be8fcb8  dont import litellm (bboynton97, Aug 14, 2024)
4e6b530  cohere fixes and tests (bboynton97, Aug 15, 2024)
55da826  Merge branch 'main' into llm-handler-refactor (bboynton97, Aug 16, 2024)
16a0fde  Merge branch 'refs/heads/main' into llm-handler-refactor (bboynton97, Aug 18, 2024)
4c4f88e  Merge remote-tracking branch 'origin/llm-handler-refactor' into llm-h… (bboynton97, Aug 18, 2024)
2797be0  oai version <0.1 better deprecation warning (bboynton97, Aug 19, 2024)
47633ec  [FEATURE] Add Anthropic LLM support via `anthropic` Python SDK (#332) (the-praxs, Aug 19, 2024)
45c9c08  added undo to canaries (bboynton97, Aug 19, 2024)
0c1a1d5  Merge remote-tracking branch 'origin/llm-handler-refactor' into llm-h… (bboynton97, Aug 19, 2024)
f4ab640  added anthropic tests (bboynton97, Aug 19, 2024)
0bcaf98  undo instrumenting for litellm (bboynton97, Aug 20, 2024)
d724fbc  cohere considerations (bboynton97, Aug 20, 2024)
README.md (107 additions, 1 deletion)

@@ -249,7 +249,113 @@ agentops.end_session('Success')
</details>


### Anthropic ﹨

Track agents built with the Anthropic Python SDK (>=0.32.0).

- [AgentOps integration example](./examples/anthropic/anthropic_example.ipynb)
- [Official Anthropic documentation](https://docs.anthropic.com/en/docs/welcome)

<details>
<summary>Installation</summary>

```bash
pip install anthropic
```

```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
# This is the default and can be omitted
api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

message = client.messages.create(
max_tokens=1024,
messages=[
{
"role": "user",
"content": "Tell me a cool fact about AgentOps",
}
],
model="claude-3-opus-20240229",
)
print(message.content)

agentops.end_session('Success')
```

Streaming
```python
import os

import anthropic
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

client = anthropic.Anthropic(
# This is the default and can be omitted
api_key=os.environ.get("ANTHROPIC_API_KEY"),
)

stream = client.messages.create(
max_tokens=1024,
model="claude-3-opus-20240229",
messages=[
{
"role": "user",
"content": "Tell me something cool about streaming agents",
}
],
stream=True,
)

response = ""
for event in stream:
if event.type == "content_block_delta":
response += event.delta.text
elif event.type == "message_stop":
print("\n")
print(response)
print("\n")
```

Async

```python
import asyncio
import os

from anthropic import AsyncAnthropic

client = AsyncAnthropic(
# This is the default and can be omitted
api_key=os.environ.get("ANTHROPIC_API_KEY"),
)


async def main() -> None:
message = await client.messages.create(
max_tokens=1024,
messages=[
{
"role": "user",
"content": "Tell me something interesting about async agents",
}
],
model="claude-3-opus-20240229",
)
print(message.content)


asyncio.run(main())
```
</details>

### LiteLLM 🚅

AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.
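
The remainder of the LiteLLM section is collapsed in this diff. As a rough sketch of the usage pattern, assuming LiteLLM's standard `completion` call and the same AgentOps setup as the Anthropic examples above (the collapsed README text may differ):

```python
import litellm
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)

# One OpenAI-style call signature, routed to any supported provider.
response = litellm.completion(
    model="gpt-3.5-turbo",  # illustrative; any LiteLLM-supported model string works
    messages=[{"role": "user", "content": "Write a haiku about agents"}],
)
print(response.choices[0].message.content)

agentops.end_session('Success')
```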

agentops/client.py (2 additions, 2 deletions)

```diff
@@ -19,15 +19,15 @@
 from termcolor import colored

 from .event import Event, ErrorEvent
-from .helpers import (
+from .singleton import (
     conditional_singleton,
 )
 from .session import Session, active_sessions
 from .host_env import get_host_env
 from .log_config import logger
 from .meta_client import MetaClient
 from .config import Configuration
-from .llm_tracker import LlmTracker
+from .llms import LlmTracker


 @conditional_singleton
```
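
The import change above reflects the core of this PR: the monolithic `llm_tracker.py` becomes an `agentops/llms` package. A plausible per-provider layout, inferred from the commit messages rather than shown in this diff:

```
agentops/llms/
    __init__.py    # LlmTracker entry point, detects installed SDKs
    openai.py
    anthropic.py
    cohere.py
    groq.py
    ollama.py
    litellm.py
```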
agentops/event.py (1 addition, 1 deletion)

```diff
@@ -70,7 +70,7 @@ class LLMEvent(Event):
     thread_id(UUID, optional): The unique identifier of the contextual thread that a message pertains to.
     prompt(str, list, optional): The message or messages that were used to prompt the LLM. Preferably in ChatML format which is more fully supported by AgentOps.
     prompt_tokens(int, optional): The number of tokens in the prompt message.
-    completion(str, object, optional): The message or returned by the LLM. Preferably in ChatML format which is more fully supported by AgentOps.
+    completion(str, object, optional): The message or messages returned by the LLM. Preferably in ChatML format which is more fully supported by AgentOps.
     completion_tokens(int, optional): The number of tokens in the completion message.
     model(str, optional): LLM model e.g. "gpt-4", "gpt-3.5-turbo".
```
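
For context on the fields being documented, a minimal sketch of recording an `LLMEvent` by hand, assuming an initialized AgentOps session and the public `agentops.record` helper (the provider handlers in this PR emit these events automatically):

```python
from agentops import record
from agentops.event import LLMEvent

# Manually record a completed LLM call; values here are illustrative only.
record(
    LLMEvent(
        prompt=[{"role": "user", "content": "Hello"}],  # ChatML-style prompt
        prompt_tokens=5,
        completion={"role": "assistant", "content": "Hi there"},
        completion_tokens=4,
        model="gpt-4",
    )
)
```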
agentops/helpers.py (0 additions, 33 deletions)

```diff
@@ -1,7 +1,6 @@
 from pprint import pformat
 from functools import wraps
 from datetime import datetime, timezone
-import json
 import inspect
 from typing import Union
 import http.client
@@ -11,38 +10,6 @@
 from .log_config import logger
 from uuid import UUID
 from importlib.metadata import version
-import subprocess
-
-ao_instances = {}
-
-
-def singleton(class_):
-
-    def getinstance(*args, **kwargs):
-        if class_ not in ao_instances:
-            ao_instances[class_] = class_(*args, **kwargs)
-        return ao_instances[class_]
-
-    return getinstance
-
-
-def conditional_singleton(class_):
-
-    def getinstance(*args, **kwargs):
-        use_singleton = kwargs.pop("use_singleton", True)
-        if use_singleton:
-            if class_ not in ao_instances:
-                ao_instances[class_] = class_(*args, **kwargs)
-            return ao_instances[class_]
-        else:
-            return class_(*args, **kwargs)
-
-    return getinstance
-
-
-def clear_singletons():
-    global ao_instances
-    ao_instances = {}


 def get_ISO_time():
```
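
The deleted helpers appear to have moved to a new `agentops/singleton.py` (note the updated `from .singleton import` in `agentops/client.py` above). A minimal sketch of the decorator's behavior, assuming it was relocated verbatim:

```python
from agentops.singleton import clear_singletons, conditional_singleton  # assumed new home

@conditional_singleton
class Tracker:  # hypothetical class, for illustration only
    def __init__(self, name="default"):
        self.name = name

a = Tracker()
b = Tracker()                     # cache hit: same instance as `a`
assert a is b

c = Tracker(use_singleton=False)  # "use_singleton" is popped before __init__ sees kwargs
assert c is not a

clear_singletons()                # reset the cache, e.g. between tests
assert Tracker() is not a
```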