feat: Add Fireworks Python SDK support #592

Open
wants to merge 22 commits into main
Changes from 5 commits

Commits (22)
b3035d7
feat: Add Fireworks provider integration
devin-ai-integration[bot] Dec 17, 2024
84e44b3
docs: Add Fireworks integration documentation
devin-ai-integration[bot] Dec 17, 2024
e822933
feat: Add Fireworks example notebook
devin-ai-integration[bot] Dec 17, 2024
58fee49
fix: Update Fireworks provider to handle streaming responses and fix …
devin-ai-integration[bot] Dec 18, 2024
2608f16
feat: Add Fireworks provider registration and example notebooks
devin-ai-integration[bot] Dec 18, 2024
566ced9
style: Fix formatting issues and add noqa for E402
devin-ai-integration[bot] Dec 18, 2024
8c8fa36
fix: Improve prompt formatting and streaming event handling
devin-ai-integration[bot] Dec 18, 2024
bb3b056
refactor: Update end_session to return detailed statistics
devin-ai-integration[bot] Dec 18, 2024
26c71cf
style: Apply ruff-format changes to improve code formatting
devin-ai-integration[bot] Dec 18, 2024
68b276e
fix: Improve prompt formatting and streaming event handling
devin-ai-integration[bot] Dec 18, 2024
e9a5a34
fix: Update Fireworks provider with proper sync/async handling
devin-ai-integration[bot] Dec 18, 2024
0f10f0a
style: Apply ruff-format changes to improve code formatting
devin-ai-integration[bot] Dec 18, 2024
9d22c6e
style: Apply additional ruff-format changes
devin-ai-integration[bot] Dec 18, 2024
a28c0f1
fix: Update notebook to handle async code without cell magic
devin-ai-integration[bot] Dec 18, 2024
49d3f27
fix: Update Fireworks provider and notebook for proper event tracking
devin-ai-integration[bot] Dec 18, 2024
74eed4a
test: Add comprehensive tests for FireworksProvider
devin-ai-integration[bot] Dec 18, 2024
c5d6c21
style: Apply ruff-format changes to improve code formatting
devin-ai-integration[bot] Dec 18, 2024
9cb419d
style: Apply ruff-format changes to improve code formatting
devin-ai-integration[bot] Dec 18, 2024
bf9355a
style: Apply ruff-format changes to consolidate multi-line statements
devin-ai-integration[bot] Dec 18, 2024
7203a76
style: Apply ruff-format changes to test file
devin-ai-integration[bot] Dec 18, 2024
001921e
fix: Update Fireworks provider tests to use correct session management
devin-ai-integration[bot] Dec 19, 2024
5a8341c
fix: Move session initialization to setup_method in Fireworks provide…
devin-ai-integration[bot] Dec 19, 2024
181 changes: 181 additions & 0 deletions agentops/llms/providers/fireworks.py
(Codecov flagged the added lines in this file as not covered by tests.)
@@ -0,0 +1,181 @@
import logging
from typing import Optional, AsyncGenerator
import pprint
from agentops.session import Session
from agentops.helpers import get_ISO_time
from agentops.event import LLMEvent
from agentops.enums import EventType
from .instrumented_provider import InstrumentedProvider

logger = logging.getLogger(__name__)

class FireworksProvider(InstrumentedProvider):
    """Provider for Fireworks.ai API."""

    def __init__(self, client):
        super().__init__(client)
        self._provider_name = "Fireworks"
        self._original_completion = None
        self._original_async_completion = None
        self._session = None  # Initialize session attribute
        logger.info(f"Initializing {self._provider_name} provider")

    def set_session(self, session: Session):
        """Set the session for event tracking."""
        self._session = session
        logger.debug(f"Set session {session.session_id} for {self._provider_name} provider")

    def handle_response(self, response, kwargs, init_timestamp, session: Optional[Session] = None) -> dict:
        """Handle the response from the Fireworks API."""
        if session:
            self._session = session
            logger.debug(f"Updated session to {session.session_id} for {self._provider_name} provider")

        try:
            # Handle streaming response
            if kwargs.get('stream', False):
                async def async_generator(stream):
                    async for chunk in stream:
                        try:
                            # Parse the chunk data
                            if hasattr(chunk, 'choices') and chunk.choices:
                                content = chunk.choices[0].delta.content if hasattr(chunk.choices[0].delta, 'content') else None
                            else:
                                # Handle raw string chunks from streaming response
                                content = chunk

                            if content:
                                # Create event data for streaming chunk
                                event = LLMEvent(
                                    event_type=EventType.LLM.value,
                                    init_timestamp=init_timestamp,
                                    end_timestamp=get_ISO_time(),
                                    model=kwargs.get('model', 'unknown'),
                                    prompt=str(kwargs.get('messages', [])),
                                    completion="[Streaming Response]",
                                    prompt_tokens=0,
                                    completion_tokens=0,
                                    cost=0.0
                                )
                                if self._session:
                                    self._session.record(event)
                                    logger.debug(f"Recorded streaming chunk for session {self._session.session_id}")
                                yield content
                        except Exception as e:
                            logger.error(f"Error processing streaming chunk: {str(e)}")
                            continue

                def generator(stream):
                    for chunk in stream:
                        try:
                            # Parse the chunk data
                            if hasattr(chunk, 'choices') and chunk.choices:
                                content = chunk.choices[0].delta.content if hasattr(chunk.choices[0].delta, 'content') else None
                            else:
                                # Handle raw string chunks from streaming response
                                content = chunk

                            if content:
                                # Create event data for streaming chunk
                                event = LLMEvent(
                                    event_type=EventType.LLM.value,
                                    init_timestamp=init_timestamp,
                                    end_timestamp=get_ISO_time(),
                                    model=kwargs.get('model', 'unknown'),
                                    prompt=str(kwargs.get('messages', [])),
                                    completion="[Streaming Response]",
                                    prompt_tokens=0,
                                    completion_tokens=0,
                                    cost=0.0
                                )
                                if self._session:
                                    self._session.record(event)
                                    logger.debug(f"Recorded streaming chunk for session {self._session.session_id}")
                                yield content
                        except Exception as e:
                            logger.error(f"Error processing streaming chunk: {str(e)}")
                            continue

                if hasattr(response, '__aiter__'):
                    return async_generator(response)
                else:
                    return generator(response)

🤖 Bug Fix:

Missing Asynchronous Generator for Streaming Responses
The current implementation attempts to handle streaming responses by checking for the __aiter__ attribute to decide whether to use an asynchronous generator. However, the async_generator function is not defined, which will lead to runtime errors when handling asynchronous responses. This is a critical issue as it can break the functionality of the FireworksProvider when dealing with asynchronous data streams.

Actionable Suggestions:

  • Define the async_generator function to properly handle asynchronous streaming responses. This function should mirror the logic of the synchronous generator function but should be capable of handling asynchronous iteration.
  • Ensure that the async_generator function correctly yields content from the asynchronous response, handling any exceptions that may occur during iteration.

By implementing these changes, the code will be robust against both synchronous and asynchronous streaming responses, ensuring consistent functionality across different response types.

🔧 Suggested Code Diff:

async def async_generator(stream):
    async for chunk in stream:
        try:
            # Parse the chunk data
            if hasattr(chunk, 'choices') and chunk.choices:
                content = chunk.choices[0].delta.content if hasattr(chunk.choices[0].delta, 'content') else None
            else:
                # Handle raw string chunks from streaming response
                content = chunk

            if content:
                # Create event data for streaming chunk
                event = LLMEvent(
                    event_type=EventType.LLM.value,
                    init_timestamp=init_timestamp,
                    end_timestamp=get_ISO_time(),
                    model=kwargs.get('model', 'unknown'),
                    prompt=str(kwargs.get('messages', [])),
                    completion="[Streaming Response]",
                    prompt_tokens=0,
                    completion_tokens=0,
                    cost=0.0
                )
                if self._session:
                    self._session.record(event)
                    logger.debug(f"Recorded streaming chunk for session {self._session.session_id}")
                yield content
        except Exception as e:
            logger.error(f"Error processing streaming chunk: {str(e)}")
            continue
📜 Guidelines

• Use type annotations to improve code clarity and maintainability
• Follow PEP 8 style guide for consistent code formatting
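
As a small illustration of the type-annotation guideline, the streaming helpers could carry hints along these lines (a sketch only; names and signatures are hypothetical, not taken from this diff):

```python
# Illustrative only: the kind of type hints the guideline suggests for the streaming helpers.
from typing import Any, AsyncGenerator, AsyncIterable, Generator, Iterable


async def async_generator(stream: AsyncIterable[Any]) -> AsyncGenerator[str, None]:
    # Yield each chunk as text from an asynchronous stream.
    async for chunk in stream:
        yield str(chunk)


def generator(stream: Iterable[Any]) -> Generator[str, None, None]:
    # Yield each chunk as text from a synchronous stream.
    for chunk in stream:
        yield str(chunk)
```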



            # Handle non-streaming response
            if hasattr(response, 'choices') and response.choices:
                content = response.choices[0].message.content if hasattr(response.choices[0], 'message') else ""

                # Create event data for non-streaming response
                event = LLMEvent(
                    event_type=EventType.LLM.value,
                    init_timestamp=init_timestamp,
                    end_timestamp=get_ISO_time(),
                    model=kwargs.get('model', 'unknown'),
                    prompt=str(kwargs.get('messages', [])),
                    completion=content,
                    prompt_tokens=0,
                    completion_tokens=0,
                    cost=0.0
                )
                if self._session:
                    self._session.record(event)
                    logger.debug(f"Recorded non-streaming response for session {self._session.session_id}")

            return response

        except Exception as e:
            logger.error(f"Error handling Fireworks response: {str(e)}")
            raise

    def override(self):
        """Override Fireworks API methods with instrumented versions."""
        logger.info(f"Overriding {self._provider_name} provider methods")

        # Store original methods
        self._original_completion = self.client.chat.completions.create
        self._original_async_completion = getattr(self.client.chat.completions, 'acreate', None)

        # Override methods
        self._override_fireworks_completion()
        if self._original_async_completion:
            self._override_fireworks_async_completion()

    def _override_fireworks_completion(self):
        """Override synchronous completion method."""
        original_create = self._original_completion
        provider = self

        def patched_function(*args, **kwargs):
            try:
                init_timestamp = get_ISO_time()
                response = original_create(*args, **kwargs)
                return provider.handle_response(response, kwargs, init_timestamp, provider._session)
            except Exception as e:
                logger.error(f"Error in Fireworks completion: {str(e)}")
                raise

        self.client.chat.completions.create = patched_function

    def _override_fireworks_async_completion(self):
        """Override asynchronous completion method."""
        original_acreate = self._original_async_completion
        provider = self

        async def patched_function(*args, **kwargs):
            try:
                init_timestamp = get_ISO_time()
                response = await original_acreate(*args, **kwargs)
                return provider.handle_response(response, kwargs, init_timestamp, provider._session)
            except Exception as e:
                logger.error(f"Error in Fireworks async completion: {str(e)}")
                raise

        self.client.chat.completions.acreate = patched_function

    def undo_override(self):
        """Restore original Fireworks API methods."""
        logger.info(f"Restoring original {self._provider_name} provider methods")
        if self._original_completion:
            self.client.chat.completions.create = self._original_completion
        if self._original_async_completion:
            self.client.chat.completions.acreate = self._original_async_completion
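
For reviewers who want to exercise the new provider directly, here is a minimal sketch of wiring it up by hand. It assumes the fireworks-ai SDK is installed with `FIREWORKS_API_KEY` set and that you have an AgentOps API key; in normal use the tracker changes below attach the provider automatically.

```python
# Illustrative sketch only — not part of this PR's diff.
import agentops
from fireworks.client import Fireworks
from agentops.llms.providers.fireworks import FireworksProvider

agentops.init("<AGENTOPS_API_KEY>")  # starts a session for event recording
client = Fireworks()  # reads FIREWORKS_API_KEY from the environment

provider = FireworksProvider(client)
provider.override()  # patches chat.completions.create (and acreate, if present)

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

provider.undo_override()  # restores the original client methods
agentops.end_session("Success")
```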
14 changes: 14 additions & 0 deletions agentops/llms/tracker.py
(Codecov flagged the added lines in this file as not covered by tests.)
@@ -16,6 +16,7 @@
from .providers.anthropic import AnthropicProvider
from .providers.mistral import MistralProvider
from .providers.ai21 import AI21Provider
from .providers.fireworks import FireworksProvider

original_func = {}
original_create = None
@@ -48,6 +49,9 @@
    "mistralai": {
        "1.0.1": ("chat.complete", "chat.stream"),
    },
    "fireworks-ai": {
        "0.1.0": ("chat.completions.create",),
    },
    "ai21": {
        "2.0.0": (
            "chat.completions.create",
@@ -155,6 +159,15 @@
        else:
            logger.warning(f"Only AI21>=2.0.0 supported. v{module_version} found.")

        if api == "fireworks-ai":
            module_version = version(api)

            if Version(module_version) >= parse("0.1.0"):
                provider = FireworksProvider(self.client)
                provider.override()
            else:
                logger.warning(f"Only Fireworks>=0.1.0 supported. v{module_version} found.")

        if api == "llama_stack_client":
            module_version = version(api)
Comment on lines 159 to 173

🤖 Bug Fix:

Enhance Error Handling for Unsupported FireworksProvider Versions
The current implementation logs a warning when the FireworksProvider version is below 0.1.0, but it does not prevent the system from attempting to use an unsupported provider. This could lead to unexpected behavior or errors later in the execution. To address this, it's recommended to implement a fallback mechanism or raise an exception to ensure the system does not proceed with an unsupported version. This will improve the robustness and reliability of the code.

Actionable Steps:

  • Implement a fallback mechanism that either uses a default provider or skips the operation if the version is unsupported.
  • Alternatively, raise an exception to halt execution and notify the user of the unsupported version.
  • Ensure that any changes are accompanied by appropriate unit tests to verify the new behavior.

🔧 Suggested Code Diff:

else:
    logger.warning(f"Only Fireworks>=0.1.0 supported. v{module_version} found.")
+    raise RuntimeError(f"Unsupported Fireworks version: {module_version}. Please upgrade to >=0.1.0.")
📜 Guidelines

• Use exceptions for error handling rather than return codes
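
Following up on the comment's request for unit tests, here is a hypothetical pytest sketch of the behaviour the suggestion asks for. The helper below mirrors the suggested gate and is illustrative; it is not code from this repo.

```python
# Hypothetical sketch — mirrors the suggested RuntimeError gate, not the repo's tracker code.
import pytest
from packaging.version import Version, parse


def check_fireworks_version(module_version: str) -> None:
    """Raise instead of silently warning when the installed SDK is too old."""
    if Version(module_version) < parse("0.1.0"):
        raise RuntimeError(f"Unsupported Fireworks version: {module_version}. Please upgrade to >=0.1.0.")


def test_unsupported_fireworks_version_raises():
    with pytest.raises(RuntimeError, match="Unsupported Fireworks version"):
        check_fireworks_version("0.0.9")


def test_supported_fireworks_version_passes():
    check_fireworks_version("0.15.0")  # should not raise
```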


@@ -174,3 +187,4 @@
        MistralProvider(self.client).undo_override()
        AI21Provider(self.client).undo_override()
        LlamaStackClientProvider(self.client).undo_override()
        FireworksProvider(self.client).undo_override()

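To see locally which branch of the new version gate your environment would take, a quick standalone check mirroring the tracker's logic (assumes fireworks-ai is installed):

```python
# Standalone check mirroring tracker.py's version gate; assumes fireworks-ai is installed.
from importlib.metadata import version
from packaging.version import Version, parse

module_version = version("fireworks-ai")
if Version(module_version) >= parse("0.1.0"):
    print(f"fireworks-ai {module_version}: FireworksProvider would be enabled")
else:
    print(f"Only Fireworks>=0.1.0 supported. v{module_version} found.")
```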
174 changes: 174 additions & 0 deletions docs/v1/integrations/fireworks.mdx
@@ -0,0 +1,174 @@
---
title: Fireworks
description: "AgentOps provides support for Fireworks AI's LLM models"
---

import CodeTooltip from '/snippets/add-code-tooltip.mdx'
import EnvTooltip from '/snippets/add-env-tooltip.mdx'

<Card title="Fireworks" icon="robot" href="https://fireworks.ai">
First class support for Fireworks AI models including Llama-v3
</Card>

<Steps>
<Step title="Install the AgentOps SDK">
<CodeGroup>
```bash pip
pip install agentops
```
```bash poetry
poetry add agentops
```
</CodeGroup>
</Step>
<Step title="Install the Fireworks SDK">
<CodeGroup>
```bash pip
pip install --upgrade fireworks-ai
```
```bash poetry
poetry add fireworks-ai
```
</CodeGroup>
</Step>
<Step title="Add 3 lines of code">
<CodeTooltip/>
<CodeGroup>
```python python
import agentops
from fireworks.client import Fireworks

agentops.init(<INSERT YOUR API KEY HERE>)
client = Fireworks()
...
# End of program (e.g. main.py)
agentops.end_session("Success")
```
</CodeGroup>
<EnvTooltip />
<CodeGroup>
```python .env
AGENTOPS_API_KEY=<YOUR API KEY>
FIREWORKS_API_KEY=<YOUR FIREWORKS API KEY>
```
</CodeGroup>
Read more about environment variables in [Advanced Configuration](/v1/usage/advanced-configuration)
</Step>
<Step title="Run your Agent">
Execute your program and visit [app.agentops.ai/drilldown](https://app.agentops.ai/drilldown) to observe your Agent! 🕵️
<Tip>
After your run, AgentOps prints a clickable URL to the console that links directly to your session in the Dashboard
</Tip>
<div/>
<Frame type="glass" caption="Clickable link to session">
<img height="200" src="https://raw.githubusercontent.com/AgentOps-AI/agentops/refs/heads/main/docs/images/external/app_screenshots/session-replay.png?raw=true" />
</Frame>
</Step>
</Steps>

## Full Examples

<CodeGroup>
```python sync
from fireworks.client import Fireworks
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
client = Fireworks()

response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    messages=[{
        "role": "user",
        "content": "Write a haiku about AI and humans working together"
    }]
)

print(response.choices[0].message.content)
agentops.end_session('Success')
```

```python async
from fireworks.client import AsyncFireworks
import agentops
import asyncio

async def main():
    agentops.init(<INSERT YOUR API KEY HERE>)
    client = AsyncFireworks()

    response = await client.chat.completions.create(
        model="accounts/fireworks/models/llama-v3p1-8b-instruct",
        messages=[{
            "role": "user",
            "content": "Write a haiku about AI and humans working together"
        }]
    )

    print(response.choices[0].message.content)
    agentops.end_session('Success')

asyncio.run(main())
```

</CodeGroup>

### Streaming examples

<CodeGroup>
```python sync
from fireworks.client import Fireworks
import agentops

agentops.init(<INSERT YOUR API KEY HERE>)
client = Fireworks()

stream = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",
    stream=True,
    messages=[{
        "role": "user",
        "content": "Write a haiku about AI and humans working together"
    }],
)

for chunk in stream:
print(chunk.choices[0].delta.content or "", end="")

agentops.end_session('Success')
```

```python async
from fireworks.client import AsyncFireworks
import agentops
import asyncio

async def main():
    agentops.init(<INSERT YOUR API KEY HERE>)
    client = AsyncFireworks()

    stream = await client.chat.completions.create(
        model="accounts/fireworks/models/llama-v3p1-8b-instruct",
        stream=True,
        messages=[{
            "role": "user",
            "content": "Write a haiku about AI and humans working together"
        }],
    )

    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="")

    agentops.end_session('Success')

asyncio.run(main())
```

</CodeGroup>

<script type="module" src="/scripts/github_stars.js"></script>
<script type="module" src="/scripts/scroll-img-fadein-animation.js"></script>
<script type="module" src="/scripts/button_heartbeat_animation.js"></script>
<script type="css" src="/styles/styles.css"></script>
<script type="module" src="/scripts/adjust_api_dynamically.js"></script>