diff --git a/docs/v1/examples/langchain.mdx b/docs/v1/examples/langchain.mdx
index 42b0e09e..b8065e25 100644
--- a/docs/v1/examples/langchain.mdx
+++ b/docs/v1/examples/langchain.mdx
@@ -5,4 +5,325 @@ mode: "wide"
---
_View Notebook on Github_
-{/* SOURCE_FILE: examples/langchain_examples/langchain_examples.ipynb */}
\ No newline at end of file
+
+{/* SOURCE_FILE: examples/langchain_examples/langchain_examples.ipynb */}
+
+# AgentOps Langchain Agent Implementation
+
+Using AgentOps monitoring with Langchain is simple. We've created a LangchainCallbackHandler that will do all of the heavy lifting!
+
+First let's install the required packages
+
+
+```python
+%pip install langchain==0.2.9
+%pip install langchain_openai
+%pip install -U agentops
+%pip install -U python-dotenv
+```
+
+Then import them
+
+
+```python
+import os
+from langchain_openai import ChatOpenAI
+from langchain.agents import tool, AgentExecutor, create_openai_tools_agent
+from dotenv import load_dotenv
+from langchain_core.prompts import ChatPromptTemplate
+```
+
+The only difference when using AgentOps is that we'll also import this special Callback Handler
+
+
+```python
+from agentops.partners.langchain_callback_handler import (
+ LangchainCallbackHandler as AgentOpsLangchainCallbackHandler,
+)
+```
+
+Next, we'll set our API keys. There are several ways to do this; the code below is just the most foolproof way for the purposes of this notebook. It accounts for both users who use environment variables and those who just want to set the API key directly in this notebook.
+
+[Get an AgentOps API key](https://agentops.ai/settings/projects)
+
+1. Create an environment variable in a .env file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
+
+2. Replace `` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
+
+
+```python
+load_dotenv()
+AGENTOPS_API_KEY = os.environ.get("AGENTOPS_API_KEY")
+OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
+```
+
+This is where AgentOps comes into play. Before creating our LLM instance via Langchain, first we'll create an instance of the AO LangchainCallbackHandler. After the handler is initialized, a session will be recorded automatically.
+
+Pass in your API key, and optionally any tags to describe this session for easier lookup in the AO dashboard.
+
+
+```python
+agentops_handler = AgentOpsLangchainCallbackHandler(
+ api_key=AGENTOPS_API_KEY, default_tags=["Langchain Example"]
+)
+
+llm = ChatOpenAI(
+ openai_api_key=OPENAI_API_KEY, callbacks=[agentops_handler], model="gpt-3.5-turbo"
+)
+
+# You must pass in a callback handler to record your agent
+llm.callbacks = [agentops_handler]
+
+prompt = ChatPromptTemplate.from_messages(
+ [
+ ("system", "You are a helpful assistant. Respond only in Spanish."),
+ ("human", "{input}"),
+ # Placeholders fill up a **list** of messages
+ ("placeholder", "{agent_scratchpad}"),
+ # ("tool_names", "find_movie")
+ ]
+)
+```
+
+You can also retrieve the `session_id` of the newly created session.
+
+
+```python
+print("Agent Ops session ID: " + str(agentops_handler.current_session_ids))
+```
+
+Agents generally use tools. Let's define a simple tool here. Tool usage is also recorded.
+
+
+```python
+@tool
+def find_movie(genre: str) -> str:
+ """Find available movies"""
+ if genre == "drama":
+ return "Dune 2"
+ else:
+ return "Pineapple Express"
+
+
+tools = [find_movie]
+```
+
+For each tool, you also need to add the callback handler
+
+
+```python
+for t in tools:
+ t.callbacks = [agentops_handler]
+```
+
+Bind the tools to our LLM
+
+
+```python
+llm_with_tools = llm.bind_tools([find_movie])
+```
+
+Finally, let's create our agent! Pass in the callback handler to the agent, and all the actions will be recorded in the AO Dashboard
+
+
+```python
+agent = create_openai_tools_agent(llm, tools, prompt)
+agent_executor = AgentExecutor(agent=agent, tools=tools)
+```
+
+
+```python
+agent_executor.invoke(
+ {"input": "What comedies are playing?"}, config={"callback": [agentops_handler]}
+)
+```
+
+## Check your session
+Finally, check your run on [AgentOps](https://app.agentops.ai)
+
+Now if you look in the AgentOps dashboard, you will see a session recorded with the LLM calls and tool usage.
+
+## Langchain V0.1 Example
+This example is out of date. You can uncomment all of the following cells and the example will run, but AgentOps is deprecating support for this version.
+
+
+```python
+# %pip install langchain==0.1.6
+```
+
+
+```python
+# import os
+# from langchain_openai import ChatOpenAI
+# from langchain.agents import initialize_agent, AgentType
+# from langchain.agents import tool
+```
+
+The only difference when using AgentOps is that we'll also import this special Callback Handler
+
+
+```python
+# from agentops.partners.langchain_callback_handler import (
+# LangchainCallbackHandler as AgentOpsLangchainCallbackHandler,
+# )
+```
+
+Next, we'll grab our two API keys.
+
+
+```python
+# from dotenv import load_dotenv
+
+# load_dotenv()
+```
+
+This is where AgentOps comes into play. Before creating our LLM instance via Langchain, first we'll create an instance of the AO LangchainCallbackHandler. After the handler is initialized, a session will be recorded automatically.
+
+Pass in your API key, and optionally any tags to describe this session for easier lookup in the AO dashboard.
+
+
+```python
+# AGENTOPS_API_KEY = os.environ.get("AGENTOPS_API_KEY")
+# OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
+
+# agentops_handler = AgentOpsLangchainCallbackHandler(
+# api_key=AGENTOPS_API_KEY, default_tags=["Langchain Example"]
+# )
+
+# llm = ChatOpenAI(
+# openai_api_key=OPENAI_API_KEY, callbacks=[agentops_handler], model="gpt-3.5-turbo"
+# )
+```
+
+You can also retrieve the `session_id` of the newly created session.
+
+
+```python
+# print("Agent Ops session ID: " + str(agentops_handler.current_session_ids))
+```
+
+Agents generally use tools. Let's define a simple tool here. Tool usage is also recorded.
+
+
+```python
+# @tool
+# def find_movie(genre) -> str:
+# """Find available movies"""
+# if genre == "drama":
+# return "Dune 2"
+# else:
+# return "Pineapple Express"
+
+
+# tools = [find_movie]
+```
+
+For each tool, you also need to add the callback handler
+
+
+```python
+# for t in tools:
+# t.callbacks = [agentops_handler]
+```
+
+Finally, let's use our agent! Pass in the callback handler to the agent, and all the actions will be recorded in the AO Dashboard
+
+
+```python
+# agent = initialize_agent(
+# tools,
+# llm,
+# agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
+# verbose=True,
+# callbacks=[
+# agentops_handler
+# ], # You must pass in a callback handler to record your agent
+# handle_parsing_errors=True,
+# )
+```
+
+
+```python
+# agent.invoke("What comedies are playing?", callbacks=[agentops_handler])
+```
+
+## Check your session
+Finally, check your run on [AgentOps](https://app.agentops.ai)
+
+# Async Agents
+
+Several Langchain agents require async callback handlers. AgentOps also supports this.
+
+
+```python
+# import os
+# from langchain.chat_models import ChatOpenAI
+# from langchain.agents import initialize_agent, AgentType
+# from langchain.agents import tool
+```
+
+
+```python
+# from agentops.partners.langchain_callback_handler import (
+# AsyncLangchainCallbackHandler as AgentOpsAsyncLangchainCallbackHandler,
+# )
+```
+
+
+```python
+# from dotenv import load_dotenv
+
+# load_dotenv()
+
+# AGENTOPS_API_KEY = os.environ.get("AGENTOPS_API_KEY")
+# OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
+```
+
+
+```python
+# agentops_handler = AgentOpsAsyncLangchainCallbackHandler(
+# api_key=AGENTOPS_API_KEY, tags=["Async Example"]
+# )
+
+# llm = ChatOpenAI(
+# openai_api_key=OPENAI_API_KEY, callbacks=[agentops_handler], model="gpt-3.5-turbo"
+# )
+
+# print("Agent Ops session ID: " + str(await agentops_handler.session_id))
+```
+
+
+```python
+# @tool
+# def find_movie(genre) -> str:
+# """Find available movies"""
+# if genre == "drama":
+# return "Dune 2"
+# else:
+# return "Pineapple Express"
+
+
+# tools = [find_movie]
+
+# for t in tools:
+# t.callbacks = [agentops_handler]
+```
+
+
+```python
+# agent = initialize_agent(
+# tools,
+# llm,
+# agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
+# verbose=True,
+# handle_parsing_errors=True,
+# callbacks=[agentops_handler],
+# )
+
+# await agent.arun("What comedies are playing?")
+```
diff --git a/docs/v1/examples/multi_agent.mdx b/docs/v1/examples/multi_agent.mdx
index 40133490..8f51035c 100644
--- a/docs/v1/examples/multi_agent.mdx
+++ b/docs/v1/examples/multi_agent.mdx
@@ -5,4 +5,142 @@ mode: "wide"
---
_View Notebook on Github_
-{/* SOURCE_FILE: examples/multi_agent_example.ipynb */}
\ No newline at end of file
+
+{/* SOURCE_FILE: examples/multi_agent_example.ipynb */}
+
+# Multi-Agent Support
+This is an example implementation of tracking events from two separate agents
+
+First let's install the required packages
+
+
+```python
+%pip install -U openai
+%pip install -U agentops
+%pip install -U python-dotenv
+```
+
+Then import them
+
+
+```python
+import agentops
+from agentops import track_agent
+from openai import OpenAI
+import os
+from dotenv import load_dotenv
+import logging
+from IPython.display import display, Markdown
+```
+
+Next, we'll set our API keys. There are several ways to do this; the code below is just the most foolproof way for the purposes of this notebook. It accounts for both users who use environment variables and those who just want to set the API key directly in this notebook.
+
+[Get an AgentOps API key](https://agentops.ai/settings/projects)
+
+1. Create an environment variable in a .env file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
+
+2. Replace `` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
+
+
+```python
+load_dotenv()
+OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or ""
+AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or ""
+logging.basicConfig(
+ level=logging.DEBUG
+) # this will let us see that calls are assigned to an agent
+```
+
+
+```python
+agentops.init(AGENTOPS_API_KEY, default_tags=["multi-agent-notebook"])
+openai_client = OpenAI(api_key=OPENAI_API_KEY)
+```
+
+Now let's create a few agents!
+
+
+```python
+@track_agent(name="qa")
+class QaAgent:
+ def completion(self, prompt: str):
+ res = openai_client.chat.completions.create(
+ model="gpt-3.5-turbo",
+ messages=[
+ {
+ "role": "system",
+ "content": "You are a qa engineer and only output python code, no markdown tags.",
+ },
+ {"role": "user", "content": prompt},
+ ],
+ temperature=0.5,
+ )
+ return res.choices[0].message.content
+
+
+@track_agent(name="engineer")
+class EngineerAgent:
+ def completion(self, prompt: str):
+ res = openai_client.chat.completions.create(
+ model="gpt-3.5-turbo",
+ messages=[
+ {
+ "role": "system",
+ "content": "You are a software engineer and only output python code, no markdown tags.",
+ },
+ {"role": "user", "content": prompt},
+ ],
+ temperature=0.5,
+ )
+ return res.choices[0].message.content
+```
+
+
+```python
+qa = QaAgent()
+engineer = EngineerAgent()
+```
+
+Now we have our agents, and we've tagged them with the `@track_agent` decorator. Any LLM calls that go through these classes will now be tagged as agent calls in AgentOps.
+
+Let's use these agents!
+
+
+```python
+generated_func = engineer.completion("python function to test prime number")
+```
+
+
+```python
+display(Markdown("```python\n" + generated_func + "\n```"))
+```
+
+
+```python
+generated_test = qa.completion(
+ "Write a python unit test that test the following function: \n " + generated_func
+)
+```
+
+
+```python
+display(Markdown("```python\n" + generated_test + "\n```"))
+```
+
+Perfect! It generated the code as expected, and in the DEBUG logs, you can see that the calls were made by agents named "engineer" and "qa"!
+
+Let's verify one more thing! If we make an LLM call outside of the context of a tracked agent, we want to make sure it gets assigned to the Default Agent.
+
+
+```python
+res = openai_client.chat.completions.create(
+ model="gpt-3.5-turbo",
+ messages=[
+ {"role": "system", "content": "You are not a tracked agent"},
+ {"role": "user", "content": "Say hello"},
+ ],
+)
+res.choices[0].message.content
+```
+
+You'll notice that we didn't log an agent name, so the AgentOps backend will assign it to the Default Agent for the session!
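+
+Optionally, you can close out the session with a result, just like in the other examples — a minimal sketch:
+
+
+```python
+# End the session so it shows up as completed in the AgentOps dashboard
+agentops.end_session("Success")
+```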
diff --git a/docs/v1/examples/multion.mdx b/docs/v1/examples/multion.mdx
index 6b1cbefe..6905fdb9 100644
--- a/docs/v1/examples/multion.mdx
+++ b/docs/v1/examples/multion.mdx
@@ -6,4 +6,142 @@ mode: "wide"
_View All Notebooks on Github_
+
{/* SOURCE_FILE: examples/multion_examples/Autonomous_web_browsing.ipynb */}
+
+# MultiOn Tracking Web Browse Actions
+
+
+Agents using MultiOn can launch and control remote or local web browsers to perform actions and retrieve context using natural language commands. With AgentOps, MultiOn events such as browse, retrieve, and step are automatically tracked.
+
+
+![AgentOps MultiOn Browse](https://github.com/AgentOps-AI/agentops/blob/main/docs/images/agentops-multion-browse.gif?raw=true)
+
+Events and LLM calls elsewhere in your Python program will be tracked as well.
+
+First let's install the required packages
+
+
+```python
+%pip install -U multion
+%pip install -U agentops
+%pip install -U openai
+%pip install -U python-dotenv
+```
+
+Then import them
+
+
+```python
+from multion.client import MultiOn
+from multion.core.request_options import RequestOptions
+import openai
+import agentops
+import os
+from dotenv import load_dotenv
+```
+
+Next, we'll set our API keys. There are several ways to do this; the code below is just the most foolproof way for the purposes of this notebook. It accounts for both users who use environment variables and those who just want to set the API key directly in this notebook.
+
+[Get an AgentOps API key](https://agentops.ai/settings/projects)
+
+1. Create an environment variable in a .env file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
+
+2. Replace `` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
+
+
+```python
+load_dotenv()
+MULTION_API_KEY = os.getenv("MULTION_API_KEY") or ""
+AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or ""
+OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or ""
+```
+
+### Tracking MultiOn events with AgentOps
+
+When an `agentops_api_key` is provided, MultiOn will automatically start an AgentOps session and record events.
+
+
+```python
+multion = MultiOn(
+ api_key=MULTION_API_KEY,
+ agentops_api_key=AGENTOPS_API_KEY,
+)
+cmd = "what three things do i get with agentops"
+request_options = RequestOptions(
+ timeout_in_seconds=60, max_retries=4, additional_headers={"test": "ing"}
+)
+
+browse_response = multion.browse(
+ cmd="what three things do i get with agentops",
+ url="https://www.agentops.ai/",
+ max_steps=4,
+ include_screenshot=True,
+ request_options=request_options,
+)
+
+print(browse_response.message)
+
+# End session to see your dashboard
+agentops.end_session("Success")
+```
+
+### Linking MultiOn events to an existing AgentOps session
+When running `agentops.init()`, be sure to set `auto_start_session=False`. MultiOn will automatically launch AgentOps sessions by default, but by setting auto start to false, you can configure your AgentOps client independently.
+
+
+```python
+agentops.init(
+ AGENTOPS_API_KEY, auto_start_session=False, default_tags=["MultiOn browse example"]
+)
+```
+
+Now, we can launch a MultiOn browse event. This event will automatically get added to your AgentOps session.
+
+
+```python
+multion = MultiOn(
+ api_key=MULTION_API_KEY,
+ agentops_api_key=AGENTOPS_API_KEY,
+)
+cmd = "what three things do i get with agentops"
+request_options = RequestOptions(
+ timeout_in_seconds=60, max_retries=4, additional_headers={"test": "ing"}
+)
+
+browse_response = multion.browse(
+ cmd="what three things do i get with agentops",
+ url="https://www.agentops.ai/",
+ max_steps=4,
+ include_screenshot=True,
+ request_options=request_options,
+)
+
+print(browse_response.message)
+```
+
+Let's use OpenAI to format the browse output as a markdown table
+
+
+```python
+messages = [
+ {
+ "role": "user",
+ "content": f"Format this data as a markdown table: {browse_response.message}",
+ }
+]
+client = openai.OpenAI()
+response = client.chat.completions.create(messages=messages, model="gpt-3.5-turbo")
+
+print(response.choices[0].message.content)
+```
+
+
+```python
+agentops.end_session("Success")
+```
+
+## Check your session
+Check your session on [AgentOps](https://app.agentops.ai). This session should include the MultiOn browse action and the OpenAI call.
+
+![image.png](Autonomous_web_browsing_files/image.png)
diff --git a/docs/v1/examples/recording_events.mdx b/docs/v1/examples/recording_events.mdx
index 7d08f7f7..3bdee3ca 100644
--- a/docs/v1/examples/recording_events.mdx
+++ b/docs/v1/examples/recording_events.mdx
@@ -5,4 +5,134 @@ mode: "wide"
---
_View Notebook on Github_
-{/* SOURCE_FILE: examples/recording-events.ipynb */}
\ No newline at end of file
+
+{/* SOURCE_FILE: examples/recording-events.ipynb */}
+
+# Recording Events
+AgentOps has a number of different [Event Types](https://docs.agentops.ai/v1/details/events)
+
+We automatically instrument your LLM calls from OpenAI, LiteLLM, Cohere, and more. Just make sure their SDKs are imported before initializing AgentOps, as shown below.
+
+First let's install the required packages
+
+
+```python
+%pip install -U openai
+%pip install -U agentops
+%pip install -U python-dotenv
+```
+
+Then import them
+
+
+```python
+from openai import OpenAI
+import agentops
+import os
+from dotenv import load_dotenv
+```
+
+Next, we'll set our API keys. There are several ways to do this; the code below is just the most foolproof way for the purposes of this notebook. It accounts for both users who use environment variables and those who just want to set the API key directly in this notebook.
+
+[Get an AgentOps API key](https://agentops.ai/settings/projects)
+
+1. Create an environment variable in a .env file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
+
+2. Replace `` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
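+
+If you go with the second option, a minimal sketch looks like this (the key below is a placeholder — replace it with your own and keep it out of version control):
+
+
+```python
+# Alternative to the AGENTOPS_API_KEY environment variable — pass the key directly:
+# agentops.init(api_key="<your-agentops-api-key>")
+```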
+
+
+```python
+load_dotenv()
+OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or ""
+AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or ""
+```
+
+
+```python
+# Initialize the client, which will automatically start a session
+agentops.init()
+
+# Optionally, we can add default tags to all sessions
+# agentops.init(default_tags=['Hello Tracker'])
+
+openai = OpenAI()
+
+messages = [{"role": "user", "content": "Hello"}]
+response = openai.chat.completions.create(
+ model="gpt-3.5-turbo", messages=messages, temperature=0.5
+)
+print(response.choices[0].message.content)
+```
+
+Click the AgentOps link above to see your session!
+
+## Action Event
+
+AgentOps allows you to record other actions. The easiest way to record actions is through AgentOps' decorators.
+
+
+```python
+from agentops import record_action
+
+
+@record_action("add numbers")
+def add(x, y):
+ return x + y
+
+
+add(2, 4)
+```
+
+We can also manually craft an event exactly the way we want by creating and recording an `ActionEvent`
+
+
+```python
+from agentops import ActionEvent
+
+agentops.record(
+ ActionEvent(
+ action_type="Agent says hello", params={"message": "Hi"}, returns="Hi Back!"
+ )
+)
+```
+
+## Tool Event
+Agents use tools. Tool usage is useful to track, along with information such as the tool name, end status, and runtime. To record tool usage, you can create and record a `ToolEvent`, similar to the events above.
+
+
+```python
+from agentops import ToolEvent, record
+
+
+def scrape_website(url: str):
+ tool_event = ToolEvent(
+ name="scrape_website", params={"url": url}
+ ) # the start timestamp is set when the obj is created
+ result = "scraped data" # perform tool logic
+ tool_event.returns = result
+ record(tool_event)
+```
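+
+For example, calling the function records the `ToolEvent` for the current session (the URL here is just an illustrative placeholder):
+
+
+```python
+scrape_website("https://example.com")
+```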
+
+## Error Events
+Error events can be used alone or in reference to another event. Let's add a try/except block to the code above.
+
+
+```python
+from agentops import ToolEvent, record, ErrorEvent
+
+
+def scrape_website(url: str):
+ tool_event = ToolEvent(
+ name="scrape_website", params={"url": url}
+ ) # the start timestamp is set when the obj is created
+
+ try:
+ 1 / 0 # Ooops! Something went wrong
+ except Exception as e:
+ record(ErrorEvent(exception=e, trigger_event=tool_event))
+
+
+scrape_website("https://app.agentops.ai")
+
+agentops.end_session("Success")
+```
diff --git a/docs/v1/examples/simple_agent.mdx b/docs/v1/examples/simple_agent.mdx
index 07a41318..48dde402 100644
--- a/docs/v1/examples/simple_agent.mdx
+++ b/docs/v1/examples/simple_agent.mdx
@@ -5,4 +5,123 @@ mode: "wide"
---
_View Notebook on Github_
+
{/* SOURCE_FILE: examples/openai-gpt.ipynb */}
+
+# AgentOps Basic Monitoring
+This is an example of how to use the AgentOps library for basic Agent monitoring with OpenAI's GPT
+
+First let's install the required packages
+
+
+```python
+%pip install -U openai
+%pip install -U agentops
+%pip install -U python-dotenv
+```
+
+Then import them
+
+
+```python
+from openai import OpenAI
+import agentops
+import os
+from dotenv import load_dotenv
+```
+
+Next, we'll set our API keys. There are several ways to do this; the code below is just the most foolproof way for the purposes of this notebook. It accounts for both users who use environment variables and those who just want to set the API key directly in this notebook.
+
+[Get an AgentOps API key](https://agentops.ai/settings/projects)
+
+1. Create an environment variable in a .env file or other method. By default, the AgentOps `init()` function will look for an environment variable named `AGENTOPS_API_KEY`. Or...
+
+2. Replace `` below and pass in the optional `api_key` parameter to the AgentOps `init(api_key=...)` function. Remember not to commit your API key to a public repo!
+
+
+```python
+load_dotenv()
+OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or ""
+AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or ""
+```
+
+The AgentOps library is designed to be plug-and-play with the OpenAI client, giving you monitoring with minimal setup effort.
+
+
+```python
+openai = OpenAI(api_key=OPENAI_API_KEY)
+agentops.init(AGENTOPS_API_KEY, default_tags=["openai-gpt-notebook"])
+```
+
+Now just use OpenAI as you would normally!
+
+## Single Session with ChatCompletion
+
+
+```python
+message = [{"role": "user", "content": "Write a 12 word poem about secret agents."}]
+response = openai.chat.completions.create(
+ model="gpt-3.5-turbo", messages=message, temperature=0.5, stream=False
+)
+print(response.choices[0].message.content)
+```
+
+Make sure to end your session with a `Result` (Success|Fail|Indeterminate) for better tracking
+
+
+```python
+agentops.end_session("Success")
+```
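+
+If the run didn't go as planned, you can pass one of the other result values instead — for example:
+
+
+```python
+# agentops.end_session("Fail")
+```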
+
+Now if you check the AgentOps dashboard, you should see information related to this run!
+
+# Events
+Additionally, you can track custom events via AgentOps.
+Let's start a new session and record some events
+
+
+```python
+# Create new session
+agentops.start_session(tags=["openai-gpt-notebook-events"])
+```
+
+The easiest way to record actions is through AgentOps' decorators.
+
+
+```python
+from agentops import record_action
+
+
+@record_action("add numbers")
+def add(x, y):
+ return x + y
+
+
+add(2, 4)
+```
+
+We can also manually craft an event exactly the way we want
+
+
+```python
+from agentops import ActionEvent
+
+message = [{"role": "user", "content": "Hello"}]
+response = openai.chat.completions.create(
+ model="gpt-3.5-turbo", messages=message, temperature=0.5
+)
+
+if "hello" in str(response.choices[0].message.content).lower():
+ agentops.record(
+ ActionEvent(
+ action_type="Agent says hello",
+ params=str(message),
+ returns=str(response.choices[0].message.content),
+ )
+ )
+```
+
+
+```python
+agentops.end_session("Success")
+```
diff --git a/examples/multion_examples/Autonomous_web_browsing_files/image.png b/examples/multion_examples/Autonomous_web_browsing_files/image.png
new file mode 100644
index 00000000..48d70373
Binary files /dev/null and b/examples/multion_examples/Autonomous_web_browsing_files/image.png differ