šŸš€ RELEASE: Version bump to 1.2.0. Added Home Assistant Conversation Integration. #3
bearlike authored May 16, 2024
2 parents e46c006 + cdb10dc commit b401fe1
Showing 21 changed files with 878 additions and 68 deletions.
26 changes: 19 additions & 7 deletions README.md
@@ -4,8 +4,9 @@
<p align="center">
<a href="https://github.com/bearlike/Personal-Assistant/wiki"><img alt="Wiki" src="https://img.shields.io/badge/GitHub-Wiki-blue?style=for-the-badge&logo=github"></a>
<a href="https://github.com/features/actions"><img alt="GitHub Actions Workflow Status" src="https://img.shields.io/github/actions/workflow/status/bearlike/Personal-Assistant/docker-buildx.yml?style=for-the-badge&"></a>
<a href="https://github.com/bearlike/Personal-Assistant/pkgs/container/meeseeks-chat"><img src="https://img.shields.io/badge/ghcr.io-bearlike/meeseeks&#x2212;chat:latest-blue?style=for-the-badge&logo=docker&logoColor=white" alt="Docker Image"></a>
<a href="https://github.com/bearlike/Personal-Assistant/releases"><img src="https://img.shields.io/github/v/release/bearlike/Personal-Assistant?style=for-the-badge&" alt="GitHub Release"></a>
<a href="https://github.com/bearlike/Personal-Assistant/pkgs/container/meeseeks-chat"><img src="https://img.shields.io/badge/ghcr.io-bearlike/meeseeks--chat:latest-blue?style=for-the-badge&logo=docker&logoColor=white" alt="Docker Image"></a>
<a href="https://github.com/bearlike/Personal-Assistant/pkgs/container/meeseeks-api"><img src="https://img.shields.io/badge/ghcr.io-bearlike/meeseeks--api:latest-blue?style=for-the-badge&logo=docker&logoColor=white" alt="Docker Image"></a>
</p>


@@ -26,30 +27,41 @@ Meeseeks is an innovative AI assistant built on a multi-agent large language model

| Completed | In-Progress | Planned | Scoping |
| :-------: | :---------: | :-----: | :-----: |
| āœ… | šŸš§ | šŸ“… | šŸ§ |
| āœ… | šŸš§ | šŸ“… | šŸ§ |

</details>

# Features šŸ”„
> [!NOTE]
> Visit [**Features - Wiki**](https://github.com/bearlike/Personal-Assistant/wiki/Features) for detailed information on tools and integration capabilities.
<table align="center">
<tr>
<th>Answer questions and interpret sensor information</th>
<th>Control devices and entities</th>
</tr>
<tr>
<td align="center"><img src="docs/screenshot_ha_assist_1.png" alt="Screenshot" height="512px"></td>
<td align="center"><img src="docs/screenshot_ha_assist_2.png" alt="Screenshot" height="512px"></td>
</tr>
</table>

- (āœ…) [LangFuse](https://github.com/langfuse/langfuse) integration to accurately log and monitor chains.
- (āœ…) Use natural language to interact with integrations and tools.
- (šŸš§) Simple REST API interface for 3rd party tools to interface with Meeseeks.
- (āœ…) Simple REST API interface for 3rd party tools to interface with Meeseeks.
- (āœ…) Handles complex user queries by breaking them into actionable steps, executing these steps, and then summarizing the results.
- (šŸš§) Custom [Home Assistant Conversation Integration](https://www.home-assistant.io/integrations/conversation/) to allow voice assistance via [**HA Assist**](https://www.home-assistant.io/voice_control/).
- (āœ…) Custom [Home Assistant Conversation Integration](https://www.home-assistant.io/integrations/conversation/) to allow voice assistance via [**HA Assist**](https://www.home-assistant.io/voice_control/).
- (āœ…) A chat interface using `streamlit` that shows the action plan, user types, and response from the LLM.

## Extras šŸ‘½
Optional features that users can choose to install to further enhance their experience.
- (šŸ§) **`Quality`** Use [CRITIC reflection framework](https://arxiv.org/pdf/2305.11738) to reflect on a response to a task/query using external tools via [`[^]`](https://llamahub.ai/l/agent/llama-index-agent-introspective).
- (šŸ“…) **`Privacy`** Integrate with [microsoft/presidio](https://github.com/microsoft/presidio) for customizable PII de-identification.
- (šŸ“…) **`Quality`** Use [CRITIC reflection framework](https://arxiv.org/pdf/2305.11738) to reflect on a response to a task/query using external tools via [`[^]`](https://llamahub.ai/l/agent/llama-index-agent-introspective).
- (šŸš§) **`Privacy`** Integrate with [microsoft/presidio](https://github.com/microsoft/presidio) for customizable PII de-identification.

## Integrations šŸ“¦
- (āœ…) [Home Assistant](https://github.com/home-assistant/core)
- (šŸš§) Google Calendar
- (šŸ“…) Google Search, Search recent ArXiv papers and summaries, Yahoo Finance, Yelp
- (šŸš§) Google Search, Search recent ArXiv papers and summaries, Yahoo Finance, Yelp
- (šŸ§) Android Debugging Shell

## Installing and Running Meeseeks
41 changes: 27 additions & 14 deletions core/classes.py
@@ -2,8 +2,8 @@
import abc
import os
import json
from typing import Optional
from typing import List, Any
from typing import Optional, List, Any

# Third-party modules
from langchain_community.document_loaders import JSONLoader
from langchain_openai import ChatOpenAI
@@ -19,6 +19,7 @@


class ActionStep(BaseModel):
"""Defines an action step within a task queue with validation."""
action_consumer: str = Field(
description=f"Specify one of {AVAILABLE_TOOLS} to indicate the action consumer."
)
@@ -36,11 +36,17 @@ class ActionStep(BaseModel):


class TaskQueue(BaseModel):
"""Manages a queue of actions to be performed, tracking their results."""
human_message: Optional[str] = Field(
alias="_human_message",
description='Human message associated with the task queue.'
)
action_steps: Optional[List[ActionStep]] = None
action_steps: List[ActionStep] = Field(default_factory=list)
task_result: Optional[str] = Field(
alias="_task_result",
default="Not executed yet.",
description='Store the result for the entire task queue'
)

@validator("action_steps", allow_reuse=True)
# pylint: disable=E0213,W0613
@@ -81,18 +88,25 @@ def validate_actions(cls, field):


class AbstractTool(abc.ABC):
def __init__(self, name, description, model_name=None, temperature=0.2):
# Data Validation
if model_name is None:
default_model = os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo")
self.model_name = os.getenv("TOOL_MODEL", default_model)
else:
self.model_name = model_name

# Set the tool attributes
"""Abstract base class for tools, providing common features and requiring specific methods."""

def _setup_cache_dir(self, name: str) -> str:
"""Set up and return the cache directory path."""
root_cache_dir = os.getenv("CACHE_DIR")
if not root_cache_dir:
raise ValueError("CACHE_DIR environment variable is not set.")
cache_path = os.path.join(
root_cache_dir, "..", ".cache", f"{name.lower().replace(' ', '_')}_tool")
os.makedirs(cache_path, exist_ok=True)
return os.path.abspath(cache_path)

def __init__(self, name: str, description: str, model_name: Optional[str] = None, temperature: float = 0.3):
"""Initialize the tool with optional model configuration."""
self.model_name = model_name or os.getenv(
"TOOL_MODEL", os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo"))
self.name = name
self._id = f"{name.lower().replace(' ', '_')}_tool"
self.description = description
self._id = f"{name.lower().replace(' ', '_')}_tool"
session_id = f"{self._id}-tool-id-{get_unique_timestamp()}"
logging.info(f"Tool created <name={name}; session_id={session_id};>")
self.langfuse_handler = CallbackHandler(
@@ -107,7 +121,6 @@ def __init__(self, name, description, model_name=None, temperature=0.2):
model=self.model_name,
temperature=temperature
)

root_cache_dir = os.getenv("CACHE_DIR", None)
if root_cache_dir is None:
raise ValueError("CACHE_DIR environment variable is not set.")
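The most consequential change in `core/classes.py` is switching `action_steps` from `Optional[List[ActionStep]] = None` to `Field(default_factory=list)`, which keeps downstream iteration safe. A minimal sketch with hypothetical stand-in models (the repository's real `ActionStep`/`TaskQueue` carry additional fields, aliases, and validators not shown here):

```python
# Minimal sketch with stand-in models; not the repo's actual classes.
from typing import List, Optional
from pydantic import BaseModel, Field


class Step(BaseModel):
    action_consumer: str          # e.g. "home_assistant_tool"


class Queue(BaseModel):
    human_message: Optional[str] = None
    action_steps: List[Step] = Field(default_factory=list)   # was Optional[...] = None
    task_result: Optional[str] = "Not executed yet."


queue = Queue(human_message="Turn on the heater")
# With default_factory=list, iterating is always safe; the old None default
# would raise "TypeError: 'NoneType' object is not iterable" here.
for step in queue.action_steps:
    print(step.action_consumer)
```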
85 changes: 51 additions & 34 deletions core/task_master.py
@@ -31,68 +31,68 @@
load_dotenv()


def generate_action_plan(
user_query: str, model_name: str = None) -> List[dict]:
def generate_action_plan(user_query: str, model_name: str = None) -> List[dict]:
"""
Use the LangChain pipeline to generate an action plan
based on the user query.
Use the LangChain pipeline to generate an action plan based on the user query.
Args:
user_query (str): The user query to generate the action plan.
Returns:
List[dict]: The generated action plan as a list of dictionaries.
"""
user_id = "meeseeks-task-master"
session_id = f"action-queue-id-{get_unique_timestamp()}"
trace_name = user_id
version = os.getenv("VERSION", "Not Specified")
release = os.getenv("ENVMODE", "Not Specified")

langfuse_handler = CallbackHandler(
user_id="homeassistant_kk",
session_id=f"action-queue-id-{get_unique_timestamp()}",
trace_name="meeseeks-task-master",
version=os.getenv("VERSION", "Not Specified"),
release=os.getenv("ENVMODE", "Not Specified")
user_id=user_id,
session_id=session_id,
trace_name=trace_name,
version=version,
release=release
)

if model_name is None:
default_model = os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo")
model_name = os.getenv("ACTION_PLAN_MODEL", default_model)
model_name = model_name or os.getenv(
"ACTION_PLAN_MODEL", os.getenv("DEFAULT_MODEL", "gpt-3.5-turbo"))

model = ChatOpenAI(
openai_api_base=os.getenv("OPENAI_API_BASE"),
model=model_name,
temperature=0.4
)
# Instantiate the parser with the new model.

parser = PydanticOutputParser(pydantic_object=TaskQueue)
logging.debug(
"Generating action plan <model='%s'; user_query='%s'>",
model_name, user_query)
# Update the prompt to match the new query and desired format.
"Generating action plan <model='%s'; user_query='%s'>", model_name, user_query)

prompt = ChatPromptTemplate(
messages=[
SystemMessage(
content=get_system_prompt()
),
HumanMessage(
content="Turn on strip lights and heater."
),
SystemMessage(content=get_system_prompt()),
HumanMessage(content="Turn on strip lights and heater."),
AIMessage(get_task_master_examples(example_id=0)),
HumanMessage(
content="What is the weather today?"
),
HumanMessage(content="What is the weather today?"),
AIMessage(get_task_master_examples(example_id=1)),
HumanMessagePromptTemplate.from_template(
"## Format Instructions\n{format_instructions}\n## Generate a task queue for the user query\n{user_query}"
),
],
partial_variables={
"format_instructions": parser.get_format_instructions()},
"format_instructions": parser.get_format_instructions()
},
input_variables=["user_query"]
)

estimator = num_tokens_from_string(str(prompt))
logging.info("Input Prompt Token length is `%s`.", estimator)
chain = prompt | model | parser

action_plan = chain.invoke({"user_query": user_query.strip()},
config={"callbacks": [langfuse_handler]})
action_plan = (prompt | model | parser).invoke(
{"user_query": user_query.strip()},
config={"callbacks": [langfuse_handler]}
)

action_plan.human_message = user_query
logging.info("Action plan generated <%s>", action_plan)
return action_plan
@@ -112,10 +112,27 @@ def run_action_plan(task_queue: TaskQueue) -> TaskQueue:
"home_assistant_tool": HomeAssistant(),
"talk_to_user_tool": TalkToUser()
}
for idx, action_step in enumerate(task_queue.action_steps):
logging.debug(f"<ActionStep({action_step})>")
tool = tool_dict[action_step.action_consumer]
action_plan = tool.run(action_step)
task_queue.action_steps[idx].result = action_plan

results = []

for action_step in task_queue.action_steps:
logging.debug(f"Processing ActionStep: {action_step}")
tool = tool_dict.get(action_step.action_consumer)

if tool is None:
logging.error(
f"No tool found for consumer: {action_step.action_consumer}")
continue

try:
action_result = tool.run(action_step)
action_step.result = action_result
results.append(
action_result.content if action_result.content is not None else "")
except Exception as e:
logging.error(f"Error processing action step: {e}")
action_step.result = None

task_queue.task_result = " ".join(results).strip()

return task_queue
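Taken together, the refactor keeps the two-step flow intact: `generate_action_plan()` builds a `TaskQueue`, and `run_action_plan()` executes it and now also fills `task_result`. A hedged usage sketch based only on the signatures visible in this diff (the import path and required environment setup, such as `OPENAI_API_BASE`, `CACHE_DIR`, and LangFuse keys, are assumptions):

```python
# Usage sketch; import path and environment configuration are assumptions.
from core.task_master import generate_action_plan, run_action_plan

task_queue = generate_action_plan("Turn off the strip lights and report the indoor temperature.")
task_queue = run_action_plan(task_queue)

print(task_queue.task_result)                       # combined text from all executed steps
for step in task_queue.action_steps:
    print(step.action_consumer, "->", step.result)  # per-step results; None if a step failed
```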
Binary file added docs/screenshot_ha_assist_1.png
Binary file added docs/screenshot_ha_assist_2.png
11 changes: 9 additions & 2 deletions meeseeks-api/README.md
@@ -1,6 +1,13 @@
# meeseeks-api
# Meeseeks API Server
<p align="center">
<a href="https://github.com/bearlike/Personal-Assistant/wiki"><img alt="Wiki" src="https://img.shields.io/badge/GitHub-Wiki-blue?style=for-the-badge&logo=github"></a>
<a href="https://github.com/bearlike/Personal-Assistant/pkgs/container/meeseeks-chat"><img src="https://img.shields.io/badge/ghcr.io-bearlike/meeseeks&#x2212;api:latest-blue?style=for-the-badge&logo=docker&logoColor=white" alt="Docker Image"></a>
<a href="https://github.com/bearlike/Personal-Assistant/releases"><img src="https://img.shields.io/github/v/release/bearlike/Personal-Assistant?style=for-the-badge&" alt="GitHub Release"></a>
</p>

- REST API Engine wrapped around the meeseeks-core.
- No components are explicitly tested for safety or security. Use with caution in a production environment.
- For more information, such as installation, please check out the [Wiki](https://github.com/bearlike/Personal-Assistant/wiki).

[Link to GitHub Repository](https://github.com/bearlike/Personal-Assistant/edit/main/README.md)

[Link to GitHub Repository](https://github.com/bearlike/Personal-Assistant)
23 changes: 20 additions & 3 deletions meeseeks-api/backend.py
@@ -11,10 +11,11 @@
# Standard library modules
import os
import sys
from copy import deepcopy
from typing import Dict

# Third-party modules
from flask import Flask, request, jsonify
from flask import Flask, request
from flask_restx import Api, Resource, fields
from dotenv import load_dotenv

@@ -35,7 +36,11 @@

# Initialize logger
logging = get_logger(name="meeseeks-api")
# logging.basicConfig(level=logging.DEBUG)
logging.info("Starting Meeseeks API server.")
logging.debug("Starting API server with API token: %s", MASTER_API_TOKEN)


# Create Flask application
app = Flask(__name__)

@@ -59,6 +64,8 @@
task_queue_model = api.model('TaskQueue', {
'human_message': fields.String(
required=True, description='The original user query'),
'task_result': fields.String(
required=True, description='Combined response of all action steps'),
'action_steps': fields.List(fields.Nested(api.model('ActionStep', {
'action_consumer': fields.String(
required=True,
@@ -75,6 +82,13 @@
})


@app.before_request
def log_request_info():
logging.debug('Endpoint: %s', request.endpoint)
logging.debug('Headers: %s', request.headers)
logging.debug('Body: %s', request.get_data())


@ns.route('/query')
class MeeseeksQuery(Resource):
"""
@@ -118,10 +132,13 @@ def post(self) -> Dict:

# Execute action plan
task_queue = run_action_plan(task_queue)

# Deep copy the variable into another variable
task_result = deepcopy(task_queue.task_result)
to_return = task_queue.dict()
to_return["task_result"] = task_result
# Return TaskQueue as JSON
logging.info("Returning executed action plan.")
return task_queue.dict(), 200
return to_return, 200


if __name__ == '__main__':
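For third-party callers, the practical effect of this change is that the response JSON now carries a top-level `task_result` field alongside `action_steps`. A hedged client sketch (the base URL, port, path prefix, request body key, and auth header name are guesses; only the `/query` route and the `MASTER_API_TOKEN` variable appear in this diff):

```python
# Hedged client sketch: URL, payload key, and header name are assumptions, not documented values.
import os
import requests

response = requests.post(
    "http://localhost:5123/api/query",                            # host, port, and "/api" prefix assumed
    json={"query": "Turn on the strip lights"},                   # request body key assumed
    headers={"X-API-KEY": os.getenv("MASTER_API_TOKEN", "")},     # header name assumed
    timeout=120,
)
response.raise_for_status()
payload = response.json()
print(payload["task_result"])          # field added to the TaskQueue response model in this commit
for step in payload["action_steps"]:
    print(step["action_consumer"])
```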
19 changes: 16 additions & 3 deletions meeseeks-chat/README.md
@@ -1,5 +1,18 @@
# meeseeks-chat
# Meeseeks - Chat Interface
<p align="center">
<a href="https://github.com/bearlike/Personal-Assistant/wiki"><img alt="Wiki" src="https://img.shields.io/badge/GitHub-Wiki-blue?style=for-the-badge&logo=github"></a>
<a href="https://github.com/bearlike/Personal-Assistant/pkgs/container/meeseeks-chat"><img src="https://img.shields.io/badge/ghcr.io-bearlike/meeseeks&#x2212;chat:latest-blue?style=for-the-badge&logo=docker&logoColor=white" alt="Docker Image"></a>
<a href="https://github.com/bearlike/Personal-Assistant/releases"><img src="https://img.shields.io/github/v/release/bearlike/Personal-Assistant?style=for-the-badge&" alt="GitHub Release"></a>
</p>

Chat Interface wrapped around the meeseeks-core. Powered by Streamlit.

[Link to GitHub](https://github.com/bearlike/Personal-Assistant/edit/main/README.md)
<p align="center">
<img src="../docs/screenshot_chat_app_1.png" alt="Screenshot of Meeseks WebUI" height="512px">
</p>


- Chat Interface wrapped around the meeseeks-core. Powered by Streamlit.
- For more information, such as installation, please check out the [Wiki](https://github.com/bearlike/Personal-Assistant/wiki).


[Link to GitHub](https://github.com/bearlike/Personal-Assistant)
25 changes: 25 additions & 0 deletions meeseeks_ha_conversation/README.md
@@ -0,0 +1,25 @@
# Home Assistant Conversation Integration for Meeseeks šŸš€

<p align="center">
<a href="https://github.com/bearlike/Personal-Assistant/wiki"><img alt="Wiki" src="https://img.shields.io/badge/GitHub-Wiki-blue?style=for-the-badge&logo=github"></a>
<a href="https://github.com/bearlike/Personal-Assistant/releases"><img src="https://img.shields.io/github/v/release/bearlike/Personal-Assistant?style=for-the-badge&" alt="GitHub Release"></a>
</p>


<table align="center">
<tr>
<th>Answer questions and interpret sensor information</th>
<th>Control devices and entities</th>
</tr>
<tr>
<td align="center"><img src="../docs/screenshot_ha_assist_1.png" alt="Screenshot" height="512px"></td>
<td align="center"><img src="../docs/screenshot_ha_assist_2.png" alt="Screenshot" height="512px"></td>
</tr>
</table>

- Home Assistant Conversation Integration for Meeseeks. Can be used with HA Assist ā­.
- Wrapped around the REST API Engine for Meeseeks. 100% coverage of Meeseeks API.
- No components are explicitly tested for safety or security. Use with caution in a production environment.
- For more information, such as installation, please check out the [Wiki](https://github.com/bearlike/Personal-Assistant/wiki).

[Link to GitHub Repository](https://github.com/bearlike/Personal-Assistant)