# * Environment file for Meeseeks - Personal Assistant
# * Repository: https://github.com/bearlike/personal-assistant
# - Rename this file to .env to make your application functional, and remove any unused variables.
# TODO-FUTURE: Convert this file to a YAML format for better readability
# * Meeseeks Settings
# - VERSION: Version of your application (there is no need to change this value)
# - ENVMODE: Environment mode of your application (valid options: dev, prod)
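# - LOG_LEVEL: Logging verbosity (e.g. DEBUG, INFO, WARNING)
# - CACHE_DIR: Directory used for application cache files
# - MASTER_API_TOKEN: Token used to authenticate requests to the Meeseeks API (description inferred from the variable name)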
VERSION=1.0.0
ENVMODE=dev
LOG_LEVEL=DEBUG
CACHE_DIR='/path/to/cache/directory'
MASTER_API_TOKEN='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
# * Home Assistant Configuration
# - HA_TOKEN (required): Long-lived access token for Home Assistant
# - HA_URL (required): URL of your Home Assistant server API
# - To create one: click your username in the sidebar, then click "Create Token" under "Long-Lived Access Tokens"
# - Refer: https://developers.home-assistant.io/docs/auth_api/#long-lived-access-token
HA_TOKEN=HOME_ASSISTANT_LONG_LIVED_ACCESS_TOKEN
HA_URL=https://homeassistant.server.local/api
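# - Quick sanity check (assumes curl is available; the /api/ root endpoint should return {"message": "API running."}):
#   curl -H "Authorization: Bearer $HA_TOKEN" "$HA_URL/"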
# * OpenAI Configuration
# * I recommend using the LiteLLM Proxy for better caching and logging.
# - OPENAI_BASE_URL (optional): URL of your OpenAI API-compatible server (remove if using the OpenAI API directly)
# - OPENAI_API_KEY (required): Your OpenAI API key
# - Refer: https://platform.openai.com/api-keys
OPENAI_BASE_URL=https://lite-llm.server.local/
OPENAI_API_KEY=sk-OPENAI_API_KEY
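# - Quick sanity check (assumes the server exposes the standard OpenAI /v1/models route, as both OpenAI and the LiteLLM Proxy do):
#   curl -H "Authorization: Bearer $OPENAI_API_KEY" https://lite-llm.server.local/v1/models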
# * Model Selection
# - DEFAULT_MODEL (required): Default model for your application
# - Setting TOOL_MODEL and ACTION_PLAN_MODEL is optional but can help with load balancing, cost efficiency, or response quality
# - TOOL_MODEL is used by AbstractTool-based classes; ACTION_PLAN_MODEL is used by the ActionPlanner
DEFAULT_MODEL=anthropic/claude-3-opus
# TOOL_MODEL=microsoft/phi-3-mini-128k-instruct
ACTION_PLAN_MODEL=openai/gpt-3.5-turbo
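# - The model names above are examples; any model identifier available on your OpenAI-compatible server can be used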
# * Langfuse Configuration for LLM Observability
# - LANGFUSE_HOST (optional): URL of your Langfuse server
# - LANGFUSE_SECRET_KEY (required), LANGFUSE_PUBLIC_KEY (required): Your Langfuse keys
# - Refer: https://langfuse.com/docs/get-started
LANGFUSE_HOST=https://langfuse.server.local/
LANGFUSE_SECRET_KEY=sk-ex-dummykey-1234-1234-1234-123456789012
LANGFUSE_PUBLIC_KEY=pk-ex-dummykey-1234-1234-1234-123456789012
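# - API keys can be generated in the Langfuse UI under your project's settings (see the link above)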
# * Colored Logs Configuration
# Customize the colors and styles of the logs as per your preference
# If unsure, leave the default settings
COLOREDLOGS_FIELD_STYLES='asctime=color=240;name=45,inverse'
COLOREDLOGS_LEVEL_STYLES='info=220;spam=22;debug=34;verbose=34;notice=220;warning=202;success=118,bold;error=124;critical=background=red'
COLOREDLOGS_LOG_FORMAT='%(asctime)s [%(name)s] %(levelname)s %(message)s'
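# - These variables are read by the coloredlogs Python package; refer to https://coloredlogs.readthedocs.io/ for the styling syntax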