Linux: Recommended, I can help you with any issues you may have.
Windows: Untested, but should work. I can't help you with any issues you may have.
Before proceeding with the installation, run this notebook and wait until all dependencies and the model have been downloaded. You will get a link of the form https://<gibberish>.trycloudflare.com/. Copy that link with a /v1 suffix appended, so it looks like https://<gibberish>.trycloudflare.com/v1, and paste it into the `api_base` parameter in the next step.
- Install poetry
curl -sSL https://install.python-poetry.org | python3 -
- Install dependencies
poetry install
- Run the CysecAI
poetry run cysecai --api_base "<your api base from colab notebook>" --api_key "sk-xxx" -m "gpt-3.5-turbo" --context_window 8192
CysecAI lets LLMs run code (Python, JavaScript, Shell, and more) locally. You can chat with CysecAI through a ChatGPT-like interface in your terminal by running `$ CysecAI` after installing.
This provides a natural-language interface to your computer's general-purpose capabilities:
- Create and edit photos, videos, PDFs, etc.
- Control a Chrome browser to perform research
- Plot, clean, and analyze large datasets
- ...etc.
You can modify the `max_tokens` and `context_window` (in tokens) of locally running models. For local mode, smaller context windows will use less RAM, so we recommend trying a much shorter window (~1000) if it's failing or slow. Make sure `max_tokens` is less than `context_window`.
CysecAI --local --max_tokens 1000 --context_window 3000
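As a rough illustration of the `max_tokens` < `context_window` rule, a helper like the following (hypothetical, not part of CysecAI) clamps a requested completion budget so it always fits inside the window:

```python
def clamp_max_tokens(requested_max_tokens: int, context_window: int) -> int:
    """Clamp a requested max_tokens so it stays below the context window.

    Illustrative helper only; CysecAI may enforce this differently.
    """
    if context_window <= 1:
        raise ValueError("context_window must be greater than 1")
    # Leave at least one token of headroom for the prompt itself.
    return min(requested_max_tokens, context_window - 1)

print(clamp_max_tokens(1000, 3000))  # -> 1000 (already fits)
print(clamp_max_tokens(4000, 3000))  # -> 2999 (clamped below the window)
```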
To help you inspect CysecAI, we have a `--verbose` mode for debugging. You can activate verbose mode by using its flag (`CysecAI --verbose`), or mid-chat:
$ CysecAI
...
> %verbose true <- Turns on verbose mode
> %verbose false <- Turns off verbose mode
In interactive mode, you can use the following commands to enhance your experience:

- `%verbose [true/false]`: Toggle verbose mode. Without arguments or with `true` it enters verbose mode. With `false` it exits verbose mode.
- `%reset`: Resets the current session's conversation.
- `%undo`: Removes the previous user message and the AI's response from the message history.
- `%tokens [prompt]`: (Experimental) Calculate the tokens that will be sent with the next prompt as context and estimate their cost. Optionally calculate the tokens and estimated cost of a `prompt` if one is provided. Relies on LiteLLM's `cost_per_token()` method for estimated costs.
- `%help`: Show the help message.
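To make the command shape concrete, here is a small hypothetical sketch of how such `%`-prefixed commands could be split into a name and arguments; it is not CysecAI's actual dispatch code:

```python
def parse_magic(line: str):
    """Parse a %-prefixed interactive command into (name, args).

    Hypothetical sketch for illustration; anything not starting
    with '%' is treated as a regular chat message.
    """
    line = line.strip()
    if not line.startswith("%"):
        return None  # regular chat message, not a command
    name, _, rest = line[1:].partition(" ")
    return name, rest.split() if rest else []

print(parse_magic("%verbose true"))  # -> ('verbose', ['true'])
print(parse_magic("%reset"))         # -> ('reset', [])
print(parse_magic("hello"))          # -> None
```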
CysecAI allows you to set default behaviors using a `config.yaml` file.
This provides a flexible way to configure CysecAI without changing command-line arguments every time.
Run the following command to open the configuration file:
CysecAI --config
CysecAI supports multiple `config.yaml` files, allowing you to easily switch between configurations via the `--config_file` argument.

Note: `--config_file` accepts either a file name or a file path. File names will use the default configuration directory, while file paths will use the specified path.
To create or edit a new configuration, run:
CysecAI --config --config_file $config_path
To have CysecAI load a specific configuration file, run:
CysecAI --config_file $config_path
Note: Replace `$config_path` with the name of or path to your configuration file.
- Create a new `config.turbo.yaml` file:
  CysecAI --config --config_file config.turbo.yaml
- Edit the `config.turbo.yaml` file to set `model` to `gpt-3.5-turbo`
- Run CysecAI with the `config.turbo.yaml` configuration:
  CysecAI --config_file config.turbo.yaml
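For reference, such a file might look like the fragment below. The keys shown are illustrative assumptions based on the options discussed in this document; check your CysecAI version for the exact supported settings:

```yaml
# config.turbo.yaml -- illustrative example, keys may differ per version
model: gpt-3.5-turbo
context_window: 8192
max_tokens: 1000
```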
The generator update enables CysecAI to be controlled via HTTP REST endpoints:
# server.py
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from CysecAI import CysecAI

app = FastAPI()

@app.get("/chat")
def chat_endpoint(message: str):
    def event_stream():
        for result in CysecAI.chat(message, stream=True):
            yield f"data: {result}\n\n"

    return StreamingResponse(event_stream(), media_type="text/event-stream")

@app.get("/history")
def history_endpoint():
    return CysecAI.messages
pip install fastapi uvicorn
uvicorn server:app --reload
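The `/chat` endpoint above emits `text/event-stream` (Server-Sent Events) data. A client needs to split that stream back into individual `data:` payloads; the standalone helper below sketches that parsing (it is a hypothetical example, not part of CysecAI, and does not require the server to be running):

```python
def parse_sse(stream_text: str):
    """Extract the payload of each `data:` event from an SSE stream."""
    events = []
    # SSE events are separated by a blank line ("\n\n").
    for block in stream_text.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data: "):
                events.append(line[len("data: "):])
    return events

chunks = "data: Hello\n\ndata: world\n\n"
print(parse_sse(chunks))  # -> ['Hello', 'world']
```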
Since generated code is executed in your local environment, it can interact with your files and system settings, potentially leading to unexpected outcomes like data loss or security risks. By default, CysecAI asks for your confirmation before executing code.
You can run `CysecAI -y` or set `CysecAI.auto_run = True` to bypass this confirmation, in which case:
- Be cautious when requesting commands that modify files or system settings.
- Watch CysecAI like a self-driving car, and be prepared to end the process by closing your terminal.
- Consider running CysecAI in a restricted environment like Google Colab or Replit. These environments are more isolated, reducing the risks of executing arbitrary code.
There is experimental support for a safe mode to help mitigate some risks.
CysecAI equips a function-calling language model with an `exec()` function, which accepts a `language` (like "Python" or "JavaScript") and `code` to run.
We then stream the model's messages, code, and your system's outputs to the terminal as Markdown.
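A minimal sketch of what such an `exec(language, code)` function could look like, assuming plain `subprocess` execution (CysecAI's real implementation streams output incrementally and supports more languages):

```python
import subprocess
import sys

def exec_code(language: str, code: str) -> str:
    """Run code in the named language and return its combined output.

    Illustrative sketch only; maps each supported language to a
    command line and captures stdout/stderr in one shot.
    """
    commands = {
        "python": [sys.executable, "-c", code],
        "shell": ["sh", "-c", code],
    }
    if language.lower() not in commands:
        raise ValueError(f"unsupported language: {language}")
    result = subprocess.run(
        commands[language.lower()], capture_output=True, text=True
    )
    return result.stdout + result.stderr

print(exec_code("python", "print(40 + 2)"))  # prints 42
```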
Thank you for your interest in contributing! We welcome involvement from the community.
Please see our contributing guidelines for more details on how to get involved.