📼📠 This is an old version of ownAI. Please use the new version. 📠📼
With ownAI you can run your own AIs.
ownAI is an open-source platform written in Python using the Flask framework. It allows you to host and manage AI applications with a web interface for interaction. ownAI supports the customization of AIs for specific use cases and provides a flexible environment for your AI projects.
For a demo installation and a managed private cloud service, please visit https://ownai.org.
- Features
- Installation
- Production setup
- Development setup
- Usage
- AI files
- Updating
- Run with Docker
- Contributing
- License
- Host and manage AI applications
- Web-based interface for interacting with AI models
- Support for AI customization to suit specific needs
- Create and manage additional knowledge for AIs
- Open-source and community-driven
ownAI requires Python 3.8 or higher. Due to a dependency issue, it does not currently run on Python 3.11. To install and set up ownAI, follow these steps:
- Clone the repository:
git clone https://github.com/own-ai/ownAI.git
- Enter the project directory:
cd ownAI
- Create a virtual environment and activate it:
python3 -m venv venv
source venv/bin/activate
- Install requirements:
pip install -r requirements.txt
- Configure the server by copying the .env.template file to .env and then editing the .env file.
- Initialize the database:
flask db upgrade
- Register a new user:
flask add-user
- Optionally: Set up your first AI. We recommend starting with the Llama 2 model and the Helpful Assistant example.
Download the model (this needs about 4 GB of disk space):
flask download-model --repo "TheBloke/Llama-2-7B-Chat-GGUF" --filename "llama-2-7b-chat.Q4_K_M.gguf"
Add the AI:
flask add-ai --aifile ./examples/llamacpp/helpful_assistant.aifile
- Start the server:
flask run
Now, you should be able to access the ownAI web interface at http://localhost:5000.
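For convenience, the installation steps above can be collected into a single shell script. This is only a sketch of the commands already listed, not a script shipped with the ownAI repository; review it and edit .env before running:

```shell
# setup.sh — a sketch bundling the installation commands above.
# Not part of the ownAI repository; review and adjust before running.
cat > setup.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

git clone https://github.com/own-ai/ownAI.git
cd ownAI

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

cp .env.template .env            # edit .env before continuing
flask db upgrade                 # initialize the database
flask add-user                   # register a new user

# Optional: download the Llama 2 model (~4 GB) and add the example AI
flask download-model --repo "TheBloke/Llama-2-7B-Chat-GGUF" --filename "llama-2-7b-chat.Q4_K_M.gguf"
flask add-ai --aifile ./examples/llamacpp/helpful_assistant.aifile

flask run
EOF
chmod +x setup.sh
```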
For a production setup, we recommend using a WSGI server such as Gunicorn. If you followed the steps above, Gunicorn is already installed in your virtual environment. To start the server, run:
gunicorn -b 0.0.0.0:5000 --worker-class geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 "backaind:create_app()"
We recommend using a reverse proxy such as nginx to handle HTTPS.
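As an illustration, a minimal nginx server block proxying to the Gunicorn instance above might look like the following. The domain name and certificate paths are placeholders, not values from this project; the Upgrade/Connection headers are included because the Gunicorn worker above uses WebSockets:

```nginx
server {
    listen 443 ssl;
    server_name ownai.example.com;                 # placeholder domain

    ssl_certificate     /etc/ssl/certs/ownai.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/private/ownai.key;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        # Pass WebSocket upgrade headers through to Gunicorn
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```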
If you want to contribute to the ownAI frontend, you can set up a development environment as follows:
- Install Node.js and npm.
- Install the frontend dependencies:
npm install
- Start the development server (this already includes starting the Flask backend server):
npm run dev
To start interacting with your own AI, visit the web interface at http://localhost:5000 and follow these steps:
- Log in using the credentials you set up during installation.
- Click the ownAI logo in the upper left corner to open the main menu.
- Choose "AI Workshop" and click the "New AI" button.
- Load an Aifile (see below for details; the examples directory contains ready-made examples).
- Optionally customize the AI to suit your needs.
- Optionally select "Knowledge" from the main menu to upload additional knowledge (if your AI chain supports it).
- Select "Interaction" from the main menu and enter your input text or data in the provided input box.
ownAI uses .aifile files to store and load the specification and configuration of AIs.
In the examples directory you can find many different examples to run various AI models locally or to use external providers.
Please also refer to the readme file in the examples directory.
If you want to get started quickly and run AIs yourself, have a look at the llamacpp examples. These allow you to run your own AIs on your machine with little or no further setup.
If you want to create your own aifiles, have a look at the aifilemaker.py script, which you can use as a starting point.
To update ownAI, simply pull the latest changes from the repository and run:
source venv/bin/activate
pip install -r requirements.txt
flask db upgrade
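The update steps can likewise be wrapped in a small helper script. This is only a sketch of the commands above (the `git pull` corresponds to "pull the latest changes"), not a script shipped with the repository:

```shell
# update.sh — sketch of the update steps above; not part of the ownAI repository.
cat > update.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

git pull                          # pull the latest changes
source venv/bin/activate
pip install -r requirements.txt   # update dependencies
flask db upgrade                  # apply database migrations
EOF
chmod +x update.sh
```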
If you have Docker installed, you can run ownAI in a Docker container. Otherwise, please follow the installation section above.
To run ownAI with Docker, first configure the server by downloading the .env.template file, renaming it to docker.env, and editing it.
Then run the following command (replace <your-username> and <your-password>):
docker run --name ownai \
  --env OWNAI_ROOT_USER=<your-username> \
  --env OWNAI_ROOT_PASSWORD=<your-password> \
  --env OWNAI_DOWNLOAD_REPO="TheBloke/Llama-2-7B-Chat-GGUF" \
  --env OWNAI_DOWNLOAD_FILE="llama-2-7b-chat.Q4_K_M.gguf" \
  --env OWNAI_ADD_AI="./examples/llamacpp/helpful_assistant.aifile:./examples/llamacpp/helpful_assistant_with_knowledge.aifile" \
  --env-file ./docker.env \
  -p 5000:5000 \
  ownai/ownai
This is only required for the first run. To start the server again just run:
docker start ownai
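The `docker run` command above can also be expressed as a Compose file. The following is a sketch using only the environment variables and port mapping shown above; replace the placeholder credentials before use:

```yaml
# docker-compose.yml — sketch equivalent to the docker run command above.
services:
  ownai:
    image: ownai/ownai
    container_name: ownai
    ports:
      - "5000:5000"
    env_file:
      - ./docker.env
    environment:
      OWNAI_ROOT_USER: "<your-username>"        # replace
      OWNAI_ROOT_PASSWORD: "<your-password>"    # replace
      OWNAI_DOWNLOAD_REPO: "TheBloke/Llama-2-7B-Chat-GGUF"
      OWNAI_DOWNLOAD_FILE: "llama-2-7b-chat.Q4_K_M.gguf"
      OWNAI_ADD_AI: "./examples/llamacpp/helpful_assistant.aifile:./examples/llamacpp/helpful_assistant_with_knowledge.aifile"
```

With Compose, `docker compose up -d` covers both the first run and later restarts, replacing the separate `docker run` and `docker start` calls.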
We welcome contributions from the community. To contribute, please:
- Fork the repository
- Create a new branch for your feature or bugfix
- Commit your changes
- Open a pull request
Please follow the coding style guidelines and ensure your changes are well-documented. Thank you very much for your contribution!
ownAI is released under the MIT License.