Run Alpaca models on your computer with a chat UI.
Your own AI assistant, running locally on your machine.
Inspired by Dalai, Node-Llama, and Alpaca.cpp.
npm install -g catai
catai install Vicuna-7B
catai serve
- Auto-detects the programming language 🧑‍💻
- Auto-detects code blocks 📃
- Click the user icon to show the original message 💬
- Real-time text streaming ⏱️
- Fast model downloads 🚀
You can use any Alpaca model as long as your computer can handle it.
catai install Vicuna-13B
You can also download a custom model like this:
catai install https://example.com/model.tar.bin --tag myModel
To switch between models, use the catai use command.
catai use Vicuna-7B
You can use any of the UIs in the client directory (the default is catai).
catai serve --ui chatGPT
You can use it on Windows, Linux and Mac.
From version 1.6.0, this package depends on llama-node, which supports:
- darwin-x64
- darwin-arm64
- linux-x64-gnu
- win32-x64-msvc
Runs on most modern computers; unless yours is very old, it should work.
According to a llama.cpp discussion thread, here are the memory requirements:
- 7B => ~4 GB
- 13B => ~8 GB
- 30B => ~16 GB
You can change the configuration by editing the config.js file.
catai config --editor [editor]
After you change the configuration, you need to restart the server.
- 💡 To improve the model's understanding, try increasing the context size.
- 💡 To get longer output from the model, try increasing the numPredict size.
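As a rough illustration of the tips above, the settings might look like the sketch below. Note that the field names and values here are hypothetical, not taken from the actual config.js; open the real file with catai config --editor to see the options your version supports.

```javascript
// Hypothetical excerpt of config.js — illustrative only, actual field
// names and defaults may differ in your installed version.
export const MODEL_SETTINGS = {
    // A larger context lets the model take more of the conversation
    // into account, improving its understanding of long prompts.
    context: 2048,
    // A larger numPredict allows the model to generate longer replies.
    numPredict: 256,
};
```

Remember to restart the server after changing any of these values.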
You can list all the models that you have installed.
catai list
You can uninstall models that you don't need.
catai remove Vicuna-7B
You can uninstall the package.
catai remove all # remove all downloaded data
npm uninstall -g catai
- All downloaded data is stored in the ~/catai folder.
- Downloads are multi-threaded, so they may use a lot of bandwidth, but they finish faster!
If you want to run the source code locally, you can follow the steps below.
To run the client:
cd client/catai
npm install
npm run dev
To run the server:
cd server
npm install
npm run install-model Vicuna-7B
npm start
Try changing the config:
export const SELECTED_BINDING = 'alpaca-cpp';
It may be slower, but it is more likely to work with Alpaca models.
If you have a problem with the zx dependency, try running it inside git-bash.
This project uses Alpaca.cpp to run Alpaca models on your computer, so any license that applies to Alpaca.cpp also applies to this project.
The GPT frontend is built on top of the chatGPT Frontend mimic project.