Note
This is an open-source project built on One API
Important
- Users must comply with OpenAI's Terms of Use and all applicable laws and regulations; this project must not be used for illegal purposes.
- This project is for personal learning only. Stability is not guaranteed, and no technical support is provided.
- 🎨 New UI interface (some interfaces pending update)
- 🌍 Multi-language support (work in progress)
- 🎨 Added Midjourney-Proxy(Plus) interface support, Integration Guide
- 💰 Online recharge support, configurable in system settings:
- EasyPay
- 🔍 Query usage quota by key:
- Works with neko-api-key-tool
- 📑 Configurable items per page in pagination
- 🔄 Compatible with original One API database (one-api.db)
- 💵 Support per-request model pricing, configurable in System Settings - Operation Settings
- ⚖️ Support channel weighted random selection
- 📈 Data dashboard (console)
- 🔒 Configurable model access per token
- 🤖 Telegram authorization login support:
- System Settings - Configure Login Registration - Allow Telegram Login
- Send /setdomain command to @Botfather
- Select your bot, then enter http(s)://your-website/login
- Telegram Bot name is the bot username without @
- 🎵 Added Suno API interface support, Integration Guide
- 🔄 Support for Rerank models, compatible with Cohere and Jina, can integrate with Dify, Integration Guide
- ⚡ OpenAI Realtime API - Support for OpenAI's Realtime API, including Azure channels
This version additionally supports:
- Third-party gpts models (gpt-4-gizmo-*)
- Midjourney-Proxy(Plus) interface, Integration Guide
- Custom channels with full API URL support
- Suno API interface, Integration Guide
- Rerank models, supporting Cohere and Jina, Integration Guide
- Dify
You can add the custom model gpt-4-gizmo-* in channels. These are third-party models and cannot be called with an official OpenAI key.
- `GENERATE_DEFAULT_TOKEN`: generate an initial token for new users, default `false`
- `STREAMING_TIMEOUT`: streaming response timeout, default 60 seconds
- `DIFY_DEBUG`: output workflow and node info to the client for the Dify channel, default `true`
- `FORCE_STREAM_OPTION`: override the client's stream_options parameter, default `true`
- `GET_MEDIA_TOKEN`: calculate image tokens, default `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: calculate image tokens in non-stream mode, default `true`
- `UPDATE_TASK`: update async tasks (Midjourney, Suno), default `true`
- `GEMINI_MODEL_MAP`: specify Gemini model versions (v1/v1beta), format "model:version", comma-separated
- `COHERE_SAFETY_SETTING`: Cohere model safety setting, options `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
- `GEMINI_VISION_MAX_IMAGE_NUM`: maximum number of images for Gemini models, default `16`, set to `-1` to disable the limit
- `MAX_FILE_DOWNLOAD_MB`: maximum file download size in MB, default `20`
- `CRYPTO_SECRET`: encryption key for encrypting database content
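As an illustration, several of these variables can be passed at container start. The values below are placeholders chosen for the example, not recommended settings:

```shell
# Start New API with a few of the environment variables above set.
# STREAMING_TIMEOUT, GET_MEDIA_TOKEN, and GEMINI_VISION_MAX_IMAGE_NUM
# values here are illustrative only; adjust to your deployment.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e TZ=Asia/Shanghai \
  -e STREAMING_TIMEOUT=120 \
  -e GET_MEDIA_TOKEN=false \
  -e GEMINI_VISION_MAX_IMAGE_NUM=8 \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```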
Tip
Latest Docker image: calciumion/new-api:latest
Default account: root, password: 123456
Update command:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
- You must set the SESSION_SECRET environment variable, otherwise login state will not be consistent across multiple servers.
- If using a shared Redis, you must set the CRYPTO_SECRET environment variable, otherwise Redis content cannot be read in a multi-server deployment.
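A minimal multi-server sketch: every instance is started with the same SESSION_SECRET (and, when a shared Redis is used, the same CRYPTO_SECRET). The secret values and the Redis host below are placeholders; generate your own random strings:

```shell
# Each server in the cluster runs this with identical secrets so that
# sessions and encrypted Redis content are shared across instances.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e SESSION_SECRET=replace_with_random_string \
  -e CRYPTO_SECRET=replace_with_another_random_string \
  -e REDIS_CONN_STRING=redis://default:redispw@your-redis-host:6379 \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```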
- Local database (default): SQLite (Docker deployments must mount the /data directory)
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6
# Clone project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# Start
docker-compose up -d
# SQLite deployment:
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
# MySQL deployment (add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"), modify database connection parameters as needed
# Example:
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
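After either deployment command, a quick way to confirm the service is up (assuming the default port mapping of 3000 used above):

```shell
# Print the HTTP status code returned by the service root.
# A non-zero curl exit code means the container is not reachable yet.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/
```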
Channel retry is implemented and can be configured in Settings->Operation Settings->General Settings. Enabling the cache is recommended.
The first retry uses the same priority; the second retry uses the next priority, and so on.
- `REDIS_CONN_STRING`: use Redis as cache
  - Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
- `MEMORY_CACHE_ENABLED`: enable memory cache, default `false`
  - Example: `MEMORY_CACHE_ENABLED=true`
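Combining the two cache settings above in a single container start (connection string taken from the example; adjust host and credentials to your Redis):

```shell
# Enable both the Redis cache and the in-memory cache.
docker run --name new-api -d --restart always -p 3000:3000 \
  -e REDIS_CONN_STRING=redis://default:redispw@localhost:49153 \
  -e MEMORY_CACHE_ENABLED=true \
  -v /home/ubuntu/data/new-api:/data \
  calciumion/new-api:latest
```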
Requests failing with error codes 400, 504, or 524 will not be retried.
In Channel->Edit, set Status Code Override to:
{
"400": "500"
}
- One API: Original project
- Midjourney-Proxy: Midjourney interface support
- chatnio: Next-gen AI B/C solution
- neko-api-key-tool: Query usage quota by key