Merge pull request #1011 from bjwswang/main
release v0.2.2
bjwswang authored Apr 12, 2024
2 parents 2f838b1 + c6c4285 commit a5e04a4
Showing 3 changed files with 51 additions and 45 deletions.
82 changes: 44 additions & 38 deletions README.md
@@ -16,18 +16,17 @@

## What is Arcadia?

**Arcadia** comes from [Greek mythology](https://www.greekmythology.com/Myths/Places/Arcadia/arcadia.html) (a tranquil and idyllic region representing harmony, serenity, and natural beauty). We aim to help everyone find a more harmonious integration between humans and AI.

To achieve this goal, we provide this one-stop LLMOps solution. Furthermore, **Arcadia** can easily be hosted on any Kubernetes cluster, production-ready, by integrating [kubebb](https://github.com/kubebb) (Kubernetes building blocks).
**Arcadia** is a one-stop, enterprise-grade LLMOps platform that provides a unified interface for developers and operators to build, debug, deploy, and manage AI agents with an orchestration engine (**RAG (Retrieval-Augmented Generation)** and **LLM fine-tuning** are supported).

## Features
* Multi-tenant isolation (data, model services), built-in OIDC, RBAC, and auditing, allowing different companies and departments to develop through a unified platform
* Kubernetes-native AGI agent orchestration

* Build, debug, and deploy AI agents on the ops-console (GUI for LLMOps)
* Chat with AGI agents on the agent-portal (GUI for GPT chat)
* Enterprise-grade infrastructure with [KubeBB](https://github.com/kubebb): multi-tenant isolation (data, model services), built-in OIDC, RBAC, and auditing, allowing different companies and departments to develop through a unified platform
* Support for most of the popular LLMs (large language models), embedding models, reranking models, etc.
* Inference acceleration with [vllm](https://github.com/vllm-project/vllm), distributed inference with [ray](https://github.com/ray-project/ray), quantization, and more
* Support for fine-tuning with [llama-factory](https://github.com/hiyouga/LLaMA-Factory)
* Built on langchaingo (Golang) for better performance and maintainability
* Support for distributed inference using Ray
* Support for quality and performance evaluation of AGI agents under different configurations
* A development and operations platform for AI agents, along with an AI agent portal for end users
* Developed with micro frontends and a low-code approach, allowing for quick scalability and integration

## Architecture

@@ -45,52 +44,59 @@ Visit our [online documents](http://kubeagi.k8s.com.cn/docs/intro)

Read [user guide](http://kubeagi.k8s.com.cn/docs/UserGuide/intro)

## Supported Models

### List of models that can be deployed by kubeagi

### LLMs

List of supported (tested) LLMs
* baichuan2-7b
* chatglm2-6b
* qwen-7b-chat / qwen-14b-chat / qwen-72b-chat
* llama2-7b
* Mistral-7B-Instruct-v0.1
* bge-large-zh ***embedding***
* m3e ***embedding***
* [ZhiPuAI(智谱 AI)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/zhipuai)
  - [example](https://github.com/kubeagi/arcadia/blob/main/examples/zhipuai/main.go)
  - [embedding](https://github.com/kubeagi/arcadia/tree/main/pkg/embeddings/zhipuai)
* [DashScope(灵积模型服务)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/dashscope)
  - [example](https://github.com/kubeagi/arcadia/blob/main/examples/dashscope/main.go)
  - [text-embedding-v1 (general-purpose text embedding, synchronous API)](https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details)
* [chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b)
* [chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b)
* [qwen (7B, 14B, 72B)](https://huggingface.co/Qwen)
* [qwen-1.5 (0.5B, 1.8B, 4B, 14B, 32B)](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524)
* [baichuan2](https://huggingface.co/baichuan-inc)
* [llama2](https://huggingface.co/meta-llama)
* [mistral](https://huggingface.co/mistralai)

### Embeddings

> Fully compatible with [langchain embeddings](https://github.com/tmc/langchaingo/tree/main/embeddings); a usage sketch follows the list below.
* [bge-large-zh](https://huggingface.co/BAAI/bge-large-zh-v1.5)
* [m3e](https://huggingface.co/moka-ai/m3e-base)
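
Below is a minimal sketch of calling such an embedding model from Go through langchaingo, assuming the model is served behind an OpenAI-compatible API (as the bundled FastChat service provides); the endpoint URL, token, and model name are placeholders for your own deployment:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
)

func main() {
	// Placeholder endpoint: any OpenAI-compatible API that serves an embedding model.
	llm, err := openai.New(
		openai.WithBaseURL("http://fastchat-api.<replaced-ingress-nginx-ip>.nip.io/v1"),
		openai.WithToken("placeholder-token"), // many self-hosted endpoints ignore the token
		openai.WithEmbeddingModel("bge-large-zh-v1.5"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Wrap the OpenAI-compatible client in a langchaingo Embedder.
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Embed one document and print the vector dimension.
	vectors, err := embedder.EmbedDocuments(context.Background(), []string{"hello kubeagi"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("embedding dimension:", len(vectors[0]))
}
```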

### Reranking

* [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) ***reranking***
* [bce-reranking](https://github.com/netease-youdao/BCEmbedding) ***reranking***

### VectorStores
### List of online (third-party) LLM services that can be integrated by kubeagi

* [OpenAI](https://openai.com/)
* [Google Gemini](https://gemini.google.com/)
* [ZhiPuAI (智谱AI)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/zhipuai)
  * [example](https://github.com/kubeagi/arcadia/blob/main/examples/zhipuai/main.go)
  * [embedding](https://github.com/kubeagi/arcadia/tree/main/pkg/embeddings/zhipuai)
* [DashScope (灵积模型服务)](https://github.com/kubeagi/arcadia/tree/main/pkg/llms/dashscope)
  * [example](https://github.com/kubeagi/arcadia/blob/main/examples/dashscope/main.go)
  * [text-embedding-v1 (general-purpose text embedding, synchronous API)](https://help.aliyun.com/zh/dashscope/developer-reference/text-embedding-api-details)

## Supported VectorStores

> Fully compatible with [langchain vectorstores](https://github.com/tmc/langchaingo/tree/main/vectorstores); a usage sketch follows the list below.
- [PG Vector](https://github.com/tmc/langchaingo/tree/main/vectorstores/pgvector): KubeAGI added PG Vector support to the [langchaingo](https://github.com/tmc/langchaingo) project.
- [ChromaDB](https://docs.trychroma.com/)
* [PG Vector](https://github.com/tmc/langchaingo/tree/main/vectorstores/pgvector): KubeAGI added PG Vector support to the [langchaingo](https://github.com/tmc/langchaingo) project.
* [ChromaDB](https://docs.trychroma.com/)
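
As a sketch of that compatibility, the snippet below indexes a document into PG Vector and runs a similarity search through the plain langchaingo API. The embedding endpoint, connection URL, and collection name are placeholders, and option names may vary slightly between langchaingo versions:

```go
package main

import (
	"context"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores/pgvector"
)

func main() {
	ctx := context.Background()

	// Build an embedder as in the embeddings sketch above (placeholder endpoint and model).
	llm, err := openai.New(
		openai.WithBaseURL("http://fastchat-api.<replaced-ingress-nginx-ip>.nip.io/v1"),
		openai.WithToken("placeholder-token"),
		openai.WithEmbeddingModel("bge-large-zh-v1.5"),
	)
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Connect to a pgvector-enabled PostgreSQL instance (placeholder DSN).
	store, err := pgvector.New(ctx,
		pgvector.WithConnectionURL("postgres://user:pass@localhost:5432/arcadia?sslmode=disable"),
		pgvector.WithEmbedder(embedder),
		pgvector.WithCollectionName("kubeagi-demo"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Index one document, then retrieve the closest match for a query.
	if _, err := store.AddDocuments(ctx, []schema.Document{
		{PageContent: "Arcadia is a Kubernetes-native LLMOps platform."},
	}); err != nil {
		log.Fatal(err)
	}

	docs, err := store.SimilaritySearch(ctx, "what is arcadia?", 1)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(docs[0].PageContent)
}
```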

## Pure Go Toolchains

Thanks to [langchaingo](https://github.com/tmc/langchaingo), we can have comprehensive AI capabilities in Golang! To meet our own unique needs, we have further developed a number of additional toolchains:

- [Optimized DocumentLoaders](https://github.com/kubeagi/arcadia/tree/main/pkg/documentloaders): optimized CSV loader, etc.
- [Extended LLMs](https://github.com/kubeagi/arcadia/tree/main/pkg/llms): zhipuai, dashscope, etc.
- [Tools](https://github.com/kubeagi/arcadia/tree/main/pkg/tools): bingsearch, weather, etc.
- [AppRuntime](https://github.com/kubeagi/arcadia/tree/main/pkg/appruntime): a powerful node (LLM, Chain, KnowledgeBase, VectorStore, Agent, etc.) orchestration runtime for Arcadia
* [Optimized DocumentLoaders](https://github.com/kubeagi/arcadia/tree/main/pkg/documentloaders): optimized CSV loader, etc.
* [Extended LLMs](https://github.com/kubeagi/arcadia/tree/main/pkg/llms): zhipuai, dashscope, etc.
* [Tools](https://github.com/kubeagi/arcadia/tree/main/pkg/tools): bingsearch, weather, etc.
* [AppRuntime](https://github.com/kubeagi/arcadia/tree/main/pkg/appruntime): a powerful node (LLM, Chain, KnowledgeBase, VectorStore, Agent, etc.) orchestration runtime for Arcadia

We have provided some examples of how to use them. See more details [here](https://github.com/kubeagi/arcadia/tree/main/examples).
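
For example, the sketch below loads and chunks a CSV file with the upstream langchaingo CSV loader; Arcadia's optimized loaders are assumed to expose the same `Load`/`LoadAndSplit` interface, and the file path is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/documentloaders"
	"github.com/tmc/langchaingo/textsplitter"
)

func main() {
	// Placeholder path; any CSV file works.
	f, err := os.Open("knowledge.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Upstream langchaingo CSV loader; Arcadia's optimized loaders are assumed
	// to implement the same documentloaders interface.
	loader := documentloaders.NewCSV(f)

	// Split rows into chunks before embedding and indexing
	// (default chunk size/overlap; tune for your data).
	splitter := textsplitter.NewRecursiveCharacter()

	docs, err := loader.LoadAndSplit(context.Background(), splitter)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded %d chunks\n", len(docs))
}
```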

## CLI

We provide a command-line tool, `arctl`, to interact with `arcadia`. See [here](http://kubeagi.k8s.com.cn/docs/Tools/arctl-tool) for more details.

- ✅ datasource management
- ✅ RAG evaluation

## Contribute to Arcadia

If you want to contribute to Arcadia, refer to the [contribution guide](http://kubeagi.k8s.com.cn/docs/Contribute/prepare-and-start).
2 changes: 1 addition & 1 deletion deploy/charts/arcadia/Chart.yaml
@@ -3,7 +3,7 @@ name: arcadia
description: A Helm chart(Also a KubeBB Component) for KubeAGI Arcadia
type: application
version: 0.3.30
appVersion: "0.2.1"
appVersion: "0.2.2"

keywords:
- LLMOps
12 changes: 6 additions & 6 deletions deploy/charts/arcadia/values.yaml
@@ -37,7 +37,7 @@ config:
controller:
# 1: error 3:info 5:debug
loglevel: 3
image: kubeagi/arcadia:v0.2.1-20240401-b80e4e4
image: kubeagi/arcadia:v0.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
@@ -51,7 +51,7 @@
# related project: https://github.com/kubeagi/arcadia/tree/main/apiserver
apiserver:
loglevel: 3
image: kubeagi/arcadia:v0.2.1-20240401-b80e4e4
image: kubeagi/arcadia:v0.2.2
enableplayground: false
port: 8081
ingress:
@@ -70,7 +70,7 @@
opsconsole:
enabled: true
kubebbEnabled: true
image: kubeagi/ops-console:v0.2.1-20240401-2e63d80
image: kubeagi/ops-console:v0.2.2
ingress:
path: kubeagi-portal-public
host: portal.<replaced-ingress-nginx-ip>.nip.io
@@ -81,7 +81,7 @@ gpts:
# all gpt resources are public in this namespace
public_namespace: gpts
agentportal:
image: kubeagi/agent-portal:v0.1.0-20240401-bc9e42d
image: kubeagi/agent-portal:v0.1.0-20240411-e26a310
ingress:
path: ""
host: gpts.<replaced-ingress-nginx-ip>.nip.io
@@ -91,7 +91,7 @@ fastchat:
enabled: true
image:
repository: kubeagi/arcadia-fastchat
tag: v0.2.36
tag: v0.2.36-patch
ingress:
enabled: true
host: fastchat-api.<replaced-ingress-nginx-ip>.nip.io
@@ -131,7 +131,7 @@ minio:
# Related project: https://github.com/kubeagi/arcadia/tree/main/data-processing
dataprocess:
enabled: true
image: kubeagi/data-processing:v0.2.1
image: kubeagi/data-processing:v0.2.2
port: 28888
config:
llm:
