Merge branch 'main' into example/swe_agent
garyzhang99 authored May 15, 2024
2 parents 815c99f + 3eed5af commit f7be053
Showing 45 changed files with 1,798 additions and 299 deletions.
2 changes: 2 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -7,6 +7,8 @@ assignees: ''

---

**<u>AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.</u>**

**Describe the bug**
A clear and concise description of what the bug is.

3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/custom.md
@@ -7,4 +7,7 @@ assignees: ''

---

**<u>AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.</u>**



3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -7,6 +7,9 @@ assignees: ''

---

**<u>AgentScope is an open-source project. To involve a broader community, we recommend asking your questions in English.</u>**


**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

51 changes: 31 additions & 20 deletions README.md

Large diffs are not rendered by default.

21 changes: 16 additions & 5 deletions README_ZH.md
@@ -2,6 +2,10 @@

# AgentScope

<h1 align="left">
<img src="https://img.alicdn.com/imgextra/i2/O1CN01cdjhVE1wwt5Auv7bY_!!6000000006373-0-tps-1792-1024.jpg" width="600" alt="agentscope-logo">
</h1>

Build LLM-empowered multi-agent applications in an easier way.

[![](https://img.shields.io/badge/cs.MA-2402.14034-B31C1C?logo=arxiv&logoColor=B31C1C)](https://arxiv.org/abs/2402.14034)
@@ -24,15 +28,19 @@

## News

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-06]** 我们现在发布了**AgentScope** v0.0.4版本!
- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/> **[2024-05-14]** 亲爱的AgentScope用户,我们正在进行AgentScope Workstation & Copilot用户体验的调研活动,目前需要您宝贵的意见来帮助我们改善AgentScope的拖拽式多智能体应用开发与Copilot体验。您的每一个反馈都十分宝贵,调研需要大约3~5分钟。请您点击 [链接](https://survey.aliyun.com/apps/zhiliao/vgpTppn22) 参与问卷,十分感谢您的支持与贡献!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-05-14]** AgentScope 现已支持 **gpt-4o** 等 OpenAI Vision 模型! 模型配置请见[链接](./examples/model_configs_template/openai_chat_template.json)。同时,新的样例“[与gpt-4o模型对话](./examples/conversation_with_gpt-4o)”已上线!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-30]** 我们现在发布了**AgentScope** v0.0.4版本!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-27]** [AgentScope Workstation](https://agentscope.aliyun.com/)上线了! 欢迎使用 Workstation 体验如何在*拖拉拽编程平台* 零代码搭建多智体应用,也欢迎大家通过*copilot*查询AgentScope各种小知识!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-19]** AgentScope现已经支持Llama3!我们提供了面向CPU推理和GPU推理的[脚本](./examples/model_llama3)[模型配置](./examples/model_llama3),一键式开启Llama3的探索,在我们的样例中尝试Llama3吧!
- **[2024-04-19]** AgentScope现已经支持Llama3!我们提供了面向CPU推理和GPU推理的[脚本](./examples/model_llama3)[模型配置](./examples/model_llama3),一键式开启Llama3的探索,在我们的样例中尝试Llama3吧!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-06]** 我们现在发布了**AgentScope** v0.0.3版本!
- **[2024-04-06]** 我们现在发布了**AgentScope** v0.0.3版本!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-04-06]** 新的样例“[五子棋](./examples/game_gomoku)”,“[与ReAct智能体对话](./examples/conversation_with_react_agent)”,“[与RAG智能体对话](./examples/conversation_with_RAG_agents)”,“[分布式并行搜索](./examples/distributed_search)”上线了!
- **[2024-04-06]** 新的样例“[五子棋](./examples/game_gomoku)”,“[与ReAct智能体对话](./examples/conversation_with_react_agent)”,“[与RAG智能体对话](./examples/conversation_with_RAG_agents)”,“[分布式并行搜索](./examples/distributed_search)”上线了!

- **[2024-03-19]** We now release **AgentScope** v0.0.2! In this new version, AgentScope supports the [ollama](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) (local CPU inference engine), [DashScope](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) and [Gemini](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) APIs.

@@ -62,7 +70,7 @@ AgentScope provides a series of `ModelWrapper` to support local model services and third-party ...

| API | Task | Model Wrapper | Configuration | Some Supported Models |
|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------|
| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4, gpt-3.5-turbo, ... |
| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... |
| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... |
| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 |
| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... |
@@ -76,6 +84,7 @@ AgentScope provides a series of `ModelWrapper` to support local model services and third-party ...
| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... |
| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... |
| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... |
| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... |
| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api) <br> [template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - |
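
A minimal sketch of how such a configuration is used (the `agentscope.init` and `DialogAgent` arguments here are assumptions based on the current API; LiteLLM expects the matching API key, e.g. `OPENAI_API_KEY`, in the environment):

```python
import agentscope
from agentscope.agents import DialogAgent
from agentscope.message import Msg

# Register a model config once; agents then refer to it by its config_name.
agentscope.init(
    model_configs=[
        {
            "config_name": "lite_llm_openai_chat_gpt-3.5-turbo",
            "model_type": "litellm_chat",   # selects LiteLLMChatWrapper
            "model_name": "gpt-3.5-turbo",
        },
    ],
)

# The agent resolves the model wrapper through the registered config name.
agent = DialogAgent(
    name="assistant",
    sys_prompt="You are a helpful assistant.",
    model_config_name="lite_llm_openai_chat_gpt-3.5-turbo",
)

reply = agent(Msg(name="user", content="Hi!", role="user"))
```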

**Supported Local Model Deployment**
@@ -111,6 +120,8 @@ AgentScope supports fast deployment of local model services using the following libraries.
  - [Querying SQL Information via Conversation](./examples/conversation_nl2sql/)
  - [Conversation with RAG Agent](./examples/conversation_with_RAG_agents)
  - [Conversation with SWE Agent](./examples/swe_agent/)
  - <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>[Conversation with gpt-4o](./examples/conversation_with_gpt-4o)


- Game
  - [Gomoku](./examples/game_gomoku)
22 changes: 22 additions & 0 deletions docs/sphinx_doc/en/source/tutorial/203-model.md
@@ -16,6 +16,7 @@ Currently, AgentScope supports the following model service APIs:
- Gemini API, including chat and embedding.
- ZhipuAI API, including chat and embedding.
- Ollama API, including chat, embedding and generation.
- LiteLLM API, including chat, which supports various model APIs.
- Post Request API, model inference services based on Post
requests, including Huggingface/ModelScope Inference API and various
post request based model APIs.
@@ -87,6 +88,7 @@ In the current AgentScope, the supported `model_type` types, the corresponding
| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` | llama2, ... |
| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_embedding"` | llama2, ... |
| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` | llama2, ... |
| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - |
| Post Request based API | - | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` | - |
| | Chat | [`PostAPIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api_chat"` | meta-llama/Meta-Llama-3-8B-Instruct, ... |

@@ -440,6 +442,26 @@ Here we provide example configurations for different model wrappers.

<br/>


#### LiteLLM Chat API

<details>
<summary>LiteLLM Chat API (<code><a href="https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py">agentscope.models.LiteLLMChatWrapper</a></code>)</summary>

```python
{
"config_name": "lite_llm_openai_chat_gpt-3.5-turbo",
"model_type": "litellm_chat",
"model_name": "gpt-3.5-turbo" # You should note that for different models, you should set the corresponding environment variables, such as OPENAI_API_KEY, etc. You may refer to https://docs.litellm.ai/docs/ for this.
},
```

</details>

<br/>
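
Beyond the configuration, a rough usage sketch for this wrapper, mirroring the direct-initialization pattern used for `OpenAIChatWrapper` in the prompt tutorial (the constructor arguments and the `response.text` field are assumptions here, and the matching API key must be set in the environment as LiteLLM expects):

```python
from agentscope.models import LiteLLMChatWrapper
from agentscope.message import Msg

# Directly initialize the wrapper; assumed to accept config_name/model_name
# like the other chat wrappers. Requires OPENAI_API_KEY for gpt-3.5-turbo.
model = LiteLLMChatWrapper(
    config_name="lite_llm_openai_chat_gpt-3.5-turbo",
    model_name="gpt-3.5-turbo",
)

# Format the messages into the prompt structure the wrapper expects, then call it.
prompt = model.format(
    Msg("system", "You are a helpful assistant", role="system"),
    Msg("user", "Hi!", role="user"),
)
response = model(prompt)
print(response.text)
```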


#### Post Request Chat API

<details>
5 changes: 4 additions & 1 deletion docs/sphinx_doc/en/source/tutorial/204-service.md
@@ -26,7 +26,10 @@ The following table outlines the various Service functions by type. These functi
| | `arxiv_search` | Perform arXiv search |
| | `download_from_url` | Download file from given URL. |
| | `load_web` | Load and parse the web page of the specified url (currently only supports HTML). |
| | `digest_webpage` | Digest the content of a already loaded web page (currently only supports HTML). |
| | `digest_webpage` | Digest the content of an already loaded web page (currently only supports HTML). |
| | `dblp_search_publications` | Search publications in the DBLP database |
| | `dblp_search_authors` | Search for author information in the DBLP database |
| | `dblp_search_venues` | Search for venue information in the DBLP database |
| File | `create_file` | Create a new file at a specified path, optionally with initial content. |
| | `delete_file` | Delete a file specified by a file path. |
| | `move_file` | Move or rename a file from one path to another. |
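
For the newly added DBLP functions, a minimal sketch of a direct call (the keyword arguments below are assumptions; check the actual signatures in `agentscope.service`):

```python
from agentscope.service import dblp_search_publications

# Service functions return a ServiceResponse carrying a status flag and the
# retrieved content; the parameter names here are assumed, not verified.
response = dblp_search_publications(question="multi-agent systems", num_results=3)
print(response.status)
print(response.content)
```
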
89 changes: 81 additions & 8 deletions docs/sphinx_doc/en/source/tutorial/206-prompt.md
@@ -64,6 +64,8 @@ dictionaries as input, where the dictionary must obey the following rules

#### Prompt Strategy

##### Non-Vision Models

In the OpenAI Chat API, the `name` field enables the model to distinguish
different speakers in the conversation. Therefore, the strategy of the `format`
function in `OpenAIChatWrapper` is simple:
@@ -100,6 +102,75 @@ print(prompt)
]
```

##### Vision Models

For vision models (gpt-4-turbo, gpt-4o, ...), if the input message contains image urls, the generated `content` field will be a list of dicts containing text and image urls.

Specifically, web image urls are passed to the OpenAI Chat API directly, while local image urls are converted to base64 format. For more details, please refer to the [official guidance](https://platform.openai.com/docs/guides/vision).

Note that invalid image urls (e.g. `/Users/xxx/test.mp3`) will be ignored.

```python
from agentscope.models import OpenAIChatWrapper
from agentscope.message import Msg

model = OpenAIChatWrapper(
config_name="", # empty since we directly initialize the model wrapper
model_name="gpt-4o",
)

prompt = model.format(
Msg("system", "You're a helpful assistant", role="system"), # Msg object
[ # a list of Msg objects
Msg(name="user", content="Describe this image", role="user", url="https://xxx.png"),
Msg(name="user", content="And these images", role="user", url=["/Users/xxx/test.png", "/Users/xxx/test.mp3"]),
],
)
print(prompt)
```

```python
[
{
"role": "system",
"name": "system",
"content": "You are a helpful assistant"
},
{
"role": "user",
"name": "user",
"content": [
{
"type": "text",
"text": "Describe this image"
},
{
"type": "image_url",
"image_url": {
"url": "https://xxx.png"
}
},
]
},
{
"role": "user",
"name": "user",
"content": [
{
"type": "text",
"text": "And these images"
},
{
"type": "image_url",
"image_url": {
"url": "data:image/png;base64,YWJjZGVm..." # for /Users/xxx/test.png
}
},
]
},
]
```
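
The base64 conversion mentioned above can be pictured with the following sketch of the general technique (illustrative only, not AgentScope's exact implementation):

```python
import base64

def to_data_url(path: str) -> str:
    """Read a local PNG file and return an OpenAI-style base64 data URL."""
    with open(path, "rb") as image_file:
        encoded = base64.b64encode(image_file.read()).decode("utf-8")
    return f"data:image/png;base64,{encoded}"

# e.g. to_data_url("/Users/xxx/test.png") -> "data:image/png;base64,iVBORw0..."
```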

### DashScopeChatWrapper

`DashScopeChatWrapper` encapsulates the DashScope chat API, which takes a list of messages as input. The message must obey the following rules (updated in 2024/03/22):
@@ -241,11 +312,11 @@ messages as input. The message must obey the following rules (updated in

#### Prompt Strategy

Given a list of messages, we will parse each message as follows:

- `Msg`: Fill the `role` and `content` fields directly. If it has an `url`
field, which refers to an image, we will add it to the message.
- `List`: Parse each element in the list according to the above rules.
- If the `role` field of the first input message is `"system"`, it will be
treated as the system prompt, and the other messages will be combined into a
dialogue history that is appended to the system message, prefixed by
"## Dialogue History".
- If the `url` attribute of any message is not `None`, we will gather all urls
into the `"images"` field of the returned dictionary.

```python
from agentscope.models import OllamaChatWrapper
@@ -268,9 +339,11 @@ print(prompt)

```bash
[
{"role": "system", "content": "You are a helpful assistant"},
{"role": "assistant", "content": "Hi."},
{"role": "assistant", "content": "Nice to meet you!", "images": ["https://example.com/image.jpg"]},
{
"role": "system",
"content": "You are a helpful assistant\n\n## Dialogue History\nBob: Hi.\nAlice: Nice to meet you!",
"images": ["https://example.com/image.jpg"]
},
]
```
