diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
index c1bf236b4..5de222fa8 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.md
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -7,6 +7,8 @@ assignees: ''
---
+**AgentScope is an open-source project. To reach a broader community, we recommend asking your questions in English.**
+
**Describe the bug**
A clear and concise description of what the bug is.
diff --git a/.github/ISSUE_TEMPLATE/custom.md b/.github/ISSUE_TEMPLATE/custom.md
index 48d5f81fa..8ff2e28ce 100644
--- a/.github/ISSUE_TEMPLATE/custom.md
+++ b/.github/ISSUE_TEMPLATE/custom.md
@@ -7,4 +7,7 @@ assignees: ''
---
+**AgentScope is an open-source project. To reach a broader community, we recommend asking your questions in English.**
+
+
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
index 215ab80f1..6d6921243 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -7,6 +7,9 @@ assignees: ''
---
+**AgentScope is an open-source project. To reach a broader community, we recommend asking your questions in English.**
+
+
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
diff --git a/README.md b/README.md
index caf95fbce..99bfca906 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,10 @@ English | [**中文**](README_ZH.md)
# AgentScope
+
+
+
+
Start building LLM-empowered multi-agent applications in an easier way.
[![](https://img.shields.io/badge/cs.MA-2402.14034-B31C1C?logo=arxiv&logoColor=B31C1C)](https://arxiv.org/abs/2402.14034)
@@ -27,15 +31,19 @@ Welcome to join our community on
## News
+- **[2024-05-14]** Dear AgentScope users, we are conducting a survey on the **AgentScope Workstation & Copilot** user experience. Your valuable feedback will help us improve AgentScope's drag-and-drop multi-agent application development and Copilot. The survey takes only about 3~5 minutes, so please click [URL](https://survey.aliyun.com/apps/zhiliao/vgpTppn22) to participate. Thank you very much for your support and contribution!
+
+- **[2024-05-14]** AgentScope supports **gpt-4o** as well as other OpenAI vision models now! Try gpt-4o with its [model configuration](./examples/model_configs_template/openai_chat_template.json) and new example [Conversation with gpt-4o](./examples/conversation_with_gpt-4o)!
+
- **[2024-04-30]** We release **AgentScope** v0.0.4 now!
- **[2024-04-27]** [AgentScope Workstation](https://agentscope.aliyun.com/) is now online! You are welcome to try building your multi-agent application simply with our *drag-and-drop platform* and ask our *copilot* questions about AgentScope!
-- **[2024-04-19]** AgentScope supports Llama3 now! We provide [scripts](./examples/model_llama3) and example [model configuration](./examples/model_llama3) for quick set-up. Feel free to try llama3 in our examples!
+- **[2024-04-19]** AgentScope supports Llama3 now! We provide [scripts](./examples/model_llama3) and example [model configuration](./examples/model_llama3) for quick set-up. Feel free to try llama3 in our examples!
-- **[2024-04-06]** We release **AgentScope** v0.0.3 now!
+- **[2024-04-06]** We release **AgentScope** v0.0.3 now!
-- **[2024-04-06]** New examples [Gomoku](./examples/game_gomoku), [Conversation with ReAct Agent](./examples/conversation_with_react_agent), [Conversation with RAG Agent](./examples/conversation_with_RAG_agents) and [Distributed Parallel Search](./examples/distributed_search) are available now!
+- **[2024-04-06]** New examples [Gomoku](./examples/game_gomoku), [Conversation with ReAct Agent](./examples/conversation_with_react_agent), [Conversation with RAG Agent](./examples/conversation_with_RAG_agents) and [Distributed Parallel Search](./examples/distributed_search) are available now!
- **[2024-03-19]** We release **AgentScope** v0.0.2 now! In this new version,
AgentScope supports [ollama](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models)(A local CPU inference engine), [DashScope](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) and Google [Gemini](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) APIs.
@@ -71,23 +79,24 @@ applications in a centralized programming manner for streamlined development.
AgentScope provides a list of `ModelWrapper` to support both local model
services and third-party model APIs.
-| API | Task | Model Wrapper | Configuration | Some Supported Models |
-|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------|
-| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4, gpt-3.5-turbo, ... |
-| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... |
-| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 |
-| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... |
-| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 |
-| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... |
-| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat |
-| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... |
-| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... |
-| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... |
-| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... |
-| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... |
-| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... |
-| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... |
-| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - |
+| API | Task | Model Wrapper | Configuration | Some Supported Models |
+|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------------------------|
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... |
+| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... |
+| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 |
+| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... |
+| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 |
+| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... |
+| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat |
+| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... |
+| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... |
+| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... |
+| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... |
+| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... |
+| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... |
+| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... |
+| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... |
+| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - |
**Supported Local Model Deployment**
@@ -123,6 +132,8 @@ the following libraries.
- [Conversation in Natural Language to Query SQL](./examples/conversation_nl2sql/)
- [Conversation with RAG Agent](./examples/conversation_with_RAG_agents)
- [Conversation with SWE Agnet](./examples/swe_agent/)
+ - [Conversation with gpt-4o](./examples/conversation_with_gpt-4o)
+
- Game
- [Gomoku](./examples/game_gomoku)
diff --git a/README_ZH.md b/README_ZH.md
index 693106447..eb6b9d236 100644
--- a/README_ZH.md
+++ b/README_ZH.md
@@ -2,6 +2,10 @@
# AgentScope
+
+
+
+
更简单地构建基于LLM的多智能体应用。
[![](https://img.shields.io/badge/cs.MA-2402.14034-B31C1C?logo=arxiv&logoColor=B31C1C)](https://arxiv.org/abs/2402.14034)
@@ -24,15 +28,19 @@
## 新闻
-- **[2024-04-06]** 我们现在发布了**AgentScope** v0.0.4版本!
+- **[2024-05-14]** 亲爱的AgentScope用户,我们正在进行AgentScope Workstation & Copilot用户体验的调研活动,目前需要您宝贵的意见来帮助我们改善AgentScope的拖拽式多智能体应用开发与Copilot体验。您的每一个反馈都十分宝贵,调研需要大约3~5分钟。请您点击 [链接](https://survey.aliyun.com/apps/zhiliao/vgpTppn22) 参与问卷,十分感谢您的支持与贡献!
+
+- **[2024-05-14]** AgentScope 现已支持 **gpt-4o** 等 OpenAI Vision 模型! 模型配置请见[链接](./examples/model_configs_template/openai_chat_template.json)。同时,新的样例“[与gpt-4o模型对话](./examples/conversation_with_gpt-4o)”已上线!
+
+- **[2024-04-30]** 我们现在发布了**AgentScope** v0.0.4版本!
- **[2024-04-27]** [AgentScope Workstation](https://agentscope.aliyun.com/)上线了! 欢迎使用 Workstation 体验如何在*拖拉拽编程平台* 零代码搭建多智体应用,也欢迎大家通过*copilot*查询AgentScope各种小知识!
-- **[2024-04-19]** AgentScope现已经支持Llama3!我们提供了面向CPU推理和GPU推理的[脚本](./examples/model_llama3)和[模型配置](./examples/model_llama3),一键式开启Llama3的探索,在我们的样例中尝试Llama3吧!
+- **[2024-04-19]** AgentScope现已经支持Llama3!我们提供了面向CPU推理和GPU推理的[脚本](./examples/model_llama3)和[模型配置](./examples/model_llama3),一键式开启Llama3的探索,在我们的样例中尝试Llama3吧!
-- **[2024-04-06]** 我们现在发布了**AgentScope** v0.0.3版本!
+- **[2024-04-06]** 我们现在发布了**AgentScope** v0.0.3版本!
-- **[2024-04-06]** 新的样例“[五子棋](./examples/game_gomoku)”,“[与ReAct智能体对话](./examples/conversation_with_react_agent)”,“[与RAG智能体对话](./examples/conversation_with_RAG_agents)”,“[分布式并行搜索](./examples/distributed_search)”上线了!
+- **[2024-04-06]** 新的样例“[五子棋](./examples/game_gomoku)”,“[与ReAct智能体对话](./examples/conversation_with_react_agent)”,“[与RAG智能体对话](./examples/conversation_with_RAG_agents)”,“[分布式并行搜索](./examples/distributed_search)”上线了!
- **[2024-03-19]** 我们现在发布了**AgentScope** v0.0.2版本!在这个新版本中,AgentScope支持了[ollama](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models)(本地CPU推理引擎),[DashScope](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models)和[Gemini](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#supported-models) APIs。
@@ -62,7 +70,7 @@ AgentScope提供了一系列`ModelWrapper`来支持本地模型服务和第三
| API | Task | Model Wrapper | Configuration | Some Supported Models |
|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------|
-| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4, gpt-3.5-turbo, ... |
+| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... |
| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... |
| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 |
| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... |
@@ -76,6 +84,7 @@ AgentScope提供了一系列`ModelWrapper`来支持本地模型服务和第三
| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... |
| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... |
| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... |
+| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... |
| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)<br>[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - |
**支持的本地模型部署**
@@ -111,6 +120,8 @@ AgentScope支持使用以下库快速部署本地模型服务。
- [通过对话查询SQL信息](./examples/conversation_nl2sql/)
- [与RAG智能体对话](./examples/conversation_with_RAG_agents)
- [与SWE智能体对话](./examples/swe_agent/)
+ - [与gpt-4o模型对话](./examples/conversation_with_gpt-4o)
+
- 游戏
- [五子棋](./examples/game_gomoku)
diff --git a/docs/sphinx_doc/en/source/tutorial/203-model.md b/docs/sphinx_doc/en/source/tutorial/203-model.md
index 08ef18dc5..d6e153d0f 100644
--- a/docs/sphinx_doc/en/source/tutorial/203-model.md
+++ b/docs/sphinx_doc/en/source/tutorial/203-model.md
@@ -16,6 +16,7 @@ Currently, AgentScope supports the following model service APIs:
- Gemini API, including chat and embedding.
- ZhipuAI API, including chat and embedding.
- Ollama API, including chat, embedding and generation.
+- LiteLLM API, including chat, which supports various model APIs.
- Post Request API, model inference services based on Post
requests, including Huggingface/ModelScope Inference API and various
post request based model APIs.
@@ -87,6 +88,7 @@ In the current AgentScope, the supported `model_type` types, the corresponding
| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` | llama2, ... |
| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_embedding"` | llama2, ... |
| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` | llama2, ... |
+| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - |
| Post Request based API | - | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` | - |
| | Chat | [`PostAPIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api_chat"` | meta-llama/Meta-Llama-3-8B-Instruct, ... |
@@ -440,6 +442,26 @@ Here we provide example configurations for different model wrappers.
+
+#### LiteLLM Chat API
+
+
+LiteLLM Chat API (agentscope.models.LiteLLMChatModelWrapper)
+
+```python
+{
+ "config_name": "lite_llm_openai_chat_gpt-3.5-turbo",
+ "model_type": "litellm_chat",
+    "model_name": "gpt-3.5-turbo" # Set the environment variables required by the chosen model (e.g. OPENAI_API_KEY); see https://docs.litellm.ai/docs/ for details.
+},
+```
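+
+A minimal usage sketch of this configuration (for illustration only; it assumes the config above is saved in `litellm_chat_template.json` and that the environment variable required by the chosen model, such as `OPENAI_API_KEY`, is already set):
+
+```python
+import agentscope
+from agentscope.agents import DialogAgent
+
+# Load the LiteLLM model configuration shown above
+agentscope.init(model_configs="./litellm_chat_template.json")
+
+# Build an agent that talks to gpt-3.5-turbo through LiteLLM
+agent = DialogAgent(
+    name="assistant",
+    sys_prompt="You're a helpful assistant.",
+    model_config_name="lite_llm_openai_chat_gpt-3.5-turbo",
+)
+```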
+
+
+
+
+
+
#### Post Request Chat API
diff --git a/docs/sphinx_doc/en/source/tutorial/204-service.md b/docs/sphinx_doc/en/source/tutorial/204-service.md
index 30d82242d..88fccc4b4 100644
--- a/docs/sphinx_doc/en/source/tutorial/204-service.md
+++ b/docs/sphinx_doc/en/source/tutorial/204-service.md
@@ -26,7 +26,10 @@ The following table outlines the various Service functions by type. These functi
| | `arxiv_search` | Perform arXiv search |
| | `download_from_url` | Download file from given URL. |
| | `load_web` | Load and parse the web page of the specified url (currently only supports HTML). |
-| | `digest_webpage` | Digest the content of a already loaded web page (currently only supports HTML). |
+| | `digest_webpage` | Digest the content of an already loaded web page (currently only supports HTML). |
+| | `dblp_search_publications` | Search publications in the DBLP database |
+| | `dblp_search_authors` | Search for author information in the DBLP database |
+| | `dblp_search_venues` | Search for venue information in the DBLP database |
| File | `create_file` | Create a new file at a specified path, optionally with initial content. |
| | `delete_file` | Delete a file specified by a file path. |
| | `move_file` | Move or rename a file from one path to another. |
diff --git a/docs/sphinx_doc/en/source/tutorial/206-prompt.md b/docs/sphinx_doc/en/source/tutorial/206-prompt.md
index d33a8612e..e30e8abd8 100644
--- a/docs/sphinx_doc/en/source/tutorial/206-prompt.md
+++ b/docs/sphinx_doc/en/source/tutorial/206-prompt.md
@@ -64,6 +64,8 @@ dictionaries as input, where the dictionary must obey the following rules
#### Prompt Strategy
+##### Non-Vision Models
+
In OpenAI Chat API, the `name` field enables the model to distinguish
different speakers in the conversation. Therefore, the strategy of `format`
function in `OpenAIChatWrapper` is simple:
@@ -100,6 +102,75 @@ print(prompt)
]
```
+##### Vision Models
+
+For vision models (gpt-4-turbo, gpt-4o, ...), if the input message contains image urls, the generated `content` field will be a list of dicts, which contains text and image urls.
+
+Specifically, web image urls are passed to the OpenAI Chat API directly, while local image urls are converted to base64 format. For more details, please refer to the [official guidance](https://platform.openai.com/docs/guides/vision).
+
+Note that invalid image urls (e.g. `/Users/xxx/test.mp3`) will be ignored.
+
+```python
+from agentscope.models import OpenAIChatWrapper
+from agentscope.message import Msg
+
+model = OpenAIChatWrapper(
+ config_name="", # empty since we directly initialize the model wrapper
+ model_name="gpt-4o",
+)
+
+prompt = model.format(
+ Msg("system", "You're a helpful assistant", role="system"), # Msg object
+ [ # a list of Msg objects
+ Msg(name="user", content="Describe this image", role="user", url="https://xxx.png"),
+ Msg(name="user", content="And these images", role="user", url=["/Users/xxx/test.png", "/Users/xxx/test.mp3"]),
+ ],
+)
+print(prompt)
+```
+
+```python
+[
+ {
+ "role": "system",
+ "name": "system",
+ "content": "You are a helpful assistant"
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "Describe this image"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "https://xxx.png"
+ }
+ },
+ ]
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "And these images"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "data:image/png;base64,YWJjZGVm..." # for /Users/xxx/test.png
+ }
+ },
+ ]
+ },
+]
+```
+
### DashScopeChatWrapper
`DashScopeChatWrapper` encapsulates the DashScope chat API, which takes a list of messages as input. The message must obey the following rules (updated in 2024/03/22):
@@ -241,11 +312,11 @@ messages as input. The message must obey the following rules (updated in
#### Prompt Strategy
-Given a list of messages, we will parse each message as follows:
-
-- `Msg`: Fill the `role` and `content` fields directly. If it has an `url`
- field, which refers to an image, we will add it to the message.
-- `List`: Parse each element in the list according to the above rules.
+- If the `role` field of the first input message is `"system"`,
+it will be treated as the system prompt, and the other messages will form
+the dialogue history, which is appended to the system message with the
+prefix "## Dialogue History".
+- If the `url` attribute of messages is not `None`, we will gather all urls into
+the `"images"` field of the returned dictionary.
```python
from agentscope.models import OllamaChatWrapper
@@ -268,9 +339,11 @@ print(prompt)
```bash
[
- {"role": "system", "content": "You are a helpful assistant"},
- {"role": "assistant", "content": "Hi."},
- {"role": "assistant", "content": "Nice to meet you!", "images": ["https://example.com/image.jpg"]},
+ {
+ "role": "system",
+ "content": "You are a helpful assistant\n\n## Dialogue History\nBob: Hi.\nAlice: Nice to meet you!",
+ "images": ["https://example.com/image.jpg"]
+ },
]
```
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md
index 0528abae8..7b912cbf2 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md
@@ -13,6 +13,7 @@ AgentScope中,模型的部署和调用是通过`ModelWrapper`来解耦开的
- Gemini API,包括对话(Chat)和嵌入(Embedding)。
- ZhipuAi API,包括对话(Chat)和嵌入(Embedding)。
- Ollama API,包括对话(Chat),嵌入(Embedding)和生成(Generation)。
+- LiteLLM API,包括对话(Chat),支持各种模型的API。
- Post请求API,基于Post请求实现的模型推理服务,包括Huggingface/ModelScope
Inference API和各种符合Post请求格式的API。
@@ -107,6 +108,7 @@ API如下:
| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` | llama2, ... |
| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_embedding"` | llama2, ... |
| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` | llama2, ... |
+| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - |
| Post Request based API | - | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` | - |
| | Chat | [`PostAPIChatModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api_chat"` | meta-llama/Meta-Llama-3-8B-Instruct, ... |
@@ -435,6 +437,26 @@ API如下:
+
+#### LiteLLM Chat API
+
+
+LiteLLM Chat API (agentscope.models.LiteLLMChatModelWrapper)
+
+```python
+{
+ "config_name": "lite_llm_openai_chat_gpt-3.5-turbo",
+ "model_type": "litellm_chat",
+    "model_name": "gpt-3.5-turbo" # Set the environment variables required by the chosen model (e.g. OPENAI_API_KEY); see https://docs.litellm.ai/docs/ for details.
+},
+```
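+
+以下是该配置的一个简单使用示例(仅作说明;假设上述配置保存在 `litellm_chat_template.json` 中,并且已设置所选模型需要的环境变量,例如 `OPENAI_API_KEY`):
+
+```python
+import agentscope
+from agentscope.agents import DialogAgent
+
+# 加载上述 LiteLLM 模型配置
+agentscope.init(model_configs="./litellm_chat_template.json")
+
+# 创建一个通过 LiteLLM 调用 gpt-3.5-turbo 的智能体
+agent = DialogAgent(
+    name="assistant",
+    sys_prompt="You're a helpful assistant.",
+    model_config_name="lite_llm_openai_chat_gpt-3.5-turbo",
+)
+```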
+
+
+
+
+
+
#### Post Request API
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
index 68f165cba..6892b6339 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
@@ -23,7 +23,10 @@
| | `arxiv_search` | 使用arxiv搜索。 |
| | `download_from_url` | 从指定的 URL 下载文件。 |
| | `load_web` | 爬取并解析指定的网页链接 (目前仅支持爬取 HTML 页面) |
-| | `digest_webpage` | 对已经爬取好的网页生成摘要信息(目前仅支持 HTML 页面) |
+| | `digest_webpage` | 对已经爬取好的网页生成摘要信息(目前仅支持 HTML 页面) |
+| | `dblp_search_publications` | 在dblp数据库里搜索文献。 |
+| | `dblp_search_authors` | 在dblp数据库里搜索作者。 |
+| | `dblp_search_venues` | 在dblp数据库里搜索期刊,会议及研讨会。 |
| 文件处理 | `create_file` | 在指定路径创建一个新文件,并可选择添加初始内容。 |
| | `delete_file` | 删除由文件路径指定的文件。 |
| | `move_file` | 将文件从一个路径移动或重命名到另一个路径。 |
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
index 5648d2193..7ed143cfe 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
@@ -42,6 +42,8 @@ AgentScope为以下的模型API提供了内置的提示构建策略。
#### 提示的构建策略
+##### 非视觉(Vision)模型
+
在OpenAI Chat API中,`name`字段使模型能够区分对话中的不同发言者。因此,`OpenAIChatWrapper`中`format`函数的策略很简单:
- `Msg`: 直接将带有`role`、`content`和`name`字段的字典传递给API。
@@ -76,6 +78,75 @@ print(prompt)
]
```
+##### 视觉(Vision)模型
+
+对支持视觉的模型而言,如果输入消息包含图像url,生成的`content`字段将是一个字典的列表,其中包含文本和图像url。
+
+具体来说,如果是网络图片url,将直接传递给OpenAI Chat API,而本地图片url将被转换为base64格式。更多细节请参考[官方指南](https://platform.openai.com/docs/guides/vision)。
+
+注意无效的图片url(例如`/Users/xxx/test.mp3`)将被忽略。
+
+```python
+from agentscope.models import OpenAIChatWrapper
+from agentscope.message import Msg
+
+model = OpenAIChatWrapper(
+ config_name="", # 为空,因为我们直接初始化model wrapper
+ model_name="gpt-4o",
+)
+
+prompt = model.format(
+ Msg("system", "You're a helpful assistant", role="system"), # Msg 对象
+ [ # Msg 对象的列表
+ Msg(name="user", content="Describe this image", role="user", url="https://xxx.png"),
+ Msg(name="user", content="And these images", role="user", url=["/Users/xxx/test.png", "/Users/xxx/test.mp3"]),
+ ],
+)
+print(prompt)
+```
+
+```python
+[
+ {
+ "role": "system",
+ "name": "system",
+ "content": "You are a helpful assistant"
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "Describe this image"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "https://xxx.png"
+ }
+ },
+ ]
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "And these images"
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "data:image/png;base64,YWJjZGVm..." # 对应 /Users/xxx/test.png
+ }
+ },
+ ]
+ },
+]
+```
+
### `DashScopeChatWrapper`
`DashScopeChatWrapper`封装了DashScope聊天API,它接受消息列表作为输入。消息必须遵守以下规则:
@@ -211,8 +282,10 @@ print(prompt)
给定一个消息列表,我们将按照以下规则解析每个消息:
-- `Msg`:直接填充`role`和`content`字段。如果它有一个`url`字段,指向一个图片,我们将把它添加到消息中。
-- `List`:根据上述规则解析列表中的每个元素。
+- 如果输入的第一条信息的`role`字段是`"system"`,该条信息将被视为系统提示(system
+ prompt),其他信息将一起组成对话历史。对话历史将添加`"## Dialogue History"`的前缀,并与
+系统提示一起组成一条`role`为`"system"`的信息。
+- 如果输入信息中的`url`字段不为`None`,则这些url将一起被置于`"images"`对应的键值中。
```python
from agentscope.models import OllamaChatWrapper
@@ -235,9 +308,11 @@ print(prompt)
```bash
[
- {"role": "system", "content": "You are a helpful assistant"},
- {"role": "assistant", "content": "Hi."},
- {"role": "assistant", "content": "Nice to meet you!", "images": ["https://example.com/image.jpg"]},
+ {
+ "role": "system",
+ "content": "You are a helpful assistant\n\n## Dialogue History\nBob: Hi.\nAlice: Nice to meet you!",
+ "images": ["https://example.com/image.jpg"]
+ },
]
```
diff --git a/examples/conversation_basic/README.md b/examples/conversation_basic/README.md
index 1bdd093a2..eb89720a0 100644
--- a/examples/conversation_basic/README.md
+++ b/examples/conversation_basic/README.md
@@ -1,5 +1,6 @@
# Multi-Agent Conversation in AgentScope
-This is a demo of how to program a multi-agent conversation in AgentScope.
+
+This example will show how to program a multi-agent conversation in AgentScope.
Complete code is in `conversation.py`, which set up a user agent and an
assistant agent to have a conversation. When user input "exit", the
conversation ends.
@@ -8,5 +9,13 @@ You can modify the `sys_prompt` to change the role of assistant agent.
# Note: Set your api_key in conversation.py first
python conversation.py
```
+## Tested Models
+
+These models are tested in this example. For other models, some modifications may be needed.
+- dashscope_chat (qwen-max)
+- ollama_chat (ollama_llama3_8b)
+- gemini_chat (models/gemini-pro)
+
+## Prerequisites
To set up model serving with open-source LLMs, follow the guidance in
-[scripts/REAMDE.md](../../scripts/README.md).
\ No newline at end of file
+[scripts/README.md](../../scripts/README.md).
diff --git a/examples/conversation_self_organizing/README.md b/examples/conversation_self_organizing/README.md
new file mode 100644
index 000000000..0c1fd6d45
--- /dev/null
+++ b/examples/conversation_self_organizing/README.md
@@ -0,0 +1,29 @@
+# Self-Organizing Conversation Example
+
+This example will show
+- How to set up a self-organizing conversation using the `DialogAgent` and `agent_builder`
+- How to extract the discussion scenario and participant agents from the `agent_builder`'s response
+- How to conduct a multi-round discussion among the participant agents
+
+
+## Background
+
+In this example, we demonstrate how to create a self-organizing conversation where the `agent_builder` automatically sets up the agents participating in the discussion based on a given question. The `agent_builder` provides the discussion scenario and the characteristics of the participant agents. The participant agents then engage in a multi-round discussion to solve the given question.
+
+
+## Tested Models
+
+These models are tested in this example. For other models, some modifications may be needed.
+- `dashscope_chat` with `qwen-turbo`
+- `ollama_chat` with `llama3_8b`
+- `gemini_chat` with `models/gemini-1.0-pro-latest`
+
+
+## Prerequisites
+
+Meet the following requirements before running this example:
+- Set up the `model_configs` with the appropriate API keys and endpoints
+- Provide the path to the `agent_builder_instruct.txt` file in the `load_txt` function
+- Set the desired `max_round` for the discussion
+- Provide the `query` or question for the discussion
+- [Optional] Adjust the `generate_args` such as `temperature` for the `openai_chat` model
\ No newline at end of file
diff --git a/examples/conversation_with_gpt-4o/README.md b/examples/conversation_with_gpt-4o/README.md
new file mode 100644
index 000000000..715d57f58
--- /dev/null
+++ b/examples/conversation_with_gpt-4o/README.md
@@ -0,0 +1,54 @@
+# Conversation with gpt-4o (OpenAI Vision Model)
+
+This example will show
+- How to use gpt-4o and other OpenAI vision models in AgentScope
+
+In this example,
+- you can have a conversation with OpenAI vision models.
+- you can show gpt-4o your drawings or web UI designs and ask for its suggestions.
+- you can share your pictures with gpt-4o and ask for its comments.
+
+Just input your image url (both local and web URLs are supported) and talk with gpt-4o.
+
+
+## Background
+
+On May 13, 2024, OpenAI released its new model, gpt-4o, a large multimodal model that can process both text and image inputs.
+
+
+## Tested Models
+
+The following models are tested in this example. For other models, some modifications may be needed.
+- gpt-4o
+- gpt-4-turbo
+- gpt-4-vision
+
+
+## Prerequisites
+
+You need to satisfy the following requirements to run this example.
+- Install the latest version of AgentScope by
+ ```bash
+ git clone https://github.com/modelscope/agentscope.git
+ cd agentscope
+ pip install -e .
+ ```
+- Prepare an OpenAI API key
+
+## Running the Example
+
+First, fill in your OpenAI API key in `conversation_with_gpt-4o.py`, then execute the following command to run the conversation with gpt-4o.
+
+```bash
+python conversation_with_gpt-4o.py
+```
+
+## A Running Example
+
+- Conversation history with gpt-4o.
+
+
+
+- My picture
+
+
diff --git a/examples/conversation_with_gpt-4o/conversation_with_gpt-4o.py b/examples/conversation_with_gpt-4o/conversation_with_gpt-4o.py
new file mode 100644
index 000000000..470f1de32
--- /dev/null
+++ b/examples/conversation_with_gpt-4o/conversation_with_gpt-4o.py
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+"""An example for conversation with OpenAI vision models, especially for
+GPT-4o."""
+import agentscope
+from agentscope.agents import UserAgent, DialogAgent
+
+# Fill in your OpenAI API key
+YOUR_OPENAI_API_KEY = "xxx"
+
+model_config = {
+ "config_name": "gpt-4o_config",
+ "model_type": "openai_chat",
+ "model_name": "gpt-4o",
+ "api_key": YOUR_OPENAI_API_KEY,
+ "generate_args": {
+ "temperature": 0.7,
+ },
+}
+
+agentscope.init(model_configs=model_config)
+
+# Require user to input URL, and press enter to skip the URL input
+user = UserAgent("user", require_url=True)
+
+agent = DialogAgent(
+ "Friday",
+ sys_prompt="You're a helpful assistant named Friday.",
+ model_config_name="gpt-4o_config",
+)
+
+x = None
+while True:
+ x = agent(x)
+ x = user(x)
+ if x.content == "exit": # type "exit" to break the loop
+ break
diff --git a/examples/conversation_with_mentions/README.md b/examples/conversation_with_mentions/README.md
index 6359b3413..858915710 100644
--- a/examples/conversation_with_mentions/README.md
+++ b/examples/conversation_with_mentions/README.md
@@ -1,73 +1,36 @@
# Multi-Agent Group Conversation in AgentScope
-This example demonstrates a multi-agent group conversation facilitated by AgentScope. The script `main.py` sets up a virtual chat room where a user agent interacts with several NPC (non-player character) agents. The chat utilizes a special **"@"** mention functionality, which allows participants to address specific agents and have a more directed conversation.
+This example demonstrates a multi-agent group conversation facilitated by AgentScope. The script sets up a virtual chat room where a user agent interacts with several NPC (non-player character) agents. Participants can utilize a special "@" mention functionality to address specific agents directly.
-## Key Features
+## Background
-- **Real-time Group Conversation**: Engage in a chat with multiple agents responding in real time.
-- **@ Mention Functionality**: Use the "@" symbol followed by an agent's name to specifically address that agent within the conversation.
-- **Dynamic Flow**: User-driven conversation with agents responding based on the context and mentions.
-- **Configurable Agent Roles**: Easily modify agent roles and behaviors by editing the `sys_prompt` in the configuration files.
-- **User Timeout**: If the user does not respond within a specified time, the conversation continues with the next agent.
+The conversation takes place in a simulated chat room environment with predefined roles for each participant. Topics are open-ended and evolve based on the user's input and agents' responses.
-## How to Use
-
-To start the group conversation, follow these steps:
-
-1. Make sure to set your `api_key` in the `configs/model_configs.json` file.
-2. Run the script using the following command:
-
-```bash
-python main.py
+## Tested Models
-# or launch agentscope studio
-as_studio main.py
-```
+These models are tested in this example. For other models, some modifications may be needed.
+- gemini_chat (models/gemini-pro, models/gemini-1.0-pro)
+- dashscope_chat (qwen-max, qwen-turbo)
+- ollama_chat (ollama_llama3_8b)
-1. To address a specific agent in the chat, type "@" followed by the agent's name in your message.
-2. To exit the chat, simply type "exit" when it's your turn to speak.
+## Prerequisites
-## Background and Conversation Flow
+Meet the following requirements before running this example:
+- Set your `api_key` in the `configs/model_configs.json` file
+- Optional: Launch agentscope studio with `as_studio main.py`
-The conversation takes place in a simulated chat room environment with roles defined for each participant. The user acts as a regular chat member with the ability to speak freely and address any agent. NPC agents are pre-configured with specific roles that determine their responses and behavior in the chat. The topic of the conversation is open-ended and can evolve organically based on the user's input and agents' programmed personas.
-
-### Example Interaction
+## How to Use
-```
-User input: Hi, everyone! I'm excited to join this chat.
-AgentA: Welcome! We're glad to have you here.
-User input: @AgentB, what do you think about the new technology trends?
-AgentB: It's an exciting time for tech! There are so many innovations on the horizon.
-...
-```
+1. Run the script using the command: `python main.py`
+2. Address specific agents by typing "@" followed by the agent's name.
+3. Type "exit" to leave the chat.
## Customization Options
-The group conversation script provides several options for customization, allowing you to tailor the chat experience to your preferences.
-
-You can customize the conversation by editing the agent configurations and model parameters. The `agent_configs.json` file allows you to set specific behaviors for each NPC agent, while `model_configs.json` contains the parameters for the conversation model.
+You can adjust the behavior and parameters of the NPC agents and conversation model by editing the `agent_configs.json` and `model_configs.json` files, respectively.
### Changing User Input Time Limit
-The `USER_TIME_TO_SPEAK` variable sets the time limit (in seconds) for the user to input their message during each round. By default, this is set to 10 seconds. You can adjust this time limit by modifying the value of `USER_TIME_TO_SPEAK` in the `main.py` script.
-
-For example, to change the time limit to 20 seconds, update the line in `main.py` as follows:
-
-```
-USER_TIME_TO_SPEAK = 20 # User has 20 seconds to type their message
-```
-
-### Setting a Default Topic for the Chat Room
-
-The `DEFAULT_TOPIC` variable defines the initial message or topic of the chat room. It sets the stage for the conversation and is announced at the beginning of the chat session. You can change this message to prompt a specific discussion topic or to provide instructions to the agents.
-
-To customize this message, modify the `DEFAULT_TOPIC` variable in the `main.py` script. For instance, if you want to set the default topic to discuss "The Future of Artificial Intelligence," you would change the code as follows:
-
-```python
-DEFAULT_TOPIC = """
-This is a chat room about the Future of Artificial Intelligence and you can
-speak freely and briefly.
-"""
-```
-
-With these customizations, the chat room can be tailored to fit specific themes or time constraints, enhancing the user's control over the chat experience.
+Adjust the `USER_TIME_TO_SPEAK` variable in the `main.py` script to change the time limit for user input.
diff --git a/examples/distributed_basic/README.md b/examples/distributed_basic/README.md
index 62754f71a..1ee3e4d86 100644
--- a/examples/distributed_basic/README.md
+++ b/examples/distributed_basic/README.md
@@ -1,6 +1,6 @@
# Distributed Basic
-This example run a assistant agent and a user agent as seperate processes and use rpc to communicate between them.
+This example runs an assistant agent and a user agent as separate processes and uses RPC to communicate between them.
Before running the example, please install the distributed version of Agentscope, fill in your model configuration correctly in `configs/model_configs.json`, and modify the `model_config_name` field in `distributed_dialog.py` accordingly.
diff --git a/examples/distributed_simulation/run_simlation.sh b/examples/distributed_simulation/run_simulation.sh
similarity index 100%
rename from examples/distributed_simulation/run_simlation.sh
rename to examples/distributed_simulation/run_simulation.sh
diff --git a/examples/game_gomoku/code/board_agent.py b/examples/game_gomoku/code/board_agent.py
index c338b1ffd..6cbef4ced 100644
--- a/examples/game_gomoku/code/board_agent.py
+++ b/examples/game_gomoku/code/board_agent.py
@@ -85,33 +85,33 @@ def reply(self, x: dict = None) -> dict:
if x is None:
# Beginning of the game
content = (
- "Welcome to the Gomoku game! Black player goes "
- "first. Please make your move."
+ "Welcome to the Gomoku game! Black player goes first. "
+ "Please make your move."
)
else:
row, col = x["content"]
self.assert_valid_move(row, col)
- if self.check_win(row, col, NAME_TO_PIECE[x["name"]]):
- content = f"The game ends, {x['name']} wins!"
+ # change the board
+ self.board[row, col] = NAME_TO_PIECE[x["name"]]
+
+ # check if the game ends
+ if self.check_draw():
+ content = "The game ends in a draw!"
self.game_end = True
else:
- # change the board
- self.board[row, col] = NAME_TO_PIECE[x["name"]]
+ next_player_name = (
+ NAME_BLACK if x["name"] == NAME_WHITE else NAME_WHITE
+ )
+ content = CURRENT_BOARD_PROMPT_TEMPLATE.format(
+ board=self.board2text(),
+ player=next_player_name,
+ )
- # check if the game ends
- if self.check_draw():
- content = "The game ends in a draw!"
+ if self.check_win(row, col, NAME_TO_PIECE[x["name"]]):
+ content = f"The game ends, {x['name']} wins!"
self.game_end = True
- else:
- next_player_name = (
- NAME_BLACK if x["name"] == NAME_WHITE else NAME_WHITE
- )
- content = CURRENT_BOARD_PROMPT_TEMPLATE.format(
- board=self.board2text(),
- player=next_player_name,
- )
msg_host = Msg(self.name, content, role="assistant")
self.speak(msg_host)
diff --git a/examples/game_gomoku/main.ipynb b/examples/game_gomoku/main.ipynb
index 9c2d9def8..04be0c07b 100644
--- a/examples/game_gomoku/main.ipynb
+++ b/examples/game_gomoku/main.ipynb
@@ -156,30 +156,38 @@
" # Record the status of the game\n",
" self.game_end = False\n",
" \n",
- " def reply(self, input_: dict = None) -> dict:\n",
- " if input_ is None:\n",
+ " def reply(self, x: dict = None) -> dict:\n",
+ " if x is None:\n",
" # Beginning of the game\n",
- " content = \"Welcome to the Gomoku game! Black player goes first. Please make your move.\" \n",
+ " content = (\n",
+ " \"Welcome to the Gomoku game! Black player goes first. \"\n",
+ " \"Please make your move.\"\n",
+ " )\n",
" else:\n",
- " x, y = input_[\"content\"]\n",
- " \n",
- " self.assert_valid_move(x, y)\n",
- " \n",
- " if self.check_win(x, y, NAME_TO_PIECE[input_[\"name\"]]):\n",
- " content = f\"The game ends, {input_['name']} wins!\"\n",
+ " row, col = x[\"content\"]\n",
+ "\n",
+ " self.assert_valid_move(row, col)\n",
+ "\n",
+ " # change the board\n",
+ " self.board[row, col] = NAME_TO_PIECE[x[\"name\"]]\n",
+ "\n",
+ " # check if the game ends\n",
+ " if self.check_draw():\n",
+ " content = \"The game ends in a draw!\"\n",
" self.game_end = True\n",
" else:\n",
- " # change the board\n",
- " self.board[x, y] = NAME_TO_PIECE[input_[\"name\"]]\n",
- " \n",
- " # check if the game ends\n",
- " if self.check_draw():\n",
- " content = \"The game ends in a draw!\"\n",
+ " next_player_name = (\n",
+ " NAME_BLACK if x[\"name\"] == NAME_WHITE else NAME_WHITE\n",
+ " )\n",
+ " content = CURRENT_BOARD_PROMPT_TEMPLATE.format(\n",
+ " board=self.board2text(),\n",
+ " player=next_player_name,\n",
+ " )\n",
+ "\n",
+ " if self.check_win(row, col, NAME_TO_PIECE[x[\"name\"]]):\n",
+ " content = f\"The game ends, {x['name']} wins!\"\n",
" self.game_end = True\n",
- " else:\n",
- " next_player_name = NAME_BLACK if input_[\"name\"] == NAME_WHITE else NAME_WHITE\n",
- " content = CURRENT_BOARD_PROMPT_TEMPLATE.format(board=self.board2text(), player=next_player_name)\n",
- " \n",
+ "\n",
" msg_host = Msg(self.name, content, role=\"assistant\")\n",
" self.speak(msg_host)\n",
" \n",
diff --git a/examples/model_configs_template/litellm_chat_template.json b/examples/model_configs_template/litellm_chat_template.json
new file mode 100644
index 000000000..f1711dca9
--- /dev/null
+++ b/examples/model_configs_template/litellm_chat_template.json
@@ -0,0 +1,11 @@
+[{
+ "config_name": "lite_llm_openai_chat_gpt-3.5-turbo",
+ "model_type": "litellm_chat",
+ "model_name": "gpt-3.5-turbo"
+},
+{
+ "config_name": "lite_llm_claude3",
+ "model_type": "litellm_chat",
+ "model_name": "claude-3-opus-20240229"
+}
+]
diff --git a/examples/model_configs_template/openai_chat_template.json b/examples/model_configs_template/openai_chat_template.json
index 8d3f78087..f5abccf00 100644
--- a/examples/model_configs_template/openai_chat_template.json
+++ b/examples/model_configs_template/openai_chat_template.json
@@ -1,25 +1,38 @@
-[{
- "config_name": "openai_chat_gpt-4",
- "model_type": "openai_chat",
- "model_name": "gpt-4",
- "api_key": "{your_api_key}",
- "client_args": {
- "max_retries": 3
+[
+ {
+ "config_name": "openai_chat_gpt-4",
+ "model_type": "openai_chat",
+ "model_name": "gpt-4",
+ "api_key": "{your_api_key}",
+ "client_args": {
+ "max_retries": 3
+ },
+ "generate_args": {
+ "temperature": 0.7
+ }
},
- "generate_args": {
- "temperature": 0.7
- }
-},
-{
- "config_name": "openai_chat_gpt-3.5-turbo",
- "model_type": "openai_chat",
- "model_name": "gpt-3.5-turbo",
- "api_key": "{your_api_key}",
- "client_args": {
- "max_retries": 3
+ {
+ "config_name": "openai_chat_gpt-3.5-turbo",
+ "model_type": "openai_chat",
+ "model_name": "gpt-3.5-turbo",
+ "api_key": "{your_api_key}",
+ "client_args": {
+ "max_retries": 3
+ },
+ "generate_args": {
+ "temperature": 0.7
+ }
},
- "generate_args": {
- "temperature": 0.7
+ {
+ "config_name": "openai_chat_gpt-4o",
+ "model_type": "openai_chat",
+ "model_name": "gpt-4o",
+ "api_key": "{your_api_key}",
+ "client_args": {
+ "max_retries": 3
+ },
+ "generate_args": {
+ "temperature": 0.7
+ }
}
-}
]
\ No newline at end of file
diff --git a/setup.py b/setup.py
index a3fc085a3..2f2a75c34 100644
--- a/setup.py
+++ b/setup.py
@@ -71,6 +71,7 @@
"ollama>=0.1.7",
"google-generativeai>=0.4.0",
"zhipuai",
+ "litellm",
]
distribute_requires = minimal_requires + rpc_requires
diff --git a/src/agentscope/_init.py b/src/agentscope/_init.py
index 7d1f44d7b..dff68e585 100644
--- a/src/agentscope/_init.py
+++ b/src/agentscope/_init.py
@@ -25,7 +25,7 @@ def init(
save_dir: str = _DEFAULT_DIR,
save_log: bool = True,
save_code: bool = True,
- save_api_invoke: bool = True,
+ save_api_invoke: bool = False,
use_monitor: bool = True,
logger_level: LOG_LEVEL = _DEFAULT_LOG_LEVEL,
runtime_id: Optional[str] = None,
diff --git a/src/agentscope/agents/rpc_agent.py b/src/agentscope/agents/rpc_agent.py
index b7c3441bc..47d32ce3a 100644
--- a/src/agentscope/agents/rpc_agent.py
+++ b/src/agentscope/agents/rpc_agent.py
@@ -9,7 +9,7 @@
import base64
import traceback
import asyncio
-from typing import Any, Type, Optional, Union, Sequence
+from typing import Type, Optional, Union, Sequence
from concurrent import futures
from loguru import logger
@@ -18,11 +18,13 @@
import grpc
from grpc import ServicerContext
from expiringdict import ExpiringDict
-except ImportError:
- dill = None
- grpc = None
- ServicerContext = Any
- ExpiringDict = None
+except ImportError as import_error:
+ from agentscope.utils.tools import ImportErrorReporter
+
+ dill = ImportErrorReporter(import_error, "distribute")
+ grpc = ImportErrorReporter(import_error, "distribute")
+ ServicerContext = ImportErrorReporter(import_error, "distribute")
+ ExpiringDict = ImportErrorReporter(import_error, "distribute")
from agentscope._init import init_process, _INIT_SETTINGS
from agentscope.agents.agent import AgentBase
diff --git a/src/agentscope/agents/user_agent.py b/src/agentscope/agents/user_agent.py
index 38bd46de1..ee97a935f 100644
--- a/src/agentscope/agents/user_agent.py
+++ b/src/agentscope/agents/user_agent.py
@@ -81,7 +81,9 @@ def reply(
# Input url of file, image, video, audio or website
url = None
if self.require_url:
- url = input("URL: ")
+ url = input("URL (or Enter to skip): ")
+ if url == "":
+ url = None
# Add additional keys
msg = Msg(
diff --git a/src/agentscope/memory/memory.py b/src/agentscope/memory/memory.py
index e7671a12c..14b82ee25 100644
--- a/src/agentscope/memory/memory.py
+++ b/src/agentscope/memory/memory.py
@@ -7,11 +7,13 @@
"""
from abc import ABC, abstractmethod
-from typing import Iterable
+from typing import Iterable, Sequence
from typing import Optional
from typing import Union
from typing import Callable
+from ..message import MessageBase
+
class MemoryBase(ABC):
"""Base class for memory."""
@@ -33,6 +35,8 @@ def __init__(
def update_config(self, config: dict) -> None:
"""
Configure memory as specified in config
+ Args:
+ config (`dict`): Configuration used to reset this memory
"""
self.config = config
@@ -43,14 +47,29 @@ def get_memory(
filter_func: Optional[Callable[[int, dict], bool]] = None,
) -> list:
"""
- Return a certain range (`recent_n` or all) of memory, filtered by
- `filter_func`
+ Return a certain range (`recent_n` or all) of memory,
+ filtered by `filter_func`
+ Args:
+ recent_n (int, optional):
+ indicates the most recent N memory pieces to be returned.
+ filter_func (Optional[Callable[[int, dict], bool]]):
+ filter function to decide which pieces of memory should
+ be returned, taking the index and a piece of memory as
+ input and returning True (return this memory) or False
+ (do not return it).
"""
@abstractmethod
- def add(self, memories: Union[list[dict], dict, None]) -> None:
+ def add(
+ self,
+ memories: Union[Sequence[dict], dict, None],
+ ) -> None:
"""
Adding new memory fragment, depending on how the memory are stored
+ Args:
+ memories (Union[Sequence[dict], dict, None]):
+ Memories to be added. If a memory is not a MessageBase
+ instance, it will first be converted into a message type.
"""
@abstractmethod
@@ -58,26 +77,48 @@ def delete(self, index: Union[Iterable, int]) -> None:
"""
Delete memory fragment, depending on how the memory are stored
and matched
+ Args:
+ index (Union[Iterable, int]):
+ indices of the memory fragments to delete
"""
@abstractmethod
def load(
self,
- memories: Union[str, dict, list],
+ memories: Union[str, list[MessageBase], MessageBase],
overwrite: bool = False,
) -> None:
"""
Load memory, depending on how the memory are passed, design to load
from both file or dict
+ Args:
+ memories (Union[str, list[MessageBase], MessageBase]):
+ memories to be loaded.
+ If it is a str, it will first be checked whether it is a
+ file path; if not, it will be deserialized as messages.
+ Otherwise, memories must be either a message or a list
+ of messages.
+ overwrite (bool):
+ if True, clear the current memory before loading the new ones;
+ if False, append the new memories to the existing ones.
"""
@abstractmethod
def export(
self,
- to_mem: bool = False,
file_path: Optional[str] = None,
+ to_mem: bool = False,
) -> Optional[list]:
- """Export memory, depending on how the memory are stored"""
+ """
+ Export memory, depending on how the memory is stored
+ Args:
+ file_path (Optional[str]):
+ file path to save the memory to.
+ to_mem (bool):
+ if True, just return the list of messages in memory
+ Notice: this method raises an error if `file_path` is None
+ while `to_mem` is False.
+ """
@abstractmethod
def clear(self) -> None:
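
To make the reordered `export` signature and the message-based `load` concrete, a small sketch with the concrete `TemporaryMemory` subclass, mirroring the updated unit tests (the file name is illustrative):

from agentscope.memory import TemporaryMemory
from agentscope.message import Msg

memory = TemporaryMemory()
memory.add(Msg(name="user", content="Hello"))

# file_path is now the first positional argument of export()
memory.export(file_path="tmp_memory.json")

memory.clear()
memory.load("tmp_memory.json")  # messages are deserialized back into Msg objects
print(memory.get_memory())
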
diff --git a/src/agentscope/memory/temporary_memory.py b/src/agentscope/memory/temporary_memory.py
index b06de508a..356fa4d96 100644
--- a/src/agentscope/memory/temporary_memory.py
+++ b/src/agentscope/memory/temporary_memory.py
@@ -16,6 +16,14 @@
from ..models import load_model_by_config_name
from ..service.retrieval.retrieval_from_list import retrieve_from_list
from ..service.retrieval.similarity import Embedding
+from ..message import (
+ deserialize,
+ serialize,
+ MessageBase,
+ Msg,
+ Tht,
+ PlaceholderMessage,
+)
class TemporaryMemory(MemoryBase):
@@ -28,6 +36,16 @@ def __init__(
config: Optional[dict] = None,
embedding_model: Union[str, Callable] = None,
) -> None:
+ """
+ Temporary memory module for conversation.
+ Args:
+ config (dict):
+ configuration of the memory
+ embedding_model (Union[str, Callable]):
+ if the temporary memory needs to be embedded,
+ pass either the name of the embedding model or
+ the embedding model itself.
+ """
super().__init__(config)
self._content = []
@@ -43,10 +61,20 @@ def add(
memories: Union[Sequence[dict], dict, None],
embed: bool = False,
) -> None:
+ # pylint: disable=too-many-branches
+ """
+ Add new memory fragments, depending on how the memory is stored
+ Args:
+ memories (Union[Sequence[dict], dict, None]):
+ memories to be added. If a memory is not a MessageBase
+ instance, it will first be converted into a message type.
+ embed (bool):
+ whether to generate embeddings for the newly added memories
+ """
if memories is None:
return
- if not isinstance(memories, list):
+ if not isinstance(memories, Sequence):
record_memories = [memories]
else:
record_memories = memories
@@ -54,6 +82,27 @@ def add(
# if memory doesn't have id attribute, we skip the checking
memories_idx = set(_.id for _ in self._content if hasattr(_, "id"))
for memory_unit in record_memories:
+ if not issubclass(type(memory_unit), MessageBase):
+ try:
+ if (
+ "name" in memory_unit
+ and memory_unit["name"] == "thought"
+ ):
+ memory_unit = Tht(**memory_unit)
+ else:
+ memory_unit = Msg(**memory_unit)
+ except Exception as exc:
+ raise ValueError(
+ f"Cannot add {memory_unit} to memory, "
+ f"must be with subclass of MessageBase",
+ ) from exc
+
+ # in case this is a PlaceholderMessage, try to update
+ # the values first
+ if isinstance(memory_unit, PlaceholderMessage):
+ memory_unit.update_value()
+ memory_unit = Msg(**memory_unit)
+
# add to memory if it's new
if (
not hasattr(memory_unit, "id")
@@ -71,6 +120,13 @@ def add(
self._content.append(memory_unit)
def delete(self, index: Union[Iterable, int]) -> None:
+ """
+ Delete memory fragments, depending on how the memory is stored
+ and matched
+ Args:
+ index (Union[Iterable, int]):
+ indices of the memory fragments to delete
+ """
if self.size() == 0:
logger.warning(
"The memory is empty, and the delete operation is "
@@ -101,16 +157,26 @@ def delete(self, index: Union[Iterable, int]) -> None:
def export(
self,
- to_mem: bool = False,
file_path: Optional[str] = None,
+ to_mem: bool = False,
) -> Optional[list]:
- """Export memory to json file"""
+ """
+ Export memory, depending on how the memory is stored
+ Args:
+ file_path (Optional[str]):
+ file path to save the memory to. The messages will
+ be serialized and written to the file.
+ to_mem (bool):
+ if True, just return the list of messages in memory
+ Notice: this method raises an error if `file_path` is None
+ while `to_mem` is False.
+ """
if to_mem:
return self._content
if to_mem is False and file_path is not None:
with open(file_path, "w", encoding="utf-8") as f:
- json.dump(self._content, f, indent=4)
+ f.write(serialize(self._content))
else:
raise NotImplementedError(
"file type only supports "
@@ -120,16 +186,30 @@ def export(
def load(
self,
- memories: Union[str, dict, list],
+ memories: Union[str, list[MessageBase], MessageBase],
overwrite: bool = False,
) -> None:
+ """
+ Load memory, depending on how the memories are passed; designed to
+ load from either a file or messages
+ Args:
+ memories (Union[str, list[MessageBase], MessageBase]):
+ memories to be loaded.
+ If it is a str, it will first be checked whether it is a
+ file path; if not, it will be deserialized as messages.
+ Otherwise, memories must be either a message or a list
+ of messages.
+ overwrite (bool):
+ if True, clear the current memory before loading the new ones;
+ if False, append the new memories to the existing ones.
+ """
if isinstance(memories, str):
if os.path.isfile(memories):
with open(memories, "r", encoding="utf-8") as f:
- self.add(json.load(f))
+ load_memories = deserialize(f.read())
else:
try:
- load_memories = json.loads(memories)
+ load_memories = deserialize(memories)
if not isinstance(load_memories, dict) and not isinstance(
load_memories,
list,
diff --git a/src/agentscope/message.py b/src/agentscope/message.py
index 30f35fe61..36cd2fabd 100644
--- a/src/agentscope/message.py
+++ b/src/agentscope/message.py
@@ -192,12 +192,18 @@ def __init__(
self,
content: Any,
timestamp: Optional[str] = None,
+ **kwargs: Any,
) -> None:
+ if "name" in kwargs:
+ kwargs.pop("name")
+ if "role" in kwargs:
+ kwargs.pop("role")
super().__init__(
name="thought",
content=content,
role="assistant",
timestamp=timestamp,
+ **kwargs,
)
def to_str(self) -> str:
@@ -399,7 +405,7 @@ def deserialize(s: str) -> Union[MessageBase, Sequence]:
return [deserialize(s) for s in js_msg["__value"]]
elif msg_type not in _MSGS:
raise NotImplementedError(
- "Deserialization of {msg_type} is not supported.",
+ f"Deserialization of {msg_type} is not supported.",
)
return _MSGS[msg_type](**js_msg)
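
A quick sketch of the round trip these changes enable: a serialized `Tht` carries `name` and `role` fields, which the constructor now discards when `deserialize` passes them back as keyword arguments (the thought text is illustrative):

from agentscope.message import Tht, deserialize, serialize

thought = Tht("I should search the web first.")

# serialize() embeds name="thought" and role="assistant" in the JSON string;
# deserialize() feeds them back as kwargs, which Tht now strips safely.
restored = deserialize(serialize(thought))
print(restored.content)
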
diff --git a/src/agentscope/models/__init__.py b/src/agentscope/models/__init__.py
index 1e607c0e4..832829993 100644
--- a/src/agentscope/models/__init__.py
+++ b/src/agentscope/models/__init__.py
@@ -37,6 +37,9 @@
ZhipuAIChatWrapper,
ZhipuAIEmbeddingWrapper,
)
+from .litellm_model import (
+ LiteLLMChatWrapper,
+)
__all__ = [
@@ -59,6 +62,7 @@
"GeminiEmbeddingWrapper",
"ZhipuAIChatWrapper",
"ZhipuAIEmbeddingWrapper",
+ "LiteLLMChatWrapper",
"load_model_by_config_name",
"read_model_configs",
"clear_model_configs",
diff --git a/src/agentscope/models/dashscope_model.py b/src/agentscope/models/dashscope_model.py
index 4fd380de3..c4183aa85 100644
--- a/src/agentscope/models/dashscope_model.py
+++ b/src/agentscope/models/dashscope_model.py
@@ -11,7 +11,7 @@
try:
import dashscope
-except ModuleNotFoundError:
+except ImportError:
dashscope = None
from .model import ModelWrapperBase, ModelResponse
diff --git a/src/agentscope/models/litellm_model.py b/src/agentscope/models/litellm_model.py
new file mode 100644
index 000000000..242830a38
--- /dev/null
+++ b/src/agentscope/models/litellm_model.py
@@ -0,0 +1,256 @@
+# -*- coding: utf-8 -*-
+"""Model wrapper based on litellm https://docs.litellm.ai/docs/"""
+from abc import ABC
+from typing import Union, Any, List, Sequence
+
+from loguru import logger
+
+from .model import ModelWrapperBase, ModelResponse
+from ..message import MessageBase
+from ..utils.tools import _convert_to_str
+
+try:
+ import litellm
+except ImportError:
+ litellm = None
+
+
+class LiteLLMWrapperBase(ModelWrapperBase, ABC):
+ """The model wrapper based on LiteLLM API."""
+
+ def __init__(
+ self,
+ config_name: str,
+ model_name: str = None,
+ generate_args: dict = None,
+ **kwargs: Any,
+ ) -> None:
+ """
+ To use the LiteLLM wrapper, environment variables must be set.
+ Different model_names may require different environment variables.
+ For example:
+ - for model_name: "gpt-3.5-turbo", you need to set "OPENAI_API_KEY"
+ ```
+ os.environ["OPENAI_API_KEY"] = "your-api-key"
+ ```
+ - for model_name: "claude-2", you need to set "ANTHROPIC_API_KEY"
+ - for Azure OpenAI, you need to set "AZURE_API_KEY",
+ "AZURE_API_BASE", "AZURE_API_VERSION"
+ You should refer to the docs in https://docs.litellm.ai/docs/ .
+ Args:
+ config_name (`str`):
+ The name of the model config.
+ model_name (`str`, default `None`):
+ The name of the model to use in the LiteLLM API.
+ generate_args (`dict`, default `None`):
+ The extra keyword arguments used in litellm api generation,
+ e.g. `temperature`, `seed`.
+ For generate_args, please refer to
+ https://docs.litellm.ai/docs/completion/input
+ for more details.
+
+ """
+
+ if model_name is None:
+ model_name = config_name
+ logger.warning("model_name is not set, use config_name instead.")
+
+ super().__init__(config_name=config_name)
+
+ if litellm is None:
+ raise ImportError(
+ "Cannot import litellm package in current python environment."
+ "You should try:"
+ "1. Install litellm by `pip install litellm`"
+ "2. If you still have import error, you should try to "
+ "update the openai to higher version, e.g. "
+ "by runing `pip install openai==1.25.1",
+ )
+
+ self.model_name = model_name
+ self.generate_args = generate_args or {}
+ self._register_default_metrics()
+
+ def format(
+ self,
+ *args: Union[MessageBase, Sequence[MessageBase]],
+ ) -> Union[List[dict], str]:
+ raise RuntimeError(
+ f"Model Wrapper [{type(self).__name__}] doesn't "
+ f"need to format the input. Please try to use the "
+ f"model wrapper directly.",
+ )
+
+
+class LiteLLMChatWrapper(LiteLLMWrapperBase):
+ """The model wrapper based on litellm chat API.
+ To use the LiteLLM wrapper, environment variables must be set.
+ Different model_names may require different environment variables.
+ For example:
+ - for model_name: "gpt-3.5-turbo", you need to set "OPENAI_API_KEY"
+ ```
+ os.environ["OPENAI_API_KEY"] = "your-api-key"
+ ```
+ - for model_name: "claude-2", you need to set "ANTHROPIC_API_KEY"
+ - for Azure OpenAI, you need to set "AZURE_API_KEY",
+ "AZURE_API_BASE", "AZURE_API_VERSION"
+ You should refer to the docs in https://docs.litellm.ai/docs/ .
+ """
+
+ model_type: str = "litellm_chat"
+
+ def _register_default_metrics(self) -> None:
+ # Set monitor accordingly
+ # TODO: set quota to the following metrics
+ self.monitor.register(
+ self._metric("call_counter"),
+ metric_unit="times",
+ )
+ self.monitor.register(
+ self._metric("prompt_tokens"),
+ metric_unit="token",
+ )
+ self.monitor.register(
+ self._metric("completion_tokens"),
+ metric_unit="token",
+ )
+ self.monitor.register(
+ self._metric("total_tokens"),
+ metric_unit="token",
+ )
+
+ def __call__(
+ self,
+ messages: list,
+ **kwargs: Any,
+ ) -> ModelResponse:
+ """
+ Args:
+ messages (`list`):
+ A list of messages to process.
+ **kwargs (`Any`):
+ The keyword arguments to litellm chat completions API,
+ e.g. `temperature`, `max_tokens`, `top_p`, etc. Please refer to
+ https://docs.litellm.ai/docs/completion/input
+ for more detailed arguments.
+
+ Returns:
+ `ModelResponse`:
+ The response text in text field, and the raw response in
+ raw field.
+ """
+
+ # step1: prepare keyword arguments
+ kwargs = {**self.generate_args, **kwargs}
+
+ # step2: checking messages
+ if not isinstance(messages, list):
+ raise ValueError(
+ "LiteLLM `messages` field expected type `list`, "
+ f"got `{type(messages)}` instead.",
+ )
+ if not all("role" in msg and "content" in msg for msg in messages):
+ raise ValueError(
+ "Each message in the 'messages' list must contain a 'role' "
+ "and 'content' key for LiteLLM API.",
+ )
+
+ # step3: forward to generate response
+ response = litellm.completion(
+ model=self.model_name,
+ messages=messages,
+ **kwargs,
+ )
+
+ # step4: record the api invocation if needed
+ self._save_model_invocation(
+ arguments={
+ "model": self.model_name,
+ "messages": messages,
+ **kwargs,
+ },
+ response=response.model_dump(),
+ )
+
+ # step5: update monitor accordingly
+ self.update_monitor(call_counter=1, **response.usage.model_dump())
+
+ # step6: return response
+ return ModelResponse(
+ text=response.choices[0].message.content,
+ raw=response.model_dump(),
+ )
+
+ def format(
+ self,
+ *args: Union[MessageBase, Sequence[MessageBase]],
+ ) -> List[dict]:
+ """Format the input string and dictionary into the unified format.
+ Note that the format function might not be the optimal way to construct
+ prompts for every model, but a common way to do so.
+ Developers are encouraged to implement their own prompt
+ engineering strategies if they have strong performance concerns.
+
+ Args:
+ args (`Union[MessageBase, Sequence[MessageBase]]`):
+ The input arguments to be formatted, where each argument
+ should be a `Msg` object, or a list of `Msg` objects.
+ In distribution, placeholder is also allowed.
+ Returns:
+ `List[dict]`:
+ The formatted messages in the format that the LiteLLM chat API
+ requires.
+ """
+
+ # Parse all information into a list of messages
+ input_msgs = []
+ for _ in args:
+ if _ is None:
+ continue
+ if isinstance(_, MessageBase):
+ input_msgs.append(_)
+ elif isinstance(_, list) and all(
+ isinstance(__, MessageBase) for __ in _
+ ):
+ input_msgs.extend(_)
+ else:
+ raise TypeError(
+ f"The input should be a Msg object or a list "
+ f"of Msg objects, got {type(_)}.",
+ )
+
+ # record dialog history as a list of strings
+ system_content_template = []
+ dialogue = []
+ for i, unit in enumerate(input_msgs):
+ if i == 0 and unit.role == "system":
+ # system prompt
+ system_prompt = _convert_to_str(unit.content)
+ if not system_prompt.endswith("\n"):
+ system_prompt += "\n"
+ system_content_template.append(system_prompt)
+ else:
+ # Merge all messages into a dialogue history prompt
+ dialogue.append(
+ f"{unit.name}: {_convert_to_str(unit.content)}",
+ )
+
+ if len(dialogue) != 0:
+ system_content_template.extend(
+ ["## Dialogue History", "{dialogue_history}"],
+ )
+
+ dialogue_history = "\n".join(dialogue)
+
+ system_content_template = "\n".join(system_content_template)
+
+ messages = [
+ {
+ "role": "user",
+ "content": system_content_template.format(
+ dialogue_history=dialogue_history,
+ ),
+ },
+ ]
+
+ return messages
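
For reference, a sketch of the prompt this `format` strategy produces for a short dialogue, mirroring the new unit test (assumes litellm is installed; the config name and messages are illustrative):

from agentscope.message import Msg
from agentscope.models import LiteLLMChatWrapper

model = LiteLLMChatWrapper(config_name="demo_config", model_name="gpt-3.5-turbo")

prompt = model.format(
    Msg("system", "You're a helpful assistant", role="system"),
    Msg("user", "What is the weather today?", role="user"),
)
# prompt == [{
#     "role": "user",
#     "content": "You're a helpful assistant\n\n"
#                "## Dialogue History\n"
#                "user: What is the weather today?",
# }]
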
diff --git a/src/agentscope/models/ollama_model.py b/src/agentscope/models/ollama_model.py
index 31b136dcb..1c38d3b8b 100644
--- a/src/agentscope/models/ollama_model.py
+++ b/src/agentscope/models/ollama_model.py
@@ -3,8 +3,6 @@
from abc import ABC
from typing import Sequence, Any, Optional, List, Union
-from loguru import logger
-
from agentscope.message import MessageBase
from agentscope.models import ModelWrapperBase, ModelResponse
from agentscope.utils.tools import _convert_to_str
@@ -170,10 +168,43 @@ def format(
self,
*args: Union[MessageBase, Sequence[MessageBase]],
) -> List[dict]:
- """A basic strategy to format the input into the required format of
- Ollama Chat API.
+ """Format the messages for ollama Chat API.
+
+ All messages will be formatted into a single system message with
+ system prompt and dialogue history.
+
+ Note:
+ 1. This strategy may not be suitable for all scenarios,
+ and developers are encouraged to implement their own prompt
+ engineering strategies.
+ 2. For the ollama chat API, the content field shouldn't be an empty string.
+
+ Example:
+
+ .. code-block:: python
+
+ prompt = model.format(
+ Msg("system", "You're a helpful assistant", role="system"),
+ Msg("Bob", "Hi, how can I help you?", role="assistant"),
+ Msg("user", "What's the date today?", role="user")
+ )
+
+ The prompt will be as follows:
+
+ .. code-block:: python
+
+ [
+ {
+ "role": "user",
+ "content": (
+ "You're a helpful assistant\\n\\n"
+ "## Dialogue History\\n"
+ "Bob: Hi, how can I help you?\\n"
+ "user: What's the date today?"
+ )
+ }
+ ]
- Note for ollama chat api, the content field shouldn't be empty string.
Args:
args (`Union[MessageBase, Sequence[MessageBase]]`):
@@ -185,39 +216,65 @@ def format(
`List[dict]`:
The formatted messages.
"""
- ollama_msgs = []
- for msg in args:
- if msg is None:
- continue
- if isinstance(msg, MessageBase):
- # content shouldn't be empty string
- if msg.content == "":
- logger.warning(
- "In ollama chat API, the content field cannot be "
- "empty string. To avoid error, the empty string is "
- "replaced by a blank space automatically, but the "
- "model may not work as expected.",
- )
- msg.content = " "
-
- ollama_msg = {
- "role": msg.role,
- "content": _convert_to_str(msg.content),
- }
-
- # image url
- if msg.url is not None:
- ollama_msg["images"] = [msg.url]
- ollama_msgs.append(ollama_msg)
- elif isinstance(msg, list):
- ollama_msgs.extend(self.format(*msg))
+ # Parse all information into a list of messages
+ input_msgs = []
+ for _ in args:
+ if _ is None:
+ continue
+ if isinstance(_, MessageBase):
+ input_msgs.append(_)
+ elif isinstance(_, list) and all(
+ isinstance(__, MessageBase) for __ in _
+ ):
+ input_msgs.extend(_)
else:
raise TypeError(
- f"Invalid message type: {type(msg)}, `Msg` is expected.",
+ f"The input should be a Msg object or a list "
+ f"of Msg objects, got {type(_)}.",
)
- return ollama_msgs
+ # record dialog history as a list of strings
+ system_content_template = []
+ dialogue = []
+ # TODO: here we default the url links to images
+ images = []
+ for i, unit in enumerate(input_msgs):
+ if i == 0 and unit.role == "system":
+ # system prompt
+ system_prompt = _convert_to_str(unit.content)
+ if not system_prompt.endswith("\n"):
+ system_prompt += "\n"
+ system_content_template.append(system_prompt)
+ else:
+ # Merge all messages into a dialogue history prompt
+ dialogue.append(
+ f"{unit.name}: {_convert_to_str(unit.content)}",
+ )
+
+ if unit.url is not None:
+ images.append(unit.url)
+
+ if len(dialogue) != 0:
+ system_content_template.extend(
+ ["## Dialogue History", "{dialogue_history}"],
+ )
+
+ dialogue_history = "\n".join(dialogue)
+
+ system_content_template = "\n".join(system_content_template)
+
+ system_message = {
+ "role": "system",
+ "content": system_content_template.format(
+ dialogue_history=dialogue_history,
+ ),
+ }
+
+ if len(images) != 0:
+ system_message["images"] = images
+
+ return [system_message]
class OllamaEmbeddingWrapper(OllamaWrapperBase):
diff --git a/src/agentscope/models/openai_model.py b/src/agentscope/models/openai_model.py
index 2f74e101d..99542582b 100644
--- a/src/agentscope/models/openai_model.py
+++ b/src/agentscope/models/openai_model.py
@@ -1,14 +1,14 @@
# -*- coding: utf-8 -*-
"""Model wrapper for OpenAI models"""
from abc import ABC
-from typing import Union, Any, List, Sequence
+from typing import Union, Any, List, Sequence, Dict
from loguru import logger
from .model import ModelWrapperBase, ModelResponse
from ..file_manager import file_manager
from ..message import MessageBase
-from ..utils.tools import _convert_to_str
+from ..utils.tools import _convert_to_str, _to_openai_image_url
try:
import openai
@@ -107,6 +107,9 @@ class OpenAIChatWrapper(OpenAIWrapperBase):
deprecated_model_type: str = "openai"
+ substrings_in_vision_models_names = ["gpt-4-turbo", "vision", "gpt-4o"]
+ """The substrings in the model names of vision models."""
+
def _register_default_metrics(self) -> None:
# Set monitor accordingly
# TODO: set quota to the following metrics
@@ -212,6 +215,77 @@ def __call__(
raw=response.model_dump(),
)
+ def _format_msg_with_url(
+ self,
+ msg: MessageBase,
+ ) -> Dict:
+ """Format a message with image urls into openai chat format.
+ This format method is used for gpt-4o, gpt-4-turbo, gpt-4-vision and
+ other vision models.
+ """
+ # Check if the model is a vision model
+ if not any(
+ _ in self.model_name
+ for _ in self.substrings_in_vision_models_names
+ ):
+ logger.warning(
+ f"The model {self.model_name} is not a vision model. "
+ f"Skip the url in the message.",
+ )
+ return {
+ "role": msg.role,
+ "name": msg.name,
+ "content": _convert_to_str(msg.content),
+ }
+
+ # Put all urls into a list
+ urls = [msg.url] if isinstance(msg.url, str) else msg.url
+
+ # Check if the url refers to an image
+ checked_urls = []
+ for url in urls:
+ try:
+ checked_urls.append(_to_openai_image_url(url))
+ except TypeError:
+ logger.warning(
+ f"The url {url} is not a valid image url for "
+ f"OpenAI Chat API, skipped.",
+ )
+
+ if len(checked_urls) == 0:
+ # If no valid image url is provided, return the normal message dict
+ return {
+ "role": msg.role,
+ "name": msg.name,
+ "content": _convert_to_str(msg.content),
+ }
+ else:
+ # otherwise, use the vision format message
+ returned_msg = {
+ "role": msg.role,
+ "name": msg.name,
+ "content": [
+ {
+ "type": "text",
+ "text": _convert_to_str(msg.content),
+ },
+ ],
+ }
+
+ image_dicts = [
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": _,
+ },
+ }
+ for _ in checked_urls
+ ]
+
+ returned_msg["content"].extend(image_dicts)
+
+ return returned_msg
+
def format(
self,
*args: Union[MessageBase, Sequence[MessageBase]],
@@ -230,19 +304,22 @@ def format(
The formatted messages in the format that OpenAI Chat API
required.
"""
-
messages = []
for arg in args:
if arg is None:
continue
if isinstance(arg, MessageBase):
- messages.append(
- {
- "role": arg.role,
- "name": arg.name,
- "content": _convert_to_str(arg.content),
- },
- )
+ if arg.url is not None:
+ messages.append(self._format_msg_with_url(arg))
+ else:
+ messages.append(
+ {
+ "role": arg.role,
+ "name": arg.name,
+ "content": _convert_to_str(arg.content),
+ },
+ )
+
elif isinstance(arg, list):
messages.extend(self.format(*arg))
else:
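
To illustrate the new vision path, a sketch of formatting an image-bearing `Msg` with a gpt-4o wrapper (the API key and image URL are placeholders); for model names without a vision substring, `_format_msg_with_url` logs a warning and drops the URL instead:

from agentscope.message import Msg
from agentscope.models import OpenAIChatWrapper

model = OpenAIChatWrapper(
    config_name="openai_chat_gpt-4o",
    model_name="gpt-4o",
    api_key="{your_api_key}",
)

prompt = model.format(
    Msg("system", "You are a helpful assistant", role="system"),
    Msg("user", "Describe the image", role="user", url="https://example.com/cat.png"),
)
# The user entry becomes a list of {"type": "text", ...} and
# {"type": "image_url", ...} blocks, as in the new format unit tests.
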
diff --git a/src/agentscope/rpc/__init__.py b/src/agentscope/rpc/__init__.py
index 03cf58169..2c7703a90 100644
--- a/src/agentscope/rpc/__init__.py
+++ b/src/agentscope/rpc/__init__.py
@@ -5,16 +5,19 @@
try:
from .rpc_agent_pb2 import RpcMsg # pylint: disable=E0611
-except ModuleNotFoundError:
- RpcMsg = Any # type: ignore[misc]
-try:
from .rpc_agent_pb2_grpc import RpcAgentServicer
from .rpc_agent_pb2_grpc import RpcAgentStub
from .rpc_agent_pb2_grpc import add_RpcAgentServicer_to_server
-except ImportError:
- RpcAgentServicer = object
- RpcAgentStub = Any
- add_RpcAgentServicer_to_server = Any
+except ImportError as import_error:
+ from agentscope.utils.tools import ImportErrorReporter
+
+ RpcMsg = ImportErrorReporter(import_error, "distribute") # type: ignore[misc]
+ RpcAgentServicer = ImportErrorReporter(import_error, "distribute")
+ RpcAgentStub = ImportErrorReporter(import_error, "distribute")
+ add_RpcAgentServicer_to_server = ImportErrorReporter(
+ import_error,
+ "distribute",
+ )
__all__ = [
diff --git a/src/agentscope/rpc/rpc_agent_client.py b/src/agentscope/rpc/rpc_agent_client.py
index ab9f1a565..189e0895f 100644
--- a/src/agentscope/rpc/rpc_agent_client.py
+++ b/src/agentscope/rpc/rpc_agent_client.py
@@ -3,24 +3,23 @@
import threading
import base64
-from typing import Any, Optional
+from typing import Optional
from loguru import logger
try:
import dill
import grpc
from grpc import RpcError
-except ImportError:
- dill = None
- grpc = None
- RpcError = None
-
-try:
from agentscope.rpc.rpc_agent_pb2 import RpcMsg # pylint: disable=E0611
from agentscope.rpc.rpc_agent_pb2_grpc import RpcAgentStub
-except ModuleNotFoundError:
- RpcMsg = Any # type: ignore[misc]
- RpcAgentStub = Any
+except ImportError as import_error:
+ from agentscope.utils.tools import ImportErrorReporter
+
+ dill = ImportErrorReporter(import_error, "distribute")
+ grpc = ImportErrorReporter(import_error, "distribute")
+ RpcMsg = ImportErrorReporter(import_error, "distribute")
+ RpcAgentStub = ImportErrorReporter(import_error, "distribute")
+ RpcError = ImportError
class RpcAgentClient:
diff --git a/src/agentscope/rpc/rpc_agent_pb2_grpc.py b/src/agentscope/rpc/rpc_agent_pb2_grpc.py
index 93ee27369..4099c7027 100644
--- a/src/agentscope/rpc/rpc_agent_pb2_grpc.py
+++ b/src/agentscope/rpc/rpc_agent_pb2_grpc.py
@@ -3,8 +3,10 @@
"""Client and server classes corresponding to protobuf-defined services."""
try:
import grpc
-except ImportError:
- grpc = None
+except ImportError as import_error:
+ from agentscope.utils.tools import ImportErrorReporter
+
+ grpc = ImportErrorReporter(import_error, "distribute")
import agentscope.rpc.rpc_agent_pb2 as rpc__agent__pb2
diff --git a/src/agentscope/service/__init__.py b/src/agentscope/service/__init__.py
index dce26c195..67cb7ce73 100644
--- a/src/agentscope/service/__init__.py
+++ b/src/agentscope/service/__init__.py
@@ -21,6 +21,11 @@
from .sql_query.mongodb import query_mongodb
from .web.search import bing_search, google_search
from .web.arxiv import arxiv_search
+from .web.dblp import (
+ dblp_search_publications,
+ dblp_search_authors,
+ dblp_search_venues,
+)
from .service_response import ServiceResponse
from .service_toolkit import ServiceToolkit
from .service_toolkit import ServiceFactory
@@ -70,6 +75,9 @@ def get_help() -> None:
"load_web",
"parse_html_to_text",
"download_from_url",
+ "dblp_search_publications",
+ "dblp_search_authors",
+ "dblp_search_venues",
# to be deprecated
"ServiceFactory",
]
diff --git a/src/agentscope/service/web/dblp.py b/src/agentscope/service/web/dblp.py
new file mode 100644
index 000000000..7d6ab9c1c
--- /dev/null
+++ b/src/agentscope/service/web/dblp.py
@@ -0,0 +1,318 @@
+# -*- coding: utf-8 -*-
+""" Search papers, authors and venues in DBLP API.
+For detail usage of the DBLP API
+please refer to https://dblp.org/faq/How+can+I+fetch+DBLP+data.html
+"""
+from agentscope.service.service_response import (
+ ServiceResponse,
+ ServiceExecStatus,
+)
+from agentscope.utils.common import requests_get
+
+
+def dblp_search_publications(
+ question: str,
+ num_results: int = 30,
+ start: int = 0,
+ num_completion: int = 10,
+) -> ServiceResponse:
+ """Search publications in the DBLP database.
+
+ Args:
+ question (`str`):
+ The search query string.
+ num_results (`int`, defaults to `30`):
+ The number of search results to return.
+ start (`int`, defaults to `0`):
+ The index of the first search result to return.
+ num_completion (`int`, defaults to `10`):
+ The number of completions to generate.
+
+ Returns:
+ `ServiceResponse`: A dictionary containing `status` and `content`.
+ The `status` attribute is from the ServiceExecStatus enum,
+ indicating success or error.
+ The `content` is a list of parsed publication data if successful,
+ or an error message if failed.
+ Each item in the list contains publication information
+ including title, authors, venue, pages, year, type, DOI, and URL.
+
+ Example:
+ .. code-block:: python
+
+ search_results = dblp_search_publications(
+ question="Extreme Learning Machine",
+ num_results=3,
+ start=0,
+ num_completion=1,
+ )
+ print(search_results)
+
+ It returns the following structure:
+
+ .. code-block:: python
+
+ {
+ 'status': ,
+ 'content': [
+ {
+ 'title': 'Power transformers fault diagnosis
+ based on a meta-learning approach to kernel
+ extreme learning machine with opposition-based
+ learning sparrow search algorithm.',
+ 'venue': 'J. Intell. Fuzzy Syst.',
+ 'pages': '455-466',
+ 'year': '2023',
+ 'type': 'Journal Articles',
+ 'doi': '10.3233/JIFS-211862',
+ 'url': 'https://dblp.org/rec/journals/jifs/YuTZTCH23',
+ 'authors': 'Song Yu, Weimin Tan, Chengming Zhang,
+ Chao Tang, Lihong Cai, Dong Hu'
+ },
+ {
+ 'title': 'Performance comparison of Extreme Learning
+ Machines and other machine learning methods
+ on WBCD data set.',
+ 'venue': 'SIU',
+ 'pages': '1-4',
+ 'year': '2021',
+ 'type': 'Conference and Workshop Papers',
+ 'doi': '10.1109/SIU53274.2021.9477984',
+ 'url': 'https://dblp.org/rec/conf/siu/KeskinDAY21',
+ 'authors': 'Ömer Selim Keskin, Akif Durdu,
+ Muhammet Fatih Aslan, Abdullah Yusefi'
+ }
+ ]
+ }
+ """
+
+ url = "https://dblp.org/search/publ/api"
+ params = {
+ "q": question,
+ "format": "json",
+ "h": num_results,
+ "f": start,
+ "c": num_completion,
+ }
+ search_results = requests_get(url, params)
+
+ if isinstance(search_results, str):
+ return ServiceResponse(ServiceExecStatus.ERROR, search_results)
+
+ hits = search_results.get("result", {}).get("hits", {}).get("hit", [])
+ parsed_data = []
+ for hit in hits:
+ info = hit.get("info", {})
+ title = info.get("title", "No title available")
+ venue = info.get("venue", "No venue available")
+ pages = info.get("pages", "No page information")
+ year = info.get("year", "Year not specified")
+ pub_type = info.get("type", "Type not specified")
+ doi = info.get("doi", "No DOI available")
+ url = info.get("url", "No URL available")
+ authors_info = info.get("authors", {}).get("author", [])
+ if isinstance(
+ authors_info,
+ dict,
+ ): # Check if there's only one author in a dict format
+ authors_info = [authors_info]
+ authors = ", ".join(
+ [author["text"] for author in authors_info if "text" in author],
+ )
+ data = {
+ "title": title,
+ "venue": venue,
+ "pages": pages,
+ "year": year,
+ "type": pub_type,
+ "doi": doi,
+ "url": url,
+ "authors": authors,
+ }
+ parsed_data.append(data)
+ return ServiceResponse(ServiceExecStatus.SUCCESS, parsed_data)
+
+
+def dblp_search_authors(
+ question: str,
+ num_results: int = 30,
+ start: int = 0,
+ num_completion: int = 10,
+) -> ServiceResponse:
+ """Search for author information in the DBLP database.
+
+ Args:
+ question (`str`):
+ The search query string.
+ num_results (`int`, defaults to `30`):
+ The number of search results to return.
+ start (`int`, defaults to `0`):
+ The index of the first search result to return.
+ num_completion (`int`, defaults to `10`):
+ The number of completions to generate.
+
+
+ Returns:
+ `ServiceResponse`: A dictionary containing `status` and `content`.
+ The `status` attribute is from the
+ ServiceExecStatus enum, indicating the success or error of the search.
+ The `content` is a list of parsed author
+ data if successful, or an error message if failed.
+ Each item in the list contains author information
+ including their name, URL, and affiliations.
+
+ Example:
+ .. code-block:: python
+
+ search_results = dblp_search_authors(question="Liu ZiWei",
+ num_results=3,
+ start=0,
+ num_completion=1)
+ print(search_results)
+
+ It returns the following structure:
+
+ .. code-block:: python
+
+ {
+ 'status': ,
+ 'content': [
+ {
+ 'author': 'Ziwei Liu 0001',
+ 'url': 'https://dblp.org/pid/05/6300-1',
+ 'affiliations': 'Advantech Singapore Pte Ltd,
+ Singapore;
+ National University of Singapore,
+ Department of Computer Science, Singapore'
+ },
+ {
+ 'author': 'Ziwei Liu 0002',
+ 'url': 'https://dblp.org/pid/05/6300-2',
+ 'affiliations': 'Nanyang Technological University,
+ S-Lab, Singapore;
+ Chinese University of Hong Kong,
+ Department of Information Engineering,
+ Hong Kong'
+ }
+ ]
+ }
+ """
+ url = "https://dblp.org/search/author/api"
+ params = {
+ "q": question,
+ "format": "json",
+ "h": num_results,
+ "f": start,
+ "c": num_completion,
+ }
+ search_results = requests_get(url, params)
+ if isinstance(search_results, str):
+ return ServiceResponse(ServiceExecStatus.ERROR, search_results)
+ hits = search_results.get("result", {}).get("hits", {}).get("hit", [])
+ parsed_data = []
+ for hit in hits:
+ author = hit["info"]["author"]
+ author_url = hit["info"]["url"]
+ affiliations = []
+ notes = hit["info"].get("notes", {})
+ note_entries = notes.get("note", [])
+ if isinstance(note_entries, dict):
+ note_entries = [note_entries]
+ for note in note_entries:
+ if note["@type"] == "affiliation":
+ affiliations.append(note["text"])
+ affiliations = "; ".join(affiliations)
+ entry_dict = {
+ "author": author,
+ "url": author_url,
+ "affiliations": affiliations,
+ }
+ parsed_data.append(entry_dict)
+ return ServiceResponse(ServiceExecStatus.SUCCESS, parsed_data)
+
+
+def dblp_search_venues(
+ question: str,
+ num_results: int = 30,
+ start: int = 0,
+ num_completion: int = 10,
+) -> ServiceResponse:
+ """Search for venue information in the DBLP database.
+
+ Args:
+ question (`str`):
+ The search query string.
+ num_results (`int`, defaults to `30`):
+ The number of search results to return.
+ start (`int`, defaults to `0`):
+ The index of the first search result to return.
+ num_completion (`int`, defaults to `10`):
+ The number of completions to generate.
+
+ Returns:
+ `ServiceResponse`: A dictionary containing `status` and `content`.
+ The `status` attribute is from the ServiceExecStatus enum,
+ indicating the success or error of the search.
+ The `content` is a list of parsed venue data if successful,
+ or an error message if failed.
+ Each item in the list contains venue information including
+ its name, acronym, type, and URL.
+
+ Example:
+ .. code-block:: python
+
+ search_results = dblp_search_venues(question="AAAI",
+ num_results=1,
+ start=0,
+ num_completion=1)
+ print(search_results)
+
+ It returns the following structure:
+
+ .. code-block:: python
+
+ {
+ 'status': ,
+ 'content': [
+ {
+ 'venue': 'AAAI Conference on Artificial Intelligence
+ (AAAI)',
+ 'acronym': 'AAAI',
+ 'type': 'Conference or Workshop',
+ 'url': 'https://dblp.org/db/conf/aaai/'
+ },
+ {
+ 'venue': 'AAAI Fall Symposium Series',
+ 'acronym': 'No acronym available',
+ 'type': 'Conference or Workshop',
+ 'url': 'https://dblp.org/db/conf/aaaifs/'
+ }
+ ]
+ }
+ """
+ url = "https://dblp.org/search/venue/api"
+ params = {
+ "q": question,
+ "format": "json",
+ "h": num_results,
+ "f": start,
+ "c": num_completion,
+ }
+ search_results = requests_get(url, params)
+ if isinstance(search_results, str):
+ return ServiceResponse(ServiceExecStatus.ERROR, search_results)
+
+ hits = search_results.get("result", {}).get("hits", {}).get("hit", [])
+ parsed_data = []
+ for hit in hits:
+ venue = hit["info"]["venue"]
+ acronym = hit["info"].get("acronym", "No acronym available")
+ venue_type = hit["info"].get("type", "Type not specified")
+ url = hit["info"]["url"]
+ entry_dict = {
+ "venue": venue,
+ "acronym": acronym,
+ "type": venue_type,
+ "url": url,
+ }
+ parsed_data.append(entry_dict)
+ return ServiceResponse(ServiceExecStatus.SUCCESS, parsed_data)
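
A short usage sketch for the new DBLP services (the query string is illustrative); each helper returns a `ServiceResponse`, so the `status` field should be checked before consuming `content`:

from agentscope.service import dblp_search_publications
from agentscope.service.service_response import ServiceExecStatus

result = dblp_search_publications("multi-agent systems", num_results=5)
if result.status == ServiceExecStatus.SUCCESS:
    for pub in result.content:
        print(pub["year"], pub["title"])
else:
    print("DBLP search failed:", result.content)
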
diff --git a/src/agentscope/service/web/search.py b/src/agentscope/service/web/search.py
index fd72b7536..b5ff7e59f 100644
--- a/src/agentscope/service/web/search.py
+++ b/src/agentscope/service/web/search.py
@@ -1,7 +1,6 @@
# -*- coding: utf-8 -*-
"""Search question in the web"""
from typing import Any
-
from agentscope.service.service_response import ServiceResponse
from agentscope.utils.common import requests_get
from agentscope.service.service_status import ServiceExecStatus
diff --git a/src/agentscope/utils/tools.py b/src/agentscope/utils/tools.py
index 75cc0c7cb..8ebd23777 100644
--- a/src/agentscope/utils/tools.py
+++ b/src/agentscope/utils/tools.py
@@ -3,6 +3,7 @@
import base64
import datetime
import json
+import os.path
import secrets
import string
from typing import Any, Literal, List
@@ -129,7 +130,7 @@ def _to_openai_image_url(url: str) -> str:
"""
# See https://platform.openai.com/docs/guides/vision for details of
# support image extensions.
- image_extensions = (
+ support_image_extensions = (
".png",
".jpg",
".jpeg",
@@ -139,16 +140,17 @@ def _to_openai_image_url(url: str) -> str:
parsed_url = urlparse(url)
- # Checking for HTTP(S) image links
- if parsed_url.scheme in ["http", "https"]:
- lower_path = parsed_url.path.lower()
- if lower_path.endswith(image_extensions):
+ lower_url = url.lower()
+
+ # Web url
+ if parsed_url.scheme != "":
+ if any(lower_url.endswith(_) for _ in support_image_extensions):
return url
# Check if it is a local file
- elif parsed_url.scheme == "file" or not parsed_url.scheme:
- if parsed_url.path.lower().endswith(image_extensions):
- with open(parsed_url.path, "rb") as image_file:
+ elif os.path.exists(url) and os.path.isfile(url):
+ if any(lower_url.endswith(_) for _ in support_image_extensions):
+ with open(url, "rb") as image_file:
base64_image = base64.b64encode(image_file.read()).decode(
"utf-8",
)
@@ -156,7 +158,7 @@ def _to_openai_image_url(url: str) -> str:
mime_type = f"image/{extension}"
return f"data:{mime_type};base64,{base64_image}"
- raise TypeError(f"{url} should be end with {image_extensions}.")
+ raise TypeError(f"{url} should be end with {support_image_extensions}.")
def _download_file(url: str, path_file: str, max_retries: int = 3) -> bool:
@@ -294,3 +296,39 @@ def _join_str_with_comma_and(elements: List[str]) -> str:
return " and ".join(elements)
else:
return ", ".join(elements[:-1]) + f", and {elements[-1]}"
+
+
+class ImportErrorReporter:
+ """Used as a placeholder for missing packages.
+ When called, an ImportError will be raised, prompting the user to install
+ the specified extras requirement.
+ """
+
+ def __init__(self, error: ImportError, extras_require: str = None) -> None:
+ """Init the ImportErrorReporter.
+
+ Args:
+ error (`ImportError`): the original ImportError.
+ extras_require (`str`): the extras requirement.
+ """
+ self.error = error
+ self.extras_require = extras_require
+
+ def __call__(self, *args: Any, **kwds: Any) -> Any:
+ return self._raise_import_error()
+
+ def __getattr__(self, name: str) -> Any:
+ return self._raise_import_error()
+
+ def __getitem__(self, __key: Any) -> Any:
+ return self._raise_import_error()
+
+ def _raise_import_error(self) -> Any:
+ """Raise the ImportError"""
+ err_msg = f"ImportError occorred: [{self.error.msg}]."
+ if self.extras_require is not None:
+ err_msg += (
+ f" Please install [{self.extras_require}] version"
+ " of agentscope."
+ )
+ raise ImportError(err_msg)
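
For clarity, a sketch of how the reporter behaves: constructing it is harmless, and the `ImportError` is only raised when the missing package is actually touched (the package name below is made up):

from agentscope.utils.tools import ImportErrorReporter

try:
    import some_optional_package  # illustrative: assume it is not installed
except ImportError as import_error:
    some_optional_package = ImportErrorReporter(import_error, "distribute")

# No error so far; the first attribute access raises an ImportError that
# suggests installing the [distribute] version of agentscope.
some_optional_package.do_something()
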
diff --git a/tests/format_test.py b/tests/format_test.py
index 3226deab4..07efa86ae 100644
--- a/tests/format_test.py
+++ b/tests/format_test.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
"""Unit test for prompt engineering strategies in format function."""
import unittest
+from unittest import mock
from unittest.mock import MagicMock, patch
from agentscope.message import Msg
@@ -12,6 +13,7 @@
ZhipuAIChatWrapper,
DashScopeChatWrapper,
DashScopeMultiModalWrapper,
+ LiteLLMChatWrapper,
)
@@ -30,6 +32,27 @@ def setUp(self) -> None:
],
]
+ self.inputs_vision = [
+ Msg("system", "You are a helpful assistant", role="system"),
+ [
+ Msg(
+ "user",
+ "Describe the images",
+ role="user",
+ url="https://fakeweb/test.jpg",
+ ),
+ Msg(
+ "user",
+ "And this images",
+ "user",
+ url=[
+ "/Users/xxx/abc.png",
+ "/Users/xxx/def.mp3",
+ ],
+ ),
+ ],
+ ]
+
self.wrong_inputs = [
Msg("system", "You are a helpful assistant", role="system"),
[
@@ -38,6 +61,118 @@ def setUp(self) -> None:
],
]
+ @patch("builtins.open", mock.mock_open(read_data=b"abcdef"))
+ @patch("os.path.isfile")
+ @patch("os.path.exists")
+ @patch("openai.OpenAI")
+ def test_openai_chat_vision_with_wrong_model(
+ self,
+ mock_client: MagicMock,
+ mock_exists: MagicMock,
+ mock_isfile: MagicMock,
+ ) -> None:
+ """Unit test for the format function in openai chat api wrapper with
+ vision models"""
+ mock_exists.side_effect = lambda url: url == "/Users/xxx/abc.png"
+ mock_isfile.side_effect = lambda url: url == "/Users/xxx/abc.png"
+
+ # Prepare the mock client
+ mock_client.return_value = "client_dummy"
+
+ model = OpenAIChatWrapper(
+ config_name="",
+ model_name="gpt-4",
+ )
+
+ # correct format
+ ground_truth = [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant",
+ "name": "system",
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": "Describe the images",
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": "And this images",
+ },
+ ]
+
+ prompt = model.format(*self.inputs_vision)
+ self.assertListEqual(prompt, ground_truth)
+
+ @patch("builtins.open", mock.mock_open(read_data=b"abcdef"))
+ @patch("os.path.isfile")
+ @patch("os.path.exists")
+ @patch("openai.OpenAI")
+ def test_openai_chat_vision(
+ self,
+ mock_client: MagicMock,
+ mock_exists: MagicMock,
+ mock_isfile: MagicMock,
+ ) -> None:
+ """Unit test for the format function in openai chat api wrapper with
+ vision models"""
+ mock_exists.side_effect = lambda url: url == "/Users/xxx/abc.png"
+ mock_isfile.side_effect = lambda url: url == "/Users/xxx/abc.png"
+
+ # Prepare the mock client
+ mock_client.return_value = "client_dummy"
+
+ model = OpenAIChatWrapper(
+ config_name="",
+ model_name="gpt-4o",
+ )
+
+ # correct format
+ ground_truth = [
+ {
+ "role": "system",
+ "content": "You are a helpful assistant",
+ "name": "system",
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "Describe the images",
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "https://fakeweb/test.jpg",
+ },
+ },
+ ],
+ },
+ {
+ "role": "user",
+ "name": "user",
+ "content": [
+ {
+ "type": "text",
+ "text": "And this images",
+ },
+ {
+ "type": "image_url",
+ "image_url": {
+ "url": "data:image/png;base64,YWJjZGVm",
+ },
+ },
+ ],
+ },
+ ]
+
+ prompt = model.format(*self.inputs_vision)
+ self.assertListEqual(prompt, ground_truth)
+
@patch("openai.OpenAI")
def test_openai_chat(self, mock_client: MagicMock) -> None:
"""Unit test for the format function in openai chat api wrapper."""
@@ -84,9 +219,16 @@ def test_ollama_chat(self) -> None:
# correct format
ground_truth = [
- {"role": "system", "content": "You are a helpful assistant"},
- {"role": "user", "content": "What is the weather today?"},
- {"role": "assistant", "content": "It is sunny today"},
+ {
+ "role": "system",
+ "content": (
+ "You are a helpful assistant\n"
+ "\n"
+ "## Dialogue History\n"
+ "user: What is the weather today?\n"
+ "assistant: It is sunny today"
+ ),
+ },
]
prompt = model.format(*self.inputs) # type: ignore[arg-type]
self.assertEqual(prompt, ground_truth)
@@ -204,6 +346,32 @@ def test_zhipuai_chat(self) -> None:
with self.assertRaises(TypeError):
model.format(*self.wrong_inputs) # type: ignore[arg-type]
+ def test_litellm_chat(self) -> None:
+ """Unit test for the format function in litellm chat api wrapper."""
+ model = LiteLLMChatWrapper(
+ config_name="",
+ model_name="gpt-3.5-turbo",
+ api_key="xxx",
+ )
+
+ ground_truth = [
+ {
+ "role": "user",
+ "content": (
+ "You are a helpful assistant\n\n"
+ "## Dialogue History\nuser: What is the weather today?\n"
+ "assistant: It is sunny today"
+ ),
+ },
+ ]
+
+ prompt = model.format(*self.inputs)
+ self.assertListEqual(prompt, ground_truth)
+
+ # wrong format
+ with self.assertRaises(TypeError):
+ model.format(*self.wrong_inputs) # type: ignore[arg-type]
+
def test_dashscope_multimodal_image(self) -> None:
"""Unit test for the format function in dashscope multimodal
conversation api wrapper for image."""
diff --git a/tests/litellm_test.py b/tests/litellm_test.py
new file mode 100644
index 000000000..3ee4a8503
--- /dev/null
+++ b/tests/litellm_test.py
@@ -0,0 +1,61 @@
+# -*- coding: utf-8 -*-
+"""litellm test"""
+import unittest
+from unittest.mock import patch, MagicMock
+
+import agentscope
+from agentscope.models import load_model_by_config_name
+
+
+class TestLiteLLMChatWrapper(unittest.TestCase):
+ """Test LiteLLM Chat Wrapper"""
+
+ def setUp(self) -> None:
+ self.api_key = "test_api_key.secret_key"
+ self.messages = [
+ {"role": "user", "content": "Hello, litellm!"},
+ {"role": "assistant", "content": "How can I assist you?"},
+ ]
+
+ @patch("agentscope.models.litellm_model.litellm")
+ def test_chat(self, mock_litellm: MagicMock) -> None:
+ """
+ Test chat"""
+ mock_response = MagicMock()
+ mock_response.model_dump.return_value = {
+ "choices": [
+ {"message": {"content": "Hello, this is a mocked response!"}},
+ ],
+ "usage": {
+ "prompt_tokens": 100,
+ "completion_tokens": 5,
+ "total_tokens": 105,
+ },
+ }
+ mock_response.choices[
+ 0
+ ].message.content = "Hello, this is a mocked response!"
+
+ mock_litellm.completion.return_value = mock_response
+
+ agentscope.init(
+ model_configs={
+ "config_name": "test_config",
+ "model_type": "litellm_chat",
+ "model_name": "ollama/llama3:8b",
+ "api_key": self.api_key,
+ },
+ )
+
+ model = load_model_by_config_name("test_config")
+
+ response = model(
+ messages=self.messages,
+ api_base="http://localhost:11434",
+ )
+
+ self.assertEqual(response.text, "Hello, this is a mocked response!")
+
+
+if __name__ == "__main__":
+ unittest.main()
diff --git a/tests/memory_test.py b/tests/memory_test.py
index 4a25badec..629fb45d9 100644
--- a/tests/memory_test.py
+++ b/tests/memory_test.py
@@ -3,10 +3,11 @@
Unit tests for memory classes and functions
"""
+import os
import unittest
from unittest.mock import patch, MagicMock
-from agentscope.message import Msg
+from agentscope.message import Msg, Tht
from agentscope.memory import TemporaryMemory
@@ -17,6 +18,8 @@ class TemporaryMemoryTest(unittest.TestCase):
def setUp(self) -> None:
self.memory = TemporaryMemory()
+ self.file_name_1 = "tmp_mem_file1.txt"
+ self.file_name_2 = "tmp_mem_file2.txt"
self.msg_1 = Msg("user", "Hello", role="user")
self.msg_2 = Msg(
"agent",
@@ -29,19 +32,15 @@ def setUp(self) -> None:
role="assistant",
)
- self.dict_1 = {
- "name": "dict1",
- "content": "dict 1",
- "role": "assistant",
- }
- self.dict_2 = {
- "name": "dict2",
- "content": "dict 2",
- "role": "assistant",
- }
-
self.invalid = {"invalid_key": "invalid_value"}
+ def tearDown(self) -> None:
+ """Clean up before & after tests."""
+ if os.path.exists(self.file_name_1):
+ os.remove(self.file_name_1)
+ if os.path.exists(self.file_name_2):
+ os.remove(self.file_name_2)
+
def test_add(self) -> None:
"""Test add different types of object"""
# add msg
@@ -51,18 +50,11 @@ def test_add(self) -> None:
[self.msg_1],
)
- # add dict
- self.memory.add(self.dict_1)
- self.assertEqual(
- self.memory.get_memory(),
- [self.msg_1, self.dict_1],
- )
-
# add list
self.memory.add([self.msg_2, self.msg_3])
self.assertEqual(
self.memory.get_memory(),
- [self.msg_1, self.dict_1, self.msg_2, self.msg_3],
+ [self.msg_1, self.msg_2, self.msg_3],
)
@patch("loguru.logger.warning")
@@ -84,17 +76,11 @@ def test_delete(self, mock_logging: MagicMock) -> None:
def test_invalid(self) -> None:
"""Test invalid operations for memory"""
- self.memory.add(self.invalid)
# test invalid add
- self.assertEqual(
- self.memory.get_memory(),
- [self.invalid],
- )
-
- # test print
- self.assertEqual(
- self.memory.get_memory(),
- [{"invalid_key": "invalid_value"}],
+ with self.assertRaises(Exception) as context:
+ self.memory.add(self.invalid)
+ self.assertTrue(
+ f"Cannot add {self.invalid} to memory" in str(context.exception),
)
def test_load_export(self) -> None:
@@ -102,11 +88,11 @@ def test_load_export(self) -> None:
Test load and export function of TemporaryMemory
"""
memory = TemporaryMemory()
- user_input = {"name": "user", "content": "Hello"}
- agent_input = {
- "name": "agent",
- "content": "Hello! How can I help you?",
- }
+ user_input = Msg(name="user", content="Hello")
+ agent_input = Msg(
+ name="agent",
+ content="Hello! How can I help you?",
+ )
memory.load([user_input, agent_input])
retrieved_mem = memory.export(to_mem=True)
self.assertEqual(
@@ -114,6 +100,44 @@ def test_load_export(self) -> None:
[user_input, agent_input],
)
+ memory.export(file_path=self.file_name_1)
+ memory.clear()
+ self.assertEqual(
+ memory.get_memory(),
+ [],
+ )
+ memory.load(self.file_name_1)
+ self.assertEqual(
+ memory.get_memory(),
+ [user_input, agent_input],
+ )
+
+ def test_tht_memory(self) -> None:
+ """
+ Test temporary memory with Tht,
+ add, clear, export, loading
+ """
+ memory = TemporaryMemory()
+ thought = Tht("testing")
+ memory.add(thought)
+
+ self.assertEqual(
+ memory.get_memory(),
+ [thought],
+ )
+
+ memory.export(file_path=self.file_name_2)
+ memory.clear()
+ self.assertEqual(
+ memory.get_memory(),
+ [],
+ )
+ memory.load(self.file_name_2)
+ self.assertEqual(
+ memory.get_memory(),
+ [thought],
+ )
+
if __name__ == "__main__":
unittest.main()
diff --git a/tests/msghub_test.py b/tests/msghub_test.py
index d2034ac2b..75f61dccc 100644
--- a/tests/msghub_test.py
+++ b/tests/msghub_test.py
@@ -5,6 +5,7 @@
from agentscope.agents import AgentBase
from agentscope import msghub
+from agentscope.message import Msg
class TestAgent(AgentBase):
@@ -33,10 +34,10 @@ def setUp(self) -> None:
def test_msghub_operation(self) -> None:
"""Test add, delete and broadcast operations"""
- msg1 = {"msg": 1}
- msg2 = {"msg": 2}
- msg3 = {"msg": 3}
- msg4 = {"msg": 4}
+ msg1 = Msg(name="a1", content="msg1")
+ msg2 = Msg(name="a2", content="msg2")
+ msg3 = Msg(name="a3", content="msg3")
+ msg4 = Msg(name="a4", content="msg4")
with msghub(participants=[self.agent1, self.agent2]) as hub:
self.agent1(msg1)
@@ -68,11 +69,12 @@ def test_msghub(self) -> None:
"""msghub test."""
ground_truth = [
- {
- "role": "wisper",
- "content": "This secret that my password is 123456 can't be"
+ Msg(
+ name="w1",
+ content="This secret that my password is 123456 can't be"
" leaked!",
- },
+ role="wisper",
+ ),
]
with msghub(participants=[self.wisper, self.agent1, self.agent2]):