Add distributed search example (#121)
Showing 16 changed files with 387 additions and 42 deletions.
# Distributed Basic

This example runs an assistant agent and a user agent as separate processes and uses RPC to communicate between them.

Before running the example, please install the distributed version of AgentScope, fill in your model configuration correctly in `configs/model_configs.json`, and modify the `model_config_name` field in `distributed_dialog.py` accordingly.

Then, use the following command to start the assistant agent.

```
cd examples/distributed
python distributed_dialog.py --role assistant --assistant-host localhost --assistant-port 12010
# Please make sure the port is available.
# If the assistant agent and the user agent are started on different machines,
# please fill in the IP address of the assistant agent in the host field.
```

Then, run the user agent.

```
python distributed_dialog.py --role user --assistant-host localhost --assistant-port 12010
# If the assistant agent is started on another machine,
# please fill in the IP address of the assistant agent in the host field.
```

Now, you can chat with the assistant agent using the command line.
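The two-process pattern above can be sketched with Python's standard library alone. The following is only an illustration of the idea, not AgentScope's actual RPC layer: an "assistant" process listens on the same port used in the commands above, and a "user" process connects and exchanges one message with it over `multiprocessing.connection`.

```python
# Minimal sketch of one process serving replies to another over a port.
# This is NOT AgentScope's RPC implementation -- just an illustration
# of the assistant/user split used in the example above.
import time
from multiprocessing import Process
from multiprocessing.connection import Listener, Client

ADDRESS = ("localhost", 12010)  # matches --assistant-port above
AUTHKEY = b"demo"


def assistant() -> None:
    """Listen for one request and send back a canned reply."""
    with Listener(ADDRESS, authkey=AUTHKEY) as listener:
        with listener.accept() as conn:
            query = conn.recv()
            conn.send(f"assistant received: {query}")


def user(query: str) -> str:
    """Connect to the assistant, send a query, and return its reply."""
    conn = None
    for _ in range(50):  # retry until the assistant is listening
        try:
            conn = Client(ADDRESS, authkey=AUTHKEY)
            break
        except ConnectionRefusedError:
            time.sleep(0.1)
    with conn:
        conn.send(query)
        return conn.recv()
```

Starting `assistant` in a `Process` and then calling `user("hello")` from the parent returns the echoed reply; AgentScope's distributed mode wraps the same idea behind its agent interface.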
...es/distributed/configs/model_configs.json → ...tributed_basic/configs/model_configs.json (13 changes: 6 additions & 7 deletions)
File renamed without changes.
[
    {
        "config_name": "gpt-4",
        "model_type": "openai",
        "model_name": "gpt-4",
        "api_key": "xxx",
        "organization": "xxx",
        "generate_args": {
            "temperature": 0.5
        }
    },
    {
        "config_name": "qwen",
        "model_type": "dashscope_chat",
        "model_name": "qwen-max",
        "api_key": "xxx",
        "generate_args": {
            "temperature": 0.5
        }
    }
]
File renamed without changes.
File renamed without changes.
# Multi-Agent Copilot Search

## Introduction

This example application converts the user's questions into keywords to call the search engine, and then retrieves a series of web pages to find answers. It involves three types of agents: the UserAgent for the user, the SearcherAgent responsible for searching, and the AnswererAgent responsible for retrieving answers from web pages.

The search engine returns many web page links. To improve performance, multiple instances of AnswererAgent need to run together. However, in the traditional single-process mode, even if there are multiple AnswererAgent instances, they can only fetch web pages and answer questions one by one on a single CPU.

With AgentScope's distributed mode, you can automatically make these AnswererAgent instances run at the same time to improve performance.

From this example, you can learn:

- how to run multiple agents in different processes,
- how to make multiple agents run in parallel automatically,
- how to convert a single-process AgentScope application into a multi-process version.
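The performance argument above can be illustrated without AgentScope at all. The sketch below (using threads rather than AgentScope's process-based workers, and a simulated per-page delay) contrasts answering pages one by one with answering them in parallel:

```python
# Sketch of why running multiple AnswererAgent-like workers in
# parallel helps. Threads and a sleep() stand in for AgentScope's
# distributed agents and real network/model latency.
import time
from concurrent.futures import ThreadPoolExecutor


def answer_from_page(url: str) -> str:
    """Stand-in for one AnswererAgent call (fetch page + answer)."""
    time.sleep(0.1)  # simulated latency per page
    return f"answer from {url}"


urls = [f"https://example.com/page{i}" for i in range(10)]

# Sequential: total time is roughly 10 x 0.1s.
start = time.perf_counter()
sequential = [answer_from_page(u) for u in urls]
t_seq = time.perf_counter() - start

# Parallel: with 10 workers, total time is roughly 0.1s.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(answer_from_page, urls))
t_par = time.perf_counter() - start

assert sequential == parallel  # same answers, much less wall time
```

The same question answered by 10 workers finishes in roughly a tenth of the sequential time, which is the effect the distributed mode exploits.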
## How to Run

### Step 0: Install the AgentScope distributed version

This example requires the distributed version of AgentScope.

```bash
# On windows
pip install -e .[distribute]
# On mac / linux
pip install -e .\[distribute\]
```

### Step 1: Prepare your model and search engine API configuration

For the model configuration, please fill in your model configurations in `configs/model_configs.json`.
Here we give an example.

> Dashscope models (e.g. qwen-max) and OpenAI models (e.g. gpt-3.5-turbo and gpt-4) have been tested for this example.
> Other models may require certain modifications to the code.

```json
[
    {
        "config_name": "my_model",
        "model_type": "dashscope_chat",
        "model_name": "qwen-max",
        "api_key": "your_api_key",
        "generate_args": {
            "temperature": 0.5
        },
        "messages_key": "input"
    }
]
```
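A config file in the shape above can be sanity-checked with a few lines of standard-library Python before launching the example. The `check_configs` helper below is hypothetical, not part of AgentScope; the field names are the ones from the example config.

```python
# Hypothetical sanity check for a model_configs.json in the shape
# shown above. Stdlib only; not part of AgentScope.
import json

REQUIRED_FIELDS = {"config_name", "model_type", "model_name"}


def check_configs(raw: str) -> list:
    """Parse the JSON text and verify each entry has the required fields."""
    configs = json.loads(raw)
    for cfg in configs:
        missing = REQUIRED_FIELDS - cfg.keys()
        if missing:
            raise ValueError(f"{cfg.get('config_name')}: missing {missing}")
    return configs


example = """
[
    {
        "config_name": "my_model",
        "model_type": "dashscope_chat",
        "model_name": "qwen-max",
        "api_key": "your_api_key",
        "generate_args": {"temperature": 0.5},
        "messages_key": "input"
    }
]
"""
configs = check_configs(example)
print(configs[0]["config_name"])  # -> my_model
```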
For search engines, this example currently supports two search engines, google and bing. The configuration items for each of them are as follows:

- google
  - `api-key`
  - `cse-id`
- bing
  - `api-key`

### Step 2: Run the example

Use the `main.py` script to run the example. The following are the parameters required to run the script:

- `--num-workers`: The number of AnswererAgent instances.
- `--use-dist`: Enable distributed mode.
- `--search-engine`: The search engine used; currently supports `google` or `bing`.
- `--api-key`: API key for google or bing.
- `--cse-id`: CSE id for google (if you use bing, ignore this parameter).
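The parameter list above maps onto a straightforward `argparse` setup. The sketch below mirrors those flags for illustration; it is not the example's actual `main.py`.

```python
# Argument parsing matching the flags listed above.
# An illustrative sketch, not the example's actual main.py.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Multi-agent copilot search")
    parser.add_argument("--num-workers", type=int, default=10,
                        help="number of AnswererAgent instances")
    parser.add_argument("--use-dist", action="store_true",
                        help="enable distributed mode")
    parser.add_argument("--search-engine", choices=["google", "bing"],
                        default="google", help="search engine to use")
    parser.add_argument("--api-key", required=True,
                        help="API key for google or bing")
    parser.add_argument("--cse-id", default=None,
                        help="CSE id for google; ignored for bing")
    return parser


args = build_parser().parse_args(
    ["--num-workers", "10", "--search-engine", "bing",
     "--api-key", "xxxxx", "--use-dist"]
)
print(args.num_workers, args.search_engine, args.use_dist)  # -> 10 bing True
```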
For example, if you want to start the example application in distributed mode with 10 AnswererAgents and use the bing search engine, you can use the following command:

```shell
python main.py --num-workers 10 --search-engine bing --api-key xxxxx --use-dist
```

And if you want to run the above case in traditional single-process mode, you can use the following command:

```shell
python main.py --num-workers 10 --search-engine bing --api-key xxxxx
```

You can ask the same question in both modes to compare the difference in runtime. For example, answering a question with 10 workers takes only 13.2s in distributed mode, while it takes 51.3s in single-process mode.
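Those reported timings work out to roughly a 3.9x speedup:

```python
# Speedup implied by the timings reported above.
single_process = 51.3  # seconds
distributed = 13.2     # seconds

speedup = single_process / distributed
print(f"{speedup:.1f}x")  # -> 3.9x
```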
# -*- coding: utf-8 -*-
"""Answerer Agent."""

from agentscope.message import Msg
from agentscope.agents import AgentBase
from agentscope.service.web_search.web_digest import load_web


class AnswererAgent(AgentBase):
    """An agent with web digest tool."""

    def __init__(
        self,
        name: str,
        model_config_name: str = None,
    ) -> None:
        super().__init__(
            name=name,
            sys_prompt="You are an AI assistant. You need to find answers to "
            "user questions based on specified web content.",
            model_config_name=model_config_name,
            use_memory=False,
        )

    def reply(self, x: dict = None) -> dict:
        response = load_web(
            url=x.url,
            keep_raw=False,
            html_selected_tags=["p", "div", "h1", "li"],
            timeout=5,
        ).content
        if (
            "html_to_text" not in response
            or len(response["html_to_text"]) == 0
        ):
            return Msg(
                self.name,
                content=f"Unable to load web page [{x.url}].",
                url=x.url,
            )
        # prepare prompt
        prompt = self.model.format(
            Msg(name="system", role="system", content=self.sys_prompt),
            Msg(
                name="user",
                role="user",
                content="Please answer my question based on the content of"
                " the following web page:\n\n"
                f"{response['html_to_text']}"
                "\n\nBased on the above web page,"
                # f-string prefix is required here so x.query is interpolated
                f" please answer my question:\n{x.query}",
            ),
        )
        # call llm and generate response
        response = self.model(prompt).text
        msg = Msg(self.name, content=response, url=x.url)

        self.speak(msg)

        return msg
[
    {
        "model_type": "tongyi_chat",
        "config_name": "my_model",
        "model_name": "qwen-max",
        "api_key": "your_api_key",
        "generate_args": {
            "temperature": 0.5
        },
        "messages_key": "input"
    }
]