Add distributed search example (#121)
pan-x-c authored Mar 29, 2024
1 parent f762dd6 commit b6342b2
Showing 16 changed files with 387 additions and 42 deletions.
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -40,6 +40,7 @@ repos:
        --disable-error-code=import-untyped,
        --disable-error-code=truthy-function,
        --follow-imports=skip,
+       --explicit-package-bases,
      ]
# - repo: https://github.com/numpy/numpydoc
# rev: v1.6.0
10 changes: 7 additions & 3 deletions README.md
@@ -114,8 +114,9 @@ the following libraries.
- [Werewolf](./examples/game_werewolf)

- Distribution
-  - [Distributed Conversation](./examples/distribution_conversation)
-  - [Distributed Debate](./examples/distribution_debate)
+  - [Distributed Conversation](./examples/distributed_basic)
+  - [Distributed Debate](./examples/distributed_debate)
+  - [Distributed Search](./examples/distributed_search)

More models, services and examples are coming soon!

@@ -245,22 +246,25 @@ AgentScope provides an easy-to-use runtime user interface capable of
displaying multimodal output on the front end, including text, images,
audio and video. To start a studio, you should install the `full` version
of AgentScope.

```bash
# On windows
pip install -e .[full]
# On mac
pip install -e .\[full\]
```

Once installed, you can just run

```bash
as_studio path/to/your/script.py
```

Then the studio will be launched at `localhost:xxxx`, and you will see a UI similar to the
following:
![](https://gw.alicdn.com/imgextra/i3/O1CN01X673v81WaHV1oCxEN_!!6000000002804-0-tps-2992-1498.jpg)
To be able to use the `as_studio` functionality, please implement the `main` function in your code. More details can be found in [src/agentscope/web/README.md](src/agentscope/web/README.md).
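For reference, here is a minimal sketch of such a script; the config path and `my_model` config name are illustrative assumptions, and the exact contract `as_studio` expects is described in the linked README:

```python
# -*- coding: utf-8 -*-
"""A hypothetical minimal script for as_studio (not the official template)."""
import agentscope
from agentscope.agents import DialogAgent, UserAgent


def main() -> None:
    """Entry point that as_studio looks for."""
    agentscope.init(model_configs="./configs/model_configs.json")  # assumed path
    agent = DialogAgent(
        name="Assistant",
        sys_prompt="You are a helpful assistant.",
        model_config_name="my_model",  # assumed config name
    )
    user = UserAgent()
    x = None
    while True:
        x = user(x)
        x = agent(x)


if __name__ == "__main__":
    main()
```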


## Tutorial

- [Getting Started](https://modelscope.github.io/agentscope/en/tutorial/quick_start.html)
5 changes: 3 additions & 2 deletions README_ZH.md
@@ -102,8 +102,9 @@ AgentScope supports rapid deployment of local model services using the following libraries.
- [Werewolf](./examples/game_werewolf)

- 分布式
-  - [Distributed Conversation](./examples/distribution_conversation)
-  - [Distributed Debate](./examples/distribution_debate)
+  - [Distributed Conversation](./examples/distributed_basic)
+  - [Distributed Debate](./examples/distributed_debate)
+  - [Distributed Search](./examples/distributed_search)

More model APIs, services and examples are coming soon!

25 changes: 25 additions & 0 deletions examples/distributed_basic/README.md
@@ -0,0 +1,25 @@
# Distributed Basic

This example runs an assistant agent and a user agent as separate processes and uses RPC to communicate between them.

Before running the example, please install the distributed version of AgentScope, fill in your model configuration correctly in `configs/model_configs.json`, and modify the `model_config_name` field in `distributed_dialog.py` accordingly.

First, use the following command to start the assistant agent.

```bash
cd examples/distributed_basic
python distributed_dialog.py --role assistant --assistant-host localhost --assistant-port 12010
# Please make sure the port is available.
# If the assistant agent and the user agent are started on different machines,
# please fill in the ip address of the assistant agent in the host field
```

Then, run the user agent.

```bash
python distributed_dialog.py --role user --assistant-host localhost --assistant-port 12010
# If the assistant agent is started on another machine,
# please fill in the ip address of the assistant agent in the host field
```

Now, you can chat with the assistant agent using the command line.
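For orientation, here is a condensed sketch of what the two roles do, adapted from `distributed_dialog.py` as changed in this commit. The launcher class name `RpcAgentServerLauncher` and its `launch`/`wait_until_terminate` methods are assumptions based on AgentScope's RPC API; in practice the two halves run in separate processes:

```python
# Condensed sketch -- in reality these two halves run in separate processes.
import agentscope
from agentscope.agents import DialogAgent, UserAgent, RpcAgentServerLauncher

agentscope.init(model_configs="configs/model_configs.json")

# --- assistant process: serve a DialogAgent over RPC ---
launcher = RpcAgentServerLauncher(
    agent_class=DialogAgent,
    agent_kwargs={
        "name": "Assistant",
        "sys_prompt": "You are a helpful assistant.",
        "model_config_name": "qwen",
        "use_memory": True,
    },
    host="localhost",
    port=12010,
)
launcher.launch()                # start serving (assumed method name)
launcher.wait_until_terminate()  # block until shutdown (assumed method name)

# --- user process: connect to the remote assistant ---
assistant = DialogAgent(
    name="Assistant",
    sys_prompt="You are a helpful assistant.",
    model_config_name="qwen",
    use_memory=True,
).to_dist(host="localhost", port=12010)  # connect instead of spawning locally
user = UserAgent(name="User")
x = None
while True:
    x = user(x)
    x = assistant(x)
```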
examples/distributed_basic/configs/model_configs.json
@@ -1,20 +1,19 @@
[
    {
-        "config_name": "gpt-3.5-turbo",
+        "config_name": "gpt-4",
        "model_type": "openai",
-        "model_name": "gpt-3.5-turbo",
+        "model_name": "gpt-4",
        "api_key": "xxx",
        "organization": "xxx",
        "generate_args": {
-            "temperature": 0.0
+            "temperature": 0.5
        }
    },
    {
-        "config_name": "gpt-4",
-        "model_type": "openai",
-        "model_name": "gpt-4",
+        "config_name": "qwen",
+        "model_type": "dashscope_chat",
+        "model_name": "qwen-max",
        "api_key": "xxx",
-        "organization": "xxx",
        "generate_args": {
            "temperature": 0.5
        }
examples/distributed_basic/distributed_dialog.py
@@ -41,7 +41,7 @@ def setup_assistant_server(assistant_host: str, assistant_port: int) -> None:
        agent_kwargs={
            "name": "Assitant",
            "sys_prompt": "You are a helpful assistant.",
-            "model_config_name": "gpt-3.5-turbo",
+            "model_config_name": "qwen",
            "use_memory": True,
        },
        host=assistant_host,
@@ -59,7 +59,7 @@ def run_main_process(assistant_host: str, assistant_port: int) -> None:
    assistant_agent = DialogAgent(
        name="Assistant",
        sys_prompt="You are a helpful assistant.",
-        model_config_name="gpt-3.5-turbo",
+        model_config_name="qwen",
        use_memory=True,
    ).to_dist(
        host=assistant_host,
examples/distributed_debate/README.md
@@ -1,30 +1,4 @@
-# Distributed multi-agent example
-
-## Distributed dialogue (`distributed_dialog.py`)
-
-This example run a assistant agent and a user agent as seperate processes and use rpc to communicate between them.
-
-First, use the following command to start the assistant agent.
-
-```
-cd examples/distributed
-python distributed_dialog.py --role assistant --assistant-host localhost --assistant-port 12010
-# please make sure the port is available
-# if the assistant agent and the user agent are started on different machines
-# please fill in the ip address of the assistant agent in the host field
-```
-
-Then, run the user agent.
-
-```
-python distributed_dialog.py --role user --assistant-host localhost --assistant-port 12010
-# if the assistant agent is started on another machine
-# please fill in the ip address of the assistant agent in the host field
-```
-
-Now, you can chat with the assistant agent using the command line.
-
-## Distributed debate competition (`distributed_debate.py`)
+# Distributed debate competition

This example simulates a debate competition with three participant agents: the affirmative side (**Pro**), the negative side (**Con**), and the adjudicator (**Judge**).
**You can join in the debate as Pro or Con or both.**
@@ -36,7 +10,7 @@ Each agent is an independent process and can run on different machines.
You can join the debate as Pro or Con by providing the `--is-human` argument.
Messages generated by any agent can be observed by the other agents in the debate, as sketched below.
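This broadcast behavior can be pictured with AgentScope's `msghub`. A minimal sketch follows; whether `distributed_debate.py` uses exactly this helper is an assumption, and the `pro`, `con`, and `judge` agents are presumed to be constructed beforehand:

```python
# Sketch only: inside a msghub, every reply an agent produces is
# automatically observed by all other participants.
# pro, con, judge: agents assumed to be created elsewhere.
from agentscope import msghub

with msghub(participants=[pro, con, judge]):
    x = pro()          # Pro's argument is broadcast to Con and Judge
    x = con()          # Con's rebuttal is broadcast as well
    verdict = judge()  # Judge replies having observed the full exchange
```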

-> Due to role restrictions, DashScope APIs (e.g. Qwen) are currently unable to execute this example.
+Before running the example, please install the distributed version of AgentScope, fill in your model configuration correctly in `configs/model_configs.json`, and modify the `model_config_name` field in `configs/debate_agent_configs.json` accordingly.

### Step 1: set up Pro and Con agent servers

21 changes: 21 additions & 0 deletions examples/distributed_debate/configs/model_configs.json
@@ -0,0 +1,21 @@
[
    {
        "config_name": "gpt-4",
        "model_type": "openai",
        "model_name": "gpt-4",
        "api_key": "xxx",
        "organization": "xxx",
        "generate_args": {
            "temperature": 0.5
        }
    },
    {
        "config_name": "qwen",
        "model_type": "dashscope_chat",
        "model_name": "qwen-max",
        "api_key": "xxx",
        "generate_args": {
            "temperature": 0.5
        }
    }
]
File renamed without changes.
File renamed without changes.
83 changes: 83 additions & 0 deletions examples/distributed_search/README.md
@@ -0,0 +1,83 @@
# Multi-Agent Copilot Search

## Introduction

This example application converts the user's questions into keywords, calls a search engine with them, and then retrieves a series of web pages to find answers. It involves three types of agents: the UserAgent for user interaction, the SearcherAgent responsible for searching, and the AnswererAgent responsible for retrieving web pages and answering questions based on their content.

The search engine returns many web page links. To improve performance, multiple AnswererAgent instances need to run together. However, in the traditional single-process mode, even with multiple AnswererAgent instances, they can only fetch web pages and answer questions one by one in a single process.

With AgentScope's distributed mode, these AnswererAgent instances can automatically run at the same time, improving performance (a sketch of this pattern follows the list below).

From this example, you can learn:

- how to run multiple agents in different processes,
- how to make multiple agents run in parallel automatically,
- how to convert a single-process AgentScope application into a multi-process version.
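As a taste of the conversion, here is a minimal sketch of the parallel pattern. `AnswererAgent` and `to_dist` come from this example; the question, URLs, and `my_model` config name are illustrative assumptions:

```python
import agentscope
from agentscope.message import Msg
from answerer_agent import AnswererAgent

agentscope.init(model_configs="configs/model_configs.json")

question = "What is AgentScope?"  # illustrative question
urls = ["https://example.com/a", "https://example.com/b"]  # assumed search hits

# One AnswererAgent per page; with to_dist() each runs in its own process.
answerers = [
    AnswererAgent(name=f"Answerer-{i}", model_config_name="my_model").to_dist()
    for i in range(len(urls))
]

# Each call returns a placeholder immediately, so page fetching and LLM
# calls proceed in parallel; reading .content waits for the real result.
results = [
    agent(Msg(name="user", content="", url=url, query=question))
    for agent, url in zip(answerers, urls)
]
for res in results:
    print(res.url, res.content)
```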

## How to Run

### Step 0: Install AgentScope distributed version

This example requires the distributed version of AgentScope.

```bash
# On windows
pip install -e .[distribute]
# On mac / linux
pip install -e .\[distribute\]
```

### Step 1: Prepare your model and search engine API configuration

For the model configuration, please fill in your model configurations in `configs/model_configs.json`.
Here we give an example.

> DashScope models (e.g. qwen-max) and OpenAI models (e.g. gpt-3.5-turbo and gpt-4) have been tested with this example.
> Other models may require some modification of the code.

```json
[
    {
        "config_name": "my_model",
        "model_type": "dashscope_chat",
        "model_name": "qwen-max",
        "api_key": "your_api_key",
        "generate_args": {
            "temperature": 0.5
        },
        "messages_key": "input"
    }
]
```

For search engines, this example currently supports two options, Google and Bing. The configuration items for each of them are as follows:

- google
- `api-key`
- `cse-id`
- bing
- `api-key`

### Step 2: Run the example

Use the `main.py` script to run the example. The following are the parameters required to run the script:

- `--num-workers`: The number of AnswererAgent instances.
- `--use-dist`: Enable distributed mode.
- `--search-engine`: The search engine to use; currently supports `google` or `bing`.
- `--api-key`: API key for Google or Bing.
- `--cse-id`: CSE id for Google (if you use Bing, ignore this parameter).

For example, if you want to start the example application in distributed mode with 10 AnswererAgent instances and the Bing search engine, you can use the following command:

```shell
python main.py --num-workers 10 --search-engine bing --api-key xxxxx --use-dist
```

If you want to run the same case in the traditional single-process mode, you can use the following command:

```shell
python main.py --num-workers 10 --search-engine bing --api-key xxxxx
```

You can ask the same question in both modes to compare the difference in runtime. For example, answering a question with 10 workers takes only 13.2s in distributed mode, while it takes 51.3s in single-process mode.
60 changes: 60 additions & 0 deletions examples/distributed_search/answerer_agent.py
@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""Answerer Agent."""

from agentscope.message import Msg
from agentscope.agents import AgentBase
from agentscope.service.web_search.web_digest import load_web


class AnswererAgent(AgentBase):
    """An agent with web digest tool."""

    def __init__(
        self,
        name: str,
        model_config_name: str = None,
    ) -> None:
        super().__init__(
            name=name,
            sys_prompt="You are an AI assistant. You need to find answers to "
            "user questions based on specified web content.",
            model_config_name=model_config_name,
            use_memory=False,
        )

    def reply(self, x: dict = None) -> dict:
        # load and digest the web page referenced by the incoming message
        response = load_web(
            url=x.url,
            keep_raw=False,
            html_selected_tags=["p", "div", "h1", "li"],
            timeout=5,
        ).content
        if (
            "html_to_text" not in response
            or len(response["html_to_text"]) == 0
        ):
            return Msg(
                self.name,
                content=f"Unable to load web page [{x.url}].",
                url=x.url,
            )
        # prepare prompt
        prompt = self.model.format(
            Msg(name="system", role="system", content=self.sys_prompt),
            Msg(
                name="user",
                role="user",
                content="Please answer my question based on the content of"
                " the following web page:\n\n"
                f"{response['html_to_text']}"
                "\n\nBased on the above web page,"
                # this segment must be an f-string so x.query is interpolated
                f" please answer my question\n{x.query}",
            ),
        )
        # call llm and generate response
        response = self.model(prompt).text
        msg = Msg(self.name, content=response, url=x.url)

        self.speak(msg)

        return msg
12 changes: 12 additions & 0 deletions examples/distributed_search/configs/model_configs.json
@@ -0,0 +1,12 @@
[
    {
        "model_type": "dashscope_chat",
        "config_name": "my_model",
        "model_name": "qwen-max",
        "api_key": "your_api_key",
        "generate_args": {
            "temperature": 0.5
        },
        "messages_key": "input"
    }
]