fix distributed dialogue example
pan-x-c committed Jan 17, 2024
1 parent de8d646 commit fd55a30
Showing 2 changed files with 81 additions and 140 deletions.
2 changes: 1 addition & 1 deletion notebook/distributed_debate.ipynb
@@ -73,7 +73,7 @@
"id": "0072fc64",
"metadata": {},
"source": [
"Second, let's start the three agents in the debate. Note that each agent here will automatically start a sub-process, and the `reply` method is executed within the sub-process."
"Second, let's start the three agents in the debate. Note that each agent here will automatically starts a sub-process, and the `reply` method is executed within the sub-process."
]
},
{
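The note in the hunk above says that each debate agent automatically starts a sub-process and executes its `reply` method there. As a rough illustration only (the agent name, port, and system prompt below are invented; the constructor arguments and the `agentscope.init` call are copied from code that appears elsewhere in this commit), creating one such agent might look like this:

```python
# Hedged sketch, not taken from the notebook's hidden cells: one debate agent
# created so that it runs in its own sub-process.
import agentscope
from agentscope.agents.rpc_dialog_agent import RpcDialogAgent

agentscope.init(model_configs="configs/model_configs.json")

pro_agent = RpcDialogAgent(
    name="Pro",                                      # illustrative name
    host="localhost",
    port=12001,                                      # illustrative port
    sys_prompt="You argue in favor of the topic.",   # illustrative prompt
    model="gpt-3.5-turbo",
    use_memory=True,
    # launch_server is left at its (assumed) default, so the agent is expected
    # to spawn its own rpc server sub-process; `reply` then executes there.
)

# Sending a message, e.g. `pro_agent(msg)`, dispatches the call to that
# sub-process; `msg.update_value()` (used later in this commit) presumably
# blocks until the sub-process has produced the actual reply.
```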
219 changes: 80 additions & 139 deletions notebook/distributed_dialog.ipynb
@@ -26,114 +26,16 @@
},
{
"cell_type": "markdown",
"id": "710f835a-ecc8-481f-a4ab-7f0db33e68f4",
"metadata": {},
"source": [
"In this case, we need to initialize two agents: assistant agent and user agnent."
]
},
{
"cell_type": "markdown",
"id": "7938f3a2-38e5-424c-a589-71b31ea7fde1",
"metadata": {},
"source": [
"The code for assistant is as follows."
]
},
{
"cell_type": "markdown",
"id": "eb93b3c3-aee4-4c59-b6cd-a6b107a23b49",
"metadata": {},
"source": [
"```bash\n",
"def setup_assistant_server(assistant_host: str, assistant_port: int) -> None:\n",
" \"\"\"Set up assistant rpc server\"\"\"\n",
" agentscope.init(\n",
" model_configs=\"configs/model_configs.json\",\n",
" )\n",
" assistant_server_launcher = RpcAgentServerLauncher(\n",
" name=\"Assitant\",\n",
" agent_class=RpcDialogAgent,\n",
" host=assistant_host,\n",
" port=assistant_port,\n",
" sys_prompt=\"You are a helpful assistant.\",\n",
" model=\"gpt-3.5-turbo\",\n",
" use_memory=True,\n",
" local_mode=False,\n",
" )\n",
" assistant_server_launcher.launch()\n",
" assistant_server_launcher.wait_until_terminate()\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "6dc37a09-4077-4ab9-a6e9-e2afa9b87228",
"metadata": {},
"source": [
"To run two processes on jupyter at the same time, we put `assistant agent` in the background."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c09ed71-ede2-4f03-923a-67a2668fcd00",
"metadata": {},
"outputs": [],
"source": [
"%cd ../examples/distributed/"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c6148137-29f8-48c9-a91c-62e9b48f9e6c",
"metadata": {},
"outputs": [],
"source": [
"import subprocess\n",
"\n",
"# Using Popen to initialize a backstage process\n",
"\n",
"proc = subprocess.Popen(\n",
" ['python', 'distributed_dialog.py', '--role', 'assistant', '--assistant-host', 'localhost', '--assistant-port', '12010'],\n",
" stdout=subprocess.PIPE,\n",
" stderr=subprocess.PIPE\n",
")\n",
"\n",
"# This code will return immediately and the process will run in the background\n"
]
},
{
"cell_type": "markdown",
"id": "d5229b68-8c6d-4063-bfe7-f1d836b0b74d",
"id": "2417b9fc",
"metadata": {},
"source": [
"Then we run the user agent on jupyter."
"First, we need to set the model configs properly."
]
},
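The contents of the `model_configs` cell are collapsed in this diff. Purely as a hypothetical illustration of the kind of object passed to `agentscope.init` below (the field names here are assumptions, not the notebook's real cell), it might resemble:

```python
# Hypothetical sketch only: the real model_configs cell is collapsed in this
# diff, and these field names are assumptions rather than a documented schema.
model_configs = [
    {
        "type": "openai",          # assumed backend identifier
        "name": "gpt-3.5-turbo",   # matches the model name used by the agents
        "parameters": {
            "api_key": "xxx",            # placeholder, not a real key
            "organization_id": "xxx",    # placeholder
        },
    },
]
```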
{
"cell_type": "code",
"execution_count": null,
"id": "c1ce5f86-4088-4f1f-a010-377f5b7550f0",
"metadata": {},
"outputs": [],
"source": [
"%cd ../../notebook/"
]
},
{
"cell_type": "markdown",
"id": "fa7ce528-5447-4168-a83a-ffcce53d2155",
"metadata": {},
"source": [
"Define the model_configs."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d7deca99-21c7-4746-9288-67b9a2f733b1",
"execution_count": 1,
"id": "8d61bef5",
"metadata": {},
"outputs": [],
"source": [
@@ -161,70 +63,109 @@
},
{
"cell_type": "markdown",
"id": "62914494-1a6f-409e-96ab-01eaf3dcd12e",
"id": "710f835a-ecc8-481f-a4ab-7f0db33e68f4",
"metadata": {},
"source": [
"Run the following codes to initialize the user agent."
"Then, we need to initialize two agents: an assistant agent and a user agnent.\n",
"\n",
"To facilitate display on jupyter, the agents will be started in a standalone multi-process mode. For a fully distributed version, please refer to `examples/distributed/distributed_dialog.py`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85367a4c-ec23-40e1-967f-fa7a0e932661",
"metadata": {},
"outputs": [],
"execution_count": 2,
"id": "bf3226dc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[32m2024-01-17 11:31:22.219\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36magentscope.models\u001b[0m:\u001b[36mread_model_configs\u001b[0m:\u001b[36m147\u001b[0m - \u001b[1mLoad configs for model: gpt-3.5-turbo\u001b[0m\n"
]
}
],
"source": [
"import time\n",
"\n",
"import agentscope\n",
"from agentscope.agents import UserAgent\n",
"from agentscope.agents.user_agent import UserAgent\n",
"from agentscope.agents.rpc_dialog_agent import RpcDialogAgent\n",
"from agentscope.utils.logging_utils import logger\n",
"\n",
"assistant_host = \"localhost\"\n",
"assistant_port = 12010\n",
"\n",
"agentscope.init(\n",
" model_configs=model_configs,\n",
" model_configs=model_configs\n",
")\n",
"\n",
"assistant_agent = RpcDialogAgent(\n",
" name=\"Assistant\",\n",
" host=assistant_host,\n",
" port=assistant_port,\n",
" launch_server=False,\n",
" port=12010,\n",
" sys_prompt=\"You are a helpful assistant.\",\n",
" model=\"gpt-3.5-turbo\",\n",
" use_memory=True,\n",
")\n",
"user_agent = UserAgent(\n",
" name=\"User\",\n",
" require_url=False,\n",
")\n",
"logger.info(\n",
" \"Setup successfully, have fun chatting! (enter 'exit' to close the \"\n",
" \"agent)\",\n",
")\n",
"msg = user_agent()\n",
"while not msg.content.endswith(\"exit\"):\n",
" msg = assistant_agent(msg)\n",
" logger.chat(msg.update_value())\n",
" time.sleep(0.5)\n",
" msg = user_agent(msg)"
")"
]
},
{
"cell_type": "markdown",
"id": "829a2053-9643-4153-a324-0762177e8c1c",
"id": "dd70c37d",
"metadata": {},
"source": [
"Finally, terminate the assistant agent in the backend. "
"Finally, let's write the main process of the dialogue and chat with the assistant."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "59dc514a-c108-4d9e-95ce-bedf757c8958",
"metadata": {},
"outputs": [],
"execution_count": 3,
"id": "b0f3c851",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32m2024-01-17 11:32:00.262\u001b[0m | \u001b[1mINFO \u001b[0m | \u001b[36magentscope.agents.rpc_agent\u001b[0m:\u001b[36mlaunch\u001b[0m:\u001b[36m501\u001b[0m - \u001b[1mStarting rpc server [RpcDialogAgent] at port [12010]...\u001b[0m\n",
"2024-01-17 11:32:00.792 | INFO | agentscope.agents.rpc_agent:setup_rcp_agent_server:378 - rpc server [RpcDialogAgent] at port [12010] started successfully\n",
"2024-01-17 11:32:04.953 | WARNING | agentscope.models.model:checking_wrapper:122 - Fail to parsing response: Hello! How can I assist you today?.\n",
" Exception: Expecting value: line 1 column 1 (char 0), \t Attempt 1 / 1\n",
"Assistant: Hello! How can I assist you today?\n",
"\u001b[36m\u001b[1mAssistant\u001b[0m\u001b[36m\u001b[0m: Hello! How can I assist you today?\n",
"2024-01-17 11:32:16.205 | WARNING | agentscope.models.model:checking_wrapper:122 - Fail to parsing response: Sure, here's a joke for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up literally everything!.\n",
" Exception: Expecting value: line 1 column 1 (char 0), \t Attempt 1 / 1\n",
"Assistant: Sure, here's a joke for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up literally everything!\n",
"\u001b[36m\u001b[1mAssistant\u001b[0m\u001b[36m\u001b[0m: Sure, here's a joke for you:\n",
"\n",
"Why don't scientists trust atoms?\n",
"\n",
"Because they make up literally everything!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-01-17 11:32:00.779 | INFO | agentscope.models:read_model_configs:147 - Load configs for model: gpt-3.5-turbo\n"
]
}
],
"source": [
"proc.terminate()"
"import time\n",
"from loguru import logger\n",
"\n",
"msg = user_agent()\n",
"while not msg.content.endswith(\"exit\"):\n",
" msg = assistant_agent(msg)\n",
" logger.chat(msg.update_value())\n",
" time.sleep(0.5)\n",
" msg = user_agent(msg)"
]
}
],
@@ -244,7 +185,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.10.9"
}
},
"nbformat": 4,
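Putting the added lines of `distributed_dialog.ipynb` together, the notebook after this commit roughly follows the flow sketched below. This is a reconstruction from the diff, not the verbatim cells: `model_configs` refers to the collapsed config cell, and the behavior comments are inferences from how the calls are used above.

```python
# Sketch of the dialogue flow reconstructed from the added lines in this commit.
import time

import agentscope
from agentscope.agents.user_agent import UserAgent
from agentscope.agents.rpc_dialog_agent import RpcDialogAgent
from loguru import logger

# model_configs is the (collapsed) config defined earlier in the notebook.
agentscope.init(model_configs=model_configs)

# Standalone multi-process mode: the assistant launches an rpc server
# sub-process on port 12010 and executes `reply` inside it.
assistant_agent = RpcDialogAgent(
    name="Assistant",
    port=12010,
    sys_prompt="You are a helpful assistant.",
    model="gpt-3.5-turbo",
    use_memory=True,
)
user_agent = UserAgent(
    name="User",
    require_url=False,
)

# Chat until the user enters a message ending with "exit".
msg = user_agent()
while not msg.content.endswith("exit"):
    msg = assistant_agent(msg)        # dispatched to the assistant's sub-process
    logger.chat(msg.update_value())   # presumably blocks until the reply arrives
    time.sleep(0.5)
    msg = user_agent(msg)
```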
