From 2d351f25a569c7491189ce64ba0474eb5de6c05d Mon Sep 17 00:00:00 2001
From: garyzhang99
Date: Fri, 17 May 2024 14:16:42 +0800
Subject: [PATCH] litellm format doc

---
 .../en/source/tutorial/206-prompt.md    | 49 +++++++++++++++++++
 .../zh_CN/source/tutorial/206-prompt.md | 40 +++++++++++++++
 2 files changed, 89 insertions(+)

diff --git a/docs/sphinx_doc/en/source/tutorial/206-prompt.md b/docs/sphinx_doc/en/source/tutorial/206-prompt.md
index e30e8abd8..552d70406 100644
--- a/docs/sphinx_doc/en/source/tutorial/206-prompt.md
+++ b/docs/sphinx_doc/en/source/tutorial/206-prompt.md
@@ -300,6 +300,55 @@ print(prompt)
 ]
 ```
 
+### LiteLLMChatWrapper
+
+`LiteLLMChatWrapper` encapsulates the litellm chat API, which takes a list of
+messages as input. LiteLLM supports many different types of models, and each
+model may require a different format. To simplify usage, we provide a format
+that is compatible with most models. If a more specific format is needed, you
+can refer to the documentation of the specific model you use, as well as the
+[litellm](https://github.com/BerriAI/litellm) documentation, to customize a
+format function for your model.
+
+- Format all the messages in the chat history into a single message with `"user"` as the `role`.
+
+#### Prompt Strategy
+
+- The dialogue history is packed into a single `user` message, prefixed by the system message and "## Dialogue History".
+
+```python
+from agentscope.message import Msg
+from agentscope.models import LiteLLMChatWrapper
+
+model = LiteLLMChatWrapper(
+    config_name="",  # empty since we directly initialize the model wrapper
+    model_name="gpt-3.5-turbo",
+)
+
+prompt = model.format(
+    Msg("system", "You are a helpful assistant", role="system"),
+    [
+        Msg("user", "What is the weather today?", role="user"),
+        Msg("assistant", "It is sunny today", role="assistant"),
+    ],
+)
+
+print(prompt)
+```
+
+```bash
+[
+    {
+        "role": "user",
+        "content": (
+            "You are a helpful assistant\n\n"
+            "## Dialogue History\nuser: What is the weather today?\n"
+            "assistant: It is sunny today"
+        ),
+    },
+]
+```
+
 ### OllamaChatWrapper
 
 `OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of
diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
index 7ed143cfe..c2767d902 100644
--- a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
+++ b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
@@ -271,6 +271,46 @@ print(prompt)
 ]
 ```
 
+### LiteLLMChatWrapper
+
+`LiteLLMChatWrapper` encapsulates the litellm chat API, which takes a list of messages as input. LiteLLM supports many different types of models, and each model may require a different format. To simplify usage, we provide a format that is compatible with most models. If a more specific format is needed, you can refer to the documentation of the specific model you use, as well as the [litellm](https://github.com/BerriAI/litellm) documentation, to customize a format function for your model.
+
+- Format all the messages in the chat history into a single message with `"user"` as the `role`.
+
+#### Prompt Strategy
+
+- The dialogue history is packed into a single `user` message, prefixed by the system message and "## Dialogue History".
+
+```python
+from agentscope.message import Msg
+from agentscope.models import LiteLLMChatWrapper
+
+model = LiteLLMChatWrapper(
+    config_name="",  # empty since we directly initialize the model wrapper
+    model_name="gpt-3.5-turbo",
+)
+
+prompt = model.format(
+    Msg("system", "You are a helpful assistant", role="system"),
+    [
+        Msg("user", "What is the weather today?", role="user"),
+        Msg("assistant", "It is sunny today", role="assistant"),
+    ],
+)
+
+print(prompt)
+```
+
+```bash
+[
+    {
+        "role": "user",
+        "content": (
+            "You are a helpful assistant\n\n"
+            "## Dialogue History\nuser: What is the weather today?\n"
+            "assistant: It is sunny today"
+        ),
+    },
+]
+```
+
 ### `OllamaChatWrapper`
 
 `OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of messages as input. Messages must obey the following rules (updated 2024/03/22):
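
As a follow-up to the examples in the patch, here is a minimal end-to-end sketch of sending the formatted prompt to the model. It is illustrative only: it assumes litellm credentials are available in the environment (e.g. `OPENAI_API_KEY` when litellm routes `gpt-3.5-turbo` to OpenAI, with `"sk-..."` below as a hypothetical placeholder), and that calling the wrapper returns a response object whose `text` field holds the completion, following AgentScope's model wrapper convention.

```python
import os

from agentscope.message import Msg
from agentscope.models import LiteLLMChatWrapper

# Assumption: litellm routes "gpt-3.5-turbo" to OpenAI, so an OpenAI key is
# required; replace the placeholder with a real credential.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")

model = LiteLLMChatWrapper(
    config_name="",  # empty since we directly initialize the model wrapper
    model_name="gpt-3.5-turbo",
)

# Pack the system message and dialogue history into a single "user" message,
# as described in the prompt strategy above.
prompt = model.format(
    Msg("system", "You are a helpful assistant", role="system"),
    [Msg("user", "What is the weather today?", role="user")],
)

# Send the formatted prompt to the model; the `text` field of the returned
# response holds the completion string.
response = model(prompt)
print(response.text)
```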