Doc/litellm doc #232

Merged
merged 20 commits into from
May 17, 2024
50 changes: 50 additions & 0 deletions docs/sphinx_doc/en/source/tutorial/206-prompt.md
@@ -300,6 +300,56 @@ print(prompt)
]
```


### LiteLLMChatWrapper

`LiteLLMChatWrapper` encapsulates the LiteLLM chat API, which takes a list of
messages as input. LiteLLM supports many different models, and each model may
require its own message format. To simplify usage, we provide a format
compatible with most models. If a more specific format is needed, refer to the
documentation of the model you use, as well as the
[litellm](https://github.com/BerriAI/litellm) documentation, to customize a
format function for your model.


- Formats all messages in the chat history into a single message with `"user"` as the `role`

#### Prompt Strategy

- The dialogue history is merged into a single `user` message, prefixed by the system message and a "## Dialogue History" header.

```python
from agentscope.message import Msg
from agentscope.models import LiteLLMChatWrapper

model = LiteLLMChatWrapper(
config_name="", # empty since we directly initialize the model wrapper
model_name="gpt-3.5-turbo",
)

prompt = model.format(
Msg("system", "You are a helpful assistant", role="system"),
[
Msg("user", "What is the weather today?", role="user"),
Msg("assistant", "It is sunny today", role="assistant"),
],
)

print(prompt)
```

```bash
[
{
"role": "user",
"content": (
"You are a helpful assistant\n\n"
"## Dialogue History\nuser: What is the weather today?\n"
"assistant: It is sunny today"
),
},
]
```
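
If the merged-into-`user` format above does not suit your model, you can write your own format function instead. Below is a minimal sketch of one possible customization; the `my_format` name is hypothetical, and the sketch assumes only that `Msg` objects expose `role` and `content` attributes, as in the example above:

```python
from typing import Sequence, Union

from agentscope.message import Msg


def my_format(*msgs: Union[Msg, Sequence[Msg]]) -> list:
    """Keep every message as its own entry instead of merging the
    dialogue history into a single user message."""
    prompt = []
    for item in msgs:
        # Accept both single Msg objects and sequences of them
        batch = [item] if isinstance(item, Msg) else item
        for msg in batch:
            prompt.append({"role": msg.role, "content": str(msg.content)})
    return prompt
```

Passing the same messages as above through this sketch would yield one dict per message, which may suit backends that handle the `system` and `assistant` roles natively.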

### OllamaChatWrapper

`OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of
40 changes: 40 additions & 0 deletions docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md
@@ -271,6 +271,46 @@ print(prompt)
]
```

### LiteLLMChatWrapper

`LiteLLMChatWrapper` encapsulates the LiteLLM chat API, which takes a list of
messages as input. LiteLLM supports many different models, and each model may
require its own message format. To simplify usage, we provide a format
compatible with most models. If a more specific format is needed, refer to the
documentation of the model you use, as well as the
[litellm](https://github.com/BerriAI/litellm) documentation, to customize a
format function for your model.

- Formats all messages in the chat history into a single message with `"user"` as the `role`

#### Prompt Strategy

- The dialogue history is merged into a single `user` message, prefixed by the system message and a "## Dialogue History" header.


```python
from agentscope.message import Msg
from agentscope.models import LiteLLMChatWrapper

model = LiteLLMChatWrapper(
config_name="", # empty since we directly initialize the model wrapper
model_name="gpt-3.5-turbo",
)

prompt = model.format(
Msg("system", "You are a helpful assistant", role="system"),
[
Msg("user", "What is the weather today?", role="user"),
Msg("assistant", "It is sunny today", role="assistant"),
],
)

print(prompt)
```

```bash
[
{
"role": "user",
"content": (
"You are a helpful assistant\n\n"
"## Dialogue History\nuser: What is the weather today?\n"
"assistant: It is sunny today"
),
},
]
```

### `OllamaChatWrapper`

`OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of messages as input. The messages must obey the following rules (updated on 2024/03/22):