Commit 24ef2da

deploy: 623ceaf

DavdGao committed May 17, 2024
1 parent 6b724cf commit 24ef2da
Showing 14 changed files with 216 additions and 33 deletions.
Binary file modified en/.doctrees/environment.pickle
Binary file modified en/.doctrees/index.doctree
Binary file modified en/.doctrees/tutorial/206-prompt.doctree
50 changes: 50 additions & 0 deletions en/_sources/tutorial/206-prompt.md.txt


### LiteLLMChatWrapper

`LiteLLMChatWrapper` encapsulates the litellm chat API, which takes a list of
messages as input. LiteLLM supports many different types of models, and each
model may require a different message format. To simplify usage, we provide a
format that is compatible with most models. If a more specific format is
needed, refer to the documentation of the model you use as well as the
[litellm](https://github.com/BerriAI/litellm) documentation to customize a
format function for your model.


- Format all messages in the chat history into a single message with `"user"` as the `role`.

#### Prompt Strategy

- The input messages are merged into the dialogue history of a single `user` message, prefixed by the system message and a "## Dialogue History" heading, as shown in the example below.

```python
from agentscope.models import LiteLLMChatWrapper
from agentscope.message import Msg

model = LiteLLMChatWrapper(
config_name="", # empty since we directly initialize the model wrapper
model_name="gpt-3.5-turbo",
)

prompt = model.format(
Msg("system", "You are a helpful assistant", role="system"),
[
Msg("user", "What is the weather today?", role="user"),
Msg("assistant", "It is sunny today", role="assistant"),
],
)

print(prompt)
```

```bash
[
{
"role": "user",
"content": (
"You are a helpful assistant\n\n"
"## Dialogue History\nuser: What is the weather today?\n"
"assistant: It is sunny today"
),
},
]
```
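
If the built-in strategy doesn't suit your model, you can write your own
format function and use its output instead. The sketch below is a minimal,
hypothetical example (the name `my_format` and the pass-through strategy are
our own, not part of AgentScope): it keeps the multi-turn structure and
forwards each message's `role` and `content` unchanged, for providers that
accept full chat histories.

```python
from typing import Sequence, Union

from agentscope.message import Msg


def my_format(*msgs: Union[Msg, Sequence[Msg]]) -> list:
    """Hypothetical format function: keep the multi-turn structure
    instead of merging everything into a single user message."""
    flat = []
    for msg in msgs:
        if isinstance(msg, Msg):
            flat.append(msg)
        else:  # a list or tuple of Msg objects
            flat.extend(msg)
    # Assumes each Msg exposes `role` and `content` fields, as in the
    # examples above.
    return [{"role": m.role, "content": m.content} for m in flat]
```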

### OllamaChatWrapper

`OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of
messages as input.
Binary file modified en/objects.inv
2 changes: 1 addition & 1 deletion en/searchindex.js

65 changes: 57 additions & 8 deletions en/tutorial/206-prompt.html

The rendered HTML page gains the same LiteLLMChatWrapper section as the
Markdown source above, and the auto-generated anchors of the later "Prompt
Strategy" headings shift by one: id3 → id4 (OllamaChatWrapper), id4 → id5
(OllamaGenerationWrapper), id5 → id6 (GeminiChatWrapper), and id6 → id7
(ZhipuAIChatWrapper).
Binary file modified zh_CN/.doctrees/environment.pickle
Binary file modified zh_CN/.doctrees/index.doctree
Binary file modified zh_CN/.doctrees/tutorial/206-prompt.doctree
40 changes: 40 additions & 0 deletions zh_CN/_sources/tutorial/206-prompt.md.txt

### LiteLLMChatWrapper

`LiteLLMChatWrapper` encapsulates the litellm chat API, which takes a list of messages as input. LiteLLM supports many different types of models, and each model may require a different message format. To simplify usage, we provide a format that is compatible with most models. If a more specific format is needed, refer to the documentation of the model you use as well as the [litellm](https://github.com/BerriAI/litellm) documentation to customize a format function for your model.

- Format all messages in the chat history into a single message with `"user"` as the `role`.

#### Prompt Strategy

- The input messages are merged into the dialogue history of a single `user` message, prefixed by the system message and a "## Dialogue History" heading.


```python
from agentscope.models import LiteLLMChatWrapper
from agentscope.message import Msg

model = LiteLLMChatWrapper(
config_name="", # empty since we directly initialize the model wrapper
model_name="gpt-3.5-turbo",
)

prompt = model.format(
Msg("system", "You are a helpful assistant", role="system"),
[
Msg("user", "What is the weather today?", role="user"),
Msg("assistant", "It is sunny today", role="assistant"),
],
)

print(prompt)
```

```bash
[
{
"role": "user",
"content": (
"You are a helpful assistant\n\n"
"## Dialogue History\nuser: What is the weather today?\n"
"assistant: It is sunny today"
),
},
]
```
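
As a hypothetical follow-up (not shown in the tutorial itself), the formatted
prompt can be sent to the model by calling the wrapper directly:

```python
# A minimal sketch, assuming the wrapper is callable and returns a response
# object with a `.text` field, and that the API key required by the chosen
# model (e.g. OPENAI_API_KEY for gpt-3.5-turbo) is set in the environment.
response = model(prompt)
print(response.text)
```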

### `OllamaChatWrapper`

`OllamaChatWrapper` encapsulates the Ollama chat API, which takes a list of messages as input. The messages must obey the following rules (updated as of 2024/03/22):
Binary file modified zh_CN/objects.inv
2 changes: 1 addition & 1 deletion zh_CN/searchindex.js