feat: added invoke method to chat model docs (langchain-ai#2981)
* feat: added invoke method to chat model docs

* nit

* nit: remove install openai dep
bracesproul authored Oct 20, 2023
1 parent 2fb2c8b commit f0af242
Showing 3 changed files with 25 additions and 19 deletions.
28 changes: 10 additions & 18 deletions docs/docs/modules/model_io/models/chat/index.mdx
@@ -20,24 +20,6 @@ The following sections of documentation are provided:

### Setup

To start we'll need to install the official OpenAI package:

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import CodeBlock from "@theme/CodeBlock";

<Tabs>
<TabItem value="npm" label="npm" default>
<CodeBlock language="bash">npm install -S openai</CodeBlock>
</TabItem>
<TabItem value="yarn" label="Yarn">
<CodeBlock language="bash">yarn add openai</CodeBlock>
</TabItem>
<TabItem value="pnpm" label="pnpm">
<CodeBlock language="bash">pnpm add openai</CodeBlock>
</TabItem>
</Tabs>

Accessing the API requires an API key, which you can get by creating an account and heading [here](https://platform.openai.com/account/api-keys). Once we have a key we'll want to set it as an environment variable by running:

```bash
export OPENAI_API_KEY="..."
```

@@ -67,6 +49,16 @@ const chat = new ChatOpenAI({});
The chat model interface is based around messages rather than raw text.
The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, `FunctionMessage`, and `ChatMessage` (`ChatMessage` takes in an arbitrary role parameter). Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.

### `invoke`

#### Generic inputs -> generic outputs

You can generate LLM responses by calling `.invoke` and passing in whatever inputs you defined in the [`Runnable`](/docs/expression_language/).

import RunnableExample from "@examples/models/chat/runnable_chat_quick_start.ts";

<CodeBlock language="typescript">{RunnableExample}</CodeBlock>

### `call`

#### Messages in -> message out
2 changes: 1 addition & 1 deletion examples/src/models/chat/chat_quick_start.ts
@@ -1,7 +1,7 @@
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

-const chat = new ChatOpenAI();
+const chat = new ChatOpenAI({});
// Pass in a list of messages to `call` to start a conversation. In this simple example, we only pass in one message.
const response = await chat.call([
new HumanMessage(
14 changes: 14 additions & 0 deletions examples/src/models/chat/runnable_chat_quick_start.ts
@@ -0,0 +1,14 @@
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";

const chat = new ChatOpenAI({});
// Create a prompt to start the conversation.
const prompt =
PromptTemplate.fromTemplate(`You're a dog, good luck with the conversation.
Question: {question}`);
// Define your runnable by piping the prompt into the chat model.
const runnable = prompt.pipe(chat);
// Call .invoke() and pass in the input defined in the prompt template.
const response = await runnable.invoke({ question: "Who's a good boy??" });
console.log(response);
// AIMessage { content: "Woof woof! Thank you for asking! I believe I'm a good boy! I try my best to be a good dog and make my humans happy. Wagging my tail happily here! How can I make your day better?" }
