-
Example Code

```ts
import { HumanMessage, SystemMessage } from "@langchain/core/messages";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI, ChatOpenAICallOptions } from "@langchain/openai";
import { z } from "zod";

const chatPromptTemplate = ChatPromptTemplate.fromMessages([
  new SystemMessage("You are a helpful art history expert."),
  new HumanMessage("What was Michelangelo's most famous work?"),
]);

const baseModel = new ChatOpenAI({
  apiKey: "your api key goes here",
  model: "gpt-4o-2024-08-06",
});

const modelOptions: ChatOpenAICallOptions = {
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "response-body",
      schema: z.object({ name: z.string(), description: z.string() }),
    },
  },
};

const model = baseModel.bind(modelOptions);
const sequence = chatPromptTemplate.pipe(model);

const stream = await sequence.stream({});
for await (const message of stream) {
  console.log(message);
}
```

Description

My issue is an unexpected, and seemingly unnecessary, reduction in capability with a recent release. I am using ChatOpenAI with the new option for `response_format`. With the latest @langchain/openai I am receiving a warning that streaming is not supported when `response_format` is set to `json_schema`. Before the latest release (i.e., before langchain added explicit handling for `json_schema`), streaming worked.

Question: Is this block at all necessary? Can it just be removed? Thank you!

System Info

"npm info @langchain/openai"
Platform: Windows
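For readers unfamiliar with the guard being discussed: the behavior described above amounts to a streaming entry point that detects the `json_schema` response format and degrades to a single non-streaming call. The sketch below is hypothetical (names and structure are invented for illustration, not the actual @langchain/openai source):

```ts
// Hypothetical sketch, NOT the actual @langchain/openai source. It illustrates
// the kind of guard discussed in this thread: a streaming path that detects
// the "json_schema" response format and silently falls back to one
// non-streaming call.
type ResponseFormatType = "text" | "json_object" | "json_schema";

async function* streamCompletion(
  prompt: string,
  responseFormat: ResponseFormatType,
  complete: (prompt: string) => Promise<string> // stand-in for the chat API call
): AsyncGenerator<string> {
  if (responseFormat === "json_schema") {
    // The disputed block: warn, then yield the whole response as one chunk.
    console.warn("Streaming is not supported with json_schema; falling back.");
    yield await complete(prompt);
    return;
  }
  // Normal path: simulate incremental token streaming.
  const full = await complete(prompt);
  for (const token of full.split(" ")) {
    yield token + " ";
  }
}
```

The effect on the caller is exactly the "reduction in capability" described: the `for await` loop still runs, but it receives one large chunk instead of incremental deltas.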
-
Hello, @ClaysGit! I'm here to assist you with any bugs, questions, or contributions you have. Let's get to the bottom of this together!

There is a similar unsolved discussion on this topic: Streaming OpenAI Function Call output in LangChain [1].

Regarding your question, it is necessary to block streaming when using the `json_schema` response format. However, there are tests indicating that streaming with `withStructuredOutput` does work. For now, you might consider using alternative methods for streaming structured output data. You can use the `withStructuredOutput` method with a Zod schema:

```ts
import { z } from "zod";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0,
  modelName: "gpt-4-turbo-preview",
});

const calculatorSchema = z.object({
  operation: z.enum(["add", "subtract", "multiply", "divide"]),
  number1: z.number(),
  number2: z.number(),
});

const modelWithStructuredOutput = model.withStructuredOutput(calculatorSchema, {
  name: "calculator",
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are VERY bad at math and must always use a calculator."],
  ["human", "Please help me!! What is 2 + 2?"],
]);

const chain = prompt.pipe(modelWithStructuredOutput);

const stream = await chain.stream({});
const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}
console.log(chunks);
```

This approach allows you to stream structured output data using Zod schemas [3]. Additionally, there is ongoing development to support streaming with `json_schema`.
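If streaming is not strictly required, a plain non-streaming call with the `json_schema` response format should still go through. The sketch below is one possible workaround not taken from this thread: it assumes the `zodResponseFormat` helper from the `openai` package composes with ChatOpenAI's call options, and the model/schema names are illustrative:

```ts
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";
import { ChatOpenAI } from "@langchain/openai";

const responseSchema = z.object({
  name: z.string(),
  description: z.string(),
});

const model = new ChatOpenAI({ model: "gpt-4o-2024-08-06" }).bind({
  // zodResponseFormat converts the Zod schema into the JSON Schema payload
  // that the OpenAI API expects for response_format.
  response_format: zodResponseFormat(responseSchema, "response_body"),
});

// invoke() sidesteps the streaming guard entirely: the structured JSON
// arrives in one piece and can be validated with the same Zod schema.
const message = await model.invoke("What was Michelangelo's most famous work?");
const parsed = responseSchema.parse(JSON.parse(message.content as string));
console.log(parsed);
```

Because the schema conversion happens client-side here, the same Zod object both shapes the request and validates the parsed response.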
-
Issue is resolved in #6906