langchain agent -> Exiting LLM run with output: { "generations": [ [ null ] ] } #6802
I found a similar issue that might be relevant to your problem: "Getting started code for chatModel throws error". Here is a modified version of your code with added checks to handle the case where a prompt template fails to load:

```typescript
import { OpenAI } from "@langchain/openai";
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import type { PromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Define the tools the agent will have access to.
const tools = [new TavilySearchResults({ maxResults: 1 })];

const llm = new OpenAI({
  model: "gpt-3.5-turbo-instruct",
  temperature: 0,
});

async function initializeAgent() {
  // Get the prompt to use - you can modify this!
  const prompt = await pull<PromptTemplate>("hwchase17/react");
  if (!prompt) {
    throw new Error("Failed to fetch the prompt template.");
  }

  const agent = await createOpenAIFunctionsAgent({
    llm,
    prompt,
    tools,
  });

  const agentExecutor = new AgentExecutor({
    agent,
    tools,
  });

  const result = await agentExecutor.invoke({
    input: "what is LangChain?",
  });
  console.log(result);

  const promptWithChat = await pull<PromptTemplate>("hwchase17/react-chat");
  if (!promptWithChat) {
    throw new Error("Failed to fetch the chat prompt template.");
  }

  const agentWithChat = await createOpenAIFunctionsAgent({
    llm,
    tools,
    prompt: promptWithChat,
  });

  const agentExecutorWithChat = new AgentExecutor({
    agent: agentWithChat,
    tools,
  });

  const result2 = await agentExecutorWithChat.invoke({
    input: "what's my name?",
    chat_history: "Human: Hi! My name is Cob\nAI: Hello Cob! Nice to meet you",
  });
  console.log(result2);
}

initializeAgent().catch(console.error);
```

This code includes checks to ensure that the prompt templates are fetched successfully before the agents are created. Furthermore, if you are using OpenAPI functions, ensure that the OpenAPI specification is correctly parsed and converted to OpenAI functions.
But my issue is different. @dosu
To resolve this, check how the agent's output parser handles the model response. Here is the relevant part of the code:

```typescript
parseAIMessage(message: BaseMessage): ToolsAgentAction[] | AgentFinish {
  if (message.content && typeof message.content !== "string") {
    throw new Error("This agent cannot parse non-string model responses.");
  }
  if (message.additional_kwargs.tool_calls) {
    const toolCalls: OpenAIClient.Chat.ChatCompletionMessageToolCall[] =
      message.additional_kwargs.tool_calls;
    try {
      return toolCalls.map((toolCall, i) => {
        const toolInput = toolCall.function.arguments
          ? JSON.parse(toolCall.function.arguments)
          : {};
        const messageLog = i === 0 ? [message] : [];
        return {
          tool: toolCall.function.name as string,
          toolInput,
          toolCallId: toolCall.id,
          log: `Invoking "${toolCall.function.name}" with ${
            toolCall.function.arguments ?? "{}"
          }\n${message.content}`,
          messageLog,
        };
      });
    } catch (error) {
      throw new OutputParserException(
        `Failed to parse tool arguments from chat model response. Text: "${JSON.stringify(
          toolCalls
        )}". ${error}`
      );
    }
  } else {
    return {
      returnValues: { output: message.content },
      log: message.content,
    };
  }
}
```

Ensure that the message and its content are defined before parsing. Here is a version with added checks:

```typescript
parseAIMessage(message: BaseMessage): ToolsAgentAction[] | AgentFinish {
  if (!message || !message.content) {
    throw new Error("Message or message content is undefined.");
  }
  if (typeof message.content !== "string") {
    throw new Error("This agent cannot parse non-string model responses.");
  }
  if (message.additional_kwargs.tool_calls) {
    const toolCalls: OpenAIClient.Chat.ChatCompletionMessageToolCall[] =
      message.additional_kwargs.tool_calls;
    try {
      return toolCalls.map((toolCall, i) => {
        const toolInput = toolCall.function.arguments
          ? JSON.parse(toolCall.function.arguments)
          : {};
        const messageLog = i === 0 ? [message] : [];
        return {
          tool: toolCall.function.name as string,
          toolInput,
          toolCallId: toolCall.id,
          log: `Invoking "${toolCall.function.name}" with ${
            toolCall.function.arguments ?? "{}"
          }\n${message.content}`,
          messageLog,
        };
      });
    } catch (error) {
      throw new OutputParserException(
        `Failed to parse tool arguments from chat model response. Text: "${JSON.stringify(
          toolCalls
        )}". ${error}`
      );
    }
  } else {
    return {
      returnValues: { output: message.content },
      log: message.content,
    };
  }
}
```

This additional check ensures that the message and its content are defined before the parser runs.
I managed to narrow it down to the issue being with Vite. I tried making the exact same thing as https://js.langchain.com/v0.1/docs/modules/agents/how_to/custom_agent/ in a normal create-react-app project and it worked, but create-react-app does not use Vite. In my repo, regardless of the config, it won't work. @dosu
The issue you're encountering with Vite might be related to how Vite handles module resolution and asynchronous operations compared to Create React App. Here are some steps and configurations to ensure compatibility with LangChain.js in a Vite project:
By ensuring these configurations and handling asynchronous operations correctly, you should be able to resolve the issues with Vite and LangChain.js compatibility [1][2].
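As a concrete illustration of this kind of Vite adjustment (this specific config is my assumption, not something named in the reply above), a common fix for libraries that expect a Node environment is to map the Node-style `global` object onto the browser's `globalThis`, both at runtime and during esbuild dependency pre-bundling:

```typescript
// vite.config.ts -- a minimal sketch, assuming the standard Vite config API.
// `define` substitutes `global` with `globalThis` in application code;
// `optimizeDeps.esbuildOptions.define` does the same for pre-bundled deps.
import { defineConfig } from "vite";

export default defineConfig({
  define: {
    global: "globalThis",
  },
  optimizeDeps: {
    esbuildOptions: {
      define: {
        global: "globalThis",
      },
    },
  },
});
```

This only covers the `global` identifier; modules like `Buffer` or `process` need actual polyfills.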
So what I'm getting is that I need to add a polyfill to my Vite config in order for it to work? @dosu
Yes, adding polyfills to your Vite configuration can help resolve compatibility issues. Here's how you can add polyfills for Node.js modules in your Vite project:
By adding these polyfills, you should be able to resolve the issues related to module resolution and asynchronous operations in your Vite project.
@dosu what about the Vite polyfill?
To ensure LangChain.js works correctly in a Vite environment, you need to add polyfills for certain Node.js functionalities. Here’s how you can do it:
These steps will help you polyfill the required Node.js functionalities.
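As a concrete sketch of such a polyfill setup (assuming the community `vite-plugin-node-polyfills` package, which is not named in the reply above), the config could look like:

```typescript
// vite.config.ts -- sketch using vite-plugin-node-polyfills to provide
// browser shims for Node built-ins (Buffer, process, global) that some
// server-oriented libraries like LangChain.js reference.
import { defineConfig } from "vite";
import { nodePolyfills } from "vite-plugin-node-polyfills";

export default defineConfig({
  plugins: [
    nodePolyfills({
      // Shim the Node globals most often referenced by such libraries.
      globals: {
        Buffer: true,
        global: true,
        process: true,
      },
    }),
  ],
});
```

Install the plugin with `npm install -D vite-plugin-node-polyfills` before adding it to the config.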
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
Then, after setting `verbose: true` on the agent:
Description
I'm trying to use LangChain to create an agent with OpenAI functions/tools.
My application lives fully in the front-end (React).
In my network tab I can see the response of the LLM is SUCCESSFUL, I CAN SEE THE CONTENT, but the parser fails.
But I CAN'T get LangChain agents to work. Since I'm using it on the front-end and the library is made for the back-end, I suppose there are some kind of timeouts going on under the hood that don't await the completion of an agent/tool, and it just throws an error (`inputVariables` always becomes undefined).
(After some more research I found this guy has the same issue but in Angular. Seems like it's a known problem for LangChain when doing it in the front-end: https://stackoverflow.com/questions/77075029/langchain-simple-llm-predict-in-angular-returns-empty-result)
BUT IF I do it in a Node.js environment it works as expected.
But I can see in the network tab
that the response is there -_-
P.S. I know I made this and a post in discussions, sorry for the spam, I'm on a tight deadline and need to get this working as soon as possible.
System Info
"@langchain/core": "^0.2.31",
"@langchain/openai": "^0.2.10",
"langchain": "^0.3.0",
platform macos
node version -> Node.js v20.12.2.
npm version 10.8.2