Checked other resources
I searched the LangGraph/LangChain documentation with the integrated search.
I used the GitHub search to find a similar question and didn't find it.
I am sure that this is a bug in LangGraph/LangChain rather than my code.
I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.
Example Code
############################ Graph flow/src/graph.py
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.store.memory import InMemoryStore
from langgraph.graph.message import add_messages
from langgraph.graph import StateGraph
from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage, AIMessage
from langchain_openai import AzureChatOpenAI
from langchain_core.tools import tool
from typing_extensions import Annotated, TypedDict
import random
import os


############# Graph #############
class ConfigSchema(TypedDict):
    """Define the schema for the config object."""
    thread_id: str


class StateSchema(TypedDict):
    """Define the schema for the state object."""
    prompt: str
    messages: Annotated[list, add_messages]


############# Tools #############
@tool
def get_city_population(city: str) -> int:
    """Get the population of a city"""
    return random.randint(1000, 1000000)


tools = [get_city_population]


############# Nodes #############
class Agent:
    name: str = "Agent"
    agent: AzureChatOpenAI = None
    tools: list = []

    def __init__(self, tools: list = [], name: str = "Agent"):
        self.name = name
        self.agent = AzureChatOpenAI(
            azure_deployment=os.getenv('MODEL', 'gpt-4o-mini'),
            azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),
            openai_api_key=os.getenv('AZURE_OPENAI_API_KEY'),
            openai_api_version=os.getenv("OPENAI_API_VERSION"),
            model_kwargs={
                "response_format": {"type": "json_object"}
            }
        )
        # Bind tools to the agent if they are provided
        if len(tools) > 0:
            self.tools = tools
        # Bind tools to the agent if they are already set
        if len(self.tools) > 0:
            self.agent = self.agent.bind_tools(tools)
            # self.agent = self.agent.bind_tools(tools, strict=True)
        # Disabling this line as it seems to have no effect
        # self.agent.bind(response_format={"type": "json_object"})

    def run(self, state: dict, config: dict) -> dict:
        existing_messages = state.get('messages', [])
        new_messages = []
        if not existing_messages:
            new_messages = [
                SystemMessage("Your task is to find the population of a city requested by the user. Your response should be a JSON object with the city name and the population. {'<city>':'<population>'}"),
                HumanMessage(state.get('prompt')),
            ]
        response = self.agent.invoke(existing_messages + new_messages)
        new_messages.append(response)
        return {
            "messages": new_messages
        }


graph_builder = StateGraph(StateSchema, ConfigSchema)
agent_node = Agent(tools=tools)
tools_node = ToolNode(tools=tools)

# Add the nodes to the graph
graph_builder.add_node(agent_node.name, agent_node.run)
graph_builder.add_node(tools_node.name, tools_node)
graph_builder.add_conditional_edges(
    agent_node.name,
    tools_condition
)
graph_builder.add_edge(tools_node.name, agent_node.name)

# Configure the graph
graph_builder.set_entry_point(agent_node.name)
graph_builder.set_finish_point(agent_node.name)

in_memory_store = InMemoryStore()
graph = graph_builder.compile(store=in_memory_store)


############################ FastAPI api.py
from fastapi import FastAPI, Body
from pydantic import BaseModel
from flow.src.graph import graph as graph_module

app = FastAPI()


class GraphInput(BaseModel):
    prompt: str


@app.post("/graph")
async def create_graph(input: GraphInput):
    return graph_module.invoke({"prompt": input.prompt})
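For completeness, this is roughly how the endpoint is exercised in the troubleshooting steps below (a hypothetical client snippet, not part of the original repro; it assumes the app is served with uvicorn on localhost:8000):

# Hypothetical client call, assuming `uvicorn api:app` runs on localhost:8000
import requests

resp = requests.post(
    "http://localhost:8000/graph",
    json={"prompt": "Population of London?"},
)
print(resp.status_code, resp.json())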
Error Message and Stack Trace (if applicable)
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/uvicorn-0.32.0-py3.11.egg/uvicorn/protocols/http/h11_impl.py", line 406, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn-0.32.0-py3.11.egg/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi-0.115.4-py3.11.egg/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette-0.41.2-py3.11.egg/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi-0.115.4-py3.11.egg/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi-0.115.4-py3.11.egg/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api.py", line 14, in create_graph
return graph_module.invoke({"prompt": input.prompt})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/pregel/__init__.py", line 1608, in invoke
for chunk in self.stream(
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/pregel/__init__.py", line 1336, in stream
for _ in runner.tick(
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/pregel/runner.py", line 58, in tick
run_with_retry(t, retry_policy)
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/pregel/retry.py", line 29, in run_with_retry
task.proc.invoke(task.input, config)
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/utils/runnable.py", line 410, in invoke
input = context.run(step.invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langgraph-0.2.45-py3.11.egg/langgraph/utils/runnable.py", line 184, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/flow/src/graph.py", line 71, in run
response = self.agent.invoke(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.11/site-packages/langchain_core-0.3.15-py3.11.egg/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_openai-0.2.6-py3.11.egg/langchain_openai/chat_models/base.py", line 701, in _generate
response = self.root_client.beta.chat.completions.parse(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/openai-1.54.3-py3.11.egg/openai/resources/beta/chat/completions.py", line 142, in parse
_validate_input_tools(tools)
File "/usr/local/lib/python3.11/site-packages/openai-1.54.3-py3.11.egg/openai/lib/_parsing/_completions.py", line 53, in validate_input_tools
raise ValueError(
ValueError: `get_city_population` is not strict. Only `strict` function tools can be auto-parsed
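For context, the check that raises here lives in the openai SDK (openai/lib/_parsing/_completions.py) and requires every function tool sent to the beta parse endpoint to carry strict=True. A tool payload that passes that validation looks roughly like this (our reconstruction from the OpenAI function-calling docs, not from this traceback; field values taken from the repro above):

# Sketch of a "strict" function tool payload as the beta parse endpoint expects it
strict_tool = {
    "type": "function",
    "function": {
        "name": "get_city_population",
        "description": "Get the population of a city",
        "strict": True,  # <-- the flag the openai validator checks for
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
            "additionalProperties": False,  # required by strict mode
        },
    },
}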
Description
We are experiencing an issue with AzureChatOpenAI when tools are bound to it. The odd behaviour is that the graph works in LangGraph Studio, but once it is triggered via FastAPI, an error is thrown indicating "Only strict function tools can be auto-parsed".
The issue seems to be related to the response_format. When declaring the LLM using AzureChatOpenAI, we pass "response_format": {"type": "json_object"} through model_kwargs. We also tried the option agent.bind(response_format={"type": "json_object"}), but it does not seem to have any effect: if we remove the word "JSON" from the system prompt, no error is raised, whereas the API should respond with "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.". So using model_kwargs within AzureChatOpenAI does activate response_format, while .bind() apparently does not.
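To illustrate the two variants we tried, a minimal sketch (assuming the same Azure environment variables as in the repro; deployment name is illustrative):

from langchain_openai import AzureChatOpenAI

# Variant 1: model_kwargs -- response_format is attached to every request
llm_a = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    model_kwargs={"response_format": {"type": "json_object"}},
)

# Variant 2: .bind() -- returns a NEW runnable and leaves llm_b untouched,
# so calling llm_b.bind(...) without keeping the result is effectively a no-op
llm_b = AzureChatOpenAI(azure_deployment="gpt-4o-mini")
llm_b_bound = llm_b.bind(response_format={"type": "json_object"})

Note that in Agent.__init__ above, the commented-out self.agent.bind(...) call discards its return value, which may be why that variant appeared to do nothing.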
Troubleshooting 1:
Set model_kwargs = {"response_format": {"type": "json_object"}}.
API call to the FastAPI endpoint: POST /graph {"prompt":"Population of London?"}.
Result: ERROR, "Only strict function tools can be auto-parsed".
Troubleshooting 2:
Set model_kwargs = {"response_format": {"type": "json_object"}}.
Set agent.bind_tools(tools, strict=True).
API call to the FastAPI endpoint: POST /graph {"prompt":"Population of London?"}.
Result: ERROR, langgraph.errors.GraphRecursionError: Recursion limit of 25 reached without hitting a stop condition (see the recursion-limit sketch after this troubleshooting list).
Troubleshooting 3:
Used agent.bind(response_format={"type": "json_object"}) instead of model_kwargs.
API call to the FastAPI endpoint: POST /graph {"prompt":"Population of London?"}.
Result: WORKED, but response_format does not appear to be activated.
Then removed the "JSON" keyword from the system prompt.
API call to the FastAPI endpoint: POST /graph {"prompt":"Population of London?"}.
Result: WORKED, no error triggered, so the json_object feature is not actually in use. That is not a valid option for us.
Troubleshooting 4:
Set model_kwargs = {"response_format": {"type": "json_object"}}.
Removed the "JSON" keyword from the system prompt.
API call to the FastAPI endpoint: POST /graph {"prompt":"Population of London?"}.
Result: ERROR (which is expected): "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.". This confirms that the response_format feature is activated.
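Regarding the GraphRecursionError in Troubleshooting 2: LangGraph's default recursion limit is 25 and can be raised per invocation through the config. A sketch, not a fix; in our case the agent/tool loop presumably never reaches the finish point, so raising the limit would only delay the error:

# Raising LangGraph's per-run recursion limit (default is 25); this only
# buys more agent/tool round-trips, it does not fix a loop that never stops
result = graph.invoke(
    {"prompt": "Population of London?"},
    config={"recursion_limit": 50},
)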
And finally, running the initial version with model_kwargs = {"response_format": {"type": "json_object"}} and just self.agent = agent.bind_tools(tools) works perfectly when the graph is triggered from LangGraph Studio.
Within the stack trace, the call passes through 'openai/resources/beta/chat/completions.py', so we are not sure whether the beta endpoint could be related.
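One way to see what that beta parse endpoint is validating is to inspect the tool schema LangChain generates (a diagnostic sketch; convert_to_openai_tool is the langchain_core helper used under the hood when tools are bound):

from langchain_core.utils.function_calling import convert_to_openai_tool

# Default conversion (what bind_tools(tools) sends): no "strict" key,
# which the beta parse endpoint rejects when response_format is set
print(convert_to_openai_tool(get_city_population))

# With strict=True the schema gains "strict": true, satisfying the
# validator -- though per Troubleshooting 2 this changed the model's
# tool-calling behaviour enough to hit the recursion limit
print(convert_to_openai_tool(get_city_population, strict=True))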
Thanks for your help and support ;-)
System Info
python -m langchain_core.sys_info
System Information
OS: Linux
OS Version: #1 SMP Mon Aug 12 08:47:01 UTC 2024
Python Version: 3.11.10 (main, Oct 19 2024, 18:56:55) [GCC 12.2.0]