docs: add example of simultaneous tool-calling + structured output for OpenAI (#31433)

ccurme 2025-05-30 09:29:36 -04:00 committed by GitHub
parent d79b5813a0
commit bbb60e210a


@@ -397,6 +397,56 @@
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
]
},
{
"cell_type": "markdown",
"id": "f06789fb-61e1-4b35-a2b5-2dea18c1a949",
"metadata": {},
"source": [
"### Structured output and tool calls\n",
"\n",
"OpenAI's [structured output](https://platform.openai.com/docs/guides/structured-outputs) feature can be used simultaneously with tool-calling. The model will either generate tool calls or a response adhering to a desired schema. See example below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "15d2b6e0-f457-4abd-a4d5-08b210d09c04",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from pydantic import BaseModel\n",
"\n",
"\n",
"def get_weather(location: str) -> None:\n",
" \"\"\"Get weather at a location.\"\"\"\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"class OutputSchema(BaseModel):\n",
" \"\"\"Schema for response.\"\"\"\n",
"\n",
" answer: str\n",
" justification: str\n",
"\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4.1\")\n",
"\n",
"structured_llm = llm.bind_tools(\n",
" [get_weather],\n",
" response_format=OutputSchema,\n",
" strict=True,\n",
")\n",
"\n",
"# Response contains tool calls:\n",
"tool_call_response = structured_llm.invoke(\"What is the weather in SF?\")\n",
"\n",
"# structured_response.additional_kwargs[\"parsed\"] contains parsed output\n",
"structured_response = structured_llm.invoke(\n",
" \"What weighs more, a pound of feathers or a pound of gold?\"\n",
")"
]
},
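{
"cell_type": "markdown",
"id": "b2f4c6d8-1a3e-4b5c-9d7f-0e2a4c6b8d10",
"metadata": {},
"source": [
"As a quick, illustrative check of the two responses above (exact values will vary by run): tool calls are exposed on the message's `tool_calls` attribute, and the parsed structured output on `additional_kwargs[\"parsed\"]`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4e6f8a0-2b4c-4d6e-8f0a-1c3e5a7b9d21",
"metadata": {},
"outputs": [],
"source": [
"# Tool calls generated for the weather query, e.g.\n",
"# [{\"name\": \"get_weather\", \"args\": {\"location\": \"San Francisco\"}, ...}]\n",
"tool_call_response.tool_calls\n",
"\n",
"# Parsed structured output for the second query (an OutputSchema instance)\n",
"structured_response.additional_kwargs[\"parsed\"]"
]
},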
{
"cell_type": "markdown",
"id": "84833dd0-17e9-4269-82ed-550639d65751",