openai[patch]: support Responses API (#30231)
Co-authored-by: Bagatur <baskaryan@gmail.com>
@@ -322,7 +322,7 @@
 "source": [
 "### ``strict=True``\n",
 "\n",
-":::info Requires ``langchain-openai>=0.1.21rc1``\n",
+":::info Requires ``langchain-openai>=0.1.21``\n",
 "\n",
 ":::\n",
 "\n",
@@ -397,6 +397,405 @@
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
]
},
{
"cell_type": "markdown",
"id": "84833dd0-17e9-4269-82ed-550639d65751",
"metadata": {},
"source": [
"## Responses API\n",
"\n",
":::info Requires ``langchain-openai>=0.3.9-rc.1``\n",
"\n",
":::\n",
"\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
"\n",
"`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
"\n",
"### Built-in tools\n",
"\n",
"Equipping `ChatOpenAI` with built-in tools will ground its responses with outside information, such as via context in files or the web. The [AIMessage](/docs/concepts/messages/#aimessage) generated from the model will include information about the built-in tool invocation.\n",
"\n",
"#### Web search\n",
"\n",
"To trigger a web search, pass `{\"type\": \"web_search_preview\"}` to the model as you would another tool.\n",
"\n",
":::tip\n",
"\n",
"You can also pass built-in tools as invocation params:\n",
"```python\n",
"llm.invoke(\"...\", tools=[{\"type\": \"web_search_preview\"}])\n",
"```\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d8bfe89-948b-42d4-beac-85ef2a72491d",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"tool = {\"type\": \"web_search_preview\"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\"What was a positive news story from today?\")"
]
},
{
"cell_type": "markdown",
"id": "c9fe67c6-38ff-40a5-93b3-a4b7fca76372",
"metadata": {},
"source": [
"Note that the response includes structured [content blocks](/docs/concepts/messages/#content-1) containing both the text of the response and OpenAI [annotations](https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#output-and-citations) citing its sources:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3ea5a4b1-f57a-4c8a-97f4-60ab8330a804",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'type': 'text',\n",
" 'text': 'Today, a heartwarming story emerged from Minnesota, where a group of high school robotics students built a custom motorized wheelchair for a 2-year-old boy named Cillian Jackson. Born with a genetic condition that limited his mobility, Cillian\\'s family couldn\\'t afford the $20,000 wheelchair he needed. The students at Farmington High School\\'s Rogue Robotics team took it upon themselves to modify a Power Wheels toy car into a functional motorized wheelchair for Cillian, complete with a joystick, safety bumpers, and a harness. One team member remarked, \"I think we won here more than we do in our competitions. Instead of completing a task, we\\'re helping change someone\\'s life.\" ([boredpanda.com](https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai))\\n\\nThis act of kindness highlights the profound impact that community support and innovation can have on individuals facing challenges. ',\n",
" 'annotations': [{'end_index': 778,\n",
" 'start_index': 682,\n",
" 'title': '“Global Positive News”: 40 Posts To Remind Us There’s Good In The World',\n",
" 'type': 'url_citation',\n",
" 'url': 'https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai'}]}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.content"
]
},
{
"cell_type": "markdown",
"id": "95fbc34c-2f12-4d51-92c5-bf62a2f8900c",
"metadata": {},
"source": [
":::tip\n",
"\n",
"You can recover just the text content of the response as a string by using `response.text()`. For example, to stream response text:\n",
"\n",
"```python\n",
"for token in llm_with_tools.stream(\"...\"):\n",
"    print(token.text(), end=\"|\")\n",
"```\n",
"\n",
"See the [streaming guide](/docs/how_to/chat_streaming/) for more detail.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2a332940-d409-41ee-ac36-2e9bee900e83",
"metadata": {},
"source": [
"The output message will also contain information from any tool invocations:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "a8011049-6c90-4fcb-82d4-850c72b46941",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'tool_outputs': [{'id': 'ws_67d192aeb6cc81918e736ad4a57937570d6f8507990d9d71',\n",
" 'status': 'completed',\n",
" 'type': 'web_search_call'}]}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.additional_kwargs"
]
},
{
"cell_type": "markdown",
"id": "288d47bb-3ccb-412f-a3d3-9f6cee0e6214",
"metadata": {},
"source": [
"#### File search\n",
"\n",
"To trigger a file search, pass a [file search tool](https://platform.openai.com/docs/guides/tools-file-search) to the model as you would another tool. You will need to populate an OpenAI-managed vector store and include the vector store ID in the tool definition. See [OpenAI documentation](https://platform.openai.com/docs/guides/tools-file-search) for more detail."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "1f758726-33ef-4c04-8a54-49adb783bbb3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Research by OpenAI is a new capability integrated into ChatGPT that allows for the execution of multi-step research tasks independently. It can synthesize extensive amounts of online information and produce comprehensive reports similar to what a research analyst would do, significantly speeding up processes that would typically take hours for a human.\n",
"\n",
"### Key Features:\n",
"- **Independent Research**: Users simply provide a prompt, and the model can find, analyze, and synthesize information from hundreds of online sources.\n",
"- **Multi-Modal Capabilities**: The model is also able to browse user-uploaded files, plot graphs using Python, and embed visualizations in its outputs.\n",
"- **Training**: Deep Research has been trained using reinforcement learning on real-world tasks that require extensive browsing and reasoning.\n",
"\n",
"### Applications:\n",
"- Useful for professionals in sectors like finance, science, policy, and engineering, enabling them to obtain accurate and thorough research quickly.\n",
"- It can also be beneficial for consumers seeking personalized recommendations on complex purchases.\n",
"\n",
"### Limitations:\n",
"Although Deep Research presents significant advancements, it has some limitations, such as the potential to hallucinate facts or struggle with authoritative information. \n",
"\n",
"Deep Research aims to facilitate access to thorough and documented information, marking a significant step toward the broader goal of developing artificial general intelligence (AGI).\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"openai_vector_store_ids = [\n",
"    \"vs_...\",  # your IDs here\n",
"]\n",
"\n",
"tool = {\n",
"    \"type\": \"file_search\",\n",
"    \"vector_store_ids\": openai_vector_store_ids,\n",
"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\"What is deep research by OpenAI?\")\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "f88bbd71-83b0-45a6-9141-46ec9da93df6",
"metadata": {},
"source": [
"As with [web search](#web-search), the response will include content blocks with citations:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "865bc14e-1599-438e-be44-857891004979",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',\n",
" 'index': 346,\n",
" 'type': 'file_citation',\n",
" 'filename': 'deep_research_blog.pdf'},\n",
" {'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',\n",
" 'index': 575,\n",
" 'type': 'file_citation',\n",
" 'filename': 'deep_research_blog.pdf'}]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.content[0][\"annotations\"][:2]"
]
},
{
"cell_type": "markdown",
"id": "dd00f6be-2862-4634-a0c3-14ee39915c90",
"metadata": {},
"source": [
"It will also include information from the built-in tool invocations:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e16a7110-d2d8-45fa-b372-5109f330540b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'tool_outputs': [{'id': 'fs_67d196fbb83c8191ba20586175331687089228ce932eceb1',\n",
" 'queries': ['What is deep research by OpenAI?'],\n",
" 'status': 'completed',\n",
" 'type': 'file_search_call'}]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.additional_kwargs"
]
},
{
"cell_type": "markdown",
"id": "6fda05f0-4b81-4709-9407-f316d760ad50",
"metadata": {},
"source": [
"### Managing conversation state\n",
"\n",
"The Responses API supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses).\n",
"\n",
"#### Manually manage state\n",
"\n",
"You can manage the state manually or using [LangGraph](/docs/tutorials/chatbot/), as with other chat models:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "51d3e4d3-ea78-426c-9205-aecb0937fca7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"As of March 12, 2025, here are some positive news stories that highlight recent uplifting events:\n",
"\n",
"*... exemplify positive developments in health, environmental sustainability, and community well-being. \n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"tool = {\"type\": \"web_search_preview\"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"first_query = \"What was a positive news story from today?\"\n",
"messages = [{\"role\": \"user\", \"content\": first_query}]\n",
"\n",
"response = llm_with_tools.invoke(messages)\n",
"response_text = response.text()\n",
"print(f\"{response_text[:100]}... {response_text[-100:]}\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5da9d20f-9712-46f4-a395-5be5a7c1bc62",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your question was: \"What was a positive news story from today?\"\n",
"\n",
"The last sentence of my answer was: \"These stories exemplify positive developments in health, environmental sustainability, and community well-being.\"\n"
]
}
],
"source": [
"second_query = (\n",
"    \"Repeat my question back to me, as well as the last sentence of your answer.\"\n",
")\n",
"\n",
"messages.extend(\n",
"    [\n",
"        response,\n",
"        {\"role\": \"user\", \"content\": second_query},\n",
"    ]\n",
")\n",
"second_response = llm_with_tools.invoke(messages)\n",
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "5fd8ca21-8a5e-4294-af32-11f26a040171",
"metadata": {},
"source": [
":::tip\n",
"\n",
"You can use [LangGraph](https://langchain-ai.github.io/langgraph/) to manage conversational threads for you in a variety of backends, including in-memory and Postgres. See [this tutorial](/docs/tutorials/chatbot/) to get started.\n",
"\n",
":::\n",
"\n",
"\n",
"#### Passing `previous_response_id`\n",
"\n",
"When using the Responses API, LangChain messages will include an `\"id\"` field in their metadata. Passing this ID to subsequent invocations will continue the conversation. Note that this is [equivalent](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses#openai-apis-for-conversation-state) to manually passing in messages from a billing perspective."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "009e541a-b372-410e-b9dd-608a8052ce09",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hi Bob! How can I assist you today?\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
"    model=\"gpt-4o-mini\",\n",
"    use_responses_api=True,\n",
")\n",
"response = llm.invoke(\"Hi, I'm Bob.\")\n",
"print(response.text())"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "393a443a-4c5f-4a07-bc0e-c76e529b35e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your name is Bob. How can I help you today, Bob?\n"
]
}
],
"source": [
"second_response = llm.invoke(\n",
"    \"What is my name?\",\n",
"    previous_response_id=response.response_metadata[\"id\"],\n",
")\n",
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "57e27714",