openai[patch]: support built-in code interpreter and remote MCP tools (#31304)
@@ -915,6 +915,175 @@
"response_2.text()"
]
},
{
"cell_type": "markdown",
"id": "34ad0015-688c-4274-be55-93268b44f558",
"metadata": {},
"source": [
"#### Code interpreter\n",
"\n",
"OpenAI implements a [code interpreter](https://platform.openai.com/docs/guides/tools-code-interpreter) tool to support the sandboxed generation and execution of code.\n",
"\n",
"Example use:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "34826aae-6d48-4b84-bc00-89594a87d461",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"o4-mini\", use_responses_api=True)\n",
"\n",
"llm_with_tools = llm.bind_tools(\n",
"    [\n",
"        {\n",
"            \"type\": \"code_interpreter\",\n",
"            # Create a new container\n",
"            \"container\": {\"type\": \"auto\"},\n",
"        }\n",
"    ]\n",
")\n",
"response = llm_with_tools.invoke(\n",
"    \"Write and run code to answer the question: what is 3^3?\"\n",
")"
]
},
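{
"cell_type": "markdown",
"id": "code-interpreter-result-sketch",
"metadata": {},
"source": [
"The final answer is available via `response.text()`, and metadata for the sandboxed execution is surfaced in `response.additional_kwargs[\"tool_outputs\"]`. A minimal sketch (the exact fields on each output item are not shown here and may vary):\n",
"\n",
"```python\n",
"print(response.text())\n",
"\n",
"# Tool output metadata for the code interpreter call\n",
"print(response.additional_kwargs[\"tool_outputs\"])\n",
"```"
]
},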
{
"cell_type": "markdown",
"id": "1b4d92b9-941f-4d54-93a5-b0c73afd66b2",
"metadata": {},
"source": [
"Note that the above command created a new container. We can also specify an existing container ID:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d8c82895-5011-4062-a1bb-278ec91321e9",
"metadata": {},
"outputs": [],
"source": [
"tool_outputs = response.additional_kwargs[\"tool_outputs\"]\n",
"assert len(tool_outputs) == 1\n",
"# highlight-next-line\n",
"container_id = tool_outputs[0][\"container_id\"]\n",
"\n",
"llm_with_tools = llm.bind_tools(\n",
"    [\n",
"        {\n",
"            \"type\": \"code_interpreter\",\n",
"            # Use an existing container\n",
"            # highlight-next-line\n",
"            \"container\": container_id,\n",
"        }\n",
"    ]\n",
")"
]
},
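{
"cell_type": "markdown",
"id": "container-reuse-sketch",
"metadata": {},
"source": [
"Subsequent invocations then execute code in the same container. A minimal sketch (the follow-up question is hypothetical):\n",
"\n",
"```python\n",
"response_2 = llm_with_tools.invoke(\n",
"    \"Write and run code to answer the question: what is 4^4?\"\n",
")\n",
"response_2.text()\n",
"```"
]
},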
{
"cell_type": "markdown",
"id": "8db30501-522c-4915-963d-d60539b5c16e",
"metadata": {},
"source": [
"#### Remote MCP\n",
"\n",
"OpenAI implements a [remote MCP](https://platform.openai.com/docs/guides/tools-remote-mcp) tool that allows for model-generated calls to MCP servers.\n",
"\n",
"Example use:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "7044a87b-8b99-49e8-8ca4-e2a8ae49f65a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"o4-mini\", use_responses_api=True)\n",
"\n",
"llm_with_tools = llm.bind_tools(\n",
"    [\n",
"        {\n",
"            \"type\": \"mcp\",\n",
"            \"server_label\": \"deepwiki\",\n",
"            \"server_url\": \"https://mcp.deepwiki.com/mcp\",\n",
"            \"require_approval\": \"never\",\n",
"        }\n",
"    ]\n",
")\n",
"response = llm_with_tools.invoke(\n",
"    \"What transport protocols does the 2025-03-26 version of the MCP \"\n",
"    \"spec (modelcontextprotocol/modelcontextprotocol) support?\"\n",
")"
]
},
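{
"cell_type": "markdown",
"id": "mcp-output-inspection-sketch",
"metadata": {},
"source": [
"As a minimal sketch of inspecting the result: the answer is available via `response.text()`, and the MCP tool invocations appear as typed blocks in `response.additional_kwargs[\"tool_outputs\"]`:\n",
"\n",
"```python\n",
"print(response.text())\n",
"\n",
"# Each tool output block carries a \"type\" field identifying the kind of call\n",
"for output in response.additional_kwargs[\"tool_outputs\"]:\n",
"    print(output[\"type\"])\n",
"```"
]
},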
{
"cell_type": "markdown",
"id": "0ed7494e-425d-4bdf-ab83-3164757031dd",
"metadata": {},
"source": [
"<details>\n",
"<summary>MCP Approvals</summary>\n",
"\n",
"OpenAI will at times request approval before sharing data with a remote MCP server.\n",
"\n",
"In the above command, we instructed the model to never require approval. We can also configure the model to always request approval, or to always request approval for specific tools:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"llm_with_tools = llm.bind_tools(\n",
|
||||
" [\n",
|
||||
" {\n",
|
||||
" \"type\": \"mcp\",\n",
|
||||
" \"server_label\": \"deepwiki\",\n",
|
||||
" \"server_url\": \"https://mcp.deepwiki.com/mcp\",\n",
|
||||
" \"require_approval\": {\n",
|
||||
" \"always\": {\n",
|
||||
" \"tool_names\": [\"read_wiki_structure\"]\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" ]\n",
|
||||
")\n",
|
||||
"response = llm_with_tools.invoke(\n",
|
||||
" \"What transport protocols does the 2025-03-26 version of the MCP \"\n",
|
||||
" \"spec (modelcontextprotocol/modelcontextprotocol) support?\"\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Responses may then include blocks with type `\"mcp_approval_request\"`.\n",
|
||||
"\n",
|
||||
"To submit approvals for an approval request, structure it into a content block in an input message:\n",
|
||||
"\n",
|
||||
"```python\n",
|
||||
"approval_message = {\n",
|
||||
" \"role\": \"user\",\n",
|
||||
" \"content\": [\n",
|
||||
" {\n",
|
||||
" \"type\": \"mcp_approval_response\",\n",
|
||||
" \"approve\": True,\n",
|
||||
" \"approval_request_id\": output[\"id\"],\n",
|
||||
" }\n",
|
||||
" for output in response.additional_kwargs[\"tool_outputs\"]\n",
|
||||
" if output[\"type\"] == \"mcp_approval_request\"\n",
|
||||
" ]\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"next_response = llm_with_tools.invoke(\n",
|
||||
" [approval_message],\n",
|
||||
" # continue existing thread\n",
|
||||
" previous_response_id=response.response_metadata[\"id\"]\n",
|
||||
")\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"</details>"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6fda05f0-4b81-4709-9407-f316d760ad50",