docs: update OpenAI integration page (#31646)

`model_kwargs` is no longer needed for `truncation` and `reasoning`; both are now passed directly to `ChatOpenAI`.
ccurme 2025-06-17 16:23:06 -04:00 committed by GitHub
parent 6409498f6c
commit da97013f96


@@ -830,7 +830,7 @@
 "# Initialize model\n",
 "llm = ChatOpenAI(\n",
 "    model=\"computer-use-preview\",\n",
-"    model_kwargs={\"truncation\": \"auto\"},\n",
+"    truncation=\"auto\",\n",
 ")\n",
 "\n",
 "# Bind computer-use tool\n",
@@ -1359,7 +1359,7 @@
 "\n",
 "Some OpenAI models will generate separate text content illustrating their reasoning process. See OpenAI's [reasoning documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) for details.\n",
 "\n",
-"OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter:"
+"OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter. `ChatOpenAI` will automatically route to the Responses API if this parameter is set."
 ]
 },
 {
@@ -1387,11 +1387,7 @@
 "    \"summary\": \"auto\",  # 'detailed', 'auto', or None\n",
 "}\n",
 "\n",
-"llm = ChatOpenAI(\n",
-"    model=\"o4-mini\",\n",
-"    use_responses_api=True,\n",
-"    model_kwargs={\"reasoning\": reasoning},\n",
-")\n",
+"llm = ChatOpenAI(model=\"o4-mini\", reasoning=reasoning)\n",
 "response = llm.invoke(\"What is 3^3?\")\n",
 "\n",
 "# Output\n",