Compare commits


19 Commits

Author SHA1 Message Date
Eugene Yurtsev
808f56b841 x 2025-05-30 16:13:34 -04:00
Eugene Yurtsev
3e7f7b63e3 empty commit 2025-05-30 16:08:28 -04:00
Eugene Yurtsev
67ddd9d709 x 2025-05-30 16:04:27 -04:00
Eugene Yurtsev
b90b094645 xt 2025-05-30 15:51:42 -04:00
Eugene Yurtsev
f82b8534f7 x 2025-05-30 15:49:50 -04:00
Eugene Yurtsev
c9cfd2b39c x 2025-05-30 14:33:59 -04:00
Eugene Yurtsev
2358235342 Merge branch 'master' into eugene/responses_api_2 2025-05-30 14:19:15 -04:00
Eugene Yurtsev
a84bbd3015 x 2025-05-30 14:18:58 -04:00
Eugene Yurtsev
b1cef428b9 Merge branch 'master' into eugene/responses_api_2 2025-05-30 14:09:44 -04:00
ccurme
5bf89628bf groq[patch]: update model for integration tests (#31440)
Llama-3.1 started failing consistently with
> groq.BadRequestError: Error code: 400 - {'error': {'message':
"Failed to call a function. Please adjust your prompt. See
'failed_generation' for more details.", 'type': 'invalid_request_error',
'code': 'tool_use_failed', 'failed_generation':
'<function=brave_search>{"query": "Hello!"}</function>'}}
2025-05-30 17:27:12 +00:00
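For context on the switch: the integration tests below move from llama-3.1-8b-instant to gemma2-9b-it. A minimal sketch of the tool-calling pattern the suite exercises, assuming langchain-groq is installed and GROQ_API_KEY is set (the brave_search tool here is an illustrative stand-in, not the suite's actual fixture):

from langchain_core.tools import tool
from langchain_groq import ChatGroq


@tool
def brave_search(query: str) -> str:
    """Search the web for the given query."""
    return f"Results for {query!r}"


# llama-3.1-8b-instant began answering even trivial prompts with a malformed
# <function=...> block, producing the 400 tool_use_failed error quoted above.
llm = ChatGroq(model="gemma2-9b-it", temperature=0)
llm_with_tools = llm.bind_tools([brave_search])
response = llm_with_tools.invoke("Hello!")
print(response.tool_calls or response.content)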
Jorge Piedrahita Ortiz
5b9394319b docs: sambanova doc minor fixes (#31436)
- **Description:** sambanova provider docs minor fixes
2025-05-30 12:07:04 -04:00
ccurme
bbb60e210a docs: add example of simultaneous tool-calling + structured output for OpenAI (#31433) 2025-05-30 09:29:36 -04:00
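The pattern that example documents is roughly the sketch below. It assumes OpenAI's Chat Completions API accepts tools together with a json_schema response_format in one request; the tool, schema, and names are illustrative, not the docs page's exact code:

from langchain_openai import ChatOpenAI


def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return "sunny"


llm = ChatOpenAI(model="gpt-4o-mini")

# Bind a tool and a structured response format onto the same model, so one
# request can either emit a tool call or a schema-conforming JSON answer.
llm_with_both = llm.bind_tools([get_weather]).bind(
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "final_answer",
            "schema": {
                "type": "object",
                "properties": {"answer": {"type": "string"}},
                "required": ["answer"],
            },
        },
    }
)
message = llm_with_both.invoke("What's the weather in Paris?")
print(message.tool_calls or message.content)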
Eugene Yurtsev
abb00c1000 qxqx 2025-05-29 16:41:44 -04:00
Michael Li
d79b5813a0 doc: fix grammar in writer.ipynb (#31400)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 20:03:17 +00:00
अंkur गोswami
729526ff7c huggingface: Undefined model_id fix (#31358)
**Description:** This change fixes the undefined model_id issue when
instantiating
[ChatHuggingFace](https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L306)
**Issue:** Fixes https://github.com/langchain-ai/langchain/issues/31357


@baskaryan @hwchase17
2025-05-29 15:59:35 -04:00
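A minimal sketch of the path this fixes, assuming langchain-huggingface and transformers are installed (the model id is an arbitrary example):

from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
)

# Previously the tokenizer could be built while self.model_id was still None;
# the fix resolves model_id from the wrapped pipeline *before* calling
# AutoTokenizer.from_pretrained, and only as a fallback when none was passed.
chat = ChatHuggingFace(llm=llm)
print(chat.model_id)  # -> "microsoft/Phi-3-mini-4k-instruct"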
Michael Li
b7f34749b1 docs: fix grammar issue in assign.ipynb and fireworks.ipynb (#31412)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 19:55:36 +00:00
Michael Li
dd4fc8ab8f docs: fix misspelled word in kinetica.ipynb and nvidia_ai_endpoints.ipynb (#31415)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 19:41:38 +00:00
Michael Li
cc6df95e58 docs: fix grammar and vocabulary issue in reka.ipynb (#31417)
Fix grammar

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-05-29 15:25:45 -04:00
ccurme
c8951ca124 infra: drop azure from streaming benchmarks (#31421)
Covered by BaseChatOpenAI
2025-05-29 15:06:12 -04:00
19 changed files with 1719 additions and 1458 deletions

View File

@@ -8,10 +8,8 @@ on:
workflow_dispatch:
env:
-AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
-AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
-AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
-AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
+AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
+AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
DEEPSEEK_API_KEY: foo
FIREWORKS_API_KEY: foo
@@ -62,4 +60,3 @@ jobs:
uv run --no-sync pytest ./tests/ --codspeed
fi
-mode: ${{ matrix.mode || 'instrumentation' }}

View File

@@ -157,7 +157,7 @@
"\n",
"## Next steps\n",
"\n",
"Now you've learned how to pass data through your chains to help to help format the data flowing through your chains.\n",
"Now you've learned how to pass data through your chains to help format the data flowing through your chains.\n",
"\n",
"To learn more, see the other how-to guides on runnables in this section."
]

View File

@@ -17,7 +17,7 @@
"source": [
"# ChatFireworks\n",
"\n",
"This doc help you get started with Fireworks AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatFireworks features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html).\n",
"This doc helps you get started with Fireworks AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatFireworks features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html).\n",
"\n",
"Fireworks AI is an AI inference platform to run and customize models. For a list of all models served by Fireworks see the [Fireworks docs](https://fireworks.ai/models).\n",
"\n",
@@ -39,7 +39,7 @@
"\n",
"### Credentials\n",
"\n",
"Head to (ttps://fireworks.ai/login to sign up to Fireworks and generate an API key. Once you've done this set the FIREWORKS_API_KEY environment variable:"
"Head to (https://fireworks.ai/login to sign up to Fireworks and generate an API key. Once you've done this set the FIREWORKS_API_KEY environment variable:"
]
},
{

View File

@@ -61,7 +61,7 @@
"# Install Langchain community and core packages\n",
"%pip install --upgrade --quiet langchain-core langchain-community\n",
"\n",
-"# Install Kineitca DB connection package\n",
+"# Install Kinetica DB connection package\n",
"%pip install --upgrade --quiet 'gpudb>=7.2.0.8' typeguard pandas tqdm\n",
"\n",
"# Install packages needed for this tutorial\n",

View File

@@ -318,7 +318,7 @@
"source": [
"### Code Generation\n",
"\n",
"These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-genreation and structured code tasks. An example of this is `meta/codellama-70b`."
"These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is `meta/codellama-70b`."
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -36,7 +36,7 @@
"\n",
"## Setup\n",
"\n",
"To access Reka models you'll need to create an Reka developer account, get an API key, and install the `langchain_community` integration package and the reka python package via 'pip install reka-api'.\n",
"To access Reka models you'll need to create a Reka developer account, get an API key, and install the `langchain_community` integration package and the reka python package via 'pip install reka-api'.\n",
"\n",
"### Credentials\n",
"\n",
@@ -280,7 +280,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Use use with tavtly api search"
"Use with Tavily api search"
]
},
{

View File

@@ -303,7 +303,7 @@
"source": [
"### A note on tool binding\n",
"\n",
"The `ChatWriter.bind_tools()` method does not create new instance with bound tools, but stores the received `tools` and `tool_choice` in the initial class instance attributes to pass them as parameters during the Palmyra LLM call while using `ChatWriter` invocation. This approach allows the support of different tool types, e.g. `function` and `graph`. `Graph` is one of the remotely called Writer Palmyra tools. For further information visit our [docs](https://dev.writer.com/api-guides/knowledge-graph#knowledge-graph). \n",
"The `ChatWriter.bind_tools()` method does not create a new instance with bound tools, but stores the received `tools` and `tool_choice` in the initial class instance attributes to pass them as parameters during the Palmyra LLM call while using `ChatWriter` invocation. This approach allows the support of different tool types, e.g. `function` and `graph`. `Graph` is one of the remotely called Writer Palmyra tools. For further information, visit our [docs](https://dev.writer.com/api-guides/knowledge-graph#knowledge-graph). \n",
"\n",
"For more information about tool usage in LangChain, visit the [LangChain tool calling documentation](https://python.langchain.com/docs/concepts/tool_calling/)."
]
@@ -373,7 +373,7 @@
"source": [
"## Prompt templates\n",
"\n",
"[Prompt templates](https://python.langchain.com/docs/concepts/prompt_templates/) help to translate user input and parameters into instructions for a language model. You can use `ChatWriter` with a prompt templates like so:\n"
"[Prompt templates](https://python.langchain.com/docs/concepts/prompt_templates/) help to translate user input and parameters into instructions for a language model. You can use `ChatWriter` with a prompt template like so:\n"
]
},
{
@@ -411,7 +411,7 @@
"metadata": {},
"source": [
"## API reference\n",
"For detailed documentation of all ChatWriter features and configurations head to the [API reference](https://python.langchain.com/api_reference/writer/chat_models/langchain_writer.chat_models.ChatWriter.html#langchain_writer.chat_models.ChatWriter).\n",
"For detailed documentation of all ChatWriter features and configurations, head to the [API reference](https://python.langchain.com/api_reference/writer/chat_models/langchain_writer.chat_models.ChatWriter.html#langchain_writer.chat_models.ChatWriter).\n",
"\n",
"## Additional resources\n",
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home)."
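To make the tool-binding note above concrete, a hedged sketch of the described behavior; it assumes langchain-writer is installed with WRITER_API_KEY set, and the model name and tool are illustrative:

from langchain_core.tools import tool
from langchain_writer import ChatWriter


@tool
def get_product_info(product: str) -> str:
    """Look up information about a product."""
    return f"Info about {product}"


llm = ChatWriter(model="palmyra-x-004")  # model name is an example

# Per the note above, bind_tools stores tools/tool_choice on this same
# instance rather than returning an independently configured copy, so the
# stored tools are used on subsequent ChatWriter invocations.
llm_with_tools = llm.bind_tools([get_product_info], tool_choice="auto")
response = llm_with_tools.invoke("What can you tell me about the graph tool?")
print(response.tool_calls or response.content)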

View File

@@ -74,14 +74,14 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.sambanova import SambaNovaCloud\n",
"\n",
"llm = SambaNovaCloud(\n",
" model=\"Meta-Llama-3.1-70B-Instruct\",\n",
" model=\"Meta-Llama-3.3-70B-Instruct\",\n",
" max_tokens_to_generate=1000,\n",
" temperature=0.01,\n",
" # top_k = 50,\n",

View File

@@ -130,7 +130,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"For a more detailed walkthrough of the SambaStudioEmbeddings component, see [this notebook](https://python.langchain.com/docs/integrations/text_embedding/sambanova/)"
"For a more detailed walkthrough of the SambaNovaCloudEmbeddings component, see [this notebook](https://python.langchain.com/docs/integrations/text_embedding/sambanova/)"
]
},
{

View File

@@ -29,14 +29,10 @@ class BaseTestGroq(ChatModelIntegrationTests):
return True
-class TestGroqLlama(BaseTestGroq):
+class TestGroqGemma(BaseTestGroq):
@property
def chat_model_params(self) -> dict:
-    return {
-        "model": "llama-3.1-8b-instant",
-        "temperature": 0,
-        "rate_limiter": rate_limiter,
-    }
+    return {"model": "gemma2-9b-it", "rate_limiter": rate_limiter}
@property
def supports_json_mode(self) -> bool:

View File

@@ -775,12 +775,12 @@ class ChatHuggingFace(BaseChatModel):
elif _is_huggingface_pipeline(self.llm):
from transformers import AutoTokenizer # type: ignore[import]
+self.model_id = self.model_id or self.llm.model_id
self.tokenizer = (
AutoTokenizer.from_pretrained(self.model_id)
if self.tokenizer is None
else self.tokenizer
)
-self.model_id = self.llm.model_id
return
elif _is_huggingface_endpoint(self.llm):
self.model_id = self.llm.repo_id or self.llm.model

View File

@@ -118,6 +118,15 @@ global_ssl_context = ssl.create_default_context(cafile=certifi.where())
_FUNCTION_CALL_IDS_MAP_KEY = "__openai_function_call_ids__"
+WellKnownTools = (
+    "file_search",
+    "web_search_preview",
+    "computer_use_preview",
+    "code_interpreter",
+    "mcp",
+    "image_generation",
+)
def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
"""Convert a dictionary to a LangChain message.
@@ -1487,13 +1496,7 @@ class BaseChatOpenAI(BaseChatModel):
"type": "function",
"function": {"name": tool_choice},
}
-elif tool_choice in (
-    "file_search",
-    "web_search_preview",
-    "computer_use_preview",
-    "code_interpreter",
-    "mcp",
-):
+elif tool_choice in WellKnownTools:
tool_choice = {"type": tool_choice}
# 'any' is not natively supported by OpenAI API.
# We support 'any' since other models use this instead of 'required'.
@@ -3050,6 +3053,13 @@ def _construct_responses_api_payload(
new_tools.append({"type": "function", **tool["function"]})
else:
new_tools.append(tool)
if tool["type"] == "image_generation" and "partial_images" in tool:
raise NotImplementedError(
"Partial image generation is not yet supported "
"via the LangChain ChatOpenAI client. Please "
"drop the 'partial_images' key from the image_generation tool."
)
payload["tools"] = new_tools
if tool_choice := payload.pop("tool_choice", None):
# chat api: {"type": "function", "function": {"name": "..."}}
@@ -3139,6 +3149,7 @@ def _pop_summary_index_from_reasoning(reasoning: dict) -> dict:
def _construct_responses_api_input(messages: Sequence[BaseMessage]) -> list:
"""Construct the input for the OpenAI Responses API."""
input_ = []
for lc_msg in messages:
msg = _convert_message_to_dict(lc_msg)
@@ -3191,6 +3202,7 @@ def _construct_responses_api_input(messages: Sequence[BaseMessage]) -> list:
computer_calls = []
code_interpreter_calls = []
mcp_calls = []
+image_generation_calls = []
tool_outputs = lc_msg.additional_kwargs.get("tool_outputs", [])
for tool_output in tool_outputs:
if tool_output.get("type") == "computer_call":
@@ -3199,10 +3211,22 @@ def _construct_responses_api_input(messages: Sequence[BaseMessage]) -> list:
code_interpreter_calls.append(tool_output)
elif tool_output.get("type") == "mcp_call":
mcp_calls.append(tool_output)
elif tool_output.get("type") == "image_generation":
image_generation_calls.append(tool_output)
else:
pass
input_.extend(code_interpreter_calls)
input_.extend(mcp_calls)
+# A previous image generation call can be referenced by ID
+input_.extend(
+    [
+        {"type": "image_generation", "id": image_generation_call["id"]}
+        for image_generation_call in image_generation_calls
+    ]
+)
msg["content"] = msg.get("content") or []
if lc_msg.additional_kwargs.get("refusal"):
if isinstance(msg["content"], str):
@@ -3489,6 +3513,7 @@ def _convert_responses_chunk_to_generation_chunk(
"mcp_call",
"mcp_list_tools",
"mcp_approval_request",
"image_generation_call",
):
additional_kwargs["tool_outputs"] = [
chunk.item.model_dump(exclude_none=True, mode="json")
@@ -3516,6 +3541,9 @@ def _convert_responses_chunk_to_generation_chunk(
{"index": chunk.summary_index, "type": "summary_text", "text": ""}
]
}
elif chunk.type == "response.image_generation_call.partial_image":
# Partial images are not supported yet.
pass
elif chunk.type == "response.reasoning_summary_text.delta":
additional_kwargs["reasoning"] = {
"summary": [

View File

@@ -7,7 +7,7 @@ authors = []
license = { text = "MIT" }
requires-python = ">=3.9"
dependencies = [
"langchain-core<1.0.0,>=0.3.61",
"langchain-core<1.0.0,>=0.3.63",
"openai<2.0.0,>=1.68.2",
"tiktoken<1,>=0.7",
]

View File

@@ -1 +0,0 @@
-[deleted: one long base64/gzip-encoded line (VCR cassette contents)]

File diff suppressed because one or more lines are too long

View File

@@ -38,10 +38,6 @@ class TestAzureOpenAIStandard(ChatModelIntegrationTests):
def supports_json_mode(self) -> bool:
return True
-    @property
-    def enable_vcr_tests(self) -> bool:
-        return True
class TestAzureOpenAIStandardLegacy(ChatModelIntegrationTests):
"""Test a legacy model."""
@@ -62,7 +58,3 @@ class TestAzureOpenAIStandardLegacy(ChatModelIntegrationTests):
@property
def structured_output_kwargs(self) -> dict:
return {"method": "function_calling"}
-    @property
-    def enable_vcr_tests(self) -> bool:
-        return True

View File

@@ -12,6 +12,7 @@ from langchain_core.messages import (
BaseMessage,
BaseMessageChunk,
HumanMessage,
+MessageLikeRepresentation,
)
from pydantic import BaseModel
from typing_extensions import TypedDict
@@ -452,3 +453,126 @@ def test_mcp_builtin() -> None:
_ = llm_with_tools.invoke(
[approval_message], previous_response_id=response.response_metadata["id"]
)
+@pytest.mark.skip(reason="Re-enable once VCR is properly set up.")
+@pytest.mark.vcr()
+def test_image_generation_streaming() -> None:
+    """Test image generation streaming."""
+    llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)
+
+    # Test invocation
+    tool = {
+        "type": "image_generation",
+        # For testing purposes let's keep the quality low, so the test runs faster.
+        "quality": "low",
+    }
+    llm_with_tools = llm.bind_tools([tool])
+
+    response = llm_with_tools.invoke("Make a picture of a fuzzy cat")
+    _check_response(response)
+    tool_output = response.additional_kwargs["tool_outputs"][0]
+
+    # Example tool output for an image
+    # {
+    #     "background": "opaque",
+    #     "id": "ig_683716a8ddf0819888572b20621c7ae4029ec8c11f8dacf8",
+    #     "output_format": "png",
+    #     "quality": "high",
+    #     "revised_prompt": "A fluffy, fuzzy cat sitting calmly, with soft fur, bright "
+    #     "eyes, and a cute, friendly expression. The background is "
+    #     "simple and light to emphasize the cat's texture and "
+    #     "fluffiness.",
+    #     "size": "1024x1024",
+    #     "status": "completed",
+    #     "type": "image_generation_call",
+    #     "result": # base64 encode image data
+    # }
+    expected_keys = {
+        "id",
+        "background",
+        "output_format",
+        "quality",
+        "result",
+        "revised_prompt",
+        "size",
+        "status",
+        "type",
+    }
+
+    assert set(tool_output.keys()).issubset(expected_keys)
+
+    llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)
+    full: Optional[BaseMessageChunk] = None
+    tool = {"type": "image_generation", "quality": "low"}
+    for chunk in llm.stream("Make a picture of a fuzzy cat", tools=[tool]):
+        assert isinstance(chunk, AIMessageChunk)
+        full = chunk if full is None else full + chunk
+    complete_ai_message = cast(AIMessageChunk, full)
+
+    # At the moment, the streaming API does not pick up annotations fully.
+    # So the following check is commented out.
+    # _check_response(complete_ai_message)
+
+    tool_output = complete_ai_message.additional_kwargs["tool_outputs"][0]
+    assert set(tool_output.keys()).issubset(expected_keys)
+
+
+def test_image_generation_multi_turn() -> None:
+    """Test multi-turn editing of image generation by passing in history."""
+    # Test multi-turn
+    llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)
+    # Test invocation
+    tool = {
+        "type": "image_generation",
+        # For testing purposes let's keep the quality low, so the test runs faster.
+        "quality": "low",
+    }
+    llm_with_tools = llm.bind_tools([tool])
+
+    chat_history: list[MessageLikeRepresentation] = [
+        {"role": "user", "content": "Make a picture of a fuzzy cat"}
+    ]
+    ai_message = llm_with_tools.invoke(chat_history)
+    _check_response(ai_message)
+    tool_output = ai_message.additional_kwargs["tool_outputs"][0]
+
+    # Example tool output for an image
+    # {
+    #     "background": "opaque",
+    #     "id": "ig_683716a8ddf0819888572b20621c7ae4029ec8c11f8dacf8",
+    #     "output_format": "png",
+    #     "quality": "high",
+    #     "revised_prompt": "A fluffy, fuzzy cat sitting calmly, with soft fur, bright "
+    #     "eyes, and a cute, friendly expression. The background is "
+    #     "simple and light to emphasize the cat's texture and "
+    #     "fluffiness.",
+    #     "size": "1024x1024",
+    #     "status": "completed",
+    #     "type": "image_generation_call",
+    #     "result": # base64 encode image data
+    # }
+    expected_keys = {
+        "id",
+        "background",
+        "output_format",
+        "quality",
+        "result",
+        "revised_prompt",
+        "size",
+        "status",
+        "type",
+    }
+    assert set(tool_output.keys()).issubset(expected_keys)
+
+    chat_history.extend(
+        [
+            # AI message with tool output
+            ai_message,
+            # New request
+            {"role": "user", "content": "Now make it realistic."},
+        ]
+    )
+
+    ai_message2 = llm_with_tools.invoke(chat_history)
+    _check_response(ai_message2)
+    tool_output2 = ai_message2.additional_kwargs["tool_outputs"][0]
+    assert set(tool_output2.keys()).issubset(expected_keys)

File diff suppressed because it is too large