Chester Curme
844b8b87d7
Merge branch 'standard_outputs' into cc/openai_v1
...
# Conflicts:
# libs/core/langchain_core/language_models/v1/chat_models.py
# libs/core/langchain_core/messages/utils.py
# libs/core/langchain_core/messages/v1.py
# libs/partners/openai/langchain_openai/chat_models/_compat.py
# libs/partners/openai/langchain_openai/chat_models/base.py
2025-07-28 12:38:32 -04:00
Chester Curme
61e329637b
lint
2025-07-28 11:02:37 -04:00
Mason Daugherty
ef9b5a9e18
add back standard_outputs
2025-07-28 10:47:26 -04:00
Mason Daugherty
5e9eb19a83
chore: update branch with changes from master ( #32277 )
...
Co-authored-by: Maxime Grenu <69890511+cluster2600@users.noreply.github.com>
Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: jmaillefaud <jonathan.maillefaud@evooq.ch>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: tanwirahmad <tanwirahmad@users.noreply.github.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: niceg <79145285+growmuye@users.noreply.github.com>
Co-authored-by: Chaitanya varma <varmac301@gmail.com>
Co-authored-by: dishaprakash <57954147+dishaprakash@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Kanav Bansal <13186335+bansalkanav@users.noreply.github.com>
Co-authored-by: Aleksandr Filippov <71711753+alex-feel@users.noreply.github.com>
Co-authored-by: Alex Feel <afilippov@spotware.com>
2025-07-28 10:39:41 -04:00
Chester Curme
61129557c0
x
2025-07-24 17:17:33 -04:00
Chester Curme
4899857042
start on openai
2025-07-24 17:12:22 -04:00
Chester Curme
0bb7a823c5
x
2025-07-23 15:17:46 -04:00
Chester Curme
7c0d1cb324
openai: fix lint and tests
2025-07-23 09:53:46 -04:00
Chester Curme
78d036a093
Merge branch 'wip-v0.4' into standard_outputs
2025-07-23 09:34:20 -04:00
Chester Curme
e1f034c795
openai: support web search and code interpreter content blocks
2025-07-22 16:58:43 -04:00
ccurme
de13f6ae4f
fix(openai): support acknowledged safety checks in computer use ( #31984 )
2025-07-14 07:33:37 -03:00
Chester Curme
7ab615409c
openai: revert some cassette changes
2025-07-11 14:37:32 -04:00
Chester Curme
ce369125f3
openai: lint
2025-07-11 14:07:47 -04:00
Chester Curme
7546372461
format
2025-07-10 18:22:44 -04:00
Chester Curme
72bb858eec
fix
2025-07-10 18:18:53 -04:00
Chester Curme
8da2bec1c3
enable code interpreter test for v1
2025-07-10 17:54:00 -04:00
Chester Curme
6004ba7a0d
fix streaming annotations
2025-07-10 17:53:36 -04:00
Chester Curme
e928672306
fix image generation
2025-07-10 17:53:30 -04:00
Chester Curme
0d66cc2638
carry over changes
2025-07-10 17:52:50 -04:00
ccurme
612ccf847a
chore: [openai] bump sdk ( #31958 )
2025-07-10 15:53:41 -04:00
Mason Daugherty
6594eb8cc1
docs(xai): update for Grok 4 ( #31953 )
2025-07-10 11:06:37 -04:00
Mason Daugherty
33c9bf1adc
langchain-openai[patch]: Add ruff bandit rules to linter ( #31788 )
2025-06-30 14:01:32 -04:00
Andrew Jaeger
0189c50570
openai[fix]: Correctly set usage metadata for OpenAI Responses API ( #31756 )
2025-06-27 15:35:14 +00:00
ccurme
e8e89b0b82
docs: updates from langchain-openai 0.3.26 ( #31764 )
2025-06-27 11:27:25 -04:00
ccurme
88d5f3edcc
openai[patch]: allow specification of output format for Responses API ( #31686 )
2025-06-26 13:41:43 -04:00
ccurme
84500704ab
openai[patch]: fix bug where function call IDs were not populated ( #31735 )
...
(optional) IDs were getting dropped in some cases.
2025-06-25 19:08:27 +00:00
ccurme
0bf223d6cf
openai[patch]: add attribute to always use previous_response_id ( #31734 )
2025-06-25 19:01:43 +00:00
joshy-deshaw
8a0782c46c
openai[patch]: fix dropping response headers while streaming / Azure ( #31580 )
2025-06-23 17:59:58 -04:00
ccurme
b268ab6a28
openai[patch]: fix client caching when request_timeout is specified via httpx.Timeout ( #31698 )
...
Resolves https://github.com/langchain-ai/langchain/issues/31697
2025-06-23 14:37:49 +00:00
Li-Kuang Chen
4ee6112161
openai[patch]: Improve error message when response type is malformed ( #31619 )
2025-06-21 14:15:21 -04:00
ccurme
e2a0ff07fd
openai[patch]: include 'type' key internally when streaming reasoning blocks ( #31661 )
...
Covered by existing tests.
Will make it easier to process streamed reasoning blocks.
2025-06-18 15:01:54 -04:00
ccurme
6409498f6c
openai[patch]: route to Responses API if relevant attributes are set ( #31645 )
...
Following https://github.com/langchain-ai/langchain/pull/30329.
2025-06-17 16:04:38 -04:00
ccurme
c1c3e13a54
openai[patch]: add Responses API attributes to BaseChatOpenAI ( #30329 )
...
`reasoning`, `include`, `store`, `truncation`.
Previously these had to be added through `model_kwargs`.
2025-06-17 14:45:50 -04:00
ccurme
b610859633
openai[patch]: support Responses streaming in AzureChatOpenAI ( #31641 )
...
Resolves https://github.com/langchain-ai/langchain/issues/31303,
https://github.com/langchain-ai/langchain/issues/31624
2025-06-17 14:41:09 -04:00
ccurme
b9357d456e
openai[patch]: refactor handling of Responses API ( #31587 )
2025-06-16 14:01:39 -04:00
ccurme
0c10ff6418
openai[patch]: handle annotation change in openai==1.82.0 ( #31597 )
...
https://github.com/openai/openai-python/pull/2372/files#diff-91cfd5576e71b4b72da91e04c3a029bab50a72b5f7a2ac8393fca0a06e865fb3
2025-06-12 23:38:41 -04:00
Mohammad Mohtashim
42eb356a44
[OpenAI]: Encoding Model ( #31402 )
...
- **Description:** Small fix: handle a `KeyError` when looking up the
encoder, and use the correct encoder for newer models
- **Issue:** #31390
2025-06-10 16:00:00 -04:00
ccurme
575662d5f1
openai[patch]: accommodate change in image generation API ( #31522 )
...
OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.
As described in https://github.com/langchain-ai/langchain/pull/31424, we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
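The behavior described above could be sketched as follows (hypothetical helper, not the library's actual code):

```python
import warnings

def accept_partial_images(params: dict) -> dict:
    # Accept the `partial_images` parameter OpenAI now requires for
    # streaming image generation, but warn that partial image chunks
    # are still ignored. (Illustrative sketch only.)
    if "partial_images" in params:
        warnings.warn(
            "partial_images is accepted for API compatibility, "
            "but partial images are ignored."
        )
    return params
```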
2025-06-09 14:57:46 -04:00
Bagatur
761f8c3231
openai[patch]: pass through with_structured_output kwargs ( #31518 )
...
Support
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class ResponseSchema(BaseModel):
    response: str


def get_weather(location: str) -> str:
    """Get weather"""
    pass


llm = init_chat_model("openai:gpt-4o-mini")
structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
    tool_choice="required",
    parallel_tool_calls=False,
)
structured_llm.invoke("whats up?")
```
2025-06-06 11:17:34 -04:00
Bagatur
0375848f6c
openai[patch]: update with_structured_outputs docstring ( #31517 )
...
Update docstrings
2025-06-06 10:03:47 -04:00
ccurme
4cc2f6b807
openai[patch]: guard against None text completions in BaseOpenAI ( #31514 )
...
Some chat completions APIs will return null `text` output (even though
this is typed as string).
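A minimal sketch of such a guard (hypothetical helper name, assuming a raw completion-choice dict):

```python
def safe_text(choice: dict) -> str:
    # Some chat completions APIs return null for `text` even though the
    # field is typed as a string; coerce None to "" before downstream use.
    text = choice.get("text")
    return text if text is not None else ""
```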
2025-06-06 09:14:37 -04:00
ccurme
6d6f305748
openai[patch]: clarify docs on api_version in docstring for AzureChatOpenAI ( #31502 )
2025-06-05 16:06:22 +00:00
Eugene Yurtsev
17f34baa88
openai[minor]: add image generation to responses api ( #31424 )
...
Does not support partial images during generation at the moment. Before
doing that I'd like to figure out how to specify the aggregation logic
without requiring changes in core.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-02 10:03:54 -04:00
ccurme
afd349cc95
openai: cache httpx client ( #31260 )
...

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-05-29 14:03:06 -04:00
ccurme
053a1246da
openai[patch]: support built-in code interpreter and remote MCP tools ( #31304 )
2025-05-22 11:47:57 -04:00
ccurme
1b5ffe4107
openai[patch]: run _tokenize in background thread in async embedding invocations ( #31312 )
2025-05-22 10:27:33 -04:00
ccurme
32fcc97a90
openai[patch]: compat with Bedrock Converse ( #31280 )
...
ChatBedrockConverse passes through reasoning content blocks in [Bedrock Converse format](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ContentBlock.html).
Similar to how we handle Anthropic thinking blocks, here we ensure these
are filtered out of OpenAI request payloads.
Resolves https://github.com/langchain-ai/langchain/issues/31279.
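Conceptually, the filter looks something like this (illustrative sketch; the `reasoningContent` block shape is taken from the Bedrock Converse docs, the helper name is hypothetical):

```python
def strip_reasoning_blocks(content: list) -> list:
    # Remove provider-specific reasoning content blocks (e.g. Bedrock
    # Converse `reasoningContent`) so they never reach an OpenAI
    # request payload.
    return [
        block
        for block in content
        if not (isinstance(block, dict) and "reasoningContent" in block)
    ]
```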
2025-05-19 10:35:26 -04:00
ccurme
0b8837a0cc
openai: support runtime kwargs in embeddings ( #31195 )
2025-05-14 09:14:40 -04:00
ccurme
868cfc4a8f
openai: ignore function_calls if tool_calls are present ( #31198 )
...
Some providers include (legacy) function calls in `additional_kwargs` in
addition to tool calls. We currently unpack both function calls and tool
calls if present, but OpenAI will raise 400 in this case.
This can come up if providers are mixed in a tool-calling loop. Example:
```python
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return "It's sunny."


gemini = init_chat_model("google_genai:gemini-2.0-flash-001").bind_tools([get_weather])
openai = init_chat_model("openai:gpt-4.1-mini").bind_tools([get_weather])

input_message = HumanMessage("What's the weather in Boston?")
tool_call_message = gemini.invoke([input_message])
assert len(tool_call_message.tool_calls) == 1
tool_call = tool_call_message.tool_calls[0]
tool_message = get_weather.invoke(tool_call)
response = openai.invoke(  # currently raises 400 / BadRequestError
    [input_message, tool_call_message, tool_message]
)
```
Here we ignore function calls if tool calls are present.
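The fix can be sketched roughly as (hypothetical helper name; the real change lives in the request-payload conversion):

```python
def payload_additional_kwargs(additional_kwargs: dict, tool_calls: list) -> dict:
    # If the message already carries tool calls, drop any legacy
    # `function_call` left in additional_kwargs by another provider,
    # since sending both makes OpenAI return a 400.
    if tool_calls and "function_call" in additional_kwargs:
        return {k: v for k, v in additional_kwargs.items() if k != "function_call"}
    return additional_kwargs
```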
2025-05-12 13:50:56 -04:00
zhurou603
1df3ee91e7
partners: (langchain-openai) total_tokens should not add 'Nonetype' t… ( #31146 )
...
## Description
Fixed an issue in `langchain-openai` where `total_tokens` was
incorrectly adding `None` to an integer, causing a TypeError. The fix
ensures proper type checking before adding token counts.
## Issue
Fixes a TypeError where `'NoneType'` cannot be added to an integer.
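The guard can be sketched as (hypothetical helper; the actual fix lives in the token-usage accounting):

```python
def add_token_counts(left, right):
    # Treat a missing (None) token count as zero so the sum never
    # attempts int + None.
    return (left or 0) + (right or 0)
```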

Co-authored-by: qiulijie <qiulijie@yuaiweiwu.com>
2025-05-07 11:09:50 -04:00