ccurme
4b89483fe6
release(openai): 0.3.28 ( #32015 )
2025-07-14 10:38:11 +00:00
ccurme
de13f6ae4f
fix(openai): support acknowledged safety checks in computer use ( #31984 )
2025-07-14 07:33:37 -03:00
ccurme
612ccf847a
chore: [openai] bump sdk ( #31958 )
2025-07-10 15:53:41 -04:00
Mason Daugherty
6594eb8cc1
docs(xai): update for Grok 4 ( #31953 )
2025-07-10 11:06:37 -04:00
Mason Daugherty
e7eac27241
ruff: more rules across the board & fixes ( #31898 )
...
* standardizes ruff dep version across all `pyproject.toml` files
* cli: ruff rules and corrections
* langchain: rules and corrections
2025-07-07 17:48:01 -04:00
Mason Daugherty
706a66eccd
fix: automatically fix issues with ruff ( #31897 )
...
* Perform safe automatic fixes instead of only selecting
[isort](https://docs.astral.sh/ruff/rules/#isort-i)
2025-07-07 14:13:10 -04:00
Mason Daugherty
33c9bf1adc
langchain-openai[patch]: Add ruff bandit rules to linter ( #31788 )
2025-06-30 14:01:32 -04:00
ccurme
9f17fabc43
openai: release 0.3.27 ( #31769 )
...
To pick up https://github.com/langchain-ai/langchain/pull/31756 .
2025-06-27 13:44:45 -04:00
Andrew Jaeger
0189c50570
openai[fix]: Correctly set usage metadata for OpenAI Responses API ( #31756 )
2025-06-27 15:35:14 +00:00
ccurme
e8e89b0b82
docs: updates from langchain-openai 0.3.26 ( #31764 )
2025-06-27 11:27:25 -04:00
ccurme
ea1345a58b
openai[patch]: update cassette ( #31752 )
...
Following changes in `openai==1.92`.
2025-06-26 14:52:12 -04:00
ccurme
066be383e3
openai[patch]: update test following release of openai 1.92 ( #31751 )
...
Added new required fields for `ResponseFunctionWebSearch`
2025-06-26 18:22:58 +00:00
ccurme
61feaa4656
openai: release 0.3.26 ( #31749 )
2025-06-26 13:51:51 -04:00
ccurme
88d5f3edcc
openai[patch]: allow specification of output format for Responses API ( #31686 )
2025-06-26 13:41:43 -04:00
ccurme
84500704ab
openai[patch]: fix bug where function call IDs were not populated ( #31735 )
...
(optional) IDs were getting dropped in some cases.
2025-06-25 19:08:27 +00:00
ccurme
0bf223d6cf
openai[patch]: add attribute to always use previous_response_id ( #31734 )
2025-06-25 19:01:43 +00:00
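A minimal sketch of the new flag; the attribute name `use_previous_response_id` is inferred from the commit title and should be treated as an assumption:
```python
from langchain_openai import ChatOpenAI

# Sketch only: `use_previous_response_id` is assumed from the commit title.
# When enabled, each follow-up call passes the prior response's ID as
# `previous_response_id` so the Responses API can carry conversation state.
llm = ChatOpenAI(
    model="gpt-4o-mini",
    use_responses_api=True,
    use_previous_response_id=True,
)
```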
joshy-deshaw
8a0782c46c
openai[patch]: fix dropping response headers while streaming / Azure ( #31580 )
2025-06-23 17:59:58 -04:00
ccurme
643741497a
openai: release 0.3.25 ( #31702 )
2025-06-23 10:55:48 -04:00
ccurme
b268ab6a28
openai[patch]: fix client caching when request_timeout is specified via httpx.Timeout ( #31698 )
...
Resolves https://github.com/langchain-ai/langchain/issues/31697
2025-06-23 14:37:49 +00:00
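The scenario this fixes, roughly (a sketch, not code from the PR):
```python
import httpx
from langchain_openai import ChatOpenAI

# Passing a granular httpx.Timeout as the request timeout; before this fix,
# client caching could fail when `request_timeout` was an httpx.Timeout.
llm = ChatOpenAI(
    model="gpt-4o-mini",
    request_timeout=httpx.Timeout(connect=5.0, read=60.0, write=10.0, pool=5.0),
)
```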
Li-Kuang Chen
4ee6112161
openai[patch]: Improve error message when response type is malformed ( #31619 )
2025-06-21 14:15:21 -04:00
ccurme
e2a0ff07fd
openai[patch]: include 'type' key internally when streaming reasoning blocks ( #31661 )
...
Covered by existing tests.
Will make it easier to process streamed reasoning blocks.
2025-06-18 15:01:54 -04:00
ccurme
6409498f6c
openai[patch]: route to Responses API if relevant attributes are set ( #31645 )
...
Following https://github.com/langchain-ai/langchain/pull/30329 .
2025-06-17 16:04:38 -04:00
ccurme
3044bd37a9
openai: release 0.3.24 ( #31642 )
2025-06-17 15:06:52 -04:00
ccurme
c1c3e13a54
openai[patch]: add Responses API attributes to BaseChatOpenAI ( #30329 )
...
`reasoning`, `include`, `store`, `truncation`.
Previously these had to be added through `model_kwargs`.
2025-06-17 14:45:50 -04:00
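A rough sketch of setting these attributes directly on the model (values are illustrative only):
```python
from langchain_openai import ChatOpenAI

# The four attributes from this PR, set directly instead of via model_kwargs.
llm = ChatOpenAI(
    model="o4-mini",
    reasoning={"effort": "medium", "summary": "auto"},
    include=["reasoning.encrypted_content"],
    store=False,
    truncation="auto",
)
```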
ccurme
b610859633
openai[patch]: support Responses streaming in AzureChatOpenAI ( #31641 )
...
Resolves https://github.com/langchain-ai/langchain/issues/31303 ,
https://github.com/langchain-ai/langchain/issues/31624
2025-06-17 14:41:09 -04:00
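Illustrative sketch only; the endpoint, deployment, and API version below are placeholders:
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder
    azure_deployment="my-gpt-4o-deployment",                # placeholder
    api_version="2025-03-01-preview",                       # placeholder
    use_responses_api=True,
)

# Streaming through the Responses API now also works on Azure.
for chunk in llm.stream("Stream a short sentence."):
    print(chunk.content, end="", flush=True)  # content may be a string or content blocks
```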
ccurme
b9357d456e
openai[patch]: refactor handling of Responses API ( #31587 )
2025-06-16 14:01:39 -04:00
ccurme
5839801897
openai: release 0.3.23 ( #31604 )
2025-06-13 14:02:38 +00:00
ccurme
0c10ff6418
openai[patch]: handle annotation change in openai==1.82.0 ( #31597 )
...
https://github.com/openai/openai-python/pull/2372/files#diff-91cfd5576e71b4b72da91e04c3a029bab50a72b5f7a2ac8393fca0a06e865fb3
2025-06-12 23:38:41 -04:00
Mohammad Mohtashim
42eb356a44
[OpenAI]: Encoding Model ( #31402 )
...
- **Description:** Small fix to handle a `KeyError` when looking up the
encoder and to use the correct encoder for newer models
- **Issue:** #31390
2025-06-10 16:00:00 -04:00
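Roughly, the token-counting path this touches (a sketch, not code from the PR):
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini")

# Token counting goes through tiktoken. After this fix, a model name that
# tiktoken does not recognize falls back to a default encoding instead of
# raising KeyError, and newer models map to their correct encoder.
num_tokens = llm.get_num_tokens("How many tokens does this sentence use?")
```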
ccurme
71b0f78952
openai: release 0.3.22 ( #31542 )
2025-06-09 15:29:15 -04:00
ccurme
575662d5f1
openai[patch]: accommodate change in image generation API ( #31522 )
...
OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.
As described in https://github.com/langchain-ai/langchain/pull/31424 , we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
2025-06-09 14:57:46 -04:00
ccurme
ece9e31a7a
openai[patch]: VCR some tests ( #31524 )
2025-06-06 23:00:57 +00:00
Bagatur
5187817006
openai[release]: 0.3.21 ( #31519 )
2025-06-06 11:40:09 -04:00
Bagatur
761f8c3231
openai[patch]: pass through with_structured_output kwargs ( #31518 )
...
Support
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class ResponseSchema(BaseModel):
    response: str


def get_weather(location: str) -> str:
    """Get weather"""
    pass


llm = init_chat_model("openai:gpt-4o-mini")

structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
    tool_choice="required",
    parallel_tool_calls=False,
)

structured_llm.invoke("whats up?")
```
2025-06-06 11:17:34 -04:00
Bagatur
0375848f6c
openai[patch]: update with_structured_outputs docstring ( #31517 )
...
Update docstrings
2025-06-06 10:03:47 -04:00
ccurme
a1f068eb85
openai: release 0.3.20 ( #31515 )
2025-06-06 13:29:12 +00:00
ccurme
4cc2f6b807
openai[patch]: guard against None text completions in BaseOpenAI ( #31514 )
...
Some chat completions APIs will return null `text` output (even though
this is typed as string).
2025-06-06 09:14:37 -04:00
ccurme
6d6f305748
openai[patch]: clarify docs on api_version in docstring for AzureChatOpenAI ( #31502 )
2025-06-05 16:06:22 +00:00
Eugene Yurtsev
6cb3ea514a
openai: release 0.3.19 ( #31466 )
...
Release 0.3.19
2025-06-02 12:44:49 -04:00
Eugene Yurtsev
17f34baa88
openai[minor]: add image generation to responses api ( #31424 )
...
Does not support partial images during generation at the moment. Before
doing that I'd like to figure out how to specify the aggregation logic
without requiring changes in core.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-02 10:03:54 -04:00
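A sketch of binding the built-in image-generation tool; the tool dict follows OpenAI's Responses API schema and the option values are illustrative:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)

# Built-in tool dicts are passed through to the Responses API.
llm_with_image_gen = llm.bind_tools(
    [{"type": "image_generation", "quality": "low", "size": "1024x1024"}]
)

response = llm_with_image_gen.invoke("Draw a small red fox.")
# The generated image comes back as a base64 payload among the output items;
# per the commit message, partial images during generation are ignored.
```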
ccurme
d3be4a0c56
infra: remove use of --vcr-record=none ( #31452 )
...
This option is specific to `pytest-vcr`. `pytest-recording` runs in this
mode by default.
2025-06-01 10:49:59 -04:00
ccurme
3db1aa0ba6
standard-tests: migrate to pytest-recording ( #31425 )
...
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-05-31 15:21:15 -04:00
ccurme
c8951ca124
infra: drop azure from streaming benchmarks ( #31421 )
...
Covered by BaseChatOpenAI
2025-05-29 15:06:12 -04:00
ccurme
afd349cc95
openai: cache httpx client ( #31260 )
...

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-05-29 14:03:06 -04:00
ccurme
49eeb0f3c3
standard-tests: add benchmarks ( #31302 )
...
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
2025-05-29 15:21:37 +00:00
ccurme
ab8b4003be
openai[patch]: add test case for code interpreter ( #31383 )
2025-05-27 19:11:31 +00:00
ccurme
0ce2e69cc1
openai: release 0.3.18 ( #31320 )
2025-05-22 12:53:53 -04:00
ccurme
851fd438cf
openai[patch]: relax Azure llm streaming callback test ( #31319 )
...
Effectively reverts
https://github.com/langchain-ai/langchain/pull/29302 , but checks that
counts are "less than" an expected count rather than equal to it.
2025-05-22 16:14:53 +00:00
ccurme
053a1246da
openai[patch]: support built-in code interpreter and remote MCP tools ( #31304 )
2025-05-22 11:47:57 -04:00
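A sketch of binding both built-in tools; the dict shapes follow OpenAI's Responses API, and the MCP server label/URL are placeholders:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="o4-mini", use_responses_api=True)

# Built-in tool dicts are passed straight through to the Responses API.
llm_with_tools = llm.bind_tools(
    [
        {"type": "code_interpreter", "container": {"type": "auto"}},
        {
            "type": "mcp",
            "server_label": "deepwiki",                    # placeholder
            "server_url": "https://mcp.deepwiki.com/mcp",  # placeholder
            "require_approval": "never",
        },
    ]
)

response = llm_with_tools.invoke("Compute 37 factorial with Python.")
```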
ccurme
1b5ffe4107
openai[patch]: run _tokenize in background thread in async embedding invocations ( #31312 )
2025-05-22 10:27:33 -04:00