Commit Graph

254 Commits

Author SHA1 Message Date
Bagatur
c3ccd93c12
patch openai json mode test (#28831) 2024-12-19 21:43:32 +00:00
Bagatur
ce6748dbfe
xfail openai image token count test (#28828) 2024-12-19 21:23:30 +00:00
Bagatur
a7f2148061
openai[patch]: Release 0.2.14 (#28826) 2024-12-19 11:56:44 -08:00
Bagatur
1378ddfa5f
openai[patch]: type reasoning_effort (#28825) 2024-12-19 19:36:49 +00:00
Bagatur
68940dd0d6
openai[patch]: Release 0.2.13 (#28800) 2024-12-18 22:08:47 +00:00
Bagatur
4a531437bb
core[patch], openai[patch]: Handle OpenAI developer msg (#28794)
- Convert developer OpenAI messages to SystemMessage
- Store additional_kwargs={"__openai_role__": "developer"} so that the correct role can be reconstructed if needed
- Update ChatOpenAI to read the stored `__openai_role__` back in

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-12-18 21:54:07 +00:00
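A minimal sketch of the round-trip described in the entry above, assuming the langchain_core message classes; the `to_openai_role` helper is hypothetical and not ChatOpenAI's actual internals.

```python
from langchain_core.messages import SystemMessage

# An OpenAI "developer" message is stored as a SystemMessage that remembers
# its original role in additional_kwargs.
msg = SystemMessage(
    content="Always answer in French.",
    additional_kwargs={"__openai_role__": "developer"},
)

def to_openai_role(message: SystemMessage) -> str:
    # Hypothetical helper: reconstruct the original OpenAI role, defaulting to "system".
    return message.additional_kwargs.get("__openai_role__", "system")

print(to_openai_role(msg))  # -> "developer"
```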
Bagatur
e4d3ccf62f
json mode standard test (#25497)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-12-17 18:47:34 +00:00
ccurme
ffb5c1905a
openai[patch]: release 0.2.12 (#28633) 2024-12-09 12:38:13 -05:00
ccurme
6e6061fe73
openai[patch]: bump minimum SDK version (#28632)
Resolves https://github.com/langchain-ai/langchain/issues/28625
2024-12-09 11:28:05 -05:00
Erick Friis
0eb7ab65f1
multiple: fix xfailed signatures (#28597) 2024-12-06 15:39:47 -08:00
ccurme
b8e861a63b
openai[patch]: add standard tests for embeddings (#28540) 2024-12-05 17:00:27 +00:00
Erick Friis
7315360907
openai: dont populate logit_bias if None (#28482) 2024-12-03 17:54:53 +00:00
Erick Friis
42d40d694b
partners/openai: release 0.2.11 (#28461) 2024-12-02 23:35:18 +00:00
Erick Friis
9f04416768
openai: set logit_bias to none instead of empty dict by default (#28460) 2024-12-02 15:30:32 -08:00
Bagatur
e9c16552fa
openai[patch]: bump core dep (#28361) 2024-11-26 08:37:05 -08:00
Bagatur
e7dc26aefb
openai[patch]: Release 0.2.10 (#28360) 2024-11-26 08:30:29 -08:00
ccurme
42b18824c2
openai[patch]: use max_completion_tokens in place of max_tokens (#26917)
`max_tokens` is deprecated:
https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-11-26 16:30:19 +00:00
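A hedged sketch of what this change means at the request level, shown with the OpenAI SDK directly; the model name is illustrative.

```python
from openai import OpenAI

client = OpenAI()
# `max_completion_tokens` is used in place of the deprecated `max_tokens`.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model
    messages=[{"role": "user", "content": "Say hi"}],
    max_completion_tokens=32,
)
print(resp.choices[0].message.content)
```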
Erick Friis
29f8a79ebe
groq,openai,mistralai: fix unit tests (#28279) 2024-11-22 04:54:01 +00:00
ccurme
56499cf58b
openai[patch]: unskip test and relax tolerance in embeddings comparison (#28262)
From what I can tell, the response from the SDK is not deterministic:
```python
import numpy as np
import openai

documents = ["disallowed special token '<|endoftext|>'"]
model = "text-embedding-ada-002"

# Embed the same input once up front...
direct_output_1 = (
    openai.OpenAI()
    .embeddings.create(input=documents, model=model)
    .data[0]
    .embedding
)

# ...then repeatedly, comparing each new embedding against the first.
for i in range(10):
    direct_output_2 = (
        openai.OpenAI()
        .embeddings.create(input=documents, model=model)
        .data[0]
        .embedding
    )
    print(f"{i}: {np.isclose(direct_output_1, direct_output_2).all()}")
```
```
0: True
1: True
2: True
3: True
4: False
5: True
6: True
7: True
8: True
9: True
```

See related discussion here:
https://community.openai.com/t/can-text-embedding-ada-002-be-made-deterministic/318054

Found the same result using `"text-embedding-3-small"`.
2024-11-21 10:23:10 -08:00
Erick Friis
d1108607f4
multiple: push deprecation removals to 1.0 (#28236) 2024-11-20 19:56:29 -08:00
Erick Friis
0dbaf05bb7
standard-tests: rename langchain_standard_tests to langchain_tests, release 0.3.2 (#28203) 2024-11-18 19:10:39 -08:00
Erick Friis
d9d689572a
openai: release 0.2.9, o1 streaming (#28197) 2024-11-18 23:54:38 +00:00
Erick Friis
6d2004ee7d
multiple: langchain-standard-tests -> langchain-tests (#28139) 2024-11-15 11:32:04 -08:00
ccurme
5eaa0e8c45
openai[patch]: release 0.2.8 (#28062) 2024-11-12 14:57:11 -05:00
ccurme
1538ee17f9
anthropic[major]: support python 3.13 (#27916)
Last week Anthropic released version 0.39.0 of its python sdk, which
enabled support for Python 3.13. This release deleted a legacy
`client.count_tokens` method, which we currently access during init of
the `Anthropic` LLM. Anthropic has replaced this functionality with the
[client.beta.messages.count_tokens()
API](https://github.com/anthropics/anthropic-sdk-python/pull/726).

To enable support for `anthropic >= 0.39.0` and Python 3.13, here we
drop support for the legacy token counting method, and add support for
the new method via `ChatAnthropic.get_num_tokens_from_messages`.

To fully support the token counting API, we update the signature of
`get_num_tokens_from_messages` to accept tools everywhere.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-11-12 14:31:07 -05:00
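A minimal sketch of the newer token-counting API mentioned above, assuming `anthropic >= 0.39.0`; the model name is illustrative.

```python
import anthropic

client = anthropic.Anthropic()
# The beta messages API replaces the deleted legacy client.count_tokens method.
count = client.beta.messages.count_tokens(
    model="claude-3-5-sonnet-20241022",  # illustrative model
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(count.input_tokens)
```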
Bagatur
139881b108
openai[patch]: fix azure oai stream check (#28048) 2024-11-12 15:42:06 +00:00
Bagatur
9611f0b55d
openai[patch]: Release 0.2.7 (#28047) 2024-11-12 15:16:15 +00:00
Bagatur
33dbfba08b
openai[patch]: default to invoke on o1 stream() (#27983) 2024-11-08 19:12:59 -08:00
ccurme
66966a6e72
openai[patch]: release 0.2.6 (#27924)
Some additions in support of the [predicted
outputs](https://platform.openai.com/docs/guides/latency-optimization#use-predicted-outputs)
feature:
- Bump openai sdk version
- Add integration test
- Add example to integration docs

The `prediction` kwarg is already plumbed through model invocation.
2024-11-05 23:02:24 +00:00
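A hedged sketch of passing the `prediction` kwarg through model invocation, per the entry above; the payload shape follows the linked OpenAI docs and the model name is illustrative.

```python
from langchain_openai import ChatOpenAI

code = "def add(a, b):\n    return a + b\n"
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model
# The prediction kwarg is forwarded to the chat completions request.
msg = llm.invoke(
    "Rename the function to `sum_two` and return the full file.",
    prediction={"type": "content", "content": code},
)
print(msg.content)
```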
Bagatur
06420de2e7
integrations[patch]: bump core to 0.3.15 (#27805) 2024-10-31 11:27:05 -07:00
yahya-mouman
6803cb4f34
openai[patch]: add check for none values when summing token usage (#27585)
**Description:** Fixes `None` addition errors when an empty token usage value
is passed.

2024-10-28 12:49:43 -07:00
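A hedged sketch of None-safe token-usage summation in the spirit of this fix; the helper below is illustrative, not the library's implementation.

```python
def add_token_counts(left: int | None, right: int | None) -> int:
    # Treat missing counts as zero instead of raising on None + int.
    return (left or 0) + (right or 0)

usage_a = {"prompt_tokens": 12, "completion_tokens": None}
usage_b = {"prompt_tokens": None, "completion_tokens": 7}
total = {
    key: add_token_counts(usage_a.get(key), usage_b.get(key))
    for key in ("prompt_tokens", "completion_tokens")
}
print(total)  # {'prompt_tokens': 12, 'completion_tokens': 7}
```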
Bagatur
ede953d617
openai[patch]: fix schema formatting util (#27685) 2024-10-28 15:46:47 +00:00
Bagatur
d5306899d3
openai[patch]: Release 0.2.4 (#27652) 2024-10-25 20:26:21 +00:00
Bagatur
655ced84d7
openai[patch]: accept json schema response format directly (#27623)
fix #25460

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-10-24 18:19:15 +00:00
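A hedged sketch of passing a JSON schema directly as the response format, per the entry above; the exact accepted shape may differ, and the model name is illustrative.

```python
from langchain_openai import ChatOpenAI

schema = {
    "name": "person",
    "schema": {
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    },
}

# Bind an OpenAI-style json_schema response format to the model.
llm = ChatOpenAI(model="gpt-4o-mini").bind(
    response_format={"type": "json_schema", "json_schema": schema}
)
print(llm.invoke("Extract: Alice is 30").content)
```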
Fernando de Oliveira
ab205e7389
partners/openai + community: Async Azure AD token provider support for Azure OpenAI (#27488)
This PR introduces a new `azure_ad_async_token_provider` attribute to
the `AzureOpenAI` and `AzureChatOpenAI` classes in the `partners/openai` and
`community` packages, since the `openai` package already supports it as the
[AsyncAzureADTokenProvider](https://github.com/openai/openai-python/blob/main/src/openai/lib/azure.py#L33)
type.

The reason for creating a new attribute is to avoid breaking changes.
Say you have existing code that uses an `AzureOpenAI` or
`AzureChatOpenAI` instance to perform both sync and async operations.
The `azure_ad_token_provider` will keep working exactly as it does today, while
`azure_ad_async_token_provider` will override it for async requests.


2024-10-22 21:43:06 +00:00
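A hedged sketch of configuring both providers described above; the token-provider callables are placeholders standing in for real Azure AD credential helpers, and the endpoint/deployment values are illustrative.

```python
from langchain_openai import AzureChatOpenAI

def sync_token_provider() -> str:
    return "<aad-token-from-sync-credential>"  # placeholder token

async def async_token_provider() -> str:
    return "<aad-token-from-async-credential>"  # placeholder token

llm = AzureChatOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com/",  # illustrative
    azure_deployment="gpt-4o",
    api_version="2024-08-01-preview",
    azure_ad_token_provider=sync_token_provider,          # used for sync requests
    azure_ad_async_token_provider=async_token_provider,   # overrides it for async requests
)
```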
Erick Friis
2cf2cefe39
partners/openai: release 0.2.3 (#27457) 2024-10-18 08:16:01 -07:00
Erick Friis
7d65a32ee0
openai: audio modality, remove sockets from unit tests (#27436) 2024-10-18 08:02:09 -07:00
Erick Friis
92ae61bcc8
multiple: rely on asyncio_mode auto in tests (#27200) 2024-10-15 16:26:38 +00:00
Bagatur
ce33c4fa40
openai[patch]: default temp=1 for o1 (#27206) 2024-10-08 15:45:21 -07:00
Bagatur
bd5b335cb4
standard-tests[patch]: fix oai usage metadata test (#27122) 2024-10-04 20:00:48 +00:00
Bagatur
98942edcc9
openai[patch]: Release 0.2.2 (#27119) 2024-10-04 11:54:01 -07:00
Bagatur
4935a14314
core,integrations[minor]: Dont error on fields in model_kwargs (#27110)
With the current erroring behavior, every time we've moved a kwarg out of
model_kwargs and made it its own field, that was a breaking change. This
updates the behavior to support the old instantiations / serializations.

This assumes build_extra_kwargs is not itself being used externally and
does not need to be kept backwards compatible.
2024-10-04 11:30:27 -07:00
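A hedged sketch of the instantiation this change tolerates; the exact warning/relocation behavior may differ.

```python
from langchain_openai import ChatOpenAI

# Previously this raised because `temperature` is a dedicated field; after the
# change, old instantiations/serializations like this are accepted.
llm = ChatOpenAI(model="gpt-4o-mini", model_kwargs={"temperature": 0.2})
```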
Erick Friis
e8e5d67a8d
openai: fix None token detail (#27091)
This happens in Azure.
2024-10-04 01:25:38 +00:00
Bagatur
c09da53978
openai[patch]: add usage metadata details (#27080) 2024-10-03 14:01:03 -07:00
Bagatur
9404e7af9d
openai[patch]: exclude http client (#26891)
httpx clients aren't serializable
2024-09-29 11:16:27 -07:00
ccurme
39987ebd91
openai[patch]: update deprecation target in API ref (#26921) 2024-09-27 08:42:31 -04:00
ccurme
7091a1a798
openai[patch]: increase token limit in azure integration tests (#26901)
`test_json_mode` occasionally runs into this limit.
2024-09-26 14:31:33 +00:00
Bagatur
eaffa92c1d
openai[patch]: Release 0.2.1 (#26858) 2024-09-25 15:55:49 +00:00
ccurme
2a4c5713cd
openai[patch]: fix azure integration tests (#26791) 2024-09-23 17:49:15 -04:00
Bagatur
e1e4f88b3e
openai[patch]: enable Azure structured output, parallel_tool_calls=False, tool_choice=required (#26599)

response_format=json_schema, tool_choice=required, parallel_tool_calls
are all supported for gpt-4o on azure.
2024-09-22 22:25:22 -07:00