From what I can tell, the response from the SDK is not deterministic:
```python
import numpy as np
import openai

documents = ["disallowed special token '<|endoftext|>'"]
model = "text-embedding-ada-002"

direct_output_1 = (
    openai.OpenAI()
    .embeddings.create(input=documents, model=model)
    .data[0]
    .embedding
)

for i in range(10):
    direct_output_2 = (
        openai.OpenAI()
        .embeddings.create(input=documents, model=model)
        .data[0]
        .embedding
    )
    print(f"{i}: {np.isclose(direct_output_1, direct_output_2).all()}")
```
```
0: True
1: True
2: True
3: True
4: False
5: True
6: True
7: True
8: True
9: True
```
See related discussion here:
https://community.openai.com/t/can-text-embedding-ada-002-be-made-deterministic/318054
I found the same result using `"text-embedding-3-small"`.
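For what it's worth, one way to make such comparisons robust to this jitter is to check a similarity tolerance rather than exact element-wise equality. A minimal sketch (the `embed` helper and the 0.999 threshold are illustrative assumptions, not part of this PR):
```python
import numpy as np
import openai


def embed(text: str, model: str = "text-embedding-ada-002") -> np.ndarray:
    """Fetch a single embedding and return it as a NumPy array."""
    client = openai.OpenAI()
    return np.array(
        client.embeddings.create(input=[text], model=model).data[0].embedding
    )


# Compare two runs by cosine similarity instead of exact equality;
# the 0.999 threshold is arbitrary and only for illustration.
a = embed("disallowed special token '<|endoftext|>'")
b = embed("disallowed special token '<|endoftext|>'")
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.6f}, close enough: {cosine > 0.999}")
```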
Chunking of the input array, controlled by `self.chunk_size`, is being
ignored when `self.check_embedding_ctx_length` is disabled. Effectively,
the chunk size is assumed to be equal to 1 in that case, which is
surprising.
This PR takes the user-supplied `self.chunk_size` into account.
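Roughly, the intended behaviour is to batch the raw input by `chunk_size` instead of sending one text per request. A minimal standalone sketch of that loop (illustrative only, not the actual implementation in this PR; the function name and direct client usage are assumptions):
```python
from typing import List

import openai


def embed_in_chunks(
    texts: List[str],
    chunk_size: int,
    model: str = "text-embedding-ada-002",
) -> List[List[float]]:
    """Embed `texts` in batches of `chunk_size` per request, rather than
    issuing one request per text."""
    client = openai.OpenAI()
    embeddings: List[List[float]] = []
    for start in range(0, len(texts), chunk_size):
        batch = texts[start : start + chunk_size]
        response = client.embeddings.create(input=batch, model=model)
        embeddings.extend(item.embedding for item in response.data)
    return embeddings
```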
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Todo
- [x] copy over integration tests
- [x] update docs with new instructions in #15513
- [x] add Linear ticket to bump core -> community, community -> langchain,
and core -> openai deps
- [ ] (optional): add `pip install langchain-openai` command to each
notebook using it
- [x] Update docstrings to not need `openai` install
- [x] Add serialization
- [x] deprecate old models
Contributor steps:
- [x] Add secret names to manual integrations workflow in
.github/workflows/_integration_test.yml
- [x] Add secrets to release workflow (for pre-release testing) in
.github/workflows/_release.yml
Maintainer steps (Contributors should not do these):
- [x] set up pypi and test pypi projects
- [x] add credential secrets to Github Actions
- [ ] add package to conda-forge
Functional changes to existing classes:
- now relies on the openai v1 client (1.6.1) via a concrete dependency in
the langchain-openai package
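For example, the new package exposes the OpenAI integrations directly on top of the v1 client (a minimal usage sketch; the model names are placeholders, not recommendations):
```python
# pip install langchain-openai
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Both classes use the openai>=1.x client under the hood.
llm = ChatOpenAI(model="gpt-3.5-turbo")
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

print(llm.invoke("Say hello in one word").content)
print(len(embeddings.embed_query("hello world")))
```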
Codebase organization:
- some function-calling utilities moved to
`langchain_core.utils.function_calling` so they can be used by both
langchain-community and langchain-openai
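For instance, the shared helpers can now be imported from core by either package (a small sketch; the `GetWeather` Pydantic model is just an illustration):
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_function


class GetWeather(BaseModel):
    """Get the current weather for a city."""

    city: str = Field(description="City name, e.g. 'Paris'")


# Produces the OpenAI function-calling schema, usable from either
# langchain-community or langchain-openai.
print(convert_to_openai_function(GetWeather))
```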