Commit Graph

13578 Commits

Author SHA1 Message Date
Li-Kuang Chen
4ee6112161
openai[patch]: Improve error message when response type is malformed (#31619) 2025-06-21 14:15:21 -04:00
dennism-tulcolabs
9de4f22205
langchain[patch]: smith.evaluation.progress.ProgressBarCallback: Make output after progress bar ends configurable (#31583) 2025-06-20 19:24:35 -04:00
Mikhail
6105a5841b
core: fix get_buffer_string output for structured message content (#31600) 2025-06-20 23:21:50 +00:00
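A minimal sketch of the behavior this fix targets, assuming a message whose content is a list of content blocks; the exact string rendering depends on the langchain-core version:
```python
from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string

messages = [
    HumanMessage(content="Describe the image."),
    # Structured (list-of-blocks) content rather than a plain string:
    AIMessage(content=[{"type": "text", "text": "It shows a cat."}]),
]

# With the fix, the text inside the content blocks should appear in the buffer
# string instead of the raw list representation.
print(get_buffer_string(messages))
```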
ccurme
cf5a442e4c
langchain: release 0.3.26 (#31695) 2025-06-20 18:19:47 -04:00
ccurme
5015188530
Revert "infra: temporarily drop OpenAI from core release test matrix" (#31694)
Reverts langchain-ai/langchain#31693
2025-06-20 22:12:38 +00:00
ccurme
26030abb70
infra: temporarily drop OpenAI from core release test matrix (#31693)
As part of core releases we run tests on the last released version of
some packages (including langchain-openai) using the new version of
langchain-core. We run langchain-openai's test suite as it was when it
was last released.

Our test for computer use started raising a 500 error at some point
today (the test passed as part of the scheduled test job this
morning):
> InternalServerError: Error code: 500 - {'error': {'message': 'An error
occurred while processing your request. You can retry your request, or
contact us through our help center at help.openai.com if the error
persists.

Will revert this change after we release langchain-core.
2025-06-20 21:58:39 +00:00
Bagatur
5271fd76f1
core[patch]: check before removing tags (#31691) 2025-06-20 17:46:50 -04:00
ccurme
39a8a1121a
core: release 0.3.66 (#31690) 2025-06-20 17:45:03 -04:00
97tkddnjs
4fe490c0ea
[Docs] Update deprecated Pydantic .schema() method to .model_json_schema() in How to convert Runnables to Tools guide (#31618) 2025-06-20 20:43:59 +00:00
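For reference, the Pydantic v2 replacement looks like the following (a generic sketch; the guide applies it to a tool's input schema):
```python
from pydantic import BaseModel

class AddInput(BaseModel):
    a: int
    b: int

# Deprecated Pydantic v1-style call (emits a deprecation warning on v2):
# AddInput.schema()

# Pydantic v2 equivalent:
print(AddInput.model_json_schema())
```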
Raghu Kapur
2c9859956a
text-splitters: fix stale header metadata in ExperimentalMarkdownSyntaxTextSplitter (#31622)
**Description:**

Previously, when transitioning from a deeper Markdown header (e.g., ###)
to a shallower one (e.g., ##), the
ExperimentalMarkdownSyntaxTextSplitter retained the deeper header in the
metadata.

This commit updates the `_resolve_header_stack` method to remove headers
at the same or deeper levels before appending the current header. As a
result, each chunk now reflects only the active header context.

Fixes unexpected metadata leakage across sections in nested Markdown
documents.

Additionally, test cases have been updated to:
- Validate correct header resolution and metadata assignment.
- Cover edge cases with nested headers and horizontal rules.

**Issue:** 
Fixes [#31596](https://github.com/langchain-ai/langchain/issues/31596)

**Dependencies:**
None

**Twitter handle:** -> [_RaghuKapur](https://twitter.com/_RaghuKapur)

**LinkedIn:** ->
[https://www.linkedin.com/in/raghukapur/](https://www.linkedin.com/in/raghukapur/)
2025-06-20 15:52:17 -04:00
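A small sketch of the scenario the fix addresses (header names and the sample Markdown are illustrative): moving from a deeper header back to a shallower one should drop the stale deeper-header metadata.
```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

markdown = """# Title
## Section A
### Subsection A.1
Deep content.
## Section B
Shallow content.
"""

splitter = ExperimentalMarkdownSyntaxTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3")]
)
for doc in splitter.split_text(markdown):
    # After the fix, the "Section B" chunk should no longer carry "Header 3".
    print(doc.metadata, "->", doc.page_content.strip()[:30])
```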
ZhangShenao
9d4d258162
[Doc] Improve api doc for DeepSeek (#31655)
- Add param in api doc
- Fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-20 19:47:54 +00:00
Saran Connolly
22e6d90937
langchain_mistralai: Include finish_reason in response metadata when parsing MistralAI chunks to AIMessageChunk (#31667)
## Description
- When parsing MistralAI chunk dicts into LangChain `AIMessageChunk`
objects via the `_convert_chunk_to_message_chunk` utility function, the
`finish_reason` was not being included in `response_metadata` as it is
for other providers.
- This PR adds a one-liner fix to include the finish reason.

- fixes: https://github.com/langchain-ai/langchain/issues/31666
2025-06-20 15:41:20 -04:00
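A hedged sketch of how the fix can be observed when streaming (requires a MISTRAL_API_KEY; the finish reason is expected on the final chunk's response_metadata):
```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")

last_chunk = None
for chunk in llm.stream("Say hello in one word."):
    last_chunk = chunk

# With the fix, the provider's finish_reason should be present here,
# as it already is for other chat model integrations.
if last_chunk is not None:
    print(last_chunk.response_metadata.get("finish_reason"))
```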
Mohammad Mohtashim
7ff405077d
core[patch]: Returning always 2D Array for _cosine_similarity (#31528)
- **Description:** Small change so that `_cosine_similarity` always
returns a 2D array.
- **Issue:** #31497
2025-06-20 11:25:02 -04:00
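The shape contract the patch enforces, illustrated with plain NumPy (the langchain-core helper itself is private, so this is only an approximation of its behavior):
```python
import numpy as np

def cosine_similarity_2d(x, y):
    """Return an (n_x, n_y) similarity matrix, even when inputs are single vectors."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return x @ y.T

print(cosine_similarity_2d([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]).shape)  # (1, 2)
```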
Eugene Yurtsev
2842e0c8c1
core[patch]: Add doc-strings to tools/base.py (#31684)
Add doc-strings
2025-06-20 11:16:57 -04:00
Tony Gravagno
5d0bea8378
docs: OPENAI_API_KEY typo in google_serper.ipynb (#31665)
Simple typo fix.
2025-06-20 11:02:13 -04:00
Christophe Bornet
7e046ea848
core: Cleanup Pydantic models and handle deprecation warnings (#30799)
* Simplified Pydantic handling since Pydantic v1 is not supported
anymore.
* Replace use of deprecated v1 methods by corresponding v2 methods.
* Remove use of other deprecated methods.
* Activate mypy errors on deprecated methods use.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-20 10:42:52 -04:00
ccurme
29e17fbd6b
docs: use langchain-tavily (#31663)
Commandeering https://github.com/langchain-ai/langchain/pull/31640

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: pulvedu <dusty.pulver28@gmail.com>
2025-06-18 16:06:26 -04:00
ccurme
e2a0ff07fd
openai[patch]: include 'type' key internally when streaming reasoning blocks (#31661)
Covered by existing tests.

Will make it easier to process streamed reasoning blocks.
2025-06-18 15:01:54 -04:00
Tanmay Singhal
19544ba3c9
docs: Fix documentation for triggering image generation from OpenAI (#31652)
Description: Fixes a minor error in the ChatOpenAI documentation.
2025-06-18 13:31:47 -04:00
Jannik Maierhöfer
0cadf4fc9a
docs: upgrade langfuse example to python sdk v3 (#31654)
As the Langfuse python sdk v3 includes breaking changes, this PR updates
the code examples.

https://langfuse.com/docs/integrations/langchain/upgrade-paths#python
2025-06-18 13:30:39 -04:00
Mason Daugherty
a79998800c
fix: correct typo in docstring for three_values fixture (#31638)
Docstring typo fix in `base_store.py`
2025-06-17 16:51:07 -04:00
ccurme
da97013f96
docs: update OpenAI integration page (#31646)
model_kwargs is no longer needed for `truncation` and `reasoning`.
2025-06-17 16:23:06 -04:00
ccurme
6409498f6c
openai[patch]: route to Responses API if relevant attributes are set (#31645)
Following https://github.com/langchain-ai/langchain/pull/30329.
2025-06-17 16:04:38 -04:00
ccurme
3044bd37a9
openai: release 0.3.24 (#31642) 2025-06-17 15:06:52 -04:00
ccurme
c1c3e13a54
openai[patch]: add Responses API attributes to BaseChatOpenAI (#30329)
`reasoning`, `include`, `store`, `truncation`.

Previously these had to be added through `model_kwargs`.
2025-06-17 14:45:50 -04:00
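Based on the PR description, these options can now be passed as first-class attributes instead of via model_kwargs; a hedged sketch (parameter values are illustrative):
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="o4-mini",
    use_responses_api=True,       # route requests to the Responses API
    reasoning={"effort": "low"},  # previously only possible through model_kwargs
    truncation="auto",
    store=False,
)
print(llm.invoke("What is 2 + 2?").content)
```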
ccurme
b610859633
openai[patch]: support Responses streaming in AzureChatOpenAI (#31641)
Resolves https://github.com/langchain-ai/langchain/issues/31303,
https://github.com/langchain-ai/langchain/issues/31624
2025-06-17 14:41:09 -04:00
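A hedged sketch of Responses streaming with AzureChatOpenAI (deployment name and API version are placeholders):
```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",    # placeholder deployment name
    api_version="2025-03-01-preview",  # placeholder API version
    use_responses_api=True,
)
for chunk in llm.stream("Write a haiku about the sea."):
    print(chunk.text(), end="", flush=True)
```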
ccurme
bc1b5ffc91
docs: update agents tutorial to use langchain-tavily (#31637) 2025-06-17 11:25:03 -04:00
Himanshu Sharma
bb7c190d2c
langchain: Fix error in LLMListwiseRerank when Document list is empty (#31300)
**Description:**
This PR fixes an `IndexError` that occurs when `LLMListwiseRerank` is
called with an empty list of documents.

Earlier, the code assumed the presence of at least one document and
attempted to construct the context string based on `len(documents) - 1`,
which raises an error when documents is an empty list.

The fix works with gpt-4o-mini when the list is empty, but occasionally
fails with gpt-3.5-turbo. For an empty list, setting the context string
to "empty list" appears to produce the expected response.

**Issue:**  #31192
2025-06-17 10:14:07 -04:00
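A hedged sketch of the case the fix covers: compressing an empty document list should return an empty result rather than raising IndexError (model choice is illustrative):
```python
from langchain.retrievers.document_compressors import LLMListwiseRerank
from langchain_openai import ChatOpenAI

reranker = LLMListwiseRerank.from_llm(ChatOpenAI(model="gpt-4o-mini"), top_n=3)
result = reranker.compress_documents(documents=[], query="anything")
print(result)  # expected: []
```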
ZhangShenao
0b5c06e89f
[Doc] Improve api doc for perplexity (#31636)
- add param in api doc
- fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-17 14:10:43 +00:00
FT
c4c39c1ae6
mistralai[patch]: Fix Typos in Comments and Improve Compatibility Note (#31616)
Description:  
This pull request corrects minor spelling mistakes in the comments
within the `chat_models.py` file of the MistralAI partner integration.
Specifically, it fixes the spelling of "equivalent" and "compatibility"
in two separate comments. These changes improve code readability and
maintain professional documentation standards. No functional code
changes are included.
2025-06-17 09:23:25 -04:00
Cherilyn Buren
cc1e53008f
langchain: add missing milvus branch for self_query (#31630)
Fix #29603
2025-06-17 09:21:54 -04:00
Xin Jin
7702691baf
core and langchain: Remove upper bound restriction on langsmith dependency (#31629)
Remove the upper bound on the langsmith dependency. We have full control
over langsmith and will be careful with minor version bumps, so this
carries little risk; keeping the upper-bound restriction, on the other
hand, would likely cause occasional dependency headaches for users.

Discussion:
https://langchain.slack.com/archives/C06UEEE4DSS/p1750111219634649?thread_ts=1750107647.115289&cid=C06UEEE4DSS
2025-06-17 09:19:03 -04:00
Xin Jin
e979cd106a
chore: Bump langsmith in splitter uv (#31626)
`uv lock --upgrade-package langsmith`

Original issue: the lock file (uv.lock) was constraining
langsmith>=0.1.125,<0.4, preventing installation of LangSmith 0.4.1 even
though pyproject.toml itself did not impose that restriction.


Issue:
https://langchain.slack.com/archives/C050X0VTN56/p1750107176007629
2025-06-16 16:58:46 -07:00
Shivnath Tathe
1682b59f92
docs(rockset): add deprecation notice for Rockset integration (#31621) 2025-06-16 22:17:27 +00:00
Shivnath Tathe
d4c84acc39
docs: update deprecated .schema() to .model_json_schema() in tool_run… (#31615)
This PR updates the tool runtime example notebook to replace the
deprecated `.schema()` method with `.model_json_schema()`, aligning it
with Pydantic V2.

### 🔧 Changes:
- Replaced:
```python
update_favorite_pets.get_input_schema().schema()
```
with
```python
update_favorite_pets.get_input_schema().model_json_schema()
```

Fixes #31609
2025-06-16 18:07:18 -04:00
ccurme
b9357d456e
openai[patch]: refactor handling of Responses API (#31587) 2025-06-16 14:01:39 -04:00
Tom-Trumper
532e6455e9
text-splitters: Add keep_separator arg to HTMLSemanticPreservingSplitter (#31588)
### Description
Add a keep_separator arg to HTMLSemanticPreservingSplitter and pass the
value through to the RecursiveCharacterTextSplitter instance used under
the hood.
### Issue
Documents returned by `HTMLSemanticPreservingSplitter.split_text(text)`
currently default to including separators at the beginning of
page_content. [See the third and fourth documents in the example output
from the how-to
guide](https://python.langchain.com/docs/how_to/split_html/#using-htmlsemanticpreservingsplitter):
```
[Document(metadata={'Header 1': 'Main Title'}, page_content='This is an introductory paragraph with some basic content.'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='This section introduces the topic'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='. Below is a list: First item Second item Third item with bold text and a link Subsection 1.1: Details This subsection provides additional details'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content=". Here's a table: Header 1 Header 2 Header 3 Row 1, Cell 1 Row 1, Cell 2 Row 1, Cell 3 Row 2, Cell 1 Row 2, Cell 2 Row 2, Cell 3"),
 Document(metadata={'Header 2': 'Section 2: Media Content'}, page_content='This section contains an image and a video: ![image:example_image_link.mp4](example_image_link.mp4) ![video:example_video_link.mp4](example_video_link.mp4)'),
 Document(metadata={'Header 2': 'Section 3: Code Example'}, page_content='This section contains a code block: <code:html> <div> <p>This is a paragraph inside a div.</p> </div> </code>'),
 Document(metadata={'Header 2': 'Conclusion'}, page_content='This is the conclusion of the document.')]
```
### Dependencies
None

@ttrumper3
2025-06-14 17:56:14 -04:00
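A hedged sketch of the new argument, assuming it accepts the same values as RecursiveCharacterTextSplitter's keep_separator (True/False or "start"/"end"); the HTML snippet and chunk size are illustrative:
```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=120,
    keep_separator="end",  # avoid chunks whose page_content starts with a separator
)
docs = splitter.split_text(
    "<h1>Main Title</h1><p>This is an introductory paragraph with some basic content.</p>"
)
print([d.page_content for d in docs])
```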
dayvidborges
52e57cdc20
docs: update multimodal PDF and image usage for gpt-4.1 (#31595)
docs: update multimodal PDF and image usage for gpt-4.1

**Description:**
This update revises the LangChain documentation to support the new
GPT-4.1 multimodal API format. It fixes the previous broken example for
PDF uploads (which returned a 400 error: "Missing required parameter:
'messages[0].content[1].file'") and adds clear instructions on how to
include base64-encoded images for OpenAI models.

**Issue:**
Error reported in forums when loading PDFs through the API ->
'''
@[Albaeld](https://github.com/Albaeld)
Albaeld
[8 days
ago](https://github.com/langchain-ai/langchain/discussions/27702#discussioncomment-13369460)
This simply does not work with openai:gpt-4.1. I get:
Error code: 400 - {'error': {'message': "Missing required parameter:
'messages[0].content[1].file'.", 'type': 'invalid_request_error',
'param': 'messages[0].content[1].file', 'code':
'missing_required_parameter'}}
'''

**Dependencies:**
None

**Twitter handle:**
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-14 17:52:01 -04:00
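A hedged sketch of the documented pattern: base64-encoded PDF and image content blocks in one message to an OpenAI model (file names are placeholders; block shapes follow the updated docs and OpenAI's file/image content-part format):
```python
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1")

pdf_b64 = base64.b64encode(open("report.pdf", "rb").read()).decode()
img_b64 = base64.b64encode(open("chart.png", "rb").read()).decode()

message = HumanMessage(
    content=[
        {"type": "text", "text": "Summarize the attached PDF and describe the image."},
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",
                "file_data": f"data:application/pdf;base64,{pdf_b64}",
            },
        },
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
    ]
)
print(llm.invoke([message]).content)
```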
Peter Schneider
cecfec5efa
huggingface: handle image-text-to-text pipeline task (#31611)
**Description:** Allows HuggingFacePipeline to handle the
image-text-to-text pipeline task.
2025-06-14 16:41:11 -04:00
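A hedged sketch of constructing the pipeline with the newly supported task (the model id is illustrative; any image-text-to-text checkpoint should work):
```python
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="HuggingFaceTB/SmolVLM-Instruct",  # illustrative checkpoint
    task="image-text-to-text",
    pipeline_kwargs={"max_new_tokens": 64},
)
```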
fuder.eth
50f998a138
Fix Typos in Vectorstore Integration Documentation Notebooks (#31612)
Description:  
This pull request corrects minor typographical errors in the
documentation notebooks for vectorstore integrations. Specifically, it
fixes the spelling of "datastore" in `llm_rails.ipynb` and
"pre-existent" in `redis.ipynb`. These changes improve the clarity and
professionalism of the documentation. No functional code changes are
included.
2025-06-14 16:40:18 -04:00
Akim Tsvigun
f345ae5a1d
docs: Integration with Nebius AI Studio (#31293)
This MR is only for the docs. Added integration with Nebius AI Studio to
docs. The integration package is available at
[https://github.com/nebius/langchain-nebius](https://github.com/nebius/langchain-nebius).

---------

Co-authored-by: Akim Tsvigun <aktsvigun@nebius.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-14 16:15:27 -04:00
Xin Jin
01fcdff118
bump langsmith to allow 0.4 (#31594)
LangSmith 0.4 has launched, so bump the dependency across OSS: langchain
and langchain-core. A separate LangSmith docs announcement will follow.
2025-06-13 07:59:42 -07:00
ccurme
5839801897
openai: release 0.3.23 (#31604) 2025-06-13 14:02:38 +00:00
ccurme
0c10ff6418
openai[patch]: handle annotation change in openai==1.82.0 (#31597)
https://github.com/openai/openai-python/pull/2372/files#diff-91cfd5576e71b4b72da91e04c3a029bab50a72b5f7a2ac8393fca0a06e865fb3
2025-06-12 23:38:41 -04:00
Lauren Hirata Singh
bb625081c8
revert incident banner (#31592)
2025-06-12 17:25:09 -04:00
Nuno Campos
ddc850ca72
core: In LangChainTracer, send only the first token event (#31591)
- only the first one is used for analytics
2025-06-12 14:04:23 -07:00
Lauren Hirata Singh
50f9354d31
incident banner (#31590)
2025-06-12 16:05:35 -04:00
ccurme
446a9d5647
docs: document built-in tools for ChatVertexAI (#31564) 2025-06-11 10:31:46 -04:00
Eugene Yurtsev
d10fd02bb3
langchain[patch]: Allow specifying other hashing functions in embeddings (#31561)
Allow specifying other hashing functions in embeddings
2025-06-11 10:18:07 -04:00
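A hedged sketch, assuming the new option is exposed as a key_encoder argument on CacheBackedEmbeddings.from_bytes_store (the argument name and accepted values are assumptions based on the PR title):
```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings

store = LocalFileStore("./embedding_cache")
cached = CacheBackedEmbeddings.from_bytes_store(
    OpenAIEmbeddings(model="text-embedding-3-small"),
    store,
    namespace="text-embedding-3-small",
    key_encoder="sha256",  # assumed name for the new hashing option (default was SHA-1)
)
```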
ccurme
4071670f56
huggingface[patch]: bump transformers (#31559) 2025-06-10 20:43:33 +00:00