Commit Graph

7330 Commits

Author SHA1 Message Date
ccurme
39a8a1121a core: release 0.3.66 (#31690) 2025-06-20 17:45:03 -04:00
Raghu Kapur
2c9859956a text-splitters: fix stale header metadata in ExperimentalMarkdownSyntaxTextSplitter (#31622)
**Description:**

Previously, when transitioning from a deeper Markdown header (e.g., ###)
to a shallower one (e.g., ##), the
ExperimentalMarkdownSyntaxTextSplitter retained the deeper header in the
metadata.

This commit updates the `_resolve_header_stack` method to remove headers
at the same or deeper levels before appending the current header. As a
result, each chunk now reflects only the active header context.

Fixes unexpected metadata leakage across sections in nested Markdown
documents.

Additionally, test cases have been updated to:
- Validate correct header resolution and metadata assignment.
- Cover edge cases with nested headers and horizontal rules.
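
For illustration, a minimal sketch of the corrected behavior (the metadata keys assume the splitter's default `headers_to_split_on` mapping):

```python
from langchain_text_splitters import ExperimentalMarkdownSyntaxTextSplitter

markdown = """# Title
## Section A
### Deep subsection
Deep content under a level-3 header.
## Section B
Content after returning to a level-2 header.
"""

splitter = ExperimentalMarkdownSyntaxTextSplitter()
docs = splitter.split_text(markdown)

# The last chunk should now carry only {"Header 1": "Title", "Header 2": "Section B"};
# the stale "Header 3": "Deep subsection" entry is no longer retained.
for doc in docs:
    print(doc.metadata, "|", doc.page_content)
```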

**Issue:** 
Fixes [#31596](https://github.com/langchain-ai/langchain/issues/31596)

**Dependencies:**
None

**Twitter handle:** -> [_RaghuKapur](https://twitter.com/_RaghuKapur)

**LinkedIn:** ->
[https://www.linkedin.com/in/raghukapur/](https://www.linkedin.com/in/raghukapur/)
2025-06-20 15:52:17 -04:00
ZhangShenao
9d4d258162 [Doc] Improve api doc for DeepSeek (#31655)
- Add param in api doc
- Fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-20 19:47:54 +00:00
Saran Connolly
22e6d90937 langchain_mistralai: Include finish_reason in response metadata when parsing MistralAI chunks to AIMessageChunk (#31667)
## Description
- When parsing MistralAI chunk dicts into LangChain `AIMessageChunk`
schemas via the `_convert_chunk_to_message_chunk` utility function, the
`finish_reason` was not included in `response_metadata` as it is
for other providers.
- This PR adds a one-liner fix to include the finish reason (see the sketch
below).

- fixes: https://github.com/langchain-ai/langchain/issues/31666
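
A hedged sketch of what this enables when streaming (assumes `MISTRAL_API_KEY` is set; the model name is illustrative):

```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")

aggregate = None
for chunk in llm.stream("Say hello in one word."):
    # AIMessageChunk supports "+", which merges response_metadata across chunks.
    aggregate = chunk if aggregate is None else aggregate + chunk

# With this change, finish_reason is surfaced as it is for other providers.
print(aggregate.response_metadata.get("finish_reason"))  # e.g. "stop"
```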
2025-06-20 15:41:20 -04:00
Mohammad Mohtashim
7ff405077d core[patch]: Returning always 2D Array for _cosine_similarity (#31528)
- **Description:** Very simple change in `_cosine_similarity` so that it
always returns a 2D array (see the sketch below).
- **Issue:** #31497
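
Not the actual patch, just a sketch of the shape guarantee being described, using NumPy:

```python
import numpy as np

def cosine_similarity_2d(x, y):
    """Pairwise cosine similarity, always returned as a 2D array."""
    x = np.atleast_2d(np.asarray(x, dtype=float))
    y = np.atleast_2d(np.asarray(y, dtype=float))
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return x @ y.T  # shape (len(x), len(y)), even when the inputs were 1D

print(cosine_similarity_2d([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]).shape)  # (1, 2)
```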
2025-06-20 11:25:02 -04:00
Eugene Yurtsev
2842e0c8c1 core[patch]: Add doc-strings to tools/base.py (#31684)
Add doc-strings
2025-06-20 11:16:57 -04:00
Christophe Bornet
7e046ea848 core: Cleanup Pydantic models and handle deprecation warnings (#30799)
* Simplified Pydantic handling since Pydantic v1 is not supported
anymore.
* Replace use of deprecated v1 methods with the corresponding v2 methods.
* Remove use of other deprecated methods.
* Enable mypy errors for use of deprecated methods.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-20 10:42:52 -04:00
ccurme
e2a0ff07fd openai[patch]: include 'type' key internally when streaming reasoning blocks (#31661)
Covered by existing tests.

Will make it easier to process streamed reasoning blocks.
2025-06-18 15:01:54 -04:00
Mason Daugherty
a79998800c fix: correct typo in docstring for three_values fixture (#31638)
Docstring typo fix in `base_store.py`
2025-06-17 16:51:07 -04:00
ccurme
6409498f6c openai[patch]: route to Responses API if relevant attributes are set (#31645)
Following https://github.com/langchain-ai/langchain/pull/30329.
2025-06-17 16:04:38 -04:00
ccurme
3044bd37a9 openai: release 0.3.24 (#31642) 2025-06-17 15:06:52 -04:00
ccurme
c1c3e13a54 openai[patch]: add Responses API attributes to BaseChatOpenAI (#30329)
`reasoning`, `include`, `store`, `truncation`.

Previously these had to be added through `model_kwargs`.
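
A hedged sketch of usage after this change (parameter values are illustrative, not recommendations):

```python
from langchain_openai import ChatOpenAI

# These Responses API options are now first-class constructor arguments
# instead of having to be routed through model_kwargs.
llm = ChatOpenAI(
    model="o4-mini",
    reasoning={"effort": "medium", "summary": "auto"},
    include=["reasoning.encrypted_content"],
    store=False,
    truncation="auto",
)
response = llm.invoke("What is 2 + 2?")
```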
2025-06-17 14:45:50 -04:00
ccurme
b610859633 openai[patch]: support Responses streaming in AzureChatOpenAI (#31641)
Resolves https://github.com/langchain-ai/langchain/issues/31303,
https://github.com/langchain-ai/langchain/issues/31624
2025-06-17 14:41:09 -04:00
Himanshu Sharma
bb7c190d2c langchain: Fix error in LLMListwiseRerank when Document list is empty (#31300)
**Description:**
This PR fixes an `IndexError` that occurs when `LLMListwiseRerank` is
called with an empty list of documents.

Earlier, the code assumed the presence of at least one document and
attempted to construct the context string based on `len(documents) - 1`,
which raises an error when documents is an empty list.

The fix works with gpt-4o-mini when the document list is empty, but
occasionally fails with gpt-3.5-turbo. For an empty list, setting the
context string to "empty list" produces the expected response.

**Issue:**  #31192
2025-06-17 10:14:07 -04:00
ZhangShenao
0b5c06e89f [Doc] Improve api doc for perplexity (#31636)
- add param in api doc
- fix word spelling

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-17 14:10:43 +00:00
FT
c4c39c1ae6 mistralai[patch]: Fix Typos in Comments and Improve Compatibility Note (#31616)
Description:  
This pull request corrects minor spelling mistakes in the comments
within the `chat_models.py` file of the MistralAI partner integration.
Specifically, it fixes the spelling of "equivalent" and "compatibility"
in two separate comments. These changes improve code readability and
maintain professional documentation standards. No functional code
changes are included.
2025-06-17 09:23:25 -04:00
Cherilyn Buren
cc1e53008f langchain: add missing milvus branch for self_query (#31630)
Fix #29603
2025-06-17 09:21:54 -04:00
Xin Jin
7702691baf core and langchain: Remove upper bound restriction langsmith dependency (#31629)
Remove the upper bound restriction on langsmith for good measure: we have
full control over LangSmith, so we will be careful with minor version bumps
and this shouldn't add much risk. On the other hand, keeping the upper
bound restriction would likely cause occasional dependency headaches for
users.

Discussion:
https://langchain.slack.com/archives/C06UEEE4DSS/p1750111219634649?thread_ts=1750107647.115289&cid=C06UEEE4DSS
2025-06-17 09:19:03 -04:00
Xin Jin
e979cd106a chore: Bump langsmith in splitter uv (#31626)
`uv lock --upgrade-package langsmith`
Original issue: the lock file (uv.lock) was constraining
langsmith>=0.1.125,<0.4, preventing installation of LangSmith 0.4.1 even
though the langchain-core pyproject.toml wasn't imposing that restriction.


Issue:
https://langchain.slack.com/archives/C050X0VTN56/p1750107176007629
2025-06-16 16:58:46 -07:00
ccurme
b9357d456e openai[patch]: refactor handling of Responses API (#31587) 2025-06-16 14:01:39 -04:00
Tom-Trumper
532e6455e9 text-splitters: Add keep_separator arg to HTMLSemanticPreservingSplitter (#31588)
### Description
Add a keep_separator arg to HTMLSemanticPreservingSplitter and pass the
value to the RecursiveCharacterTextSplitter instance used under the hood
(see the sketch after the example output below).
### Issue
Documents returned by `HTMLSemanticPreservingSplitter.split_text(text)`
currently default to placing separators at the beginning of page_content.
[See the third and fourth documents in the example output from the how-to
guide](https://python.langchain.com/docs/how_to/split_html/#using-htmlsemanticpreservingsplitter):
```
[Document(metadata={'Header 1': 'Main Title'}, page_content='This is an introductory paragraph with some basic content.'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='This section introduces the topic'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content='. Below is a list: First item Second item Third item with bold text and a link Subsection 1.1: Details This subsection provides additional details'),
 Document(metadata={'Header 2': 'Section 1: Introduction'}, page_content=". Here's a table: Header 1 Header 2 Header 3 Row 1, Cell 1 Row 1, Cell 2 Row 1, Cell 3 Row 2, Cell 1 Row 2, Cell 2 Row 2, Cell 3"),
 Document(metadata={'Header 2': 'Section 2: Media Content'}, page_content='This section contains an image and a video: ![image:example_image_link.mp4](example_image_link.mp4) ![video:example_video_link.mp4](example_video_link.mp4)'),
 Document(metadata={'Header 2': 'Section 3: Code Example'}, page_content='This section contains a code block: <code:html> <div> <p>This is a paragraph inside a div.</p> </div> </code>'),
 Document(metadata={'Header 2': 'Conclusion'}, page_content='This is the conclusion of the document.')]
```
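A hedged sketch of the new argument, assuming `keep_separator` accepts the same values as `RecursiveCharacterTextSplitter` (bool or "start"/"end"); the other constructor options follow the how-to guide and are illustrative:

```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

html_string = "<h1>Main Title</h1><p>This is an introductory paragraph.</p>"

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=50,
    keep_separator="end",  # new: forwarded to the underlying RecursiveCharacterTextSplitter
)
documents = splitter.split_text(html_string)
```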
### Dependencies
None

@ttrumper3
2025-06-14 17:56:14 -04:00
Peter Schneider
cecfec5efa huggingface: handle image-text-to-text pipeline task (#31611)
**Description:** Allows HuggingFacePipeline to handle the
image-text-to-text pipeline task.
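
A hedged sketch of constructing such a pipeline (the model id is illustrative and requires the heavy `transformers`/`torch` dependencies):

```python
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="llava-hf/llava-interleave-qwen-0.5b-hf",
    task="image-text-to-text",
    pipeline_kwargs={"max_new_tokens": 64},
)
```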
2025-06-14 16:41:11 -04:00
Akim Tsvigun
f345ae5a1d docs: Integration with Nebius AI Studio (#31293)
This PR is only for the docs. Added integration with Nebius AI Studio to
the docs. The integration package is available at
[https://github.com/nebius/langchain-nebius](https://github.com/nebius/langchain-nebius).

---------

Co-authored-by: Akim Tsvigun <aktsvigun@nebius.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-14 16:15:27 -04:00
Xin Jin
01fcdff118 bump langsmith to allow 0.4 (#31594)
LangSmith 0.4 has launched, so bump it across OSS: langchain and
langchain-core. A separate LangSmith docs announcement will follow.
2025-06-13 07:59:42 -07:00
ccurme
5839801897 openai: release 0.3.23 (#31604) 2025-06-13 14:02:38 +00:00
ccurme
0c10ff6418 openai[patch]: handle annotation change in openai==1.82.0 (#31597)
https://github.com/openai/openai-python/pull/2372/files#diff-91cfd5576e71b4b72da91e04c3a029bab50a72b5f7a2ac8393fca0a06e865fb3
2025-06-12 23:38:41 -04:00
Nuno Campos
ddc850ca72 core: In LangChainTracer, send only the first token event (#31591)
- only the first one is used for analytics
2025-06-12 14:04:23 -07:00
Eugene Yurtsev
d10fd02bb3 langchain[patch]: Allow specifying other hashing functions in embeddings (#31561)
Allow specifying other hashing functions in embeddings
2025-06-11 10:18:07 -04:00
ccurme
4071670f56 huggingface[patch]: bump transformers (#31559) 2025-06-10 20:43:33 +00:00
ccurme
40d6d4c738 huggingface[patch]: bump core dep (#31558) 2025-06-10 20:26:13 +00:00
Mohammad Mohtashim
42eb356a44 [OpenAI]: Encoding Model (#31402)
- **Description:** Small fix to handle a KeyError when getting the
encoder, and to use the correct encoder for newer models (see the sketch
below)
- **Issue:** #31390
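
Not the exact patch, but a sketch of the fallback pattern described (the fallback encoding name is an assumption):

```python
import tiktoken

def get_encoding_for(model: str) -> tiktoken.Encoding:
    try:
        # Newer models resolve to their correct encoder (e.g. o200k_base).
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model name for this tiktoken version: fall back to a default.
        return tiktoken.get_encoding("cl100k_base")

enc = get_encoding_for("gpt-4o-mini")
print(len(enc.encode("hello world")))
```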
2025-06-10 16:00:00 -04:00
ccurme
b0f100af7e core: release 0.3.65 (#31557) 2025-06-10 19:39:50 +00:00
Sydney Runkle
5b165effcd core(fix): revert set_text optimization (#31555)
Revert serialization regression introduced in
https://github.com/langchain-ai/langchain/pull/31238

Fixes https://github.com/langchain-ai/langchain/issues/31486
2025-06-10 13:36:55 -04:00
Eugene Yurtsev
9ce974247c langchain[patch]: Remove proxy imports to langchain_experimental (#31541)
Remove proxy imports to langchain_experimental.

Previously, these imports would work if a user manually installed
langchain_experimental. However, we want to drop support even for that
as langchain_experimental is generally not recommended to be run in
production.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-06-09 17:09:09 -04:00
ccurme
71b0f78952 openai: release 0.3.22 (#31542) 2025-06-09 15:29:15 -04:00
ccurme
575662d5f1 openai[patch]: accommodate change in image generation API (#31522)
OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.

As described in https://github.com/langchain-ai/langchain/pull/31424, we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
2025-06-09 14:57:46 -04:00
ccurme
ece9e31a7a openai[patch]: VCR some tests (#31524) 2025-06-06 23:00:57 +00:00
Bagatur
5187817006 openai[release]: 0.3.21 (#31519) 2025-06-06 11:40:09 -04:00
Bagatur
761f8c3231 openai[patch]: pass through with_structured_output kwargs (#31518)
Support 
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class ResponseSchema(BaseModel):
    response: str


def get_weather(location: str) -> str:
    """Get weather"""
    pass

llm = init_chat_model("openai:gpt-4o-mini")

structured_llm = llm.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
    tool_choice="required",
    parallel_tool_calls=False,
)

structured_llm.invoke("whats up?")
```
2025-06-06 11:17:34 -04:00
Bagatur
0375848f6c openai[patch]: update with_structured_outputs docstring (#31517)
Update docstrings
2025-06-06 10:03:47 -04:00
ccurme
9c639035c0 standard-tests: add cache_control to Anthropic inputs test (#31516) 2025-06-06 10:00:43 -04:00
ccurme
a1f068eb85 openai: release 0.3.20 (#31515) 2025-06-06 13:29:12 +00:00
ccurme
4cc2f6b807 openai[patch]: guard against None text completions in BaseOpenAI (#31514)
Some chat completions APIs will return null `text` output (even though
this is typed as string).
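
A minimal sketch of the kind of guard this adds (the dict shape is illustrative):

```python
choice = {"text": None, "finish_reason": "stop"}  # some providers return null here
text = choice.get("text") or ""  # coerce a null completion to an empty string
print(repr(text))  # ''
```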
2025-06-06 09:14:37 -04:00
lc-arjun
35ae5eab4f core: use run tree post/patch (#31500)
Use run post/patch
2025-06-05 14:05:57 -07:00
Eugene Yurtsev
73655b0ca8 huggingface: 0.3.0 release (#31503)
Breaking change to make some dependencies optional:
https://github.com/langchain-ai/langchain/pull/31268

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-06-05 20:20:15 +00:00
Bagatur
f7f52cab12 anthropic[patch]: cache tokens nit (#31484)
If you pass in beta headers directly, `cache_creation` is a dict.
2025-06-05 16:15:03 -04:00
ccurme
14c561e15d infra: relax types-requests version range (#31504) 2025-06-05 18:57:08 +00:00
ccurme
6d6f305748 openai[patch]: clarify docs on api_version in docstring for AzureChatOpenAI (#31502) 2025-06-05 16:06:22 +00:00
Simon Stone
815bfa5408 huggingface[major]: Reduce disk footprint by 95% by making large dependencies optional (#31268)
**Description:** 
`langchain_huggingface` has a very large installation size of around 600
MB (on a Mac with Python 3.11). This is due to its dependency on
`sentence-transformers`, which in turn depends on `torch`, which is 320
MB all by itself. Similarly, the dependency on `transformers` adds
another set of heavy dependencies. With those dependencies removed, the
installation of `langchain_huggingface` only takes up ~26 MB. This is
only 5% of the full installation!

These libraries are not necessary to use `langchain_huggingface`'s API
wrapper classes, only for local inferences/embeddings. All import
statements for those two libraries already have import guards in place
(try/catch with a helpful "please install x" message).
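
That guard pattern looks roughly like this (a generic sketch, not the exact code in the package):

```python
try:
    from sentence_transformers import SentenceTransformer
except ImportError as exc:
    raise ImportError(
        "Could not import sentence_transformers. Please install it with "
        "`pip install langchain_huggingface[full]` or `pip install sentence-transformers`."
    ) from exc
```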

This PR therefore moves those two libraries to an optional dependency
group `full`. So a `pip install langchain_huggingface` will only install
the lightweight version, and a `pip install
"langchain_huggingface[full]"` will install all dependencies.

I know this may break existing code, because `sentence-transformers` and
`transformers` are no longer installed by default. Given that users
will see helpful error messages when that happens, and the major impact
of this small change, I hope that you will still consider this PR.

**Dependencies:** No new dependencies, but new optional grouping.
2025-06-05 12:04:19 -04:00
Mohammad Mohtashim
ae3551c96b core[patch]: Correct type casting of annotations in _infer_arg_descriptions (#31181)
- **Description:** 
- In _infer_arg_descriptions, the annotations dictionary contains string
representations of types instead of actual typing objects. This causes
_is_annotated_type to fail, preventing the correct description from
being generated.
- This is a simple fix using the get_type_hints method, which resolves
the annotations properly and is supported across all Python versions (see
the sketch below).

  - **Issue:** #31051
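
A small illustration of the underlying issue (function and annotation are made up):

```python
from __future__ import annotations  # stores annotations as plain strings

from typing import Annotated, get_type_hints

def search(query: Annotated[str, "search query to run"]) -> str:
    return query

# The raw annotation is a string, so Annotated metadata can't be inspected directly:
print(search.__annotations__["query"])  # "Annotated[str, 'search query to run']"

# get_type_hints resolves it back to a real typing object (keeping the metadata):
hints = get_type_hints(search, include_extras=True)
print(hints["query"])  # typing.Annotated[str, 'search query to run']
```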

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-06-05 11:58:36 -04:00