Commit Graph

6965 Commits

Author SHA1 Message Date
ccurme
63c16f5ca8
community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation (#30756) 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93
core: Add ruff rules PLR (#30696)
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d
partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code (#30735)
Hi there! This is a complementary PR to #30733.
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different inference providers.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.
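
A minimal usage sketch of the new `provider` field; the model id and provider name are illustrative assumptions, and an HF token is still expected via the usual environment configuration:

```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings

embeddings = HuggingFaceEndpointEmbeddings(
    model="sentence-transformers/all-MiniLM-L6-v2",
    provider="hf-inference",  # which serverless Inference Provider handles the request
)
vectors = embeddings.embed_documents(["Hello world!"])
```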

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2
core: Add ruff rules ARG (#30732)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
78ec7d886d
[performance]: Adding benchmarks for common langchain-core imports (#30747)
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

![Screenshot 2025-04-09 at 12 48 12 PM](https://github.com/user-attachments/assets/994616fe-1798-404d-bcbe-48ad0eb8a9a0)

Also added a `make benchmark` command to make local runs easy :).
Currently using wall times so that we can track total time despite using
a manual process.
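
A rough sketch of the subprocess-based measurement described above (the module name is illustrative): each import is timed in a fresh interpreter so Python's module cache cannot skew the result.

```python
import subprocess
import sys


def time_import(module: str) -> float:
    """Measure wall time (seconds) to import `module` in a clean Python process."""
    code = (
        "import time; t0 = time.perf_counter(); "
        f"import {module}; "
        "print(time.perf_counter() - t0)"
    )
    result = subprocess.run(
        [sys.executable, "-c", code], capture_output=True, text=True, check=True
    )
    return float(result.stdout.strip())


print(time_import("langchain_core.prompts"))
```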
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27
community: Google Vertex AI Search now returns the website title as part of the document metadata (#30688)
Google Vertex AI Search will now return the title of the found website
as part of the document metadata, if available.

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing); see the illustration below.
- **Issue**: No bug in particular, but it would be better if this was
here.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
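
As an illustration of the new metadata field (the values below are made up), a retrieved document can now carry `title` next to `id` and `source`:

```python
from langchain_core.documents import Document

# Shape of a Vertex AI Search result after this change; `title` is present when
# the indexed page exposes one and absent e.g. for plain .txt documents.
doc = Document(
    page_content="...",
    metadata={"id": "doc-123", "source": "https://example.com/page", "title": "Example Page"},
)
print(doc.metadata.get("title", doc.metadata["source"]))  # fall back to the URL
```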
2025-04-09 08:54:06 -04:00
Sydney Runkle
4556b81b1d
Clean up numpy dependencies and speed up 3.13 CI with numpy>=2.1.0 (#30714)
Generally, this PR is CI performance focused + aims to clean up some
dependencies at the same time.

1. Unpins upper bounds for `numpy` in all `pyproject.toml` files where
`numpy` is specified
2. Requires `numpy >= 2.1.0` for Python 3.13 and `numpy > 1.26.0` for
Python 3.12, plus a `numpy` min version bump for `chroma`
3. Speeds up CI by minutes - linting on Python 3.13, installing `numpy <
2.1.0` was taking [~3
minutes](https://github.com/langchain-ai/langchain/actions/runs/14316342925/job/40123305868?pr=30713),
now the entire env setup takes a few seconds
4. Deleted the `numpy` test dependency from partners where that was not
used, specifically `huggingface`, `voyageai`, `xai`, and `nomic`.

It's a bit unfortunate that `langchain-community` depends on `numpy`; we
might want to try to fix that in the future...

Closes https://github.com/langchain-ai/langchain/issues/26026
Fixes https://github.com/langchain-ai/langchain/issues/30555
2025-04-08 09:45:07 -04:00
湛露先生
9cbe91896e
Fix deepseek release tag, as its name was updated. (#30717)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-08 08:43:16 -04:00
Nithish Raghunandanan
893942651b
docs: Update couchbase vector store docs (#30710)
- **Update LangChain-Couchbase documentation**
- Rename `CouchbaseVectorStore` to `CouchbaseSearchVectorStore`

- [x] **Lint and test**
2025-04-07 18:45:14 -04:00
ccurme
a2bec5f2e5
ollama: release 0.3.1 (#30716) 2025-04-07 20:31:25 +00:00
ccurme
e3f15f0a47
ollama[patch]: add model_name to response metadata (#30706)
Fixes [this standard
test](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
2025-04-07 16:27:58 -04:00
ccurme
e106e9602f
groq[patch]: add retries to integration tests (#30707)
Tool-calling tests started intermittently failing with
> groq.APIError: Failed to call a function. Please adjust your prompt.
See 'failed_generation' for more details.
2025-04-07 12:45:53 -04:00
Mohammad Mohtashim
e935da0b12
ChatTongyi reasoning_content fix (#30694)
- **Description:** Small fix for `reasoning_content` key
- **Issue:** #30689
2025-04-07 09:27:33 -04:00
Tin Lai
4d03ba4686
langchain_qdrant: fix showing the missing sparse vector name (#30701)
**Description:** The error message was supposed to display the missing
vector name, but instead it included only the existing collection
configs.

This simple PR just uses the correct variable name, so that the user
knows the requested vector does not exist in the collection.

Signed-off-by: Tin Lai <tin@tinyiu.com>
2025-04-07 09:19:08 -04:00
Christophe Bornet
6650b94627
core: Add ruff rules PYI (#29335)
See https://docs.astral.sh/ruff/rules/#flake8-pyi-pyi

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 19:59:44 +00:00
Philippe PRADOS
d8e3b7667f
community[patch]: Fix empty producer in PDF Parsers (#30620)
Fix an issue where a PDF file without a “producer” entry in its metadata generated an exception.
2025-04-04 15:53:49 -04:00
Christophe Bornet
f0159c7125
core: Add ruff rules PGH (except PGH003) (#30656)
Add ruff rules PGH: https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
Except PGH003, which will be dealt with in a dedicated PR.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-04 19:53:27 +00:00
Armaanjeet Singh Sandhu
7c2468f36b
core: Fix handler removal in BaseCallbackManager (Fixes #30640) (#30659)
**Description:**  
Fixed a bug in `BaseCallbackManager.remove_handler()` that caused a
`ValueError` when removing a handler added via the constructor's
`handlers` parameter. The issue occurred because handlers passed to the
constructor were added only to the `handlers` list and not automatically
to `inheritable_handlers` unless explicitly specified. However,
`remove_handler()` attempted to remove the handler from both lists
unconditionally, triggering a `ValueError` when it wasn't in
`inheritable_handlers`.

The fix ensures the method checks for the handler’s presence in each
list before attempting removal, making it more robust while preserving
its original behavior.
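
A minimal sketch of the fixed behavior (not the verbatim patch): a handler registered only via the constructor's `handlers` parameter can now be removed without error.

```python
from langchain_core.callbacks import BaseCallbackHandler, BaseCallbackManager

handler = BaseCallbackHandler()
manager = BaseCallbackManager(handlers=[handler])  # not added to inheritable_handlers

# Previously raised ValueError because the handler was absent from
# inheritable_handlers; the fix checks membership in each list before removing.
manager.remove_handler(handler)
```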

**Issue:** Fixes #30640

**Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 15:45:15 -04:00
Mohammad Mohtashim
bff56c5fa6
community[patch]: Redundant Parser checker for Webbaseloader (#30632)
- **Description:** We do not need to set the parser in `scrape` since it
has already been done in `_scrape`.
- **Issue:** #30629; not directly related, but this makes sure the XML
parser is used.
2025-04-04 14:11:26 -04:00
Christophe Bornet
150ac0cb79
core: Add ruff rules DTZ (#30657)
Add ruff rules DTZ:
https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-04-04 13:43:47 -04:00
Christophe Bornet
5e418c2666
core: Rework pydantic version checks (#30653)
This pull request includes various changes to the `langchain_core`
library, focusing on improving compatibility with different versions of
Pydantic. The primary change involves replacing checks for Pydantic
major versions with boolean flags, which simplifies the code and
improves readability.
This also solves ruff rule checks for
[RUF048](https://docs.astral.sh/ruff/rules/map-int-version-parsing/) and
[PLR2004](https://docs.astral.sh/ruff/rules/magic-value-comparison/).
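
A small sketch of the flag-based pattern; the real flags live in `langchain_core.utils.pydantic`, so this is just the idea rather than the exact code:

```python
import pydantic
from pydantic import BaseModel

IS_PYDANTIC_V1 = pydantic.VERSION.startswith("1.")
IS_PYDANTIC_V2 = pydantic.VERSION.startswith("2.")


def dump_model(model: BaseModel) -> dict:
    """Serialize a model without comparing integer major versions."""
    if IS_PYDANTIC_V2:
        return model.model_dump()
    return model.dict()  # Pydantic v1 API
```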

Key changes include:

### Compatibility Improvements:
*
[`libs/core/langchain_core/output_parsers/json.py`](diffhunk://#diff-5add0cf7134636ae4198a1e0df49ee332ae0c9123c3a2395101e02687c717646L22-R24):
Replaced `PYDANTIC_MAJOR_VERSION` with `IS_PYDANTIC_V1` to check for
Pydantic version 1.
*
[`libs/core/langchain_core/output_parsers/pydantic.py`](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14):
Updated version checks from `PYDANTIC_MAJOR_VERSION` to `IS_PYDANTIC_V2`
in the `PydanticOutputParser` class.
[[1]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14)
[[2]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L27-R27)

### Utility Enhancements:
*
[`libs/core/langchain_core/utils/pydantic.py`](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23):
Introduced `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` flags and deprecated
the `get_pydantic_major_version` function. Updated various functions to
use these flags instead of version numbers.
[[1]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23)
[[2]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R42-R78)
[[3]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L90-R89)
[[4]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L104-R101)
[[5]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L120-R122)
[[6]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L135-R132)
[[7]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L149-R151)
[[8]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L164-R161)
[[9]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L248-R250)
[[10]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L330-R335)
[[11]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L356-R357)
[[12]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L393-R390)
[[13]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L403-R400)

### Test Updates:
*
[`libs/core/tests/unit_tests/output_parsers/test_openai_tools.py`](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22):
Updated tests to use `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` for version
checks.
[[1]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22)
[[2]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L532-R535)
[[3]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L567-R570)
[[4]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L602-R605)
*
[`libs/core/tests/unit_tests/prompts/test_chat.py`](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7):
Replaced version tuple checks with `PYDANTIC_VERSION` comparisons.
[[1]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7)
[[2]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L35-R38)
[[3]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L924-R927)
[[4]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L935-R938)
*
[`libs/core/tests/unit_tests/runnables/test_graph.py`](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3):
Simplified version checks using `PYDANTIC_VERSION`.
[[1]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3)
[[2]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL15-R18)
[[3]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL234-L239)
*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20):
Introduced `PYDANTIC_VERSION_AT_LEAST_29` and
`PYDANTIC_VERSION_AT_LEAST_210` for more readable version checks.
[[1]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20)
[[2]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L92-R99)
[[3]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L230-R233)
[[4]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L652-R655)
2025-04-04 13:42:30 -04:00
Christophe Bornet
43b5dc7191
core: Add ruff rules TD and FIX (#30654)
Add ruff rules:
* FIX: https://docs.astral.sh/ruff/rules/#flake8-fixme-fix
* TD: https://docs.astral.sh/ruff/rules/#flake8-todos-td

Code cleanup:

*
[`libs/core/langchain_core/outputs/chat_generation.py`](diffhunk://#diff-a1017ee46f58fa4005b110ffd4f8e1fb08f6a2a11d6ca4c78ff8be641cbb89e5L56-R56):
Removed the "HACK" prefix from a comment in the `set_text` method.

Configuration adjustments:

*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537R85-R93):
Added new rules `FIX002`, `TD002`, and `TD003` to the ignore list.
*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537L102-L108):
Removed the `FIX` and `TD` rules from the ignore list.

Test refinement:

*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L3231-R3232):
Updated a TODO comment to improve clarity in the `test_map_stream`
function.
2025-04-04 13:40:42 -04:00
ccurme
a007c57285
docs: update package registry sort order (#30677) 2025-04-04 13:12:39 -04:00
Sydney Runkle
33ed7c31da
docs: fix perplexity install instructions in ChatPerplexity docstring (#30676)
* `openai` install no longer needs to be done manually
2025-04-04 12:58:18 -04:00
Dhruvajyoti Sarma
f9bb5ec5d0
feature: removed pandas dataframe dependency for similarity_search when using DuckDB as vector store (#30445)
- **Description:** Removes the pandas dependency for using DuckDB for
similarity search. The old function still exists as
`similarity_search_pd`, while the new one is at `similarity_search` and
requires no code changes. The return format remains the same (see the
sketch below).
- **Issue:** Issue #29933 and update on PR #30435
- **Dependencies:** No dependencies
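
A minimal usage sketch; the embeddings and data are illustrative, with `FakeEmbeddings` standing in for a real embedding model:

```python
import duckdb
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import DuckDB

conn = duckdb.connect(":memory:")
store = DuckDB(connection=conn, embedding=FakeEmbeddings(size=8))
store.add_texts(["hello world", "goodbye world"])
docs = store.similarity_search("hello", k=1)  # returns Documents directly, no pandas needed
print(docs[0].page_content)
```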
2025-04-04 12:19:18 -04:00
Akshay Dongare
f79473b752
Solved issue Implement langchain-litellm #30368 (#30637)
**PR title**: 
- [x] 1. docs: docs/docs/integrations/providers/LiteLLM.md
- [x] 2. docs: docs/docs/integrations/chat/litellm.ipynb
- [x] 3. libs: libs/packages.yml

- [x] **PR message**:
    - **Description:** Implement langchain-litellm
    - **Issue:** the issue #30368 
    - **Twitter handle:** akshay_d02
    - **LinkedIn Handle** https://linkedin.com/in/akshay-dongare 

- [x] **Add tests and docs**: Done

- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-04 16:12:10 +00:00
Yiğit Bekir Kaya, PhD
87e82fe1e8
Added langchain-qwq package documentation (Alibaba Cloud) (#30628)
LangChain QwQ allows non-Tongyi users to access thinking models with
extra capabilities which serve as an extension to Alibaba Cloud.

Hi @ccurme I'm back with the updated PR this time with documentation and
a finished package.

- **Description:** adds documentation of `langchain-qwq` integration
package. Also adds it to Alibaba Cloud provider
- **Issue:** #30580 #30317 #30579
- **Dependencies:** openai, json-repair
- **Twitter handle:** YigitBekir


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2025-04-04 11:47:14 -04:00
Andrew Benton
4e7a9a7014
community: Add support for custom runtimes to Riza tools (#30664)
**Description:**
Adds support for Riza custom runtimes to the two Riza code interpreter
tools, allowing users to run LLM-generated code that depends on
libraries outside stdlib.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @rizaio
2025-04-04 11:03:14 -04:00
diego dupin
aa37893c00
MariaDB vector store documentation addition (#30229)
### New Feature

Since version 11.7.1, MariaDB supports vectors. This is a super fast
implementation (see [some perf
blog](https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html)).
The goal is to support MariaDB with LangChain.

Implementation is done in
https://github.com/mariadb-corporation/langchain-mariadb, published in
https://pypi.org/project/langchain-mariadb/

This concerns the doc addition
 

(initial PR https://github.com/langchain-ai/langchain/pull/29989)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Oskar Stark <oskarstark@googlemail.com>
2025-04-04 14:56:25 +00:00
Sydney Runkle
1cdea6ab07
langchain-community: release 0.3.21 (#30673) 2025-04-04 14:14:50 +00:00
Sydney Runkle
901dffe06b
langchain: release 0.3.23 (#30670)
* Bump `text-splitters` min version
* Bump `langchain-core` min version
* Bump `langchain` version 🚀
2025-04-04 10:06:29 -04:00
ccurme
0c2c8c36c1
text-splitters: release 0.3.8 (#30671) 2025-04-04 09:58:45 -04:00
ccurme
59d508a2ee
openai[patch]: make computer test more reliable (#30672) 2025-04-04 13:53:59 +00:00
Sydney Runkle
c235328b39 Revert "update langchain version and bump min core v"
This reverts commit d0f154dbaa.
2025-04-04 09:31:51 -04:00
Sydney Runkle
d0f154dbaa update langchain version and bump min core v 2025-04-04 09:27:49 -04:00
Sydney Runkle
32cd70d7d2
release: bump core to v0.3.51 (#30668) 2025-04-04 13:23:09 +00:00
Max Forsey
18cf457eec
langchain-runpod integration (#30648)
## Description:

This PR adds the necessary documentation for the `langchain-runpod`
partner package integration. It includes:

* A provider page (`docs/docs/integrations/providers/runpod.ipynb`)
explaining the overall setup.
* An LLM component page (`docs/docs/integrations/llms/runpod.ipynb`)
detailing the `RunPod` class usage.
* A Chat Model component page
(`docs/docs/integrations/chat/runpod.ipynb`) detailing the `ChatRunPod`
class usage, including a feature support table.

These documentation files reflect the latest features of the
`langchain-runpod` package (v0.2.0+) such as async support and API
polling logic.

This work also addresses the review feedback provided on the previous
attempt in PR #30246 by:
*   Removing all TODOs from documentation.
*   Adding the required links between provider and component pages.
*   Completing the feature support table in the chat documentation.
*   Linking to the source code on GitHub for API reference.

Finally, it registers the `langchain-runpod` package in
`libs/packages.yml`.

## Dependencies:

None added to the core LangChain repository by these documentation
changes. The required dependency (`langchain-runpod`) is managed as a
separate package.

## Twitter handle:

@runpod_io

---------

Co-authored-by: Max Forsey <maxpod@maxpod.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:57:06 +00:00
Sydney Runkle
af66ab098e
Adding Perplexity extra and deprecating the community version of ChatPerplexity (#30649)
Plus, some accompanying docs updates

Some compelling usage:

```py
from langchain_perplexity import ChatPerplexity


chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
response = chat.invoke(
    "What were the most significant newsworthy events that occurred in the US recently?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
# > Here are the top significant newsworthy events in the US recently: ...
```

Also, some confirmation of structured outputs:

```py
from langchain_perplexity import ChatPerplexity
from pydantic import BaseModel


class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int


messages = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": (
            "Tell me about Michael Jordan. "
            "Please output a JSON object containing the following fields: "
            "first_name, last_name, year_of_birth, num_seasons_in_nba. "
        ),
    },
]

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
structured_llm = llm.with_structured_output(AnswerFormat)
response = structured_llm.invoke(messages)
print(repr(response))
#> AnswerFormat(first_name='Michael', last_name='Jordan', year_of_birth=1963, num_seasons_in_nba=15)
```
2025-04-03 14:29:17 -04:00
ccurme
374769e8fe
core[patch]: log information from certain errors (#30626)
Some exceptions raised by SDKs include information in httpx responses
(see for example
[OpenAI](https://github.com/openai/openai-python/blob/main/src/openai/_exceptions.py)).
Here we trace information from those exceptions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-04-03 16:45:19 +00:00
Sydney Runkle
17a9cd61e9
Bump langchain-core version in perplexity's pyproject.toml (#30647)
Blocking v0.1.0 release of `langchain-perplexity`
2025-04-03 16:19:10 +00:00
Sydney Runkle
3814bd1ea7
partners: Add Perplexity Chat Integration (#30618)
Perplexity's importance in the space has been growing, so we think it's
time to add an official integration!

Note: following the release of `langchain-perplexity` to `pypi`, we
should be able to add `perplexity` as an extra in
`libs/langchain/pyproject.toml`, but we're blocked by a circular import
for now.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 16:09:14 +00:00
Alejandro Rodríguez
884125e129
community: support usage_metadata for litellm (#30625)
Support "usage_metadata" for LiteLLM. 

2025-04-02 19:45:15 -04:00
Christophe Bornet
f241fd5c11
core: Add ruff rules RET (#29384)
See https://docs.astral.sh/ruff/rules/#flake8-return-ret
All auto-fixes
2025-04-02 16:59:56 -04:00
Eugene Yurtsev
9ae792f56c
core: 0.3.50 release (#30623)
0.3.50 release
2025-04-02 14:46:23 -04:00
Christophe Bornet
ccc3d32ec8
core: Add ruff rules for Pylint PLC (Convention) and PLE (Errors) (#29286)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-04-02 10:58:03 -04:00
ccurme
fe0fd9dd70
openai[patch]: upgrade tiktoken and fix test (#30621)
Related to https://github.com/langchain-ai/langchain/issues/30344

https://github.com/langchain-ai/langchain/pull/30542 introduced an
erroneous test for token counts for o-series models. tiktoken==0.8 does
not support o-series models in
`tiktoken.encoding_for_model(model_name)`, and this is the version of
tiktoken we had in the lock file. So we would default to `cl100k_base`
for o-series, which is the wrong encoding model. The test tested against
this wrong encoding (so it passed with tiktoken 0.8).

Here we update tiktoken to 0.9 in the lock file, and fix the expected
counts in the test. Verified that we are pulling
[o200k_base](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py#L8),
as expected.
2025-04-02 10:44:48 -04:00
oxy-tg
38807871ec
docs: Add Oxylabs integration (#30591)
Description:
This PR adds documentation for the langchain-oxylabs integration
package.

The documentation includes instructions for configuring Oxylabs
credentials and provides example code demonstrating how to use the
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
Langchain-Oxylabs package, located in docs/docs/integrations.
Added a provider page in docs/docs/providers.
Added a new package to libs/packages.yml.

Lint and Test:

Successfully ran make format, make lint, and make test.
2025-04-02 14:40:32 +00:00
ccurme
816492e1d3
openai: release 0.3.12 (#30616) 2025-04-02 13:20:15 +00:00
Bagatur
111dd90a46
openai[patch]: support structured output and tools (#30581)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-02 09:14:02 -04:00
Mahir Shah
9d3262c7aa
core: Propagate config_factories in RunnableBinding (#30603)
- **Description:** Propagates config_factories when calling decoration
methods for RunnableBinding--e.g. bind, with_config, with_types,
with_retry, and with_listeners. This ensures that configs attached to
the original RunnableBinding are kept when creating the new
RunnableBinding and the configs are merged during invocation. Picks up
where #30551 left off.
  - **Issue:** #30531

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-01 18:03:58 -04:00
ccurme
8a69de5c24
openai[patch]: ignore file blocks when counting tokens (#30601)
OpenAI does not appear to document how it transforms PDF pages to
images, which determines how tokens are counted:
https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#usage-considerations

Currently these block types raise ValueError inside
`get_num_tokens_from_messages`. Here we update to generate a warning and
continue.
2025-04-01 15:29:33 -04:00
Christophe Bornet
558191198f
core: Add ruff rule FBT003 (boolean-trap) (#29424)
See
https://docs.astral.sh/ruff/rules/boolean-positional-value-in-call/#boolean-positional-value-in-call-fbt003
This PR also fixes some FBT001/002 in private methods but does not
enforce these rules globally atm.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 17:40:12 +00:00
Christophe Bornet
4f8ea13cea
core: Add ruff rules PERF (#29375)
See https://docs.astral.sh/ruff/rules/#perflint-perf
2025-04-01 13:34:56 -04:00
Christophe Bornet
8a33402016
core: Add ruff rules PT (pytest) (#29381)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-04-01 13:31:07 -04:00
Christophe Bornet
768e4f695a
core: Add ruff rules S110 and S112 (#30599) 2025-04-01 13:17:22 -04:00
Christophe Bornet
88b4233fa1
core: Add ruff rules D (docstring) (#29406)
This ensures that the code is properly documented:
https://docs.astral.sh/ruff/rules/#pydocstyle-d

Related to #21983
2025-04-01 13:15:45 -04:00
Andras L Ferenczi
64df60e690
community[minor]: Add custom sitemap URL parameter to GitbookLoader (#30549)
## Description
This PR adds a new `sitemap_url` parameter to the `GitbookLoader` class
that allows users to specify a custom sitemap URL when loading content
from a GitBook site. This is particularly useful for GitBook sites that
use non-standard sitemap file names like `sitemap-pages.xml` instead of
the default `sitemap.xml`.
The standard `GitbookLoader` assumes that the sitemap is located at
`/sitemap.xml`, but some GitBook instances (including GitBook's own
documentation) use different paths for their sitemaps. This parameter
makes the loader more flexible and helps users extract content from a
wider range of GitBook sites.
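
A hedged usage sketch of the new parameter (the URLs are illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    sitemap_url="https://docs.gitbook.com/sitemap-pages.xml",  # non-standard sitemap location
)
docs = loader.load()
```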
## Issue
Fixes bug
[30473](https://github.com/langchain-ai/langchain/issues/30473) where
the `GitbookLoader` would fail to find pages on GitBook sites that use
custom sitemap URLs.
## Dependencies
No new dependencies required.
*I've added*:
* Unit tests to verify the parameter works correctly
* Integration tests to confirm the parameter is properly used with real
GitBook sites
* Updated docstrings with parameter documentation
The changes are fully backward compatible, as the parameter is optional
with a sensible default.

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-01 16:17:21 +00:00
Christophe Bornet
fdda1aaea1
core: Accept ALL ruff rules with exclusions (#30595)
This pull request updates the `pyproject.toml` configuration file to
modify the linting rules and ignored warnings for the project. The most
important changes include switching to a more comprehensive selection of
linting rules and updating the list of ignored rules to better align
with the project's requirements.

Linting rules update:

* Changed the `select` option to include all available linting rules by
setting it to `["ALL"]`.

Ignored rules update:

* Updated the `ignore` option to include specific rules that interfere
with the formatter, are incompatible with Pydantic, or are temporarily
excluded due to project constraints.
2025-04-01 11:17:51 -04:00
Kacper Włodarczyk
26a3256fc6
community[major]: DynamoDBChatMessageHistory bulk add messages, raise errors (#30572)
This PR addresses two key issues:

- **Prevent history errors from failing silently**: Previously, errors
in message history were only logged and not raised, which can lead to
inconsistent state and downstream failures (e.g., ValidationError from
Bedrock due to malformed message history). This change ensures that such
errors are raised explicitly, making them easier to detect and debug.
(Side note: I’m using AWS Lambda Powertools Logger but hadn’t configured
it properly with the standard Python logger—my bad. If the error had
been raised, I would’ve seen it in the logs 😄) This is a **BREAKING
CHANGE**

- **Add messages in bulk instead of iteratively**: This introduces a
custom add_messages method to add all messages at once. The previous
approach failed silently when individual messages were too large,
resulting in partial history updates and inconsistent state. With this
change, either all messages are added successfully or none are, helping
avoid obscure history-related errors from Bedrock (see the sketch below).
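
A hedged usage sketch of the bulk path (table and session values are illustrative):

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="session-1")
# Either the whole batch is written or an error is raised; nothing is added partially.
history.add_messages([HumanMessage(content="Hi"), AIMessage(content="Hello!")])
```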

---------

Co-authored-by: Kacper Wlodarczyk <kacper.wlodarczyk@chaosgears.com>
2025-04-01 11:13:32 -04:00
Armaanjeet Singh Sandhu
4bbc249b13
community: Fix attribute access for transcript text in YoutubeLoader (Fixes #30309) (#30582)
**Description:** 
Fixes a bug in the YoutubeLoader where FetchedTranscript objects were
not properly processed. The loader was only extracting the 'text'
attribute from FetchedTranscriptSnippet objects while ignoring 'start'
and 'duration' attributes. This would cause a TypeError when the code
later tried to access these missing keys, particularly when using the
CHUNKS format or any code path that needed timestamp information.

This PR modifies the conversion of FetchedTranscriptSnippet objects to
include all necessary attributes, ensuring that the loader works
correctly with all transcript formats.
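
A small sketch of the conversion described above; `Snippet` is a stand-in for `youtube-transcript-api`'s `FetchedTranscriptSnippet`:

```python
from dataclasses import dataclass


@dataclass
class Snippet:  # stand-in for FetchedTranscriptSnippet
    text: str
    start: float
    duration: float


fetched = [Snippet("Hello", 0.0, 1.2), Snippet("world", 1.2, 0.8)]
# Keep start and duration alongside text so CHUNKS mode can build timestamps.
transcript_pieces = [
    {"text": s.text, "start": s.start, "duration": s.duration} for s in fetched
]
```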

**Issue:** Fixes #30309

**Dependencies:** None

**Testing:**
- Tested the fix with multiple YouTube videos to confirm it resolves the
issue
- Verified that both regular loading and CHUNKS format work correctly
2025-04-01 07:13:06 -04:00
Ivan Brko
ecff055096
community[minor]: Improve Brave Search Tool, allow api key in env var (#30364)
- **Description:** 

- Make Brave Search Tool consistent with other tools and allow reading
its API key from `BRAVE_SEARCH_API_KEY` instead of having to pass the
API key manually (no breaking changes); see the sketch below
- Improve Brave Search Tool by storing api key in `SecretStr` instead of
plain `str`.
    - Add unit test for `BraveSearchWrapper`
    - Reflect the changes in the documentation
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** ivan_brko
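
A hedged sketch of the env-var flow, assuming the wrapper now defaults its key from `BRAVE_SEARCH_API_KEY` when none is passed explicitly:

```python
import os

from langchain_community.tools import BraveSearch
from langchain_community.utilities import BraveSearchWrapper

os.environ["BRAVE_SEARCH_API_KEY"] = "..."  # or export it in your shell beforehand
tool = BraveSearch(search_wrapper=BraveSearchWrapper())  # assumption: key read from the env
print(tool.run("LangChain"))
```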
2025-03-31 14:48:52 -04:00
ccurme
0c623045b5
core[patch]: pydantic 2.11 compat (#30554)
Release notes: https://pydantic.dev/articles/pydantic-v2-11-release

Covered here:

- We no longer access `model_fields` on class instances (that is now
deprecated);
- Update schema normalization for Pydantic version testing to reflect
changes to generated JSON schema (addition of `"additionalProperties":
True` for dict types with value Any or object).

## Considerations:

### Changes to JSON schema generation

#### Tool-calling / structured outputs

This may impact tool-calling + structured outputs for some providers,
but schema generation only changes if you have parameters of the form
`dict`, `dict[str, Any]`, `dict[str, object]`, etc. If dict parameters
are typed, my understanding is that there are no changes.

For OpenAI for example, untyped dicts work for structured outputs with
default settings before and after updating Pydantic, and error both
before/after if `strict=True`.

### Use of `model_fields`

There is one spot where we previously accessed `super(cls,
self).model_fields`, where `cls` is an object in the MRO. This was done
for the purpose of tracking aliases in secrets. I've updated this to
always be `type(self).model_fields`-- see comment in-line for detail.

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-03-31 14:22:57 -04:00
keshavshrikant
e8be3cca5c
fix huggingface tokenizer default length function (#30185)
#30184
2025-03-31 11:54:30 -04:00
Wenqi Li
64f97e707e
ollama[patch]: Support seed param for OllamaLLM (#30553)
**Description:** Add the `seed` param to `OllamaLLM` for client
reproducibility (see the sketch below).

**Issue:** Follow-up of a similar issue,
https://github.com/langchain-ai/langchain/issues/24703;
see also https://github.com/langchain-ai/langchain/pull/24782

**Dependencies:** None
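
A minimal usage sketch (the model name is illustrative and assumes a local Ollama server):

```python
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1", seed=42)  # fixed seed for reproducible generations
print(llm.invoke("Tell me a joke"))
```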
2025-03-31 11:28:49 -04:00
Christophe Bornet
8395abbb42
core: Fix test_stream_error_callback (#30228)
Fixes #29436

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-31 10:37:22 -04:00
Christophe Bornet
026de908eb
core: Add ruff rules G, FA, INP, AIR and ISC (#29334)
Fixes mostly for rules G. See
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-03-31 10:05:23 -04:00
ccurme
b4fe1f1ec0
groq: release 0.3.2 (#30570) 2025-03-31 13:29:45 +00:00
ccurme
9c682af8f3
langchain: release 0.3.22 (#30557)
Closes https://github.com/langchain-ai/langchain/issues/30536
2025-03-30 14:48:22 -04:00
William FH
b075eab3e0
Include delayed inputs in langchain tracer (#30546) 2025-03-28 16:07:22 -07:00
Thommy257
372dc7f991
core[patch]: fix loss of partially initialized variables during prompt composition (#30096)
**Description:**
This PR addresses the loss of partially initialised variables when
composing different prompts. I.e. it allows the following snippet to
run:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([('system', 'Prompt {x} {y}')]).partial(x='1')
appendix = ChatPromptTemplate.from_messages([('system', 'Appendix {z}')])

(prompt + appendix).invoke({'y': '2', 'z': '3'})
```

Previously, this would have raised a `KeyError`, stating that variable
`x` remains undefined.

**Issue**
References issue #30049

**Todo**
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 20:41:57 +00:00
Koshik Debanath
e7883d5b9f
langchain-openai: Support token counting for o-series models in ChatOpenAI (#30542)
Related to #30344

Add support for token counting for o-series models in
`test_token_counts.py`.

* **Update `_MODELS` and `_CHAT_MODELS` dictionaries**
- Add "o1", "o3", and "gpt-4o" to `_MODELS` and `_CHAT_MODELS`
dictionaries.

* **Update token counts**
  - Add token counts for "o1", "o3", and "gpt-4o" models.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30542?shareId=ab208bf7-80a3-4b8d-80c4-2287486fedae).
2025-03-28 16:02:09 -04:00
Eugene Yurtsev
d075ad21a0
core[patch]: specify default event loop scope in pyproject.toml (#30543)
Specify default event loop scope
2025-03-28 19:51:19 +00:00
Ahmed Tammaa
f23c3e2444
text-splitters[patch]: Refactor HTMLHeaderTextSplitter for Enhanced Maintainability and Readability (#29397)
Please see PR #27678 for context

## Overview

This pull request presents a refactor of the `HTMLHeaderTextSplitter`
class aimed at improving its maintainability and readability. The
primary enhancements include simplifying the internal structure by
consolidating multiple private helper functions into a single private
method, thereby reducing complexity and making the codebase easier to
understand and extend. Importantly, all existing functionalities and
public interfaces remain unchanged.

## PR Goals

1. **Simplify Internal Logic**:
- **Consolidation of Private Methods**: The original implementation
utilized multiple private helper functions (`_header_level`,
`_dom_depth`, `_get_elements`) to manage different aspects of HTML
parsing and document generation. This fragmentation increased cognitive
load and potential maintenance overhead.
- **Streamlined Processing**: By merging these functionalities into a
single private method (`_generate_documents`), the class now offers a
more straightforward flow, making it easier for developers to trace and
understand the processing steps. (Thanks to @eyurtsev)

2. **Enhance Readability**:
- **Clearer Method Responsibilities**: With fewer private methods, each
method now has a more focused responsibility. The primary logic resides
within `_generate_documents`, which handles both HTML traversal and
document creation in a cohesive manner.
- **Reduced Redundancy**: Eliminating redundant checks and consolidating
logic reduces the code's verbosity, making it more concise without
sacrificing clarity.

3. **Improve Maintainability**:
- **Easier Debugging and Extension**: A simplified internal structure
allows for quicker identification of issues and easier implementation of
future enhancements or feature additions.
- **Consistent Header Management**: The new implementation ensures that
headers are managed consistently within a single context, reducing the
likelihood of bugs related to header scope and hierarchy.

4. **Maintain Backward Compatibility**:
- **Unchanged Public Interface**: All public methods (`split_text`,
`split_text_from_url`, `split_text_from_file`) and their signatures
remain unchanged, ensuring that existing integrations and usage patterns
are unaffected.
- **Preserved Docstrings**: Comprehensive docstrings are retained,
providing clear documentation for users and developers alike.

## Detailed Changes

1. **Removed Redundant Private Methods**:
- **Eliminated `_header_level`, `_dom_depth`, and `_get_elements`**:
These methods were merged into the `_generate_documents` method,
centralizing the logic for HTML parsing and document generation.

2. **Consolidated Document Generation Logic**:
- **Single Private Method `_generate_documents`**: This method now
handles the entire process of parsing HTML, tracking active headers,
managing document chunks, and yielding `Document` instances. This
consolidation reduces the number of moving parts and simplifies the
overall processing flow.

3. **Simplified Header Management**:
- **Immediate Header Scope Handling**: Headers are now managed within
the traversal loop of `_generate_documents`, ensuring that headers are
added or removed from the active headers dictionary in real-time based
on their DOM depth and hierarchy.
- **Removed `chunk_dom_depth` Attribute**: The need to track chunk DOM
depth separately has been eliminated, as header scopes are now directly
managed within the traversal logic.

4. **Streamlined Chunk Finalization**:
- **Enhanced `finalize_chunk` Function**: The chunk finalization process
has been simplified to directly yield a single `Document` when needed,
without maintaining an intermediate list. This change reduces
unnecessary list operations and makes the logic more straightforward.

5. **Improved Variable Naming and Flow**:
- **Descriptive Variable Names**: Variables such as `current_chunk` and
`node_text` provide clear insights into their roles within the
processing logic.
- **Direct Header Removal Logic**: Headers that are out of scope are
removed immediately during traversal, ensuring that the active headers
dictionary remains accurate and up-to-date.

6. **Preserved Comprehensive Docstrings**:
- **Unchanged Documentation**: All existing docstrings, including
class-level and method-level documentation, remain intact. This ensures
that users and developers continue to have access to detailed usage
instructions and method explanations.

## Testing

All existing test cases from `test_html_header_text_splitter.py` have
been executed against the refactored code. The results confirm that:

- **Functionality Remains Intact**: The splitter continues to accurately
parse HTML content, respect header hierarchies, and produce the expected
`Document` objects with correct metadata.
- **Backward Compatibility is Maintained**: No changes were required in
the test cases, and all tests pass without modifications, demonstrating
that the refactor does not introduce any regressions or alter existing
behaviors.


This example remains fully operational and behaves as before, returning
a list of `Document` objects with the expected metadata and content
splits.

## Conclusion

This refactor achieves a more maintainable and readable codebase by
simplifying the internal structure of the `HTMLHeaderTextSplitter`
class. By consolidating multiple private methods into a single, cohesive
private method, the class becomes easier to understand, debug, and
extend. All existing functionalities are preserved, and comprehensive
tests confirm that the refactor maintains the expected behavior. These
changes align with LangChain’s standards for clean, maintainable, and
efficient code.

---

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:36:00 -04:00
omahs
6f8735592b
docs,langchain-community: Fix typos in docs and code (#30541)
Fix typos
2025-03-28 19:21:16 +00:00
Agus
47d50f49d9
docs: Add GOAT integration to docs (#30478)
This PR adds:
1. Docs for the GOAT integration 
2. An "Agentic Finance" table to the Tools page that includes GOAT

**Twitter handle**: @0xaguspunk
2025-03-28 15:19:37 -04:00
Shixian Sheng
94a7fd2497
docs: fix broken hyperlinks in fireworks integration package README (#30538)
Fix two broken hyperlinks
2025-03-28 15:18:44 -04:00
Oskar Stark
0d2cea747c
docs: streamline LangSmith teasing (#30302)
This can only be reviewed by [hiding
whitespaces](https://github.com/langchain-ai/langchain/pull/30302/files?diff=unified&w=1).

The motivation behind this PR is to get my hands on the docs and make
the LangSmith teasing short and clear.

Right now I don't know how to do it, but this could be an include in the
future.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:13:22 -04:00
Eugene Yurtsev
dd0faab07e fix types 2025-03-28 14:23:50 -04:00
Eugene Yurtsev
21ab1dc675 Merge branch 'master' of github.com:xzq-xu/langchain into xzq-xu/master 2025-03-28 13:56:49 -04:00
Eugene Yurtsev
22cee5d983 x 2025-03-28 13:56:10 -04:00
Eugene Yurtsev
a14d8b103b
Merge branch 'master' into master 2025-03-28 13:53:58 -04:00
Eugene Yurtsev
6d22f40a0b x 2025-03-28 13:51:06 -04:00
Philippe PRADOS
92189c8b31
community[patch]: Handle gray scale images in ImageBlobParser (Fixes 30261 and 29586) (#30493)
Fix [29586](https://github.com/langchain-ai/langchain/issues/29586) and
[30261](https://github.com/langchain-ai/langchain/pull/30261)
2025-03-28 10:15:40 -04:00
小豆豆学长
1f0686db80
community: add netmind integration (#30149)
Co-authored-by: yanrujing <rujing.yan@protagonist-ai.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-27 15:27:04 -04:00
Kyungho Byoun
e6b6c07395
community: add HANA dialect to SQLDatabase (#30475)
This PR includes support for HANA dialect in SQLDatabase, which is a
wrapper class for SQLAlchemy.

Currently, it is not possible to set a schema name when using HANA DB with
LangChain. Also, no message is shown to the user, which makes it hard to
figure out why the SQL does not work as expected (see the sketch below).

Here is the reference document for HANA DB to set schema for the
session.

- [SET SCHEMA Statement (Session
Management)](https://help.sap.com/docs/SAP_HANA_PLATFORM/4fe29514fd584807ac9f2a04f6754767/20fd550375191014b886a338afb4cd5f.html)
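
A hedged usage sketch; the connection string and schema name are illustrative, and the `hana+hdbcli` dialect assumes `sqlalchemy-hana` is installed:

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    "hana+hdbcli://USER:PASSWORD@hana-host:39015",
    schema="MY_SCHEMA",  # with HANA support, issues SET SCHEMA for the session
)
print(db.get_usable_table_names())
```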
2025-03-27 15:19:50 -04:00
Christophe Bornet
e181d43214
core: Bump ruff version to 0.11 (#30519)
Changes are from the new TC006 rule:
https://docs.astral.sh/ruff/rules/runtime-cast-value/
TC006 is auto-fixed.
2025-03-27 13:01:49 -04:00
ccurme
59908f04d4
fireworks: release 0.2.9 (#30527) 2025-03-27 16:04:20 +00:00
ccurme
05482877be
mistralai: release 0.2.10 (#30526) 2025-03-27 16:01:40 +00:00
Andras L Ferenczi
63673b765b
Fix: Enable max_retries Parameter in ChatMistralAI Class (#30448)
**partners: Enable max_retries in ChatMistralAI**

**Description**

- This pull request reactivates the retry logic in the
`completion_with_retry` method of the `ChatMistralAI` class, restoring the
intended functionality of the previously ineffective `max_retries`
parameter (see the usage sketch below). Adds a new unit test that mocks
failed/successful retry calls and an integration test to confirm
end-to-end functionality.

**Issue**
- Closes #30362

**Dependencies**
- No additional dependencies required
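
A minimal usage sketch (the model name is illustrative; `MISTRAL_API_KEY` is assumed to be set):

```python
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest", max_retries=3)  # retries now take effect
print(llm.invoke("Hello!").content)
```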

Co-authored-by: andrasfe <andrasf94@gmail.com>
2025-03-27 11:53:44 -04:00
Keiichi Hirobe
956b09f468
core[patch]: stop deleting records with "scoped_full" when doc is empty (#30520)
Fix a bug that causes `scoped_full` in index to delete records when there are no input docs.
2025-03-27 11:04:34 -04:00
Christophe Bornet
b28a474e79
core[patch]: Add ruff rules for PLW (Pylint Warnings) (#29288)
See https://docs.astral.sh/ruff/rules/#warning-w_1

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-27 10:26:12 +00:00
xzq.xu
92dc3f7341 format test lint passed 2025-03-27 13:44:59 +08:00
xzq.xu
d0a9808148 modify test name 2025-03-27 13:34:51 +08:00
xzq.xu
ed2428f902 add a unit test 2025-03-27 12:43:16 +08:00
David Sánchez Sánchez
75823d580b
community: fix perplexity response parameters not being included in model response (#30440)
This pull request includes enhancements to the `perplexity.py` file in
the `chat_models` module, focusing on improving the handling of
additional keyword arguments (`additional_kwargs`) in message processing
methods. Additionally, new unit tests have been added to ensure the
correct inclusion of citations, images, and related questions in the
`additional_kwargs`.

Issue: resolves https://github.com/langchain-ai/langchain/issues/30439

Enhancements to `perplexity.py`:

*
[`libs/community/langchain_community/chat_models/perplexity.py`](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212):
Modified the `_convert_delta_to_message_chunk`, `_stream`, and
`_generate` methods to handle `additional_kwargs`, which include
citations, images, and related questions.
[[1]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212)
[[2]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL277-L286)
[[3]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fR324-R331)

New unit tests:

*
[`libs/community/tests/unit_tests/chat_models/test_perplexity.py`](diffhunk://#diff-dab956d79bd7d17a0f5dea3f38ceab0d583b43b63eb1b29138ee9b6b271ba1d9R119-R275):
Added new tests `test_perplexity_stream_includes_citations_and_images`
and `test_perplexity_stream_includes_citations_and_related_questions` to
verify that the `stream` method correctly includes citations, images,
and related questions in the `additional_kwargs`.
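
A hedged illustration of reading the new keys after this change (the model name is illustrative; `PPLX_API_KEY` is assumed to be set):

```python
from langchain_community.chat_models import ChatPerplexity

llm = ChatPerplexity(model="sonar")
msg = llm.invoke("What is LangChain?")
print(msg.additional_kwargs.get("citations"))
print(msg.additional_kwargs.get("related_questions"))
```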
2025-03-26 22:28:08 -04:00
Adeel Ehsan
d7d0bca2bc
docs: add vectara to libs package yml (#30504) 2025-03-26 16:47:53 -04:00
ccurme
a9b1e1b177
openai: release 0.3.11 (#30503) 2025-03-26 19:24:37 +00:00
ccurme
8119a7bc5c
openai[patch]: support streaming token counts in AzureChatOpenAI (#30494)
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.

Like the [OpenAI
SDK](f66d2e6fdc/src/openai/resources/completions.py (L68)),
ChatOpenAI does not return usage metadata during streaming by default
(which adds an extra chunk to the stream). The OpenAI SDK requires users
to pass `stream_options={"include_usage": True}`. ChatOpenAI implements
a convenience argument `stream_usage: Optional[bool]`, and an attribute
`stream_usage: bool = False`.

Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
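
A hedged usage sketch of `stream_usage` on `AzureChatOpenAI` after this change; the deployment and API version are illustrative, and Azure credentials are assumed to be configured via environment variables:

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    api_version="2024-08-01-preview",
    stream_usage=True,  # now honored here, not just on ChatOpenAI
)
for chunk in llm.stream("hello"):
    if chunk.usage_metadata:
        print(chunk.usage_metadata)  # token counts arrive on an extra final chunk
```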

---

Additional consideration: we must be sensitive to the number of users
using BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.

Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.

To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change (as
currently values of False are not passed to the client). IMO: if / when
this change happens, we could accompany it with this update in a minor
bump.

--- 

Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-26 15:16:37 -04:00
ccurme
f68eaab44f
tests: release 0.3.17 (#30502) 2025-03-26 18:56:54 +00:00
Louis Auneau
0b532a4ed0
community: Azure Document Intelligence parser features not available fixed (#30370)

- **Description:** The Azure Document Intelligence OCR solution has a
*feature* parameter that enables capabilities such as high-resolution
document analysis, key-value pair extraction, etc. In the LangChain
parser, it could be provided as an `analysis_feature` parameter to the
constructor, which passed it on to the `DocumentIntelligenceClient`.
However, according to the `DocumentIntelligenceClient` [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-documentintelligence/azure.ai.documentintelligence.documentintelligenceclient?view=azure-python),
this is not a valid constructor parameter. It was therefore removed and
is instead stored as a parser property that is used in
`begin_analyze_document`'s `features` parameter (see [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentanalysisclient?view=azure-python#azure-ai-formrecognizer-documentanalysisclient-begin-analyze-document)).
I also removed the check for "Supported features" since all features are
supported out of the box. I did not check whether the provided `str`
actually corresponds to the Azure package's enumeration of features,
since the `ValueError` raised when creating the enumeration object is
explicit enough.
Last caveat: some features are not supported for some kinds of
documents. This is documented in the Microsoft documentation, and the
exceptions are also explicit.
- **Issue:** N/A
- **Dependencies:** No
- **Twitter handle:** @Louis___A

---------

Co-authored-by: Louis Auneau <louis@handshakehealth.co>
2025-03-26 14:40:14 -04:00
Philippe PRADOS
8e5d2a44ce
community[patch]: update PyPDFParser to take into account filters returned as arrays (#30489)
The image parsing was generating a bug because the extracted objects for
the /Filter key sometimes return an array and sometimes a string.

Fix [Issue
30098](https://github.com/langchain-ai/langchain/issues/30098)
2025-03-26 14:16:54 -04:00
ccurme
422ba4cde5
infra: handle flaky tests (#30501) 2025-03-26 13:28:56 -04:00
ccurme
9a80be7bb7
core[patch]: release 0.3.49 (#30500) 2025-03-26 13:26:32 -04:00
ccurme
299b222c53
mistral[patch]: check types in adding model_name to response_metadata (#30499) 2025-03-26 16:30:09 +00:00
ccurme
22d1a7d7b6
standard-tests[patch]: require model_name in response_metadata if returns_usage_metadata (#30497)
We are implementing a token-counting callback handler in
`langchain-core` that is intended to work with all chat models
supporting usage metadata. The callback will aggregate usage metadata by
model. This requires responses to include the model name in their
metadata.

To support this, if a model `returns_usage_metadata`, we check that it
includes a string model name in its `response_metadata` in the
`"model_name"` key.

More context: https://github.com/langchain-ai/langchain/pull/30487
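
For illustration, a sketch of the kind of assertion this adds (model choice
illustrative, shape assumed):

```python
from langchain_openai import ChatOpenAI

result = ChatOpenAI(model="gpt-4o-mini").invoke("hello")
# If the model reports usage metadata, its response metadata must name the model.
assert isinstance(result.response_metadata.get("model_name"), str)
```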
2025-03-26 12:20:53 -04:00
Ante Javor
20f82502e5
Community: Add Memgraph integration docs (#30457)
Thank you for contributing to LangChain!

**Description:** 
Since we just implemented
[langchain-memgraph](https://pypi.org/project/langchain-memgraph/)
integration, we are adding basic docs to your site, based on [this
comment](https://github.com/langchain-ai/langchain/pull/30197#pullrequestreview-2671616410)
from @ccurme.
   
 **Twitter handle:**
 [@memgraphdb](https://x.com/memgraphdb)


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-26 11:58:09 -04:00
xzq.xu
913c8b71d9 format import 2025-03-26 23:34:38 +08:00
xzq.xu
7e3dea5db8 add a new-line 2025-03-26 23:32:07 +08:00
xzq.xu
d602141ab1 remove unused e 2025-03-26 23:10:41 +08:00
xzq.xu
dd9031fc82 _prep_run_args,tool_input copy, Exception 2025-03-26 23:06:43 +08:00
xzq.xu
3382b0d8ea _prep_run_args,tool_input copy 2025-03-26 22:56:32 +08:00
xzq.xu
65ecc22606 # Fix: Prevent run_manager from being added to state object 2025-03-26 22:36:31 +08:00
ccurme
7e62e3a137
core[patch]: store model names on usage callback handler (#30487)
So we avoid mingling tokens from different models.
2025-03-25 21:26:09 -04:00
ccurme
32827765bf
core[patch]: mark usage callback handler as beta (#30486) 2025-03-25 23:25:57 +00:00
Eugene Yurtsev
9f345d64fd
core[patch]: Remove old accidental commit (#30483)
Remove commented out file that was accidentally added

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-25 15:37:20 -07:00
ccurme
4b9e2e51f3
core[patch]: add token counting callback handler (#30481)
Stripped-down version of
[OpenAICallbackHandler](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/callbacks/openai_info.py)
that just tracks `AIMessage.usage_metadata`.

```python
from langchain_core.callbacks import get_usage_metadata_callback
from langgraph.prebuilt import create_react_agent

def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."

tools = [get_weather]
agent = create_react_agent("openai:gpt-4o-mini", tools)

with get_usage_metadata_callback() as cb:
    result = await agent.ainvoke({"messages": "What's the weather in Boston?"})
    print(cb.usage_metadata)
```
2025-03-25 18:16:39 -04:00
Eugene Yurtsev
0acca6b9c8
core[patch]: Fix handling of title when tool schema is specified manually via JSONSchema (#30479)
Fix issue: https://github.com/langchain-ai/langchain/issues/30456
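
For context, a hedged sketch of the scenario (schema contents illustrative):
binding a raw JSON Schema dict, where the `title` is what ends up being used as
the tool name:

```python
from langchain_openai import ChatOpenAI

tool_schema = {
    "title": "get_weather",
    "description": "Get the weather at a location.",
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

llm_with_tool = ChatOpenAI(model="gpt-4o-mini").bind_tools([tool_schema])
```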
2025-03-25 15:15:24 -04:00
Ben Chambers
c5e42a4027
community: deprecate graph vector store (#30328)
- **Description:** mark GraphVectorStore `@deprecated`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-25 13:52:54 +00:00
Ian Muge
a8ce63903d
community: Add edge properties to the gremlin graph schema (#30449)
Description: Extend the Gremlin graph schema to include edge
properties, grouped by their triples, i.e. `inVLabel` and `outVLabel`.
This should give more context when crafting queries to run against a
Gremlin graph database.
2025-03-24 19:03:01 -04:00
ccurme
b60e6f6efa
community[patch]: update API ref for AmazonTextractPDFParser (#30468) 2025-03-24 23:02:52 +00:00
David Sánchez Sánchez
3ba0d28d8e
community: update perplexity docstring (#30451)
This pull request includes extensive documentation updates for the
`ChatPerplexity` class in the
`libs/community/langchain_community/chat_models/perplexity.py` file. The
changes provide detailed setup instructions, key initialization
arguments, and usage examples for various functionalities of the
`ChatPerplexity` class.

Documentation improvements:

* Added setup instructions for installing the `openai` package and
setting the `PPLX_API_KEY` environment variable.
* Documented key initialization arguments for completion parameters and
client parameters, including `model`, `temperature`, `max_tokens`,
`streaming`, `pplx_api_key`, `request_timeout`, and `max_retries`.
* Provided examples for instantiating the `ChatPerplexity` class,
invoking it with messages, using structured output, invoking with
perplexity-specific parameters, streaming responses, and accessing token
usage and response metadata (a condensed sketch follows this list).
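
A condensed instantiation sketch drawn from those documented parameters (model
name illustrative):

```python
from langchain_community.chat_models import ChatPerplexity

llm = ChatPerplexity(
    model="sonar",        # illustrative; use whichever model Perplexity currently serves
    temperature=0.7,
    pplx_api_key="...",   # or set the PPLX_API_KEY environment variable
)
print(llm.invoke("Give me one fun fact about octopuses.").content)
```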
2025-03-24 15:01:02 -04:00
Vadym Barda
97dec30eea
docs[patch]: update trim_messages doc (#30462) 2025-03-24 18:50:48 +00:00
ccurme
c2dd8d84ff
infra[patch]: remove pyspark from langchain-community extended testing requirements (#30466) 2025-03-24 14:41:54 -04:00
ccurme
aa30d2d57f
standard-tests: release 0.3.16 (#30464) 2025-03-24 18:35:12 +00:00
ccurme
b09e7c125c
cli: use pytest-watcher (#30465)
pytest-watch is no longer maintained.
2025-03-24 18:06:31 +00:00
ccurme
50ec4a1a4f
openai[patch]: attempt to make test less flaky (#30463) 2025-03-24 17:36:36 +00:00
ccurme
8486e0ae80
openai[patch]: bump openai sdk (#30461)
[New required
field](https://github.com/openai/openai-python/pull/2223/files#diff-530fd17eb1cc43440c82630df0ddd9b0893cf14b04065a95e6eef6cd2f766a44R26)
for `ResponseUsage` released in 1.66.5.
2025-03-24 12:10:00 -04:00
ccurme
cbbc968903
openai: release 0.3.10 (#30460) 2025-03-24 15:37:53 +00:00
ccurme
ed5e589191
openai[patch]: support multi-turn computer use (#30410)
Here we accept ToolMessages of the form
```python
ToolMessage(
    content=<representation of screenshot> (see below),
    tool_call_id="abc123",
    additional_kwargs={"type": "computer_call_output"},
)
```
and translate them to `computer_call_output` items for the Responses
API.

We also propagate `reasoning_content` items from AIMessages.

## Example

### Load screenshots
```python
import base64

def load_png_as_base64(file_path):
    with open(file_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
        return encoded_string.decode('utf-8')

screenshot_1_base64 = load_png_as_base64("/path/to/screenshot/of/application.png")
screenshot_2_base64 = load_png_as_base64("/path/to/screenshot/of/desktop.png")
```

### Initial message and response
```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="computer-use-preview",
    model_kwargs={"truncation": "auto"},
)

tool = {
    "type": "computer_use_preview",
    "display_width": 1024,
    "display_height": 768,
    "environment": "browser"
}
llm_with_tools = llm.bind_tools([tool])

input_message = HumanMessage(
    content=[
        {
            "type": "text",
            "text": (
                "Click the red X to close and reveal my Desktop. "
                "Proceed, no confirmation needed."
            )
        },
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_1_base64}",
        }
    ]
)

response = llm_with_tools.invoke(
    [input_message],
    reasoning={
        "generate_summary": "concise",
    },
)
response.additional_kwargs["tool_outputs"]
```

### Construct ToolMessage
```python
tool_call_id = response.additional_kwargs["tool_outputs"][0]["call_id"]

tool_message = ToolMessage(
    content=[
        {
            "type": "input_image",
            "image_url": f"data:image/png;base64,{screenshot_2_base64}"
        }
    ],
    #  content=f"data:image/png;base64,{screenshot_2_base64}",  # <-- also acceptable
    tool_call_id=tool_call_id,
    additional_kwargs={"type": "computer_call_output"},
)
```

### Invoke again
```python
messages = [
    input_message,
    response,
    tool_message,
]

response_2 = llm_with_tools.invoke(
    messages,
    reasoning={
        "generate_summary": "concise",
    },
)
```
2025-03-24 15:25:36 +00:00
Vadym Barda
7bc50730aa
core[patch]: release 0.3.48 (#30458) 2025-03-24 09:48:03 -04:00
Mohammad Mohtashim
33f1ab1528
Youtube Loader load method Fixed (#30314)
- **Description:** Fixed the `YoutubeLoader` loading method not
returning the correct object
- **Issue:** #30309

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-23 14:48:03 -04:00
Simon Paredes
df4448dfac
langchain-groq: Add response metadata when streaming (#30379)
- **Description:** Add missing `model_name` and `system_fingerprint`
metadata when streaming.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:34:41 -04:00
Changyong Um
e2d9fe766f
community[tool]: Integrate a tool for the naver_search (#30392)
Hello!
I have reopened a pull request for tool integration.
Please refer to the previous
[PR](https://github.com/langchain-ai/langchain/pull/30248).

I understand that for the tool integration, a separate package should be
created, and only the documentation should be added under docs/docs/. If
there are any other procedures, please let me know.


[langchain-naver-community](https://github.com/e7217/langchain-naver-community)

cc: @ccurme

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-23 14:05:24 -04:00
ccurme
d867afff1c
docs: update package table ordering (#30437)
Update download counts (only impacts ordering, counts in rendered page
are updated automatically).
2025-03-22 18:07:08 -04:00
Matthew Farrellee
e7032901c3
langchain-tests: allow test_serdes for packages outside the default valid namespaces (#30343)
**Description:**

A third-party package not listed in the default valid namespaces cannot
pass test_serdes because load() does not allow extending the
valid_namespaces.

test_serdes will fail with:
ValueError: Invalid namespace: {'lc': 1, 'type': 'constructor', 'id':
['langchain_other', 'chat_models', 'ChatOther'], 'kwargs':
{'model_name': '...', 'api_key': '...'}, 'name': 'ChatOther'}

This change has test_serdes automatically extend valid_namespaces based
on the namespace of the ChatModel under test.
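
A sketch of the failure mode and the fix, assuming a hypothetical third-party
package `langchain_other` that provides `ChatOther`:

```python
from langchain_core.load import load

# Serialized representation copied from the error message above (secrets redacted).
serialized = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain_other", "chat_models", "ChatOther"],
    "kwargs": {"model_name": "...", "api_key": "..."},
}

# load(serialized)                                        # ValueError: Invalid namespace ...
# load(serialized, valid_namespaces=["langchain_other"])  # namespace accepted, import proceeds
```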
2025-03-22 17:27:39 -04:00
Jiwon Kang
699475a01d
community: uuidv1 is unsafe (#30432)
this_row_id previously used UUID v1. However, since UUID v1 can be
predicted if the MAC address and timestamp are known, it poses a
potential security risk. Therefore, it has been changed to UUID v4.
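
A quick illustration of the difference, using only the standard library:

```python
import uuid

# uuid1 embeds the host MAC address and a timestamp, so values can be predicted.
print(uuid.uuid1())
# uuid4 is generated from random bits, so row ids are not guessable.
print(uuid.uuid4())
```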
2025-03-22 15:27:49 -04:00
Dhruvajyoti Sarma
31551dab40
feature: added warning when duckdb is used as a vectorstore without pandas (#30435)
added warning when duckdb is used as a vectorstore without pandas being
installed (currently used for similarity search result processing)

- **Description:** displays a warning when using duckdb as a vector
store without pandas being installed, as it is used by the
`similarity_search` function (a minimal sketch of the guard follows this list)
    - **Issue:** #29933 
    - **Dependencies:** None
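
A minimal sketch of such a guard (placement and message wording illustrative,
not the exact implementation):

```python
import importlib.util
import warnings

if importlib.util.find_spec("pandas") is None:
    warnings.warn(
        "pandas is not installed; DuckDB similarity_search results will be "
        "returned without DataFrame post-processing.",
        UserWarning,
    )
```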

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-22 19:27:21 +00:00
Cesar Sanz
5383abfeee
Fix incorrect import path for AzureAIChatCompletionsModel (#30417)
Fixes #30416

Correct the import path for `AzureAIChatCompletionsModel` in the
`_init_chat_model_helper` function.

* Update the import statement in
`libs/langchain/langchain/chat_models/base.py` to `from
langchain_azure_ai.chat_models import AzureAIChatCompletionsModel`.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30417?shareId=6ff6d5de-e3d1-4972-8d24-5e74838e9945).
2025-03-22 07:44:51 -04:00
Misakar
7750ad588b
community:ChatLiteLLM support output reasoning content (#30430) 2025-03-22 07:43:33 -04:00
Adrián Panella
b75573e858
core: add tool_call exclusion in filter_message (#30289)
Extend functionality to allow filtering pairs of tool calls (AI +
tool).
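
A hedged usage sketch; the keyword name is assumed from the PR and may differ
in the final API:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage, filter_messages

messages = [
    HumanMessage("What's the weather in Boston?"),
    AIMessage("", tool_calls=[{"name": "get_weather", "args": {"city": "Boston"}, "id": "call_1"}]),
    ToolMessage("Sunny.", tool_call_id="call_1"),
    AIMessage("It's sunny in Boston."),
]

# Drops the paired AI tool-call message and its ToolMessage, keeping the rest.
filtered = filter_messages(messages, exclude_tool_calls=True)
```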

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 23:05:29 +00:00
Vadym Barda
673ec00030
docs[patch]: add warning to token counter docstring (#30426) 2025-03-21 18:59:40 -04:00
Adrián Panella
3933a4abc3
core(mermaid): allow greater customization (#29939)
Adds greater style customization by allowing a custom frontmatter
config. This makes it possible to set a `theme` and `look`, or to adjust
the theme by setting `themeVariables`.

Example:

```python
from langchain_core.runnables.graph import NodeStyles

node_colors = NodeStyles(
    default="fill:#e2e2e2,line-height:1.2,stroke:#616161",
    first="fill:#cfeab8,fill-opacity:0",
    last="fill:#eac3b8",
)

frontmatter_config = {
    "config": {
        "theme": "neutral",
        "look": "handDrawn"
    }
}

# `graph` is assumed to be a compiled graph object (e.g. a LangGraph StateGraph after .compile()).
graph.get_graph().draw_mermaid_png(node_colors=node_colors, frontmatter_config=frontmatter_config)
```


![image](https://github.com/user-attachments/assets/11b56d30-3be2-482f-8432-3ce704a09552)

---------

Co-authored-by: vbarda <vadym@langchain.dev>
2025-03-21 18:25:26 -04:00
Vadym Barda
07823cd41c
core[patch]: optimize trim_messages (#30327)
Refactored w/ Claude

Up to 20x speedup! (with theoretical max improvement of `O(n / log n)`)
2025-03-21 17:08:26 -04:00
ccurme
b78ae7817e
openai[patch]: trace strict in structured_output_kwargs (#30425) 2025-03-21 14:37:28 -04:00
ccurme
1de7fa8f3a
Revert "deepseek: temporarily bypass tests" (#30424)
Reverts langchain-ai/langchain#30423
2025-03-21 17:14:31 +00:00
ccurme
c74dfff836
deepseek: temporarily bypass tests (#30423)
Deepseek infra is not stable enough to get through integration tests.

Previous two attempts had two tests time out; both pass locally.
2025-03-21 17:08:35 +00:00
ccurme
7147903724
deepseek: release 0.1.3 (#30422) 2025-03-21 16:39:50 +00:00
Andras L Ferenczi
b5f49df86a
partner: ChatDeepSeek on openrouter not returning reasoning (#30240)
Deepseek model does not return reasoning when hosted on openrouter
(Issue [30067](https://github.com/langchain-ai/langchain/issues/30067))

the following code did not return reasoning:

```python
import os

from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek/deepseek-r1:nitro",
    api_base="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)
messages = [
    {"role": "system", "content": "You are an assistant."},
    {"role": "user", "content": "9.11 and 9.8, which is greater? Explain the reasoning behind this decision."},
]
response = llm.invoke(messages, extra_body={"include_reasoning": True})
print(response.content)
print(f"REASONING: {response.additional_kwargs.get('reasoning_content', '')}")
print(response)
```

The fix is to extract reasoning from
response.choices[0].message["model_extra"] and from
choices[0].delta["reasoning"], and place it in the response's
additional_kwargs. The change is really just the addition of a couple of
one-line if statements.

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 16:35:37 +00:00
Vadym Barda
4852ab8d0a
core[patch]: more tests for trim_messages (#30421) 2025-03-21 16:19:52 +00:00
ccurme
e8e3b2bfae
ollama: release 0.3.0 (#30420) 2025-03-21 15:50:08 +00:00