mirror of https://github.com/hwchase17/langchain.git synced 2025-04-29 20:35:43 +00:00
Commit Graph

12863 Commits

Author SHA1 Message Date
Chester Curme
f333720aad propagate to tool calls 2025-03-17 14:12:31 -04:00
Oskar Stark
192035f8c0
docs: fix typo () 2025-03-17 16:54:32 +00:00
César García
a5eca20e1b
docs: Fix for broken links for Docling project sites ()
**Description:** Updated Docling project URLs from ds4sd.github.io to
docling-project.github.io/docling/
 **Issue:** 
 **Dependencies:** None
 **Twitter handle**: @lahoramaker
2025-03-17 16:47:09 +00:00
Oskar Stark
620d723fbf
infra(GHA): remove unused --- at the beginning of some workflows () 2025-03-17 16:46:32 +00:00
César García
2c8b8114fa
docs: Updated link to current Unstructured docs ()
The former link led to a site that explains that the docs have moved,
but did not redirect the user to the actual site automatically. I just
copied the provided url, checked that it works and updated the link to
the current version.


**Description:** Updated the link to Unstructured Docs at
https://docs.unstructured.io
**Issue:**  
**Dependencies:** None
**Twitter handle:** @lahoramaker
2025-03-17 16:46:28 +00:00
ccurme
e2ab4ccab3
docs: update min langchain-openai version in integrations page ()
No longer RC
2025-03-17 16:45:47 +00:00
ccurme
5684653775
openai[patch]: release 0.3.9 () 2025-03-17 16:08:41 +00:00
ccurme
eb9b992aa6
openai[patch]: support additional Responses API features ()
- Include response headers
- Max tokens
- Reasoning effort
- Fix bug with structured output / strict
- Fix bug with simultaneous tool calling + structured output
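A rough usage sketch of these features (the `use_responses_api`, `reasoning_effort`, and `max_tokens` parameters are assumed from the langchain-openai docs, not taken from this PR's diff):

```python
from langchain_openai import ChatOpenAI

# Hedged sketch: parameter names assumed from the langchain-openai docs.
llm = ChatOpenAI(
    model="o3-mini",
    use_responses_api=True,     # route requests through the Responses API
    reasoning_effort="medium",  # reasoning models only
    max_tokens=1024,
)
response = llm.invoke("Summarize the Responses API in one sentence.")
print(response.response_metadata)  # may include response headers when enabled
```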
2025-03-17 12:02:21 -04:00
Bae-ChangHyun
d8510270ee
community: add 'extract' mode to FireCrawlLoader for structured data extraction ()
**Description:**
Added an 'extract' mode to FireCrawlLoader that enables structured data
extraction from web pages. This feature allows users to extract
structured data from a single URL or from entire websites using Large
Language Models (LLMs).
More parameters and usage details are available in the [firecrawl
docs](https://docs.firecrawl.dev/features/extract-beta).
Currently only one URL can be extracted at a time (this depends on
firecrawl's extract method).

**Dependencies:** 
No new dependencies required. Uses existing FireCrawl API capabilities.
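A rough sketch of the new mode (the `mode`, `api_key`, and `params` argument names are assumed from the existing FireCrawlLoader interface; the prompt value is illustrative):

```python
from langchain_community.document_loaders import FireCrawlLoader

# Hedged sketch: argument names assumed from the existing loader interface.
loader = FireCrawlLoader(
    url="https://example.com",   # extract currently works on a single URL
    api_key="fc-...",            # FireCrawl API key
    mode="extract",
    params={"prompt": "Extract the product name and price from this page."},
)
docs = loader.load()
print(docs[0].page_content)      # structured data returned by FireCrawl
```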

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 15:15:57 +00:00
qonnop
747efa16ec
community: fix CPU support for FasterWhisperParser (implicit compute type for WhisperModel) ()
FasterWhisperParser fails on a machine without an NVIDIA GPU: "Requested
float16 compute type, but the target device or backend do not support
efficient float16 computation." This problem arises because the
WhisperModel is called with compute_type="float16", which works only on
NVIDIA GPUs.

According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#bit-floating-points-float16),
float16 is supported only on NVIDIA GPUs. Removing the compute_type
parameter solves the problem for CPUs. According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#quantize-on-model-loading),
setting compute_type to "default" (the standard when the parameter is omitted)
uses the original compute type of the model or performs an implicit
conversion for the specific computation device (GPU or CPU). I suggest
removing compute_type="float16".
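A minimal sketch of the proposed change, assuming the faster-whisper API (model name and audio path are placeholders):

```python
from faster_whisper import WhisperModel

# Omitting compute_type (equivalent to compute_type="default") lets CTranslate2
# pick a compute type that the target device (CPU or GPU) actually supports.
model = WhisperModel("base", device="auto")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print(segment.text)
```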

@hulitaitai you are the original author of the FasterWhisperParser - is
there a reason for setting the parameter to float16?

Thanks for reviewing the PR!

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-14 22:22:29 -04:00
ccurme
c74e7b997d
openai[patch]: support structured output via Responses API ()
Also runs all standard tests using Responses API.
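A rough sketch of structured output with the Responses API enabled (the `use_responses_api` flag on `ChatOpenAI` is an assumption based on the related Responses API PRs):

```python
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str
    punchline: str

# Hedged sketch: structured output routed through the Responses API.
llm = ChatOpenAI(model="gpt-4o-mini", use_responses_api=True)
structured_llm = llm.with_structured_output(Joke)
print(structured_llm.invoke("Tell me a joke about cats"))
```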
2025-03-14 15:14:23 -04:00
Priyansh Agrawal
f54f14b747
community: cube document loader - do not load non-public dimensions and measures ()

- **Description:** Do not load non-public dimensions and measures
(public: false) with the Cube semantic loader.

- **Issue:** Currently, non-public dimensions and measures are loaded by
the Cube document loader, which leads to downstream applications using
them, which Cube does not allow.

2025-03-14 15:07:56 -04:00
Stavros Kontopoulos
ac22cde130
langchain_ollama: Support keep_alive in embeddings ()
- Description: Adds support for keep_alive in Ollama embeddings; see
https://github.com/ollama/ollama/issues/6401.
Builds on top of
https://github.com/langchain-ai/langchain/pull/29296. I have a use
case where I want to keep the embeddings model loaded in CPU memory forever.
- Dependencies: no new dependencies are being introduced.
- Issue: no issue has been created yet.
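A rough usage sketch (assuming the new `keep_alive` parameter follows Ollama's semantics, where -1 keeps the model loaded indefinitely):

```python
from langchain_ollama import OllamaEmbeddings

# Hedged sketch: keep_alive=-1 asks Ollama to keep the model loaded indefinitely.
embeddings = OllamaEmbeddings(model="nomic-embed-text", keep_alive=-1)
vector = embeddings.embed_query("hello world")
print(len(vector))
```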
2025-03-14 14:56:50 -04:00
Matthew Farrellee
65a8f30729
docs: update json-mode docs to use with_structured_output(method="json_mode") ()
**Description:** update the json-mode concepts doc to use
`method="json_mode"` instead of `model_kwargs` with `response_format`
**Issue:** https://github.com/langchain-ai/langchain/issues/30290
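A sketch of the documented pattern (the prompt wording is illustrative; JSON mode requires the prompt itself to mention JSON):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(method="json_mode")
result = structured_llm.invoke(
    "Return a JSON object with keys 'setup' and 'punchline' containing a joke."
)
print(result)  # parsed dict, e.g. {'setup': ..., 'punchline': ...}
```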
2025-03-14 14:54:57 -04:00
ccurme
18f9b5d8ab
docs: update contributing doc () 2025-03-14 18:54:11 +00:00
homeffjy
2c99f12062
community[patch]: fix bilibili loader handling of multi-page content ()
Previously the loader would only extract subtitles from the first page
of multi-page videos.
2025-03-14 14:53:03 -04:00
ccurme
0b80bec015
docs: fix typo () 2025-03-14 17:09:38 +00:00
ccurme
d5d0134e7b
anthropic: release 0.3.10 () 2025-03-14 16:23:21 +00:00
ccurme
226f29bc96
anthropic: support built-in tools, improve docs ()
- Support features from the recent update
https://www.anthropic.com/news/token-saving-updates (mostly adding
support for built-in tools in `bind_tools`).
- Add documentation around prompt caching, token-efficient tool use, and
built-in tools.
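A rough sketch of binding a built-in (server-side) tool; the exact tool type identifier is an assumption based on Anthropic's docs, not this PR's diff:

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
# Hedged sketch: the tool type string below is assumed from Anthropic's docs.
llm_with_tools = llm.bind_tools(
    [{"type": "text_editor_20250124", "name": "str_replace_editor"}]
)
response = llm_with_tools.invoke("There's a syntax error in my primes.py file. Can you fix it?")
print(response.tool_calls)
```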
2025-03-14 16:18:50 +00:00
Priyansh Agrawal
f27e2d7ce7
community: cube document loader - fix logging ()

- **Description:** Fix the bad log message on line #56 and replace f-string
logs with format specifiers.

- **Issue:** Log messages such as this one:
`INFO:langchain_community.document_loaders.cube_semantic:Loading
dimension values for: {dimension_name}...`

2025-03-14 11:36:18 -04:00
ccurme
bbd4b36d76
mistralai[patch]: bump core () 2025-03-13 23:04:36 +00:00
ccurme
315bb17ef5
core: release 0.3.45 () 2025-03-13 22:44:23 +00:00
pulvedu
d0bfc7f820
community[fix] : Pass API_KEY as argument ()
Description:
This PR fixes the validation error "Value error, Did not find
tavily_api_key, please add an environment variable `TAVILY_API_KEY`
which contains it, or pass `tavily_api_key` as a named parameter."

Dependencies:
No new dependencies introduced.
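A rough sketch of passing the key directly instead of relying on the environment variable (class and parameter names assumed from langchain-community):

```python
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.tools.tavily_search import TavilySearchResults

# Hedged sketch: pass the API key as a named parameter rather than via TAVILY_API_KEY.
wrapper = TavilySearchAPIWrapper(tavily_api_key="tvly-...")
tool = TavilySearchResults(api_wrapper=wrapper)
print(tool.invoke("latest LangChain release"))
```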

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-13 22:19:31 +00:00
ccurme
5e0fa2cce5
infra: update release pipeline ()
Instead of attempting to conditionally `needs` the job, always run the job and
exit successfully if it is not needed.
2025-03-13 18:10:59 -04:00
ccurme
733abcc884
mistral: release 0.2.8 () 2025-03-13 21:54:34 +00:00
Jacob Lee
e9c1765967
fix(core): Ignore missing secrets on deserialization () 2025-03-13 12:27:03 -07:00
ccurme
ebea5e014d
standard tests: test simple agent loop () 2025-03-13 16:34:12 +00:00
ccurme
5237987643
docs: update readme ()
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2025-03-12 13:45:13 -04:00
ccurme
cd1ea8e94d
openai[patch]: support Responses API ()
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-03-12 12:25:46 -04:00
Jason Zhang
49bdd3b6fe
docs: Add AgentQL provider doc, tool/toolkit doc and documentloader doc ()
- **Description:** Added AgentQL docs for the provider page, tools page
and documentloader page
- **Twitter handle:** @AgentQL

Repo:
https://github.com/tinyfish-io/agentql-integrations/tree/main/langchain
PyPI: https://pypi.org/project/langchain-agentql/


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-11 21:57:40 -04:00
Vadym Barda
23fa70f328
core[patch]: release 0.3.44 () 2025-03-11 18:59:02 -04:00
Vadym Barda
c7842730ef
core[patch]: support single-node subgraphs and put subgraph nodes under the respective subgraphs () 2025-03-11 18:55:45 -04:00
Dharshan A
81d1653a30
docs: Fix typo in Generating Examples section of few-shot prompting doc ()
2025-03-11 09:44:20 -04:00
ccurme
27d86d7bc8
infra: update release workflow ()
Fix condition
2025-03-10 17:53:03 -04:00
ccurme
70fc0b8363
infra: update release workflow () 2025-03-10 20:18:33 +00:00
ccurme
62c570dd77
standard-tests, openai: bump core () 2025-03-10 19:22:24 +00:00
ccurme
38420ee76e
docs: add note on Deepseek R1 () 2025-03-10 15:17:20 -04:00
ccurme
f896e701eb
deepseek: install local langchain-tests in test deps () 2025-03-10 16:58:17 +00:00
ccurme
7b8f266039
infra: additional testing on core release ()
Here we add a job to the release workflow that, when releasing
`langchain-core`, tests prior published versions of select packages
against the new version of core. We limit the testing to the most recent
published versions of langchain-anthropic and langchain-openai.

This is designed to catch backward-incompatible updates to core. We
sometimes update core and downstream packages simultaneously, so there
may not be any commit in the history at which tests would fail. So
although core and the latest downstream packages may be mutually consistent,
we still benefit from testing prior versions of downstream packages against core.

I tested the workflow by simulating a [breaking
change](d7287248cf)
in core and running it with publishing steps disabled:
https://github.com/langchain-ai/langchain/actions/runs/13741876345. The
workflow correctly caught the issue.
2025-03-10 08:59:59 -04:00
Hugh Gao
aa6dae4a5b
community: Remove the system message count limit for ChatTongyi. ()
## Description
The models in DashScope support multiple SystemMessages. Here is the
[doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk),
and the example code from the documentation page:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # If you have not configured the environment variable, replace this with your API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # DashScope service base_url
)
# Initialize the messages list
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # Replace 'file-fe-xxx' with the file-id used in your actual conversation scenario.
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': 'What does this article discuss?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # Concatenate the output content
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print(full_content)
```
Tip: The example code is for the OpenAI SDK, but the documentation says it is
also compatible with the DashScope API; I tested it, and it works.
```
Is the Dashscope SDK invocation method compatible?

Yes, the Dashscope SDK remains compatible for model invocation. However, file uploads and file-ID retrieval are currently only supported via the OpenAI SDK. The file-ID obtained through this method is also compatible with Dashscope for model invocation.
```
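For comparison, a rough LangChain sketch of the same pattern after this change (the model name and file id are placeholders):

```python
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage, SystemMessage

# Hedged sketch: two SystemMessages, which ChatTongyi no longer rejects.
chat = ChatTongyi(model="qwen-long")
messages = [
    SystemMessage(content="You are a helpful assistant."),
    SystemMessage(content="fileid://file-fe-xxx"),
    HumanMessage(content="What does this article discuss?"),
]
print(chat.invoke(messages).content)
```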
2025-03-10 08:58:40 -04:00
Dharshan A
34e94755af
Fix typo in astream_events in streaming docs ()
2025-03-10 08:56:07 -04:00
ccurme
67aff1648b
community: Add OpenGradient integration (Toolkit) ()
Commandeering https://github.com/langchain-ai/langchain/pull/30135

---------

Co-authored-by: kylexqian <kylexqian@gmail.com>
2025-03-09 18:08:07 -04:00
ccurme
b209d46eb3
mistral[patch]: set global ssl context () 2025-03-09 21:27:41 +00:00
Vijay Selvaraj
df459d0d5e
community: add Valthera integration ()
**Description:**
This PR integrates Valthera into LangChain, introducing a framework designed to send highly personalized nudges via an LLM agent, modeled after Dr. BJ Fogg's Behavior Model. This integration includes:

- Custom data connectors for HubSpot, PostHog, and Snowflake.
- A unified data aggregator that consolidates user data.
- Scoring configurations to compute motivation and ability scores.
- A reasoning engine that determines the appropriate user action.
- A trigger generator to create personalized messages for user engagement.

**Issue:**  
N/A

**Dependencies:**  
N/A

**Twitter handle:**  
- `@vselvarajijay`

**Tests and Docs:**  
- `docs/docs/integrations/tools/valthera` 
- `https://github.com/valthera/langchain-valthera/tree/main/tests`


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 21:19:08 +00:00
ccurme
3823daa0b9
cli: update integration doc template for tools ()
Chain example -> langgraph agent
2025-03-09 21:14:43 +00:00
David Skarbrevik
0d7cdf290b
langchain: clean pyproject ruff section ()
## Changes
- `/Makefile` - added an extra step to `make format` and `make lint` to
ensure the lint dep-group is installed before running ruff (documented
in issue )

- `/pyproject.toml` - removed ruff exceptions for files that no longer
exist or no longer create formatting/linting errors in ruff

## Testing

**running `make format` on this branch/PR**
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/82751788-f44e-4591-98ed-95ce893ce623"
/>

## Issue

fixes 

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 15:06:02 -04:00
Jonathan Feng
911accf733
docs: add contextualai documentation ()
**Description:** adds documentation for ContextualAI's `langchain-contextual`
package


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 02:43:13 +00:00
Bharat
b9746a6910
fixes #30182: update tool names to match OpenAI function name pattern ()
The OpenAI API requires function names to match the pattern
'^[a-zA-Z0-9_-]+$'. This updates the JIRA toolkit's tool names to use
underscores instead of spaces to comply with this requirement and
prevent BadRequestError when using the tools with OpenAI functions.

Error fixed:
```
File "langgraph-bug-fix/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
During task with name 'agent' and id 'aedd7537-e8d5-6678-d0c5-98129586d3ac'
```

Issue: #30182
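A small illustration of the naming constraint (the tool names shown are hypothetical):

```python
import re

# OpenAI requires function names to match this pattern.
PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")
print(bool(PATTERN.match("Create Issue")))   # False -> rejected with BadRequestError
print(bool(PATTERN.match("create_issue")))   # True  -> accepted
```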
2025-03-08 20:48:25 -05:00
ccurme
cee0fecb08
docs: update package registry counts () 2025-03-08 20:37:59 -05:00
William FH
bac3a28e70
Flush () 2025-03-07 16:32:15 -08:00