Compare commits

...

771 Commits

Author SHA1 Message Date
Mason Daugherty
bea249beff release(core): 1.0.0a6 (#33201) 2025-10-01 22:49:36 -04:00
Mason Daugherty
2d29959386 Merge branch 'master' into wip-v1.0 2025-10-01 22:34:05 -04:00
Mason Daugherty
48b77752d0 release(ollama): 0.3.9 (#33200) 2025-10-01 22:31:20 -04:00
Mason Daugherty
6f2d16e6be refactor(ollama): simplify options handling (#33199)
Fixes #32744

Don't restrict options; the client accepts any dict
2025-10-01 21:58:12 -04:00
Mason Daugherty
a9eda18e1e refactor(ollama): clean up tests (#33198) 2025-10-01 21:52:01 -04:00
Mason Daugherty
a89c549cb0 feat(ollama): add basic auth support (#32328)
support for URL authentication in the format
`https://user:password@host:port` for all LangChain Ollama clients.

Related to #32327 and #25055
2025-10-01 20:46:37 -04:00
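A minimal usage sketch for the URL-auth support above (host, port, credentials, and model name are illustrative):

```py
from langchain_ollama import ChatOllama

# Credentials embedded in base_url are parsed for basic auth
# (illustrative values; not real credentials).
llm = ChatOllama(
    model="llama3.1",
    base_url="https://user:password@ollama.example.com:11434",
)
print(llm.invoke("Hello!").content)
```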
Sydney Runkle
a336afaecd feat(langchain): use decorators for jumps instead (#33179)
The old `before_model_jump_to` classvar approach was quite clunky; this
is nicer imo and easier to document. Also moving from `jump_to` to
`can_jump_to`, which is more idiomatic.

Before:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to: ClassVar[list[JumpTo]] = ["end"]

    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```

After:

```py
class MyMiddleware(AgentMiddleware):

    @hook_config(can_jump_to=["end"])
    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```
2025-10-01 16:49:27 -07:00
Mason Daugherty
bb9b802cda fix(ollama): handle ImageContentBlock 2025-10-01 19:38:21 -04:00
Mason Daugherty
4ff83e92c1 fix locks/make filetype optional dep 2025-10-01 19:33:49 -04:00
Mason Daugherty
738dc79959 feat(core): genai standard content (#32987) 2025-10-01 19:09:15 -04:00
Mason Daugherty
9ad7f7a0cc Merge branch 'master' into wip-v1.0 2025-10-01 18:48:38 -04:00
Lauren Hirata Singh
af07949d13 fix(docs): Redirects (#33190) 2025-10-01 16:28:47 -04:00
Sydney Runkle
a10e880c00 feat(langchain_v1): add async support for create_agent (#33175)
This makes branching **much** simpler internally and helps greatly
w/ type safety for users. It allows for one signature on hooks
instead of multiple.

Opened after https://github.com/langchain-ai/langchain/pull/33164
ballooned more than expected, w/ branching for:
* sync vs async
* runtime vs no runtime (this is self imposed)

**This also removes support for nodes w/o `runtime` in the signature.**
We can always go back and add support for nodes w/o `runtime`.

I think @christian-bromann's idea to re-export `runtime` from
langchain's agents might make sense due to the abundance of imports
here.

Check out the value of the change based on this diff:
https://github.com/langchain-ai/langchain/pull/33176
2025-10-01 19:15:39 +00:00
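A minimal sketch of the async path described above, assuming the `openai:gpt-4o-mini` model string and that the returned agent exposes `ainvoke` (in some alphas a `.compile()` step is required first):

```py
import asyncio

from langchain.agents import create_agent

async def main() -> None:
    agent = create_agent("openai:gpt-4o-mini", tools=[])
    # Hooks now use a single (state, runtime) signature under the hood.
    result = await agent.ainvoke({"messages": [("user", "hi")]})
    print(result["messages"][-1].content)

asyncio.run(main())
```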
Eugene Yurtsev
7b5e839be3 chore(langchain_v1): use list[str] for modifyModelRequest (#33166)
Update model request to return tools by name. This will decrease the
odds of misusing the API.

We'll need to extend the type for built-in tools later.
2025-10-01 14:46:19 -04:00
ccurme
fa0955ccb1 fix(openai): fix tests on v1 branch (#33189) 2025-10-01 13:25:29 -04:00
Chester Curme
e02acdfe60 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/core/langchain_core/version.py
#	libs/core/pyproject.toml
#	libs/core/uv.lock
#	libs/partners/openai/pyproject.toml
#	libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py
#	libs/partners/openai/uv.lock
#	libs/standard-tests/pyproject.toml
#	libs/standard-tests/uv.lock
2025-10-01 11:31:45 -04:00
ccurme
740842485c fix(openai): bump min core version (#33188)
Required for new tests added in
https://github.com/langchain-ai/langchain/pull/32541 and
https://github.com/langchain-ai/langchain/pull/33183.
2025-10-01 11:01:15 -04:00
noeliecherrier
08bb74f148 fix(mistralai): handle HTTP errors in async embed documents (#33187)
The async embed function does not properly handle HTTP errors.

For instance with large batches, Mistral AI returns `Too many inputs in
request, split into more batches.` in a 400 error.

This leads to a KeyError in `response.json()["data"]` l.288

This PR fixes the issue by:
- calling `response.raise_for_status()` before returning
- adding a retry similarly to what is done in the synchronous
counterpart `embed_documents`

I also added an integration test, but willing to move it to unit tests
if more relevant.
2025-10-01 10:57:47 -04:00
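A sketch of the pattern behind the fix, not the actual langchain-mistralai code (endpoint, payload shape, and retry policy are assumptions):

```py
import httpx
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=10))
async def aembed_batch(client: httpx.AsyncClient, texts: list[str]) -> list[list[float]]:
    response = await client.post("/embeddings", json={"input": texts})
    # Surface 400s (e.g. "Too many inputs in request") so the retry
    # triggers instead of a KeyError on the missing "data" key.
    response.raise_for_status()
    return [item["embedding"] for item in response.json()["data"]]
```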
ccurme
7d78ed9b53 release(standard-tests): 0.3.22 (#33186) 2025-10-01 10:39:17 -04:00
ccurme
7ccff656eb release(core): 0.3.77 (#33185) 2025-10-01 10:24:07 -04:00
ccurme
002d623f2d feat: (core, standard-tests) support PDF inputs in ToolMessages (#33183) 2025-10-01 10:16:16 -04:00
Mohammad Mohtashim
34f8031bd9 feat(langchain): Using Structured Response as Key in Output Schema for Middleware Agent (#33159)
- **Description:** Changing the key from `response` to
`structured_response` for the middleware agent, to keep it in sync with the
agent without middleware. This is a breaking change.
 - **Issue:** #33154
2025-10-01 03:24:59 +00:00
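A sketch of the renamed key in use (schema, model string, and `response_format` usage are illustrative):

```py
from pydantic import BaseModel

from langchain.agents import create_agent

class Weather(BaseModel):
    temperature: float
    conditions: str

agent = create_agent("openai:gpt-4o-mini", tools=[], response_format=Weather)
result = agent.invoke({"messages": [("user", "What's the weather in SF?")]})
print(result["structured_response"])  # previously result["response"]
```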
Mason Daugherty
87e1bbf3b1 Merge branch 'master' into wip-v1.0 2025-09-30 17:55:40 -04:00
Mason Daugherty
e8f76e506f Merge branch 'master' into wip-v1.0 2025-09-30 17:55:23 -04:00
Mason Daugherty
a541b5bee1 chore(infra): rfc README.md for better presentation (#33172) 2025-09-30 17:44:42 -04:00
Mason Daugherty
3e970506ba chore(core): remove runnable section from README.md (#33171) 2025-09-30 17:15:31 -04:00
Mason Daugherty
d1b0196faa chore(infra): whitespace fix (#33170) 2025-09-30 17:14:55 -04:00
ccurme
aac69839a9 release(openai): 0.3.34 (#33169) 2025-09-30 16:48:39 -04:00
ccurme
64141072a3 feat(openai): support openai sdk 2.0 (#33168) 2025-09-30 16:34:00 -04:00
Mason Daugherty
c49d470e13 Merge branch 'master' into wip-v1.0 2025-09-30 15:53:17 -04:00
Mason Daugherty
208e8e8f07 Merge branch 'master' into wip-v1.0 2025-09-30 15:44:24 -04:00
Mason Daugherty
0795be2a04 docs(core): remove non-existent param from as_tool docstring (#33165) 2025-09-30 19:43:34 +00:00
Mason Daugherty
06301c701e lift langgraph version cap, temp comment out optional deps for langchain_v1 2025-09-30 15:41:16 -04:00
Eugene Yurtsev
9c97597175 chore(langchain_v1): expose middleware decorators and selected messages (#33163)
* Make it easy to improve the middleware shortcuts
* Export the messages that we're confident we'll expose
2025-09-30 14:14:57 -04:00
ccurme
b7223f45cc release(openai): 1.0.0a3 (#33162) 2025-09-30 12:13:58 -04:00
ccurme
8ba8a5e301 release(anthropic): 1.0.0a2 (#33161) 2025-09-30 12:09:17 -04:00
Chester Curme
16a2d9759b revert changes to release workflow 2025-09-30 11:56:10 -04:00
Chester Curme
0e404adc60 🦍 2025-09-30 11:49:40 -04:00
Chester Curme
ffaedf7cfc fix 2025-09-30 11:29:39 -04:00
Chester Curme
b6ecc0b040 infra(fix): temporarily skip some release checks 2025-09-30 11:28:40 -04:00
ccurme
4d581000ad fix(infra): temporarily skip pre-release checks for alpha branch (#33160)
We have deliberately introduced a breaking change in
https://github.com/langchain-ai/langchain/pull/33021.

Will revert when we have compatible pre-releases for tested packages.
2025-09-30 11:10:06 -04:00
ccurme
06ddc57c7a release(core): 1.0.0a5 (#33158) 2025-09-30 10:37:06 -04:00
Chester Curme
49704ffc19 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/partners/anthropic/langchain_anthropic/chat_models.py
#	libs/partners/anthropic/pyproject.toml
#	libs/partners/anthropic/uv.lock
2025-09-30 09:25:19 -04:00
Sydney Runkle
eed0f6c289 feat(langchain): todo middleware (#33152)
Porting the [planning
middleware](39c0138d0f/src/deepagents/middleware.py (L21))
over from deepagents.

Also adding the ability to configure:
* System prompt
* Tool description

```py
from langchain.agents.middleware.planning import PlanningMiddleware
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])

result = await agent.ainvoke({"messages": [HumanMessage("Help me refactor my codebase")]})

print(result["todos"])  # Array of todo items with status tracking
```
2025-09-30 02:23:26 +00:00
Mason Daugherty
8926986483 chore: standardize translator named params 2025-09-29 16:42:59 -04:00
ccurme
729637a347 docs(anthropic): document support for memory tool and context management (#33149) 2025-09-29 16:38:01 -04:00
Mason Daugherty
3325196be1 fix(langchain): handle gpt-5 model name in init_chat_model (#33148)
expand matching so any `gpt-*` model name maps to OpenAI
2025-09-29 16:16:17 -04:00
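A sketch of the effect (model name is illustrative):

```py
from langchain.chat_models import init_chat_model

# "gpt-5" now matches the expanded `gpt-*` pattern, so the OpenAI
# provider is inferred without an explicit "openai:" prefix.
llm = init_chat_model("gpt-5")
```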
Mason Daugherty
f402fdcea3 fix(langchain): add context_management to Anthropic chat model init (#33150) 2025-09-29 16:13:47 -04:00
ccurme
ca9217c02d release(anthropic): 0.3.21 (#33147) 2025-09-29 19:56:28 +00:00
ccurme
f9bae40475 feat(anthropic): support memory and context management features (#33146)
https://docs.claude.com/en/docs/build-with-claude/context-editing

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-29 15:42:38 -04:00
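A hedged sketch of the new parameters (the beta flag and edit type follow Anthropic's context-editing docs linked above and may differ by release):

```py
from langchain_anthropic import ChatAnthropic

# Assumed beta flag and edit type; check the linked docs for current values.
llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    betas=["context-management-2025-06-27"],
    context_management={"edits": [{"type": "clear_tool_uses_20250919"}]},
)
```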
Mason Daugherty
7f757cf37d fix(core): extras handling, add test 2025-09-29 15:41:49 -04:00
ccurme
c20bd07f16 Merge branch 'master' into wip-v1.0 2025-09-29 12:31:29 -04:00
ccurme
839a18e112 fix(openai): remove __future__.annotations import from test files (#33144)
Breaks schema conversion in places.
2025-09-29 16:23:32 +00:00
Mohammad Mohtashim
33a6def762 fix(core): Support of 'reasoning' type in 'convert_to_openai_messages' (#33050) 2025-09-29 09:17:05 -04:00
nhuang-lc
c456c8ae51 fix(langchain): fix response action for HITL (#33131)
Multiple improvements to HITL flow:

* On a `response` type resume, we should still append the tool call to
the last AIMessage (otherwise we have a ToolResult without a
corresponding ToolCall)
* When all interrupts have `response` types (so there's no pending tool
calls), we should jump back to the first node (instead of end) as we
enforced in the previous `post_model_hook_router`
* Added comments to `model_to_tools` router to clarify all of the
potential exit conditions

Additionally:
* Lockfile update to use latest LG alpha release
* Added test for `jump_to` behaving ephemerally, this was fixed in LG
but surfaced as a bug w/ `jump_to`.
* Bump version to v1.0.0a10 to prep for alpha release

---------

Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-09-29 13:08:18 +00:00
Mason Daugherty
8f60946d5a more docstrings 2025-09-29 00:52:14 -04:00
Mason Daugherty
790c4a8e43 Merge branch 'master' into wip-v1.0 2025-09-28 23:53:54 -04:00
Mason Daugherty
b83d45bff7 fix: add continue when guarding against v0 blocks 2025-09-28 23:53:11 -04:00
Mason Daugherty
ba663defc2 fix: anthropic and converse docstrings 2025-09-28 23:46:17 -04:00
Mason Daugherty
0633974eb0 conversion logic - add missing non-id cases 2025-09-28 23:46:07 -04:00
Mason Daugherty
4d39cf39ff update docstring/comments 2025-09-28 23:44:28 -04:00
Mason Daugherty
77e52a6c9c add docstring 2025-09-28 23:43:44 -04:00
Mason Daugherty
2af1cb6ca3 update conversion docstrings 2025-09-28 23:43:27 -04:00
Mason Daugherty
54f3a6d9cf add note about PlainTextContentBlock and v0/v1 compat 2025-09-28 22:51:17 -04:00
Eugene Yurtsev
54ea62050b chore(langchain_v1): move tool node to tools namespace (#33132)
* Move ToolNode to tools namespace
* Expose injected variable as well in tools namespace
* Update doc-strings throughout
2025-09-26 15:23:57 -04:00
Mason Daugherty
87e5be1097 feat(core): parse reasoning_content from additional_kwargs 2025-09-26 10:50:12 -04:00
Mason Daugherty
fcd8fdd748 revert: remove accidental symlink 2025-09-25 23:48:26 -04:00
Mason Daugherty
370010d195 Merge branch 'master' into wip-v1.0 2025-09-25 20:57:34 -04:00
Mason Daugherty
986302322f docs: more standardization (#33124) 2025-09-25 20:46:20 -04:00
Mason Daugherty
adc941d1dc Merge branch 'master' into wip-v1.0 2025-09-25 20:36:59 -04:00
Mason Daugherty
d15514d571 integrations: delete deprecated items 2025-09-25 18:02:08 -04:00
Mason Daugherty
a5137b0a3e refactor(langchain): resolve pydantic deprecation warnings (#33125) 2025-09-25 17:33:18 -04:00
Mason Daugherty
5bea28393d docs: standardize .. code-block directive usage (#33122)
and fix typos
2025-09-25 16:49:56 -04:00
Mason Daugherty
c3fed20940 docs: correct ported over directives (#33121)
Match rest of repo
2025-09-25 15:54:54 -04:00
Mason Daugherty
6d418ba983 test(mistralai): add xfail for structured output test (#33119)
In rare cases (difficult to reproduce), Mistral's API fails to return
valid bodies, leading to failures from `PydanticToolsParser`
2025-09-25 13:05:31 -04:00
Mason Daugherty
12daba63ff test(openai): raise token limit for o1 test (#33118)
`test_o1[False-False]` was sometimes failing because the OpenAI o1 model
was hitting a token limit with only 100 tokens
2025-09-25 12:57:33 -04:00
Christophe Bornet
eaf8dce7c2 chore: bump ruff version to 0.13 (#33043)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 12:27:39 -04:00
Mason Daugherty
ee94f9567a Merge branch 'master' into wip-v1.0 2025-09-25 01:47:00 -04:00
Mason Daugherty
f82de1a8d7 chore: bump locks (#33114) 2025-09-25 01:46:01 -04:00
Mason Daugherty
f3f5b93be6 Merge branch 'master' into wip-v1.0 2025-09-25 01:36:00 -04:00
Mason Daugherty
e3efd1e891 test(text-splitters): capture beta warnings (#33113) 2025-09-25 01:30:20 -04:00
Mason Daugherty
d6769cf032 test(text-splitters): resolve pytest marker warning (#33112)
#33111
2025-09-25 01:29:42 -04:00
Mason Daugherty
00565d7bf6 Merge branch 'master' into wip-v1.0 2025-09-25 01:09:44 -04:00
Mason Daugherty
7ab2e0dd3b test(core): resolve pytest marker warning (#33111)
Remove redundant/outdated `@pytest.mark.requires("jinja2")` decorator

Pytest marks (like `@pytest.mark.requires(...)`) applied directly to
fixtures have no effect and are deprecated.
2025-09-25 01:08:54 -04:00
Mason Daugherty
81319ad3f0 test(core): resolve pydantic_v1 deprecation warning (#33110)
Excluded pydantic_v1 module from import testing

Acceptable since pydantic_v1 is explicitly deprecated. Testing its
importability at this stage serves little purpose since users should
migrate away from it.
2025-09-25 01:08:03 -04:00
Mason Daugherty
e3f3c13b75 refactor(core): use aadd_documents in vectorstores unit tests (#33109)
Don't use the deprecated `upsert()` and `aupsert()`; use the recommended
alternatives instead.
2025-09-25 00:57:08 -04:00
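A minimal sketch of the replacement pattern (embedding and document are illustrative):

```py
import asyncio

from langchain_core.documents import Document
from langchain_core.embeddings.fake import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

async def main() -> None:
    store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))
    # Recommended replacement for the deprecated upsert()/aupsert()
    await store.aadd_documents([Document(page_content="hello world")])

asyncio.run(main())
```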
Mason Daugherty
c30844fce4 fix(core): use version agnostic get_fields (#33108)
Resolves a warning
2025-09-25 00:54:29 -04:00
Mason Daugherty
c9eb3bdb2d test(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33107)
Finish work from #33101
2025-09-25 00:49:11 -04:00
Mason Daugherty
e97baeb9a6 test(core): suppress pydantic_v1 deprecation warnings during import tests (#33106)
We intentionally import these. Hide warnings to reduce testing noise.
2025-09-25 00:37:40 -04:00
Mason Daugherty
3a6046b157 test(core): don't use deprecated input_variables param in from_file (#33105)
finish #33104
2025-09-25 04:29:31 +00:00
Mason Daugherty
8fdc619f75 refactor(core): don't use deprecated input_variables param in from_file (#33104)
Missed a while back; causes warnings during tests
2025-09-25 00:14:17 -04:00
Ali Ismail
729bfe8369 test(core): enhance stringify_value test coverage for nested structures (#33099)
## Summary
Adds test coverage for the `stringify_value` utility function to handle
complex nested data structures that weren't previously tested.

## Changes
- Added `test_stringify_value_nested_structures()` to `test_strings.py`
- Tests nested dictionaries within lists
- Tests mixed-type lists with various data types
- Verifies proper stringification of complex nested structures

## Why This Matters
- Fills a gap in test coverage for edge cases
- Ensures `stringify_value` handles complex data structures correctly  
- Improves confidence in string utility functions used throughout the
codebase
- Low risk addition that strengthens existing test suite

## Testing
```bash
uv run --group test pytest libs/core/tests/unit_tests/utils/test_strings.py::test_stringify_value_nested_structures -v
```

This test addition follows the project's testing patterns and adds
meaningful coverage without introducing any breaking changes.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 00:04:47 -04:00
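A quick sketch of the utility under test (output formatting details are the function's own):

```py
from langchain_core.utils.strings import stringify_value

# Recursively renders nested dicts and mixed-type lists as text.
nested = {"outer": [{"inner": 1}, "text", 3.14]}
print(stringify_value(nested))
```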
Mason Daugherty
9b624a79b2 test(core): suppress deprecation warnings in PipelinePromptTemplate (#33102)
We're intentionally testing this still so as not to regress. Reduce
warning noise.
2025-09-25 04:03:27 +00:00
Mason Daugherty
c60c5a91cb fix(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33101)
Use SHA-256 (collision-resistant) instead of the default SHA-1. No
functional changes to test behavior.
2025-09-25 00:02:11 -04:00
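The pattern behind the change, sketched with stdlib hashlib (the test's actual key derivation may differ):

```py
import hashlib

def doc_id(content: str) -> str:
    # SHA-256 (collision-resistant) instead of SHA-1 for stable IDs.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

assert doc_id("hello") == doc_id("hello")
```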
Mason Daugherty
d9e0c212e0 chore(infra): add tests to label mapping (#33103) 2025-09-25 00:01:53 -04:00
ccurme
2f93566c87 feat: (v1) server tool call and result types (#33021)
- Replaces dedicated types for `WebSearchCall`, `WebSearchResult`,
`CodeInterpreterCall`, `CodeInterpreterResult` with monolithic
`ServerToolCall` and `ServerToolResult`
- Implements `ServerToolCallChunk` for streaming partial arguments
- Full support on Anthropic and OpenAI

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-24 23:43:50 -04:00
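An assumed sketch of the unified block shapes (field names are illustrative; the PR's TypedDicts are authoritative):

```py
# One monolithic pair replaces the per-tool types listed above.
server_tool_call = {
    "type": "server_tool_call",
    "id": "srvtoolu_abc123",
    "name": "web_search",
    "args": {"query": "langchain v1"},
}
server_tool_result = {
    "type": "server_tool_result",
    "tool_call_id": "srvtoolu_abc123",
    "status": "success",
}
```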
Mason Daugherty
db8c2d3bae Merge branch 'master' into wip-v1.0 2025-09-24 23:30:23 -04:00
Sydney Runkle
f015526e42 release(langchain): v1.0.0a9 (#33098) 2025-09-24 21:02:53 +00:00
Sydney Runkle
57d931532f fix(langchain): extra arg for anthropic caching, __end__ -> end for jump_to (#33097)
Also updating `jump_to` to use `end` instead of `__end__`
2025-09-24 17:00:40 -04:00
Mason Daugherty
3515a54c10 Merge branch 'master' into wip-v1.0 2025-09-24 16:37:31 -04:00
Mason Daugherty
50012d95e2 chore: update pull_request_target types, harden (#33096)
Enhance the pull request workflows by updating the `pull_request_target`
types and ensuring safety by avoiding checkout of the PR's head. Update
the action to use a specific commit from the archived repository.
2025-09-24 16:37:16 -04:00
Mason Daugherty
33f06875cb fix(langchain_v1): version equality check (#33095) 2025-09-24 16:27:55 -04:00
dependabot[bot]
e5730307e7 chore: bump actions/setup-node from 4 to 5 (#32952)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4 to 5.

Release notes summary (quoted HTML details stripped): v5.0.0 introduces
automatic dependency caching when a valid `packageManager` field is present
in `package.json` (disable with `package-manager-cache: false`), upgrades
the action to Node 24 (requires runner v2.327.1 or later), and bumps
several dependencies. Full changelog:
https://github.com/actions/setup-node/compare/v4...v5.0.0
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 16:26:05 -04:00
Mason Daugherty
ccfdce64d3 add note 2025-09-24 16:18:20 -04:00
Mason Daugherty
4ca523a9d6 Merge branch 'master' into wip-v1.0 2025-09-24 16:17:18 -04:00
Mason Daugherty
4783a9c18e style: update workflow name for version equality check (#33094) 2025-09-24 20:11:30 +00:00
Mason Daugherty
ee4d84de7c style(core): typo/docs lint pass (#33093) 2025-09-24 16:11:21 -04:00
Mason Daugherty
092dd5e174 chore: update link for monorepo structure (#33091) 2025-09-24 19:55:00 +00:00
Mason Daugherty
a4e1a54393 Merge branch 'master' into wip-v1.0 2025-09-24 15:35:44 -04:00
Sydney Runkle
dd81e1c3fb release(langchain): 1.0.0a8 (#33090) 2025-09-24 15:31:29 -04:00
Sydney Runkle
135a5b97e6 feat(langchain): improvements to anthropic prompt caching (#33058)
Adding an `unsupported_model_behavior` arg that can be `'ignore'`,
`'warn'`, or `'raise'`. Defaults to `'warn'`.
2025-09-24 15:28:49 -04:00
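A sketch of the new argument (the middleware class name/path is assumed from surrounding release notes):

```py
from langchain.agents.middleware import AnthropicPromptCachingMiddleware

# Defaults to "warn"; "raise" fails fast on non-Anthropic models.
caching = AnthropicPromptCachingMiddleware(unsupported_model_behavior="raise")
```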
Mason Daugherty
b92b394804 style: repo linting pass (#33089)
enable docstring-code-format
2025-09-24 15:25:55 -04:00
Sydney Runkle
083bb3cdd7 fix(langchain): need to inject all state for tools registered by middleware (#33087)
Type hints matter for conditional edges!
2025-09-24 15:25:51 -04:00
Mason Daugherty
2e9291cdd7 fix: lift openai version constraints across packages (#33088)
re: #33038 and https://github.com/openai/openai-python/issues/2644
2025-09-24 15:25:10 -04:00
Sydney Runkle
4f8a76b571 chore(langchain): renaming for HITL (#33067) 2025-09-24 07:19:44 -04:00
Mason Daugherty
05ba941230 style(cli): linting pass (#33078) 2025-09-24 01:24:52 -04:00
Mason Daugherty
ae4976896e chore: delete erroneous .readthedocs.yaml (#33079)
From the legacy docs/not needed here
2025-09-24 01:24:42 -04:00
Mason Daugherty
e7cdaad58f Merge branch 'master' into wip-v1.0 2025-09-24 01:07:22 -04:00
Mason Daugherty
504ef96500 chore: add commit message generation instructions for VSCode (#33077) 2025-09-24 05:06:43 +00:00
Mason Daugherty
d99a02bb27 chore: add AGENTS.md (#33076)
it would be super cool if Anthropic supported this instead of
`CLAUDE.md` :/

https://agents.md/
2025-09-24 05:02:14 +00:00
Mason Daugherty
793de80429 chore: update label mapping in PR title labeler configuration (#33075) 2025-09-24 01:00:14 -04:00
Mason Daugherty
7d4e9d8cda revert(infra): put SECURITY.md at root (#33074) 2025-09-24 00:54:37 -04:00
Mason Daugherty
54dca494cf chore: delete erroneous poetry.toml configuration file (#33073)
- Not used by the current build system
- Potentially confusing for new contributors
- A leftover artifact from the Poetry to uv migration
2025-09-24 04:40:17 +00:00
Mason Daugherty
7b30e58386 chore: delete erroneous yarn.lock in root (#33072)
Appears to have had no purpose/was added by mistake and nobody
questioned it
2025-09-24 04:35:00 +00:00
Mason Daugherty
e62b541dfd chore(infra): move SECURITY.md to .github (#33071)
cleaning up top-level. `.github` folder placement will continue to show
on repo homepage:
https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository#about-security-policies
2025-09-24 00:27:48 -04:00
Mason Daugherty
8699980d09 chore(scripts): remove obsolete release and mypy/ruff update scripts (#33070)
Outdated scripts related to release management and mypy/ruff updates

Cleaning up the root-level
2025-09-24 04:24:38 +00:00
Mason Daugherty
79e536b0d6 chore(infra): further docs build cleanup (#33057)
Reorganize the requirements for better clarity and consistency. Improve
documentation on scripts and workflows.
2025-09-23 17:29:58 -04:00
Sydney Runkle
b5720ff17a chore(langchain): simplifying HITL condition (#33065)
Simplifying condition
2025-09-23 21:24:14 +00:00
nhuang-lc
48b05224ad fix(langchain_v1): only interrupt if at least one ToolConfig value is True (#33064)
**Description:** Right now, we interrupt even if the provided ToolConfig
has all false values. We should ignore ToolConfigs which do not have at
least one value marked as true (just as we would if `tool_name: False` was
passed into the dict).
2025-09-23 17:20:34 -04:00
Sydney Runkle
89079ad411 feat(langchain): new decorator pattern for dynamically generated middleware (#33053)
# Main Changes

1. Adding decorator utilities for dynamically defining middleware with
single hook functions (see an example below for dynamic system prompt)
2. Adding better conditional edge drawing with jump configuration
attached to middleware. Can be registered w/ the new decorator!

## Decorator Utilities

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import modify_model_request
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver


@modify_model_request
def modify_system_prompt(request: ModelRequest, state: AgentState) -> ModelRequest:
    request.system_prompt = (
        "You are a helpful assistant."
        f"Please record the number of previous messages in your response: {len(state['messages'])}"
    )
    return request

agent = create_agent(
    model="openai:gpt-4o-mini", 
    middleware=[modify_system_prompt]
).compile(checkpointer=InMemorySaver())
```

## Visualization and Routing improvements

We now require that middlewares define the valid jumps for each hook.

If using the new decorator syntax, this can be done with:

```py
@before_model(jump_to=["__end__"])
@after_model(jump_to=["tools", "__end__"])
```

If using the subclassing syntax, you can use these two class vars:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to = ["__end__"]
    after_model_jump_to = ["tools", "__end__"]
```

Open for debate if we want to bundle these in a single jump map / config
for a middleware. Easy to migrate later if we decide to add more hooks.

We will need to **really clearly document** that these must be
explicitly set in order to enable conditional edges.

Notice for the below case, `Middleware2` does actually enable jumps.

<table>
  <thead>
    <tr>
      <th>Before (broken), adding conditional edges unconditionally</th>
      <th>After (fixed), adding conditional edges sparingly</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
<img width="619" height="508" alt="Screenshot 2025-09-23 at 10 23 23 AM"
src="https://github.com/user-attachments/assets/bba2d098-a839-4335-8e8c-b50dd8090959"
/>
      </td>
      <td>
<img width="469" height="490" alt="Screenshot 2025-09-23 at 10 23 13 AM"
src="https://github.com/user-attachments/assets/717abf0b-fc73-4d5f-9313-b81247d8fe26"
/>
      </td>
    </tr>
  </tbody>
</table>

<details>
<summary>Snippet for the above</summary>

```py
from typing import Any
from langchain.agents.tool_node import InjectedState
from langgraph.runtime import Runtime
from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import create_agent
from langchain_core.tools import tool
from typing import Annotated
from langchain_core.messages import HumanMessage
from typing_extensions import NotRequired

@tool
def simple_tool(input: str) -> str:
    """A simple tool."""
    return "successful tool call"


class Middleware1(AgentMiddleware):
    """Custom middleware that adds a simple tool."""

    tools = [simple_tool]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware2(AgentMiddleware):

    before_model_jump_to = ["tools", "__end__"]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware3(AgentMiddleware):

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

builder = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[Middleware1(), Middleware2(), Middleware3()],
    system_prompt="You are a helpful assistant.",
)
agent = builder.compile()
```

</details>

## More Examples

### Guardrails `after_model`

<img width="379" height="335" alt="Screenshot 2025-09-23 at 10 40 09 AM"
src="https://github.com/user-attachments/assets/45bac7dd-398e-45d1-ae58-6ecfa27dfc87"
/>

<details>
<summary>Code</summary>

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import after_model
from langchain_core.messages import HumanMessage, AIMessage
from langgraph.checkpoint.memory import InMemorySaver
from typing import cast, Any

@after_model(jump_to=["model", "__end__"])
def after_model_hook(state: AgentState) -> dict[str, Any]:
    """Check the last AI message for safety violations."""
    last_message_content = cast(AIMessage, state["messages"][-1]).content.lower()
    print(last_message_content)

    unsafe_keywords = ["pineapple"]
    if any(keyword in last_message_content for keyword in unsafe_keywords):

        # Jump back to model to regenerate response
        return {"jump_to": "model", "messages": [HumanMessage("Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.")]}

    return {"jump_to": "__end__"}

# Create agent with guardrails middleware
agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[after_model_hook],
    system_prompt="Keep your responses to one sentence please!"
).compile()

# Test with potentially unsafe input
result = agent.invoke(
    {"messages": [HumanMessage("Tell me something about pineapples")]},
)

for msg in result["messages"]:
    msg.pretty_print()  # pretty_print() prints and returns None

"""
================================ Human Message =================================

Tell me something about pineapples
================================== Ai Message ==================================

Pineapples are tropical fruits known for their sweet, tangy flavor and distinctive spiky exterior.
================================ Human Message =================================

Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.
================================== Ai Message ==================================

Apples are popular fruits that come in various varieties, known for their crisp texture and sweetness, and are often used in cooking and baking.
"""
```

</details>
2025-09-23 13:25:55 -04:00
Mason Daugherty
2c95586f2a chore(infra): audit workflows, scripts (#33055)
Mostly adding descriptive frontmatter to workflow files. Also addresses
some formatting and outdated artifacts

No functional changes outside of
[d5457c3](d5457c39ee),
[90708a0](90708a0d99),
and
[338c82d](338c82d21e)
2025-09-23 17:08:19 +00:00
Mason Daugherty
36f8c7335c fix: core version equality check 2025-09-23 01:30:50 -04:00
Mason Daugherty
f6cf4fcd6e Merge branch 'master' into wip-v1.0 2025-09-23 01:27:12 -04:00
Mason Daugherty
9c1285cf5b chore(infra): fix ping pong pr labeler config (#33054)
The title-based labeler was clearing all pre-existing labels (including
the file-based ones) before adding its semantic labels.
2025-09-22 21:19:53 -04:00
Sydney Runkle
c3be45bf14 fix(langchain): HITL bug causing dupe interrupt (#33052)
Need to find **last** AI msg (not first). Getting too creative w/
generators.
2025-09-22 20:09:12 -04:00
Mason Daugherty
cfa9973828 Merge branch 'master' into wip-v1.0 2025-09-22 16:30:00 -04:00
Arman Tsaturian
8f488d62b2 docs: fix stripe toolkit import in the guide (#33044)
**Description:**
Stripe tools integration guide incorrectly referenced the `crewai`
toolkit. Updated the import to use the correct `langchain` toolkit.

Stripe docs reference:
https://docs.stripe.com/agents?framework=langchain&lang=python
2025-09-22 15:17:09 -04:00
Mason Daugherty
cdae9e4942 fix(infra): prevent labeler workflow from adding/removing same labels (#33039)
The file-based and title-based labeler workflows were conflicting,
causing the bot to add and remove identical labels in the same
operation. Hopefully this fixes it.
2025-09-21 04:37:59 +00:00
Mason Daugherty
e75878bfa2 Merge branch 'master' into wip-v1.0 2025-09-21 00:29:30 -04:00
Mason Daugherty
7ddc798f95 fix(openai): pin upper bound to prevent Pydantic 2.7.0 issues (#33038)
https://github.com/openai/openai-python/issues/2644
2025-09-21 00:27:03 -04:00
Mason Daugherty
6081ba9184 fix: pydantic openai (#33037) 2025-09-21 00:21:56 -04:00
Mason Daugherty
0cc6f8bb58 Merge branch 'master' into wip-v1.0 2025-09-20 23:47:53 -04:00
Mason Daugherty
7dcf6a515e fix: update method calls from dict to model_dump in Chain (#33035) 2025-09-20 23:47:44 -04:00
Mason Daugherty
043a7560a5 test: use .get() for safe ls_params access (#33034) 2025-09-20 23:46:37 -04:00
Mason Daugherty
6c90ba1f05 test: fix pydantic issue by bumping OAI-py 2025-09-20 23:13:42 -04:00
Mason Daugherty
ea3b9695e6 Merge branch 'master' into wip-v1.0 2025-09-20 22:59:25 -04:00
Mason Daugherty
5b418d3f26 feat(infra): add PR labeler configurations and workflows (#33031) 2025-09-20 22:33:08 -04:00
Mason Daugherty
6b4054c795 chore(infra): update pre-commit hooks to include linting (#33029) 2025-09-21 02:26:19 +00:00
Mason Daugherty
30fde5af38 chore(infra): remove couchbase formatting hook from pre-commit (#33030)
Should've been done when it was removed from the monorepo
2025-09-20 22:09:57 -04:00
Mason Daugherty
781db9d892 chore: update pyproject.toml files, remove codespell (#33028)
- Removes Codespell from deps, docs, and `Makefile`s
- Python version requirements in all `pyproject.toml` files now use the
`~=` (compatible release) specifier
- All dependency groups and main dependencies now use explicit lower and
upper bounds, reducing potential for breaking changes
2025-09-20 22:09:33 -04:00
Sydney Runkle
f2b0afd0b7 release(langchain): 1.0.0a6 (#33024)
w/ improvements to HITL, state schema merging, dynamic system prompt
2025-09-19 18:47:41 +00:00
Sydney Runkle
c3654202a3 fix(langchain): use state schema as input schema to middleware nodes (#33023)
We want state schema as the input schema to middleware nodes because the
conditional edges after these nodes need access to the full state.

Also, we just generally want all state passed to middleware nodes, so we
should be specifying this explicitly. If we don't, the state annotations
used by users in their node signatures are used (so they might be
missing fields).
2025-09-19 18:43:33 +00:00
Chester Curme
fa4c302463 fix(anthropic): fix web_fetch test 2025-09-19 09:32:30 -04:00
Sydney Runkle
4d118777bc feat(langchain): dynamic system prompt middleware (#33006)
# Changes

## Adds support for `DynamicSystemPromptMiddleware`

```py
from langchain.agents.middleware import DynamicSystemPromptMiddleware
from langchain.agents.middleware.types import AgentState
from langgraph.runtime import Runtime
from typing_extensions import TypedDict

class Context(TypedDict):
    user_name: str

def system_prompt(state: AgentState, runtime: Runtime[Context]) -> str:
    user_name = runtime.context.get("user_name", "n/a")
    return f"You are a helpful assistant. Always address the user by their name: {user_name}"

middleware = DynamicSystemPromptMiddleware(system_prompt)
```

## Adds support for `runtime` in middleware hooks

```py
class AgentMiddleware(Generic[StateT, ContextT]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: StateT,
        runtime: Runtime[ContextT],  # Optional runtime parameter
    ) -> ModelRequest:
        # upgrade model if runtime.context.subscription is `top-tier` or whatever
```

## Adds support for omitting state attributes from input / output
schemas

```py
from typing import Annotated, NotRequired
from langchain.agents.middleware.types import PrivateStateAttr, OmitFromInput, OmitFromOutput

class CustomState(AgentState):
    # Private field - not in input or output schemas
    internal_counter: NotRequired[Annotated[int, PrivateStateAttr]]
    
    # Input-only field - not in output schema
    user_input: NotRequired[Annotated[str, OmitFromOutput]]
    
    # Output-only field - not in input schema  
    computed_result: NotRequired[Annotated[str, OmitFromInput]]
```

## Additionally
* Removes filtering of state before passing into middleware hooks

Typing is not foolproof here; still need to figure out some of the
generics stuff w/ state and context schema extensions for middleware.

TODO:
* More docs for middleware, should hold off on this until other prios
like MCP and deepagents are met

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-18 16:07:16 -04:00
ccurme
b215ed5642 chore(langchain): (v1) drop python 3.9 and relax dependency bounds (#33013) 2025-09-18 15:05:50 -04:00
Chester Curme
c891a51608 fix(infra): permit pre-releases in release workflow 2025-09-18 13:44:54 -04:00
ccurme
82650ea7f1 release(core): 1.0.0a4 (#33011) 2025-09-18 13:01:02 -04:00
ccurme
13964efccf revert: fix(standard-tests): add filename to PDF file block (#33010)
Reverts langchain-ai/langchain#32989 in favor of
https://github.com/langchain-ai/langchain/pull/33009.
2025-09-18 12:48:00 -04:00
ccurme
f247270111 feat(core): (v1) standard content for AWS (#32969)
https://github.com/langchain-ai/langchain-aws/pull/643
2025-09-18 12:47:26 -04:00
ccurme
8be4adccd1 fix(core): (v1) trace PDFs in v0 standard format (#33009) 2025-09-18 11:45:12 -04:00
Mason Daugherty
b6bd507198 Merge branch 'master' into wip-v1.0 2025-09-18 11:44:06 -04:00
Mason Daugherty
f158cea1e8 release(mistralai): 0.2.12 (#33008) 2025-09-18 11:42:11 -04:00
Mason Daugherty
3e0d7512ef core: update version.py 2025-09-18 10:58:51 -04:00
Mason Daugherty
53ed770849 Merge branch 'master' into wip-v1.0 2025-09-18 10:57:53 -04:00
Mason Daugherty
e2050e24ef release(core): 1.0.0a3 2025-09-18 10:57:22 -04:00
Sadiq Khan
90280d1f58 docs(core): fix bugs and improve example code in chat_history.py (#32994)
## Summary

This PR fixes several bugs and improves the example code in
`BaseChatMessageHistory` docstring that would prevent it from working
correctly.

### Bugs Fixed
- **Critical bug**: Fixed `json.dump(messages, f)` →
`json.dump(serialized, f)` - was using wrong variable
- **NameError**: Fixed bare variable references to use
`self.storage_path` and `self.session_id`
- **Missing imports**: Added required imports (`json`, `os`, message
converters) to make example runnable

### Improvements
- Added missing type hints following project standards (`messages() ->
list[BaseMessage]`, `clear() -> None`)
- Added robust error handling with `FileNotFoundError` exception
handling
- Added directory creation with `os.makedirs(exist_ok=True)` to prevent
path errors
- Improved performance: `json.load(f)` instead of `json.loads(f.read())`
- Added explicit UTF-8 encoding to all file operations
- Updated stores.py to use modern union syntax (`int | None` vs
`Optional[int]`)

### Test Plan
- [x] Code passes linting (`ruff check`)
- [x] Example code now has all required imports and proper syntax
- [x] Fixed variable references prevent runtime errors
- [x] Follows project's type annotation standards

The example code in the docstring is now fully functional and follows
LangChain's coding standards.

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
2025-09-18 09:34:19 -04:00
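A condensed sketch of the corrected docstring example (class name and storage path are illustrative):

```py
import json
from pathlib import Path

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, messages_from_dict, messages_to_dict

class FileChatMessageHistory(BaseChatMessageHistory):
    def __init__(self, file_path: str) -> None:
        self.file_path = Path(file_path)

    @property
    def messages(self) -> list[BaseMessage]:
        try:
            raw = json.loads(self.file_path.read_text(encoding="utf-8"))
        except FileNotFoundError:
            return []
        return messages_from_dict(raw)

    def add_message(self, message: BaseMessage) -> None:
        # Serialize the full history, then dump the *serialized* form.
        serialized = messages_to_dict([*self.messages, message])
        self.file_path.write_text(json.dumps(serialized), encoding="utf-8")

    def clear(self) -> None:
        self.file_path.unlink(missing_ok=True)
```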
Dushmanta
ee340e0a3b fix(docs): update dead link to docling github and docs (#33001)
- **Description:** Updated the dead/unreachable links to Docling from
the additional resources section of the langchain-docling docs
  - **Issue:** Fixes langchain-ai/docs/issues/574
  - **Dependencies:** None
2025-09-18 09:30:29 -04:00
Sydney Runkle
d5ba5d3511 feat(langchain): improved HITL patterns (#32996)
# Main changes / new features

## Better support for parallel tool calls

1. Support for multiple tool calls requiring human input
2. Support for combination of tool calls requiring human input + those
that are auto-approved
3. Support structured output w/ tool calls requiring human input
4. Support structured output w/ standard tool calls

## Shortcut for allowed actions

Adds a shortcut where tool config can be specified as a `bool`, meaning
"all actions allowed"

```py
HumanInTheLoopMiddleware(tool_configs={"expensive_tool": True})
```

## A few design decisions here
* We only raise one interrupt w/ all `HumanInterrupt`s; currently we
won't be able to execute all tools until all of these are resolved. This
isn't super blocking bc we can't re-invoke the model until all tools
have finished execution. That being said, if you have a long running
auto-approved tool, this could slow things down.

## TODOs

* Ideally, we would rename `accept` -> `approve`
* Ideally, we would rename `respond` -> `reject`
* Docs update (@sydney-runkle to own)
* In another PR I'd like to refactor testing to have one file for each
prebuilt middleware :)

Fast follow to https://github.com/langchain-ai/langchain/pull/32962
which was deemed as too breaking
2025-09-17 16:53:01 -04:00
Mason Daugherty
76d0758007 fix(docs): json_mode -> json_schema (#32993) 2025-09-17 18:21:34 +00:00
Mason Daugherty
8b3f74012c docs: update GenAI structured output section to include JSON mode details (#32992) 2025-09-17 17:40:34 +00:00
Mason Daugherty
59bb8bffd1 feat(core): make is_openai_data_block public and add filtering (#32991) 2025-09-17 11:41:41 -04:00
Mason Daugherty
16ec9bc535 fix(core): add back text to data content block check 2025-09-17 11:13:27 -04:00
Mason Daugherty
fda8a71e19 docs: further comments for clarity 2025-09-17 00:57:48 -04:00
Mason Daugherty
8f23bd109b fix: correct var name in comment 2025-09-17 00:39:01 -04:00
Mason Daugherty
ff632c1028 fix(standard-tests): add filename to PDF file block (#32989)
The standard PDF input test was creating file content blocks without a
filename field.

This caused a warning when the OpenAI block translator processed the
message for LangSmith tracing, since OpenAI requires filenames for file
inputs.
2025-09-17 00:35:49 -04:00
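A sketch of a file content block carrying the filename (block fields follow the v0 standard format; data is elided):

```py
pdf_block = {
    "type": "file",
    "source_type": "base64",
    "mime_type": "application/pdf",
    "data": "<base64-encoded PDF bytes>",
    "filename": "sample.pdf",  # required by OpenAI file inputs
}
```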
Mason Daugherty
3f71efc93c docs(core): update comments and docstrings 2025-09-17 00:35:30 -04:00
Mason Daugherty
cb3b5bf69b refactor(core): remove Pydantic v2 deprecation warning in tools base module
Replace deprecated `__fields__` attribute access with proper version-agnostic field retrieval using existing `get_fields`
2025-09-16 23:58:37 -04:00
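A minimal sketch of the version-agnostic accessor:

```py
from pydantic import BaseModel

from langchain_core.utils.pydantic import get_fields

class Point(BaseModel):
    x: int
    y: int

# Works across pydantic v1/v2, unlike direct `__fields__` access.
print(get_fields(Point).keys())
```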
Mason Daugherty
1c8a01ed03 cli: remove unnecessary changes & bump lock 2025-09-16 22:12:26 -04:00
Mason Daugherty
25333d3b45 Merge branch 'master' into wip-v1.0 2025-09-16 22:09:23 -04:00
Mason Daugherty
54a9556f5c chore(cli): update lock (#32986) 2025-09-17 02:08:20 +00:00
Mason Daugherty
42830208f3 Merge branch 'master' into wip-v1.0 2025-09-16 22:04:42 -04:00
Mason Daugherty
66041a2778 refactor(cli): target ruff 310 (#32985)
Use union types for optional parameters
2025-09-16 22:04:28 -04:00
Mason Daugherty
ab1b822523 chore: update PR title lint (#32983) 2025-09-16 22:04:19 -04:00
Chase Lean
543d90e108 docs: add langchain-scraperapi (#31973)
Adds documentation for the integration langchain-scraperapi, which
contains 3 tools using the ScraperAPI service.

The tools give AI agents the ability to:

- Scrape the web and return HTML/text/markdown
- Perform a Google search and return JSON output
- Perform an Amazon search and return JSON output

For reference, here is the official repo for langchain_scraperapi:
https://github.com/scraperapi/langchain-scraperapi
2025-09-16 21:46:20 -04:00
Adam Deedman
f8640630d8 docs: fix memory for agents (#32979)
Replaced the `input_message` parameter with a directly passed tuple, e.g.
`{"messages": [("user", "What is my name?")]}`

Before, memory wasn't working with the agent when using the
`input_message` parameter format.

Specifically, on page [Build an
Agent#adding-in-memory](https://python.langchain.com/docs/tutorials/agents/#adding-in-memory)

In the previous code, the query "What's my name?" wasn't working, as the
agent could not recall memory correctly.

<img width="860" height="679" alt="image"
src="https://github.com/user-attachments/assets/dfbca21e-ffe9-4645-a810-3be7a46d81d5"
/>
2025-09-16 15:46:15 -04:00
ccurme
653dc77c7e release(standard-tests): 1.0.0a1 (#32978) 2025-09-16 13:32:53 -04:00
Mason Daugherty
f9605c7438 chore(infra): update contribution guide link in CONTRIBUTING.md (#32976) 2025-09-16 15:15:53 +00:00
Mason Daugherty
ebd6f7d8a3 chore(infra): update security guidelines formatting (#32975) 2025-09-16 15:12:10 +00:00
Chester Curme
f6ab75ba8b Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain/pyproject.toml
#	libs/langchain/uv.lock
2025-09-16 10:46:43 -04:00
ccurme
e63c1d7171 chore(langchain): drop cap on python version (#32974) 2025-09-16 10:44:21 -04:00
Mason Daugherty
b6af0d228c residual lint 2025-09-16 10:31:42 -04:00
Mason Daugherty
5800721fd4 fix lint 2025-09-16 10:28:23 -04:00
Chester Curme
ebfb938a68 Merge branch 'master' into wip-v1.0 2025-09-16 10:20:24 -04:00
ccurme
1a3e9e06ee chore(langchain): (v1) drop 3.9 support (#32970)
Will need to add back optional deps when we have releases of them that
are compatible with langchain-core 1.0 alphas.
2025-09-16 10:14:23 -04:00
Mason Daugherty
8180020b93 chore: restore commented out optional deps (#32971)
langchain & langchain_v1
2025-09-16 10:10:49 -04:00
Username46786
435194acf6 docs: add cross-links between summarization how-to pages (#32968)
This PR improves navigation in the summarization how-to section by
adding
cross-links from the single-call guide to the related map-reduce and
refine
guides. This mirrors the docs style guide’s emphasis on clear
cross-references
and should help readers discover the appropriate pattern for longer
texts.

- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
  - /docs/how_to/summarize_map_reduce/
  - /docs/how_to/summarize_refine/

Type: docs-only (no code changes)
2025-09-16 09:59:03 -04:00
Mason Daugherty
c7ebbe5c8a Merge branch 'master' into wip-v1.0 2025-09-15 19:26:36 -04:00
Mason Daugherty
244c699551 refactor(cli): drop Python 3.9 (#32964) 2025-09-15 19:22:53 -04:00
Mason Daugherty
369858de19 chore(infra): fix codspeed (#32963)
Related to #32950

CodSpeed v4 runs pytest inside its own runner process, which does not
automatically inherit environment variables from the job
2025-09-15 21:52:46 +00:00
Mason Daugherty
76dfb7fbe8 fix(cli): lint 2025-09-15 17:32:11 -04:00
Mason Daugherty
f25133c523 fix: version equality 2025-09-15 17:31:16 -04:00
Mason Daugherty
fc7a07d6f1 . 2025-09-15 17:27:20 -04:00
Mason Daugherty
e8ff6f4db6 Merge branch 'master' into wip-v1.0 2025-09-15 17:27:08 -04:00
Ali Ismail
4ebce80fbb docs(langchain): add docstring for _load_map_reduce_chain (#32961)
Description:
Add a docstring to _load_map_reduce_chain in chains/summarize/ to
explain the purpose of the prompt argument and document function
parameters. This addresses an existing TODO in the codebase.

Issue:
N/A (documentation improvement only)

Dependencies:
None
2025-09-15 17:19:20 -04:00
Mason Daugherty
8670b24c8e test(groq): xfail tool integration test (#32960)
Groq models have known issues with tool calling consistency,
[particularly with complex tools derived from
runnables](https://github.com/langchain-ai/langchain/discussions/19990).
[(more)](https://github.com/langchain-ai/langchain/discussions/24309)

xfail until we can dedicate time to wrangling their API/model handling
2025-09-15 14:23:22 -04:00
Ademílson Tonato
8d60ddba3a docs: update installation command for firecrawl-py package (#32958) 2025-09-15 14:10:08 -04:00
Mason Daugherty
9f6431924f feat(openai): add max_tokens to AzureChatOpenAI (#32959)
Fixes #32949

This pattern is [present in
`ChatOpenAI`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L2821)
but wasn't carried over to Azure.


[CI](https://github.com/langchain-ai/langchain/actions/runs/17741751797/job/50417180998)
2025-09-15 14:09:20 -04:00
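A usage sketch (deployment name and API version are illustrative):

```py
from langchain_openai import AzureChatOpenAI

# max_tokens is now accepted at init, mirroring ChatOpenAI.
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",
    api_version="2024-10-21",
    max_tokens=256,
)
```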
Ali Ismail
569a3d9602 docs(langchain): add docstring for _load_stuff_chain (#32932)
**Description:**  
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.

**Issue:**  
N/A (documentation improvement only)

**Dependencies:**  
None
2025-09-15 10:02:50 -04:00
dependabot[bot]
8ef4df903f chore(infra): bump CodSpeedHQ/action from 3 to 4 (#32950)
Bumps [CodSpeedHQ/action](https://github.com/codspeedhq/action) from 3
to 4.
Release notes, sourced from [CodSpeedHQ/action's
releases](https://github.com/codspeedhq/action/releases):

**v4.0.0**

💥 BREAKING: it is now required to explicitly set the runner mode to
`instrumentation` or `walltime` using either:

- the `mode` argument
- or the `CODSPEED_RUNNER_MODE` environment variable

> [!TIP]
> Before, this variable was automatically set to `instrumentation` on every
> runner except for [CodSpeed macro
> runners](https://codspeed.io/docs/instruments/walltime), where it was set
> to `walltime` by default.

Find more details in [the instruments
documentation](https://codspeed.io/docs/instruments).

🚀 Features
- Make perf profiling enabled by default by @GuillaumeLagrange in CodSpeedHQ/runner#110
- Make the runner mode argument required by @GuillaumeLagrange
- Use introspected node in walltime mode by @GuillaumeLagrange in CodSpeedHQ/runner#108
- Add instrumented go shell script by @not-matthias in CodSpeedHQ/runner#102

🐛 Bug Fixes
- Compute proper load bias by @not-matthias in CodSpeedHQ/runner#107
- Increase timeout for first perf ping by @GuillaumeLagrange
- Prevent running with valgrind by @not-matthias in CodSpeedHQ/runner#106

🏗️ Refactor
- Change go-runner binary name by @not-matthias in CodSpeedHQ/runner#111

**Full Runner Changelog**: https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md

**v3.8.1**

🐛 Bug Fixes
- Don't show error when libpython is not found by @not-matthias

🏗️ Refactor
- Improve conditional compilation in `get_pipe_open_options` by @art049 in CodSpeedHQ/runner#100

⚙️ Internals
- Change log level to warn for venv_compat error by @not-matthias in CodSpeedHQ/runner#104

**Full Changelog**: https://github.com/CodSpeedHQ/action/compare/v3.8.0...v3.8.1
**Full Runner Changelog**: https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md

**v3.8.0**

... (truncated)
Commits:
- `653fdc3` Release v4.0.1 🚀
- `4da7be1` chore: bump runner version to 4.0.1
- `172d6c5` chore: make the comment about input validation more discrete
- `d15e1ce` chore: improve the release script
- `6eeb021` Release v4.0.0 🚀
- `74312da` chore: improve the release script
- `8a17a35` ci: add modes to the matrix
- `8e3f02a` feat: make the mode argument required
- `97c7a6f` chore: bump runner version to 4.0.0
- `8a4cadd` chore: point the changelog to the runner
- See full diff in [compare view](https://github.com/codspeedhq/action/compare/v3...v4)


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=CodSpeedHQ/action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:56:46 +00:00
doubleinfinity
b944bbc766 docs: add ZeusDB vector store integration (#32822)
## Description

This PR adds documentation for the new ZeusDB vector store integration
with LangChain.

## Motivation

ZeusDB is a high-performance vector database (Python/Rust backend)
designed for AI applications that need fast similarity search and
real-time vector ops. This integration brings ZeusDB's capabilities to
the LangChain ecosystem, giving developers another production-oriented
option for vector storage and retrieval.

**Key Features:**
- **User-Friendly Python API**: Intuitive interface that integrates
seamlessly with Python ML workflows
- **High Performance**: Powered by a robust Rust backend for
lightning-fast vector operations
- **Enterprise Logging**: Comprehensive logging capabilities for
monitoring and debugging production systems
- **Advanced Features**: Includes product quantization and persistence
capabilities
- **AI-Optimized**: Purpose-built for modern AI applications and RAG
pipelines

## Changes

- Added provider documentation:
`docs/docs/integrations/providers/zeusdb.mdx` (installation, setup).

- Added vector store documentation:
`docs/docs/integrations/vectorstores/zeusdb.ipynb` (quickstart for
creating/querying a ZeusDBVectorStore).

- Registered langchain-zeusdb in `libs/packages.yml` for discovery.

## Target users

- AI/ML engineers building RAG pipelines

- Data scientists working with large document collections

- Developers needing high-throughput vector search

- Teams requiring near real-time vector operations

## Testing

- Followed LangChain's "How to add standard tests to an integration"
guidance.
- Code passes format, lint, and test checks locally.
- Tested with LangChain Core 0.3.74
- Works with Python 3.10 to 3.13

## Package Information
**PyPI:** https://pypi.org/project/langchain-zeusdb
**Github:** https://github.com/ZeusDB/langchain-zeusdb
2025-09-15 09:55:14 -04:00
Filip Makraduli
0be7515abc docs: add superlinked retriever integration (#32433)
# feat(superlinked): add superlinked retriever integration

**Description:** 
Add Superlinked as a custom retriever with full LangChain compatibility.
This integration enables users to leverage Superlinked's multi-modal
vector search capabilities including text similarity, categorical
similarity, recency, and numerical spaces with flexible weighting
strategies. The implementation provides a `SuperlinkedRetriever` class
that extends LangChain's `BaseRetriever` with comprehensive error
handling, parameter validation, and support for various vector databases
(in-memory, Qdrant, Redis, MongoDB).
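The custom-retriever pattern this follows (a `BaseRetriever` subclass
honoring `k`) looks roughly like this self-contained toy sketch; it is
illustrative only and unrelated to Superlinked's actual internals:

```python
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class TinyRetriever(BaseRetriever):
    """Toy retriever: ranks texts by word overlap with the query."""

    corpus: list[str]
    k: int = 4

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> list[Document]:
        terms = set(query.lower().split())
        ranked = sorted(
            self.corpus,
            key=lambda text: len(terms & set(text.lower().split())),
            reverse=True,
        )
        return [Document(page_content=text) for text in ranked[: self.k]]
```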

**Key Features:**
- Full LangChain `BaseRetriever` compatibility with `k` parameter
support
- Multi-modal search spaces (text, categorical, numerical, recency)
- Flexible weighting strategies for complex search scenarios
- Vector database agnostic implementation
- Comprehensive validation and error handling
- Complete test coverage (unit tests, integration tests)
- Detailed documentation with 6 practical usage examples

**Issue:** N/A (new integration)

**Dependencies:** 
- `superlinked==33.5.1` (peer dependency, imported within functions)
- `pandas^2.2.0` (required by superlinked)

**Linkedin handle:** https://www.linkedin.com/in/filipmakraduli/

## Implementation Details

### Files Added/Modified:
- `libs/partners/superlinked/` - Complete package structure
- `libs/partners/superlinked/langchain_superlinked/retrievers.py` - Main
retriever implementation
- `libs/partners/superlinked/tests/unit_tests/test_retrievers.py` - unit
tests
- `libs/partners/superlinked/tests/integration_tests/test_retrievers.py`
- Integration tests with mocking
- `docs/docs/integrations/retrievers/superlinked.ipynb` - Documentation with
a few usage examples

### Testing:
- `make format` - passing
- `make lint` - passing 
- `make test` - passing (16 unit tests, integration tests)
- Comprehensive test coverage including error handling, validation, and
edge cases

### Documentation:
- Example notebook with 6 practical scenarios:
  1. Simple text search
  2. Multi-space blog search (content + category + recency)
  3. E-commerce product search (price + brand + ratings)
  4. News article search (sentiment + topics + recency)
  5. LangChain RAG integration example
  6. Qdrant vector database integration

### Code Quality:
- Follows LangChain contribution guidelines
- Backwards compatible
- Optional dependencies imported within functions
- Comprehensive error handling and validation
- Type hints and docstrings throughout

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:54:04 +00:00
Sadiq Khan
cc9a97a477 docs(core): add type hints to BaseStore example code (#32946)
## Summary
- Add comprehensive type hints to the MyInMemoryStore example code in
BaseStore docstring
- Improve documentation quality and educational value for developers
- Align with LangChain's coding standards requiring type hints on all
Python code

## Changes Made
- Added return type annotations to all methods (__init__, mget, mset,
mdelete, yield_keys)
- Added parameter type annotations using proper generic types (Sequence,
Iterator)
- Added instance variable type annotation for the store attribute
- Used modern Python union syntax (str | None) for optional types

## Test Plan
- Verified Python syntax validity with ast.parse()
- No functional changes to actual code, only documentation improvements
- Example code now follows best practices and coding standards

This change improves the educational value of the example code and
ensures consistency with LangChain's requirement that "All Python code
MUST include type hints and return types" as specified in the
development guidelines.
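For illustration, the typed example now looks roughly like this (a condensed
sketch, not the exact docstring):

```python
from collections.abc import Iterator, Sequence

from langchain_core.stores import BaseStore


class MyInMemoryStore(BaseStore[str, int]):
    """Toy in-memory store keyed by str, holding int values."""

    def __init__(self) -> None:
        self.store: dict[str, int] = {}

    def mget(self, keys: Sequence[str]) -> list[int | None]:
        return [self.store.get(key) for key in keys]

    def mset(self, key_value_pairs: Sequence[tuple[str, int]]) -> None:
        for key, value in key_value_pairs:
            self.store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self.store.pop(key, None)

    def yield_keys(self, *, prefix: str | None = None) -> Iterator[str]:
        yield from (
            key for key in self.store if prefix is None or key.startswith(prefix)
        )
```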

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:45:34 +00:00
Dmitry
ee17adb022 docs: add AI/ML API integration (#32430)
**Description:**
Introduces documentation notebooks for AI/ML API integration covering
the following use cases:
- Chat models (`ChatAimlapi`)
- Text completion models (`AimlapiLLM`)
- Provider usage examples
- Text embedding models (`AimlapiEmbeddings`)

Additionally, adds the `langchain-aimlapi` package entry to
`libs/packages.yml` for package management.

This PR aims to provide a comprehensive starting point for developers
integrating AI/ML API models with LangChain via the new
`langchain-aimlapi` package.
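A hypothetical quickstart based on the class names above (the model id and
constructor arguments are illustrative, not confirmed API):

```python
from langchain_aimlapi import ChatAimlapi  # class name per this PR

llm = ChatAimlapi(model="gpt-4o-mini", api_key="...")  # hypothetical arguments
print(llm.invoke("Say hello!").content)
```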

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** @aimlapi

---

### **To-Do Before Submitting PR:**

* [x] Run `make format`
* [x] Run `make lint`
* [x] Confirm all documentation notebooks are in
`docs/docs/integrations/`
* [x] Double-check `libs/packages.yml` has the correct repo path
* [x] Confirm no `pyproject.toml` modifications were made unnecessarily

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 09:41:40 -04:00
Noraina
6a43f140bc docs: update SerpApi free searches amount in tool feature table (#32945)
**Description:** 
This PR updates the free searches per month from **100** to **250** and
renames SerpAPI to [SerpApi](https://serpapi.com/) to prevent confusion.
Add import API keys and enhance usage instructions in the Jupyter
notebook

**Issue:** N/A

**Dependencies:** N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.
2025-09-14 21:42:59 -04:00
Youngho Kim
4619a2727f docs(anthropic): update documentation links (#32938)
**Description:**
This PR updated links to the latest Anthropic documentation. Changes
include revised links for model overview, tool usage, web search tool,
text editor tool, and more.

**Issue:**
N/A

**Dependencies:**
None

**Twitter handle:**
N/A
2025-09-14 21:38:51 -04:00
湛露先生
6487a7e2e5 chore(langchain): remove duplicate .pdf listing (#32929) 2025-09-14 21:33:40 -04:00
湛露先生
406ebc9141 chore(langchain): Fix typos in core docstrings (#32928)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-09-14 21:33:06 -04:00
Nikhil Chandrappa
e6b5ff213a docs: add YugabyteDB Distributed SQL database (#32571)
- **Description:** The `langchain-yugabytedb` package provides
implementations of core LangChain abstractions using the `YugabyteDB`
Distributed SQL Database.
  
YugabyteDB is a cloud-native distributed PostgreSQL-compatible database
that combines strong consistency with ultra-resilience, seamless
scalability, geo-distribution, and highly flexible data locality to
deliver business-critical, transactional applications.

[YugabyteDB](https://www.yugabyte.com/ai/) combines the power of the
`pgvector` PostgreSQL extension with an inherently distributed
architecture. This future-proofed foundation helps you build GenAI
applications using RAG retrieval that demands high-performance vector
search.

- [ ] **tests and docs**: 
1. `langchain-yugabytedb`
[github](https://github.com/yugabyte/langchain-yugabytedb) repo.
2. YugabyteDB VectorStore example notebook showing its use. It lives in
`langchain/docs/docs/integrations/vectorstores/yugabytedb.ipynb`
directory.
  3. Running `langchain-yugabytedb` unit tests 
  
- Setting up a Development Environment

This document details how to set up a local development environment that
will
allow you to contribute changes to the project.

Acquire sources and create virtualenv.
```shell
git clone https://github.com/yugabyte/langchain-yugabytedb
cd langchain-yugabytedb
uv venv --python=3.13
source .venv/bin/activate
```

Install package in editable mode.
```shell
uv pip install pipx  
pipx install poetry
poetry install
uv pip install pytest pytest_asyncio pytest-timeout langchain-core langchain_tests sqlalchemy psycopg psycopg-binary numpy pgvector
```

Start YugabyteDB RF-1 Universe.
```shell
docker run -d --name yugabyte_node01 --hostname yugabyte01 \
  -p 7000:7000 -p 9000:9000 -p 15433:15433 -p 5433:5433 -p 9042:9042 \
  yugabytedb/yugabyte:2.25.2.0-b359 bin/yugabyted start --background=false \
  --master_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true" \
  --tserver_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true"

docker exec -it yugabyte_node01 bin/ysqlsh -h yugabyte01 -c "CREATE extension vector;"
```

Invoke test cases.
```shell
pytest -vvv tests/unit_tests/yugabytedb_tests
```
2025-09-12 16:55:09 -04:00
Michael Yilma
03f0ebd93e docs: add Bigtable Key-value Store and Vector Store Docs (#32598)
- [x] **feat(docs)**: add Bigtable Key-value store doc
- [X] **feat(docs)**: add Bigtable Vector store doc 

This PR adds a doc for Bigtable and LangChain Key-value store
integration. It contains guides on how to add, delete, get, and yield
key-value pairs from Bigtable Key-value Store for LangChain.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.

Additional guidelines:

- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to `pyproject.toml` files (even
optional ones) unless they are **required** for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:53:59 -04:00
Bar Cohen
c9eed530ce docs: add Timbr tools integration (#32862)
# feat(integrations): Add Timbr tools integration

## DESCRIPTION

This PR adds comprehensive documentation and integration support for
Timbr's semantic layer tools in LangChain.

[Timbr](https://timbr.ai/) provides an ontology-driven semantic layer
that enables natural language querying of databases through
business-friendly concepts. It connects raw data to governed business
measures for consistent access across BI, APIs, and AI applications.

[`langchain-timbr`](https://pypi.org/project/langchain-timbr/) is a
Python SDK that extends
[LangChain](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangChain)
and
[LangGraph](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangGraph)
with custom agents, chains, and nodes for seamless integration with the
Timbr semantic layer. It enables converting natural language prompts
into optimized semantic-SQL queries and executing them directly against
your data.

**What's Added:**
- Complete integration documentation for `langchain-timbr` package
- Tool documentation page with usage examples and API reference

**Integration Components:**
- `IdentifyTimbrConceptChain` - Identify relevant concepts from user
prompts
- `GenerateTimbrSqlChain` - Generate SQL queries from natural language
- `ValidateTimbrSqlChain` - Validate queries against knowledge graph
schemas
- `ExecuteTimbrQueryChain` - Execute queries against semantic databases
- `GenerateAnswerChain` - Generate human-readable answers from results

## Documentation Added

- `/docs/integrations/providers/timbr.mdx` - Provider overview and
configuration
- `/docs/integrations/tools/timbr.ipynb` - Comprehensive tool usage
examples

## Links

- [PyPI Package](https://pypi.org/project/langchain-timbr/)
- [GitHub Repository](https://github.com/WPSemantix/langchain-timbr)
- [Official
Documentation](https://docs.timbr.ai/doc/docs/integration/langchain-sdk/)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:51:42 -04:00
tbice
e6c38a043f docs: add Qwen integration guide and update qwq documentation (#32817)
**Description:**  
Add documentation for Qwen integration in LangChain, including setup
instructions, usage examples, and configuration details. Update related
qwq documentation to reflect current best practices and improve clarity
for users.

This PR enhances the documentation ecosystem by:
- Adding a new guide for integrating Qwen models
- Updating outdated or incomplete qwq documentation
- Improving structure and readability of relevant sections

**Issue:** N/A  
**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:49:20 -04:00
Elif Sema Balcioglu
dc47c2c598 docs: update langchain-oracledb documentation (#32805)
`Oracle AI Vector Search` integrations for LangChain have been moved to
a dedicated package, [langchain-oracledb
](https://pypi.org/project/langchain-oracledb/), and a new repository,
[langchain-oracle
](https://github.com/oracle/langchain-oracle/tree/main/libs/oracledb).
This PR updates the corresponding documentation, including installation
instructions and import statements, to reflect these changes.

This PR is complemented with:
https://github.com/langchain-ai/langchain-community/pull/283

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:47:10 -04:00
Yuvraj Chandra
3420ca1da2 docs: add ZenRows provider and tool integration docs (#31742)
**Description:** Adds documentation for ZenRows integration with
LangChain, including provider overview and detailed tool documentation.
ZenRows is an enterprise-grade web scraping solution that enables
LangChain agents to extract web content at scale with advanced features
like JavaScript rendering, anti-bot bypass, geo-targeting, and multiple
output formats.

This PR includes:
- Provider documentation
(`docs/docs/integrations/providers/zenrows.ipynb`)
- Tool documentation
(`docs/docs/integrations/tools/zenrows_universal_scraper.ipynb`)
- Complete usage examples and API reference links

**Issue:** N/A

**Dependencies:** 
- [langchain-zenrows](https://github.com/ZenRows-Hub/langchain-zenrows)
package (external, available on
[PyPI](https://pypi.org/project/langchain-zenrows/))
- No changes to core LangChain dependencies

**LinkedIn handle:** https://www.linkedin.com/company/zenrows/

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:37:49 -04:00
Vishal Karwande
f11dd177e9 docs: update oci documentation and examples. (#32749)
Adding Oracle Generative AI as one of the providers for langchain.
Updated the old examples in the documentation with the new working
examples.

---------

Co-authored-by: Vishal Karwande <vishalkarwande@Vishals-MacBook-Pro.local>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:28:03 -04:00
Ali Ismail
d5a4abf960 docs(core): remove duplicate 'the' in indexing/api.py (#32924)
**Description:** Fixes a small typo in `_get_document_with_hash` inside 
`libs/core/langchain_core/indexing/api.py`.

**Issue:** N/A (no related issue)

**Dependencies:** None
2025-09-12 15:49:54 -04:00
Eugene Yurtsev
b1497bcea1 chore(core): test that default values in tool calls are preserved in json schema representation (#32921)
Add unit test coverage for this issue:
https://github.com/langchain-ai/langchain/issues/32232
2025-09-12 12:50:54 -04:00
ccurme
b88115f6fc feat(openai): (v1) support pdfs passed via url in standard format (#32876) 2025-09-12 10:44:00 -04:00
Sydney Runkle
84f9824cc9 chore: use uv caches (#32919)
Especially helpful for the text splitters tests where we're installing
pytorch (expensive and slow slow slow). Should speed up CI by 5-10 mins.

w/o caches, CI taking 20 minutes 😨 
w/ caches, CI taking 3 minutes
2025-09-12 10:29:35 -04:00
Sydney Runkle
0814bfe5ed ci: use partial runs w/ codspeed (#32920)
Taking advantage of [partial
runs](https://codspeed.io/docs/features/partial-runs)!

This should save us minutes on every CI job, we only run codspeed for
libs w/ changes and this doesn't affect benchmarking drops
2025-09-12 09:46:01 -04:00
Christophe Bornet
cbaf97ada4 chore: bump mypy version to 1.18 (#32914) 2025-09-12 09:19:23 -04:00
Sydney Runkle
dc2da95ac0 release(langchain): v1.0.0a5 (#32917) 2025-09-12 08:36:44 -04:00
Sydney Runkle
9e78ff19ab fix(langchain): use messages from model request (#32908)
Oversight when moving back to basic function call for
`modify_model_request` rather than implementation as its own node.

Basic test right now failing on main, passing on this branch

Revealed a gap in testing. Will write up a more robust test suite for
basic middleware features.
2025-09-12 08:18:02 -04:00
Mason Daugherty
67aa37b144 Merge branch 'master' into wip-v1.0 2025-09-11 23:21:37 -04:00
Mason Daugherty
649d8a8223 test(anthropic): enable VCR for web fetch test (#32913)
The API issues have been resolved; no longer xfailing
2025-09-12 03:19:55 +00:00
Mason Daugherty
9f14714367 fix: anthropic tests (stale cassette from max dynamic tokens) 2025-09-11 22:44:16 -04:00
Mason Daugherty
338d3d2795 chore: remove infra tag from task issue template (#32912) 2025-09-11 22:02:14 -04:00
Mason Daugherty
31f641a11f chore(infra): issue template updates (#32911) 2025-09-11 22:00:44 -04:00
open-swe[bot]
91286b0b27 chore(infra): issue template updates (#32910)
Fixes: #32909

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:53:35 -04:00
dishaprakash
bea72bac3e docs: add hybrid search documentation to PGVectorStore (#32549)
Adding documentation for Hybrid Search in the PGVectorStore Notebook

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:12:58 -04:00
Mason Daugherty
7cc9312979 fix: anthropic test since new dynamic max tokens 2025-09-11 17:53:30 -04:00
Mason Daugherty
387d0f4edf fix: lint 2025-09-11 17:52:46 -04:00
Mason Daugherty
207ea46813 Merge branch 'master' into wip-v1.0 2025-09-11 17:24:45 -04:00
Caspar Broekhuizen
15d558ff16 fix(core): resolve mermaid node id collisions when special chars are used (#32857)
### Description

* Replace the Mermaid graph node label escaping logic
(`_escape_node_label`) with `_to_safe_id`, which converts a string into
a unique, Mermaid-compatible node id. Ensures nodes with special
characters always render correctly.

**Before**
* Invalid characters (e.g. `开`) replaced with `_`. Causes collisions
between nodes with names that are the same length and contain all
non-safe characters:
```python
_escape_node_label("开") # '_'
_escape_node_label("始") # '_'  same as above, but different character passed in. not a unique mapping.
```

**After**
```python
_to_safe_id("开") # \5f00
_to_safe_id("始") # \59cb  unique!
```

### Tests
* Rename `test_graph_mermaid_escape_node_label()` to
`test_graph_mermaid_to_safe_id()` and update function logic to use
`_to_safe_id`
* Add `test_graph_mermaid_special_chars()`

### Issue

Fixes langchain-ai/langgraph#6036
2025-09-11 14:15:17 -07:00
Hyunjoon Jeong
9cc85387d1 fix(text-splitters): add validation to prevent infinite loop and prevent empty token splitter (#32205)
### Description
1) Add validation enforcing `tokenizer.tokens_per_chunk > tokenizer.chunk_overlap`
to prevent an infinite loop (see the sketch below)
2) Avoid empty decoded chunk when splitter appends tokens
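A sketch of the kind of guard described in (1), assuming the sliding-window
arithmetic used by the token splitter (illustrative, not the exact code):

```python
def validate_window(tokens_per_chunk: int, chunk_overlap: int) -> None:
    # The window advances by tokens_per_chunk - chunk_overlap each step,
    # so the overlap must be strictly smaller or the loop never progresses.
    if chunk_overlap >= tokens_per_chunk:
        raise ValueError(
            "chunk_overlap must be smaller than tokens_per_chunk; got "
            f"overlap={chunk_overlap}, chunk={tokens_per_chunk}."
        )
```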

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:55:32 -04:00
Mason Daugherty
7e5180e2fa refactor: inline test release (#32903)
Reusable workflows are not currently supported by PyPI's Trusted
Publishing
functionality, and are subject to breakage. Users are strongly
encouraged
to avoid using reusable workflows for Trusted Publishing until support
becomes official. Please, do not report bugs if this breaks.
2025-09-11 16:20:07 -04:00
Mason Daugherty
bbb1b9085d release(prompty): 0.1.2 (#32907) 2025-09-11 16:19:07 -04:00
Vincent Min
ff9f17bc66 fix(core): preserve ordering in RunnableRetry batch/abatch results (#32526)
Description: Fixes a bug in RunnableRetry where .batch / .abatch could
return misordered outputs (e.g. inputs [0,1,2] yielding [1,1,2]) when
some items succeeded on an earlier attempt and others were retried. Root
cause: successful results were stored keyed by the index within the
shrinking “pending” subset rather than the original input index, causing
collisions and reordered/duplicated outputs after retries. Fix updates
_batch and _abatch to:

- Track remaining original indices explicitly.
- Call underlying batch/abatch only on remaining inputs.
- Map results back to original indices.
- Preserve final ordering by reconstructing outputs in original
positional order.
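A minimal sketch of the index-tracking pattern described above (illustrative
only, not the actual implementation):

```python
from typing import Any, Callable


def batch_with_retry(
    inputs: list[Any],
    batch_fn: Callable[[list[Any]], list[Any]],
    max_attempts: int = 3,
) -> list[Any]:
    """Retry failed items while keying results by their original index."""
    results: dict[int, Any] = {}
    remaining = list(range(len(inputs)))  # original indices, tracked explicitly
    for _ in range(max_attempts):
        outputs = batch_fn([inputs[i] for i in remaining])
        still_failing = []
        for idx, out in zip(remaining, outputs):
            if isinstance(out, Exception):
                still_failing.append(idx)
            else:
                results[idx] = out  # keyed by original index, not subset position
        remaining = still_failing
        if not remaining:
            break
    # Reconstruct outputs in original positional order.
    return [results.get(i) for i in range(len(inputs))]
```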

Issue: Fixes #21326

Tests:

- Added regression tests: test_retry_batch_preserves_order and
test_async_retry_batch_preserves_order asserting correct ordering after
a single controlled failure + retry.
- Existing retry tests still pass.

Dependencies:

- None added or changed.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:18:25 -04:00
Matthew Lapointe
b1f08467cd feat(core): allow overriding ls_model_name from kwargs (#32541) 2025-09-11 16:18:06 -04:00
Eugene Yurtsev
2903e08311 chore(docs): remove langchain_experimental from api reference (#32904)
This removes langchain-experimental from api reference.

We do not recommend it to users for production use cases, so let's also
deprecate it from documentation
2025-09-11 16:13:58 -04:00
Mason Daugherty
115e20a0bc release(ollama): 0.3.8 (#32906) 2025-09-11 16:00:41 -04:00
Mason Daugherty
0ea945d291 release(nomic): 0.1.5 (#32905) 2025-09-11 15:54:19 -04:00
Mason Daugherty
5795ec3c4d release(exa): 0.3.1 (#32902) 2025-09-11 15:53:13 -04:00
Mason Daugherty
bd765753ca release(chroma): 0.2.6 (#32901) 2025-09-11 15:52:19 -04:00
Christophe Bornet
5fd7962a78 fix(core): fix support of Pydantic v1 models in BaseTool.args (#32487)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:44:51 -04:00
Marcus Chia
c68796579e fix(core): resolve infinite recursion in _dereference_refs_helper with mixed $ref objects (#32578)
**Description:** Fixes infinite recursion issue in JSON schema
dereferencing when objects contain both $ref and other properties (e.g.,
nullable, description, additionalProperties). This was causing Apollo
MCP server schemas to hang indefinitely during tool binding.

**Problem:**
- Commit fb5da8384 changed the condition from `set(obj.keys()) ==
{"$ref"}` to `"$ref" in set(obj.keys())`
- This caused objects with $ref + other properties to be treated as pure
$ref nodes
- Result: other properties were lost and infinite recursion occurred
with complex schemas

**Solution:**
- Restore pure $ref detection for objects with only $ref key  
- Add proper handling for mixed $ref objects that preserves all
properties
- Merge resolved reference content with other properties
- Maintain cycle detection to prevent infinite recursion
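An illustration of the two node shapes (semantics as described above):

```python
# Pure $ref node: the only key is "$ref" -> inline the target wholesale.
pure_ref = {"$ref": "#/$defs/Address"}

# Mixed $ref node: resolve "$ref", then merge the sibling keys on top.
mixed_ref = {
    "$ref": "#/$defs/Address",
    "description": "Shipping address",
    "nullable": True,
}
# After dereferencing, mixed_ref keeps description/nullable layered over the
# resolved Address schema instead of dropping them or recursing forever.
```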

**Impact:**
- Fixes Apollo MCP server schema integration
- Resolves tool binding infinite recursion with complex GraphQL schemas
- Preserves backward compatibility with existing functionality
- No performance impact - actually improves handling of complex schemas

**Issue:** Fixes #32511

**Dependencies:** None

**Testing:**
- Added comprehensive unit tests covering mixed $ref scenarios
- All existing tests pass (1326 passed, 0 failed)
- Tested with realistic Apollo GraphQL schemas
- Stress tested with 100 iterations of complex schemas

**Verification:**
-  `make format` - All files properly formatted
-  `make lint` - All linting checks pass  
-  `make test` - All 1326 unit tests pass
-  No breaking changes - full backwards compatibility maintained

---------

Co-authored-by: Marcus <marcus@Marcus-M4-MAX.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 15:21:31 -04:00
Mason Daugherty
255ad31955 release(anthropic): 0.3.20 (#32900) 2025-09-11 15:18:43 -04:00
Mason Daugherty
00e992a780 feat(anthropic): web fetch beta (#32894)
Note: citations are broken until Anthropic fixes their API
2025-09-11 15:14:06 -04:00
Mason Daugherty
83d938593b lint 2025-09-11 15:12:38 -04:00
Mason Daugherty
38afeddcb6 fix(groq): update docs due to model deprecation (#32899)
On Friday, October 10th, the moonshotai/kimi-k2-instruct model will be
decommissioned in favor of the latest version,
moonshotai/kimi-k2-instruct-0905.
 
Until then, requests to moonshotai/kimi-k2-instruct will automatically
be routed to moonshotai/kimi-k2-instruct-0905.
2025-09-11 15:00:24 -04:00
Yu Zhong
fca1aaa9b5 fix(core): force overwrite additionalProperties to False in strict mode (#32879)
# Description
This PR fixes a bug in _recursive_set_additional_properties_false used
in function_calling.convert_to_openai_function.

Previously, schemas with "additionalProperties=True" were not correctly
overridden when strict validation was expected, which could lead to
invalid OpenAI function schemas.

The updated implementation ensures that:
- Any schema with "additionalProperties" already set will now be forced
to False under strict mode.
- Recursive traversal of properties, items, and anyOf is preserved.
- Function signature remains unchanged for backward compatibility.
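A condensed sketch of the recursive overwrite (not the exact implementation):

```python
def force_no_additional_properties(schema: dict) -> None:
    """Force additionalProperties=False on every object schema, recursively."""
    if isinstance(schema.get("properties"), dict):
        schema["additionalProperties"] = False  # overwrite even if already True
        for sub in schema["properties"].values():
            force_no_additional_properties(sub)
    items = schema.get("items")
    if isinstance(items, dict):
        force_no_additional_properties(items)
    for sub in schema.get("anyOf", []):
        force_no_additional_properties(sub)
```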

# Issue
When using tool calling in OpenAI structured output strict mode
(strict=True), a 400 error ("Invalid schema for response_format XXXXX:
'additionalProperties' is required to be supplied and to be false") is
raised for parameters that contain a dict type, since OpenAI requires
additionalProperties to be set to False.
Several PRs have tried to resolve the issue.
- PR #25169 introduced _recursive_set_additional_properties_false to
recursively set additionalProperties=False.
- PR #26287 fixed handling of empty parameter tools for OpenAI function
generation.
- PR #30971 added support for Union type arguments in strict mode of
OpenAI function calling / structured output.

Despite these improvements, since Pydantic 2.11, Pydantic always adds
`additionalProperties: True` for arbitrary dictionary schemas (dict or
Any; see https://pydantic.dev/articles/pydantic-v2-11-release#changes).
Schemas that already had additionalProperties=True in such cases were
not being overridden, which this PR addresses to ensure strict mode
behaves correctly in all cases.

# Dependencies
No Changes

---------

Co-authored-by: Zhong, Yu <yzhong@freewheel.com>
2025-09-11 11:02:12 -04:00
Jonathan Paserman
af17774186 docs: add MLflow tracking and evaluation cookbook (#32667)
This PR adds a new cookbook demonstrating how to build a RAG pipeline
with LangChain and track + evaluate it using MLflow.
There is currently not much documentation on the LangChain MLflow
integration; hopefully this can help folks trying to monitor and evaluate
their LangChain applications.

- ArXiv document loader 
- In Memory vector store
- LCEL rag pipeline
- MLflow tracing
- MLflow evaluation

Issue:
N/A

Dependencies:
N/A
2025-09-10 22:55:28 -04:00
chen-assert
d72da29c0b docs: Fix classification notebook small mistake (#32636)
Fix some minor issues in the Classification notebook, where some code is
still using a hardcoded OpenAI model instead of the selected chat model.

Specifically, on page [Classify Text into
Labels](https://python.langchain.com/docs/tutorials/classification/)

We select a chat model earlier and call `init_chat_model` with our chosen
model (screenshot of the model-selection cell omitted).

But the following sample code still uses the hard-coded OpenAI model,
which in my case is unrunnable for lack of an OpenAI API key (screenshot
of the offending cell omitted).
2025-09-10 22:43:44 -04:00
Amit Biswas
653b0908af docs: update Confident callback integration and examples (#32458)
**Description:**
Updates the Confident AI integration documentation to use modern
patterns and improve code quality. This change:
- Replaces deprecated `DeepEvalCallbackHandler` with the new
`CallbackHandler` from `deepeval.integrations.langchain`
- Updates installation and authentication instructions to match current
best practices
- Adds modern integration examples using LangChain's latest patterns
- Removes deprecated metrics and outdated code examples
- Updates code samples to follow current best practices

The changes make the documentation more maintainable and ensure users
follow the recommended integration patterns.

**Issue:** Fixes #32444

**Dependencies:**
- deepeval
- langchain
- langchain-openai

**Twitter handle:** @Muwinuddin

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-10 22:43:31 -04:00
GDanksAnchor
eb77da7de5 docs: add name title for Anchor Browser (#32512)
# Description
Change the sidebar name from anchor_browser to Anchor Browser.

# Issue
The anchor_browser sidebar name looks unattractive.
2025-09-10 22:40:37 -04:00
Tianyu Chen
9c93439a01 docs: add Linux quick setup method for JaguarDB (#32520)
Description:
Added "Method Two: Quick Setup (Linux)" section to prerequisites,
providing a curl-based installation method for deploying JaguarDB
without Docker. Retained original Docker setup instructions for
flexibility.
2025-09-10 22:36:01 -04:00
Marco Vinciguerra
64fe1e9a80 docs: update scrapegraph.ipynb (#32617)
Updated the ScrapeGraphAI notebook to cover the new ScrapeGraphAI tool
2025-09-10 22:33:57 -04:00
chen-assert
e4a90490c3 docs: Fix agents tutorials parameter missing (#32639)
Fix a minor issue in the Agents tutorial notebook, where a `config`
parameter is missing.

Specifically, on page [Build an Agent#Streaming
tokens](https://python.langchain.com/docs/tutorials/agents/#streaming-tokens)

These pieces of code cannot be run without the `config` parameter, which
seems to have been omitted by the author (screenshot of the affected cell
omitted).
2025-09-10 22:27:24 -04:00
dwelch-spike
80776b80f0 docs: remove aerospike vector store (#32726)
- **Description:** Aerospike Vector Store has been retired. It is no
longer supported, so it should no longer be documented on the LangChain
site.

- **Add tests and docs**: Removes docs for retired Aerospike vector
store.

- **Lint and test**: NA
2025-09-10 22:19:43 -04:00
may
2c2bab93fc docs: add example for reusing an existing collection (#32774)
### Description
Added a short section to the Weaviate integration docs showing how to
connect to an existing collection (reuse an index) with
`WeaviateVectorStore`. This helps clarify required parameters
(`index_name`, `text_key`) when loading a pre-existing store, which was
previously missing.

### Issue
Fixes langchain-ai/langchain-weaviate#197

### Dependencies
None
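A minimal sketch of the documented pattern (collection name is a placeholder,
and a fake embedding stands in for a real model):

```python
import weaviate
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_weaviate.vectorstores import WeaviateVectorStore

client = weaviate.connect_to_local()  # or weaviate.connect_to_weaviate_cloud(...)
store = WeaviateVectorStore(
    client=client,
    index_name="MyExistingCollection",  # the pre-existing collection (placeholder)
    text_key="text",
    embedding=DeterministicFakeEmbedding(size=1536),  # stand-in embeddings
)
docs = store.similarity_search("my query", k=4)
```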
2025-09-10 22:16:46 -04:00
Mateusz Świtała
221c96e7b4 docs: fix import path in WatsonxToolkit after releasing langchain-ibm 0.3.17 (#32746)
**Description:** Fix the import path for `WatsonxToolkit` in examples
after releasing `langchain-ibm==0.3.17`.

2025-09-10 22:14:43 -04:00
Mason Daugherty
3ef1165c0c Merge branch 'master' into wip-v1.0 2025-09-10 22:07:40 -04:00
yrk111222
364465bd11 docs: update modelscope.mdx (#32823)
### Description
This PR is primarily aimed at updating some usage methods in the
`modelscope.mdx` file.
Specifically, it changes from `ModelScopeLLM` to `ModelScopeEndpoint`.
### Relevant PR
The relevant PR link is:
https://github.com/langchain-ai/langchain/pull/28941
2025-09-10 22:07:19 -04:00
Mason Daugherty
ffee5155b4 fix lint 2025-09-10 22:03:45 -04:00
Mason Daugherty
5ef7d42bf6 refactor(core): remove example attribute from AIMessage and HumanMessage (#32565) 2025-09-10 22:01:25 -04:00
Mason Daugherty
750a3ffda6 Merge branch 'master' into wip-v1.0 2025-09-10 21:51:38 -04:00
Mason Daugherty
7b874da9b2 fix(docs): text-embedding-004 -> gemini-embedding-001 (#32596)
`text-embedding-004` will be discontinued
2025-09-10 21:47:45 -04:00
Vadym Barda
8b1e25461b fix(core): add on_tool_error to _AstreamEventsCallbackHandler (#30709)
Fixes https://github.com/langchain-ai/langchain/issues/30708

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-10 21:34:05 -04:00
Mason Daugherty
ced9fc270f Merge branch 'master' into wip-v1.0 2025-09-10 21:32:03 -04:00
Mason Daugherty
8e213c9f1a fix(core): AsyncCallbackHandler docstring cleanup (#32897)
plus IDE warning fixes
2025-09-10 21:31:45 -04:00
Mason Daugherty
311aa94d69 Merge branch 'master' into wip-v1.0 2025-09-10 20:57:17 -04:00
Yash Vishwanath Tobre
a8828b1bda fix(core): raise OutputParserException for non-dict JSON outputs (#32236)
**Description:**
Raise a more descriptive OutputParserException when JSON parsing results
in a non-dict type. This improves debugging and aligns behavior with
expectations when using expected_keys.
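An illustrative check of the new behavior (assuming the
`parse_and_check_json_markdown` call site):

```python
from langchain_core.exceptions import OutputParserException
from langchain_core.utils.json import parse_and_check_json_markdown

try:
    # Valid JSON, but a list rather than a dict, so keys can't be checked.
    parse_and_check_json_markdown('["not", "a", "dict"]', ["answer"])
except OutputParserException as exc:
    print(exc)  # now a descriptive message rather than an opaque failure
```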

**Issue:**
Fixes #32233

**Twitter handle:**
@yashvtobre

**Testing:**

- Ran make format and make lint from the root directory; both passed
cleanly.
- Attempted make test but no such target exists in the root Makefile.
- Executed tests directly via pytest targeting the relevant test file,
confirming all tests pass except for unrelated async test failures
outside the scope of this change.

**Notes:**

- No additional dependencies introduced.
- Changes are backward compatible and isolated within the output parser
module.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-10 20:57:09 -04:00
Mason Daugherty
7a158c7f1c revert: "chore: remove ruff target-version" (#32895)
Reverts langchain-ai/langchain#32880

Not needed at the moment, will do when finishing v1
2025-09-10 20:56:48 -04:00
Mason Daugherty
cb8598b828 Merge branch 'master' into wip-v1.0 2025-09-10 20:51:00 -04:00
Daniel Barker
25c34bd9b2 feat(core): allow custom Mermaid URL (#32831)
- **Description:** Currently,
`langchain_core.runnables.graph_mermaid.py` is hardcoded to use
mermaid.ink to render graph diagrams. It would be nice to allow users to
specify a custom URL, e.g. for self-hosted instances of the Mermaid
server.
- **Issue:** [Langchain Forum: allow custom mermaid API
URL](https://forum.langchain.com/t/feature-request-allow-custom-mermaid-api-url/1472)
  - **Dependencies:** None

- [X] **Add tests and docs**: Added unit tests using mock requests.
- [X] **Lint and test**: Run `make format`, `make lint` and `make test`.

Minimal example using the feature:

```python
import os
import operator
from pathlib import Path
from typing import Any, Annotated, TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: Annotated[list[dict[str, Any]], operator.add]

def hello_node(state: State) -> State:
    return {"messages": [{"role": "assistant", "content": "pong!"}]}

builder = StateGraph(State)
builder.add_node("hello_node", hello_node)
builder.add_edge("__start__", "hello_node")
builder.add_edge("hello_node", "__end__")

graph = builder.compile()

# Run graph
output = graph.invoke({"messages": [{"role": "user", "content": "ping?"}]})

# Draw graph
Path("graph.png").write_bytes(graph.get_graph().draw_mermaid_png(base_url="https://custom-mermaid.ink"))
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-10 17:14:50 -04:00
Mason Daugherty
38001699d5 fix(anthropic): remove unneeded beta flags (#32893)
- Beta isn't needed for search result tests anymore
- Add TODO for other tests to come back when generally available
- Regenerate remote MCP snapshot after some testing (now the same, but
fresher)
- Bump deps
2025-09-10 20:47:13 +00:00
Mason Daugherty
3da0377c02 fix(anthropic): update ChatAnthropic model in tests/doc (#32892)
from `'claude-3-5-sonnet-latest'` to `'claude-3-5-haiku-latest'` since
sonnet is deprecated
2025-09-10 16:44:04 -04:00
JADAVA VINEETH KUMAR RAO
0abf82a45a fix(openai): ainvoke uses async _aget_response; add async tests (#32459) 2025-09-10 15:52:15 -04:00
Jonathan Hill
2fed177d0b fix(core): preserve ToolMessage.status field in convert_to_messages (#32840) 2025-09-10 15:49:39 -04:00
Eugene Yurtsev
fded6c6b13 chore(core): remove beta namespace and context api (#32850)
* Remove the `beta` namespace from langchain_core
* Remove the context API (never documented and makes it easier to create
memory leaks). This API was a beta API.

Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
2025-09-10 15:04:29 -04:00
Aasish
9c7d262ff4 fix(openai): update AzureOpenAIEmbeddings validation logic for openai_api_base (#31782) 2025-09-10 14:53:30 -04:00
ccurme
67e651b592 fix(infra): fix min version check (#32891)
Should no longer require `langchain-core>=(version in monorepo)`
2025-09-10 14:04:26 -04:00
Shibayan003
f08dfb6f49 test: Add failing test for BaseCallbackManager.merge (#32040)
This pull request introduces a failing unit test to reproduce the bug
reported in issue #32028.
The test asserts the expected behavior: `BaseCallbackManager.merge()`
should combine `handlers` and `inheritable_handlers` independently,
without mixing them. This test will fail on the current codebase and is
intended to guide the fix and prevent future regressions.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-10 13:56:18 -04:00
ccurme
450870c9ac release(qdrant): 0.2.1 (#32889) 2025-09-10 13:21:16 -04:00
Zhou Jing
10dfeea110 fix(qdrant): allow as_retriever to work without embeddings in SPARSE mode (#32757) 2025-09-10 13:08:50 -04:00
ccurme
34ecb92178 release(openai): 0.3.33 (#32887) 2025-09-10 11:53:26 -04:00
ccurme
49b3918c26 fix(infra): update scheduled test workflow following uv migration in langchain-google (#32886) 2025-09-10 11:30:55 -04:00
Christophe Bornet
12921a94c5 test(core): reactivate commented tests in test_indexing (#32882)
* These tests now pass
* Commenting them is a [ruff
ERA](https://docs.astral.sh/ruff/rules/commented-out-code/) violation
2025-09-10 11:14:14 -04:00
Alexey Bondarenko
181bb91ce0 fix(ollama): Fix handling message content lists (#32881)
The Ollama chat model adapter does not support all of the possible
message content formats, which leads to the adapter crashing on some
messages from other models (e.g. Gemini 2.5 Flash).

These changes should fix one known scenario - when `content` is a list
containing a string.
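A minimal reproduction of the shape that used to crash:

```python
from langchain_core.messages import HumanMessage

# A content list holding a bare string, as emitted by some models:
msg = HumanMessage(content=["What's the weather like?"])
# Passing this to ChatOllama previously crashed during message conversion;
# it is now handled.
```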
2025-09-10 11:13:28 -04:00
Christophe Bornet
b274416441 chore: remove ruff target-version (#32880)
This is not needed anymore since `requires-python` was added when moving
to `uv`.
2025-09-10 11:12:30 -04:00
Mason Daugherty
544b08d610 Merge branch 'master' into wip-v1.0 2025-09-10 11:10:48 -04:00
ccurme
389a781aa0 fix(infra): exclude pre-releases from latest version checks in core release workflow (#32883) 2025-09-10 10:35:24 -04:00
William FH
443f0ccb0e release(core): 0.3.76 (#32877) 2025-09-10 14:10:44 +00:00
Lauren Hirata Singh
00e547c311 docs: update banner with docs deprecation notice (#32871) 2025-09-10 00:35:43 +00:00
Mason Daugherty
a48ace52ad fix: lint 2025-09-09 18:59:38 -04:00
Sydney Runkle
d464d3089b chore: redirect docs template -> docs repo (#32872) 2025-09-09 18:24:22 -04:00
William FH
f1d44d0f9d fix(core): honor enabled=false in nested tracing (#31986)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-09-09 13:12:17 -07:00
Mason Daugherty
20979d525c Merge branch 'master' into wip-v1.0 2025-09-09 15:01:51 -04:00
Christophe Bornet
a35ee49f37 chore(langchain): enable ruff docstring-code-format in langchain (#32858) 2025-09-09 15:00:38 -04:00
Christophe Bornet
352ff363ca chore(cli): remove ruff exclusion of templates (#32864) 2025-09-09 14:56:47 -04:00
Christophe Bornet
256a0b5f2f chore(langchain): add ruff rule BLE (#32868)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-09 18:52:53 +00:00
ccurme
937087a29c release(groq): 0.3.8 (#32870) 2025-09-09 14:39:02 -04:00
Jan Z
08bf4c321f feat(groq): add support for json_schema (#32396) 2025-09-09 18:30:07 +00:00
Mason Daugherty
4c6af2d1b2 fix(openai): structured output (#32551) 2025-09-09 11:37:50 -04:00
Christophe Bornet
ee268db1c5 feat(standard-tests): add a property to skip relevant tests if the vector store doesn't support get_by_ids() (#32633) 2025-09-09 11:37:23 -04:00
Zhou Jing
dcc517b187 fix(core): ensure InjectedToolCallId always overrides LLM-generated values (#32766) 2025-09-09 11:25:52 -04:00
Mason Daugherty
c124e67325 chore(docs): update package READMEs (#32869)
- Fix badges
- Focus on agents
- Cut down fluff
2025-09-09 14:50:32 +00:00
Christophe Bornet
699a5d06d1 chore(langchain): add ruff rule ERA (#32867) 2025-09-09 10:13:18 -04:00
Christophe Bornet
00f699c60d chore(core): cleanup pyproject.toml (#32865) 2025-09-09 10:12:18 -04:00
Christophe Bornet
e36e25fe2f feat(langchain): support PEP604 ( | union) in tool node error handlers (#32861)
This allows using PEP 604 syntax for `ToolNode` error handlers
```python
def error_handler(e: ValueError | ToolException) -> str:
    return "error"

ToolNode(my_tool, handle_tool_errors=error_handler).invoke(...)
```
Without this change, this fails with `AttributeError: 'types.UnionType'
object has no attribute '__mro__'`
2025-09-09 10:11:12 -04:00
Christophe Bornet
cc3b5afe52 fix(huggingface): fix typing in test_standard (#32863) 2025-09-09 10:05:41 -04:00
Gal Bloch
428c2ee6c5 fix(langchain): preserve supplied llm in FlareChain.from_llm (#32847) 2025-09-09 13:41:23 +00:00
Christophe Bornet
714f74a847 refactor(core): improve beta decorator (#32505)
This is better than using a subclass, as returning a `property` works
with `ClassWithBetaMethods.beta_property.__doc__`

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 18:06:48 -04:00
Christophe Bornet
c3b28c769a chore(langchain): add ruff rules D (except D100 and D104) (#31994)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 21:47:22 +00:00
Christophe Bornet
017348b27c chore(langchain): add ruff rule E501 in langchain_v1 (#32812)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:28:14 -04:00
Christophe Bornet
1e101ae9a2 chore(langchain): add ruff rules N (#32098)
See https://docs.astral.sh/ruff/rules/#pep8-naming-n

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:27:43 -04:00
Christophe Bornet
fe6c415c9f chore(langchain): add ruff rule UP007 in langchain_v1 (#32811)
Done by autofix
2025-09-08 17:26:00 -04:00
Mason Daugherty
188c0154b3 Merge branch 'master' into wip-v1.0 2025-09-08 17:08:57 -04:00
Christophe Bornet
3c189f0393 chore(langchain): fix deprecation warnings (#32379)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:03:06 -04:00
Christophe Bornet
54c2419a4e chore(langchain): enable ruff docstring-code-format in langchain_v1 (#32855) 2025-09-08 16:51:18 -04:00
Mason Daugherty
35e9d36b0e fix(standard-tests): ensure non-negative token counts in usage metadata assertions (#32593) 2025-09-08 16:49:26 -04:00
Christophe Bornet
8b90eae455 chore(text-splitters): enable ruff docstring-code-format (#32854) 2025-09-08 16:40:11 -04:00
Christophe Bornet
05d14775f2 chore(standard-tests): enable ruff docstring-code-format (#32852) 2025-09-08 16:39:53 -04:00
PieterKok-jaam
33c7f230e0 feat(core): add id field to Document passed to filter for InMemoryVectorStore similarity search (#32688)
Added an id field to the Document passed to filter for
InMemoryVectorStore similarity search. This allows filtering by Document
id and brings the input to the filter in line with the result returned
by the vector similarity search.
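A small sketch of the new capability (using a fake embedding for brevity):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(DeterministicFakeEmbedding(size=16))
store.add_documents(
    [Document(id="a", page_content="alpha"), Document(id="b", page_content="beta")]
)
# The Document handed to the filter now carries its id:
hits = store.similarity_search("alpha", k=2, filter=lambda doc: doc.id != "b")
```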

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 20:39:18 +00:00
Mason Daugherty
97dd7628d2 chore: update badges (#32851)
- stars badge redundant (look at the top of the page)
- remove version badge since we have many pkgs (and it was only showing
core) -- also, just look at the releases tab to the right of the readme
2025-09-08 20:06:59 +00:00
Adithya1617
f5bd00d1f1 feat(core): support AWS Bedrock document content blocks in msg_content_output (#32799) 2025-09-08 19:40:28 +00:00
Sadra Barikbin
3486d6c74d feat(core): support for adding PromptTemplates with formats other than f-string (#32253)
Allow adding `PromptTemplate`s with formats other than `f-string`. Fixes
#32151
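A sketch under the assumption that same-format addition is now supported
(requires the `jinja2` package):

```python
from langchain_core.prompts import PromptTemplate

a = PromptTemplate.from_template("Hello {{ name }}, ", template_format="jinja2")
b = PromptTemplate.from_template("welcome to {{ place }}!", template_format="jinja2")
combined = a + b  # previously raised for non-f-string templates
print(combined.format(name="Ada", place="LangChain"))
```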

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-08 19:16:54 +00:00
Mason Daugherty
8509efa6ad chore: remove erroneous pyversion specifiers 2025-09-08 15:03:08 -04:00
Mason Daugherty
b1a105f85f fix: huggingface lint 2025-09-08 14:59:38 -04:00
Mason Daugherty
9e54c5fa7f Merge branch 'master' into wip-v1.0 2025-09-08 14:44:28 -04:00
Mason Daugherty
0b8817c900 Merge branch 'master' into wip-v1.0 2025-09-08 14:43:58 -04:00
Stefano Lottini
390606c155 fix(standard-tests): standard vectorstore tests accept out-of-order get_by_ids (#32821)
- **Description:** The vectorstore standard-test mistakenly assumes that
the store's `get_by_ids` respects the order of the provided `ids`. This
is not the case (as the base class docstring states). This PR fixes
those tests that would fail otherwise (see issue #32820 for details,
repro and all). Fixes #32820
- **Issue:** Fixes #32820
- **Dependencies:** none

Co-authored-by: Stefano Lottini <stefano.lottini@ibm.com>
2025-09-08 14:22:14 -04:00
Christophe Bornet
cc98fb9bee chore(core): add ruff rule PLC0415 (#32351)
See https://docs.astral.sh/ruff/rules/import-outside-top-level/

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:15:04 -04:00
Christophe Bornet
16420cad71 chore(core): fix some pydocs to use google-style (#32764)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 17:52:17 +00:00
Christophe Bornet
01fdeede50 chore(core): fix some ruff preview rules (#32785)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:55:20 +00:00
Christophe Bornet
f4e83e0ad8 chore(core): fix some docstrings (from DOC preview rule) (#32833)
* Add `Raises` sections
* Add `Returns` sections
* Add `Yields` sections

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:44:15 +00:00
dependabot[bot]
4024d47412 chore(infra): bump actions/setup-python from 5 to 6 (#32842)
Bumps [actions/setup-python](https://github.com/actions/setup-python)
from 5 to 6.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 11:27:54 -04:00
Christophe Bornet
083fbfb0d1 chore(core): add utf-8 encoding to Path read_text/write_text (#32784)
## Summary

This PR standardizes all text file I/O to use `UTF-8`. This eliminates
OS-specific defaults (e.g. Windows `cp1252`) and ensures consistent,
Unicode-safe behavior across platforms.

## Breaking changes

Users on systems whose default encoding is not UTF-8 may see decoding
errors from the following code paths:

* `langchain_core.vectorstores.in_memory.InMemoryVectorStore.load`
* `langchain_core.prompts.loading.load_prompt`
* `from_template` in `AIMessagePromptTemplate`,
`HumanMessagePromptTemplate`, and `SystemMessagePromptTemplate`

## Migration

Re-encode files that use a non-UTF-8 encoding to UTF-8.
2025-09-08 11:27:15 -04:00
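A minimal sketch of the change, using the standard library directly:

```py
from pathlib import Path

path = Path("prompt.txt")

# Before: relied on the OS default encoding (e.g. cp1252 on Windows).
# path.write_text("héllo"); text = path.read_text()

# After: explicit UTF-8, identical behavior on every platform.
path.write_text("héllo", encoding="utf-8")
text = path.read_text(encoding="utf-8")
```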
Christophe Bornet
f589168411 refactor(core): use pytest style in TestGetBufferString (#32786)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:16:13 +00:00
Christophe Bornet
5840dad40b chore(core): enable ruff docstring-code-format (#32834)
See https://docs.astral.sh/ruff/settings/#format_docstring-code-format

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:13:50 +00:00
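With this setting on (`docstring-code-format = true` under `[tool.ruff.format]` in `pyproject.toml`), ruff also reformats code examples embedded in docstrings. A minimal sketch of what is now in scope:

```py
def add(a: int, b: int) -> int:
    """Add two integers.

    Example:
        >>> add(1, 2)  # doctest blocks like this are now formatted by ruff
        3
    """
    return a + b
```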
Christophe Bornet
e3b6c9bb66 chore(core): fix some mypy warn_unreachable issues (#32560)
Found by setting `warn_unreachable: true` in mypy.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 15:02:08 +00:00
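A minimal sketch of the kind of dead code `warn_unreachable = true` (under `[tool.mypy]`) surfaces:

```py
def describe(value: int | str) -> str:
    if isinstance(value, int):
        return "int"
    if isinstance(value, str):
        return "str"
    # mypy flags this as unreachable: the branches above exhaust int | str.
    return "other"
```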
Christophe Bornet
c672590f42 chore(standard-tests): select ALL rules with exclusions (#31937)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:57:47 +00:00
Christophe Bornet
323729915a chore(standard-tests): add mypy strict checking (#32384)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 10:50:38 -04:00
Christophe Bornet
0c3e8ccd0e chore(text-splitters): select ALL rules with exclusions (#32325)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:46:09 +00:00
Christophe Bornet
20401df25d chore(cli): fix some DOC rules (preview) (#32839)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-08 14:36:22 +00:00
dependabot[bot]
e0aaaccb61 chore(infra): bump aws-actions/configure-aws-credentials from 4 to 5 (#32841)
Bumps
[aws-actions/configure-aws-credentials](https://github.com/aws-actions/configure-aws-credentials)
from 4 to 5.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:18:01 -04:00
dependabot[bot]
9368ce6b07 chore(infra): bump google-github-actions/auth from 2 to 3 (#32777)
Bumps
[google-github-actions/auth](https://github.com/google-github-actions/auth)
from 2 to 3.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:59 -04:00
dependabot[bot]
f8bcc98362 chore(infra): bump amannn/action-semantic-pull-request from 5 to 6 (#32585)
Bumps
[amannn/action-semantic-pull-request](https://github.com/amannn/action-semantic-pull-request)
from 5 to 6.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:16:19 -04:00
dependabot[bot]
d8d93882f9 chore(infra): bump actions/checkout from 4 to 5 (#32584)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to
5.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-08 10:14:09 -04:00
Sadiq Khan
228fbac3a6 fix(openai): handle AIMessages without response_id in _get_last_messages (#32824) 2025-09-08 10:12:50 -04:00
JunHyungKang
6ea06ca972 fix(openai): Fix Azure OpenAI Responses API model field issue (#32649) 2025-09-08 10:08:35 -04:00
ccurme
5b0a55ad35 chore(openai): apply formatting changes to AzureChatOpenAI (#32848) 2025-09-08 09:54:20 -04:00
Sydney Runkle
6e2f46d04c feat(langchain): middleware support in create_agent (#32828)
## Overview

Adding a new `AgentMiddleware` primitive that supports `before_model`,
`after_model`, and `prepare_model_request` hooks (see the sketch after
the list below).

This is very exciting! It makes our `create_agent` prebuilt much more
extensible + capable. Still in alpha and subject to change.

This is different than the initial
[implementation](https://github.com/langchain-ai/langgraph/tree/nc/25aug/agent)
in that it:
* Fills in gaps w/ missing features, for ex -- new structured output,
optionality of tools + system prompt, sync and async model requests,
provider builtin tools
* Exposes private state extensions for middleware, enabling things like
model call tracking, etc
* Middleware can register tools
* Uses a `TypedDict` for `AgentState` -- dataclass subclassing is tricky
w/ required values + required decorators
* Addition of `model_settings` to `ModelRequest` so that we can pass
through things to bind (like cache kwargs for anthropic middleware)
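
To make the hook surface concrete, here is a minimal sketch of a custom
middleware using the hook names from this PR; the import path,
signatures, and settings shown are illustrative assumptions, not the
settled API:

```py
from typing import Any

from langchain.agents.middleware import AgentMiddleware, ModelRequest  # path assumed


class CallCountMiddleware(AgentMiddleware):
    """Illustrative middleware: track model calls and adjust each request."""

    def before_model(self, state, runtime) -> dict[str, Any]:
        # Private state extension: count how many times the model is called.
        return {"model_call_count": state.get("model_call_count", 0) + 1}

    def prepare_model_request(self, request: ModelRequest, state, runtime) -> ModelRequest:
        # model_settings is passed through to the model's bind(...) call
        # (e.g. cache kwargs for the Anthropic prompt-caching middleware).
        request.model_settings["parallel_tool_calls"] = False
        return request
```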

## TODOs

### top prio
- [x] add middleware support to existing agent
- [x] top prio middlewares
  - [x] summarization node
  - [x] HITL
  - [x] prompt caching
 
other ones
- [x] model call limits
- [x] tool calling limits
- [ ] usage (requires output state)

### secondary prio
- [x] improve typing for state updates from middleware (not working
right now w/ simple `AgentUpdate` and `AgentJump`, at least in Python)
- [ ] add support for public state (input / output modifications via
pregel channel mods) -- to be tackled in another PR
- [x] testing!

### docs
See https://github.com/langchain-ai/docs/pull/390
- [x] high level docs about middleware
- [x] summarization node
- [x] HITL
- [x] prompt caching

## open questions

Lots of open questions right now, many of them inlined as comments for
the short term, will catalog some more significant ones here.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-09-08 01:10:57 +00:00
Christophe Bornet
5bf0b218c8 chore(cli): fix some ruff preview rules (#32803) 2025-09-07 16:53:19 -04:00
Mason Daugherty
4e39c164bb fix(anthropic): remove beta header warning for TTL (#32832)
No longer beta as of Aug 13
2025-09-05 14:28:58 -04:00
ScarletMercy
0b3af47335 fix(docs): resolve malformed character in tool_calling.ipynb (#32825)
**Description:**  
Remove a character in tool_calling.ipynb that causes a grammatical error.
Verification: local docs build passed after the fix.
 
**Issue:**  
None (direct hotfix for rendering issue identified during documentation
review)
 
**Dependencies:**  
None
2025-09-05 11:28:56 -04:00
Mason Daugherty
bc91a4811c chore: update PR template (#32819) 2025-09-04 19:53:54 +00:00
Christophe Bornet
05a61f9508 fix(langchain): fix mypy versions in langchain_v1 (#32816) 2025-09-04 11:51:08 -04:00
Christophe Bornet
f98f7359d3 refactor(core): refactors for python 3.10+ (#32787)
* Remove `sys.version_info` checks no longer needed
* Use `typing` instead of `typing_extensions` where applicable, as
illustrated below (NB: keep using `TypedDict` from `typing_extensions`
as [Pydantic requires
it](https://docs.pydantic.dev/2.3/usage/types/dicts_mapping/#typeddict))
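
A short illustration of the resulting import pattern (the members
imported here are examples only):

```py
# Python 3.10+: these come straight from the stdlib `typing` module.
from typing import Annotated, Literal, get_type_hints

# TypedDict is the exception: Pydantic requires the `typing_extensions`
# version, so it keeps its old import.
from typing_extensions import TypedDict
```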
2025-09-03 15:32:06 -04:00
Christophe Bornet
aa63de9366 chore(langchain): cleanup langchain_v1 mypy config (#32809)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-03 19:28:06 +00:00
Christophe Bornet
86fa34f3eb chore(langchain): add ruff rules D for langchain_v1 (#32808) 2025-09-03 15:26:17 -04:00
JING
36037c9251 fix(docs): update Anthropic model name and add version warnings (#32807)
**Description:** This PR fixes the broken Anthropic model example in the
documentation introduction page and adds a comment field to display
model version warnings in code blocks. The changes ensure that users can
successfully run the example code and are reminded to check for the
latest model versions.

**Issue:** https://github.com/langchain-ai/langchain/issues/32806

**Changes made:**
- Update Anthropic model from broken "claude-3-5-sonnet-latest" to
working "claude-3-7-sonnet-20250219"
- Add comment field to display model version warnings in code blocks
- Improve user experience by providing working examples and version
guidance

**Dependencies:** None required
2025-09-03 15:25:13 -04:00
Martin Meier-Zavodsky
ad26c892ea docs(langchain): update evaluation tutorial link (#32796)
**Description**
This PR updates the evaluation tutorial link for LangSmith to the new
official docs location.

**Issue**
N/A

**Dependencies**
None
2025-09-03 15:22:46 -04:00
Shahroz Ahmad
4828a85ab0 feat(core): add web_search in OpenAI tools list (#32738) 2025-09-02 21:57:25 +00:00
ccurme
50b48fa1ff chore(openai): bump minimum core version (#32795) 2025-09-02 14:06:49 -04:00
Chester Curme
a54f4385f8 Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain_v1/langchain/__init__.py
2025-09-02 13:17:00 -04:00
ccurme
b999f356e8 fix(langchain): update __init__ version (#32793) 2025-09-02 13:14:42 -04:00
ccurme
98e4e7d043 Merge branch 'master' into wip-v1.0 2025-09-02 13:06:35 -04:00
ccurme
2cf5c52c13 release(core): 1.0.0a2 (#32792) 2025-09-02 12:55:52 -04:00
Sydney Runkle
062196a7b3 release(langchain): v1.0.0a3 (#32791) 2025-09-02 12:29:14 -04:00
ccurme
bf41a75073 release(openai): 1.0.0a2 (#32790) 2025-09-02 12:22:57 -04:00
Sydney Runkle
dc9f941326 chore(langchain): rename create_react_agent -> create_agent (#32789) 2025-09-02 12:13:12 -04:00
ccurme
e15c41233d feat(openai): (v1) update default output_version (#32674) 2025-09-02 12:12:41 -04:00
Mason Daugherty
25d5db88d5 fix: ci 2025-09-01 23:36:38 -05:00
Mason Daugherty
1237f94633 Merge branch 'wip-v1.0' of github.com:langchain-ai/langchain into wip-v1.0 2025-09-01 23:27:40 -05:00
Mason Daugherty
5c8837ea5a fix some imports 2025-09-01 23:27:37 -05:00
Mason Daugherty
820e355f53 Merge branch 'master' into wip-v1.0 2025-09-01 23:27:29 -05:00
Mason Daugherty
9a3ba71636 fix: version equality CI check 2025-09-01 23:24:04 -05:00
Mason Daugherty
00def6da72 rfc: remove unused TypeGuards 2025-09-01 23:13:18 -05:00
Mason Daugherty
4f8cced3b6 chore: move convert_to_openai_data_block and convert_to_openai_image_block from content.py to openai block translators 2025-09-01 23:08:47 -05:00
Mason Daugherty
365d7c414b nit: OpenAI docstrings 2025-09-01 20:30:56 -05:00
Mason Daugherty
a4874123a0 chore: move _convert_openai_format_to_data_block from langchain_v0 to openai 2025-09-01 19:39:15 -05:00
Mason Daugherty
a5f92fdd9a fix: update some docstrings and typing 2025-09-01 19:25:08 -05:00
Adithya1617
238ecd09e0 docs(langchain): update redirect url of "this langsmith conceptual guide" in tracing.mdx (#32776)
- **Description:** updated the redirect URL of "this langsmith conceptual
guide" in tracing.mdx
- **Issue:** fixes #32775

---------

Co-authored-by: Adithya <adithya.vardhan1617@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-01 19:02:21 +00:00
Mason Daugherty
431e6d6211 chore(standard-tests): drop python 3.9 (#32772) 2025-08-31 18:23:10 -05:00
Mason Daugherty
0f1afa178e chore(text-splitters): drop python 3.9 support (#32771) 2025-08-31 18:13:35 -05:00
Mason Daugherty
830d1a207c Merge branch 'master' into wip-v1.0 2025-08-31 18:01:24 -05:00
Mason Daugherty
6b5fdfb804 release(text-splitters): 0.3.11 (#32770)
Fixes #32747

SpaCy integration test fixture was trying to use pip to download the
SpaCy language model (`en_core_web_sm`), but uv-managed environments
don't include pip by default. The test now fails if the model is not
installed, rather than downloading it.
2025-08-31 23:00:05 +00:00
Ravirajsingh Sodha
b42dac5fe6 docs: standardize OllamaLLM and BaseOpenAI docstrings (#32758)
- Add comprehensive docstring following LangChain standards
- Include Setup, Key init args, Instantiate, Invoke, Stream, and Async
sections
- Provide detailed parameter descriptions and code examples
- Fix linting issues for code formatting compliance

Contributes to #24803

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-31 17:45:56 -05:00
Christophe Bornet
e0a4af8d8b docs(text-splitters): fix some docstrings (#32767) 2025-08-31 13:46:11 -05:00
Rémy HUBSCHER
fcf7175392 chore(langchain): improve PostgreSQL Manager upsert SQLAlchemy API calls. (#32748)
- Make the `constraint` parameter name explicit to avoid mixing it up
with `index_elements` (see the sketch below)
[[Documentation](https://docs.sqlalchemy.org/en/20/dialects/postgresql.html#sqlalchemy.dialects.postgresql.Insert.on_conflict_do_update)]
- ~Fallback on the existing `group_id` row value, to avoid setting it to
`None`.~
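
A minimal sketch of the named-constraint form, assuming a table with a
named unique constraint (the table, column, and constraint names are
placeholders, not the actual schema):

```py
from sqlalchemy import Table
from sqlalchemy.dialects.postgresql import insert


def build_upsert(records_table: Table, rows: list[dict]):
    stmt = insert(records_table).values(rows)
    # Name the constraint explicitly rather than relying on index_elements:
    return stmt.on_conflict_do_update(
        constraint="uq_key_namespace",  # hypothetical constraint name
        set_={"updated_at": stmt.excluded.updated_at},
    )
```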
2025-08-30 14:13:24 -05:00
Kush Goswami
1f2ab17dff docs: fix typo and grammar in Conceptual guide (#32754)
fixed small typo and grammatical inconsistency in Conceptual guide
2025-08-30 13:48:55 -05:00
Mason Daugherty
b494a3c57b chore(cli): drop python 3.9 support (#32761) 2025-08-30 13:25:33 -05:00
Mason Daugherty
f088fac492 Merge branch 'master' into wip-v1.0 2025-08-30 14:21:37 -04:00
Mason Daugherty
2dc89a2ae7 release(cli): 0.0.37 (#32760)
It's been a minute. Final release prior to dropping Python 3.9 support.
2025-08-30 13:07:55 -05:00
Christophe Bornet
e3c4aeaea1 chore(cli): add mypy strict checking (#32386)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-30 13:02:45 -05:00
Vikas Shivpuriya
444939945a docs: fix punctuation in style guide (#32756)
Removed a period in bulleted list for consistency

2025-08-30 12:56:17 -05:00
Vikas Shivpuriya
ae8db86486 docs: fixed typo in contributing guide (#32755)
Completed the sentence by adding a period ".", in sync with other points

>> Click "Propose changes"

to 

>> Click "Propose changes".

2025-08-30 12:55:25 -05:00
Christophe Bornet
8a1419dad1 chore(cli): add ruff rules ANN401 and D1 (#32576) 2025-08-30 12:41:16 -05:00
Kush Goswami
840e4c8e9f docs: fix grammar and typo in Documentation style guide (#32741)
fixed grammar and one typo in the Documentation style guide
2025-08-29 14:22:54 -04:00
Caspar Broekhuizen
37aff0a153 chore: bump langchain-core minimum to 0.3.75 (#32753)
Update `langchain-core` dependency min from `>=0.3.63` to `>=0.3.75`.

### Motivation
- We located the `langchain-core` package locally in the monorepo and
need to align `langchain-tests` with the new minimum version.
2025-08-29 14:11:28 -04:00
Caspar Broekhuizen
a163d59988 chore(standard-tests): relax langchain-core bounds for langchain-tests 1.0.0a1 (#32752)
### Overview
Preparing the `1.0.0a1` release of `langchain-tests` to align with
`langchain-core` version `1.0.0a1`.

### Changes
- Bump package version to `1.0.0a1`
- Relax `langchain-core` requirement from `<1.0.0,>=0.3.63` to
`<2.0.0,>=0.3.63`

### Motivation
All main LangChain packages are now publishing `1.0.0a` prereleases.  
`langchain-tests` needs a matching prerelease so downstreams can install
tests alongside the 1.0 series without conflicts.

### Tests
- Verified installation and tests against both `0.3.75` and `1.0.0a1`.
2025-08-29 13:46:48 -04:00
Mason Daugherty
925ad65df9 fix(core): typo in content.py 2025-08-28 15:17:51 -04:00
Mason Daugherty
e09d90b627 Merge branch 'master' into wip-v1.0 2025-08-28 14:11:24 -04:00
Sydney Runkle
b26e52aa4d chore(text-splitters): bump version of core (#32740) 2025-08-28 13:14:57 -04:00
Sydney Runkle
38cdd7a2ec chore(text-splitters): relax max bound for langchain-core (#32739) 2025-08-28 13:05:47 -04:00
Sydney Runkle
26e5d1302b chore(langchain): remove upper bound at v1 for core (#32737) 2025-08-28 12:14:42 -04:00
Christopher Jones
107425c68d docs: fix basic Oracle example issues such as capitalization (#32730)
**Description:** fix capitalization and basic issues in
https://python.langchain.com/docs/integrations/document_loaders/oracleadb_loader/

Signed-off-by: Christopher Jones <christopher.jones@oracle.com>
2025-08-28 10:32:45 -04:00
Tik1993
009cc3bf50 docs(docs): added content= keyword when creating SystemMessage and HumanMessage (#32734)
Description: 
Added the content= keyword when creating SystemMessage and HumanMessage
in the messages list, making it consistent with the API reference.
2025-08-28 10:31:46 -04:00
NOOR UL HUDA
6185558449 docs: replace smart quotes with straight quotes on How-to guides landing page (#32725)
### Summary

This PR updates the sentence on the "How-to guides" landing page to
replace smart (curly) quotes with straight quotes in the phrase:

> "How do I...?"

### Why This Change?

- Ensures formatting consistency across documentation
- Avoids encoding or rendering issues with smart quotes
- Matches standard Markdown and inline code formatting

This is a small change, but improves clarity and polish on a key landing
page.
2025-08-28 10:30:12 -04:00
Kush Goswami
0928ff5b12 docs: fix typo in LangGraph section of Introduction (#32728)
Change "Linkedin" to "LinkedIn" to be consistent with LinkedIn's
spelling.

2025-08-28 10:29:35 -04:00
ccurme
ddde1eff68 fix: openai, anthropic (v1) fix core lower bound (#32724) 2025-08-27 14:36:10 -04:00
ccurme
9b576440ed release: anthropic, openai 1.0.0a1 (#32723) 2025-08-27 14:12:28 -04:00
Sydney Runkle
7f9b0772fc chore(langchain): also bump text splitters (#32722) 2025-08-27 18:09:57 +00:00
Sydney Runkle
d6e618258f chore(langchain): use latest core (#32720) 2025-08-27 14:06:07 -04:00
Sydney Runkle
806bc593ab chore(langchain): revert back to static versioning for now (#32719) 2025-08-27 13:54:41 -04:00
Sydney Runkle
047bcbaa13 release(langchain): v1.0.0a1 (#32718)
Also removing globals usage + static version
2025-08-27 13:46:20 -04:00
Sydney Runkle
18db07c292 feat(langchain): revamped create_react_agent (#32705)
Adding `create_react_agent` and introducing `langchain.agents`!

## Enhanced Structured Output

`create_react_agent` supports coercion of outputs to structured data
types like `pydantic` models, dataclasses, typed dicts, or JSON schema
specifications.

### Structural Changes

In langgraph < 1.0, `create_react_agent` implemented support for
structured output via an additional LLM call to the model after the
standard model / tool calling loop finished. This introduced extra
expense and was unnecessary.

This new version implements structured output support in the main loop,
allowing a model to choose between calling tools or generating
structured output (or both).

The same basic pattern for structured output generation works:

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent("openai:gpt-4o-mini", tools=[weather_tool], response_format=Weather)
result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

### Advanced Configuration

The new API exposes two ways to configure how structured output is
generated. Under the hood, LangChain will attempt to pick the best
approach if not explicitly specified. That is, if provider native
support is available for a given model, that takes priority over
artificial tool calling.

1. Artificial tool calling (the default for most models)

LangChain generates a tool (or tools) under the hood that match the
schema of your response format. When the model calls those tools,
LangChain coerces the args to the desired format. Note: LangChain does
not validate outputs against JSON schema specifications.

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather, tool_message_content="Final Weather result generated"
    ),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_Gg933BMHMwck50Q39dtBjXm7)
 Call ID: call_Gg933BMHMwck50Q39dtBjXm7
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================
Tool Calls:
  Weather (call_9xOkYUM7PuEXl9DQq9sWGv5l)
 Call ID: call_9xOkYUM7PuEXl9DQq9sWGv5l
  Args:
    temperature: 70
    condition: sunny
================================= Tool Message =================================
Name: Weather

Final Weather result generated
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

</details>

2. Provider implementations (limited to OpenAI, Groq)

Some providers support generating structured output directly. For those
cases, we offer the `ProviderStrategy` hint:

<details>
<summary>Extended example</summary>

```py
from langchain.agents import create_react_agent
from langchain_core.messages import HumanMessage
from langchain.agents.structured_output import ProviderStrategy
from pydantic import BaseModel


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""

    return f"it's sunny and 70 degrees in {city}"


agent = create_react_agent(
    "openai:gpt-4o-mini",
    tools=[weather_tool],
    response_format=ProviderStrategy(Weather),
)

result = agent.invoke({"messages": [HumanMessage("What's the weather in Tokyo?")]})
for message in result["messages"]:
    message.pretty_print()

"""
================================ Human Message =================================

What's the weather in Tokyo?
================================== Ai Message ==================================
Tool Calls:
  weather_tool (call_OFJq1FngIXS6cvjWv5nfSFZp)
 Call ID: call_OFJq1FngIXS6cvjWv5nfSFZp
  Args:
    city: Tokyo
================================= Tool Message =================================
Name: weather_tool

it's sunny and 70 degrees in Tokyo
================================== Ai Message ==================================

{"temperature":70,"condition":"sunny"}
Weather(temperature=70.0, condition='sunny')
"""

print(repr(result["structured_response"]))
#> Weather(temperature=70.0, condition='sunny')
```

Note! The final tool message has the custom content provided by the dev.

</details>

Prompted output, previously supported via the `response_format` argument
to `create_react_agent`, is no longer available. If there's significant
demand for this, we'd be happy to engineer a solution.

## Error Handling

`create_react_agent` now exposes an API for managing errors associated
with structured output generation. There are two common problems with
structured output generation (w/ artificial tool calling):

1. **Parsing error** -- the model generates data that doesn't match the
desired structure for the output
2. **Multiple tool calls error** -- the model generates 2 or more tool
calls associated with structured output schemas

A developer can control the desired behavior for this via the
`handle_errors` arg to `ToolStrategy`.

<details>
<summary>Extended example</summary>

```py
from langchain_core.messages import HumanMessage
from pydantic import BaseModel

from langchain.agents import create_react_agent
from langchain.agents.structured_output import StructuredOutputValidationError, ToolStrategy


class Weather(BaseModel):
    temperature: float
    condition: str


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def handle_validation_error(error: Exception) -> str:
    if isinstance(error, StructuredOutputValidationError):
        return (
            f"Please call the {error.tool_name} call again with the correct arguments. "
            f"Your mistake was: {error.source}"
        )
    raise error


agent = create_react_agent(
    "openai:gpt-5",
    tools=[weather_tool],
    response_format=ToolStrategy(
        schema=Weather,
        handle_errors=handle_validation_error,
    ),
)
```

</details>

## Error Handling for Tool Calling

Tools fail for two main reasons:

1. **Invocation failure** -- the args generated by the model for the
tool are incorrect (missing, incompatible data types, etc)
2. **Execution failure** -- the tool execution itself fails due to a
developer error, network error, or some other exception.

By default, when tool **invocation** fails, the react agent will return
an artificial `ToolMessage` to the model asking it to correct its
mistakes and retry.

Now, when tool **execution** fails, the react agent raises the
`ToolException` by default instead of asking the model to retry. This
avoids fruitless retry loops when the failure (e.g. a developer or
network error) is not something the model can correct.

Developers can configure their desired behavior for retries / error
handling via the `handle_tool_errors` arg to `ToolNode`.
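
A minimal sketch of wiring up a custom handler, assuming the `ToolNode`
import from `langchain.agents` per the migration notes below (the handler
signature here is illustrative):

```py
from langchain.agents import ToolNode  # import path assumed


def weather_tool(city: str) -> str:
    """Get the weather for a city."""
    return f"it's sunny and 70 degrees in {city}"


def describe_error(error: Exception) -> str:
    # The returned string becomes an artificial ToolMessage, so the model
    # can see what went wrong and retry.
    return f"Tool failed with {error!r}; please adjust the arguments and retry."


tool_node = ToolNode([weather_tool], handle_tool_errors=describe_error)
```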

## Pre-Bound Models

`create_react_agent` no longer supports inputs to `model` that have been
pre-bound w/ tools or other configuration. To properly support
structured output generation, the agent itself needs the power to bind
tools + structured output kwargs.

This also makes the devx cleaner - it's always expected that `model` is
an instance of `BaseChatModel` (or `str` that we coerce into a chat
model instance).

Dynamic model functions can return a pre-bound model **if** structured
output is not also used; in that case, the dynamic function itself takes
over binding tools and other configuration.

## Import Changes

Users should now use `create_react_agent` from `langchain.agents`
instead of `langgraph.prebuilt`.
Other imports have a similar migration path, `ToolNode` and `AgentState`
for example.

* `chat_agent_executor.py` -> `react_agent.py`

Some notes:
1. Disabled blockbuster + some linting in `langchain/agents` -- less
than ideal, but necessary to get this across the line for the alpha. We
should re-enable these before the official release.
2025-08-27 17:32:21 +00:00
ccurme
a80fa1b25f chore(infra): drop anthropic from core test matrix (#32717) 2025-08-27 13:11:44 -04:00
ccurme
72b66fcca5 release(core): 1.0.0a1 (#32715) 2025-08-27 11:57:07 -04:00
ccurme
a47d993ddd release(core): 1.0.0dev0 (#32713) 2025-08-27 11:05:39 -04:00
ccurme
cb4705dfc0 chore: (v1) drop support for python 3.9 (#32712)
EOL in October

Will update ruff / formatting closer to 1.0 release to minimize merge
conflicts on branch
2025-08-27 10:42:49 -04:00
ccurme
9a9263a2dd fix(langchain): (v1) delete unused chains (#32711)
Merge conflict was not resolved correctly
2025-08-27 10:17:14 -04:00
Chester Curme
e4b69db4cf Merge branch 'master' into wip-v1.0
# Conflicts:
#	libs/langchain_v1/langchain/chains/documents/map_reduce.py
#	libs/langchain_v1/langchain/chains/documents/stuff.py
2025-08-27 09:37:21 -04:00
Mason Daugherty
242881562b feat: standard content, IDs, translators, & normalization (#32569) 2025-08-27 09:31:12 -04:00
Sydney Runkle
1fe2c4084b chore(langchain): remove untested chains for first alpha (#32710)
Also removing globals.py file
2025-08-27 08:24:43 -04:00
Sydney Runkle
c6c7fce6c9 chore(langchain): drop Python 3.9 to prep for v1 (#32704)
Python 3.9 EOL is October 2025, so we're going to drop it for the v1
alpha release.
2025-08-26 23:16:42 +00:00
Mason Daugherty
a2322f68ba Merge branch 'master' into wip-v1.0 2025-08-26 15:51:57 -04:00
Mason Daugherty
3d08b6bd11 chore: address pytest-asyncio deprecation warnings + other nits (#32696)
along with some incompatible linting rules
2025-08-26 15:51:38 -04:00
Matthew Farrellee
f2dcdae467 fix(standard-tests): update function_args to match my_adder_tool param types (#32689)
**Description:**

https://api.llama.com implements strong type checking, which results in
a false negative.

with type mismatch (expected integer, received string) -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": \"1\", \"b\": \"2\"}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"title":"Bad request","detail":"Unexpected param value `a`: \"1\"","status":400}
```

with correct type -

```
$ curl -X POST "https://api.llama.com/compat/v1/chat/completions" \
  -H "Authorization: Bearer API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
 "model": "Llama-3.3-70B-Instruct",
 "messages": [
     {"role": "user", "content": "What is 1 + 2"},
     {"role": "assistant", "content": "", "tool_calls": [{"id": "abc123", "type": "function", "function": {"name": "my_adder_tool", "arguments": "{\"a\": 1, \"b\": 2}"}}]},
     {"role": "tool", "tool_call_id": "abc123", "content": "{\"result\": 3}"}
 ],
 "tools": [{"type": "function", "function": {"name": "my_adder_tool", "description": "Sum two integers", "parameters": {"properties": {"a": {"type": "integer"}, "b": {"type": "integer"}}, "required": ["a", "b"], "type": "object"}}}]
}'

{"id":"AhMwBbuaa5payFr_xsOHzxX","model":"Llama-3.3-70B-Instruct","choices":[{"finish_reason":"stop","index":0,"message":{"refusal":"","role":"assistant","content":"The result of 1 + 2 is 3.","id":"AhMwBbuaa5payFr_xsOHzxX"},"logprobs":null}],"created":1756167668,"object":"chat.completions","usage":{"prompt_tokens":248,"completion_tokens":17,"total_tokens":265}}
```
2025-08-26 15:50:47 -04:00
ccurme
dbebe2ca97 release(core): 0.3.75 (#32693) 2025-08-26 11:12:03 -04:00
ccurme
008043977d release(openai): 0.3.32 (#32691) 2025-08-26 14:05:40 +00:00
Jacob Lee
1459d4f4ce fix(openai): Always add raw response object to OpenAI client errors for invoke (#32655) 2025-08-26 09:59:25 -04:00
ccurme
f33480c2cf feat(core): trace response body on error (#32653) 2025-08-25 14:28:19 -04:00
ccurme
7e9ae5df60 feat(openai): (v1) delete bind_functions and remove tool_calls from additional_kwargs (#32669) 2025-08-25 14:22:31 -04:00
Mason Daugherty
1c55536ec1 chore(core): add note about backward compatibility for tool_calls in additional_kwargs in JsonOutputKeyToolsParser 2025-08-25 10:30:41 -04:00
Maitrey Talware
622337a297 docs(docs): fixed typos in documentation (#32661)
Minor typo fixes. (Not linked to current open issues)
2025-08-25 10:02:53 -04:00
Shahroz Ahmad
1819c73d10 docs(docs): update Docker to ClickHouse 25.7 with vector_similarity support (#32659)
- **Description:** Updated Docker command to use ClickHouse 25.7 (has
`vector_similarity` index support). Added `CLICKHOUSE_SKIP_USER_SETUP=1`
env param to [bypass default user
setup](https://clickhouse.com/docs/install/docker#managing-default-user)
and allow external network access. There was also a bug: results
returned by `similarity_search_with_relevance_scores` need to be
unpacked first.

- **Issue:** Fixes #32094 for anyone following the tutorial with default
ClickHouse configurations.
2025-08-25 09:59:28 -04:00
Kim
8171403b4a docs(docs): rebranding of Azure AI Studio to Azure AI Foundry (#32658)
# Description
Updated documentation to reflect Microsoft’s rebranding of Azure AI
Studio to Azure AI Foundry. This ensures consistency with current Azure
terminology across the docs.

# Issue
N/A

# Dependencies
None
2025-08-25 09:58:31 -04:00
Chester Curme
7a108618ae Merge branch 'master' into wip-v1.0 2025-08-25 09:39:39 -04:00
Mason Daugherty
2d0713c2fc fix(infra): ollama CI 2025-08-22 16:40:03 -04:00
Mason Daugherty
8060b371bb fix(infra): ollama CI 2025-08-22 16:37:05 -04:00
Mason Daugherty
7851f66503 release(ollama): 0.3.7 (#32651) 2025-08-22 15:18:40 -04:00
Mason Daugherty
af3b88f58d feat(ollama): update reasoning type to support string values for custom intensity levels (e.g. gpt-oss) (#32650) 2025-08-22 15:11:32 -04:00
itaismith
1eb45d17fb feat(chroma): Add support for collection forking (#32627) 2025-08-21 17:57:55 -04:00
ccurme
8545d4731e release(openai): 0.3.31 (#32646) 2025-08-21 16:50:27 -04:00
Alex Naidis
21f7a9a9e5 fix(openai): allow temperature parameter for gpt-5-chat models (#32624) 2025-08-21 16:40:10 -04:00
sa411022
61bc1bf9cc fix(openai): construct responses api input (#32557) 2025-08-21 15:56:29 -04:00
Shahrukh Shaik
4ba222148d fix(openai): Chat Message Annotations defaults to [ ] if not list or None (#32614) 2025-08-21 15:30:12 -04:00
ccurme
6f058e7b9b fix(core): (v1) update BaseChatModel return type to AIMessage (#32626) 2025-08-21 14:02:24 -04:00
Christophe Bornet
b825f85bf2 fix(standard-tests): fix BaseStoreAsyncTests.test_set_values_is_idempotent (#32638)
The async version of the test should use the `ayield_keys` method
instead of `yield_keys`.
Otherwise tools such as `blockbuster` may trigger on a blocking call.
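
A minimal sketch of the difference, with `store` assumed to be any
`BaseStore` implementation:

```py
def sync_keys(store):
    # Sync suite: iterating the blocking generator is fine here.
    return list(store.yield_keys())


async def async_keys(store):
    # Async suite: use the async generator so blocking-call detectors
    # such as blockbuster stay quiet.
    return [key async for key in store.ayield_keys()]
```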
2025-08-21 10:07:46 -04:00
Mohammed Mohtasim .M.S
b5c44406eb docs(docs): fix typos in table in "How to load PDFs" documentation (#32635)
**Description:**
Fixed corrupted text in the code cell output of the documentation
notebook. The code cell itself was correct, but the saved output
contained garbage text.

**Issue:**
The saved output in the documentation notebook contained garbage/typo
text in the table name.

**Dependencies:**
None
2025-08-21 10:06:45 -04:00
Emmanuel Leroy
2ec63ca7da docs: migration to langchain_oci (#32619)
Doc update. I missed a couple mentions of the old package.
2025-08-21 10:03:44 -04:00
Christophe Bornet
f896bcdb1d chore(langchain): add mypy pydantic plugin (#32610) 2025-08-19 16:59:59 -04:00
Christophe Bornet
73a7de63aa chore(text-splitters): add mypy pydantic plugin (#32611) 2025-08-19 16:58:12 -04:00
Emmanuel Leroy
cd5f3ee364 docs: migrate from community package to langchain-oci (#32608)
Migrate package from langchain_community to langchain_oci
2025-08-19 16:57:37 -04:00
ccurme
dbc5a3b718 fix(anthropic): update cassette for streaming benchmark (#32609) 2025-08-19 11:18:36 -04:00
Christophe Bornet
02d6b9106b chore(core): add mypy pydantic plugin (#32604)
This helps to remove a bunch of mypy false positives.
2025-08-19 09:39:53 -04:00
William FH
b470c79f1d refactor(core): Use duck typing for _StreamingCallbackHandler (#32535)
It's used in langgraph and maybe elsewhere, so it would be preferable if
it could just be duck-typed.
2025-08-19 05:41:07 -07:00
Mason Daugherty
f0f1e28473 Merge branch 'master' of github.com:langchain-ai/langchain into wip-v1.0 2025-08-18 23:30:10 -04:00
Mason Daugherty
d204f0dd55 feat(infra): add skip-preview tag check in Vercel deployment script (#32600)
Having vercel attempt to deploy on each commit (even if unrelated to
docs) was getting annoying. Options:

- `[skip-preview]`
- `[no-preview]`
- `[skip-deploy]`

Full example: `fix(core): resolve memory leak [no-preview]`
2025-08-18 17:33:27 -04:00
Mohammad Mohtashim
00259b0061 fix(deepseek): Deep Seek Model for LS Tracing (#32575)
- **Description:** Fix the LangSmith tracing provider for DeepSeek.
  - **Issue:** #32484

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 18:48:30 +00:00
Mohammad Mohtashim
4fb1132e30 docs: Classification Notebook Update (#32357)
- **Description:** Updating the Classification notebook which was raised
[here](https://github.com/langchain-ai/langchain/issues/32354)
- **Issue:** Fixes #32354

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-18 18:45:03 +00:00
Mason Daugherty
a6690eb9fd release(anthropic): 0.3.19 (#32595) 2025-08-18 14:25:03 -04:00
Mason Daugherty
f69f9598f5 chore: update references to use the latest version of Claude-3.5 Sonnet (#32594) 2025-08-18 14:11:15 -04:00
Mason Daugherty
8d0fb2d04b fix(anthropic): correct input_token count for streaming (#32591)
* Create usage metadata on
[`message_delta`](https://docs.anthropic.com/en/docs/build-with-claude/streaming#event-types)
instead of at the beginning. Consequently, token counts are not included
during streaming but instead at the end. This allows for accurate
reporting of server-side tool usage (important for billing)
* Add some clarifying comments
* Fix some outstanding Pylance warnings
* Remove unnecessary `text` popping in thinking blocks
* Also now correctly reports `input_cache_read`/`input_cache_creation`
as a result
2025-08-18 17:51:47 +00:00
Mason Daugherty
8042b04da6 fix(anthropic): clean up null file_id fields in citations during message formatting (#32592)
When citations are returned from streaming, they include a `file_id:
null` field in their `content_block_location` structure.

When these citations are passed back to the API in subsequent messages,
the API rejects them with "Extra inputs are not permitted" for the
`file_id` field.
2025-08-18 13:01:52 -04:00
Daehwi Kim
fb74265175 fix(docs): update LangGraph guides link and add JS how-to link (#32583)
**Description:**  
Corrected LangGraph documentation link (changed to “guides”), and added
a link to LangGraph JS how-to guides for clarity.

**Issue:**  
N/A  

**Dependencies:**  
None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-18 14:27:37 +00:00
Oresztesz Margaritisz
21b61aaf9a fix(docs): Using appropriate argument name in ToolNode for error handling (#32586)
The appropriate `ToolNode` attribute for error handling is called
`handle_tool_errors` instead of `handle_tool_error`.

For further info see [ToolNode source code in
LangGraph](https://github.com/langchain-ai/langgraph/blob/main/libs/prebuilt/langgraph/prebuilt/tool_node.py#L255)

**Twitter handle:** gitaroktato

2025-08-18 10:12:10 -04:00
Keyu Chen
03138f41a0 feat(text-splitters): add optional custom header pattern support (#31887)
## Description

This PR adds support for custom header patterns in
`MarkdownHeaderTextSplitter`, allowing users to define non-standard
Markdown header formats (like `**Header**`) and specify their hierarchy
levels.

**Issue:** Fixes #22738

**Dependencies:** None - this change has no new dependencies

**Key Changes:**
- Added optional `custom_header_patterns` parameter to support
non-standard header formats
- Enable splitting on patterns like `**Header**` and `***Header***`
- Maintain full backward compatibility with existing usage
- Added comprehensive tests for custom and mixed header scenarios

## Example Usage

```python
from langchain_text_splitters import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("**", "Chapter"),
    ("***", "Section"),
]

custom_header_patterns = {
    "**": 1,   # Level 1 headers
    "***": 2,  # Level 2 headers
}

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on,
    custom_header_patterns=custom_header_patterns,
)

# Now **Chapter 1** is treated as a level 1 header
# And ***Section 1.1*** is treated as a level 2 header
```

## Testing

-  Added unit tests for custom header patterns
-  Added tests for mixed standard and custom headers
-  All existing tests pass (backward compatibility maintained)
-  Linting and formatting checks pass

---

The implementation provides a flexible solution while maintaining the
simplicity of the existing API. Users can continue using the splitter
exactly as before, with the new functionality being entirely opt-in
through the `custom_header_patterns` parameter.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-18 10:10:49 -04:00
Mason Daugherty
fd891ee3d4 revert(anthropic): streaming token counting to defer input tokens until completion (#32587)
Reverts langchain-ai/langchain#32518
2025-08-18 09:48:33 -04:00
ccurme
b8cdbc4eca fix(anthropic): sanitize tool use block when taking directly from content (#32574) 2025-08-18 09:06:57 -04:00
Christophe Bornet
791d309c06 chore(langchain): add mypy warn_unreachable setting (#32529)
See
https://mypy.readthedocs.io/en/stable/config_file.html#confval-warn_unreachable

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 23:03:53 +00:00
Mason Daugherty
d3d23e2372 fix(anthropic): streaming token counting to defer input tokens until completion (#32518)
Supersedes #32461

Fixed incorrect input token reporting during streaming when tools are
used. Previously, input tokens were counted at `message_start` before
tool execution, leading to inaccurate counts. Now input tokens are
properly deferred until `message_delta` (completion), aligning with
Anthropic's billing model and SDK expectations.

**Before Fix:**
- Streaming with tools: Input tokens = 0 
- Non-streaming with tools: Input tokens = 472 

**After Fix:**
- Streaming with tools: Input tokens = 472 
- Non-streaming with tools: Input tokens = 472 

Aligns with Anthropic's SDK expectations. The SDK handles input token
updates in `message_delta` events:

```python
# https://github.com/anthropics/anthropic-sdk-python/blob/main/src/anthropic/lib/streaming/_messages.py
if event.usage.input_tokens is not None:
      current_snapshot.usage.input_tokens = event.usage.input_tokens
```
2025-08-15 17:49:46 -04:00
Mason Daugherty
8bd2403518 fix: increase max_tokens limit to 64000 re: Anthropic dynamic tokens 2025-08-15 15:34:54 -04:00
Mason Daugherty
4dd9110424 Merge branch 'master' into wip-v1.0 2025-08-15 15:32:21 -04:00
Mohammad Mohtashim
174e685139 feat(anthropic): dynamic mapping of Max Tokens for Anthropic (#31946)
- **Description:** Dynamic mapping of `max_tokens` based on the chosen
Anthropic model (sketched below).
- **Issue:** Fixes #31605
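
A hypothetical sketch of the idea; the real mapping lives in
`langchain_anthropic`, and the model names and limits here are examples
only:

```py
# Hypothetical illustration -- model names and limits are examples only.
_MODEL_DEFAULT_MAX_TOKENS = {
    "claude-3-5-haiku-latest": 8192,
    "claude-sonnet-4-0": 64000,
}


def default_max_tokens(model: str, fallback: int = 1024) -> int:
    # Fall back to a conservative default for unknown models.
    return _MODEL_DEFAULT_MAX_TOKENS.get(model, fallback)
```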

@ccurme

---------

Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-15 11:33:51 -07:00
Mason Daugherty
2f32c444b8 docs: add details on message IDs and their assignment process (#32534) 2025-08-15 18:22:28 +00:00
Mason Daugherty
9721684501 Merge branch 'master' into wip-v1.0 2025-08-15 14:06:34 -04:00
Mason Daugherty
a4e135b508 fix: use .get() on image URL in ImagePromptValue.to_string() 2025-08-15 13:57:50 -04:00
Mason Daugherty
fe740a9397 fix(docs): chatbot.ipynb trimming regression (#32561)
Supersedes #32544

Changes to the `trimmer` behavior caused the call `"What math
problem was asked?"` to no longer see the relevant query, due to the
token count of the queries. Adjusted so that the relevant part of the
message history is no longer trimmed. Also added a print to the trimmer
to increase observability into what leaves the context window.

Add note to trimming tut & format links as inline
2025-08-15 14:47:22 +00:00
Rostyslav Borovyk
b2b835cb36 docs(docs): add Oxylabs document loader (#32429)
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-15 10:46:26 -04:00
Christophe Bornet
4656f727da chore(text-splitters): add mypy warn_unreachable (#32558) 2025-08-15 09:45:20 -04:00
Mason Daugherty
34800332bf chore: update integrations table (#32556)
Enhance the integrations table by adding the `js:
'@langchain/community'` reference for several packages and updating the
titles of specific integrations to avoid improper capitalization
2025-08-14 22:37:36 -04:00
Mason Daugherty
06ba80ff68 docs: formatting Tavily (#32555) 2025-08-14 23:41:37 +00:00
Mason Daugherty
2bd8096faa docs: add pre-commit setup instructions to the dev setup guide (#32553) 2025-08-14 20:35:57 +00:00
Mason Daugherty
a0331285d7 fix(core): Support no-args tools by defaulting args to empty dict (#32530)
Supersedes #32408

Description:  
This PR ensures that tool calls without explicitly provided `args` will
default to an empty dictionary (`{}`), allowing tools with no parameters
(e.g. `def foo() -> str`) to be registered and invoked without
validation errors. This change improves compatibility with agent
frameworks that may omit the `args` field when generating tool calls.

Issue:  
See
[langgraph#5722](https://github.com/langchain-ai/langgraph/issues/5722)
–
LangGraph currently emits tool calls without `args`, which leads to
validation errors
when tools with no parameters are invoked. This PR ensures compatibility
by defaulting
`args` to `{}` when missing.

Dependencies:  
None
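
A minimal sketch of the behavior this enables (the tool call dict is
hand-built for illustration):

```py
from langchain_core.tools import tool


@tool
def ping() -> str:
    """Health-check tool that takes no arguments."""
    return "pong"


# A tool call that omits "args" is now treated as if args were {} instead
# of failing validation; the explicit {} below stands in for that default.
tool_call = {"name": "ping", "args": {}, "id": "call_1", "type": "tool_call"}
print(ping.invoke(tool_call))  # -> ToolMessage with content "pong"
```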

---------

Signed-off-by: jitokim <pigberger70@gmail.com>
Co-authored-by: jito <pigberger70@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 20:28:36 +00:00
Mason Daugherty
8f68a08528 chore: add Chat LangChain to README.md (#32545) 2025-08-14 16:15:27 -04:00
Lauren Hirata Singh
71651c4a11 docs: update banner (#32552) 2025-08-14 10:54:29 -07:00
Lauren Hirata Singh
44ec1f32b2 docs: banner for academy course (#32550)
Publish at 10AM PT
2025-08-14 10:05:00 -07:00
Yoon
0c81499243 docs(ollama): update API usage examples (#32547)
**Description**  
Corrected a typo in the Ollama chatbot example output in  
`docs/docs/integrations/chat/ollama.ipynb` where `"got-oss"` was  
mistakenly used instead of `"gpt-oss"`.

No functional changes to code; documentation-only update.  
All notebook outputs were cleared to keep the diff minimal.

**Issue**  
N/A

**Dependencies**  
None

**Twitter handle**  
N/A
2025-08-14 12:57:38 -04:00
Mason Daugherty
397cd89988 docs: update outdated README.md content (#32540) 2025-08-13 22:19:38 +00:00
mishraravibhushan
db438d8dcc docs(docs): fixed additional grammar and style issues in how-to index (#32533)
- Fix 'few shot' → 'few-shot' (add hyphen for consistency)
- Fix 'over the database' → 'over a database' (add missing article)
- Fix 'run time' → 'runtime' (more consistent terminology)
- Fix 'in-sync' → 'in sync' (remove unnecessary hyphen)
2025-08-13 14:10:58 -04:00
RecallIO
4f71c35eb0 docs(docs): Add RecallIO.AI as a memory provider (#32331)
Add requested files to add RecallIO as a memory provider.

---------

Co-authored-by: Frey <gfreyburger@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-13 15:09:56 +00:00
Mason Daugherty
156ae2e69b fix(docs): resolve langchain-azure-ai conflict with langchain-core (#32528) 2025-08-13 14:47:23 +00:00
Shenghang Tsai
f4f919768e docs(langchain): create SiliconFlow provider entry (#32342)
SiliconFlow's provider integration will be maintained at
https://github.com/siliconflow/langchain-siliconflow
This PR introduces basic instructions for using the pip package.
2025-08-13 10:41:23 -04:00
Mason Daugherty
7932e1edd1 feat(docs): clarify structured output with tools ordering (#32527) 2025-08-13 10:40:48 -04:00
Mason Daugherty
024422e9b0 chore: update to use new LGP docs url (#32522) 2025-08-13 03:38:39 +00:00
Mason Daugherty
d52036accc chore: update README.md to use pepy downloads badge (#32521) 2025-08-13 03:23:11 +00:00
Mason Daugherty
5b701b5189 fix(tests): add anthropic_proxy to configurable test parameters (for v1) 2025-08-12 18:33:21 -04:00
Mason Daugherty
8848b3e018 fix(tests): add anthropic_proxy to configurable test parameters 2025-08-12 18:27:35 -04:00
Mason Daugherty
80068432ed chore(core): bump lock 2025-08-12 17:32:24 -04:00
Jack
b9dcce95be fix(anthropic): Add proxy (#32409)
Fixes #30146

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 21:21:26 +00:00
ccurme
be83ce74a7 feat(anthropic): support cache_control as a kwarg (#31523)
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")
caching_llm = llm.bind(cache_control={"type": "ephemeral"})

caching_llm.invoke(
    [
        HumanMessage("..."),
        AIMessage("..."),
        HumanMessage("..."),  # <-- final message / content block gets cache annotation
    ]
)
```
Potentially useful given's Anthropic's [incremental
caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation)
capabilities:
> During each turn, we mark the final block of the final message with
cache_control so the conversation can be incrementally cached. The
system will automatically lookup and use the longest previously cached
prefix for follow-up messages.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-12 16:18:24 -04:00
Mason Daugherty
1167e7458e fix(anthropic): update test model names and adjust token count assertions in integration tests (#32422) 2025-08-12 19:39:35 +00:00
Mason Daugherty
d5fd0bca35 docs(anthropic): add documentation for extended context windows in Claude Sonnet 4 (#32517) 2025-08-12 19:16:26 +00:00
Narasimha Badrinath
30d646b576 docs(docs): remove redundant integration details from ChatGradient page. (#32514)
This commit removes redundant integration info from the details page.
Additionally, it changes the reference from "DigitalOcean GradientAI" to
"DigitalOcean Gradient™ AI" and updates the setup instructions
accordingly.
2025-08-12 16:14:18 +00:00
Mason Daugherty
262c83763f release(openai): 0.3.30 (#32515) 2025-08-12 16:06:17 +00:00
Mason Daugherty
0024dffa68 feat(openai): officially support verbosity (#32470) 2025-08-12 16:00:30 +00:00
Brody
98797f367a docs: fix broken links (#32513)
**Description:**

Two broken links were reported by another LangChain employee. This PR
fixes those links.

Fixed and tested locally.
  
**Dependencies:**

None
2025-08-12 15:55:37 +00:00
Christophe Bornet
1563099f3f chore(langchain): select ALL rules with exclusions (#31930)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-12 11:51:31 -04:00
rishiraj
7f259863e1 feat(docs): add truefoundry ai gateway (#32362)
This PR adds documentation for integrating [TrueFoundry's AI
Gateway](https://www.truefoundry.com/ai-gateway) with Langfuse using the
LangGraph OpenAI SDK.
The integration sends requests through TrueFoundry's AI Gateway for
unified governance, observability, and routing, while LangGraph runs on
the client side to capture execution traces and telemetry.
- Issue: N/A
- Dependencies: None
- Twitter - https://x.com/truefoundry


tests - Not applicable

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-12 02:26:45 +00:00
Mason Daugherty
c8df6c7ec9 chore: update CONTRIBUTING.md to more clearly mention forum (#32509) 2025-08-11 23:02:21 +00:00
Christophe Bornet
cf2b4bbe09 chore(cli): select ALL rules with exclusions (#31936)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:43:11 +00:00
Christophe Bornet
09a616fe85 chore(standard-tests): add ruff rules D (#32347)
See https://docs.astral.sh/ruff/rules/#pydocstyle-d

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:26:11 +00:00
Christophe Bornet
46bbd52e81 chore(cli): add ruff rules D1 (#32350)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:25:30 +00:00
Christophe Bornet
8b663ed6c6 chore(text-splitters): bump mypy version to 1.17 (#32387)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 22:24:49 +00:00
Anderson
166c027434 docs: add scrapeless integration documentation (#32081)
- **Description:** Integrated the Scrapeless package to enable Langchain
users to seamlessly incorporate Scrapeless into their agents.
- **Dependencies:** None
- **Twitter handle:** [Scrapelessteam](https://x.com/Scrapelessteam)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 22:16:15 +00:00
GDanksAnchor
4a2a3fcd43 docs: add anchorbrowser (#32494)
# Description

This PR updates the docs for the
[langchain-anchorbrowser](https://pypi.org/project/langchain-anchorbrowser/)
package. It adds a few tools

[Anchor Browser](https://anchorbrowser.io/?utm=langchain) is the
platform for AI Agentic browser automation, which solves the challenge
of automating workflows for web applications that lack APIs or have
limited API coverage. It simplifies the creation, deployment, and
management of browser-based automations, transforming complex web
interactions into simple API endpoints.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:48:10 +00:00
Anubhav Dhawan
d46dcf4a60 docs: add Google partner guide for MCP Toolbox (#32356)
This PR introduces a new Google partner guide for MCP Toolbox. The
primary goal of this new documentation is to enhance the discoverability
of MCP Toolbox for developers working within the Google ecosystem,
providing them with a clear and direct path to using our tools.

> [!IMPORTANT]
> This PR contains link to a page which is added in #32344. This will
cause deployment failure until that PR is merged.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 21:34:12 +00:00
William Espegren
d2ac3b375c fix(docs): add Spider as a webpage loader (#32453)
[Spider](https://spider.cloud/) is a webpage loader and should be listed
under the
["Webpages"](https://python.langchain.com/docs/integrations/document_loaders/#webpages)
table on the Document loaders page.

Twitter: https://x.com/WilliamEspegren

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:23:03 +00:00
Anubhav Dhawan
1e38fd2ce3 docs: add integration guide for MCP Toolbox (#32344)
This PR introduces a new integration guide for MCP Toolbox. The primary
goal of this new documentation is to enhance the discoverability of MCP
Toolbox for developers working within the LangChain ecosystem, providing
them with a clear and direct path to using our tools.

This approach was chosen to provide users with a practical, hands-on
example that they can easily follow.

> [!NOTE]
> The page added in this PR is linked to from a section in Google
partners page added in #32356.

---------

Co-authored-by: Lauren Hirata Singh <lauren@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 21:03:38 +00:00
Yasien Dwieb
155e3740bc fix(docs): handle collection not found error on RAG tutorial when qdrant is selected as vectorStore (#32099)
In the [RAG Part 1
tutorial](https://python.langchain.com/docs/tutorials/rag/), the sample
code does not work when Qdrant is selected as the vector store.
It fails with `ValueError: Collection test not found`.

So this fix creates that collection and ensures its dimension size
matches the embedding size of the selected embedding model.
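A rough sketch of that approach with the qdrant-client API (the collection name matches the tutorial; the vector size of 3072 is illustrative and must equal the embedding model's output dimension):

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, VectorParams

client = QdrantClient(":memory:")

# Create the collection before constructing the vector store; the vector
# size must match the embedding model's dimensionality.
if not client.collection_exists("test"):
    client.create_collection(
        collection_name="test",
        vectors_config=VectorParams(size=3072, distance=Distance.COSINE),
    )
```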

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:31:24 +00:00
Deepesh Dhakal
f9b4e501a8 fix(docs): update llamacpp.ipynb for installation options on Mac (#32341)
The previous code produced a "data invalid" error.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-11 20:25:35 +00:00
prem-sagar123
5a50802c9a docs: update prompt_templates.mdx (#32405)
```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Assumes a prompt template containing a MessagesPlaceholder named "msgs"
prompt_template = ChatPromptTemplate.from_messages([MessagesPlaceholder("msgs")])
messages_to_pass = [
    HumanMessage(content="What's the capital of France?"),
    AIMessage(content="The capital of France is Paris."),
    HumanMessage(content="And what about Germany?"),
]
formatted_prompt = prompt_template.invoke({"msgs": messages_to_pass})
print(formatted_prompt)
```

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-08-11 20:16:30 +00:00
Mohammad Mohtashim
9a7e66be60 docs: put standard-tests before other packages (#32424)
- **Description:** Moving `standard-tests` to main ordered section
- **Issue:** #32395

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 20:05:24 +00:00
Mason Daugherty
5597b277c5 feat(docs): add subsection on Tool Artifacts vs. Injected State (#32468)
Clarify the differences between tool artifacts and injected state in
LangChain and LangGraph
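As a brief illustration of the artifact side, a sketch using the existing `response_format="content_and_artifact"` convention (the tool and payload are made up):

```python
from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def run_query(query: str) -> tuple[str, dict]:
    """Run a query and return a summary plus the raw rows as an artifact."""
    rows = {"rows": [1, 2, 3]}  # made-up payload; kept out of the model context
    return f"Query {query!r} returned {len(rows['rows'])} rows.", rows
```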
2025-08-11 19:53:33 +00:00
Soham Sharma
a1da5697c6 docs: clarify how to get LangSmith API key (#32402)
**Description:**
I've added a small clarification to the chatbot tutorial. The tutorial
mentions setting the `LANGSMITH_API_KEY`, but doesn't explain how a new
user can get the key from the website. This change adds a brief note to
guide them to the Settings page.

P.S. This is my first pull request, so I'm excited to learn and
contribute!

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@sohamactive

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:52:05 +00:00
Divyanshu Gupta
11a54b1f1a docs: clarify SystemMessage usage in LangGraph agent notebook (#32320) (#32346)
Closes #32320

This PR updates the `langgraph_agentic_rag.ipynb` notebook to clarify
that LangGraph does not automatically prepend a `SystemMessage`. A
markdown note and an inline Python comment have been added to guide
users to explicitly include a `SystemMessage` when needed.

This improves documentation for developers working with LangGraph-based
agents and avoids confusion about system-level behavior not being
applied.
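For example, a minimal sketch of the recommended pattern, assuming a recent langgraph where `create_react_agent` accepts a model identifier string (model name and prompts are placeholders):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.prebuilt import create_react_agent

agent = create_react_agent("openai:gpt-4o-mini", tools=[])

# LangGraph does not prepend a SystemMessage automatically, so pass one
# explicitly with the input messages.
result = agent.invoke(
    {
        "messages": [
            SystemMessage("You are a helpful research assistant."),
            HumanMessage("Hello!"),
        ]
    }
)
```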

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 19:49:42 +00:00
Mason Daugherty
5ccdcd7b7b feat(ollama): docs updates (#32507) 2025-08-11 15:39:44 -04:00
Mason Daugherty
ee4c2510eb feat: port various nit changes from wip-v0.4 (#32506)
Lots of work that wasn't directly related to core
improvements/messages/testing functionality
2025-08-11 15:09:08 -04:00
mishraravibhushan
7db9e60601 docs(docs): fix grammar, capitalization, and style issues across documentation (#32503)
**Changes made:**
- Fix 'Async programming with langchain' → 'Async programming with
LangChain'
- Fix 'Langchain asynchronous APIs' → 'LangChain asynchronous APIs'
- Fix 'How to: init any model' → 'How to: initialize any model'
- Fix 'async programming with Langchain' → 'async programming with
LangChain'
- Fix 'How to propagate callbacks constructor' → 'How to propagate
callbacks to the constructor'
- Fix 'How to add a semantic layer over graph database' → 'How to add a
semantic layer over a graph database'
- Fix 'Build a Question/Answering system' → 'Build a Question-Answering
system'

**Why is this change needed?**
- Improves documentation clarity and readability
- Maintains consistent LangChain branding throughout the docs
- Fixes grammar issues that could confuse users
- Follows proper documentation standards

**Files changed:**
- `docs/docs/concepts/async.mdx`
- `docs/docs/concepts/tools.mdx`
- `docs/docs/how_to/index.mdx`
- `docs/docs/how_to/callbacks_constructor.ipynb`
- `docs/docs/how_to/graph_semantic.ipynb`
- `docs/docs/tutorials/sql_qa.ipynb`

**Issue:** N/A (documentation improvements)

**Dependencies:** None

**Twitter handle:** https://x.com/mishraravibhush

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-11 13:32:28 -04:00
Mason Daugherty
e5d0a4e4d6 feat(standard-tests): formatting (#32504)
Not touching `pyproject.toml` or chat model related items as to not
interfere with work in wip0.4 branch
2025-08-11 13:30:30 -04:00
Mason Daugherty
457ce9c4b0 feat(text-splitters): ruff fixes and rules (#32502) 2025-08-11 13:28:22 -04:00
Mason Daugherty
27b6b53f20 feat(xai): ruff fixes and rules (#32501) 2025-08-11 13:03:07 -04:00
Christophe Bornet
f55186b38f fix(core): fix beta decorator for properties (#32497) 2025-08-11 12:43:53 -04:00
Mason Daugherty
374f414c91 feat(qdrant): ruff fixes and rules (#32500) 2025-08-11 12:43:41 -04:00
dependabot[bot]
9b3f3dc8d9 chore: bump actions/download-artifact from 4 to 5 (#32495)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 4 to 5.
Release notes (v5.0.0): BREAKING fix for inconsistent path behavior
when downloading a single artifact by ID
(actions/download-artifact#416).

Previously, a single artifact downloaded by name (`name: my-artifact`)
was extracted directly to `path/`, while the same artifact downloaded by
ID (`artifact-ids: 12345`) was extracted to the nested
`path/my-artifact/`. Both methods now extract directly to `path/`.

No action is needed for workflows that download artifacts by name,
download multiple artifacts by ID, or already use `merge-multiple: true`
as a workaround. Workflows that download single artifacts by ID and
expect the nested directory structure must be updated.

Full diff: https://github.com/actions/download-artifact/compare/v4...v5

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-11 12:41:58 -04:00
lineuman
afc3b1824c docs(deepseek): Add DeepSeek model option (#32481) 2025-08-11 09:20:39 -04:00
ran8080
130b7e6170 docs(docs): add missing name to AIMessage in example (#32482)
**Description:**

In the `docs/docs/how_to/structured_output.ipynb` notebook, an
`AIMessage` within the tool-calling few-shot example was missing the
`name="example_assistant"` parameter. This was inconsistent with the
other `AIMessage` instances in the same list.

This change adds the missing `name` parameter to ensure all examples in
the section are consistent, improving the clarity and correctness of the
documentation.

**Issue:** N/A

**Dependencies:** N/A
2025-08-11 09:20:09 -04:00
Navanit Dubey
d40fa534c1 docs(docs): use model_json_schema() (#32485)
While trying the line `People.schema()`, I got a warning:

```
The `schema` method is deprecated; use `model_json_schema` instead
```

So I made the changes and now the file works.
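For reference, the change in Pydantic v2 terms (the model is hypothetical):

```python
from pydantic import BaseModel


class People(BaseModel):
    name: str
    age: int


# Deprecated in Pydantic v2 (emits the warning above):
# People.schema()

# Preferred:
print(People.model_json_schema())
```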

2025-08-11 09:19:14 -04:00
mishraravibhushan
20bd296421 docs(docs): fix grammar in "How to deal with high-cardinality categoricals" guide title (#32488)
Description:
Corrected the guide title from "How deal with high cardinality
categoricals" to "How to deal with high-cardinality categoricals".
- Added missing "to" for grammatical correctness.
- Hyphenated "high-cardinality" for standard compound adjective usage.

Issue:
N/A

Dependencies:
None

Twitter handle:
https://x.com/mishraravibhush
2025-08-11 09:17:51 -04:00
ccurme
9259eea846 fix(docs): use pepy for integration package download badges (#32491)
pypi stats has been down for some time.
2025-08-10 18:41:36 -04:00
ccurme
afcb097ef5 fix(docs): DigitalOcean Gradient: link to correct provider page and update page title (#32490) 2025-08-10 17:29:44 -04:00
ccurme
088095b663 release(openai): release 0.3.29 (#32463) 2025-08-08 11:04:33 -04:00
Mason Daugherty
c31236264e chore: formatting across codebase (#32466) 2025-08-08 10:20:10 -04:00
ccurme
02001212b0 fix(openai): revert some changes (#32462)
Keep coverage on `output_version="v0"` (increasing coverage is being
managed in v0.4 branch).
2025-08-08 08:51:18 -04:00
Mason Daugherty
00244122bd feat(openai): minimal and verbosity (#32455) 2025-08-08 02:24:21 +00:00
ccurme
6727d6e8c8 release(core): 0.3.74 (#32454) 2025-08-07 16:39:01 -04:00
Michael Matloka
5036bd7adb fix(openai): don't crash get_num_tokens_from_messages on gpt-5 (#32451) 2025-08-07 16:33:19 -04:00
ccurme
ec2b34a02d feat(openai): custom tools (#32449) 2025-08-07 16:30:01 -04:00
Mason Daugherty
145d38f7dd test(openai): add tests for prompt_cache_key parameter and update docs (#32363)
Introduce tests to validate the behavior and inclusion of the
`prompt_cache_key` parameter in request payloads for the `ChatOpenAI`
model.
2025-08-07 15:29:47 -04:00
ccurme
68c70da33e fix(openai): add in output_text (#32450)
This property was deleted in `openai==1.99.2`.
2025-08-07 15:23:56 -04:00
Eugene Yurtsev
754528d23f feat(langchain): add stuff and map reduce chains (#32333)
* Add stuff and map reduce chains
* We'll need to rename and add unit tests to the chains prior to
official release
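For context, "stuff" concatenates all documents into a single model call, while map-reduce processes each document independently and then combines the partial results. A conceptual sketch of the two strategies, not the API added in this PR:

```python
from collections.abc import Callable

from langchain_core.documents import Document


def stuff(docs: list[Document]) -> str:
    """Stuff: concatenate everything into one context/prompt."""
    return "\n\n".join(d.page_content for d in docs)


def map_reduce(docs: list[Document], summarize: Callable[[str], str]) -> str:
    """Map-reduce: summarize each document, then combine the summaries."""
    partial = [summarize(d.page_content) for d in docs]
    return summarize("\n\n".join(partial))
```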
2025-08-07 15:20:05 -04:00
CLOVA Studio 개발
ac706c77d4 docs(docs): update v0.1.1 chatModel document on langchain-naver. (#32445)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were released
[v0.1.1](https://pypi.org/project/langchain-naver/0.1.1/).
So we've updated some of our documents to cover the changed features.

## **Dependencies:** 
https://github.com/langchain-ai/langchain/pull/30956

---------

Co-authored-by: 김필환[AI Studio Dev1] <pilhwan.kim@navercorp.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-07 15:45:50 +00:00
Tianyu Chen
8493887b6f docs: update Docker image name for jaguardb setup (#32438)
**Description**
Updated the quick setup instructions for JaguarDB in the documentation.
Replaced the outdated Docker image `jaguardb/jaguardb_with_http` with
the current recommended image `jaguardb/jaguardb` for pulling and
running the server.
2025-08-07 11:23:29 -04:00
Christophe Bornet
a647073b26 feat(standard-tests): add a property to set the name of the parameter for the number of results to return (#32443)
Not all retrievers use `k` as the param name for the number of results to
return, even within LangChain itself. E.g.:
bc4251b9e0/libs/core/langchain_core/indexing/in_memory.py (L31)

So it's helpful to be able to change it for a given retriever.
The change also adds hints to disable the tests if the retriever doesn't
support setting the param in the constructor or in the invoke method
(for instance, the `InMemoryDocumentIndex` in the link supports it in
the constructor but not in the invoke method).

This change is backward compatible.
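As an illustration of how such a property is typically overridden in a standard-tests subclass (the property name `num_results_arg_name` and the retriever are hypothetical; see the PR for the actual name):

```python
from langchain_tests.integration_tests import RetrieversIntegrationTests

from my_package.retrievers import MyRetriever  # hypothetical retriever


class TestMyRetriever(RetrieversIntegrationTests):
    @property
    def retriever_constructor(self) -> type[MyRetriever]:
        return MyRetriever

    # Hypothetical property: names the constructor/invoke parameter that
    # controls the number of results, instead of the default "k".
    @property
    def num_results_arg_name(self) -> str:
        return "top_k"
```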

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-07 11:22:24 -04:00
ccurme
e120604774 fix(infra): exclude pre-releases from previous version testing (#32447) 2025-08-07 10:18:59 -04:00
ccurme
06d8754b0b release(core): 0.3.73 (#32446) 2025-08-07 09:03:53 -04:00
ccurme
6e108c1cb4 feat(core): zero-out token costs for cache hits (#32437) 2025-08-07 08:49:34 -04:00
John Bledsoe
bc4251b9e0 fix(core): fix index checking when merging lists (#32431)
**Description:** fix an issue I discovered when attempting to merge
messages in which one message has an `index` key in its content
dictionary and another does not.
2025-08-06 12:47:33 -04:00
Nelson Sproul
2543007436 docs(langchain): complete PDF embedding example for OpenAI, also some minor doc fixes (#32426)
For attaching PDFs with OpenAI, note the required metadata.

Also some minor doc updates.
2025-08-06 12:16:16 -04:00
Mason Daugherty
ba83f58141 release(groq): 0.3.7 (#32417) 2025-08-05 15:13:08 -04:00
Mason Daugherty
fb490b0c39 feat(groq): loosen restrictions on reasoning_effort, inject effort in meta, update tests (#32415) 2025-08-05 15:03:38 -04:00
Mason Daugherty
419c173225 feat(groq): openai-oss (#32411)
use new openai-oss for integration tests, set module-level testing model
names and improve robustness of tool tests
2025-08-05 14:18:56 -04:00
Pranav Bhartiya
4011257c25 docs: add Windows-specific setup instructions (#32399)
**Description:** This PR improves the contribution setup guide by adding
comprehensive Windows-specific instructions. The changes address a
common pain point for Windows contributors who don't have `make`
installed by default, making the LangChain contribution process more
accessible across different operating systems.
The main improvements include:

- Added a dedicated "Windows Users" section with multiple installation
options for `make` (Chocolatey, Scoop, WSL)
- Provided direct `uv` commands as alternatives to all `make` commands
throughout the setup guide
- Included Windows-specific instructions for testing, formatting,
linting, and spellchecking
- Enhanced the documentation to be more inclusive for Windows developers

This change makes it easier for Windows users to contribute to LangChain
without requiring additional tool installation, while maintaining the
existing workflow for users who already have `make` available.

**Issue:** This addresses the common barrier Windows users face when
trying to contribute to LangChain due to missing `make` commands.

**Dependencies:** None required - this is purely a documentation
improvement.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-05 15:00:03 +00:00
Kanav Bansal
9de0892a77 fix(docs): update package names across multiple integration docs (#32393)
## **Description:** 
Updated incorrect package names across multiple integration docs by
replacing underscores with hyphens to reflect their actual names on
PyPI. This aligns with the actual PyPI package names and prevents
potential confusion or installation issues.
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-04 17:38:29 +00:00
Narasimha Badrinath
dd9f5d7cde feat(docs): add langchain-gradientai as provider (#32202)
langchain-gradientai is DigitalOcean's integration with LangChain. It
helps users build LangChain applications using DigitalOcean's
GradientAI platform.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-04 14:57:59 +00:00
Ammar Younas
d348cfe968 docs: fix minor typos in image generation description (#32375)
Description:
Fixed minor typos in the `google_imagen.ipynb` integration notebook
related to image generation prompt formatting. No functional changes
were made — just a documentation correction to improve clarity.
2025-08-04 10:52:05 -04:00
Kanav Bansal
84c5048cb8 fix(docs): correct package names in FeatureTables.js (#32377)
## **Description:** 
Updated incorrect package names in `FeatureTables.js` by replacing
underscores with hyphens to reflect their actual names on PyPI. This
aligns with the actual PyPI package names and prevents potential
confusion or installation issues.

The following package names were corrected:
- `langchain_aws` ➝ `langchain-aws`
- `langchain_community` ➝ `langchain-community`
- `langchain_elasticsearch` ➝ `langchain-elasticsearch`
- `langchain_google_community` ➝ `langchain-google-community`

 
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-08-04 10:51:32 -04:00
garciasces
d318c655b6 fix(docs): inconsistent docs for Google Vertex AI (#32381)
Description: Documentation is inconsistent with API docs.

Current documentation implies that to use the integration you must have
credentials configured AND store the path to a service account JSON
file.

API docs explain that you must only complete EITHER of the steps
regarding credentials.

I have updated the docs to make them consistent with the API wording.
2025-08-04 10:50:50 -04:00
Kanav Bansal
df4eed0cea fix(docs): update package names, class links and package links across kv_store_feat_table.py (#32353)
## **Description:** 
Refactored multiple entries in `kv_store_feat_table.py` to ensure that
all vector store metadata is accurate, consistent, and aligned with
LangChain's latest documentation structure and PyPI naming standards.

**Key improvements across all updated entries:**
- Updated `class` links to point to their respective **docs-based
integration pages** (e.g., `/docs/integrations/stores/...`) instead of
raw API reference URLs.
- Corrected `package` display names to use **hyphenated PyPI-compliant
names** (e.g., `langchain-astradb` instead of `langchain_astradb`).
- Updated `package` links to point to the **specific class-level API
references** (e.g., `/api_reference/.../storage/...ClassName.html`) for
precision.

These improvements enhance:
- Navigation experience for users
- Alignment with PyPI and docs naming conventions
- Clarity across LangChain’s integrations documentation


 
## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-08-04 09:46:54 -04:00
Dhanesh Gujrathi
a25e196fe9 docs(docs): add link for ALPHAVANTAGE_API_KEY generation in integration notebook (#32364)
**Description:**

This PR updates the `docs/docs/integrations/tools/alpha_vantage.ipynb`
integration notebook to help users locate the API key registration page
for Alpha Vantage. The following markdown line was added:

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-08-03 19:49:44 +00:00
Ethan Knights
3137d49bd9 docs: minor agent tools markdown improvement (#32367)
Minor sharpening of agent tool doc.
2025-08-03 15:42:19 -04:00
Raphaël
9a2f49df1f fix(docs): add missing space (#32349) 2025-07-31 09:28:51 -04:00
Mason Daugherty
32e5040a42 chore: add CLAUDE.md (#32334) 2025-07-30 23:04:45 +00:00
ccurme
a9e52ca605 chore(openai): bump openai sdk (#32322) 2025-07-30 10:58:18 -04:00
Kanav Bansal
e2bc8f19c0 docs(docs): update RAG tutorials link across multiple vector store docs (AstraDB, DatabricksVectorSearch, FAISS, Redis, etc.) (#32301)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following vector store docs.

1. AstraDBVectorStore
2. Clickhouse
3. CouchbaseSearchVectorStore
4. DatabricksVectorSearch
5. ElasticsearchStore
6. FAISS
7. Milvus
8. MongoDBAtlasVectorSearch
9. openGauss
10. PGVector
11. PGVectorStore
12. PineconeVectorStore
13. QdrantVectorStore
14. Redis
15. SQLServer

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-30 09:46:01 -04:00
Mason Daugherty
fbd5a238d8 fix(core): revert "fix: tool call streaming bug with inconsistent indices from Qwen3" (#32307)
Reverts langchain-ai/langchain#32160

Original issue stems from using `ChatOpenAI` to interact with a `qwen`
model. Recommended to use
[langchain-qwq](https://python.langchain.com/docs/integrations/chat/qwq/)
which is built for Qwen
2025-07-29 10:26:38 -04:00
HerrDings
fc2f66ca80 docs: fixed link to docs of unstructured (#32306)
In the section [How to load documents from a
directory](https://python.langchain.com/docs/how_to/document_loader_directory/)
there is a link to the docs of *unstructured*. When you click this link,
it tells you that it has moved. Accordingly this PR fixes this link in
LangChain docs directly

from: `https://unstructured-io.github.io/unstructured/#`
to: `https://docs.unstructured.io/`
2025-07-29 10:12:22 -04:00
Mason Daugherty
0e287763cd fix: lint 2025-07-28 18:49:43 -04:00
Copilot
0b56c1bc4b fix: tool call streaming bug with inconsistent indices from Qwen3 (#32160)
Fixes a streaming bug where models like Qwen3 (using OpenAI interface)
send tool call chunks with inconsistent indices, resulting in
duplicate/erroneous tool calls instead of a single merged tool call.

## Problem

When Qwen3 streams tool calls, it sends chunks with inconsistent `index`
values:
- First chunk: `index=1` with tool name and partial arguments  
- Subsequent chunks: `index=0` with `name=None`, `id=None` and argument
continuation

The existing `merge_lists` function only merges chunks when their
`index` values match exactly, causing these logically related chunks to
remain separate, resulting in multiple incomplete tool calls instead of
one complete tool call.

```python
# Before fix: Results in 1 valid + 1 invalid tool call
chunk1 = AIMessageChunk(tool_call_chunks=[
    {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
])
chunk2 = AIMessageChunk(tool_call_chunks=[
    {"name": None, "args": ' "test"}', "id": None, "index": 0}  
])
merged = chunk1 + chunk2  # Creates 2 separate tool calls

# After fix: Results in 1 complete tool call
merged = chunk1 + chunk2  # Creates 1 merged tool call: search({"query": "test"})
```

## Solution

Enhanced the `merge_lists` function in `langchain_core/utils/_merge.py`
with intelligent tool call chunk merging:

1. **Preserves existing behavior**: Same-index chunks still merge as
before
2. **Adds special handling**: Tool call chunks with
`name=None`/`id=None` that don't match any existing index are now merged
with the most recent complete tool call chunk
3. **Maintains backward compatibility**: All existing functionality
works unchanged
4. **Targeted fix**: Only affects tool call chunks, doesn't change
behavior for other list items

The fix specifically handles the pattern where:
- A continuation chunk has `name=None` and `id=None` (indicating it's
part of an ongoing tool call)
- No matching index is found in existing chunks
- There exists a recent tool call chunk with a valid name or ID to merge
with

## Testing

Added comprehensive test coverage including:
- Qwen3-style chunks with different indices now merge correctly
- Existing same-index behavior preserved
- Multiple distinct tool calls remain separate
- Edge cases handled (empty chunks, orphaned continuations)
- Backward compatibility maintained

Fixes #31511.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-28 22:31:41 +00:00
Copilot
ad88e5aaec fix(core): resolve cache validation error by safely converting Generation to ChatGeneration objects (#32156)
## Problem

ChatLiteLLM encounters a `ValidationError` when using cache on
subsequent calls, causing the following error:

```
ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': "unexpected value; permitted: 'ChatGeneration'", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}])
```

This occurs because:
1. The cache stores `Generation` objects (with `type="Generation"`)
2. But `ChatResult` expects `ChatGeneration` objects (with
`type="ChatGeneration"` and a required `message` field)
3. When cached values are retrieved, validation fails due to the type
mismatch

## Solution

Added graceful handling in both sync (`_generate_with_cache`) and async
(`_agenerate_with_cache`) cache methods to:

1. **Detect** when cached values contain `Generation` objects instead of
expected `ChatGeneration` objects
2. **Convert** them to `ChatGeneration` objects by wrapping the text
content in an `AIMessage`
3. **Preserve** all original metadata (`generation_info`)
4. **Allow** `ChatResult` creation to succeed without validation errors

## Example

```python
# Before: This would fail with ValidationError
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path="cache.db"))
llm = ChatLiteLLM(model_name="openai/gpt-4o", cache=True, temperature=0)

print(llm.predict("test"))  # Works fine (cache empty)
print(llm.predict("test"))  # Now works instead of ValidationError

# After: Seamlessly handles both Generation and ChatGeneration objects
```

## Changes

- **`libs/core/langchain_core/language_models/chat_models.py`**: 
  - Added `Generation` import from `langchain_core.outputs`
- Enhanced cache retrieval logic in `_generate_with_cache` and
`_agenerate_with_cache` methods
- Added conversion from `Generation` to `ChatGeneration` objects when
needed

-
**`libs/core/tests/unit_tests/language_models/chat_models/test_cache.py`**:
- Added test case to validate the conversion logic handles mixed object
types

## Impact

- **Backward Compatible**: Existing code continues to work unchanged
- **Minimal Change**: Only affects cache retrieval path, no API changes
- **Robust**: Handles both legacy cached `Generation` objects and new
`ChatGeneration` objects
- **Preserves Data**: All original content and metadata is maintained
during conversion

Fixes #22389.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-28 22:28:16 +00:00
Mason Daugherty
30e3ed6a19 fix: add space in run-name for better readability 2025-07-28 17:46:27 -04:00
Mason Daugherty
8641a95c43 fix: update run-name in scheduled_test.yml to include dynamic inputs 2025-07-28 17:45:05 -04:00
Mason Daugherty
df70c5c186 chore: update actions run-names and add default inputs (#32293) 2025-07-28 17:33:27 -04:00
Mason Daugherty
d5ca77e065 fix: remove erroneous rocket emoji in run-name 2025-07-28 17:11:14 -04:00
Mason Daugherty
b7e4797e8b release(anthropic): 0.3.18 (#32292) 2025-07-28 17:07:11 -04:00
Mason Daugherty
3a487bf720 refactor(anthropic): AnthropicLLM to use Messages API (#32290)
re: #32189
2025-07-28 16:22:58 -04:00
Mason Daugherty
e5fd67024c fix: update link text for reporting security vulnerabilities in SECURITY.md 2025-07-28 15:05:31 -04:00
Mason Daugherty
b86841ac40 fix: update alt attribute for GitHub Codespace badge in README 2025-07-28 15:04:57 -04:00
Mason Daugherty
8db16b5633 fix: use new Google model names in examples (#32288) 2025-07-28 19:03:42 +00:00
Mason Daugherty
6f10160a45 fix: scripts/ errors 2025-07-28 15:03:25 -04:00
Mason Daugherty
e79e0bd6b4 fix(openai): add max_retries parameter to ChatOpenAI for handling 503 capacity errors (#32286)
Some integration tests were failing
2025-07-28 13:58:23 -04:00
ccurme
c55294ecb0 chore(core): add test for nested pydantic fields in schemas (#32285) 2025-07-28 17:27:24 +00:00
Mason Daugherty
7a26c3d233 fix: update bar_model to use the correct model version claude-3-7-sonnet-20250219 (#32284) 2025-07-28 12:57:40 -04:00
Mason Daugherty
c6ffac3ce0 refactor: mdx lint (#32282) 2025-07-28 12:56:22 -04:00
Mason Daugherty
a07d2c5016 refactor: remove references to unsupported model claude-3-sonnet-20240229 (#32281)
Addresses some (but not all) test issues brought about in #32280
2025-07-28 11:57:43 -04:00
Aleksandr Filippov
f0b6baa0ef fix(core): track within-batch deduplication in indexing num_skipped count (#32273)
**Description:** Fixes incorrect `num_skipped` count in the LangChain
indexing API. The current implementation only counts documents that
already exist in RecordManager (cross-batch duplicates) but fails to
count documents removed during within-batch deduplication via
`_deduplicate_in_order()`.

This PR adds tracking of the original batch size before deduplication
and includes the difference in `num_skipped`, ensuring that `num_added +
num_skipped` equals the total number of input documents.

**Issue:** Fixes incorrect document count reporting in indexing
statistics

**Dependencies:** None

Fixes #32272
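A small sketch of the restored invariant using in-memory components from langchain-core (the duplicate document triggers within-batch deduplication):

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))

docs = [
    Document(page_content="hello", metadata={"source": "a"}),
    Document(page_content="hello", metadata={"source": "a"}),  # in-batch duplicate
]
result = index(docs, record_manager, vector_store, cleanup=None, source_id_key="source")

# With the fix, the deduplicated document counts toward num_skipped:
assert result["num_added"] + result["num_skipped"] == len(docs)
```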

---------

Co-authored-by: Alex Feel <afilippov@spotware.com>
2025-07-28 09:58:51 -04:00
Mason Daugherty
12c0e9b7d8 fix(docs): local API reference documentation build (#32271)
ensure all relevant packages are correctly processed - cli wasn't
included, also fix ValueError
2025-07-28 00:50:20 -04:00
Mason Daugherty
ed682ae62d fix: explicitly tell uv to copy when using devcontainer (#32267) 2025-07-28 00:01:06 -04:00
Mason Daugherty
caf1919217 fix: devcontainer to use volume to store the workspace (#32266)
should resolve the file sharing issue for users on macOS.
2025-07-27 23:43:06 -04:00
Mason Daugherty
904066f1ec feat: add VSCode configuration files for Python development (#32263) 2025-07-27 23:37:59 -04:00
Mason Daugherty
96cbd90cba fix: formatting issues in docstrings (#32265)
Ensures proper reStructuredText formatting by adding the required blank
line before closing docstring quotes, which resolves the "Block quote
ends without a blank line; unexpected unindent" warning.
2025-07-27 23:37:47 -04:00
Mason Daugherty
a8a2cff129 Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-27 23:34:59 -04:00
Mason Daugherty
f4ff4514ef fix: update workspace folder path in devcontainer configuration 2025-07-27 23:34:57 -04:00
Mason Daugherty
d1679cec91 chore: add .editorconfig for consistent coding styles across files (#32261)
Following existing codebase conventions
2025-07-27 23:25:30 -04:00
Mason Daugherty
5295f2add0 fix: update dev container name to match service name 2025-07-27 22:30:16 -04:00
Mason Daugherty
5f5b87e9a3 fix: update service name in devcontainer configuration 2025-07-27 22:28:47 -04:00
Mason Daugherty
e0ef98dac0 feat: add markdownlint configuration file (#32264) 2025-07-27 22:24:58 -04:00
Mason Daugherty
62212c7ee2 fix: update links in SECURITY.md to use markdown format 2025-07-27 21:54:25 -04:00
Mason Daugherty
9d38f170ce refactor: enhance workflow names and descriptions for clarity (#32262) 2025-07-27 21:31:59 -04:00
Mason Daugherty
c6cb1fae61 fix: devcontainer (#32260) 2025-07-27 20:24:16 -04:00
Kanav Bansal
e42b1d23dc docs(docs): update RAG tutorials link to point to correct path (#32256)
- **Description:** This PR updates the internal documentation link for
the RAG tutorials to reflect the updated path. Previously, the link
pointed to the root `/docs/tutorials/`, which was generic. It now
correctly routes to the RAG-specific tutorial page.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-27 20:00:41 -04:00
Mason Daugherty
53d0bfe9cd refactor: markdownlint (#32259) 2025-07-27 20:00:16 -04:00
Mason Daugherty
eafab52483 refactor: markdownlint SECURITY.md (#32258) 2025-07-27 19:55:25 -04:00
Christophe Bornet
efdfa00d10 chore(langchain): add ruff rules ARG (#32110)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-26 18:32:34 -04:00
Christophe Bornet
a2ad5aca41 chore(langchain): add ruff rules TC (#31921)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
2025-07-26 18:27:26 -04:00
Mason Daugherty
5ecbb5f277 fix(docs): temporary workaround until the underlying dependency issues in the AI21 package ecosystem are resolved. (#32248) 2025-07-25 15:12:44 -04:00
Mason Daugherty
c1028171af fix(docs): update protobuf version constraint to <5.0 in vercel_overrides.txt (#32247) 2025-07-25 15:08:44 -04:00
ccurme
f6236d9f12 fix(infra): add pypdf to vercel overrides (#32242)
>   × No solution found when resolving dependencies:
  ╰─▶ Because only langchain-neo4j==0.5.0 is available and
langchain-neo4j==0.5.0 depends on neo4j-graphrag>=1.9.0, we can conclude
that all versions of langchain-neo4j depend on neo4j-graphrag>=1.9.0.
      And because only neo4j-graphrag<=1.9.0 is available and
neo4j-graphrag==1.9.0 depends on pypdf>=5.1.0,<6.0.0, we can conclude
that all versions of langchain-neo4j depend on pypdf>=5.1.0,<6.0.0.
And because langchain-upstage==0.6.0 depends on pypdf>=4.2.0,<5.0.0
and only langchain-upstage==0.6.0 is available, we can conclude that
all versions of langchain-neo4j and all versions of langchain-upstage
      are incompatible.
And because you require langchain-neo4j and langchain-upstage, we can
      conclude that your requirements are unsatisfiable.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-25 15:05:21 -04:00
Mason Daugherty
df20f111a8 fix(docs): add validation for repository format and name in API docs build workflow (#32246)
for build
2025-07-25 15:05:06 -04:00
Eugene Yurtsev
db22311094 ci(infra): no need for . in the regexp (#32245)
No need for allowing `.`
2025-07-25 15:02:02 -04:00
Mason Daugherty
f624ad489a feat(docs): improve devx, fix Makefile targets (#32237)
**TL;DR: many of the provided `Makefile` targets were broken, and any
time I wanted to preview changes locally I either had to refer to a
command Chester gave me or wait on a Vercel preview deployment. With
this PR, everything should behave as normal.**

Significant updates to the `Makefile` and documentation files, focusing
on improving usability, adding clear messaging, and fixing/enhancing
documentation workflows.

### Updates to `Makefile`:

#### Enhanced build and cleaning processes:
- Added informative messages (e.g., "📚 Building LangChain
documentation...") to makefile targets like `docs_build`, `docs_clean`,
and `api_docs_build` for better user feedback during execution.
- Introduced a `clean-cache` target to the `docs` `Makefile` to clear
cached dependencies and ensure clean builds.

#### Improved dependency handling:
- Modified `install-py-deps` to create a `.venv/deps_installed` marker,
preventing redundant/duplicate dependency installations and improving
efficiency.

#### Streamlined file generation and infrastructure setup:
- Added caching for the LangServe README download and parallelized
feature table generation
- Added user-friendly completion messages for targets like `copy-infra`
and `render`.

#### Documentation server updates:
- Enhanced the `start` target with messages indicating server start and
URL for local documentation viewing.

---

### Documentation Improvements:

#### Content clarity and consistency:
- Standardized section titles for consistency across documentation
files.
- Refined phrasing and formatting in sections like "Dependency
management" and "Formatting and linting" for better readability.

#### Enhanced workflows:
- Updated instructions for building and viewing documentation locally,
including tips for specifying server ports and handling API reference
previews.
- Expanded guidance on cleaning documentation artifacts and using
linting tools effectively.

#### API reference documentation:
- Improved instructions for generating and formatting in-code
documentation, highlighting best practices for docstring writing.

---

### Minor Changes:
- Added support for a new package name (`langchain_v1`) in the API
documentation generation script.
- Fixed minor capitalization and formatting issues in documentation
files.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-25 14:49:03 -04:00
Eugene Yurtsev
549ecd3e78 chore(infra): harden api docs build workflow (#32243)
Harden permissions for api docs build workflow
2025-07-25 14:40:20 -04:00
dishaprakash
a0671676ae feat(docs): add PGVectorStore (#30950)
- **Adding documentation for PGVectorStore**: documentation for the new
PGVectorStore as part of langchain-postgres.

- **Add docs**: The notebook for PGVectorStore is now added to the
directory `docs/docs/integrations`.
As a part of this change, we've also updated the VectorStore features
table and VectorStoreTabs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 13:22:58 -04:00
Christophe Bornet
12ae42c5e9 chore(langchain): add ruff rules D1 (except D100 and D104) (#32123) 2025-07-25 11:59:48 -04:00
Christophe Bornet
e1238b8085 chore(langchain): add ruff rules SLF (#32112)
See https://docs.astral.sh/ruff/rules/private-member-access/
2025-07-25 11:56:40 -04:00
Chaitanya varma
8f5ec20ccf chore(langchain): strip_ansi function to remove ANSI escape sequences (#32200)
**Description:** 
Fixes a bug in the file callback test where ANSI escape codes were
causing test failures. The improved test now properly handles ANSI
escape sequences by:
- Using exact string comparison instead of substring checking
- Applying the `strip_ansi` function consistently to all file contents
- Adding descriptive assertion messages
- Maintaining test coverage and backward compatibility

The changes ensure tests pass reliably even when terminal control
sequences are present in the output

**Issue:** Fixes #32150

**Dependencies:** None required - uses existing dependencies only.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 15:53:19 +00:00
niceg
0d6f915442 fix: LLM mimicking Unicode responses due to forced Unicode conversion of non-ASCII characters. (#32222)
- **Description:** This PR fixes an issue where the LLM would mimic
Unicode responses due to forced Unicode conversion of non-ASCII
characters in tool calls. The fix involves disabling the `ensure_ascii`
flag in `json.dumps()` when converting tool calls to OpenAI format.
- **Issue:** Fixes ↓↓↓
input:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "你好啊集团"}'}}]}
```
output:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "\\u4f60\\u597d\\u554a\\u96c6\\u56e2"}'}}]}
```
then:
the LLM will mimic outputting Unicode escapes. Unicode's vast number of
symbols can lengthen LLM responses, leading to slower performance.
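The mechanics of the fix in plain `json`:

```python
import json

args = {"customer_name": "你好啊集团"}

print(json.dumps(args))
# {"customer_name": "\u4f60\u597d\u554a\u96c6\u56e2"}

print(json.dumps(args, ensure_ascii=False))
# {"customer_name": "你好啊集团"}
```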

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 17:01:31 -04:00
Mason Daugherty
d53ebf367e fix(docs): capitalization, codeblock formatting, and hyperlinks, note blocks (#32235)
widespread cleanup attempt
2025-07-24 16:55:04 -04:00
Copilot
54542b9385 docs(openai): add comprehensive documentation and examples for extra_body + others (#32149)
This PR addresses the common issue where users struggle to pass custom
parameters to OpenAI-compatible APIs like LM Studio, vLLM, and others.
The problem occurs when users try to use `model_kwargs` for custom
parameters, which causes API errors.

## Problem

Users attempting to pass custom parameters (like LM Studio's `ttl`
parameter) were getting errors:

```python
# This approach fails
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    model_kwargs={"ttl": 5}  # Causes TypeError: unexpected keyword argument 'ttl'
)
```

## Solution

The `extra_body` parameter is the correct way to pass custom parameters
to OpenAI-compatible APIs:

```python
# This approach works correctly
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 5}  # Custom parameters go in extra_body
)
```

## Changes Made

1. **Enhanced Documentation**: Updated the `extra_body` parameter
docstring with comprehensive examples for LM Studio, vLLM, and other
providers

2. **Added Documentation Section**: Created a new "OpenAI-compatible
APIs" section in the main class docstring with practical examples

3. **Unit Tests**: Added tests to verify `extra_body` functionality
works correctly:
- `test_extra_body_parameter()`: Verifies custom parameters are included
in request payload
- `test_extra_body_with_model_kwargs()`: Ensures `extra_body` and
`model_kwargs` work together

4. **Clear Guidance**: Documented when to use `extra_body` vs
`model_kwargs`

## Examples Added

**LM Studio with TTL (auto-eviction):**
```python
ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 300}  # Auto-evict after 5 minutes
)
```

**vLLM with custom sampling:**
```python
ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={
        "use_beam_search": True,
        "best_of": 4
    }
)
```

## Why This Works

- `model_kwargs` parameters are passed directly to the OpenAI client's
`create()` method, causing errors for non-standard parameters
- `extra_body` parameters are included in the HTTP request body, which
is exactly what OpenAI-compatible APIs expect for custom parameters

Fixes #32115.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 16:43:16 -04:00
Mason Daugherty
7d2a13f519 fix: various typos (#32231) 2025-07-24 12:35:08 -04:00
Christophe Bornet
0b34be4ce5 refactor(langchain): refactor unit test stub classes (#32209)
See
https://github.com/langchain-ai/langchain/pull/32098#discussion_r2225961563
2025-07-24 11:05:56 -04:00
Mason Daugherty
6f3169eb49 chore: update copilot development guidelines for clarity and structure (#32230) 2025-07-24 15:05:09 +00:00
Eugene Yurtsev
7995c719c5 chore(langchain_v1): clean anything uncertain (#32228)
Further clean up of namespace:

- Removed prompts (we'll re-add in a separate commit)
- Remove LocalFileStore until we can review whether all the
implementation details are necessary
- Remove message processing logic from memory (we'll figure out where to
expose it)
- Remove `Tool` primitive (should be sufficient to use `BaseTool` for
typing purposes)
- Remove utilities to create kv stores. Unclear if they've had much
usage outside MultiparentRetriever
2025-07-24 14:41:05 +00:00
Mason Daugherty
bdf1cd383c fix(langchain): update deps 2025-07-24 10:37:08 -04:00
Mason Daugherty
77c981999e fix(text-splitters): update langchain-core version to 0.3.72 2025-07-24 10:35:07 -04:00
Mason Daugherty
7f015b6f14 fix(text-splitters): update lock for release 2025-07-24 10:32:04 -04:00
Mason Daugherty
71ad451e1f Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-24 10:24:17 -04:00
Mason Daugherty
2c42893703 fix(langchain): update langchain-core version to 0.3.72 2025-07-24 10:24:04 -04:00
Mason Daugherty
0e139fb9a6 release(langchain): 0.3.27 (#32227) 2025-07-24 10:20:20 -04:00
tanwirahmad
622bb05751 fix(langchain): class HTMLSemanticPreservingSplitter ignores the text inside the div tag (#32213)
**Description:** We collect the text from the "html", "body", "div", and
"main" nodes, if they have any.

**Issue:** Fixes #32206.
2025-07-24 10:09:03 -04:00
Eugene Yurtsev
56dde3ade3 feat(langchain): v1 scaffolding (#32166)
This PR adds scaffolding for langchain 1.0 entry package.

Most contents have been removed. 

Currently remaining entrypoints for:

* chat models
* embedding models
* memory -> trimming messages, filtering messages and counting tokens
[we may remove this]
* prompts -> we may remove some prompts
* storage: primarily to support cache backed embeddings, may remove the
kv store
* tools -> report tool primitives

Things to be added:

* Selected agent implementations
* Selected workflows
* Common primitives: messages, Document
* Primitives for type hinting: BaseChatModel, BaseEmbeddings
* Selected retrievers
* Selected text splitters

Things to be removed:

* Globals needs to be removed (needs an update in langchain core)


Todos: 

* TBD indexing api (requires sqlalchemy which we don't want as a
dependency)
* Be explicit about public/private interfaces (e.g., likely rename
chat_models.base.py to something more internal)
* Remove dockerfiles
* Update module doc-strings and README.md
2025-07-24 09:47:48 -04:00
Mason Daugherty
bd3d6496f3 release(core): 0.3.72 (#32214)
fixes #32170
2025-07-23 20:33:48 -04:00
jmaillefaud
fb5da8384e fix(core): Dereference Refs for pydantic schema fails in tool schema generation (#32203)
The `_dereference_refs_helper` in `langchain_core.utils.json_schema`
incorrectly handled objects with a reference and other fields.

**Issue**: #32170

# Description

We change the check so that it accepts other keys in the object.
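As an illustration, this is the kind of schema that previously failed
(a `$ref` with sibling keys), shown against the public `dereference_refs`
helper; the exact resolved output may differ:

```python
from langchain_core.utils.json_schema import dereference_refs

schema = {
    "$defs": {"Name": {"type": "string"}},
    "properties": {
        # A $ref alongside other keys used to break the helper
        "name": {"$ref": "#/$defs/Name", "description": "Customer name"},
    },
}
resolved = dereference_refs(schema)
```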
2025-07-23 20:28:27 -04:00
Maxime Grenu
a7d0e42f3f docs: fix typos in documentation (#32201)
## Summary
- Fixed redundant word "done" in SECURITY.md line 69  
- Fixed grammar errors in Fireworks README.md line 77: "how it fares
compares" → "how it compares" and "in terms just" → "in terms of"

## Test plan
- [x] Verified changes improve readability and correct grammar
- [x] No functional changes, documentation only

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-23 10:43:25 -04:00
Christophe Bornet
3496e1739e feat(langchain): add ruff rules PL (#32079)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-07-22 23:55:32 -04:00
Jacob Lee
0f39155f62 docs: Specify environment variables for BedrockConverse (#32194) 2025-07-22 17:37:47 -04:00
ccurme
6aeda24a07 docs(chroma): update feature table (#32193)
Supports multi-tenancy.
2025-07-22 20:55:07 +00:00
Mason Daugherty
3ed804a5f3 fix(perplexity): undo xfails (#32192) 2025-07-22 16:29:37 -04:00
Mason Daugherty
ca137bfe62 . 2025-07-22 16:25:02 -04:00
Mason Daugherty
fa487fb62d fix(perplexity): temp xfail int tests (#32191)
It appears the API has changed since the 2025-04-15 release, leading to
failed integration tests.
2025-07-22 16:20:51 -04:00
ccurme
053fb16a05 revert: drop anthropic from core test matrix (#32190)
Reverts langchain-ai/langchain#32185
2025-07-22 20:13:02 +00:00
ccurme
3672bbc71e fix(anthropic): update integration test models (#32189)
Multiple models were
[retired](https://docs.anthropic.com/en/docs/about-claude/model-deprecations#model-status)
yesterday.

Tests remain broken until we figure out what to do with the legacy
Anthropic LLM integration, which currently uses their (legacy) text
completions API, for which there appear to be no remaining supported
models.
2025-07-22 19:51:39 +00:00
Mason Daugherty
a02ad3d192 docs: formatting cleanup (#32188)
* formatting cleaning
* make `init_chat_model` more prominent in list of guides
2025-07-22 15:46:15 -04:00
ccurme
0c4054a7fc release(core): 0.3.71 (#32186) 2025-07-22 15:44:36 -04:00
ccurme
75517c3ea9 chore(infra): drop anthropic from core test matrix (#32185) 2025-07-22 19:38:58 +00:00
ccurme
ebf2e11bcb fix(core): exclude api_key from tracing metadata (#32184)
(standard param)
2025-07-22 15:32:12 -04:00
ccurme
e41e6ec6aa release(chroma): 0.2.5 (#32183) 2025-07-22 15:24:03 -04:00
itaismith
09769373b3 feat(chroma): Add Chroma Cloud support (#32125)
* Adding support for more Chroma client options (`HttpClient` and
`CloudClient`). This includes adding arguments necessary for
instantiating these clients.
* Adding support for Chroma's new persisted collection configuration (we
moved index configuration into this new construct).
* Delegate `Settings` configuration to Chroma's client constructors.
2025-07-22 15:14:15 -04:00
ccurme
3fc27e7a95 docs: update feature table for Chroma (#32182) 2025-07-22 18:21:17 +00:00
ccurme
8acfd677bc fix(core): add type key when tracing in some cases (#31825) 2025-07-22 18:08:16 +00:00
Mason Daugherty
af3789b9ed fix(deepseek): release openai version (#32181)
used sdk version instead of langchain by accident
2025-07-22 13:29:52 -04:00
Mason Daugherty
a6896794ca release(ollama): 0.3.6 (#32180) 2025-07-22 13:24:17 -04:00
Copilot
d40fd5a3ce feat(ollama): warn on empty load responses (#32161)
## Problem

When using `ChatOllama` with `create_react_agent`, agents would
sometimes terminate prematurely with empty responses when Ollama
returned `done_reason: 'load'` responses with no content. This caused
agents to return empty `AIMessage` objects instead of actual generated
text.

```python
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage

llm = ChatOllama(model='qwen2.5:7b', temperature=0)
agent = create_react_agent(model=llm, tools=[])

result = agent.invoke(HumanMessage('Hello'), {"configurable": {"thread_id": "1"}})
# Before fix: AIMessage(content='', response_metadata={'done_reason': 'load'})
# Expected: AIMessage with actual generated content
```

## Root Cause

The `_iterate_over_stream` and `_aiterate_over_stream` methods treated
any response with `done: True` as final, regardless of `done_reason`.
When Ollama returns `done_reason: 'load'` with empty content, it
indicates the model was loaded but no actual generation occurred - this
should not be considered a complete response.

## Solution

Modified the streaming logic to skip responses when:
- `done: True`
- `done_reason: 'load'` 
- Content is empty or contains only whitespace

This ensures agents only receive actual generated content while
preserving backward compatibility for load responses that do contain
content.
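In rough terms, the new guard looks like this (function name and chunk
shape are illustrative, not the actual private method):

```python
def _should_skip_chunk(part: dict) -> bool:
    # Sketch of the skip condition described above
    content = part.get("message", {}).get("content", "")
    return (
        part.get("done") is True
        and part.get("done_reason") == "load"
        and not content.strip()
    )
```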

## Changes

- **`_iterate_over_stream`**: Skip empty load responses instead of
yielding them
- **`_aiterate_over_stream`**: Apply same fix to async streaming
- **Tests**: Added comprehensive test cases covering all edge cases

## Testing

All scenarios now work correctly:
- ✅ Empty load responses are skipped (fixes original issue)
- ✅ Load responses with actual content are preserved (backward
compatibility)
- ✅ Normal stop responses work unchanged
- ✅ Streaming behavior preserved
- ✅ `create_react_agent` integration fixed

Fixes #31482.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-22 13:21:11 -04:00
Mason Daugherty
116b758498 fix: bump deps for release (#32179)
forgot to bump the `pyproject.toml` files
2025-07-22 13:12:14 -04:00
Mason Daugherty
10996a2821 release(perplexity): 0.1.2 (#32176) 2025-07-22 13:02:19 -04:00
Mason Daugherty
2aed07efb6 release(deepseek): 0.1.4 (#32178) 2025-07-22 13:01:54 -04:00
Mason Daugherty
64dac1faf7 release(huggingface): 0.3.1 (#32177) 2025-07-22 13:01:34 -04:00
Mason Daugherty
58768d8aef release(xai): 0.2.5 (#32174) 2025-07-22 13:01:26 -04:00
Mason Daugherty
d65da13299 docs(ollama): add validate_model_on_init note, bump lock (#32172) 2025-07-22 10:58:45 -04:00
Kanav Bansal
c14bd1fcfe fix(docs): update RAG tutorials link to point to correct path (#32169)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following text-embedding models.

1. DatabricksEmbeddings
2. IBM watsonx.ai
3. OpenAIEmbeddings
4. NomicEmbeddings
5. CohereEmbeddings
6. MistralAIEmbeddings
7. FireworksEmbeddings
8. TogetherEmbeddings
9. LindormAIEmbeddings
10. ModelScopeEmbeddings
11. ClovaXEmbeddings
12. NetmindEmbeddings
13. SambaNovaCloudEmbeddings
14. SambaStudioEmbeddings
15. ZhipuAIEmbeddings

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-22 10:24:50 -04:00
Byeongjin Kang
a1ccabf85d docs: add documentation about how to use extended thinking with ChatBedrockConverse (#32168) 2025-07-22 08:44:08 -04:00
Copilot
2104cf0d9a fix: replace deprecated Pydantic .schema() calls with v1/v2 compatible pattern (#32162)
This PR addresses deprecation warnings users encounter when using
LangChain tools with Pydantic v2:

```
PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. 
Deprecated in Pydantic V2.0 to be removed in V3.0.
```

## Root Cause

Several LangChain components were still using the deprecated `.schema()`
method directly instead of the Pydantic v1/v2 compatible approach. While
users calling `.schema()` on returned models will still see warnings
(which is correct), LangChain's internal code should not generate these
warnings.

## Changes Made

Updated 3 files to use the standard compatibility pattern:

```python
# Before (deprecated)
schema = model.schema()

# After (compatible with both v1 and v2) 
if hasattr(model, "model_json_schema"):
    schema = model.model_json_schema()  # Pydantic v2
else:
    schema = model.schema()  # Pydantic v1
```

### Files Updated:
- **`evaluation/parsing/json_schema.py`**: Fixed `_parse_json()` method
to handle Pydantic models correctly
- **`output_parsers/yaml.py`**: Fixed `get_format_instructions()` to use
compatible schema access
- **`chains/openai_functions/citation_fuzzy_match.py`**: Fixed direct
`.schema()` call on QuestionAnswer model

## Verification

✅ **Zero breaking changes** - all existing functionality preserved
✅ **No deprecation warnings** from LangChain internal code
✅ **Backward compatible** with Pydantic v1
✅ **Forward compatible** with Pydantic v2
✅ **Edge cases handled** (strings, plain objects, etc.)

## User Impact

LangChain users will no longer see deprecation warnings from internal
LangChain code. Users who directly call `.schema()` on schemas returned
by LangChain should adopt the same compatibility pattern:

```python
# User code should use this pattern
input_schema = tool.get_input_schema()
if hasattr(input_schema, "model_json_schema"):
    schema_result = input_schema.model_json_schema()
else:
    schema_result = input_schema.schema()
```

Fixes #31458.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 21:19:53 -04:00
Copilot
18c64aed6d feat(core): add sanitize_for_postgres utility to fix PostgreSQL NUL byte DataError (#32157)
This PR fixes the PostgreSQL NUL byte issue that causes
`psycopg.DataError` when inserting documents containing `\x00` bytes
into PostgreSQL-based vector stores.

## Problem

PostgreSQL text fields cannot contain NUL (0x00) bytes. When documents
with such characters are processed by PGVector or langchain-postgres
implementations, they fail with:

```
(psycopg.DataError) PostgreSQL text fields cannot contain NUL (0x00) bytes
```

This commonly occurs when processing PDFs, documents from various
loaders, or text extracted by libraries like unstructured that may
contain embedded NUL bytes.

## Solution

Added `sanitize_for_postgres()` utility function to
`langchain_core.utils.strings` that removes or replaces NUL bytes from
text content.

### Key Features

- **Simple API**: `sanitize_for_postgres(text, replacement="")`
- **Configurable**: Replace NUL bytes with empty string (default) or
space for readability
- **Comprehensive**: Handles all problematic examples from the original
issue
- **Well-tested**: Complete unit tests with real-world examples
- **Backward compatible**: No breaking changes, purely additive

### Usage Example

```python
from langchain_core.utils import sanitize_for_postgres
from langchain_core.documents import Document

# Before: This would fail with DataError
problematic_content = "Getting\x00Started with embeddings"

# After: Clean the content before database insertion
clean_content = sanitize_for_postgres(problematic_content)
# Result: "GettingStarted with embeddings"

# Or preserve readability with spaces
readable_content = sanitize_for_postgres(problematic_content, " ")
# Result: "Getting Started with embeddings"

# Use in Document processing
doc = Document(page_content=clean_content, metadata={...})
```
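The function body isn't reproduced in this PR description; given the
stated behavior, a minimal sketch of what it presumably amounts to:

```python
def sanitize_for_postgres(text: str, replacement: str = "") -> str:
    """Remove or replace NUL (0x00) bytes so PostgreSQL accepts the text."""
    return text.replace("\x00", replacement)
```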

### Integration Pattern

PostgreSQL vector store implementations should sanitize content before
insertion:

```python
def add_documents(self, documents: List[Document]) -> List[str]:
    # Sanitize documents before insertion
    sanitized_docs = []
    for doc in documents:
        sanitized_content = sanitize_for_postgres(doc.page_content, " ")
        sanitized_doc = Document(
            page_content=sanitized_content,
            metadata=doc.metadata,
            id=doc.id
        )
        sanitized_docs.append(sanitized_doc)
    
    return self._insert_documents_to_db(sanitized_docs)
```

## Changes Made

- Added `sanitize_for_postgres()` function in
`langchain_core/utils/strings.py`
- Updated `langchain_core/utils/__init__.py` to export the new function
- Added comprehensive unit tests in
`tests/unit_tests/utils/test_strings.py`
- Validated against all examples from the original issue report

## Testing

All tests pass, including:
- Basic NUL byte removal and replacement
- Multiple consecutive NUL bytes
- Empty string handling
- Real examples from the GitHub issue
- Backward compatibility with existing string utilities

This utility enables PostgreSQL integrations in both langchain-community
and langchain-postgres packages to handle documents with NUL bytes
reliably.

Fixes #26033.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 20:33:20 -04:00
Copilot
fc802d8f9f docs: fix vectorstore feature table - correct "IDs in add Documents" values (#32153)
The vectorstore feature table in the documentation was showing incorrect
information for the "IDs in add Documents" capability. Most vectorstores
were marked as ❌ (not supported) when they actually support extracting
IDs from documents.

## Problem

The issue was an inconsistency between two sources of truth:
- **JavaScript feature table** (`docs/src/theme/FeatureTables.js`):
Hardcoded `idsInAddDocuments: false` for most vectorstores
- **Python script** (`docs/scripts/vectorstore_feat_table.py`):
Correctly showed `"IDs in add Documents": True` for most vectorstores

## Root Cause

All vectorstores inherit the base `VectorStore.add_documents()` method
which automatically extracts document IDs:

```python
# From libs/core/langchain_core/vectorstores/base.py lines 277-284
if "ids" not in kwargs:
    ids = [doc.id for doc in documents]
    
    # If there's at least one valid ID, we'll assume that IDs should be used.
    if any(ids):
        kwargs["ids"] = ids
```

Since no vectorstores override `add_documents()`, they all inherit this
behavior and support IDs in documents.

## Solution

Updated `idsInAddDocuments` from `false` to `true` for 13 vectorstores:
- AstraDBVectorStore, Chroma, Clickhouse, DatabricksVectorSearch
- ElasticsearchStore, FAISS, InMemoryVectorStore,
MongoDBAtlasVectorSearch
- PGVector, PineconeVectorStore, Redis, Weaviate, SQLServer

The other 4 vectorstores (CouchbaseSearchVectorStore, Milvus, openGauss,
QdrantVectorStore) were already correctly marked as `true`.

## Impact

Users visiting
https://python.langchain.com/docs/integrations/vectorstores/ will now
see accurate information. The "IDs in add Documents" column will
correctly show ✅ for all vectorstores instead of incorrectly showing ❌
for most of them.

This aligns with the API documentation which states: "if kwargs contains
ids and documents contain ids, the ids in the kwargs will receive
precedence" - clearly indicating that document IDs are supported.

Fixes #30622.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
2025-07-21 20:29:34 -04:00
Mason Daugherty
b4d87c709c chore: update copilot-instructions.md (#32159) 2025-07-21 20:17:41 -04:00
ccurme
383bc8f2ef revert: drop anthropic from core test matrix (#32152)
Reverts langchain-ai/langchain#32146
2025-07-21 20:15:27 +00:00
Christophe Bornet
64261449b8 feat(langchain): add ruff rules TRY (#32047)
See https://docs.astral.sh/ruff/rules/#tryceratops-try

* TRY004 (replace with `TypeError`) is escaped with `noqa` in main code
to avoid breaking backward compatibility. The rule is still useful for
new code.
* TRY301 is ignored for now. It is quite hard to fix, and it's unclear
whether activating it would be worthwhile.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 13:41:20 -04:00
Christophe Bornet
8b8d90bea5 feat(langchain): add ruff rules PT (#32010)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-07-21 13:15:05 -04:00
Mohammad Mohtashim
095f4a7c28 fix(core): fix parse_result in case of self.first_tool_only with multiple keys matching for JsonOutputKeyToolsParser (#32106)
* **Description:** Updated `parse_result` logic to handle cases where
`self.first_tool_only` is `True` and multiple matching keys share the
same function name. Instead of returning the first match prematurely,
the method now prioritizes filtering results by the specified key to
ensure correct selection.
* **Issue:** #32100

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 12:50:22 -04:00
Mason Daugherty
ddaba21e83 chore: copilot instructions (#32075)
https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot
2025-07-21 12:50:07 -04:00
diego-coder
8e4396bb32 fix(ollama): robustly parse single-quoted JSON in tool calls (#32109)
**Description:**
This PR makes argument parsing for Ollama tool calls more robust. Some
LLMs—including Ollama—may return arguments as Python-style dictionaries
with single quotes (e.g., `{'a': 1}`), which are not valid JSON and
previously caused parsing to fail.
The updated `_parse_json_string` method in
`langchain_ollama.chat_models` now attempts standard JSON parsing and,
if that fails, falls back to `ast.literal_eval` for safe evaluation of
Python-style dictionaries. This improves interoperability with LLMs and
fixes a common usability issue for tool-based agents.
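A sketch of the fallback strategy described above (simplified; the real
method handles more edge cases):

```python
import ast
import json

def parse_tool_arguments(raw: str) -> dict:
    """Parse tool-call arguments that may be JSON or a Python-style dict."""
    try:
        return json.loads(raw)  # standard JSON, e.g. {"a": 1}
    except json.JSONDecodeError:
        # Fall back to safe evaluation of Python literals, e.g. {'a': 1}
        return ast.literal_eval(raw)
```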

**Issue:**
Closes #30910

**Dependencies:**
None

**Tests:**
- Added new unit tests for double-quoted JSON, single-quoted dicts,
mixed quoting, and malformed/failure cases.
- All tests pass locally, including new coverage for single-quoted
inputs.

**Notes:**
- No breaking changes.
- No new dependencies introduced.
- Code is formatted and linted (`ruff format`, `ruff check`).
- If maintainers have suggestions for further improvements, I’m happy to
revise!

Thank you for maintaining LangChain! Looking forward to your feedback.
2025-07-21 12:11:22 -04:00
ccurme
6794422b85 chore(infra): drop anthropic from core test matrix (#32146)
Stricter JSON schema validation broke a test. Test was fixed in
https://github.com/langchain-ai/langchain/pull/32145. Core release runs
old tests (i.e., last released version of langchain-anthropic) against
new core. So we bypass anthropic for release. Will revert after.
2025-07-21 15:06:52 +00:00
ccurme
2ef9465893 fix(anthropic): fix test (#32145) 2025-07-21 14:49:40 +00:00
ccurme
0355da3159 release(core): 0.3.70 (#32144) 2025-07-21 10:49:32 -04:00
Ziafat Majeed
6c18073fe6 docs(core): fix grammar from 'as an bonus' to 'as a bonus (#32143) 2025-07-21 10:48:34 -04:00
Kanav Bansal
38581f31dd docs(docs): update RAG tutorials link to point to correct path in Google Vertex AI Embeddings (#32141) 2025-07-21 09:17:54 -04:00
Kanav Bansal
8246b5b660 docs(docs): update RAG tutorials link to point to correct path in AzureOpenAI (#32131) 2025-07-21 09:17:35 -04:00
astraszab
668c084520 docs(core): move incorrect arg limitation in rate limiter's docstring (#32118) 2025-07-20 14:28:35 -04:00
ccurme
cc076ed891 fix(huggingface): update model used in standard tests (#32116) 2025-07-20 01:50:31 +00:00
Yoshi
6d71bb83de fix(core): fix docstrings and add sleep to FakeListChatModel._call (#32108) 2025-07-19 17:30:15 -04:00
Kanav Bansal
f7d1b1fbb1 docs(docs): update RAG tutorials link to point to correct path (#32113) 2025-07-19 17:27:31 -04:00
Isaac Francisco
98bfd57a76 fix(core): better error message for empty var names (#32073)
Previously, we hit an index out of range error with empty variable names
(accessing tag[0]); now we throw a slightly nicer error.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 17:00:02 -04:00
Gurram Siddarth Reddy
427d2d6397 fix(core): implement sleep delay in FakeMessagesListChatModel _generate (#32014)
implement sleep delay in FakeMessagesListChatModel._generate so the
sleep parameter is respected, matching the documented behavior. This
adds artificial latency between responses for testing purposes.
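A minimal usage sketch (constructor arguments assumed from the linked
API docs):

```python
from langchain_core.language_models.fake_chat_models import (
    FakeMessagesListChatModel,
)
from langchain_core.messages import AIMessage

model = FakeMessagesListChatModel(
    responses=[AIMessage(content="first"), AIMessage(content="second")],
    sleep=0.2,  # artificial latency (seconds), now respected per response
)
print(model.invoke("hi"))  # returns the first queued response after ~0.2s
```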

Issue: closes
[#31974](https://github.com/langchain-ai/langchain/issues/31974)
following
[docs](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.html#langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.sleep)

Dependencies: none

Twitter handle: [@siddarthreddyg2](https://x.com/siddarthreddyg2)

---------

Signed-off-by: Siddarthreddygsr <siddarthreddygsr@gmail.com>
2025-07-18 15:54:28 -04:00
Kanav Bansal
50a12a7ee5 fix(docs): fix broken link in VertexAILLM and NVIDIA LLM integrations (#32096)
## **Description:**   
This PR updates the `link` values for the following integration metadata
entries:

1. **VertexAILLM**  
   - Changed from: `google_vertexai`  
   - To: `google_vertex_ai_palm`  
2. **NVIDIA**  
   - Changed from: `NVIDIA`  
   - To: `nvidia_ai_endpoints`  

These changes ensure that the documentation links correspond to the
correct integration paths, improving documentation navigation and
consistency with the integration structure.

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 14:00:49 +00:00
Kanav Bansal
72a0f425ec docs(docs): correct package name from langchain-google_vertexai to langchain-google-vertexai for VertexAILLM (#32095)
- **Description:** This PR updates the `package` field for the VertexAI
integration in the documentation metadata. The original value was
`langchain-google_vertexai`, which has been corrected to
`langchain-google-vertexai` to reflect the actual package name used in
PyPI and LangChain integrations.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-18 09:45:28 -04:00
Sarah Guthals
22535eb4b3 docs: add tensorlake provider (#32046) 2025-07-17 19:28:14 -04:00
open-swe[bot]
5da986c3f6 fix(core): JSON Schema reference resolution for list indices (#32088)
Fixes #32042

## Summary
Fixes a critical bug in JSON Schema reference resolution that prevented
correctly dereferencing numeric components in JSON pointer paths,
specifically for list indices in `anyOf`, `oneOf`, and `allOf` arrays.

## Changes
- Fixed `_retrieve_ref` function in
`libs/core/langchain_core/utils/json_schema.py` to properly handle
numeric components
- Added comprehensive test function `test_dereference_refs_list_index()`
in `libs/core/tests/unit_tests/utils/test_json_schema.py`
- Resolved line length formatting issues
- Improved type checking and index validation for list and dictionary
references

## Key Improvements
- Correctly handles list index references in JSON pointer paths
- Maintains backward compatibility with existing dictionary numeric key
functionality
- Adds robust error handling for out-of-bounds and invalid indices
- Passes all test cases covering various reference scenarios

## Test Coverage
- Verified fix for `#/properties/payload/anyOf/1/properties/startDate`
reference
- Tested edge cases including out-of-bounds and negative indices
- Ensured no regression in existing reference resolution functionality

Resolves the reported issue with JSON Schema reference dereferencing for
list indices.
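As an illustration of the now-supported reference shape (the schema is a
minimal invented example):

```python
from langchain_core.utils.json_schema import dereference_refs

schema = {
    "properties": {
        "payload": {
            "anyOf": [
                {"type": "null"},
                {"properties": {"startDate": {"type": "string"}}},
            ]
        },
        # The numeric component ("1") indexes into the anyOf list
        "start": {"$ref": "#/properties/payload/anyOf/1/properties/startDate"},
    },
}
resolved = dereference_refs(schema)
```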

---------

Co-authored-by: open-swe-dev[bot] <open-swe-dev@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-17 15:54:38 -04:00
Mason Daugherty
6d449df8bb chore: update PR lint (#32091)
remove regex
2025-07-17 15:33:48 -04:00
ccurme
3f4d27fe21 fix(infra): update some notebook cassettes (#32087) 2025-07-17 13:57:29 -04:00
Mason Daugherty
59407338dd docs: remove AI21 embeddings section (#32084)
The AI21 embeddings offering no longer exists.
2025-07-17 11:32:34 -04:00
Mason Daugherty
a1519af513 fix(docs): fix broken links (#32083) 2025-07-17 10:38:51 -04:00
Christophe Bornet
b61ce9178c refactor(langchain): remove model_rebuild (#32080)
Since #29963 BaseCache and Callbacks are imported in BaseLanguageModel
so there's no need to import them and rebuild the models.
Note: fix is available since `langchain-core==0.3.39` and the current
langchain dependency on core is `>=0.3.66` so the fix will always be
there.
2025-07-17 10:34:41 -04:00
Mason Daugherty
9165cde538 feat(docs): add Slack community link to footer (#32053) 2025-07-17 10:12:09 -04:00
Kanav Bansal
2c0e8dce0d docs(docs): fix broken link in Google Gemini text embedding integration (#32082)
- **Description:** Corrected the `link` path in the Google Gemini
integration entry from
`/docs/integrations/text_embedding/google-generative-ai` to
`/docs/integrations/text_embedding/google_generative_ai` to align with
actual directory structure and prevent broken documentation links.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-17 09:58:07 -04:00
Mason Daugherty
491f63ca82 release(ollama): release 0.3.5 (#32076) 2025-07-16 18:45:32 -04:00
Mason Daugherty
587c213760 bump lock 2025-07-16 18:44:56 -04:00
Copilot
98c3bbbaf0 fix(ollama): num_gpu parameter not working in async OllamaEmbeddings method (#32074)
The `num_gpu` parameter in `OllamaEmbeddings` was not being passed to
the Ollama client in the async embedding method, causing GPU
acceleration settings to be ignored when using async operations.

## Problem

The issue was in the `aembed_documents` method where the `options`
parameter (containing `num_gpu` and other configuration) was missing:

```python
# Sync method (working correctly)
return self._client.embed(
    self.model, texts, options=self._default_params, keep_alive=self.keep_alive
)["embeddings"]

# Async method (missing options parameter)
return (
    await self._async_client.embed(
        self.model, texts, keep_alive=self.keep_alive  # ❌ No options!
    )
)["embeddings"]
```

This meant that when users specified `num_gpu=4` (or any other GPU
configuration), it would work with sync calls but be ignored with async
calls.

## Solution

Added the missing `options=self._default_params` parameter to the async
embed call to match the sync version:

```python
# Fixed async method
return (
    await self._async_client.embed(
        self.model,
        texts,
        options=self._default_params,  # ✅ Now includes num_gpu!
        keep_alive=self.keep_alive,
    )
)["embeddings"]
```

## Validation

- ✅ Added unit test to verify options are correctly passed in both sync
and async methods
- ✅ All existing tests continue to pass
- ✅ Manual testing confirms `num_gpu` parameter now works correctly
- ✅ Code passes linting and formatting checks

The fix ensures that GPU configuration works consistently across both
synchronous and asynchronous embedding operations.
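A usage sketch of the now-consistent behavior (the model name is
illustrative; any pulled Ollama embedding model works):

```python
import asyncio

from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="nomic-embed-text", num_gpu=4)

async def main() -> None:
    # num_gpu is now honored by the async path, matching the sync path
    vectors = await embeddings.aembed_documents(["Hello, world!"])
    print(len(vectors[0]))

asyncio.run(main())
```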

Fixes #32059.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 18:42:52 -04:00
efj-amzn
d3072e2d2e feat(core): update _import_utils.py to not mask the thrown exception (#32071) 2025-07-16 17:11:56 -04:00
Lauren Hirata Singh
b49372595e docs: update LangSmith links (#32070) 2025-07-16 16:31:28 -04:00
Mason Daugherty
16664d3b68 fix(docs): make docs link absolute (#32068) 2025-07-16 20:15:28 +00:00
Inácio Nery
ea8f2a05ba feat(perplexity): expose search_results in chat model (#31468)
Description
The Perplexity chat model already returns a search_results field, but
LangChain dropped it when mapping Perplexity responses to
additional_kwargs.
This patch adds "search_results" to the allowed attribute lists in both
_stream and _generate, so downstream code can access it just like
images, citations, or related_questions.

Dependencies
None. The change is purely internal; no new imports or optional
dependencies required.


https://community.perplexity.ai/t/new-feature-search-results-field-with-richer-metadata/398

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 15:16:35 -04:00
zygimantas-jac
2df05f6f6a docs: add oxylabs to web browsing table (#31931)
Added Oxylabs to the web browsing table

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 14:00:14 -04:00
nikk0o046
b1c7de98f5 fix(deepseek): convert tool output arrays to strings (#31913)
## Description
When ChatDeepSeek invokes a tool that returns a list, it results in an
openai.UnprocessableEntityError due to a failure in deserializing the
JSON body.

The root of the problem is that ChatDeepSeek uses BaseChatOpenAI
internally, but the APIs are not identical: OpenAI v1/chat/completions
accepts arrays as tool results, but Deepseek API does not.

As a solution, I added a `_get_request_payload` method to ChatDeepSeek,
which inherits the behavior from BaseChatOpenAI but adds a step to
stringify tool message content when the content is an array. I also
added a unit test for this.

From the linked issue you can find the full reproducible example the
reporter of the issue provided. After the changes it works as expected.
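A sketch of the stringification step (helper name and payload shape are
illustrative):

```python
import json

def stringify_tool_content(payload: dict) -> dict:
    """Convert array tool-message content to a string for the DeepSeek API."""
    for message in payload.get("messages", []):
        content = message.get("content")
        if message.get("role") == "tool" and isinstance(content, list):
            message["content"] = json.dumps(content)
    return payload
```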

Source: [Deepseek
docs](https://api-docs.deepseek.com/api/create-chat-completion/)


![image](https://github.com/user-attachments/assets/a59ed3e7-6444-46d1-9dcf-97e40e4e8952)

Source: [OpenAI
docs](https://platform.openai.com/docs/api-reference/chat/create)


![image](https://github.com/user-attachments/assets/728f4fc6-e1a3-4897-b39f-6f1ade07d3dc)


## Issue
Fixes #31394

## Dependencies:
No new dependencies.

## Twitter handle:
Don't have one.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 12:19:44 -04:00
Mohammad Mohtashim
96bf8262e2 fix: fixing missing Docstring Bug if no Docstring is provided in BaseModel class (#31608)
- **Description:** Ensure that the tool description is an empty string
when creating a Structured Tool from a Pydantic class in case no
description is provided
- **Issue:** Fixes #31606

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:56:05 -04:00
Mason Daugherty
15103b0520 chore: add closing keyword to PR template (#32065) 2025-07-16 11:54:26 -04:00
Casi
686a6b754c fix: issue a warning if np.nan or np.inf are in _cosine_similarity argument Matrices (#31532)
- **Description**: issues a warning if inf and nan are passed as inputs
to langchain_core.vectorstores.utils._cosine_similarity
- **Issue**: Fixes #31496
- **Dependencies**: no external dependencies added, only warnings module
imported

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:50:09 -04:00
Michael Li
12d370a55a fix(cli): exception to prevent swallowing unexpected errors (#31983) 2025-07-16 10:23:43 -04:00
Michael Li
5a4c0c0816 fix(cli): handle exception in remove() (#31982) 2025-07-16 10:23:02 -04:00
Krishna Somani
e2dc36b126 chore: update SECURITY.md (#32060)
Made minor changes to tidy it up.
2025-07-16 10:20:59 -04:00
Kanav Bansal
c133eff6c8 docs(docs): fix product name in Google SQL for MySQL description (#32062)
- **Description:** Corrected the service name from "Cloud Cloud SQL" to
"Google Cloud SQL" to accurately reflect the official product branding.
2025-07-16 10:17:59 -04:00
Ahmad Elmalah
1892a67eef docs: adding context for Textract linearization-config param (#32064)
Before jumping into the technical implementation, I added context for
the linearization-config param and explained what linearization means in
this context.
I also linked an AWS blog for more advanced use cases, as this single
example doesn't cover them all.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 10:17:20 -04:00
Ahmad Elmalah
2ab2cab203 docs: update titles for Textract examples (#32063)
**On this PR I am doing two things:**

1. Adding titles to the 4 examples we have, to allow the reader to
capture the essence of each paragraph quickly
2. Replacing 'samples' with 'examples' for more clarity

**Why 'examples' could be a better terminology over 'samples' here?**
1. On the page, we were using both 'samples' and 'examples'
interchangeably, which led to confusion. Now 'examples' are the use
cases, while 'samples' are the sample data being used.
2. This is consistent with the rest of the docs; we typically use
'examples' elsewhere, for example
https://python.langchain.com/docs/integrations/callbacks/fiddler/
2025-07-16 10:17:02 -04:00
Mason Daugherty
ad44f0688b release(core): release 0.3.69 (#32056) 2025-07-15 17:13:46 -04:00
Mason Daugherty
8ad12f3fcf docs: add missing js providers to table (#32055)
Update to show that Cerebras, xAI, and Cloudflare now have JS/TS
equivalents
2025-07-15 17:09:35 -04:00
Jacob Lee
535ba43b0d feat(core): add an option to make deserialization more permissive (#32054)
## Description

Currently when deserializing objects that contain non-deserializable
values, we throw an error. However, there are cases (e.g. proxies that
return response fields containing extra fields like Python datetimes),
where these values are not important and we just want to drop them.

Twitter handle: @hacubu

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-15 17:00:01 -04:00
Eugene Yurtsev
3628dccbf3 chore(docs): add gtm tag to docs (#32048)
Docusaurus GTM, langchain v2
2025-07-15 15:58:11 -04:00
Mason Daugherty
8d2135ad8a chore: update links to LangChain how-to guides in issue templates (#32052) 2025-07-15 15:38:35 -04:00
Mason Daugherty
8199a5562a chore: update hyperlink (#32051) 2025-07-15 14:41:27 -04:00
Mason Daugherty
0807711dad chore: update contribution guidelines and templates to direct users to the LangChain Forum (#32050) 2025-07-15 14:39:40 -04:00
Eugene Yurtsev
7a36d6b99c chore(docs): bump langgraph in docs & reformat all docs (#32044)
Trying to unblock documentation build pipeline

* Bump langgraph dep in docs
* Update langgraph in lock file (resolves an issue in API reference
generation)
2025-07-15 15:06:59 +00:00
Mason Daugherty
3b9dd1eba0 docs(groq): cleanup (#32043) 2025-07-15 10:37:37 -04:00
Eugene Yurtsev
02d0a9af6c chore(core): unpin packaging dependency (#32032)
Unpin packaging dependency

---------

Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
2025-07-14 21:42:32 +00:00
Christophe Bornet
953592d4f7 feat(langchain): add ruff rules G (#32029)
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-07-14 15:19:36 -04:00
Christophe Bornet
19fff8cba9 feat(langchain): add ruff rules DTZ (#32021)
See https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-07-14 12:47:16 -04:00
Marco Vinciguerra
26c2c8f70a docs: update ScrapeGraphAI tools (#32026)
It was outdated

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 12:38:55 -04:00
Hunter Lovell
d96b75f9d3 chore: update readme with forum link (#32027) 2025-07-14 09:15:26 -07:00
Ahmad Elmalah
2fdccd789c docs: update Textract docs (#31992)
I am modifying two things:

1. "This sample demonstrates" with "The following samples demonstrate"
as we're talking about at least 4 samples
2. Bringing the sentence to after talking about the definition of
textract to keep the document organized (textract definition then
samples)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:36:29 +00:00
董哥的黑板报
553ac1863b docs: add deprecation notice for PipelinePromptTemplate (#31999)
**PR title**: 
add deprecation notice for PipelinePromptTemplate

**PR message**: 
In the API documentation, PipelinePromptTemplate is marked as
deprecated, but this is not mentioned in the docs.

I'm submitting this PR to add a deprecation notice to the docs.

**Tests**:
N/A (documentation only)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:27:29 +00:00
Andreas V. Jonsterhaug
6dcca35a34 fix(core): correct return type hints in BaseChatPromptTemplate (#32009)
This PR changes the return type hints of the `format_prompt` and
`aformat_prompt` methods in `BaseChatPromptTemplate` from `PromptValue`
to `ChatPromptValue`. Since both methods always return a
`ChatPromptValue`.
2025-07-14 11:00:01 -04:00
Christophe Bornet
d57216c295 feat(core): add ruff rules D to tests except D1 (#32000)
Docs are not required for tests but when there are docstrings, they
shall be correctly formatted.
See https://docs.astral.sh/ruff/rules/#pydocstyle-d
2025-07-14 10:42:03 -04:00
Christophe Bornet
58d4261875 feat(langchain): add ruff rules PTH (#32008)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth
2025-07-14 10:41:37 -04:00
Fabio Fontana
fd168e1c11 feat(text-splitters): add Visual Basic 6 support (#31173)
### **Description**

Add Visual Basic 6 support.

---

### **Issue**

No specific issue addressed.

---

### **Dependencies**

No additional dependencies required.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-14 13:51:16 +00:00
Mason Daugherty
7e146a185b chore: add text splitters to PR linting (#32018) 2025-07-14 09:24:40 -04:00
ccurme
4b89483fe6 release(openai): 0.3.28 (#32015) 2025-07-14 10:38:11 +00:00
ccurme
de13f6ae4f fix(openai): support acknowledged safety checks in computer use (#31984) 2025-07-14 07:33:37 -03:00
Mason Daugherty
a5f1774b76 chore: update PR template (#32011) 2025-07-13 17:00:08 -04:00
Michael Li
a04131489e chore: update error message formatting (#31980) 2025-07-11 15:19:51 -04:00
Mason Daugherty
56d6d69ce9 release(groq): 0.3.6 (#31975) 2025-07-11 10:55:26 -04:00
Fazal
8197504fdc docs: fix typo (#31972)
**Description**:

Fixed a typo from "Hi, I'm Bob and I life in SF." to "Hi, I'm Bob and I
live in SF." .

**Connect with me**: https://www.linkedin.com/in/0xFazal/

**More about me**: https://fazal.me/

-**Fazal**.
2025-07-11 09:47:23 -04:00
Akshara
103fd6ac0c docs: add Google-style docstrings to tools and llms modules (zapier, … (#31957)
**Description:**
Added standardized Google-style docstrings to improve documentation
consistency across key modules.

Updated files:
- `tools/zapier/tool.py`
- `tools/jira/tool.py`
- `tools/json/tool.py`
- `llms/base.py`

These changes enhance readability and maintain consistency with
LangChain’s documentation style guide.

**Issue:**
Fixes #21983

**Dependencies:**
None

**Twitter handle :**
@Akshara_p_
2025-07-11 09:46:21 -04:00
ccurme
612ccf847a chore: [openai] bump sdk (#31958) 2025-07-10 15:53:41 -04:00
Mason Daugherty
b5462b8979 chore: update pr_lint.yml to add description (#31954) 2025-07-10 14:45:28 -04:00
Mason Daugherty
6594eb8cc1 docs(xai): update for Grok 4 (#31953) 2025-07-10 11:06:37 -04:00
Christophe Bornet
060fc0e3c9 text-splitters: Add ruff rules FBT (#31935)
See [flake8-boolean-trap
(FBT)](https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt)
2025-07-09 18:36:58 -04:00
Azhagammal
4d9c0b0883 fix[core]: added error message if the query vector or embedding contains NaN values (#31822)
**Description:**  
Added an explicit validation step in
`langchain_core.vectorstores.utils._cosine_similarity` to raise a
`ValueError` if the input query or any embedding contains `NaN` values.
This prevents silent failures or unstable behavior during similarity
calculations, especially when using maximal_marginal_relevance.
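A minimal sketch of the kind of guard described (names are
illustrative):

```python
import numpy as np

def validate_no_nan(matrix: np.ndarray) -> None:
    """Raise if any value is NaN, since cosine similarity would be undefined."""
    if np.isnan(matrix).any():
        raise ValueError("Input contains NaN values")
```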

**Issue**:
Fixes #31806 

**Dependencies:**  
None

---------

Co-authored-by: Azhagammal S C <azhagammal@kofluence.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-09 18:30:26 -04:00
Michael Li
a8998a1f57 chore[langchain]: fix broad base except in crawler.py (#31941)
2025-07-09 15:17:42 -04:00
Mason Daugherty
042da2a2b2 chore: clean up capitalization in workflow names (#31944) 2025-07-09 15:16:48 -04:00
Mason Daugherty
c026a71a06 chore: add PR title linting (#31943) 2025-07-09 15:04:25 -04:00
Eugene Yurtsev
059942b5fc ci: update issue template to refer Q&A to langchain forum (#31829)
Update issue template to link to the langchain forum
2025-07-09 09:49:16 -04:00
1629 changed files with 121055 additions and 55526 deletions


@@ -5,26 +5,31 @@ This project includes a [dev container](https://containers.dev/), which lets you
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click the **Code** drop-down menu at the top of <https://github.com/langchain-ai/langchain>.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```txt
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<YOUR_USERNAME>/<YOUR_CLONED_REPO_NAME>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
If you already have VS Code and Docker installed, you can use the button above to get started. This will use VSCode to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
@@ -40,5 +45,5 @@ You can learn more in the [Dev Containers documentation](https://code.visualstud
## Tips and tricks
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
- If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
- If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.


@@ -1,36 +1,58 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
"mounts": [
"source=langchain-workspaces,target=/workspaces/langchain,type=volume"
],
// Prevent the container from shutting down
"overrideCommand": true,
// Features to add to the dev container. More info: https://containers.dev/features
"features": {
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"containerEnv": {
"UV_LINK_MODE": "copy"
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Run commands after the container is created
"postCreateCommand": "uv sync && echo 'LangChain (Python) dev environment ready!'",
// Configure tool-specific properties.
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.debugpy",
"ms-python.mypy-type-checker",
"ms-python.isort",
"unifiedjs.vscode-mdx",
"davidanson.vscode-markdownlint",
"ms-toolsai.jupyter",
"GitHub.copilot",
"GitHub.copilot-chat"
],
"settings": {
"python.defaultInterpreterPath": ".venv/bin/python",
"python.formatting.provider": "none",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}


@@ -4,26 +4,9 @@ services:
build:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
# depends_on:
# - mongo
# mongo:
# image: mongo
# restart: unless-stopped
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example123
# ports:
# - "27017:27017"
# networks:
# - langchain-network
networks:
langchain-network:

.editorconfig (new file, 52 lines)

@@ -0,0 +1,52 @@
# top-most EditorConfig file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88
# JSON files
[*.json]
indent_style = space
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_style = space
indent_size = 2
# Markdown files
[*.md]
indent_style = space
indent_size = 2
trim_trailing_whitespace = false
# Configuration files
[*.{toml,ini,cfg}]
indent_style = space
indent_size = 4
# Shell scripts
[*.sh]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 4
# Jupyter notebooks
[*.ipynb]
# Jupyter may include trailing whitespace in cell
# outputs that's semantically meaningful
trim_trailing_whitespace = false


@@ -129,4 +129,4 @@ For answers to common questions about this code of conduct, see the FAQ at
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
[translations]: https://www.contributor-covenant.org/translations


@@ -3,4 +3,4 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).


@@ -1,38 +0,0 @@
labels: [idea]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposal
validations:
required: false
attributes:
label: Proposal (If applicable)
description: |
If you would like to propose a solution, please describe it here.


@@ -1,122 +0,0 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier for us to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓
Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain GitHub Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain GitHub Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:
* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true


@@ -1,33 +1,32 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to file a bug report.
Use this to report bugs in LangChain.
If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.
value: |
Thank you for taking the time to file a bug report.
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain GitHub Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain GitHub Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
- label: This is a bug, not a usage question.
required: true
- label: I added a clear and descriptive title that summarizes this issue.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
@@ -35,6 +34,10 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
@@ -45,25 +48,25 @@ body:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
**Important!**
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
The following code:
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
@@ -99,16 +102,18 @@ body:
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Run the following command in your terminal and paste the output here:
python -m langchain_core.sys_info
`python -m langchain_core.sys_info`
or if you have an existing python interpreter running:
```python
from langchain_core import sys_info
sys_info.print_sys_info()
```
alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info


@@ -1,12 +1,9 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support


@@ -1,58 +0,0 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or to report issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain GitHub Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain GitHub Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.


@@ -0,0 +1,118 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: Feature Description
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
- type: textarea
id: use-case
validations:
required: true
attributes:
label: Use Case
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
- type: textarea
id: proposed-solution
validations:
required: false
attributes:
label: Proposed Solution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...


@@ -4,12 +4,7 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
- type: checkboxes
id: privileged
attributes:

.github/ISSUE_TEMPLATE/task.yml

@@ -0,0 +1,91 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false


@@ -1,28 +1,33 @@
Thank you for contributing to LangChain!
(Replace this entire block of text)
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**
- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
- Examples:
- feat(core): add multi-tenant support
- fix(cli): resolve flag parsing error
- docs(openai): update API usage examples
- Allowed `{TYPE}` values:
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
- Once you've written the title, please delete this checklist item; do not include it in the PR.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
2. An example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Add tests and docs**: If you're adding a new integration, please include
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in the `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://python.langchain.com/docs/contributing/) for more.
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.
- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.


@@ -4,4 +4,4 @@ RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.
COPY ./app /app
CMD ["python", "/app/main.py"]
CMD ["python", "/app/main.py"]


@@ -1,11 +1,13 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
# TODO: fix this, migrate to new docs repo?
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
required: true
runs:
using: 'docker'
image: 'Dockerfile'
using: "docker"
image: "Dockerfile"


@@ -1,12 +1,24 @@
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching
# Helper to set up Python and uv with caching
name: uv-install
description: Set up Python and uv
description: Set up Python and uv with caching
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: "true"
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ""
working-directory:
description: Working directory for cache glob scoping
required: false
default: "**"
env:
UV_VERSION: "0.5.25"
@@ -15,7 +27,13 @@ runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}

.github/copilot-instructions.md

@@ -0,0 +1,325 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
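A minimal sketch of the keyword-only pattern above (the names `get_user` and `include_deleted` are illustrative, not from the codebase):

```python
def get_user(user_id: str, verbose: bool = False, *, include_deleted: bool = False) -> "User":
    """Retrieve user by ID with optional verbose output.

    ``include_deleted`` sits after the ``*`` marker, so older positional
    call sites such as ``get_user("u1", True)`` keep working unchanged.
    """
    ...
```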
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested (invalid input, etc.)
- [ ] Fixtures/mocks are used for external dependencies
- [ ] Tests are deterministic (no flaky tests)
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
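For the fixtures/mocks item above, a minimal sketch using `unittest.mock` (the `notify_user` function and its email client are hypothetical, purely for illustration):

```python
from unittest.mock import Mock


def notify_user(email_client, address: str, body: str) -> bool:
    """Toy function under test; real code would live in the package."""
    if "@" not in address:
        return False
    return email_client.send(address, body)


def test_notify_user_skips_invalid_address():
    """No network access: the external client is mocked."""
    email_client = Mock()
    assert notify_user(email_client, "not-an-email", "hi") is False
    email_client.send.assert_not_called()


def test_notify_user_sends_to_valid_address():
    email_client = Mock()
    email_client.send.return_value = True
    assert notify_user(email_client, "a@b.co", "hi") is True
    email_client.send.assert_called_once_with("a@b.co", "hi")
```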
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`); assign error messages to a `msg` variable before raising
- Remove unreachable/commented-out code before committing
- Check for race conditions and resource leaks; ensure proper cleanup of file handles, sockets, and connections
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
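The checklist above also asks for a `msg` variable when raising; a small sketch of that pattern (the error handling here is hypothetical, not from the repo):

```python
import json


def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        msg = f"Config file {path!r} is not valid JSON: {e}"
        raise ValueError(msg) from e
```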
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field


@dataclass
class ProcessingResult:
    """Result of data processing operation."""

    items_processed: int
    success: bool
    errors: list[str] = field(default_factory=list)


class DataProcessor:
    """Handles data validation, storage, and notification."""

    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: list[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
    # Implementation here; return a string result (placeholder below)
    return f"Results for {query!r}"
```
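The decorator returns a tool object rather than a plain function; a quick usage sketch (behavior as expected from `langchain_core`, output not captured from a real run):

```python
# The tool exposes a name, a description, and an args schema,
# and runs like any other Runnable via .invoke()
print(search_database.name)  # -> "search_database"
print(search_database.invoke({"query": "vector stores"}))
```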
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable (see the sketch after this list)
- Avoid deprecated components like legacy `LLMChain`
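For the streaming item above, a minimal sketch of a custom chat model that implements `_stream` (assuming current `langchain_core` interfaces; `EchoChatModel` is a toy for illustration, not a real integration):

```python
from collections.abc import Iterator
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult


class EchoChatModel(BaseChatModel):
    """Toy chat model that echoes the last message back."""

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        text = str(messages[-1].content)
        return ChatResult(generations=[ChatGeneration(message=AIMessage(content=text))])

    def _stream(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        # Yield one whitespace-separated token at a time so callers see
        # incremental output from .stream() instead of one final blob.
        for token in str(messages[-1].content).split():
            chunk = ChatGenerationChunk(message=AIMessageChunk(content=token + " "))
            if run_manager:
                run_manager.on_llm_new_token(token, chunk=chunk)
            yield chunk
```

With this in place, `EchoChatModel().stream("hello world")` yields chunks as they are produced.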
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

.github/pr-file-labeler.yml

@@ -0,0 +1,80 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns
# Core packages
core:
- changed-files:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
v1:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
cli:
- changed-files:
- any-glob-to-any-file:
- "libs/cli/**/*"
standard-tests:
- changed-files:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
# Partner integrations
integration:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/**/*"
# Infrastructure and DevOps
infra:
- changed-files:
- any-glob-to-any-file:
- ".github/**/*"
- "Makefile"
- ".pre-commit-config.yaml"
- "scripts/**/*"
- "docker/**/*"
- "Dockerfile*"
github_actions:
- changed-files:
- any-glob-to-any-file:
- ".github/workflows/**/*"
- ".github/actions/**/*"
dependencies:
- changed-files:
- any-glob-to-any-file:
- "**/pyproject.toml"
- "uv.lock"
- "**/requirements*.txt"
- "**/poetry.lock"
# Documentation
documentation:
- changed-files:
- any-glob-to-any-file:
- "docs/**/*"
- "**/*.md"
- "**/*.rst"
- "**/README*"
# Security related changes
security:
- changed-files:
- any-glob-to-any-file:
- "**/*security*"
- "**/*auth*"
- "**/*credential*"
- "**/*secret*"
- "**/*token*"
- ".github/workflows/security*"

.github/pr-title-labeler.yml

@@ -0,0 +1,41 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
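# Example (illustrative): a PR titled "feat(core): add tracing" gets the
# "feature" label via the mapping below, while "feat!: drop Python 3.9
# support" would also get the "breaking" label.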
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]


@@ -1,24 +1,38 @@
"""Analyze git diffs to determine which directories need to be tested.
Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.
- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks
Used as part of the check_diffs workflow.
"""
import glob
import json
import os
import sys
from collections import defaultdict
from typing import Dict, List, Set
from pathlib import Path
from typing import Dict, List, Set
import tomllib
from packaging.requirements import Requirement
from get_min_versions import get_min_version_from_toml
from packaging.requirements import Requirement
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
]
# when set to True, we are ignoring core dependents
# When set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI
@@ -37,7 +51,7 @@ IGNORED_PARTNERS = [
]
PY_312_MAX_PACKAGES = [
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
@@ -50,9 +64,9 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""Construct a mapping of package -> dependents
Done such that we can run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
@@ -84,9 +98,9 @@ def dependents_graph() -> dict:
for depline in extended_deps:
if depline.startswith("-e "):
# editable dependency
assert depline.startswith(
"-e ../partners/"
), "Extended test deps should only editable install partner packages"
assert depline.startswith("-e ../partners/"), (
"Extended test deps should only editable install partner packages"
)
partner = depline.split("partners/")[1]
dep = f"langchain-{partner}"
else:
@@ -122,23 +136,24 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
elif dir_ == "libs/core":
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
py_versions = ["3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ == "libs/partners/milvus":
# milvus doesn't allow 3.12 because they declare deps in a funny way
py_versions = ["3.9", "3.11"]
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
py_versions = ["3.10", "3.12"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
elif dir_ == "libs/langchain_v1":
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
py_versions = ["3.10", "3.12"]
else:
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
return [{"working-directory": dir_, "python-version": py_v} for py_v in py_versions]
@@ -270,7 +285,7 @@ if __name__ == "__main__":
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
# Note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/standard-tests")
dirs_to_run["lint"].add("libs/cli")
@@ -284,7 +299,7 @@ if __name__ == "__main__":
elif file.startswith("libs/cli"):
dirs_to_run["lint"].add("libs/cli")
dirs_to_run["test"].add("libs/cli")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
@@ -302,7 +317,10 @@ if __name__ == "__main__":
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif file.startswith("docs/") or file in ["pyproject.toml", "uv.lock"]: # docs or root uv files
elif file.startswith("docs/") or file in [
"pyproject.toml",
"uv.lock",
]: # docs or root uv files
docs_edited = True
dirs_to_run["lint"].add(".")


@@ -1,19 +1,21 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""
import sys
import tomllib
if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# see if we're releasing an rc
# See if we're releasing an rc or dev version
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# if not, iterate through dependencies and make sure none allow prereleases
# If not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:


@@ -1,24 +1,22 @@
from collections import defaultdict
"""Get minimum versions of dependencies from a pyproject.toml file."""
import sys
from collections import defaultdict
from typing import Optional
if sys.version_info >= (3, 11):
import tomllib
else:
# for python 3.10 and below, which doesnt have stdlib tomllib
# For Python 3.10 and below, which doesn't have stdlib tomllib
import tomli as tomllib
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version
import requests
from packaging.version import parse
import re
from typing import List
import re
import requests
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version, parse
MIN_VERSION_LIBS = [
"langchain-core",
@@ -38,14 +36,13 @@ SKIP_IF_PULL_REQUEST = [
def get_pypi_versions(package_name: str) -> List[str]:
"""
Fetch all available versions for a package from PyPI.
"""Fetch all available versions for a package from PyPI.
Args:
package_name (str): Name of the package
package_name: Name of the package
Returns:
List[str]: List of all available versions
List of all available versions
Raises:
requests.exceptions.RequestException: If PyPI API request fails
@@ -58,25 +55,26 @@ def get_pypi_versions(package_name: str) -> List[str]:
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""
Find the minimum published version that satisfies the given constraints.
"""Find the minimum published version that satisfies the given constraints.
Args:
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
package_name: Name of the package
spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Optional[str]: Minimum compatible version or None if no compatible version found
Minimum compatible version or None if no compatible version found
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}", spec_string)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
spec_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x+1}", spec_string
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
)
spec_set = SpecifierSet(spec_string)
@@ -156,25 +154,28 @@ def get_min_version_from_toml(
def check_python_version(version_string, constraint_string):
"""
Check if the given Python version matches the given constraints.
"""Check if the given Python version matches the given constraints.
:param version_string: A string representing the Python version (e.g. "3.8.5").
:param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
:return: True if the version matches the constraints, False otherwise.
Args:
version_string: A string representing the Python version (e.g. "3.8.5").
constraint_string: A string representing the package's Python version
constraints (e.g. ">=3.6, <4.0").
Returns:
True if the version matches the constraints
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y+1}.0", constraint_string
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
constraint_string = re.sub(
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x+1}.0.0", constraint_string
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string
)
try:


@@ -1,15 +1,19 @@
#!/usr/bin/env python
"""Script to sync libraries from various repositories into the main langchain repository."""
"""Sync libraries from various repositories into this monorepo.
Moves cloned partner packages into libs/partners structure.
"""
import os
import shutil
import yaml
from pathlib import Path
from typing import Dict, Any
from typing import Any, Dict
import yaml
def load_packages_yaml() -> Dict[str, Any]:
"""Load and parse the packages.yml file."""
"""Load and parse packages.yml."""
with open("langchain/libs/packages.yml", "r") as f:
return yaml.safe_load(f)
@@ -28,7 +32,6 @@ def get_target_dir(package_name: str) -> Path:
def clean_target_directories(packages: list) -> None:
"""Remove old directories that will be replaced."""
for package in packages:
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
@@ -38,7 +41,6 @@ def clean_target_directories(packages: list) -> None:
def move_libraries(packages: list) -> None:
"""Move libraries from their source locations to the target directories."""
for package in packages:
repo_name = package["repo"].split("/")[1]
source_path = package["path"]
target_dir = get_target_dir(package["name"])
@@ -62,29 +64,46 @@ def move_libraries(packages: list) -> None:
def main():
"""Main function to orchestrate the library sync process."""
"""Orchestrate the library sync process."""
try:
# Load packages configuration
package_yaml = load_packages_yaml()
# Clean target directories
clean_target_directories([
p
for p in package_yaml["packages"]
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Clean/empty target directories in preparation for moving new ones
#
# Only for packages in the langchain-ai org or explicitly included via
# include_in_api_ref, excluding 'langchain' itself and 'langchain-ai21'
clean_target_directories(
[
p
for p in package_yaml["packages"]
if (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Move libraries to their new locations
move_libraries([
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
])
# Move cloned libraries to their new locations, only for packages in the
# langchain-ai org or explicitly included via include_in_api_ref,
# excluding 'langchain' itself and 'langchain-ai21'
move_libraries(
[
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Delete ones without a pyproject.toml
# Delete partner packages without a pyproject.toml
for partner in Path("langchain/libs/partners").iterdir():
if partner.is_dir() and not (partner / "pyproject.toml").exists():
print(f"Removing {partner} as it does not have a pyproject.toml")


@@ -81,56 +81,93 @@ import time
__version__ = "2022.12+dev"
# Update symlinks only if the platform supports not following them
UPDATE_SYMLINKS = bool(os.utime in getattr(os, 'supports_follow_symlinks', []))
UPDATE_SYMLINKS = bool(os.utime in getattr(os, "supports_follow_symlinks", []))
# Call os.path.normpath() only if not in a POSIX platform (Windows)
NORMALIZE_PATHS = (os.path.sep != '/')
NORMALIZE_PATHS = os.path.sep != "/"
# How many files to process in each batch when re-trying merge commits
STEPMISSING = 100
# (Extra) keywords for the os.utime() call performed by touch()
UTIME_KWS = {} if not UPDATE_SYMLINKS else {'follow_symlinks': False}
UTIME_KWS = {} if not UPDATE_SYMLINKS else {"follow_symlinks": False}
# Command-line interface ######################################################
def parse_args():
parser = argparse.ArgumentParser(
description=__doc__.split('\n---')[0])
parser = argparse.ArgumentParser(description=__doc__.split("\n---")[0])
group = parser.add_mutually_exclusive_group()
group.add_argument('--quiet', '-q', dest='loglevel',
action="store_const", const=logging.WARNING, default=logging.INFO,
help="Suppress informative messages and summary statistics.")
group.add_argument('--verbose', '-v', action="count", help="""
group.add_argument(
"--quiet",
"-q",
dest="loglevel",
action="store_const",
const=logging.WARNING,
default=logging.INFO,
help="Suppress informative messages and summary statistics.",
)
group.add_argument(
"--verbose",
"-v",
action="count",
help="""
Print additional information for each processed file.
Specify twice to further increase verbosity.
""")
""",
)
parser.add_argument('--cwd', '-C', metavar="DIRECTORY", help="""
parser.add_argument(
"--cwd",
"-C",
metavar="DIRECTORY",
help="""
Run as if %(prog)s was started in directory %(metavar)s.
This affects how --work-tree, --git-dir and PATHSPEC arguments are handled.
See 'man 1 git' or 'git --help' for more information.
""")
""",
)
parser.add_argument('--git-dir', dest='gitdir', metavar="GITDIR", help="""
parser.add_argument(
"--git-dir",
dest="gitdir",
metavar="GITDIR",
help="""
Path to the git repository, by default auto-discovered by searching
the current directory and its parents for a .git/ subdirectory.
""")
""",
)
parser.add_argument('--work-tree', dest='workdir', metavar="WORKTREE", help="""
parser.add_argument(
"--work-tree",
dest="workdir",
metavar="WORKTREE",
help="""
Path to the work tree root, by default the parent of GITDIR if it's
automatically discovered, or the current directory if GITDIR is set.
""")
""",
)
parser.add_argument('--force', '-f', default=False, action="store_true", help="""
parser.add_argument(
"--force",
"-f",
default=False,
action="store_true",
help="""
Force updating files with uncommitted modifications.
Untracked files and uncommitted deletions, renames and additions are
always ignored.
""")
""",
)
parser.add_argument('--merge', '-m', default=False, action="store_true", help="""
parser.add_argument(
"--merge",
"-m",
default=False,
action="store_true",
help="""
Include merge commits.
Leads to more recent times and more files per commit, thus with the same
time, which may or may not be what you want.
@@ -138,71 +175,130 @@ def parse_args():
are found sooner, which can improve performance, sometimes substantially.
But as merge commits are usually huge, processing them may also take longer.
By default, merge commits are only used for files missing from regular commits.
""")
""",
)
parser.add_argument('--first-parent', default=False, action="store_true", help="""
parser.add_argument(
"--first-parent",
default=False,
action="store_true",
help="""
Consider only the first parent, the "main branch", when evaluating merge commits.
Only effective when merge commits are processed, either when --merge is
used or when finding missing files after the first regular log search.
See --skip-missing.
""")
""",
)
parser.add_argument('--skip-missing', '-s', dest="missing", default=True,
action="store_false", help="""
parser.add_argument(
"--skip-missing",
"-s",
dest="missing",
default=True,
action="store_false",
help="""
Do not try to find missing files.
If merge commits were not evaluated with --merge and some files were
not found in regular commits, by default %(prog)s searches for these
files again in the merge commits.
This option disables this retry, so files found only in merge commits
will not have their timestamp updated.
""")
""",
)
parser.add_argument('--no-directories', '-D', dest='dirs', default=True,
action="store_false", help="""
parser.add_argument(
"--no-directories",
"-D",
dest="dirs",
default=True,
action="store_false",
help="""
Do not update directory timestamps.
By default, use the time of its most recently created, renamed or deleted file.
Note that just modifying a file will NOT update its directory time.
""")
""",
)
parser.add_argument('--test', '-t', default=False, action="store_true",
help="Test run: do not actually update any file timestamp.")
parser.add_argument(
"--test",
"-t",
default=False,
action="store_true",
help="Test run: do not actually update any file timestamp.",
)
parser.add_argument('--commit-time', '-c', dest='commit_time', default=False,
action='store_true', help="Use commit time instead of author time.")
parser.add_argument(
"--commit-time",
"-c",
dest="commit_time",
default=False,
action="store_true",
help="Use commit time instead of author time.",
)
parser.add_argument('--oldest-time', '-o', dest='reverse_order', default=False,
action='store_true', help="""
parser.add_argument(
"--oldest-time",
"-o",
dest="reverse_order",
default=False,
action="store_true",
help="""
Update times based on the oldest, instead of the most recent commit of a file.
This reverses the order in which the git log is processed to emulate a
file "creation" date. Note this will be inaccurate for files deleted and
re-created at later dates.
""")
""",
)
parser.add_argument('--skip-older-than', metavar='SECONDS', type=int, help="""
parser.add_argument(
"--skip-older-than",
metavar="SECONDS",
type=int,
help="""
Ignore files that are currently older than %(metavar)s.
Useful in workflows that assume such files already have a correct timestamp,
as it may improve performance by processing fewer files.
""")
""",
)
parser.add_argument('--skip-older-than-commit', '-N', default=False,
action='store_true', help="""
parser.add_argument(
"--skip-older-than-commit",
"-N",
default=False,
action="store_true",
help="""
Ignore files older than the timestamp it would be updated to.
Such files may be considered "original", likely in the author's repository.
""")
""",
)
parser.add_argument('--unique-times', default=False, action="store_true", help="""
parser.add_argument(
"--unique-times",
default=False,
action="store_true",
help="""
Set the microseconds to a unique value per commit.
Allows telling apart changes that would otherwise have identical timestamps,
as git's time accuracy is in seconds.
""")
""",
)
parser.add_argument('pathspec', nargs='*', metavar='PATHSPEC', help="""
parser.add_argument(
"pathspec",
nargs="*",
metavar="PATHSPEC",
help="""
Only modify paths matching %(metavar)s, relative to current directory.
By default, update all but untracked files and submodules.
""")
""",
)
parser.add_argument('--version', '-V', action='version',
version='%(prog)s version {version}'.format(version=get_version()))
parser.add_argument(
"--version",
"-V",
action="version",
version="%(prog)s version {version}".format(version=get_version()),
)
args_ = parser.parse_args()
if args_.verbose:
@@ -212,17 +308,18 @@ def parse_args():
def get_version(version=__version__):
-if not version.endswith('+dev'):
+if not version.endswith("+dev"):
return version
try:
cwd = os.path.dirname(os.path.realpath(__file__))
-return Git(cwd=cwd, errors=False).describe().lstrip('v')
+return Git(cwd=cwd, errors=False).describe().lstrip("v")
except Git.Error:
-return '-'.join((version, "unknown"))
+return "-".join((version, "unknown"))
# Helper functions ############################################################
def setup_logging():
"""Add TRACE logging level and corresponding method, return the root logger"""
logging.TRACE = TRACE = logging.DEBUG // 2
@@ -255,11 +352,13 @@ def normalize(path):
if path and path[0] == '"':
# Python 2: path = path[1:-1].decode("string-escape")
# Python 3: https://stackoverflow.com/a/46650050/624066
-path = (path[1:-1] # Remove enclosing double quotes
-.encode('latin1') # Convert to bytes, required by 'unicode-escape'
-.decode('unicode-escape') # Perform the actual octal-escaping decode
-.encode('latin1') # 1:1 mapping to bytes, UTF-8 encoded
-.decode('utf8', 'surrogateescape')) # Decode from UTF-8
+path = (
+path[1:-1] # Remove enclosing double quotes
+.encode("latin1") # Convert to bytes, required by 'unicode-escape'
+.decode("unicode-escape") # Perform the actual octal-escaping decode
+.encode("latin1") # 1:1 mapping to bytes, UTF-8 encoded
+.decode("utf8", "surrogateescape")
+) # Decode from UTF-8
if NORMALIZE_PATHS:
# Make sure the slash matches the OS; for Windows we need a backslash
path = os.path.normpath(path)
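For readers puzzling over the four-step encode/decode chain in normalize() above, here is a self-contained illustration of why it works; the sample path is hypothetical:

# git prints non-ASCII paths octal-escaped inside double quotes.
quoted = r'"a\303\251.txt"'            # what git emits for 'aé.txt'
path = (
    quoted[1:-1]                       # strip the enclosing double quotes
    .encode("latin1")                  # str -> bytes, 1:1, required by unicode-escape
    .decode("unicode-escape")          # '\303\251' -> two chars U+00C3, U+00A9
    .encode("latin1")                  # back to the raw UTF-8 bytes b'\xc3\xa9'
    .decode("utf8", "surrogateescape") # finally decode the UTF-8 -> 'é'
)
assert path == "aé.txt"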
@@ -282,12 +381,12 @@ def touch_ns(path, mtime_ns):
def isodate(secs: int):
# time.localtime() accepts floats, but discards fractional part
-return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(secs))
+return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(secs))
def isodate_ns(ns: int):
# for integers fromtimestamp() is equivalent and ~16% slower than isodate()
-return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=' ')
+return datetime.datetime.fromtimestamp(ns / 1000000000).isoformat(sep=" ")
def get_mtime_ns(secs: int, idx: int):
@@ -305,35 +404,49 @@ def get_mtime_path(path):
# Git class and parse_log(), the heart of the script ##########################
class Git:
def __init__(self, workdir=None, gitdir=None, cwd=None, errors=True):
-self.gitcmd = ['git']
+self.gitcmd = ["git"]
self.errors = errors
self._proc = None
-if workdir: self.gitcmd.extend(('--work-tree', workdir))
-if gitdir: self.gitcmd.extend(('--git-dir', gitdir))
-if cwd: self.gitcmd.extend(('-C', cwd))
+if workdir:
+self.gitcmd.extend(("--work-tree", workdir))
+if gitdir:
+self.gitcmd.extend(("--git-dir", gitdir))
+if cwd:
+self.gitcmd.extend(("-C", cwd))
self.workdir, self.gitdir = self._get_repo_dirs()
def ls_files(self, paths: list = None):
-return (normalize(_) for _ in self._run('ls-files --full-name', paths))
+return (normalize(_) for _ in self._run("ls-files --full-name", paths))
def ls_dirty(self, force=False):
-return (normalize(_[3:].split(' -> ', 1)[-1])
-for _ in self._run('status --porcelain')
-if _[:2] != '??' and (not force or (_[0] in ('R', 'A')
-or _[1] == 'D')))
+return (
+normalize(_[3:].split(" -> ", 1)[-1])
+for _ in self._run("status --porcelain")
+if _[:2] != "??" and (not force or (_[0] in ("R", "A") or _[1] == "D"))
+)
-def log(self, merge=False, first_parent=False, commit_time=False,
-reverse_order=False, paths: list = None):
-cmd = 'whatchanged --pretty={}'.format('%ct' if commit_time else '%at')
-if merge: cmd += ' -m'
-if first_parent: cmd += ' --first-parent'
-if reverse_order: cmd += ' --reverse'
+def log(
+self,
+merge=False,
+first_parent=False,
+commit_time=False,
+reverse_order=False,
+paths: list = None,
+):
+cmd = "whatchanged --pretty={}".format("%ct" if commit_time else "%at")
+if merge:
+cmd += " -m"
+if first_parent:
+cmd += " --first-parent"
+if reverse_order:
+cmd += " --reverse"
return self._run(cmd, paths)
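As a reference for what log() above actually spawns, the argument vector can be previewed with the same shlex call the class uses; the flag values here are illustrative:

import shlex

# Illustrative: with merge=True, commit_time=False, reverse_order=True and no
# paths, log() builds this command (the leading 'git' comes from __init__).
cmd = "whatchanged --pretty=%at -m --reverse"
print(["git"] + shlex.split(cmd))
# ['git', 'whatchanged', '--pretty=%at', '-m', '--reverse']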
def describe(self):
-return self._run('describe --tags', check=True)[0]
+return self._run("describe --tags", check=True)[0]
def terminate(self):
if self._proc is None:
@@ -345,18 +458,22 @@ class Git:
pass
def _get_repo_dirs(self):
-return (os.path.normpath(_) for _ in
-self._run('rev-parse --show-toplevel --absolute-git-dir', check=True))
+return (
+os.path.normpath(_)
+for _ in self._run(
+"rev-parse --show-toplevel --absolute-git-dir", check=True
+)
+)
def _run(self, cmdstr: str, paths: list = None, output=True, check=False):
cmdlist = self.gitcmd + shlex.split(cmdstr)
if paths:
-cmdlist.append('--')
+cmdlist.append("--")
cmdlist.extend(paths)
-popen_args = dict(universal_newlines=True, encoding='utf8')
+popen_args = dict(universal_newlines=True, encoding="utf8")
if not self.errors:
-popen_args['stderr'] = subprocess.DEVNULL
-log.trace("Executing: %s", ' '.join(cmdlist))
+popen_args["stderr"] = subprocess.DEVNULL
+log.trace("Executing: %s", " ".join(cmdlist))
if not output:
return subprocess.call(cmdlist, **popen_args)
if check:
@@ -379,30 +496,26 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
mtime = 0
datestr = isodate(0)
for line in git.log(
-merge,
-args.first_parent,
-args.commit_time,
-args.reverse_order,
-filterlist
+merge, args.first_parent, args.commit_time, args.reverse_order, filterlist
):
-stats['loglines'] += 1
+stats["loglines"] += 1
# Blank line between Date and list of files
if not line:
continue
# Date line
-if line[0] != ':': # Faster than `not line.startswith(':')`
-stats['commits'] += 1
+if line[0] != ":": # Faster than `not line.startswith(':')`
+stats["commits"] += 1
mtime = int(line)
if args.unique_times:
-mtime = get_mtime_ns(mtime, stats['commits'])
+mtime = get_mtime_ns(mtime, stats["commits"])
if args.debug:
datestr = isodate(mtime)
continue
# File line: three tokens if it describes a renaming, otherwise two
-tokens = line.split('\t')
+tokens = line.split("\t")
# Possible statuses:
# M: Modified (content changed)
@@ -411,7 +524,7 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
# T: Type changed: to/from regular file, symlinks, submodules
# R099: Renamed (moved), with % of unchanged content. 100 = pure rename
# Not possible in log: C=Copied, U=Unmerged, X=Unknown, B=pairing Broken
-status = tokens[0].split(' ')[-1]
+status = tokens[0].split(" ")[-1]
file = tokens[-1]
# Handles non-ASCII chars and OS path separator
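A sketch of the two line shapes parse_log() distinguishes, fed by a hypothetical `git whatchanged --pretty=%at` excerpt (hashes and paths invented):

sample = [
    "1700000000",                                          # date line: one Unix time per commit
    "",                                                    # blank separator
    ":100644 100644 abc1234 def5678 M\tREADME.md",         # modified file: two tab tokens
    ":100644 100644 abc1234 def5678 R100\told.py\tnew.py", # rename: three tab tokens
]
for line in sample:
    if not line:
        continue
    if line[0] != ":":
        mtime = int(line)                  # commit timestamp
        continue
    tokens = line.split("\t")
    status = tokens[0].split(" ")[-1]      # 'M' or 'R100'
    file = tokens[-1]                      # destination path for renames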
@@ -419,56 +532,76 @@ def parse_log(filelist, dirlist, stats, git, merge=False, filterlist=None):
def do_file():
if args.skip_older_than_commit and get_mtime_path(file) <= mtime:
-stats['skip'] += 1
+stats["skip"] += 1
return
if args.debug:
log.debug("%d\t%d\t%d\t%s\t%s",
stats['loglines'], stats['commits'], stats['files'],
datestr, file)
log.debug(
"%d\t%d\t%d\t%s\t%s",
stats["loglines"],
stats["commits"],
stats["files"],
datestr,
file,
)
try:
touch(os.path.join(git.workdir, file), mtime)
-stats['touches'] += 1
+stats["touches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, file)
-stats['errors'] += 1
+stats["errors"] += 1
def do_dir():
if args.debug:
log.debug("%d\t%d\t-\t%s\t%s",
stats['loglines'], stats['commits'],
datestr, "{}/".format(dirname or '.'))
log.debug(
"%d\t%d\t-\t%s\t%s",
stats["loglines"],
stats["commits"],
datestr,
"{}/".format(dirname or "."),
)
try:
touch(os.path.join(git.workdir, dirname), mtime)
-stats['dirtouches'] += 1
+stats["dirtouches"] += 1
except Exception as e:
log.error("ERROR: %s: %s", e, dirname)
-stats['direrrors'] += 1
+stats["direrrors"] += 1
if file in filelist:
-stats['files'] -= 1
+stats["files"] -= 1
filelist.remove(file)
do_file()
-if args.dirs and status in ('A', 'D'):
+if args.dirs and status in ("A", "D"):
dirname = os.path.dirname(file)
if dirname in dirlist:
dirlist.remove(dirname)
do_dir()
# All files done?
-if not stats['files']:
+if not stats["files"]:
git.terminate()
return
# Main Logic ##################################################################
def main():
start = time.time() # yes, Wall time. CPU time is not realistic for users.
-stats = {_: 0 for _ in ('loglines', 'commits', 'touches', 'skip', 'errors',
-'dirtouches', 'direrrors')}
+stats = {
+_: 0
+for _ in (
+"loglines",
+"commits",
+"touches",
+"skip",
+"errors",
+"dirtouches",
+"direrrors",
+)
+}
-logging.basicConfig(level=args.loglevel, format='%(message)s')
+logging.basicConfig(level=args.loglevel, format="%(message)s")
log.trace("Arguments: %s", args)
# First things first: Where and Who are we?
@@ -499,13 +632,16 @@ def main():
# Symlink (to file, to dir or broken - git handles the same way)
if not UPDATE_SYMLINKS and os.path.islink(fullpath):
log.warning("WARNING: Skipping symlink, no OS support for updates: %s",
path)
log.warning(
"WARNING: Skipping symlink, no OS support for updates: %s", path
)
continue
# skip files which are older than given threshold
-if (args.skip_older_than
-and start - get_mtime_path(fullpath) > args.skip_older_than):
+if (
+args.skip_older_than
+and start - get_mtime_path(fullpath) > args.skip_older_than
+):
continue
# Always add files relative to worktree root
@@ -519,15 +655,17 @@ def main():
else:
dirty = set(git.ls_dirty())
if dirty:
log.warning("WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force.")
log.warning(
"WARNING: Modified files in the working directory were ignored."
"\nTo include such files, commit your changes or use --force."
)
filelist -= dirty
# Build dir list to be processed
dirlist = set(os.path.dirname(_) for _ in filelist) if args.dirs else set()
-stats['totalfiles'] = stats['files'] = len(filelist)
-log.info("{0:,} files to be processed in work dir".format(stats['totalfiles']))
+stats["totalfiles"] = stats["files"] = len(filelist)
+log.info("{0:,} files to be processed in work dir".format(stats["totalfiles"]))
if not filelist:
# Nothing to do. Exit silently and without errors, just like git does
@@ -544,10 +682,18 @@ def main():
if args.missing and not args.merge:
filterlist = list(filelist)
missing = len(filterlist)
log.info("{0:,} files not found in log, trying merge commits".format(missing))
log.info(
"{0:,} files not found in log, trying merge commits".format(missing)
)
for i in range(0, missing, STEPMISSING):
-parse_log(filelist, dirlist, stats, git,
-merge=True, filterlist=filterlist[i:i + STEPMISSING])
+parse_log(
+filelist,
+dirlist,
+stats,
+git,
+merge=True,
+filterlist=filterlist[i : i + STEPMISSING],
+)
# Still missing some?
for file in filelist:
@@ -556,29 +702,33 @@ def main():
# Final statistics
# Suggestion: use git-log --before=mtime to brag about skipped log entries
def log_info(msg, *a, width=13):
-ifmt = '{:%d,}' % (width,) # not using 'n' for consistency with ffmt
-ffmt = '{:%d,.2f}' % (width,)
+ifmt = "{:%d,}" % (width,) # not using 'n' for consistency with ffmt
+ffmt = "{:%d,.2f}" % (width,)
# %-formatting lacks a thousand separator, must pre-render with .format()
-log.info(msg.replace('%d', ifmt).replace('%f', ffmt).format(*a))
+log.info(msg.replace("%d", ifmt).replace("%f", ffmt).format(*a))
log_info(
"Statistics:\n"
"%f seconds\n"
"%d log lines processed\n"
"%d commits evaluated",
time.time() - start, stats['loglines'], stats['commits'])
"Statistics:\n%f seconds\n%d log lines processed\n%d commits evaluated",
time.time() - start,
stats["loglines"],
stats["commits"],
)
if args.dirs:
-if stats['direrrors']: log_info("%d directory update errors", stats['direrrors'])
-log_info("%d directories updated", stats['dirtouches'])
+if stats["direrrors"]:
+log_info("%d directory update errors", stats["direrrors"])
+log_info("%d directories updated", stats["dirtouches"])
-if stats['touches'] != stats['totalfiles']:
-log_info("%d files", stats['totalfiles'])
-if stats['skip']: log_info("%d files skipped", stats['skip'])
-if stats['files']: log_info("%d files missing", stats['files'])
-if stats['errors']: log_info("%d file update errors", stats['errors'])
+if stats["touches"] != stats["totalfiles"]:
+log_info("%d files", stats["totalfiles"])
+if stats["skip"]:
+log_info("%d files skipped", stats["skip"])
+if stats["files"]:
+log_info("%d files missing", stats["files"])
+if stats["errors"]:
+log_info("%d file update errors", stats["errors"])
log_info("%d files updated", stats['touches'])
log_info("%d files updated", stats["touches"])
if args.test:
log.info("TEST RUN - No files modified!")


@@ -1,6 +0,0 @@
[garbled extraction residue from deleted files; only the removed page docs/docs/integrations/providers/trulens.mdx is identifiable]


@@ -1,4 +1,12 @@
-name: compile-integration-test
+# Validates that a package's integration tests compile without syntax or import errors.
+#
+# (If an integration test fails to compile, it won't run.)
+#
+# Called as part of check_diffs.yml workflow
+#
+# Runs pytest with compile marker to check syntax/imports.
+name: '🔗 Compile Integration Tests'
on:
workflow_call:
@@ -25,24 +33,26 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "uv run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
-- name: Install integration dependencies
+- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
-- name: Check integration tests compile
+- name: '🔗 Check Integration Tests Compile'
shell: bash
run: uv run pytest -m compile tests/integration_tests
-- name: Ensure the tests did not create any additional files
+- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu


@@ -1,4 +1,12 @@
-name: Integration tests
+# Runs `make integration_tests` on the specified package.
+#
+# Manually triggered via workflow_dispatch for testing with real APIs.
+#
+# Installs integration test dependencies and executes full test suite.
+name: '🚀 Integration Tests'
+run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'
on:
workflow_dispatch:
@@ -11,6 +19,7 @@ on:
required: true
type: string
description: "Python version to use"
default: "3.11"
permissions:
contents: read
@@ -24,20 +33,22 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
-name: Python ${{ inputs.python-version }}
+name: 'Python ${{ inputs.python-version }}'
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
-- name: Install dependencies
+- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
-- name: Run integration tests
+- name: '🚀 Run Integration Tests'
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
@@ -79,7 +90,7 @@ jobs:
run: |
make integration_tests
-- name: Ensure the tests did not create any additional files
+- name: 'Ensure testing did not create/modify files'
shell: bash
run: |
set -eu


@@ -1,4 +1,11 @@
-name: lint
+# Runs linting.
+#
+# Uses the package's Makefile to run the checks, specifically the
+# `lint_package` and `lint_tests` targets.
+#
+# Called as part of check_diffs.yml workflow.
+name: '🧹 Linting'
on:
workflow_call:
@@ -24,56 +31,45 @@ env:
UV_FROZEN: "true"
jobs:
# Linting job - runs quality checks on package and test code
build:
name: "make lint #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
-- uses: actions/checkout@v4
+- name: '📋 Checkout Code'
+uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
-- name: Install dependencies
-# Also installs dev/lint/test/typing dependencies, to ensure we have
-# type hints for as many of our libraries as possible.
-# This helps catch errors that require dependencies to be spotted, for example:
-# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
-#
-# If you change this configuration, make sure to change the `cache-key`
-# in the `poetry_setup` action above to stop using the old cache.
-# It doesn't matter how you change it, any change will cause a cache-bust.
+- name: '📦 Install Lint & Typing Dependencies'
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --group lint --group typing
-- name: Analysing the code with our lint
+- name: '🔍 Analyze Package Code with Linters'
working-directory: ${{ inputs.working-directory }}
run: |
make lint_package
-- name: Install unit test dependencies
-# Also installs dev/lint/test/typing dependencies, to ensure we have
-# type hints for as many of our libraries as possible.
-# This helps catch errors that require dependencies to be spotted, for example:
-# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
-#
-# If you change this configuration, make sure to change the `cache-key`
-# in the `poetry_setup` action above to stop using the old cache.
-# It doesn't matter how you change it, any change will cause a cache-bust.
+- name: '📦 Install Test Dependencies (non-partners)'
+# (For directories NOT starting with libs/partners/)
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test
-- name: Install unit+integration test dependencies
+- name: '📦 Install Test Dependencies'
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test --group test_integration
-- name: Analysing the code with our lint
+- name: '🔍 Analyze Test Code with Linters'
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests


@@ -1,5 +1,11 @@
-name: release
-run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
+# Builds and publishes LangChain packages to PyPI.
+#
+# Manually triggered, though can be used as a reusable workflow (workflow_call).
+#
+# Handles version bumping, building, and publishing to PyPI with authentication.
+name: '🚀 Package Release'
+run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
on:
workflow_call:
inputs:
@@ -14,11 +20,16 @@ on:
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
release-version:
required: true
type: string
default: '0.1.0'
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
description: "Release from a non-master branch (danger!) - Only use for hotfixes"
env:
PYTHON_VERSION: "3.11"
@@ -26,6 +37,8 @@ env:
UV_NO_SYNC: "true"
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
@@ -36,7 +49,7 @@ jobs:
version: ${{ steps.check-version.outputs.version }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
@@ -45,8 +58,8 @@ jobs:
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
-# The release stage has trusted publishing and GitHub repo contents write access,
-# and we want to keep the scope of that access limited just to the release job.
+# (Release stage has trusted publishing and GitHub repo contents write access,
+#
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
@@ -64,7 +77,7 @@ jobs:
name: dist
path: ${{ inputs.working-directory }}/dist/
-- name: Check Version
+- name: Check version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
@@ -85,7 +98,7 @@ jobs:
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain
path: langchain
@@ -93,7 +106,7 @@ jobs:
${{ inputs.working-directory }}
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
-- name: Check Tags
+- name: Check tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
@@ -109,7 +122,7 @@ jobs:
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
@@ -120,7 +133,7 @@ jobs:
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
-# note if last release (chronologically) was e.g. 0.1.47 it will get
+# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
@@ -176,13 +189,36 @@ jobs:
needs:
- build
- release-notes
-uses:
-./.github/workflows/_test_release.yml
-permissions: write-all
-with:
-working-directory: ${{ inputs.working-directory }}
-dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
-secrets: inherit
+runs-on: ubuntu-latest
+permissions:
+# This permission is used for trusted publishing:
+# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
+#
+# Trusted publishing has to also be configured on PyPI for each package:
+# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
+id-token: write
+steps:
+- uses: actions/checkout@v5
+- uses: actions/download-artifact@v5
+with:
+name: dist
+path: ${{ inputs.working-directory }}/dist/
+- name: Publish to test PyPI
+uses: pypa/gh-action-pypi-publish@release/v1
+with:
+packages-dir: ${{ inputs.working-directory }}/dist/
+verbose: true
+print-hash: true
+repository-url: https://test.pypi.org/legacy/
+# We overwrite any existing distributions with the same name and version.
+# This is *only for CI use* and is *extremely dangerous* otherwise!
+# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
+skip-existing: true
+# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
+attestations: false
pre-release-checks:
needs:
@@ -192,7 +228,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
# We explicitly *don't* set up caching here. This ensures our tests are
# maximally sensitive to catching breakage.
@@ -213,7 +249,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
-- uses: actions/download-artifact@v4
+- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -258,16 +294,19 @@ jobs:
run: |
VIRTUAL_ENV=.venv uv pip install dist/*.whl
-- name: Run unit tests
-run: make tests
-working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
# Block release if any dependencies allow prerelease versions
# (unless this is itself a prerelease version)
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
+- name: Run unit tests
+run: make tests
+working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
# Find the minimum published versions that satisfies the given constraints
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
@@ -282,7 +321,8 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
-VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS --editable .
+VIRTUAL_ENV=.venv uv pip install --force-reinstall --editable .
+VIRTUAL_ENV=.venv uv pip install --force-reinstall $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -291,6 +331,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
# Uses the Makefile's `integration_tests` target for the specified package
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
@@ -331,7 +372,10 @@ jobs:
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
# Done when code changes are made to langchain-core
test-prior-published-packages-against-new-core:
# Installs the new core with old partners: Installs the new unreleased core
# alongside the previously published partner packages and runs integration tests
needs:
- build
- release-notes
@@ -355,10 +399,11 @@ jobs:
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
@@ -372,7 +417,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
-- uses: actions/download-artifact@v4
+- uses: actions/download-artifact@v5
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
@@ -381,11 +426,12 @@ jobs:
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
-# Identify latest tag
+# Identify latest tag, excluding pre-releases
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+([a-zA-Z]+[0-9]+)?$' \
| sort -Vr \
| head -n 1
)"
@@ -412,6 +458,7 @@ jobs:
make integration_tests
publish:
# Publishes the package to PyPI
needs:
- build
- release-notes
@@ -432,14 +479,14 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
-- uses: actions/download-artifact@v4
+- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -454,6 +501,7 @@ jobs:
attestations: false
mark-release:
# Marks the GitHub release with the new version tag
needs:
- build
- release-notes
@@ -463,7 +511,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
# This permission is needed by `ncipollo/release-action` to
-# create the GitHub release.
+# create the GitHub release/tag
contents: write
defaults:
@@ -471,18 +519,18 @@ jobs:
working-directory: ${{ inputs.working-directory }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
-- uses: actions/download-artifact@v4
+- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Tag
uses: ncipollo/release-action@v1
with:


@@ -1,4 +1,7 @@
-name: test
+# Runs unit tests with both current and minimum supported dependency versions
+# to ensure compatibility across the supported range.
+name: '🧪 Unit Testing'
on:
workflow_call:
@@ -20,31 +23,36 @@ env:
UV_NO_SYNC: "true"
jobs:
# Main test job - runs unit tests with current deps, then retests with minimum versions
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
steps:
-- uses: actions/checkout@v4
+- name: '📋 Checkout Code'
+uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
-- name: Install dependencies
+cache-suffix: test-${{ inputs.working-directory }}
+working-directory: ${{ inputs.working-directory }}
+- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev
-- name: Run core tests
+- name: '🧪 Run Core Unit Tests'
shell: bash
run: |
make test
-- name: Get minimum versions
+- name: '🔍 Calculate Minimum Dependency Versions'
working-directory: ${{ inputs.working-directory }}
id: min-version
shell: bash
@@ -55,7 +63,7 @@ jobs:
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
-- name: Run unit tests with minimum dependency versions
+- name: '🧪 Run Tests with Minimum Dependencies'
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
@@ -64,7 +72,7 @@ jobs:
make tests
working-directory: ${{ inputs.working-directory }}
-- name: Ensure the tests did not create any additional files
+- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
@@ -75,4 +83,4 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,4 +1,11 @@
-name: test_doc_imports
+# Validates that all import statements in `.ipynb` notebooks are correct and functional.
+#
+# Called as part of check_diffs.yml.
+#
+# Installs test dependencies and LangChain packages in editable mode and
+# runs check_imports.py.
+name: '📑 Documentation Import Testing'
on:
workflow_call:
@@ -18,29 +25,32 @@ jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: "check doc imports #${{ inputs.python-version }}"
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
-- uses: actions/checkout@v4
+- name: '📋 Checkout Code'
+uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
+cache-suffix: test-doc-imports-${{ inputs.working-directory }}
+working-directory: ${{ inputs.working-directory }}
-- name: Install dependencies
+- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
-- name: Install langchain editable
+- name: '📦 Install LangChain in Editable Mode'
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
-- name: Check doc imports
+- name: '🔍 Validate Documentation Import Statements'
shell: bash
run: |
uv run python docs/scripts/check_imports.py
-- name: Ensure the test did not create any additional files
+- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu


@@ -1,4 +1,6 @@
-name: test pydantic intermediate versions
+# Facilitate unit testing against different Pydantic versions for a provided package.
+name: '🐍 Pydantic Version Testing'
on:
workflow_call:
@@ -31,29 +33,32 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
steps:
-- uses: actions/checkout@v4
+- name: '📋 Checkout Code'
+uses: actions/checkout@v5
-- name: Set up Python ${{ inputs.python-version }} + uv
+- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
+cache-suffix: test-pydantic-${{ inputs.working-directory }}
+working-directory: ${{ inputs.working-directory }}
-- name: Install dependencies
+- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
-- name: Overwrite pydantic version
+- name: '🔄 Install Specific Pydantic Version'
shell: bash
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
-- name: Run core tests
+- name: '🧪 Run Core Tests'
shell: bash
run: |
make test
-- name: Ensure the tests did not create any additional files
+- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
@@ -63,4 +68,4 @@ jobs:
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
echo "$STATUS" | grep 'nothing to commit, working tree clean'


@@ -1,106 +0,0 @@
name: test-release
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
jobs:
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v4
- name: Set up Python + uv
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
publish:
needs:
- build
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false


@@ -1,32 +1,46 @@
-name: API docs build
+# Build the API reference documentation.
+#
+# Runs daily. Can also be triggered manually for immediate updates.
+#
+# Built HTML pushed to langchain-ai/langchain-api-docs-html.
+#
+# Looks for langchain-ai org repos in packages.yml and checks them out.
+# Calls prep_api_docs_build.py.
+name: '📚 API Docs'
+run-name: 'Build & Deploy API Reference'
on:
workflow_dispatch:
schedule:
-- cron: '0 13 * * *'
+- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
env:
PYTHON_VERSION: "3.11"
jobs:
# Only runs on main repository to prevent unnecessary builds on forks
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
-permissions: write-all
+permissions:
+contents: read
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
with:
path: langchain
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
-- name: Get repos with yq
+- name: '📋 Extract Repository List with yq'
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: |
# Extract repos from packages.yml that are in the langchain-ai org
# (excluding 'langchain' itself)
yq '
.packages[]
| select(
@@ -41,66 +55,97 @@ jobs:
| .repo
' langchain/libs/packages.yml
-- name: Parse YAML and checkout repos
+- name: '📋 Parse YAML & Checkout Repositories'
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
-# Checkout each unique repository that is in langchain-ai org
+# Checkout each unique repository
for repo in $REPOS; do
# Validate repository format (allow any org with proper format)
if [[ ! "$repo" =~ ^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository format: $repo"
exit 1
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
# Additional validation for repo name
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository name: $REPO_NAME"
exit 1
fi
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
-- name: Setup python ${{ env.PYTHON_VERSION }}
-uses: actions/setup-python@v5
+- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
+uses: actions/setup-python@v6
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
-- name: Install initial py deps
+- name: '📦 Install Initial Python Dependencies using uv'
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
-- name: Move libs with script
+- name: '📦 Organize Library Directories'
+# Places cloned partner packages into libs/partners structure
run: python langchain/.github/scripts/prep_api_docs_build.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-- name: Rm old html
+- name: '🧹 Clear Prior Build'
run:
# Remove artifacts from prior docs build
rm -rf langchain-api-docs-html/api_reference_build/html
-- name: Install dependencies
+- name: '📦 Install Documentation Dependencies using uv'
working-directory: langchain
run: |
# Install all partner packages in editable mode with overrides
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
# Install core langchain and other main packages
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
# Install Sphinx and related packages for building docs
python -m uv pip install -r docs/api_reference/requirements.txt
-- name: Set Git config
+- name: '🔧 Configure Git Settings'
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
-- name: Build docs
+- name: '📚 Build API Documentation'
working-directory: langchain
run: |
# Generate the API reference RST files
python docs/api_reference/create_api_rst.py
# Build the HTML documentation using Sphinx
# -T: show full traceback on exception
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
# -b html: build HTML docs (vs PDS, etc.)
# -d: path for the cached environment (parsed document trees / doctrees)
# - Separate from output dir for faster incremental builds
# -c: path to conf.py
# -j auto: parallel build using all available CPU cores
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
# Post-process the generated HTML
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
# Default index page is blank so we copy in the actual home page.
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
# Removes Sphinx's intermediate build artifacts after the build is complete.
rm -rf ../langchain-api-docs-html/_build/
# https://github.com/marketplace/actions/add-commit
# Commit and push changes to langchain-api-docs-html repo
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain-api-docs-html


@@ -1,9 +1,11 @@
-name: Check Broken Links
+# Runs broken link checker in /docs on a daily schedule.
+name: '🔗 Check Broken Links'
on:
workflow_dispatch:
schedule:
-- cron: '0 13 * * *'
+- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
@@ -13,16 +15,16 @@ jobs:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
-- name: Use Node.js 18.x
-uses: actions/setup-node@v4
+- uses: actions/checkout@v5
+- name: '🟢 Setup Node.js 18.x'
+uses: actions/setup-node@v5
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
-- name: Install dependencies
+- name: '📦 Install Node Dependencies'
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
-- name: Check broken links
+- name: '🔍 Scan Documentation for Broken Links'
run: yarn check-broken-links
working-directory: ./docs


@@ -1,4 +1,8 @@
-name: Check `langchain-core` version equality
+# Ensures version numbers in pyproject.toml and version.py stay in sync.
+#
+# (Prevents releases with mismatched version numbers)
+name: '🔍 Check Version Equality'
on:
pull_request:
@@ -14,19 +18,34 @@ jobs:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
-- name: Check version equality
+- name: '✅ Verify pyproject.toml & version.py Match'
run: |
-PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
-VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
+# Check core versions
+CORE_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
+CORE_VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)
-# Compare the two versions
-if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
+# Compare core versions
+if [ "$CORE_PYPROJECT_VERSION" != "$CORE_VERSION_PY_VERSION" ]; then
echo "langchain-core versions in pyproject.toml and version.py do not match!"
echo "pyproject.toml version: $PYPROJECT_VERSION"
echo "version.py version: $VERSION_PY_VERSION"
echo "pyproject.toml version: $CORE_PYPROJECT_VERSION"
echo "version.py version: $CORE_VERSION_PY_VERSION"
exit 1
else
echo "Versions match: $PYPROJECT_VERSION"
echo "Core versions match: $CORE_PYPROJECT_VERSION"
fi
# Check langchain_v1 versions
LANGCHAIN_PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/langchain_v1/pyproject.toml)
LANGCHAIN_INIT_PY_VERSION=$(grep -Po '(?<=^__version__ = ")[^"]*' libs/langchain_v1/langchain/__init__.py)
# Compare langchain_v1 versions
if [ "$LANGCHAIN_PYPROJECT_VERSION" != "$LANGCHAIN_INIT_PY_VERSION" ]; then
echo "langchain_v1 versions in pyproject.toml and __init__.py do not match!"
echo "pyproject.toml version: $LANGCHAIN_PYPROJECT_VERSION"
echo "version.py version: $LANGCHAIN_INIT_PY_VERSION"
exit 1
else
echo "Langchain v1 versions match: $LANGCHAIN_PYPROJECT_VERSION"
fi
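For reference, a hedged Python equivalent of the grep -Po checks above, shown for the langchain-core pair:

import pathlib
import re

pyproject = pathlib.Path("libs/core/pyproject.toml").read_text()
version_py = pathlib.Path("libs/core/langchain_core/version.py").read_text()
a = re.search(r'^version = "([^"]*)"', pyproject, re.M).group(1)
b = re.search(r'^VERSION = "([^"]*)"', version_py, re.M).group(1)
assert a == b, f"version mismatch: pyproject={a} version.py={b}"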


@@ -1,4 +1,19 @@
-name: CI
+# Primary CI workflow.
+#
+# Only runs against packages that have changed files.
+#
+# Runs:
+# - Linting (_lint.yml)
+# - Unit Tests (_test.yml)
+# - Pydantic compatibility tests (_test_pydantic.yml)
+# - Documentation import tests (_test_doc_imports.yml)
+# - Integration test compilation checks (_compile_integration_test.yml)
+# - Extended test suites that require additional dependencies
+# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
+#
+# Reports status to GitHub checks and PR status.
+name: '🔧 CI'
on:
push:
@@ -6,12 +21,13 @@ on:
pull_request:
merge_group:
-# Optimizes CI performance by canceling redundant workflow runs
+# If another push to the same PR or branch happens while this workflow is still running,
+# cancel the earlier run in favor of the next run.
+#
# There's no point in testing an outdated version of the code. GitHub only allows
-# a limited number of job runners to be active at the same time, so it's better to cancel
-# pointless jobs early so that more useful jobs can run sooner.
+# a limited number of job runners to be active at the same time, so it's better to
+# cancel pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -24,16 +40,24 @@ env:
UV_NO_SYNC: "true"
jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: 'Detect Changes & Set Matrix'
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
-- uses: actions/checkout@v4
-- uses: actions/setup-python@v5
+- name: '📋 Checkout Code'
+uses: actions/checkout@v5
+- name: '🐍 Setup Python 3.11'
+uses: actions/setup-python@v6
with:
python-version: '3.11'
-- id: files
+- name: '📂 Get Changed Files'
+id: files
uses: Ana06/get-changed-files@v2.3.0
-- id: set-matrix
+- name: '🔍 Analyze Changed Files & Generate Build Matrix'
+id: set-matrix
run: |
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
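The matrix outputs consumed by the jobs below come from lines check_diff.py writes to $GITHUB_OUTPUT. An illustrative sketch of that output shape (keys and values hypothetical, not the script's actual logic):

import json

# One JSON list per job type; downstream jobs read it via
# fromJson(needs.build.outputs.<key>) to build their strategy matrix.
changed_dirs = ["libs/core", "libs/partners/openai"]
matrix = [{"working-directory": d, "python-version": "3.11"} for d in changed_dirs]
print(f"lint={json.dumps(matrix)}")
print(f"test={json.dumps(matrix)}")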
@@ -45,8 +69,9 @@ jobs:
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
+codspeed: ${{ steps.set-matrix.outputs.codspeed }}
+# Run linting only on packages that have changed files
lint:
-name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
@@ -59,8 +84,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
+# Run unit tests only on packages that have changed files
test:
-name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
@@ -73,8 +98,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
+# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
-name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
@@ -95,12 +120,13 @@ jobs:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
-secrets: inherit
with:
python-version: ${{ matrix.job-configs.python-version }}
+secrets: inherit
+# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
-name: cd ${{ matrix.job-configs.working-directory }}
+name: 'Compile Integration Tests'
needs: [ build ]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
@@ -113,8 +139,9 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
+# Run extended test suites that require additional dependencies
extended-tests:
-name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
+name: 'Extended Tests'
needs: [ build ]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
@@ -128,14 +155,16 @@ jobs:
run:
working-directory: ${{ matrix.job-configs.working-directory }}
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v5
-- name: Set up Python ${{ matrix.job-configs.python-version }} + uv
+- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
-- name: Install dependencies and run extended tests
+- name: '📦 Install Dependencies & Run Extended Tests'
shell: bash
run: |
echo "Running extended tests, installing dependencies with uv..."
@@ -144,7 +173,7 @@ jobs:
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
-- name: Ensure the tests did not create any additional files
+- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
@@ -156,9 +185,72 @@ jobs:
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: '⚡ CodSpeed Benchmarks'
needs: [ build ]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}'
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic, codspeed]
if: |
always()
runs-on: ubuntu-latest
@@ -167,7 +259,7 @@ jobs:
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
- name: '🎉 All Checks Passed'
run: |
echo $JOBS_JSON
echo $RESULTS_JSON


@@ -1,4 +1,7 @@
-name: Integration docs lint
+# For integrations, we run check_templates.py to ensure that new docs use the correct
+# templates based on their type. See the script for more details.
+name: '📑 Integration Docs Lint'
on:
push:
@@ -22,8 +25,8 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
-- uses: actions/setup-python@v5
+- uses: actions/checkout@v5
+- uses: actions/setup-python@v6
with:
python-version: '3.10'
- id: files
@@ -33,6 +36,6 @@ jobs:
*.ipynb
*.md
*.mdx
-- name: Check new docs
+- name: '🔍 Check New Documentation Templates'
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}


@@ -1,35 +0,0 @@
name: CI / cd . / make spell_check
on:
push:
branches: [master, v0.1, v0.2]
pull_request:
permissions:
contents: read
jobs:
codespell:
name: (Check for spelling errors)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Dependencies
run: |
pip install toml
- name: Extract Ignore Words List
run: |
# Use a Python script to extract the ignore words list from pyproject.toml
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude


@@ -1,65 +0,0 @@
name: CodSpeed
on:
push:
branches:
- master
pull_request:
workflow_dispatch:
permissions:
contents: read
env:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
DEEPSEEK_API_KEY: foo
FIREWORKS_API_KEY: foo
jobs:
codspeed:
name: Run benchmarks
runs-on: ubuntu-latest
strategy:
matrix:
include:
- working-directory: libs/core
mode: walltime
- working-directory: libs/partners/openai
- working-directory: libs/partners/anthropic
- working-directory: libs/partners/deepseek
- working-directory: libs/partners/fireworks
- working-directory: libs/partners/xai
- working-directory: libs/partners/mistralai
- working-directory: libs/partners/groq
fail-fast: false
steps:
- uses: actions/checkout@v4
# We have to use 3.12 as 3.13 is not yet supported
- name: Install uv
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- name: Install dependencies
run: uv sync --group test
working-directory: ${{ matrix.working-directory }}
- name: Run benchmarks ${{ matrix.working-directory }}
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.working-directory }}
if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.mode || 'instrumentation' }}

@@ -1,10 +0,0 @@
import toml
pyproject_toml = toml.load("pyproject.toml")
# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")

@@ -1,8 +1,11 @@
name: LangChain People
# Updates the LangChain People data by fetching the latest contributor info from GitHub.
# TODO: broken/not used
name: '👥 LangChain People'
run-name: 'Update People Data'
on:
schedule:
- cron: "0 14 1 * *"
- cron: "0 14 1 * *" # Runs at 14:00 UTC on the 1st of every month (10AM EDT/7AM PDT)
push:
branches: [jacob/people]
workflow_dispatch:
@@ -14,13 +17,13 @@ jobs:
permissions:
contents: write
steps:
- name: Dump GitHub context
- name: '📋 Dump GitHub Context'
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
- name: '🔧 Fix Git Safe Directory in Container'
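# Writes a gitconfig marking /github/workspace as a safe directory so git commands work inside the container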
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
- uses: ./.github/actions/people
with:

.github/workflows/pr_labeler_file.yml vendored Normal file

@@ -0,0 +1,28 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` for the rules for each label/directory.
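#
# With actions/labeler v5+, a rule in that file looks roughly like this
# (illustrative only; the real labels/globs live in the config file):
#
#   core:
#     - changed-files:
#         - any-glob-to-any-file: 'libs/core/**'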
name: "🏷️ Pull Request Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false

.github/workflows/pr_labeler_title.yml vendored Normal file

@@ -0,0 +1,28 @@
# Label PRs based on their titles.
#
# See `.github/pr-title-labeler.yml` for the rules for each label/title pattern.
name: "🏷️ PR Title Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
pr-title-labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label PR based on title
# Archived repo; latest commit (v0.1.0)
uses: grafana/pr-labeler-action@f19222d3ef883d2ca5f04420fdfe8148003763f0
with:
token: ${{ secrets.GITHUB_TOKEN }}
configuration-path: .github/pr-title-labeler.yml

.github/workflows/pr_lint.yml vendored Normal file

@@ -0,0 +1,105 @@
# PR title linting.
#
# FORMAT (Conventional Commits 1.0.0):
#
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Examples:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Allowed Types:
# * feat — a new feature (MINOR)
# * fix — a bug fix (PATCH)
# * docs — documentation only changes (either in /docs or code comments)
# * style — formatting, linting, etc.; no code change or typing refactors
# * refactor — code change that neither fixes a bug nor adds a feature
# * perf — code change that improves performance
# * test — adding tests or correcting existing ones
# * build — changes that affect the build system/external dependencies
# * ci — continuous integration/configuration changes
# * chore — other changes that don't modify source or test files
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
#
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
name: '🏷️ PR Title Lint'
permissions:
pull-requests: read
on:
pull_request:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits 1.0.0 specification
lint-pr-title:
name: 'validate format'
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
types: |
feat
fix
docs
style
refactor
perf
test
build
ci
chore
revert
release
scopes: |
core
cli
langchain
langchain_v1
langchain_legacy
standard-tests
text-splitters
docs
anthropic
chroma
deepseek
exa
fireworks
groq
huggingface
mistralai
nomic
ollama
openai
perplexity
prompty
qdrant
xai
infra
requireScope: false
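# Disallow 'release' as a scope ('release' is a type here, not a scope) and any all-uppercase scope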
disallowScopes: |
release
[A-Z]+
ignoreLabels: |
ignore-lint-pr-title

@@ -1,5 +1,7 @@
name: Run notebooks
# Integration tests for documentation notebooks.
name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
workflow_dispatch:
inputs:
@@ -24,43 +26,45 @@ jobs:
build:
runs-on: ubuntu-latest
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: "Test docs"
name: '📑 Test Documentation Notebooks'
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
- name: Set up Python + uv
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
cache-suffix: run-notebooks-${{ github.event.inputs.working-directory || 'all' }}
working-directory: ${{ github.event.inputs.working-directory || '**' }}
- name: 'Authenticate to Google Cloud'
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
- name: '📦 Install Dependencies'
run: |
uv sync --group dev --group test
- name: Pre-download files
- name: '📦 Pre-download Test Files'
run: |
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: Prepare notebooks
- name: '🔧 Prepare Notebooks for CI'
run: |
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: Run notebooks
- name: '🚀 Execute Notebooks'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}

@@ -1,16 +1,23 @@
name: Scheduled tests
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.10, 3.13' }})"
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
workflow_dispatch:
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
description: "Python version to use - defaults to 3.10 and 3.13 in matrix - example value: 3.11"
schedule:
- cron: '0 13 * * *'
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
@@ -19,17 +26,19 @@ env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
POETRY_LIBS: ("libs/partners/aws")
jobs:
# Generate dynamic test matrix based on input parameters or defaults
# Only runs on the main repo (for scheduled runs) or when manually triggered
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: Compute matrix
name: '📋 Compute Test Matrix'
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set matrix
- name: '🔢 Generate Python & Library Matrix'
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
@@ -37,9 +46,9 @@ jobs:
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# python-version should default to 3.10 and 3.13, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.9", "3.11"]'
python_version='["3.10", "3.13"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
@@ -50,12 +59,14 @@ jobs:
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
# Tests are run with Poetry or UV depending on the library's setup
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 20
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
@@ -63,19 +74,19 @@ jobs:
working-directory: ${{ fromJSON(needs.compute-matrix.outputs.matrix).working-directory }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
path: langchain
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-google
path: langchain-google
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
- name: '📦 Organize External Libraries'
run: |
rm -rf \
langchain/libs/partners/google-genai \
@@ -84,7 +95,7 @@ jobs:
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }} with poetry
- name: '🐍 Set up Python ${{ matrix.python-version }} + Poetry'
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
@@ -93,40 +104,40 @@ jobs:
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: Set up Python ${{ matrix.python-version }} + uv
- name: '🐍 Set up Python ${{ matrix.python-version }} + UV'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: 'Authenticate to Google Cloud'
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies (poetry)
- name: '📦 Install Dependencies (Poetry)'
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Install dependencies (uv)
- name: '📦 Install Dependencies (UV)'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: Run integration tests
- name: '🚀 Run Integration Tests'
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -155,14 +166,15 @@ jobs:
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
- name: '🧹 Clean up External Libraries'
# Clean up external libraries to avoid affecting the following git status check
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
working-directory: langchain
run: |
set -eu

.github/workflows/v1_changes.md vendored Normal file

@@ -0,0 +1,9 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:
- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
- Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check_new_docs.yml
- check-broken-links.yml

.gitignore vendored

@@ -1,5 +1,4 @@
.vs/
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/

.markdownlint.json Normal file

@@ -0,0 +1,14 @@
{
"MD013": false,
"MD024": {
"siblings_only": true
},
"MD025": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD041": false,
"MD046": {
"style": "fenced"
}
}

@@ -1,111 +1,105 @@
repos:
- repo: local
hooks:
- id: core
name: format core
language: system
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format langchain
language: system
entry: make -C libs/langchain format
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format standard-tests
language: system
entry: make -C libs/standard-tests format
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format text-splitters
language: system
entry: make -C libs/text-splitters format
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format partners/anthropic
language: system
entry: make -C libs/partners/anthropic format
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format partners/chroma
language: system
entry: make -C libs/partners/chroma format
files: ^libs/partners/chroma/
pass_filenames: false
- id: couchbase
name: format partners/couchbase
language: system
entry: make -C libs/partners/couchbase format
files: ^libs/partners/couchbase/
pass_filenames: false
- id: exa
name: format partners/exa
language: system
entry: make -C libs/partners/exa format
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format partners/fireworks
language: system
entry: make -C libs/partners/fireworks format
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format partners/groq
language: system
entry: make -C libs/partners/groq format
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format partners/huggingface
language: system
entry: make -C libs/partners/huggingface format
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format partners/mistralai
language: system
entry: make -C libs/partners/mistralai format
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format partners/nomic
language: system
entry: make -C libs/partners/nomic format
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format partners/ollama
language: system
entry: make -C libs/partners/ollama format
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format partners/openai
language: system
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system
entry: make -C libs/partners/prompty format
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format partners/qdrant
language: system
entry: make -C libs/partners/qdrant format
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format docs, cookbook
language: system
entry: make format
files: ^(docs|cookbook)/
pass_filenames: false
- repo: local
hooks:
- id: core
name: format and lint core
language: system
entry: make -C libs/core format lint
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format and lint langchain
language: system
entry: make -C libs/langchain format lint
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format and lint standard-tests
language: system
entry: make -C libs/standard-tests format lint
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format and lint text-splitters
language: system
entry: make -C libs/text-splitters format lint
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format and lint partners/anthropic
language: system
entry: make -C libs/partners/anthropic format lint
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format and lint partners/chroma
language: system
entry: make -C libs/partners/chroma format lint
files: ^libs/partners/chroma/
pass_filenames: false
- id: exa
name: format and lint partners/exa
language: system
entry: make -C libs/partners/exa format lint
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format and lint partners/fireworks
language: system
entry: make -C libs/partners/fireworks format lint
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format and lint partners/groq
language: system
entry: make -C libs/partners/groq format lint
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format and lint partners/huggingface
language: system
entry: make -C libs/partners/huggingface format lint
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format and lint partners/mistralai
language: system
entry: make -C libs/partners/mistralai format lint
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format and lint partners/nomic
language: system
entry: make -C libs/partners/nomic format lint
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format and lint partners/ollama
language: system
entry: make -C libs/partners/ollama format lint
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format and lint partners/openai
language: system
entry: make -C libs/partners/openai format lint
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format and lint partners/prompty
language: system
entry: make -C libs/partners/prompty format lint
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format and lint partners/qdrant
language: system
entry: make -C libs/partners/qdrant format lint
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format and lint docs, cookbook
language: system
entry: make format lint
files: ^(docs|cookbook)/
pass_filenames: false

@@ -1,25 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt

.vscode/extensions.json vendored Normal file

@@ -0,0 +1,21 @@
{
"recommendations": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.mypy-type-checker",
"ms-toolsai.jupyter",
"ms-toolsai.jupyter-keymap",
"ms-toolsai.jupyter-renderers",
"ms-toolsai.vscode-jupyter-cell-tags",
"ms-toolsai.vscode-jupyter-slideshow",
"yzhang.markdown-all-in-one",
"davidanson.vscode-markdownlint",
"bierner.markdown-mermaid",
"bierner.markdown-preview-github-styles",
"eamodio.gitlens",
"github.vscode-pull-request-github",
"github.vscode-github-actions",
"redhat.vscode-yaml",
"editorconfig.editorconfig",
],
}

.vscode/settings.json vendored Normal file

@@ -0,0 +1,87 @@
{
"python.analysis.include": [
"libs/**",
"docs/**",
"cookbook/**"
],
"python.analysis.exclude": [
"**/node_modules",
"**/__pycache__",
"**/.pytest_cache",
"**/.*",
"_dist/**",
"docs/_build/**",
"docs/api_reference/_build/**"
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
"python.testing.cwd": "${workspaceFolder}",
"python.linting.enabled": true,
"python.linting.ruffEnabled": true,
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports.ruff": "explicit",
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "charliermarsh.ruff"
},
"editor.rulers": [
88
],
"editor.tabSize": 4,
"editor.insertSpaces": true,
"editor.trimAutoWhitespace": true,
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.exclude": {
"**/__pycache__": true,
"**/.pytest_cache": true,
"**/*.pyc": true,
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": false
},
"search.exclude": {
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
"yarn.lock": true
},
"git.autofetch": true,
"git.enableSmartCommit": true,
"jupyter.askForKernelRestart": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"[markdown]": {
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
}
},
"[yaml]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"[json]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"python.terminal.activateEnvironment": false,
"python.defaultInterpreterPath": "./.venv/bin/python",
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/workflows/pr_lint.yml"
}
]
}

AGENTS.md Normal file

@@ -0,0 +1,325 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
    pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
    return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested (invalid input, etc.)
- [ ] Fixtures/mocks are used for external dependencies
- [ ] Tests are deterministic (no flaky tests)
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`); assign error messages to a `msg` variable before raising
- Remove unreachable/commented code before committing
- Guard against race conditions and resource leaks; ensure file handles, sockets, and threads are cleaned up properly
**Bad:**
```python
def load_config(path):
    with open(path) as f:
        return eval(f.read())  # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```
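The `msg` variable convention from the checklist above, as a minimal sketch (the function and message are illustrative):
```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from a string."""
    if not value.isdigit():
        # Assign the message to `msg` first, then raise: the raise statement
        # stays clean and the message is easy to test and reuse.
        msg = f"Invalid port: {value!r}"
        raise ValueError(msg)
    return int(value)
```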
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Only document return values if they are non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
    # Function doing too many things
    validated = validate_data(data)
    result = db_conn.save(validated)
    email_client.send_notification(result)
    logger.log(f"Processed {len(data)} items")
    return result
```
**Better Design:**
```python
from dataclasses import dataclass, field


@dataclass
class ProcessingResult:
    """Result of data processing operation."""

    items_processed: int
    success: bool
    errors: list[str] = field(default_factory=list)


class DataProcessor:
    """Handles data validation, storage, and notification."""

    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: list[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
# Implementation here
return results
```
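A `@tool`-decorated function is then invoked through the tool interface rather than called directly; a minimal usage sketch (the query string is illustrative):
```python
result = search_database.invoke({"query": "recent signups"})
```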
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

CLAUDE.md Normal file
@@ -0,0 +1,325 @@
(New file; contents are identical to AGENTS.md above.)

@@ -7,5 +7,5 @@ Please see the following guides for migrating LangChain code:
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.

@@ -1,4 +1,4 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck lint lint_package lint_tests format format_diff
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
@@ -8,9 +8,6 @@ help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
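# (help scans this Makefile for '## target: description' lines and prints them sorted)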
## all: Default target, shows help.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
@@ -19,49 +16,67 @@ clean: docs_clean api_docs_clean
######################
## docs_build: Build the documentation.
docs_build:
docs_build: docs_clean
@echo "📚 Building LangChain documentation..."
cd docs && make build
@echo "✅ Documentation build complete!"
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@echo "🧹 Cleaning documentation artifacts..."
cd docs && make clean
@echo "✅ LangChain documentation cleaned"
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
uv run --no-group test linkchecker _dist/docs/ --ignore-url node_modules
@echo "🔗 Checking documentation links..."
@if [ -d _dist/docs ]; then \
uv run --group test linkchecker _dist/docs/ --ignore-url node_modules; \
else \
echo "⚠️ Documentation not built. Run 'make docs_build' first."; \
exit 1; \
fi
@echo "✅ Link check complete"
## api_docs_build: Build the API Reference documentation.
api_docs_build:
uv run --no-group test python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --no-group test make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
api_docs_build: clean
@echo "📖 Building API Reference documentation..."
uv pip install -e libs/cli
uv run --group docs python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "✅ API documentation built"
@echo "🌐 Opening documentation in browser..."
open docs/api_reference/_build/html/reference.html
API_PKG ?= text-splitters
api_docs_quick_preview:
uv run --no-group test python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
api_docs_quick_preview: clean
@echo "⚡ Building quick API preview for $(API_PKG)..."
uv run --group docs python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "🌐 Opening preview in browser..."
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
@echo "🧹 Cleaning API documentation artifacts..."
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
@echo "✅ API documentation cleaned"
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
uv run --no-group test linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
uv run --no-group test codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
uv run --no-group test codespell --toml pyproject.toml -w
@echo "🔗 Checking API documentation links..."
@if [ -f docs/api_reference/_build/html/index.html ]; then \
uv run --group test linkchecker docs/api_reference/_build/html/index.html; \
else \
echo "⚠️ API documentation not built. Run 'make api_docs_build' first."; \
exit 1; \
fi
@echo "✅ API link check complete"
######################
# LINTING AND FORMATTING
@@ -69,18 +84,24 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
@echo "🔍 Running code linting and checks..."
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
@echo "✅ Linting complete"
## format: Format the project files.
format format_diff:
@echo "🎨 Formatting project files..."
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --fix docs cookbook
@echo "✅ Formatting complete"
update-package-downloads:
@echo "📊 Updating package download statistics..."
uv run python docs/scripts/packages_yml_get_downloads.py
@echo "✅ Package downloads updated"

README.md

@@ -1,83 +1,75 @@
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
</p>
<div>
<br>
</div>
<p align="center">
The platform for reliable agents.
</p>
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-core?style=flat-square)](https://pypistats.org/packages/langchain-core)
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
<p align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank">
<img src="https://img.shields.io/pypi/l/langchain-core?style=flat-square" alt="PyPI - License">
</a>
<a href="https://pypistats.org/packages/langchain-core" target="_blank">
<img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
</a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">
</a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
</a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
</a>
<a href="https://twitter.com/langchainai" target="_blank">
<img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
</a>
</p>
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install -U langchain
```
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
---
**Documentation**: To learn more about LangChain, check out [the docs](https://python.langchain.com/docs/introduction/).
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum.
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/langgraph_platform/) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on navigating base packages and integrations for LangChain.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.

@@ -4,13 +4,14 @@ LangChain has a large ecosystem of integrations with various external resources
## Best practices
When building such applications developers should remember to follow good security practices:
When building such applications, developers should remember to follow good security practices:
* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, using sandboxing techniques (such as running inside a container), specifying proxy configurations to control external requests, etc., as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
@@ -21,65 +22,58 @@ Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials (a minimal sketch follows below).
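For illustration, here is a minimal sketch of the read-only, scoped-credentials pattern (assuming a Postgres database and the `SQLDatabase` helper from `langchain-community`; the role name, password, and table names are hypothetical):

```python
from langchain_community.utilities import SQLDatabase

# Connect with a hypothetical read-only role created ahead of time, e.g.:
#   CREATE ROLE reporting_ro LOGIN PASSWORD '...';
#   GRANT SELECT ON reports, customers TO reporting_ro;
db = SQLDatabase.from_uri(
    "postgresql://reporting_ro:secret@localhost:5432/appdb",
    include_tables=["reports", "customers"],  # expose only what the agent needs
)

print(db.get_usable_table_names())
```

Even if the model emits a `DROP TABLE` statement, the database role itself refuses the operation — an instance of the layered defense described above.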
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#supporting-packages) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
* langchain-core
* langchain (see exceptions)
* langchain-community (see exceptions)
* langgraph
* langserve
### Out-of-Scope Targets
All out-of-scope targets defined by huntr, as well as:
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)); bug reports to it will be
marked as interesting or a waste of time and published with no bounty attached.
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories:
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
* Code documented with security notices. This will be decided on a case-by-case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
* LangSmith site: [https://smith.langchain.com](https://smith.langchain.com)
* SDK client: [https://github.com/langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk)
### Other Security Concerns

View File

@@ -144,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "kWDWfSDBMPl8",
"metadata": {},
"outputs": [
@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-2.0-flash-lite-001\", max_tokens=1024\n",
" temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -235,7 +235,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "PeK9bzXv3olF",
"metadata": {},
"outputs": [],
@@ -254,7 +254,7 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model=\"gemini-2.0-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-2.5-flash\", max_tokens=1024)\n",
"\n",
" msg = model.invoke(\n",
" [\n",
@@ -431,7 +431,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "GlwCErBaCKQW",
"metadata": {},
"outputs": [],
@@ -553,7 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-2.0-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",

View File

@@ -63,4 +63,5 @@ Notebook | Description
[rag-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag-locally-on-intel-cpu.ipynb) | Perform Retrieval-Augmented-Generation (RAG) on locally downloaded open-source models using LangChain and open-source tools, executed on an Intel Xeon CPU. We show an example of applying RAG to a Llama 2 model, enabling it to answer queries related to the Intel Q1 2024 earnings release.
[visual_RAG_vdms.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/visual_RAG_vdms.ipynb) | Performs Visual Retrieval-Augmented-Generation (RAG) using videos and scene descriptions generated by open source models.
[contextual_rag.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/contextual_rag.ipynb) | Performs contextual retrieval-augmented generation (RAG) prepending chunk-specific explanatory context to each chunk before embedding.
[rag-agents-locally-on-intel-cpu.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/local_rag_agents_intel_cpu.ipynb) | Build a RAG agent locally with open source models that routes questions through one of two paths to find answers. The agent generates answers based on documents retrieved from either the vector database or web search; if the vector database lacks relevant information, the agent opts for web search. Open-source models for the LLM and embeddings are used locally on an Intel Xeon CPU to execute this pipeline.
[rag_mlflow_tracking_evaluation.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_mlflow_tracking_evaluation.ipynb) | Guide on how to create a RAG pipeline and track + evaluate it with MLflow.

View File

@@ -229,9 +229,9 @@
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
"assert all([column in df.columns for column in expected_columns]), (\n",
" \"DataFrame does not have the expected columns\"\n",
")"
]
},
{

View File

@@ -487,7 +487,7 @@
" print(\"*\" * 40)\n",
" print(\n",
" colored(\n",
" f\"After {i+1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" f\"After {i + 1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" \"blue\",\n",
" )\n",
" )\n",

View File

@@ -20,11 +20,7 @@
"cell_type": "markdown",
"id": "5939a54c-3198-4ba4-8346-1cc088c473c0",
"metadata": {},
"source": [
"##### You can embed text in the same VectorDB space as images, and retreive text and images as well based on input text or image.\n",
"##### Following link demonstrates that.\n",
"<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
]
"source": "##### You can embed text in the same VectorDB space as images, and retrieve text and images as well based on input text or image.\n##### Following link demonstrates that.\n<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
},
{
"cell_type": "markdown",
@@ -389,7 +385,7 @@
" ax = axs[idx]\n",
" ax.imshow(img)\n",
" # Assuming similarity is not available in the new data, removed sim_score\n",
" ax.title.set_text(f\"\\nProduct ID: {data[\"id\"]}\\n Score: {score}\")\n",
" ax.title.set_text(f\"\\nProduct ID: {data['id']}\\n Score: {score}\")\n",
" ax.axis(\"off\") # Turn off axis\n",
"\n",
" # Hide any remaining empty subplots\n",
@@ -600,4 +596,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -79,6 +79,17 @@
"tool_executor = ToolExecutor(tools)"
]
},
{
"cell_type": "markdown",
"id": "168152fc",
"metadata": {},
"source": [
"📘 **Note on `SystemMessage` usage with LangGraph-based agents**\n",
"\n",
"When constructing the `messages` list for an agent, you *must* manually include any `SystemMessage`s.\n",
"Unlike some agent executors in LangChain that set a default, LangGraph requires explicit inclusion."
]
},
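{
"cell_type": "code",
"execution_count": null,
"id": "168152fd",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical sketch (not part of the original notebook): explicitly include\n",
"# a SystemMessage when constructing the agent's message list.\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
"    SystemMessage(content=\"You are a helpful assistant.\"),\n",
"    HumanMessage(content=\"What is the weather in sf?\"),\n",
"]"
]
},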
{
"cell_type": "markdown",
"id": "fe6e8f78-1ef7-42ad-b2bf-835ed5850553",

View File

@@ -148,11 +148,11 @@
"\n",
" instructions = \"None\"\n",
" for i in range(max_meta_iters):\n",
" print(f\"[Episode {i+1}/{max_meta_iters}]\")\n",
" print(f\"[Episode {i + 1}/{max_meta_iters}]\")\n",
" chain = initialize_chain(instructions, memory=None)\n",
" output = chain.predict(human_input=task)\n",
" for j in range(max_iters):\n",
" print(f\"(Step {j+1}/{max_iters})\")\n",
" print(f\"(Step {j + 1}/{max_iters})\")\n",
" print(f\"Assistant: {output}\")\n",
" print(\"Human: \")\n",
" human_input = input()\n",

View File

@@ -183,7 +183,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for a Dungeons & Dragons game: {quest}.\n",
" The characters are: {*character_names,}.\n",
" The characters are: {(*character_names,)}.\n",
" The story is narrated by the storyteller, {storyteller_name}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
@@ -334,7 +334,7 @@
" You are the storyteller, {storyteller_name}.\n",
" Please make the quest more specific. Be creative and imaginative.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the characters: {*character_names,}.\n",
" Speak directly to the characters: {(*character_names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",

View File

@@ -200,7 +200,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for the presidential debate: {topic}.\n",
"The presidential candidates are: {', '.join(character_names)}.\"\"\"\n",
"The presidential candidates are: {\", \".join(character_names)}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of each presidential candidate.\"\n",
@@ -595,7 +595,7 @@
" Frame the debate topic as a problem to be solved.\n",
" Be creative and imaginative.\n",
" Please reply with the specified topic in {word_limit} words or less. \n",
" Speak directly to the presidential candidates: {*character_names,}.\n",
" Speak directly to the presidential candidates: {(*character_names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",

View File

@@ -395,8 +395,7 @@
"prompt_messages = [\n",
" SystemMessage(\n",
" content=(\n",
" \"You are a world class algorithm to answer \"\n",
" \"questions in a specific format.\"\n",
" \"You are a world class algorithm to answer questions in a specific format.\"\n",
" )\n",
" ),\n",
" HumanMessage(content=\"Answer question using the following context\"),\n",

View File

@@ -47,10 +47,12 @@
"source": [
"### Prerequisites\n",
"\n",
"Please install the Oracle Database [python-oracledb driver](https://pypi.org/project/oracledb/) to use LangChain with Oracle AI Vector Search:\n",
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
"\n",
"```\n",
"$ python -m pip install --upgrade oracledb\n",
"$ python -m pip install -U langchain-oracledb\n",
"```"
]
},
@@ -217,7 +219,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
@@ -296,7 +298,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
@@ -354,7 +356,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
@@ -395,7 +397,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
@@ -452,7 +454,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
@@ -498,14 +500,14 @@
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
"from langchain_oracledb.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.vectorstores import oraclevs\n",
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
@@ -677,19 +679,19 @@
"outputs": [],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"db_filter = {\"document_id\": \"1\"}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
@@ -697,7 +699,7 @@
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
" )\n",
")"
]

View File

@@ -552,9 +552,7 @@
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": [
"**Appending all retreived documents in a single document**"
]
"source": "**Appending all retrieved documents in a single document**"
},
{
"cell_type": "code",
@@ -758,4 +756,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,455 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3716230e",
"metadata": {},
"source": [
"# RAG Pipeline with MLflow Tracking, Tracing & Evaluation\n",
"\n",
"This notebook demonstrates how to build a complete Retrieval-Augmented Generation (RAG) pipeline using LangChain and integrate it with MLflow for experiment tracking, tracing, and evaluation.\n",
"\n",
"\n",
"- **RAG Pipeline Construction**: Build a complete RAG system using LangChain components\n",
"- **MLflow Integration**: Track experiments, parameters, and artifacts\n",
"- **Tracing**: Monitor inputs, outputs, retrieved documents, scores, prompts, and timings\n",
"- **Evaluation**: Use MLflow's built-in scorers to assess RAG performance\n",
"- **Best Practices**: Implement proper configuration management and reproducible experiments\n",
"\n",
"We'll build a RAG system that can answer questions about academic papers by:\n",
"1. Loading and chunking documents from ArXiv\n",
"2. Creating embeddings and a vector store\n",
"3. Setting up a retrieval-augmented generation chain\n",
"4. Tracking all experiments with MLflow\n",
"5. Evaluating the system's performance\n",
"\n",
"![System Diagram](https://miro.medium.com/v2/resize:fit:720/format:webp/1*eiw86PP4hrBBxhjTjP0JUQ.png)"
]
},
{
"cell_type": "markdown",
"id": "2f7561c4",
"metadata": {},
"source": [
"#### Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0814ebe9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U langchain mlflow langchain-community arxiv pymupdf langchain-text-splitters langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "747399b6",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import mlflow\n",
"from mlflow.genai.scorers import RelevanceToQuery, Correctness, ExpectationsGuidelines\n",
"from langchain_community.document_loaders import ArxivLoader\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_core.output_parsers import StrOutputParser"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4141ee05",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\"\n",
"\n",
"mlflow.set_experiment(\"LangChain-RAG-MLflow\")\n",
"mlflow.langchain.autolog()"
]
},
{
"cell_type": "markdown",
"id": "dd5eb41b",
"metadata": {},
"source": [
"Define all hyperparameters and configuration in a centralized dictionary. This makes it easy to:\n",
"- Track different experiment configurations\n",
"- Reproduce results\n",
"- Perform hyperparameter tuning\n",
"\n",
"**Key Parameters**:\n",
"- `chunk_size`: Size of text chunks for document splitting\n",
"- `chunk_overlap`: Overlap between consecutive chunks\n",
"- `retriever_k`: Number of documents to retrieve\n",
"- `embeddings_model`: OpenAI embedding model\n",
"- `llm`: Language model for generation\n",
"- `temperature`: Sampling temperature for the LLM"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6dcdc5d8",
"metadata": {},
"outputs": [],
"source": [
"CONFIG = {\n",
" \"chunk_size\": 400,\n",
" \"chunk_overlap\": 80,\n",
" \"retriever_k\": 3,\n",
" \"embeddings_model\": \"text-embedding-3-small\",\n",
" \"system_prompt\": \"You are a helpful assistant. Use the following context to answer the question. Use three sentences maximum and keep the answer concise.\",\n",
" \"llm\": \"gpt-5-nano\",\n",
" \"temperature\": 0,\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "8a2985f1",
"metadata": {},
"source": [
"#### ArXiv Dcoument Loading and Processing"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1f32aa36",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'Published': '2023-08-02', 'Title': 'Attention Is All You Need', 'Authors': 'Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin', 'Summary': 'The dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks in an encoder-decoder configuration. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer, based\\nsolely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to be\\nsuperior in quality while being more parallelizable and requiring significantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014\\nEnglish-to-German translation task, improving over the existing best results,\\nincluding ensembles by over 2 BLEU. On the WMT 2014 English-to-French\\ntranslation task, our model establishes a new single-model state-of-the-art\\nBLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction\\nof the training costs of the best models from the literature. We show that the\\nTransformer generalizes well to other tasks by applying it successfully to\\nEnglish constituency parsing both with large and limited training data.'}\n"
]
}
],
"source": [
"# Load documents from ArXiv\n",
"loader = ArxivLoader(\n",
" query=\"1706.03762\",\n",
" load_max_docs=1,\n",
")\n",
"docs = loader.load()\n",
"print(docs[0].metadata)\n",
"\n",
"# Split documents into chunks\n",
"splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=CONFIG[\"chunk_size\"],\n",
" chunk_overlap=CONFIG[\"chunk_overlap\"],\n",
")\n",
"chunks = splitter.split_documents(docs)\n",
"\n",
"\n",
"# Join chunks into a single string\n",
"def join_chunks(chunks):\n",
" return \"\\n\\n\".join([chunk.page_content for chunk in chunks])"
]
},
{
"cell_type": "markdown",
"id": "6e194ab4",
"metadata": {},
"source": [
"#### Vector Store and Retriever Setup"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "26dfbeaa",
"metadata": {},
"outputs": [],
"source": [
"# Create embeddings\n",
"embeddings = OpenAIEmbeddings(model=CONFIG[\"embeddings_model\"])\n",
"\n",
"# Create vector store from documents\n",
"vectorstore = InMemoryVectorStore.from_documents(\n",
" chunks,\n",
" embedding=embeddings,\n",
")\n",
"\n",
"# Create retriever\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": CONFIG[\"retriever_k\"]})"
]
},
{
"cell_type": "markdown",
"id": "bc1f181b",
"metadata": {},
"source": [
"#### RAG Chain Construction using [LCEL](https://python.langchain.com/docs/concepts/lcel/)\n",
"\n",
"Flow:\n",
"1. Query → Retriever (finds relevant chunks)\n",
"2. Chunks → join_chunks (creates context)\n",
"3. Context + Query → Prompt Template\n",
"4. Prompt → Language Model → Response\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6a810dc3",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the language model\n",
"llm = ChatOpenAI(model=CONFIG[\"llm\"], temperature=CONFIG[\"temperature\"])\n",
"\n",
"# Create the prompt template\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", CONFIG[\"system_prompt\"] + \"\\n\\nContext:\\n{context}\\n\\n\"),\n",
" (\"human\", \"\\n{question}\\n\"),\n",
" ]\n",
")\n",
"\n",
"# Construct the RAG chain\n",
"rag_chain = (\n",
" {\n",
" \"context\": retriever | RunnableLambda(join_chunks),\n",
" \"question\": RunnablePassthrough(),\n",
" }\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c04bd019",
"metadata": {},
"source": [
"#### Prediction Function with MLflow Tracing\n",
"\n",
"Create a prediction function decorated with `@mlflow.trace` to automatically log:\n",
"- Input queries\n",
"- Retrieved documents\n",
"- Generated responses\n",
"- Execution time\n",
"- Chain intermediate steps"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b45fc04",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: What is the main idea of the paper?\n",
"Response: The main idea is to replace recurrent/convolutional sequence models with a pure attention-based architecture called the Transformer. It uses self-attention to model dependencies between all positions in the input and output, enabling full parallelization and better handling of long-range relations. This approach achieves strong results on translation and can extend to other modalities.\n"
]
}
],
"source": [
"@mlflow.trace\n",
"def predict_fn(question: str) -> str:\n",
" return rag_chain.invoke(question)\n",
"\n",
"\n",
"# Test the prediction function\n",
"sample_question = \"What is the main idea of the paper?\"\n",
"response = predict_fn(sample_question)\n",
"print(f\"Question: {sample_question}\")\n",
"print(f\"Response: {response}\")"
]
},
{
"cell_type": "markdown",
"id": "421469de",
"metadata": {},
"source": [
"#### Evaluation Dataset and Scoring\n",
"\n",
"Define an evaluation dataset and run systematic evaluation using [MLflow's built-in scorers](https://mlflow.org/docs/latest/genai/eval-monitor/scorers/llm-judge/predefined/#available-scorers):\n",
"\n",
"<u>Evaluation Components:</u>\n",
"- **Dataset**: Questions with expected concepts and facts\n",
"- **Scorers**: \n",
" - `RelevanceToQuery`: Measures how relevant the response is to the question\n",
" - `Correctness`: Evaluates factual accuracy of the response\n",
" - `ExpectationsGuidelines`: Checks that output matches expectation guidelines\n",
"\n",
"<u>Best Practices:</u>\n",
"- Create diverse test cases covering different query types\n",
"- Include expected concepts to guide evaluation\n",
"- Use multiple scoring metrics for comprehensive assessment"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5c1dc4f2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025/08/23 20:14:39 INFO mlflow.models.evaluation.utils.trace: Auto tracing is temporarily enabled during the model evaluation for computing some metrics and debugging. To disable tracing, call `mlflow.autolog(disable=True)`.\n",
"2025/08/23 20:14:39 INFO mlflow.genai.utils.data_validation: Testing model prediction with the first sample in the dataset.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "2b6c6687efa24796b39c7951d589d481",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Evaluating: 0%| | 0/3 [Elapsed: 00:00, Remaining: ?] "
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"✨ Evaluation completed.\n",
"\n",
"Metrics and evaluation results are logged to the MLflow run:\n",
" Run name: \u001b[94mbaseline_eval\u001b[0m\n",
" Run ID: \u001b[94ma2218d9f24c9415f8040d3b77af103a9\u001b[0m\n",
"\n",
"To view the detailed evaluation results with sample-wise scores,\n",
"open the \u001b[93m\u001b[1mTraces\u001b[0m tab in the Run page in the MLflow UI.\n",
"\n"
]
}
],
"source": [
"# Define evaluation dataset\n",
"eval_dataset = [\n",
" {\n",
" \"inputs\": {\"question\": \"What is the main idea of the paper?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"attention mechanism\", \"transformer\", \"neural network\"],\n",
" \"expected_facts\": [\n",
" \"attention mechanism is a key component of the transformer model\"\n",
" ],\n",
" \"guidelines\": [\"The response must be factual and concise\"],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\n",
" \"question\": \"What's the difference between a transformer and a recurrent neural network?\"\n",
" },\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"sequential\", \"attention mechanism\", \"hidden state\"],\n",
" \"expected_facts\": [\n",
" \"transformer processes data in parallel while RNN processes data sequentially\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and focus on the difference between the two models\"\n",
" ],\n",
" },\n",
" },\n",
" {\n",
" \"inputs\": {\"question\": \"What does the attention mechanism do?\"},\n",
" \"expectations\": {\n",
" \"key_concepts\": [\"query\", \"key\", \"value\", \"relationship\", \"similarity\"],\n",
" \"expected_facts\": [\n",
" \"attention allows the model to weigh the importance of different parts of the input sequence when processing it\"\n",
" ],\n",
" \"guidelines\": [\n",
" \"The response must be factual and explain the concept of attention\"\n",
" ],\n",
" },\n",
" },\n",
"]\n",
"\n",
"# Run evaluation with MLflow\n",
"with mlflow.start_run(run_name=\"baseline_eval\") as run:\n",
" # Log configuration parameters\n",
" mlflow.log_params(CONFIG)\n",
"\n",
" # Run evaluation\n",
" results = mlflow.genai.evaluate(\n",
" data=eval_dataset,\n",
" predict_fn=predict_fn,\n",
" scorers=[RelevanceToQuery(), Correctness(), ExpectationsGuidelines()],\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "52b137c7",
"metadata": {},
"source": [
"#### Launch MLflow UI to check out the results\n",
"\n",
"<u>What you'll see in the UI:</u>\n",
"- **Experiments**: Compare different RAG configurations\n",
"- **Runs**: Individual experiment runs with metrics and parameters\n",
"- **Traces**: Detailed execution traces showing retrieval and generation steps\n",
"- **Evaluation Results**: Scoring metrics and detailed comparisons\n",
"- **Artifacts**: Saved models, datasets, and other files\n",
"\n",
"Navigate to `http://localhost:5000` after running the command below."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "817c3799",
"metadata": {},
"outputs": [],
"source": [
"!mlflow ui"
]
},
{
"cell_type": "markdown",
"id": "c75861e3",
"metadata": {},
"source": [
"You should see something like this\n",
"\n",
"![MLflow UI image](https://miro.medium.com/v2/resize:fit:720/format:webp/1*Cx7MMy53pAP7150x_hvztA.png)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -34,7 +34,7 @@
"tools = [multiply, exponentiate, add]\n",
"\n",
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\").bind_tools(tools)\n",
"llm_with_tools = gpt35.configurable_alternatives(\n",
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
")"
@@ -113,14 +113,15 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'function': {'arguments': '{\"x\": 3, \"y\": 5}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 75, 'prompt_tokens': 131, 'total_tokens': 206, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm2qxSWU3oTTSZQv64J4XQKZhA6', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--35fad027-47f7-44d3-aa8b-99f4fc24098c-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'type': 'tool_call'}, {'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'type': 'tool_call'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'type': 'tool_call'}], usage_metadata={'input_tokens': 131, 'output_tokens': 75, 'total_tokens': 206, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n",
" ToolMessage(content='8.0', tool_call_id='call_xuNXwm2P6U2Pp2pAbC1sdIBz'),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_0pImUJUDlYa5zfBcxxuvWyYS'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_yaownQ9TZK0dkqD1KSFyax4H'),\n",
" AIMessage(content='The results are:\\n1. 3 plus 5 is 8.\\n2. 5 raised to the power of 2.743 is approximately 300.04.\\n3. 17.24 minus 918.1241 is approximately -900.88.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 55, 'prompt_tokens': 236, 'total_tokens': 291, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm345MYnpowGS90iAZAlSs7haed', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--5fa66d47-d80e-45d0-9c32-31348c735d72-0', usage_metadata={'input_tokens': 236, 'output_tokens': 55, 'total_tokens': 291, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -146,17 +147,17 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content=[{'text': \"I'll solve these calculations for you.\\n\\nFor the first part, I need to calculate 3 plus 5 raised to the power of 2.743.\\n\\nLet me break this down:\\n1) First, I'll calculate 5 raised to the power of 2.743\\n2) Then add 3 to the result\", 'type': 'text'}, {'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'input': {'x': 5, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01HCbDmuzdg9ATMyKbnecbEE', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 563, 'output_tokens': 146, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--9f6469fb-bcbb-4c1c-9eec-79f6979c38e6-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 5, 'y': 2.743}, 'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'type': 'tool_call'}], usage_metadata={'input_tokens': 563, 'output_tokens': 146, 'total_tokens': 709, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='82.65606421491815', tool_call_id='toolu_01L1mXysBQtpPLQ2AZTaCGmE'),\n",
" AIMessage(content=[{'text': \"Now I'll add 3 to this result:\", 'type': 'text'}, {'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'input': {'x': 3, 'y': 82.65606421491815}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01ELwyCtVLeGC685PUFqmdz2', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 727, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--d5af3d7c-e8b7-4cc2-997a-ad2dafd08751-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 82.65606421491815}, 'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'type': 'tool_call'}], usage_metadata={'input_tokens': 727, 'output_tokens': 87, 'total_tokens': 814, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='85.65606421491815', tool_call_id='toolu_01NARC83e9obV35mZ6jYzBiN'),\n",
" AIMessage(content=[{'text': \"For the second part, you asked for 17.24 - 918.1241. I don't have a subtraction function available, but I can rewrite this as adding a negative number: 17.24 + (-918.1241)\", 'type': 'text'}, {'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01WkmDwUxWjjaKGnTtdLGJnN', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 832, 'output_tokens': 130, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--39a6fbda-4c81-47a6-b361-524bd4ee5823-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'type': 'tool_call'}], usage_metadata={'input_tokens': 832, 'output_tokens': 130, 'total_tokens': 962, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01Q6fLcZkBWZpMPCZ55WXR3N'),\n",
" AIMessage(content='So, the answers are:\\n1) 3 plus 5 raised to the 2.743 = 85.65606421491815\\n2) 17.24 - 918.1241 = -900.8841', additional_kwargs={}, response_metadata={'id': 'msg_015Yoc62CvdJbANGFouiQ6AQ', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 978, 'output_tokens': 58, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--174c0882-6180-47ea-8f63-d7b747302327-0', usage_metadata={'input_tokens': 978, 'output_tokens': 58, 'total_tokens': 1036, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})]}"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -177,7 +178,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -191,7 +192,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.16"
}
},
"nbformat": 4,

View File

@@ -227,7 +227,7 @@
"outputs": [],
"source": [
"conversation_description = f\"\"\"Here is the topic of conversation: {topic}\n",
"The participants are: {', '.join(names.keys())}\"\"\"\n",
"The participants are: {\", \".join(names.keys())}\"\"\"\n",
"\n",
"agent_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of the conversation participant.\"\n",
@@ -396,7 +396,7 @@
" You are the moderator.\n",
" Please make the topic more specific.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the participants: {*names,}.\n",
" Speak directly to the participants: {(*names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",

View File

@@ -1,9 +1,9 @@
# We build the docs in these stages:
# 1. Install vercel and python dependencies
# 2. Copy files from "source dir" to "intermediate dir"
# 3. Generate files like model feat table, etc. in "intermediate dir"
# 4. Copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 5. Build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
@@ -18,32 +18,45 @@ PORT ?= 3001
clean:
rm -rf build
clean-cache:
rm -rf build .venv/deps_installed
install-vercel-deps:
yum -y -q update
yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
@echo "📦 Installing Python dependencies..."
@if [ ! -d .venv ]; then python3 -m venv .venv; fi
@if [ ! -f .venv/deps_installed ]; then \
$(PYTHON) -m pip install -q --upgrade pip --disable-pip-version-check; \
$(PYTHON) -m pip install -q --upgrade uv; \
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt; \
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt; \
touch .venv/deps_installed; \
fi
@echo "✅ Dependencies installed"
generate-files:
@echo "📄 Generating documentation files..."
mkdir -p $(INTERMEDIATE_DIR)
cp -rp $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
@if [ ! -f build/langserve_readme_cache.md ] || [ $$(find build/langserve_readme_cache.md -mtime +1 -print) ]; then \
echo "🌐 Downloading LangServe README..."; \
curl -s https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > build/langserve_readme_cache.md; \
fi
cp build/langserve_readme_cache.md $(INTERMEDIATE_DIR)/langserve.md
cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
@echo "🔧 Generating feature tables and processing links..."
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/ & \
wait
@echo "✅ Files generated"
copy-infra:
@echo "📂 Copying infrastructure files..."
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
@@ -55,15 +68,22 @@ copy-infra:
cp -r static $(OUTPUT_NEW_DIR)
cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
cp yarn.lock $(OUTPUT_NEW_DIR)
@echo "✅ Infrastructure files copied"
render:
@echo "📓 Converting notebooks (this may take a while)..."
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Notebooks converted"
md-sync:
@echo "📝 Syncing markdown files..."
rsync -avmq --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Markdown files synced"
append-related:
@echo "🔗 Appending related links..."
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Related links appended"
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
@@ -71,6 +91,10 @@ generate-references:
update-md: generate-files md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
@echo ""
@echo "🎉 Documentation build complete!"
@echo "📖 To view locally, run: cd docs && make start"
@echo ""
vercel-build: install-vercel-deps build generate-references
rm -rf docs
@@ -84,4 +108,9 @@ vercel-build: install-vercel-deps build generate-references
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
start:
@echo "🚀 Starting documentation server on port $(PORT)..."
@echo "📖 Installing Node.js dependencies..."
cd $(OUTPUT_NEW_DIR) && yarn install --silent
@echo "🌐 Starting server at http://localhost:$(PORT)"
@echo "Press Ctrl+C to stop the server"
cd $(OUTPUT_NEW_DIR) && yarn start --port=$(PORT)

View File

@@ -1,3 +1,154 @@
# LangChain Documentation
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation).
## Structure
The primary documentation is located in the `docs/` directory. This directory contains
both the source files for the main documentation and the API reference doc
build process.
### API Reference
API reference documentation is located in `docs/api_reference/` and is generated from
the codebase using Sphinx.
The API reference has additional build steps that differ from the main documentation.
#### Deployment Process
Currently, the build process roughly follows these steps:
1. Using the `api_doc_build.yml` GitHub workflow, the API reference docs are
[built](#build-technical-details) and copied to the `langchain-api-docs-html`
repository. This workflow is triggered either (1) on a routine cron interval or (2)
manually.
In short, the workflow extracts all `langchain-ai`-org-owned repos defined in
`langchain/libs/packages.yml`, clones them locally (in the workflow runner's file
system), and then builds the API reference RST files (using `create_api_rst.py`).
Following post-processing, the HTML files are pushed to the
`langchain-api-docs-html` repository.
2. After the HTML files are in the `langchain-api-docs-html` repository, they are **not**
automatically published to the [live docs site](https://python.langchain.com/api_reference/).
The docs site is served by Vercel. The Vercel deployment process copies the HTML
files from the `langchain-api-docs-html` repository and deploys them to the live
site. Deployments are triggered on each new commit pushed to `master`.
#### Build Technical Details
The build process creates a virtual monorepo by syncing multiple repositories, then generates comprehensive API documentation:
1. **Repository Sync Phase:**
- `.github/scripts/prep_api_docs_build.py` - Clones external partner repos and organizes them into the `libs/partners/` structure to create a virtual monorepo for documentation building
2. **RST Generation Phase:**
- `docs/api_reference/create_api_rst.py` - Main script that **generates RST files** from Python source code
- Scans `libs/` directories and extracts classes/functions from each module (using `inspect`)
- Creates `.rst` files using specialized templates for different object types
- Templates in `docs/api_reference/templates/` (`pydantic.rst`, `runnable_pydantic.rst`, etc.)
3. **HTML Build Phase:**
- Sphinx-based, uses `sphinx.ext.autodoc` (auto-extracts docstrings from the codebase)
- `docs/api_reference/conf.py` (sphinx config) configures `autodoc` and other extensions
- `sphinx-build` processes the generated `.rst` files into HTML using autodoc
- `docs/api_reference/scripts/custom_formatter.py` - Post-processes the generated HTML
- Copies `reference.html` to `index.html` to create the default landing page (artifact? might not need to do this - just put everything in index directly?)
4. **Deployment:**
- `.github/workflows/api_doc_build.yml` - Workflow responsible for orchestrating the entire build and deployment process
- Built HTML files are committed and pushed to the `langchain-api-docs-html` repository
#### Local Build
For local development and testing of API documentation, use the Makefile targets in the repository root:
```bash
# Full build
make api_docs_build
```
Like the CI process, this target:
- Installs the CLI package in editable mode
- Generates RST files for all packages using `create_api_rst.py`
- Builds HTML documentation with Sphinx
- Post-processes the HTML with `custom_formatter.py`
- Opens the built documentation (`reference.html`) in your browser
**Quick Preview:**
```bash
make api_docs_quick_preview API_PKG=openai
```
- Generates RST files for only the specified package (default: `text-splitters`)
- Builds and post-processes HTML documentation
- Opens the preview in your browser
Both targets automatically clean previous builds and handle the complete build pipeline locally, mirroring the CI process but for faster iteration during development.
#### Documentation Standards
**Docstring Format:**
The API reference uses **Google-style docstrings** with reStructuredText markup. Sphinx processes these through the `sphinx.ext.napoleon` extension to generate documentation.
**Required format:**
```python
def example_function(param1: str, param2: int = 5) -> bool:
    """Brief description of the function.

    Longer description can go here. Use reStructuredText syntax for
    rich formatting like **bold** and *italic*.

    Args:
        param1: Description of the first parameter.
        param2: Description of the second parameter with default value.

    Returns:
        Description of the return value.

    Raises:
        ValueError: When param1 is empty.
        TypeError: When param2 is not an integer.

    .. warning::
        This function is experimental and may change.
    """
```

TODO: code examples in docstrings: figure out what works.
**Special Markers:**
- `:private:` in docstrings excludes members from documentation (see the sketch below)
- `.. warning::` adds warning admonitions
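
For example, a helper like the following (a minimal sketch; `_normalize` is a hypothetical function) would be excluded from the generated reference:

```python
def _normalize(data: dict) -> dict:
    """Normalize intermediate data before rendering.

    :private:
    """
    # Hidden twice over: the leading underscore already marks it private, and
    # the ':private:' marker makes the autodoc skip hook drop it explicitly.
    return {key.lower(): value for key, value in data.items()}
```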
#### Site Styling and Assets
**Theme and Styling:**
- Uses [**PyData Sphinx Theme**](https://pydata-sphinx-theme.readthedocs.io/en/stable/index.html) (`pydata_sphinx_theme`); a condensed `conf.py` sketch follows the asset list below
- Custom CSS in `docs/api_reference/_static/css/custom.css` with LangChain-specific:
- Color palette
- Inter font family
- Custom navbar height and sidebar formatting
- Deprecated/beta feature styling
**Static Assets:**
- Logos: `_static/wordmark-api.svg` (light) and `_static/wordmark-api-dark.svg` (dark mode)
- Favicon: `_static/img/brand/favicon.png`
- Custom CSS: `_static/css/custom.css`
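
In `conf.py` terms, the theming reduces to a handful of settings. A condensed sketch (standard Sphinx option names; not copied verbatim from the file):

```python
# Condensed sketch of the theme-related settings in docs/api_reference/conf.py.
html_theme = "pydata_sphinx_theme"
html_static_path = ["_static"]       # serves the _static/ assets listed above
html_css_files = ["css/custom.css"]  # LangChain palette, Inter font, navbar tweaks
html_favicon = "_static/img/brand/favicon.png"
```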
**Post-Processing:**
- `scripts/custom_formatter.py` cleans up generated HTML (sketch below):
- Shortens TOC entries from `ClassName.method()` to `method()`
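
A minimal sketch of that transformation, assuming BeautifulSoup (which the docs requirements include) and a hypothetical sidebar selector:

```python
from bs4 import BeautifulSoup


def shorten_toc_entries(html: str) -> str:
    """Rewrite sidebar TOC entries like 'ClassName.method()' to 'method()'."""
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.select("nav a"):  # selector is an assumption
        text = link.get_text(strip=True)
        if "." in text and text.endswith("()"):
            link.string = text.rsplit(".", 1)[-1]
    return str(soup)
```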
**Analytics and Integration:**
- GitHub integration (source links, edit buttons)
- Example backlinking through custom `ExampleLinksDirective`

```diff
@@ -50,7 +50,7 @@ class GalleryGridDirective(SphinxDirective):
         individual cards + ["image", "header", "content", "title"].

     Danger:
-        This directive can only be used in the context of a Myst documentation page as
+        This directive can only be used in the context of a MyST documentation page as
         the templates use Markdown flavored formatting.
     """
@@ -108,7 +108,7 @@ class GalleryGridDirective(SphinxDirective):
         # Parse the template with Sphinx Design to create an output container
         # Prep the options for the template grid
-        class_ = "gallery-directive" + f' {self.options.get("class-container", "")}'
+        class_ = "gallery-directive" + f" {self.options.get('class-container', '')}"
         options = {"gutter": 2, "class-container": class_}
         options_str = "\n".join(f":{k}: {v}" for k, v in options.items())
```

```diff
@@ -1,7 +1,5 @@
-# Configuration file for the Sphinx documentation builder.
-#
+"""Configuration file for the Sphinx documentation builder."""
 # This file only contains a selection of the most common options. For a full
 # list see the documentation:
 # https://www.sphinx-doc.org/en/master/usage/configuration.html
@@ -20,16 +18,18 @@ from docutils.parsers.rst.directives.admonitions import BaseAdmonition
 from docutils.statemachine import StringList
 from sphinx.util.docutils import SphinxDirective

-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
+# Add paths to Python import system so Sphinx can import LangChain modules
+# This allows autodoc to introspect and document the actual code
 _DIR = Path(__file__).parent.absolute()
-sys.path.insert(0, os.path.abspath("."))
-sys.path.insert(0, os.path.abspath("../../libs/langchain"))
+sys.path.insert(0, os.path.abspath("."))  # Current directory
+sys.path.insert(0, os.path.abspath("../../libs/langchain"))  # LangChain main package

+# Load package metadata from pyproject.toml (for version info, etc.)
 with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
     data = toml.load(f)

+# Load mapping of classes to example notebooks for backlinking
+# This file is generated by scripts that scan our tutorial/example notebooks
 with (_DIR / "guide_imports.json").open("r") as f:
     imported_classes = json.load(f)
@@ -86,6 +86,7 @@ class Beta(BaseAdmonition):
 def setup(app):
+    """Register custom directives and hooks with Sphinx."""
     app.add_directive("example_links", ExampleLinksDirective)
     app.add_directive("beta", Beta)
     app.connect("autodoc-skip-member", skip_private_members)
@@ -97,7 +98,7 @@ def skip_private_members(app, what, name, obj, skip, options):
     if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
         return True
     if name == "__init__" and obj.__objclass__ is object:
-        # dont document default init
+        # don't document default init
         return True
     return None
@@ -125,7 +126,7 @@ extensions = [
     "sphinx.ext.viewcode",
     "sphinxcontrib.autodoc_pydantic",
     "IPython.sphinxext.ipython_console_highlighting",
-    "myst_parser",
+    "myst_parser",  # For generated index.md and reference.md
     "_extensions.gallery_directive",
     "sphinx_design",
     "sphinx_copybutton",
@@ -258,19 +259,22 @@ html_static_path = ["_static"]
 html_css_files = ["css/custom.css"]
 html_use_index = False

+# Only used on the generated index.md and reference.md files
 myst_enable_extensions = ["colon_fence"]

 # generate autosummary even if no references
 autosummary_generate = True

+# Don't fail on autosummary import warnings
 autosummary_ignore_module_all = False

 html_copy_source = False
 html_show_sourcelink = False

-googleanalytics_id = "G-9B66JQQH2F"
-
 # Set canonical URL from the Read the Docs Domain
 html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")

+googleanalytics_id = "G-9B66JQQH2F"
+
 # Tell Jinja2 templates the build is running on Read the Docs
 if os.environ.get("READTHEDOCS", "") == "True":
     html_context["READTHEDOCS"] = True
```

````diff
@@ -1,4 +1,41 @@
-"""Script for auto-generating api_reference.rst."""
+"""Auto-generate API reference documentation (RST files) for LangChain packages.
+
+* Automatically discovers all packages in `libs/` and `libs/partners/`
+* For each package, recursively walks the filesystem to:
+    * Load Python modules using importlib
+    * Extract classes and functions using Python's inspect module
+    * Classify objects by type (Pydantic models, Runnables, TypedDicts, etc.)
+    * Filter out private members (names starting with '_') and deprecated items
+* Creates structured RST files with:
+    * Module-level documentation pages with autosummary tables
+    * Different Sphinx templates based on object type (see templates/ directory)
+    * Proper cross-references and navigation structure
+    * Separation of current vs deprecated APIs
+* Generates a directory tree like:
+
+```
+docs/api_reference/
+├── index.md        # Main landing page with package gallery
+├── reference.md    # Package overview and navigation
+├── core/           # langchain-core documentation
+│   ├── index.rst
+│   ├── callbacks.rst
+│   └── ...
+├── langchain/      # langchain documentation
+│   ├── index.rst
+│   └── ...
+└── partners/       # Integration packages
+    ├── openai/
+    ├── anthropic/
+    └── ...
+```
+
+## Key Features
+
+* Respects privacy markers:
+    * Modules with `:private:` in docstring are excluded entirely
+    * Objects with `:private:` in docstring are filtered out
+    * Names starting with '_' are treated as private
+"""
 import importlib
 import inspect
@@ -97,7 +134,7 @@ def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
         if type(type_) is typing_extensions._TypedDictMeta:  # type: ignore
             kind: ClassKind = "TypedDict"
         elif type(type_) is typing._TypedDictMeta:  # type: ignore
-            kind: ClassKind = "TypedDict"
+            kind = "TypedDict"
         elif (
             issubclass(type_, Runnable)
             and issubclass(type_, BaseModel)
@@ -177,19 +214,20 @@ def _load_package_modules(
     Traversal based on the file system makes it easy to determine which
     of the modules/packages are part of the package vs. 3rd party or built-in.

-    Parameters:
-        package_directory (Union[str, Path]): Path to the package directory.
-        submodule (Optional[str]): Optional name of submodule to load.
+    Args:
+        package_directory: Path to the package directory.
+        submodule: Optional name of submodule to load.

     Returns:
-        Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
+        A dictionary where keys are module names and values are `ModuleMembers`
+        objects.
     """
     package_path = (
         Path(package_directory)
         if isinstance(package_directory, str)
         else package_directory
     )
-    modules_by_namespace = {}
+    modules_by_namespace: Dict[str, ModuleMembers] = {}

     # Get the high level package name
     package_name = package_path.name
@@ -199,9 +237,16 @@ def _load_package_modules(
         package_path = package_path / submodule

     for file_path in package_path.rglob("*.py"):
         # Skip private modules
         if file_path.name.startswith("_"):
             continue

+        # Skip integration_template and project_template directories (for libs/cli)
+        if "integration_template" in file_path.parts:
+            continue
+        if "project_template" in file_path.parts:
+            continue
+
         relative_module_name = file_path.relative_to(package_path)

         # Skip if any module part starts with an underscore
@@ -209,8 +254,13 @@ def _load_package_modules(
             continue

         # Get the full namespace of the module
+        # Example: langchain_core/schema/output_parsers.py ->
+        #   langchain_core.schema.output_parsers
         namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
+
         # Keep only the top level namespace
+        # Example: langchain_core.schema.output_parsers -> langchain_core
         top_namespace = namespace.split(".")[0]

         try:
@@ -247,16 +297,16 @@ def _construct_doc(
     members_by_namespace: Dict[str, ModuleMembers],
     package_version: str,
 ) -> List[typing.Tuple[str, str]]:
-    """Construct the contents of the reference.rst file for the given package.
+    """Construct the contents of the `reference.rst` for the given package.

     Args:
         package_namespace: The package top level namespace
-        members_by_namespace: The members of the package, dict organized by top level
-            module contains a list of classes and functions
-            inside of the top level namespace.
+        members_by_namespace: The members of the package dict organized by top level.
+            Module contains a list of classes and functions inside of the top level
+            namespace.

     Returns:
-        The contents of the reference.rst file.
+        The string contents of the reference.rst file.
     """
     docs = []
     index_doc = f"""\
@@ -267,7 +317,7 @@ def _construct_doc(
 .. _{package_namespace}:

 ======================================
-{package_namespace.replace('_', '-')}: {package_version}
+{package_namespace.replace("_", "-")}: {package_version}
 ======================================

 .. automodule:: {package_namespace}
@@ -277,7 +327,7 @@ def _construct_doc(
 .. toctree::
     :hidden:
     :maxdepth: 2
-"""
+"""

     index_autosummary = """
 """
@@ -325,7 +375,7 @@ def _construct_doc(
         index_autosummary += f"""
 :ref:`{package_namespace}_{module}`
-{'^' * (len(package_namespace) + len(module) + 8)}
+{"^" * (len(package_namespace) + len(module) + 8)}
 """

         if classes:
@@ -359,12 +409,12 @@ def _construct_doc(
             module_doc += f"""\
     :template: {template}

     {class_["qualified_name"]}
 """
             index_autosummary += f"""
-    {class_['qualified_name']}
+    {class_["qualified_name"]}
 """

         if functions:
@@ -427,7 +477,7 @@ def _construct_doc(
 """
             index_autosummary += f"""
-    {class_['qualified_name']}
+    {class_["qualified_name"]}
 """

         if deprecated_functions:
@@ -459,10 +509,13 @@ def _construct_doc(
 def _build_rst_file(package_name: str = "langchain") -> None:
-    """Create a rst file for building of documentation.
+    """Create a rst file for a given package.

     Args:
-        package_name: Can be either "langchain" or "core" or "experimental".
+        package_name: Name of the package to create the rst file for.

     Returns:
         The rst file is created in the same directory as this script.
     """
     package_dir = _package_dir(package_name)
     package_members = _load_package_modules(package_dir)
@@ -481,7 +534,7 @@ def _package_namespace(package_name: str) -> str:
     """Returns the package name used.

     Args:
-        package_name: Can be either "langchain" or "core" or "experimental".
+        package_name: Can be either "langchain" or "core"

     Returns:
         modified package_name: Can be either "langchain" or "langchain_{package_name}"
@@ -494,16 +547,11 @@ def _package_namespace(package_name: str) -> str:
 def _package_dir(package_name: str = "langchain") -> Path:
-    """Return the path to the directory containing the documentation."""
-    if package_name in (
-        "langchain",
-        "experimental",
-        "community",
-        "core",
-        "cli",
-        "text-splitters",
-        "standard-tests",
-    ):
+    """Return the path to the directory containing the documentation.
+
+    Attempts to find the package in `libs/` first, then `libs/partners/`.
+    """
+    if (ROOT_DIR / "libs" / package_name).exists():
         return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
     else:
         return (
@@ -516,7 +564,7 @@ def _package_dir(package_name: str = "langchain") -> Path:
 def _get_package_version(package_dir: Path) -> str:
-    """Return the version of the package."""
+    """Return the version of the package by reading the `pyproject.toml`."""
     try:
         with open(package_dir.parent / "pyproject.toml", "r") as f:
             pyproject = toml.load(f)
@@ -542,18 +590,33 @@ def _out_file_path(package_name: str) -> Path:
 def _build_index(dirs: List[str]) -> None:
     """Build the index.md file for the API reference.

+    Args:
+        dirs: List of package directories to include in the index.
+
+    Returns:
+        The index.md file is created in the same directory as this script.
     """
     custom_names = {
         "aws": "AWS",
         "ai21": "AI21",
         "ibm": "IBM",
     }
-    ordered = ["core", "langchain", "text-splitters", "community", "experimental"]
+    ordered = [
+        "core",
+        "langchain",
+        "text-splitters",
+        "community",
+        "standard-tests",
+    ]
     main_ = [dir_ for dir_ in ordered if dir_ in dirs]
     integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)

     doc = """# LangChain Python API Reference

-Welcome to the LangChain Python API reference. This is a reference for all
-`langchain-x` packages.
+Welcome to the LangChain Python API reference. This is a reference for all
+`langchain-x` packages.

 For user guides see [https://python.langchain.com](https://python.langchain.com).
@@ -592,7 +655,12 @@ For the legacy API reference hosted on ReadTheDocs see [https://api.python.langc
     if integrations:
         integration_headers = [
             " ".join(
-                custom_names.get(x, x.title().replace("ai", "AI").replace("db", "DB"))
+                custom_names.get(
+                    x,
+                    x.title().replace("db", "DB")
+                    if dir_ == "langchain_v1"
+                    else x.title().replace("ai", "AI").replace("db", "DB"),
+                )
                 for x in dir_.split("-")
             )
             for dir_ in integrations
@@ -638,9 +706,14 @@ See the full list of integrations in the Section Navigation.
 {integration_tree}

 ```
 """
+    # Write the reference.md file
     with open(HERE / "reference.md", "w") as f:
         f.write(doc)

+    # Write a dummy index.md file that points to reference.md
+    # Sphinx requires an index file to exist in each doc directory
+    # TODO: investigate why we don't just put everything in index.md directly?
+    # if it works it works I guess
     dummy_index = """\
 # API reference
@@ -656,34 +729,30 @@ Reference<reference>
 def main(dirs: Optional[list] = None) -> None:
-    """Generate the api_reference.rst file for each package."""
-    print("Starting to build API reference files.")
+    """Generate the `api_reference.rst` file for each package.
+
+    If dirs is None, generate for all packages in `libs/` and `libs/partners/`.
+    Otherwise generate only for the specified package(s).
+    """
     if not dirs:
         dirs = [
-            dir_
-            for dir_ in os.listdir(ROOT_DIR / "libs")
-            if dir_ not in ("cli", "partners", "packages.yml")
-            and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / dir_)
+            p.parent.name
+            for p in (ROOT_DIR / "libs").rglob("pyproject.toml")
+            # Exclude packages that are not directly under libs/ or libs/partners/
+            if p.parent.parent.name in ("libs", "partners")
         ]
-        dirs += [
-            dir_
-            for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
-            if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
-            and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
-        ]
-    for dir_ in dirs:
-        # Skip any hidden directories
+    for dir_ in sorted(dirs):
+        # Skip any hidden directories prefixed with a dot
         # Some of these could be present by mistake in the code base
-        # e.g., .pytest_cache from running tests from the wrong location.
+        # (e.g., .pytest_cache from running tests from the wrong location)
         if dir_.startswith("."):
             print("Skipping dir:", dir_)
             continue
         else:
-            print("Building package:", dir_)
+            print("Building:", dir_)
             _build_rst_file(package_name=dir_)
-    _build_index(dirs)
-    print("API reference files built.")
+    _build_index(sorted(dirs))

 if __name__ == "__main__":
````

File diff suppressed because one or more lines are too long

```diff
@@ -1,12 +1,12 @@
 autodoc_pydantic>=2,<3
 sphinx>=8,<9
-myst-parser>=3
 sphinx-autobuild>=2024
-pydata-sphinx-theme>=0.15
-toml>=0.10.2
-myst-nb>=1.1.1
-pyyaml
 sphinx-design
 sphinx-copybutton
-beautifulsoup4
 sphinxcontrib-googleanalytics
+pydata-sphinx-theme>=0.15
+myst-parser>=3
+myst-nb>=1.1.1
+toml>=0.10.2
+pyyaml
+beautifulsoup4
```

```diff
@@ -1,3 +1,10 @@
+"""Post-process generated HTML files to clean up table-of-contents headers.
+
+Runs after Sphinx generates the API reference HTML. It finds TOC entries like
+"ClassName.method_name()" and shortens them to just "method_name()" for better
+readability in the sidebar navigation.
+"""
 import sys
 from glob import glob
 from pathlib import Path
```

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff