Compare commits

327 Commits

Author SHA1 Message Date
Mason Daugherty
52d68745a9 lint 2025-07-30 20:49:40 -04:00
copilot-swe-agent[bot]
e5dc9e9afd Remove sentence-transformers dependency from LangChain
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
2025-07-31 00:23:21 +00:00
copilot-swe-agent[bot]
369417c3e7 Initial plan 2025-07-31 00:07:44 +00:00
Mason Daugherty
32e5040a42 chore: add CLAUDE.md (#32334) 2025-07-30 23:04:45 +00:00
ccurme
a9e52ca605 chore(openai): bump openai sdk (#32322) 2025-07-30 10:58:18 -04:00
Kanav Bansal
e2bc8f19c0 docs(docs): update RAG tutorials link across multiple vector store docs (AstraDB, DatabricksVectorSearch, FAISS, Redis, etc.) (#32301)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following vector store docs.

1. AstraDBVectorStore
2. Clickhouse
3. CouchbaseSearchVectorStore
4. DatabricksVectorSearch
5. ElasticsearchStore
6. FAISS
7. Milvus
8. MongoDBAtlasVectorSearch
9. openGauss
10. PGVector
11. PGVectorStore
12. PineconeVectorStore
13. QdrantVectorStore
14. Redis
15. SQLServer

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-30 09:46:01 -04:00
Mason Daugherty
fbd5a238d8 fix(core): revert "fix: tool call streaming bug with inconsistent indices from Qwen3" (#32307)
Reverts langchain-ai/langchain#32160

The original issue stems from using `ChatOpenAI` to interact with a `qwen`
model. It is recommended to use
[langchain-qwq](https://python.langchain.com/docs/integrations/chat/qwq/),
which is built for Qwen models.
2025-07-29 10:26:38 -04:00
HerrDings
fc2f66ca80 docs: fixed link to docs of unstructured (#32306)
In the section [How to load documents from a
directory](https://python.langchain.com/docs/how_to/document_loader_directory/)
there is a link to the docs of *unstructured*. When you click this link,
it tells you that the page has moved. Accordingly, this PR fixes the link
directly in the LangChain docs

from: `https://unstructured-io.github.io/unstructured/#`
to: `https://docs.unstructured.io/`
2025-07-29 10:12:22 -04:00
Mason Daugherty
0e287763cd fix: lint 2025-07-28 18:49:43 -04:00
Copilot
0b56c1bc4b fix: tool call streaming bug with inconsistent indices from Qwen3 (#32160)
Fixes a streaming bug where models like Qwen3 (using OpenAI interface)
send tool call chunks with inconsistent indices, resulting in
duplicate/erroneous tool calls instead of a single merged tool call.

## Problem

When Qwen3 streams tool calls, it sends chunks with inconsistent `index`
values:
- First chunk: `index=1` with tool name and partial arguments  
- Subsequent chunks: `index=0` with `name=None`, `id=None` and argument
continuation

The existing `merge_lists` function only merges chunks when their
`index` values match exactly, causing these logically related chunks to
remain separate, resulting in multiple incomplete tool calls instead of
one complete tool call.

```python
# Before fix: Results in 1 valid + 1 invalid tool call
chunk1 = AIMessageChunk(tool_call_chunks=[
    {"name": "search", "args": '{"query":', "id": "call_123", "index": 1}
])
chunk2 = AIMessageChunk(tool_call_chunks=[
    {"name": None, "args": ' "test"}', "id": None, "index": 0}  
])
merged = chunk1 + chunk2  # Creates 2 separate tool calls

# After fix: Results in 1 complete tool call
merged = chunk1 + chunk2  # Creates 1 merged tool call: search({"query": "test"})
```

## Solution

Enhanced the `merge_lists` function in `langchain_core/utils/_merge.py`
with intelligent tool call chunk merging:

1. **Preserves existing behavior**: Same-index chunks still merge as
before
2. **Adds special handling**: Tool call chunks with
`name=None`/`id=None` that don't match any existing index are now merged
with the most recent complete tool call chunk
3. **Maintains backward compatibility**: All existing functionality
works unchanged
4. **Targeted fix**: Only affects tool call chunks, doesn't change
behavior for other list items

The fix specifically handles the pattern where:
- A continuation chunk has `name=None` and `id=None` (indicating it's
part of an ongoing tool call)
- No matching index is found in existing chunks
- There exists a recent tool call chunk with a valid name or ID to merge
with
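
A minimal sketch of that merge rule (chunk shapes assumed from the example above; the real logic lives in `langchain_core/utils/_merge.py`):

```python
def merge_tool_call_chunk(existing: list[dict], new: dict) -> None:
    """Sketch: merge one streamed tool call chunk into the accumulated list."""
    for chunk in existing:
        if chunk.get("index") == new.get("index"):
            # Same-index chunks merge as before (existing behavior).
            chunk["args"] = (chunk.get("args") or "") + (new.get("args") or "")
            return
    if new.get("name") is None and new.get("id") is None and existing:
        # Orphaned continuation: fold its args into the most recent chunk
        # carrying a valid name or ID.
        existing[-1]["args"] = (existing[-1].get("args") or "") + (new.get("args") or "")
        return
    existing.append(new)  # otherwise it's a genuinely new tool call
```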

## Testing

Added comprehensive test coverage including:
- ✅ Qwen3-style chunks with different indices now merge correctly
- ✅ Existing same-index behavior preserved
- ✅ Multiple distinct tool calls remain separate
- ✅ Edge cases handled (empty chunks, orphaned continuations)
- ✅ Backward compatibility maintained

Fixes #31511.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-28 22:31:41 +00:00
Copilot
ad88e5aaec fix(core): resolve cache validation error by safely converting Generation to ChatGeneration objects (#32156)
## Problem

ChatLiteLLM encounters a `ValidationError` when using cache on
subsequent calls, causing the following error:

```
ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': "unexpected value; permitted: 'ChatGeneration'", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}])
```

This occurs because:
1. The cache stores `Generation` objects (with `type="Generation"`)
2. But `ChatResult` expects `ChatGeneration` objects (with
`type="ChatGeneration"` and a required `message` field)
3. When cached values are retrieved, validation fails due to the type
mismatch

## Solution

Added graceful handling in both sync (`_generate_with_cache`) and async
(`_agenerate_with_cache`) cache methods to:

1. **Detect** when cached values contain `Generation` objects instead of
expected `ChatGeneration` objects
2. **Convert** them to `ChatGeneration` objects by wrapping the text
content in an `AIMessage`
3. **Preserve** all original metadata (`generation_info`)
4. **Allow** `ChatResult` creation to succeed without validation errors
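
A minimal sketch of that conversion (assumed helper; the actual change lives in the cache-retrieval paths):

```python
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration, Generation

def ensure_chat_generation(gen: Generation) -> ChatGeneration:
    """Wrap a cached Generation's text in an AIMessage, keeping metadata."""
    if isinstance(gen, ChatGeneration):
        return gen
    return ChatGeneration(
        message=AIMessage(content=gen.text),
        generation_info=gen.generation_info,
    )
```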

## Example

```python
# Before: This would fail with ValidationError
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path="cache.db"))
llm = ChatLiteLLM(model_name="openai/gpt-4o", cache=True, temperature=0)

print(llm.predict("test"))  # Works fine (cache empty)
print(llm.predict("test"))  # Now works instead of ValidationError

# After: Seamlessly handles both Generation and ChatGeneration objects
```

## Changes

- **`libs/core/langchain_core/language_models/chat_models.py`**:
  - Added `Generation` import from `langchain_core.outputs`
  - Enhanced cache retrieval logic in `_generate_with_cache` and `_agenerate_with_cache` methods
  - Added conversion from `Generation` to `ChatGeneration` objects when needed

- **`libs/core/tests/unit_tests/language_models/chat_models/test_cache.py`**:
  - Added a test case to validate that the conversion logic handles mixed object types

## Impact

- **Backward Compatible**: Existing code continues to work unchanged
- **Minimal Change**: Only affects cache retrieval path, no API changes
- **Robust**: Handles both legacy cached `Generation` objects and new
`ChatGeneration` objects
- **Preserves Data**: All original content and metadata is maintained
during conversion

Fixes #22389.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-28 22:28:16 +00:00
Mason Daugherty
30e3ed6a19 fix: add space in run-name for better readability 2025-07-28 17:46:27 -04:00
Mason Daugherty
8641a95c43 fix: update run-name in scheduled_test.yml to include dynamic inputs 2025-07-28 17:45:05 -04:00
Mason Daugherty
df70c5c186 chore: update actions run-names and add default inputs (#32293) 2025-07-28 17:33:27 -04:00
Mason Daugherty
d5ca77e065 fix: remove erroneous rocket emoji in run-name 2025-07-28 17:11:14 -04:00
Mason Daugherty
b7e4797e8b release(anthropic): 0.3.18 (#32292) 2025-07-28 17:07:11 -04:00
Mason Daugherty
3a487bf720 refactor(anthropic): AnthropicLLM to use Messages API (#32290)
re: #32189
2025-07-28 16:22:58 -04:00
Mason Daugherty
e5fd67024c fix: update link text for reporting security vulnerabilities in SECURITY.md 2025-07-28 15:05:31 -04:00
Mason Daugherty
b86841ac40 fix: update alt attribute for GitHub Codespace badge in README 2025-07-28 15:04:57 -04:00
Mason Daugherty
8db16b5633 fix: use new Google model names in examples (#32288) 2025-07-28 19:03:42 +00:00
Mason Daugherty
6f10160a45 fix: scripts/ errors 2025-07-28 15:03:25 -04:00
Mason Daugherty
e79e0bd6b4 fix(openai): add max_retries parameter to ChatOpenAI for handling 503 capacity errors (#32286)
Some integration tests were failing
2025-07-28 13:58:23 -04:00
ccurme
c55294ecb0 chore(core): add test for nested pydantic fields in schemas (#32285) 2025-07-28 17:27:24 +00:00
Mason Daugherty
7a26c3d233 fix: update bar_model to use the correct model version claude-3-7-sonnet-20250219 (#32284) 2025-07-28 12:57:40 -04:00
Mason Daugherty
c6ffac3ce0 refactor: mdx lint (#32282) 2025-07-28 12:56:22 -04:00
Mason Daugherty
a07d2c5016 refactor: remove references to unsupported model claude-3-sonnet-20240229 (#32281)
Addresses some (but not all) test issues brought about in #32280
2025-07-28 11:57:43 -04:00
Aleksandr Filippov
f0b6baa0ef fix(core): track within-batch deduplication in indexing num_skipped count (#32273)
**Description:** Fixes incorrect `num_skipped` count in the LangChain
indexing API. The current implementation only counts documents that
already exist in RecordManager (cross-batch duplicates) but fails to
count documents removed during within-batch deduplication via
`_deduplicate_in_order()`.

This PR adds tracking of the original batch size before deduplication
and includes the difference in `num_skipped`, ensuring that `num_added +
num_skipped` equals the total number of input documents.
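
A sketch of that accounting (names assumed; `dict.fromkeys` stands in for `_deduplicate_in_order()`):

```python
def index_batch(doc_batch: list[str], num_skipped: int = 0) -> tuple[list[str], int]:
    """Sketch: count within-batch duplicates toward num_skipped."""
    batch_size_before = len(doc_batch)
    deduped = list(dict.fromkeys(doc_batch))  # stand-in for _deduplicate_in_order()
    num_skipped += batch_size_before - len(deduped)
    return deduped, num_skipped

docs, skipped = index_batch(["a", "b", "a"])
assert skipped == 1  # so num_added + num_skipped == total input documents
```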

**Issue:** Fixes incorrect document count reporting in indexing
statistics

**Dependencies:** None

Fixes #32272

---------

Co-authored-by: Alex Feel <afilippov@spotware.com>
2025-07-28 09:58:51 -04:00
Mason Daugherty
12c0e9b7d8 fix(docs): local API reference documentation build (#32271)
ensure all relevant packages are correctly processed - cli wasn't
included, also fix ValueError
2025-07-28 00:50:20 -04:00
Mason Daugherty
ed682ae62d fix: explicitly tell uv to copy when using devcontainer (#32267) 2025-07-28 00:01:06 -04:00
Mason Daugherty
caf1919217 fix: devcontainer to use volume to store the workspace (#32266)
should resolve the file sharing issue for users on macOS.
2025-07-27 23:43:06 -04:00
Mason Daugherty
904066f1ec feat: add VSCode configuration files for Python development (#32263) 2025-07-27 23:37:59 -04:00
Mason Daugherty
96cbd90cba fix: formatting issues in docstrings (#32265)
Ensures proper reStructuredText formatting by adding the required blank
line before closing docstring quotes, which resolves the "Block quote
ends without a blank line; unexpected unindent" warning.
2025-07-27 23:37:47 -04:00
Mason Daugherty
a8a2cff129 Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-27 23:34:59 -04:00
Mason Daugherty
f4ff4514ef fix: update workspace folder path in devcontainer configuration 2025-07-27 23:34:57 -04:00
Mason Daugherty
d1679cec91 chore: add .editorconfig for consistent coding styles across files (#32261)
Following existing codebase conventions
2025-07-27 23:25:30 -04:00
Mason Daugherty
5295f2add0 fix: update dev container name to match service name 2025-07-27 22:30:16 -04:00
Mason Daugherty
5f5b87e9a3 fix: update service name in devcontainer configuration 2025-07-27 22:28:47 -04:00
Mason Daugherty
e0ef98dac0 feat: add markdownlint configuration file (#32264) 2025-07-27 22:24:58 -04:00
Mason Daugherty
62212c7ee2 fix: update links in SECURITY.md to use markdown format 2025-07-27 21:54:25 -04:00
Mason Daugherty
9d38f170ce refactor: enhance workflow names and descriptions for clarity (#32262) 2025-07-27 21:31:59 -04:00
Mason Daugherty
c6cb1fae61 fix: devcontainer (#32260) 2025-07-27 20:24:16 -04:00
Kanav Bansal
e42b1d23dc docs(docs): update RAG tutorials link to point to correct path (#32256)
- **Description:** This PR updates the internal documentation link for
the RAG tutorials to reflect the updated path. Previously, the link
pointed to the root `/docs/tutorials/`, which was generic. It now
correctly routes to the RAG-specific tutorial page.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-27 20:00:41 -04:00
Mason Daugherty
53d0bfe9cd refactor: markdownlint (#32259) 2025-07-27 20:00:16 -04:00
Mason Daugherty
eafab52483 refactor: markdownlint SECURITY.md (#32258) 2025-07-27 19:55:25 -04:00
Christophe Bornet
efdfa00d10 chore(langchain): add ruff rules ARG (#32110)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-26 18:32:34 -04:00
Christophe Bornet
a2ad5aca41 chore(langchain): add ruff rules TC (#31921)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
2025-07-26 18:27:26 -04:00
Mason Daugherty
5ecbb5f277 fix(docs): temporary workaround until the underlying dependency issues in the AI21 package ecosystem are resolved. (#32248) 2025-07-25 15:12:44 -04:00
Mason Daugherty
c1028171af fix(docs): update protobuf version constraint to <5.0 in vercel_overrides.txt (#32247) 2025-07-25 15:08:44 -04:00
ccurme
f6236d9f12 fix(infra): add pypdf to vercel overrides (#32242)
>   × No solution found when resolving dependencies:
>   ╰─▶ Because only langchain-neo4j==0.5.0 is available and langchain-neo4j==0.5.0
>       depends on neo4j-graphrag>=1.9.0, we can conclude that all versions of
>       langchain-neo4j depend on neo4j-graphrag>=1.9.0.
>       And because only neo4j-graphrag<=1.9.0 is available and neo4j-graphrag==1.9.0
>       depends on pypdf>=5.1.0,<6.0.0, we can conclude that all versions of
>       langchain-neo4j depend on pypdf>=5.1.0,<6.0.0.
>       And because langchain-upstage==0.6.0 depends on pypdf>=4.2.0,<5.0.0 and only
>       langchain-upstage==0.6.0 is available, we can conclude that all versions of
>       langchain-neo4j and all versions of langchain-upstage are incompatible.
>       And because you require langchain-neo4j and langchain-upstage, we can
>       conclude that your requirements are unsatisfiable.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-25 15:05:21 -04:00
Mason Daugherty
df20f111a8 fix(docs): add validation for repository format and name in API docs build workflow (#32246)
for build
2025-07-25 15:05:06 -04:00
Eugene Yurtsev
db22311094 ci(infra): no need for . in the regexp (#32245)
No need for allowing `.`
2025-07-25 15:02:02 -04:00
Mason Daugherty
f624ad489a feat(docs): improve devx, fix Makefile targets (#32237)
**TL;DR much of the provided `Makefile` targets were broken, and any
time I wanted to preview changes locally I either had to refer to a
command Chester gave me or try waiting on a Vercel preview deployment.
With this PR, everything should behave like normal.**

Significant updates to the `Makefile` and documentation files, focusing
on improving usability, adding clear messaging, and fixing/enhancing
documentation workflows.

### Updates to `Makefile`:

#### Enhanced build and cleaning processes:
- Added informative messages (e.g., "📚 Building LangChain
documentation...") to makefile targets like `docs_build`, `docs_clean`,
and `api_docs_build` for better user feedback during execution.
- Introduced a `clean-cache` target to the `docs` `Makefile` to clear
cached dependencies and ensure clean builds.

#### Improved dependency handling:
- Modified `install-py-deps` to create a `.venv/deps_installed` marker,
preventing redundant/duplicate dependency installations and improving
efficiency.

#### Streamlined file generation and infrastructure setup:
- Added caching for the LangServe README download and parallelized
feature table generation
- Added user-friendly completion messages for targets like `copy-infra`
and `render`.

#### Documentation server updates:
- Enhanced the `start` target with messages indicating server start and
URL for local documentation viewing.

---

### Documentation Improvements:

#### Content clarity and consistency:
- Standardized section titles for consistency across documentation files.
- Refined phrasing and formatting in sections like "Dependency management" and "Formatting and linting" for better readability.

#### Enhanced workflows:
- Updated instructions for building and viewing documentation locally, including tips for specifying server ports and handling API reference previews.
- Expanded guidance on cleaning documentation artifacts and using linting tools effectively.

#### API reference documentation:
- Improved instructions for generating and formatting in-code documentation, highlighting best practices for docstring writing.

---

### Minor Changes:
- Added support for a new package name (`langchain_v1`) in the API
documentation generation script.
- Fixed minor capitalization and formatting issues in documentation files.

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-25 14:49:03 -04:00
Eugene Yurtsev
549ecd3e78 chore(infra): harden api docs build workflow (#32243)
Harden permissions for api docs build workflow
2025-07-25 14:40:20 -04:00
dishaprakash
a0671676ae feat(docs): add PGVectorStore (#30950)
- **Adding documentation for PGVectorStore**: documentation for the new
PGVectorStore, added as part of langchain-postgres.

- **Add docs**: The notebook for PGVectorStore is now added to the
directory `docs/docs/integrations`. As part of this change, we've also
updated the VectorStore features table and VectorStoreTabs.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 13:22:58 -04:00
Christophe Bornet
12ae42c5e9 chore(langchain): add ruff rules D1 (except D100 and D104) (#32123) 2025-07-25 11:59:48 -04:00
Christophe Bornet
e1238b8085 chore(langchain): add ruff rules SLF (#32112)
See https://docs.astral.sh/ruff/rules/private-member-access/
2025-07-25 11:56:40 -04:00
Chaitanya varma
8f5ec20ccf chore(langchain): strip_ansi function to remove ANSI escape sequences (#32200)
**Description:** 
Fixes a bug in the file callback test where ANSI escape codes were
causing test failures. The improved test now properly handles ANSI
escape sequences by:
- Using exact string comparison instead of substring checking
- Applying the `strip_ansi` function consistently to all file contents
- Adding descriptive assertion messages
- Maintaining test coverage and backward compatibility

The changes ensure tests pass reliably even when terminal control
sequences are present in the output
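
A minimal sketch of such a helper (the regex is an assumption; the actual `strip_ansi` may cover more sequence types):

```python
import re

ANSI_ESCAPE_RE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI escape sequences (colors, cursor movement) from text."""
    return ANSI_ESCAPE_RE.sub("", text)

assert strip_ansi("\x1b[31mred\x1b[0m text") == "red text"
```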

**Issue:** Fixes #32150

**Dependencies:** None required - uses existing dependencies only.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-25 15:53:19 +00:00
niceg
0d6f915442 fix: LLM mimicking Unicode responses due to forced Unicode conversion of non-ASCII characters. (#32222)

- **Description:** This PR fixes an issue where the LLM would mimic
Unicode responses due to forced Unicode conversion of non-ASCII
characters in tool calls. The fix involves disabling the `ensure_ascii`
flag in `json.dumps()` when converting tool calls to OpenAI format.
- **Issue:** Fixes ↓↓↓
input:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "你好啊集团"}'}}]}
```
output:
```json
{'role': 'assistant', 'tool_calls': [{'type': 'function', 'id': 'call_nv9trcehdpihr21zj9po19vq', 'function': {'name': 'create_customer', 'arguments': '{"customer_name": "\\u4f60\\u597d\\u554a\\u96c6\\u56e2"}'}}]}
```
The LLM then mimics these Unicode escapes in its own output. Such escape
sequences lengthen LLM responses, leading to slower performance.
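
The fix is a one-flag change to the serialization call; a quick demonstration:

```python
import json

tool_args = {"customer_name": "你好啊集团"}

# Default behavior: non-ASCII characters are escaped to \uXXXX sequences
print(json.dumps(tool_args))
# {"customer_name": "\u4f60\u597d\u554a\u96c6\u56e2"}

# With ensure_ascii=False (the fix): characters pass through unchanged
print(json.dumps(tool_args, ensure_ascii=False))
# {"customer_name": "你好啊集团"}
```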

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 17:01:31 -04:00
Mason Daugherty
d53ebf367e fix(docs): capitalization, codeblock formatting, and hyperlinks, note blocks (#32235)
widespread cleanup attempt
2025-07-24 16:55:04 -04:00
Copilot
54542b9385 docs(openai): add comprehensive documentation and examples for extra_body + others (#32149)
This PR addresses the common issue where users struggle to pass custom
parameters to OpenAI-compatible APIs like LM Studio, vLLM, and others.
The problem occurs when users try to use `model_kwargs` for custom
parameters, which causes API errors.

## Problem

Users attempting to pass custom parameters (like LM Studio's `ttl`
parameter) were getting errors:

```python
# ❌ This approach fails
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    model_kwargs={"ttl": 5}  # Causes TypeError: unexpected keyword argument 'ttl'
)
```

## Solution

The `extra_body` parameter is the correct way to pass custom parameters
to OpenAI-compatible APIs:

```python
# ✅ This approach works correctly
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 5}  # Custom parameters go in extra_body
)
```

## Changes Made

1. **Enhanced Documentation**: Updated the `extra_body` parameter
docstring with comprehensive examples for LM Studio, vLLM, and other
providers

2. **Added Documentation Section**: Created a new "OpenAI-compatible
APIs" section in the main class docstring with practical examples

3. **Unit Tests**: Added tests to verify `extra_body` functionality
works correctly:
- `test_extra_body_parameter()`: Verifies custom parameters are included
in request payload
- `test_extra_body_with_model_kwargs()`: Ensures `extra_body` and
`model_kwargs` work together

4. **Clear Guidance**: Documented when to use `extra_body` vs
`model_kwargs`

## Examples Added

**LM Studio with TTL (auto-eviction):**
```python
ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 300}  # Auto-evict after 5 minutes
)
```

**vLLM with custom sampling:**
```python
ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={
        "use_beam_search": True,
        "best_of": 4
    }
)
```

## Why This Works

- `model_kwargs` parameters are passed directly to the OpenAI client's
`create()` method, causing errors for non-standard parameters
- `extra_body` parameters are included in the HTTP request body, which
is exactly what OpenAI-compatible APIs expect for custom parameters

Fixes #32115.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-24 16:43:16 -04:00
Mason Daugherty
7d2a13f519 fix: various typos (#32231) 2025-07-24 12:35:08 -04:00
Christophe Bornet
0b34be4ce5 refactor(langchain): refactor unit test stub classes (#32209)
See
https://github.com/langchain-ai/langchain/pull/32098#discussion_r2225961563
2025-07-24 11:05:56 -04:00
Mason Daugherty
6f3169eb49 chore: update copilot development guidelines for clarity and structure (#32230) 2025-07-24 15:05:09 +00:00
Eugene Yurtsev
7995c719c5 chore(langchain_v1): clean anything uncertain (#32228)
Further clean up of namespace:

- Removed prompts (we'll re-add in a separate commit)
- Remove LocalFileStore until we can review whether all the
implementation details are necessary
- Remove message processing logic from memory (we'll figure out where to
expose it)
- Remove `Tool` primitive (should be sufficient to use `BaseTool` for
typing purposes)
- Remove utilities to create kv stores. Unclear if they've had much
usage outside MultiparentRetriever
2025-07-24 14:41:05 +00:00
Mason Daugherty
bdf1cd383c fix(langchain): update deps 2025-07-24 10:37:08 -04:00
Mason Daugherty
77c981999e fix(text-splitters): update langchain-core version to 0.3.72 2025-07-24 10:35:07 -04:00
Mason Daugherty
7f015b6f14 fix(text-splitters): update lock for release 2025-07-24 10:32:04 -04:00
Mason Daugherty
71ad451e1f Merge branch 'master' of github.com:langchain-ai/langchain 2025-07-24 10:24:17 -04:00
Mason Daugherty
2c42893703 fix(langchain): update langchain-core version to 0.3.72 2025-07-24 10:24:04 -04:00
Mason Daugherty
0e139fb9a6 release(langchain): 0.3.27 (#32227) 2025-07-24 10:20:20 -04:00
tanwirahmad
622bb05751 fix(langchain): class HTMLSemanticPreservingSplitter ignores the text inside the div tag (#32213)
**Description:** We collect the text from the "html", "body", "div", and
"main" nodes, if they have any.

**Issue:** Fixes #32206.
2025-07-24 10:09:03 -04:00
Eugene Yurtsev
56dde3ade3 feat(langchain): v1 scaffolding (#32166)
This PR adds scaffolding for langchain 1.0 entry package.

Most contents have been removed. 

Currently remaining entrypoints for:

* chat models
* embedding models
* memory -> trimming messages, filtering messages and counting tokens
[we may remove this]
* prompts -> we may remove some prompts
* storage: primarily to support cache backed embeddings, may remove the
kv store
* tools -> report tool primitives

Things to be added:

* Selected agent implementations
* Selected workflows
* Common primitives: messages, Document
* Primitives for type hinting: BaseChatModel, BaseEmbeddings
* Selected retrievers
* Selected text splitters

Things to be removed:

* Globals need to be removed (needs an update in langchain core)


Todos: 

* TBD indexing api (requires sqlalchemy which we don't want as a
dependency)
* Be explicit about public/private interfaces (e.g., likely rename
chat_models.base.py to something more internal)
* Remove dockerfiles
* Update module doc-strings and README.md
2025-07-24 09:47:48 -04:00
Mason Daugherty
bd3d6496f3 release(core): 0.3.72 (#32214)
fixes #32170
2025-07-23 20:33:48 -04:00
jmaillefaud
fb5da8384e fix(core): Dereference Refs for pydantic schema fails in tool schema generation (#32203)
The `_dereference_refs_helper` in `langchain_core.utils.json_schema`
incorrectly handled objects with a reference and other fields.

**Issue**: #32170

# Description

We change the check so that it accepts other keys in the object.
2025-07-23 20:28:27 -04:00
Maxime Grenu
a7d0e42f3f docs: fix typos in documentation (#32201)
## Summary
- Fixed redundant word "done" in SECURITY.md line 69  
- Fixed grammar errors in Fireworks README.md line 77: "how it fares
compares" → "how it compares" and "in terms just" → "in terms of"

## Test plan
- [x] Verified changes improve readability and correct grammar
- [x] No functional changes, documentation only

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-23 10:43:25 -04:00
Christophe Bornet
3496e1739e feat(langchain): add ruff rules PL (#32079)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-07-22 23:55:32 -04:00
Jacob Lee
0f39155f62 docs: Specify environment variables for BedrockConverse (#32194) 2025-07-22 17:37:47 -04:00
ccurme
6aeda24a07 docs(chroma): update feature table (#32193)
Supports multi-tenancy.
2025-07-22 20:55:07 +00:00
Mason Daugherty
3ed804a5f3 fix(perplexity): undo xfails (#32192) 2025-07-22 16:29:37 -04:00
Mason Daugherty
ca137bfe62 . 2025-07-22 16:25:02 -04:00
Mason Daugherty
fa487fb62d fix(perplexity): temp xfail int tests (#32191)
It appears the API has changes since the 2025-04-15 release, leading to
failed integration tests.
2025-07-22 16:20:51 -04:00
ccurme
053fb16a05 revert: drop anthropic from core test matrix (#32190)
Reverts langchain-ai/langchain#32185
2025-07-22 20:13:02 +00:00
ccurme
3672bbc71e fix(anthropic): update integration test models (#32189)
Multiple models were
[retired](https://docs.anthropic.com/en/docs/about-claude/model-deprecations#model-status)
yesterday.

Tests remain broken until we figure out what to do with the legacy
Anthropic LLM integration— currently uses their (legacy) text
completions API, for which there appear to be no remaining supported
models.
2025-07-22 19:51:39 +00:00
Mason Daugherty
a02ad3d192 docs: formatting cleanup (#32188)
* formatting cleaning
* make `init_chat_model` more prominent in list of guides
2025-07-22 15:46:15 -04:00
ccurme
0c4054a7fc release(core): 0.3.71 (#32186) 2025-07-22 15:44:36 -04:00
ccurme
75517c3ea9 chore(infra): drop anthropic from core test matrix (#32185) 2025-07-22 19:38:58 +00:00
ccurme
ebf2e11bcb fix(core): exclude api_key from tracing metadata (#32184)
(standard param)
2025-07-22 15:32:12 -04:00
ccurme
e41e6ec6aa release(chroma): 0.2.5 (#32183) 2025-07-22 15:24:03 -04:00
itaismith
09769373b3 feat(chroma): Add Chroma Cloud support (#32125)
* Adding support for more Chroma client options (`HttpClient` and
`CloudClient`). This includes adding arguments necessary for
instantiating these clients.
* Adding support for Chroma's new persisted collection configuration (we
moved index configuration into this new construct).
* Delegate `Settings` configuration to Chroma's client constructors.
2025-07-22 15:14:15 -04:00
ccurme
3fc27e7a95 docs: update feature table for Chroma (#32182) 2025-07-22 18:21:17 +00:00
ccurme
8acfd677bc fix(core): add type key when tracing in some cases (#31825) 2025-07-22 18:08:16 +00:00
Mason Daugherty
af3789b9ed fix(deepseek): release openai version (#32181)
used sdk version instead of langchain by accident
2025-07-22 13:29:52 -04:00
Mason Daugherty
a6896794ca release(ollama): 0.3.6 (#32180) 2025-07-22 13:24:17 -04:00
Copilot
d40fd5a3ce feat(ollama): warn on empty load responses (#32161)
## Problem

When using `ChatOllama` with `create_react_agent`, agents would
sometimes terminate prematurely with empty responses when Ollama
returned `done_reason: 'load'` responses with no content. This caused
agents to return empty `AIMessage` objects instead of actual generated
text.

```python
from langchain_ollama import ChatOllama
from langgraph.prebuilt import create_react_agent
from langchain_core.messages import HumanMessage

llm = ChatOllama(model='qwen2.5:7b', temperature=0)
agent = create_react_agent(model=llm, tools=[])

result = agent.invoke({"messages": [HumanMessage("Hello")]}, {"configurable": {"thread_id": "1"}})
# Before fix: AIMessage(content='', response_metadata={'done_reason': 'load'})
# Expected: AIMessage with actual generated content
```

## Root Cause

The `_iterate_over_stream` and `_aiterate_over_stream` methods treated
any response with `done: True` as final, regardless of `done_reason`.
When Ollama returns `done_reason: 'load'` with empty content, it
indicates the model was loaded but no actual generation occurred - this
should not be considered a complete response.

## Solution

Modified the streaming logic to skip responses when:
- `done: True`
- `done_reason: 'load'` 
- Content is empty or contains only whitespace

This ensures agents only receive actual generated content while
preserving backward compatibility for load responses that do contain
content.
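
A sketch of the skip condition (field names assumed from the Ollama response shape described above):

```python
def is_empty_load_response(part: dict) -> bool:
    """Sketch: detect a 'model loaded' chunk that carries no generated text."""
    return (
        part.get("done") is True
        and part.get("done_reason") == "load"
        and not part.get("message", {}).get("content", "").strip()
    )
```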

## Changes

- **`_iterate_over_stream`**: Skip empty load responses instead of
yielding them
- **`_aiterate_over_stream`**: Apply same fix to async streaming
- **Tests**: Added comprehensive test cases covering all edge cases

## Testing

All scenarios now work correctly:
- ✅ Empty load responses are skipped (fixes original issue)
- ✅ Load responses with actual content are preserved (backward compatibility)
- ✅ Normal stop responses work unchanged
- ✅ Streaming behavior preserved
- ✅ `create_react_agent` integration fixed

Fixes #31482.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-22 13:21:11 -04:00
Mason Daugherty
116b758498 fix: bump deps for release (#32179)
forgot to bump the `pyproject.toml` files
2025-07-22 13:12:14 -04:00
Mason Daugherty
10996a2821 release(perplexity): 0.1.2 (#32176) 2025-07-22 13:02:19 -04:00
Mason Daugherty
2aed07efb6 release(deepseek): 0.1.4 (#32178) 2025-07-22 13:01:54 -04:00
Mason Daugherty
64dac1faf7 release(huggingface): 0.3.1 (#32177) 2025-07-22 13:01:34 -04:00
Mason Daugherty
58768d8aef release(xai): 0.2.5 (#32174) 2025-07-22 13:01:26 -04:00
Mason Daugherty
d65da13299 docs(ollama): add validate_model_on_init note, bump lock (#32172) 2025-07-22 10:58:45 -04:00
Kanav Bansal
c14bd1fcfe fix(docs): update RAG tutorials link to point to correct path (#32169)
## **Description:** 
This PR updates the internal documentation link for the RAG tutorials to
reflect the updated path. Previously, the link pointed to the root
`/docs/tutorials/`, which was generic. It now correctly routes to the
RAG-specific tutorial page for the following text-embedding models.

1. DatabricksEmbeddings
2. IBM watsonx.ai
3. OpenAIEmbeddings
4. NomicEmbeddings
5. CohereEmbeddings
6. MistralAIEmbeddings
7. FireworksEmbeddings
8. TogetherEmbeddings
9. LindormAIEmbeddings
10. ModelScopeEmbeddings
11. ClovaXEmbeddings
12. NetmindEmbeddings
13. SambaNovaCloudEmbeddings
14. SambaStudioEmbeddings
15. ZhipuAIEmbeddings

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A
2025-07-22 10:24:50 -04:00
Byeongjin Kang
a1ccabf85d docs: add documentation about how to use extended thinking with ChatBedrockConverse (#32168) 2025-07-22 08:44:08 -04:00
Copilot
2104cf0d9a fix: replace deprecated Pydantic .schema() calls with v1/v2 compatible pattern (#32162)
This PR addresses deprecation warnings users encounter when using
LangChain tools with Pydantic v2:

```
PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. 
Deprecated in Pydantic V2.0 to be removed in V3.0.
```

## Root Cause

Several LangChain components were still using the deprecated `.schema()`
method directly instead of the Pydantic v1/v2 compatible approach. While
users calling `.schema()` on returned models will still see warnings
(which is correct), LangChain's internal code should not generate these
warnings.

## Changes Made

Updated 3 files to use the standard compatibility pattern:

```python
# Before (deprecated)
schema = model.schema()

# After (compatible with both v1 and v2) 
if hasattr(model, "model_json_schema"):
    schema = model.model_json_schema()  # Pydantic v2
else:
    schema = model.schema()  # Pydantic v1
```

### Files Updated:
- **`evaluation/parsing/json_schema.py`**: Fixed `_parse_json()` method
to handle Pydantic models correctly
- **`output_parsers/yaml.py`**: Fixed `get_format_instructions()` to use
compatible schema access
- **`chains/openai_functions/citation_fuzzy_match.py`**: Fixed direct
`.schema()` call on QuestionAnswer model

## Verification

- ✅ **Zero breaking changes**: all existing functionality preserved
- ✅ **No deprecation warnings** from LangChain internal code
- ✅ **Backward compatible** with Pydantic v1
- ✅ **Forward compatible** with Pydantic v2
- ✅ **Edge cases handled** (strings, plain objects, etc.)

## User Impact

LangChain users will no longer see deprecation warnings from internal
LangChain code. Users who directly call `.schema()` on schemas returned
by LangChain should adopt the same compatibility pattern:

```python
# User code should use this pattern
input_schema = tool.get_input_schema()
if hasattr(input_schema, "model_json_schema"):
    schema_result = input_schema.model_json_schema()
else:
    schema_result = input_schema.schema()
```

Fixes #31458.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 21:19:53 -04:00
Copilot
18c64aed6d feat(core): add sanitize_for_postgres utility to fix PostgreSQL NUL byte DataError (#32157)
This PR fixes the PostgreSQL NUL byte issue that causes
`psycopg.DataError` when inserting documents containing `\x00` bytes
into PostgreSQL-based vector stores.

## Problem

PostgreSQL text fields cannot contain NUL (0x00) bytes. When documents
with such characters are processed by PGVector or langchain-postgres
implementations, they fail with:

```
(psycopg.DataError) PostgreSQL text fields cannot contain NUL (0x00) bytes
```

This commonly occurs when processing PDFs, documents from various
loaders, or text extracted by libraries like unstructured that may
contain embedded NUL bytes.

## Solution

Added `sanitize_for_postgres()` utility function to
`langchain_core.utils.strings` that removes or replaces NUL bytes from
text content.

### Key Features

- **Simple API**: `sanitize_for_postgres(text, replacement="")`
- **Configurable**: Replace NUL bytes with empty string (default) or
space for readability
- **Comprehensive**: Handles all problematic examples from the original
issue
- **Well-tested**: Complete unit tests with real-world examples
- **Backward compatible**: No breaking changes, purely additive
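
Given the API described above, the utility reduces to a one-liner; a sketch:

```python
def sanitize_for_postgres(text: str, replacement: str = "") -> str:
    """Remove or replace NUL (0x00) bytes, which PostgreSQL text fields reject."""
    return text.replace("\x00", replacement)
```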

### Usage Example

```python
from langchain_core.utils import sanitize_for_postgres
from langchain_core.documents import Document

# Before: This would fail with DataError
problematic_content = "Getting\x00Started with embeddings"

# After: Clean the content before database insertion
clean_content = sanitize_for_postgres(problematic_content)
# Result: "GettingStarted with embeddings"

# Or preserve readability with spaces
readable_content = sanitize_for_postgres(problematic_content, " ")
# Result: "Getting Started with embeddings"

# Use in Document processing
doc = Document(page_content=clean_content, metadata={...})
```

### Integration Pattern

PostgreSQL vector store implementations should sanitize content before
insertion:

```python
def add_documents(self, documents: List[Document]) -> List[str]:
    # Sanitize documents before insertion
    sanitized_docs = []
    for doc in documents:
        sanitized_content = sanitize_for_postgres(doc.page_content, " ")
        sanitized_doc = Document(
            page_content=sanitized_content,
            metadata=doc.metadata,
            id=doc.id
        )
        sanitized_docs.append(sanitized_doc)
    
    return self._insert_documents_to_db(sanitized_docs)
```

## Changes Made

- Added `sanitize_for_postgres()` function in
`langchain_core/utils/strings.py`
- Updated `langchain_core/utils/__init__.py` to export the new function
- Added comprehensive unit tests in
`tests/unit_tests/utils/test_strings.py`
- Validated against all examples from the original issue report

## Testing

All tests pass, including:
- Basic NUL byte removal and replacement
- Multiple consecutive NUL bytes
- Empty string handling
- Real examples from the GitHub issue
- Backward compatibility with existing string utilities

This utility enables PostgreSQL integrations in both langchain-community
and langchain-postgres packages to handle documents with NUL bytes
reliably.

Fixes #26033.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-21 20:33:20 -04:00
Copilot
fc802d8f9f docs: fix vectorstore feature table - correct "IDs in add Documents" values (#32153)
The vectorstore feature table in the documentation was showing incorrect
information for the "IDs in add Documents" capability. Most vectorstores
were marked as ❌ (not supported) when they actually support extracting
IDs from documents.

## Problem

The issue was an inconsistency between two sources of truth:
- **JavaScript feature table** (`docs/src/theme/FeatureTables.js`):
Hardcoded `idsInAddDocuments: false` for most vectorstores
- **Python script** (`docs/scripts/vectorstore_feat_table.py`):
Correctly showed `"IDs in add Documents": True` for most vectorstores

## Root Cause

All vectorstores inherit the base `VectorStore.add_documents()` method
which automatically extracts document IDs:

```python
# From libs/core/langchain_core/vectorstores/base.py lines 277-284
if "ids" not in kwargs:
    ids = [doc.id for doc in documents]
    
    # If there's at least one valid ID, we'll assume that IDs should be used.
    if any(ids):
        kwargs["ids"] = ids
```

Since no vectorstores override `add_documents()`, they all inherit this
behavior and support IDs in documents.

## Solution

Updated `idsInAddDocuments` from `false` to `true` for 13 vectorstores:
- AstraDBVectorStore, Chroma, Clickhouse, DatabricksVectorSearch
- ElasticsearchStore, FAISS, InMemoryVectorStore,
MongoDBAtlasVectorSearch
- PGVector, PineconeVectorStore, Redis, Weaviate, SQLServer

The other 4 vectorstores (CouchbaseSearchVectorStore, Milvus, openGauss,
QdrantVectorStore) were already correctly marked as `true`.

## Impact

Users visiting
https://python.langchain.com/docs/integrations/vectorstores/ will now
see accurate information. The "IDs in add Documents" column will
correctly show ✅ for all vectorstores instead of incorrectly showing ❌
for most of them.

This aligns with the API documentation which states: "if kwargs contains
ids and documents contain ids, the ids in the kwargs will receive
precedence" - clearly indicating that document IDs are supported.

Fixes #30622.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
2025-07-21 20:29:34 -04:00
Mason Daugherty
b4d87c709c chore: update copilot-instructions.md (#32159) 2025-07-21 20:17:41 -04:00
ccurme
383bc8f2ef revert: drop anthropic from core test matrix (#32152)
Reverts langchain-ai/langchain#32146
2025-07-21 20:15:27 +00:00
Christophe Bornet
64261449b8 feat(langchain): add ruff rules TRY (#32047)
See https://docs.astral.sh/ruff/rules/#tryceratops-try

* TRY004 (replace by TypeError) in main code is escaped with `noqa` to
not break backward compatibility. The rule is still interesting for new
code.
* TRY301 ignored at the moment. This one is quite hard to fix and I'm
not sure it's very interesting to activate it.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 13:41:20 -04:00
Christophe Bornet
8b8d90bea5 feat(langchain): add ruff rules PT (#32010)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-07-21 13:15:05 -04:00
Mohammad Mohtashim
095f4a7c28 fix(core): fix parse_result in case of self.first_tool_only with multiple keys matching for JsonOutputKeyToolsParser (#32106)
* **Description:** Updated `parse_result` logic to handle cases where
`self.first_tool_only` is `True` and multiple matching keys share the
same function name. Instead of returning the first match prematurely,
the method now prioritizes filtering results by the specified key to
ensure correct selection.
* **Issue:** #32100
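
A sketch of the corrected selection (assuming parsed results are dicts whose `type` holds the function name):

```python
def select_tool_output(results: list[dict], key_name: str, first_tool_only: bool):
    """Sketch: filter by the requested tool name before applying first_tool_only."""
    matches = [res for res in results if res.get("type") == key_name]
    if first_tool_only:
        return matches[0] if matches else None
    return matches
```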

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-21 12:50:22 -04:00
Mason Daugherty
ddaba21e83 chore: copilot instructions (#32075)
https://docs.github.com/en/copilot/how-tos/custom-instructions/adding-repository-custom-instructions-for-github-copilot
2025-07-21 12:50:07 -04:00
diego-coder
8e4396bb32 fix(ollama): robustly parse single-quoted JSON in tool calls (#32109)
**Description:**
This PR makes argument parsing for Ollama tool calls more robust. Some
LLMs—including Ollama—may return arguments as Python-style dictionaries
with single quotes (e.g., `{'a': 1}`), which are not valid JSON and
previously caused parsing to fail.
The updated `_parse_json_string` method in
`langchain_ollama.chat_models` now attempts standard JSON parsing and,
if that fails, falls back to `ast.literal_eval` for safe evaluation of
Python-style dictionaries. This improves interoperability with LLMs and
fixes a common usability issue for tool-based agents.
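
A minimal sketch of that fallback:

```python
import ast
import json

def parse_json_string(raw: str) -> dict:
    """Try strict JSON first, then fall back to ast.literal_eval for
    Python-style dicts such as {'a': 1}."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # literal_eval evaluates literals only; no arbitrary code execution.
        return ast.literal_eval(raw)

assert parse_json_string('{"a": 1}') == {"a": 1}
assert parse_json_string("{'a': 1}") == {"a": 1}
```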

**Issue:**
Closes #30910

**Dependencies:**
None

**Tests:**
- Added new unit tests for double-quoted JSON, single-quoted dicts,
mixed quoting, and malformed/failure cases.
- All tests pass locally, including new coverage for single-quoted
inputs.

**Notes:**
- No breaking changes.
- No new dependencies introduced.
- Code is formatted and linted (`ruff format`, `ruff check`).
- If maintainers have suggestions for further improvements, I’m happy to
revise!

Thank you for maintaining LangChain! Looking forward to your feedback.
2025-07-21 12:11:22 -04:00
ccurme
6794422b85 chore(infra): drop anthropic from core test matrix (#32146)
Stricter JSON schema validation broke a test. Test was fixed in
https://github.com/langchain-ai/langchain/pull/32145. Core release runs
old tests (i.e., last released version of langchain-anthropic) against
new core. So we bypass anthropic for release. Will revert after.
2025-07-21 15:06:52 +00:00
ccurme
2ef9465893 fix(anthropic): fix test (#32145) 2025-07-21 14:49:40 +00:00
ccurme
0355da3159 release(core): 0.3.70 (#32144) 2025-07-21 10:49:32 -04:00
Ziafat Majeed
6c18073fe6 docs(core): fix grammar from 'as an bonus' to 'as a bonus' (#32143) 2025-07-21 10:48:34 -04:00
Kanav Bansal
38581f31dd docs(docs): update RAG tutorials link to point to correct path in Google Vertex AI Embeddings (#32141) 2025-07-21 09:17:54 -04:00
Kanav Bansal
8246b5b660 docs(docs): update RAG tutorials link to point to correct path in AzureOpenAI (#32131) 2025-07-21 09:17:35 -04:00
astraszab
668c084520 docs(core): move incorrect arg limitation in rate limiter's docstring (#32118) 2025-07-20 14:28:35 -04:00
ccurme
cc076ed891 fix(huggingface): update model used in standard tests (#32116) 2025-07-20 01:50:31 +00:00
Yoshi
6d71bb83de fix(core): fix docstrings and add sleep to FakeListChatModel._call (#32108) 2025-07-19 17:30:15 -04:00
Kanav Bansal
f7d1b1fbb1 docs(docs): update RAG tutorials link to point to correct path (#32113) 2025-07-19 17:27:31 -04:00
Isaac Francisco
98bfd57a76 fix(core): better error message for empty var names (#32073)
Previously, we hit an index out of range error with empty variable names
(accessing tag[0]); now we throw a slightly nicer error

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 17:00:02 -04:00
Gurram Siddarth Reddy
427d2d6397 fix(core): implement sleep delay in FakeMessagesListChatModel _generate (#32014)
implement sleep delay in FakeMessagesListChatModel._generate so the
sleep parameter is respected, matching the documented behavior. This
adds artificial latency between responses for testing purposes.
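
A toy sketch of the now-respected parameter:

```python
import time

class FakeModelSketch:
    """Sketch: pause between canned responses when `sleep` is set."""

    def __init__(self, responses: list[str], sleep: float | None = None):
        self.responses, self.sleep, self._i = responses, sleep, 0

    def generate(self) -> str:
        if self.sleep is not None:
            time.sleep(self.sleep)  # artificial latency for tests
        response = self.responses[self._i % len(self.responses)]
        self._i += 1
        return response
```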

Issue: closes
[#31974](https://github.com/langchain-ai/langchain/issues/31974)
following
[docs](https://python.langchain.com/api_reference/core/language_models/langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.html#langchain_core.language_models.fake_chat_models.FakeMessagesListChatModel.sleep)

Dependencies: none

Twitter handle: [@siddarthreddyg2](https://x.com/siddarthreddyg2)

---------

Signed-off-by: Siddarthreddygsr <siddarthreddygsr@gmail.com>
2025-07-18 15:54:28 -04:00
Kanav Bansal
50a12a7ee5 fix(docs): fix broken link in VertexAILLM and NVIDIA LLM integrations (#32096)
## **Description:**   
This PR updates the `link` values for the following integration metadata
entries:

1. **VertexAILLM**  
   - Changed from: `google_vertexai`  
   - To: `google_vertex_ai_palm`  
2. **NVIDIA**  
   - Changed from: `NVIDIA`  
   - To: `nvidia_ai_endpoints`  

These changes ensure that the documentation links correspond to the
correct integration paths, improving documentation navigation and
consistency with the integration structure.

## **Issue:** N/A
## **Dependencies:** None
## **Twitter handle:** N/A

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-18 14:00:49 +00:00
Kanav Bansal
72a0f425ec docs(docs): correct package name from langchain-google_vertexai to langchain-google-vertexai for VertexAILLM (#32095)
- **Description:** This PR updates the `package` field for the VertexAI
integration in the documentation metadata. The original value was
`langchain-google_vertexai`, which has been corrected to
`langchain-google-vertexai` to reflect the actual package name used in
PyPI and LangChain integrations.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-18 09:45:28 -04:00
Sarah Guthals
22535eb4b3 docs: add tensorlake provider (#32046) 2025-07-17 19:28:14 -04:00
open-swe[bot]
5da986c3f6 fix(core): JSON Schema reference resolution for list indices (#32088)
Fixes #32042

## Summary
Fixes a critical bug in JSON Schema reference resolution that prevented
correctly dereferencing numeric components in JSON pointer paths,
specifically for list indices in `anyOf`, `oneOf`, and `allOf` arrays.

## Changes
- Fixed `_retrieve_ref` function in
`libs/core/langchain_core/utils/json_schema.py` to properly handle
numeric components
- Added comprehensive test function `test_dereference_refs_list_index()`
in `libs/core/tests/unit_tests/utils/test_json_schema.py`
- Resolved line length formatting issues
- Improved type checking and index validation for list and dictionary
references

## Key Improvements
- Correctly handles list index references in JSON pointer paths
- Maintains backward compatibility with existing dictionary numeric key
functionality
- Adds robust error handling for out-of-bounds and invalid indices
- Passes all test cases covering various reference scenarios

## Test Coverage
- Verified fix for `#/properties/payload/anyOf/1/properties/startDate`
reference
- Tested edge cases including out-of-bounds and negative indices
- Ensured no regression in existing reference resolution functionality

Resolves the reported issue with JSON Schema reference dereferencing for
list indices.
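
A simplified sketch of pointer resolution with numeric components (an assumption-level reduction of `_retrieve_ref`):

```python
def resolve_ref(ref: str, schema: dict):
    """Walk a JSON pointer, treating numeric components as list indices
    whenever the current node is a list."""
    node = schema
    for comp in ref.lstrip("#/").split("/"):
        if isinstance(node, list):
            node = node[int(comp)]  # e.g. 'anyOf/1' selects the second variant
        else:
            node = node[comp]  # dicts may still use numeric string keys
    return node

schema = {
    "properties": {
        "payload": {"anyOf": [{}, {"properties": {"startDate": {"type": "string"}}}]}
    }
}
assert resolve_ref("#/properties/payload/anyOf/1/properties/startDate", schema) == {
    "type": "string"
}
```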

---------

Co-authored-by: open-swe-dev[bot] <open-swe-dev@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-17 15:54:38 -04:00
Mason Daugherty
6d449df8bb chore: update PR lint (#32091)
remove regex
2025-07-17 15:33:48 -04:00
ccurme
3f4d27fe21 fix(infra): update some notebook cassettes (#32087) 2025-07-17 13:57:29 -04:00
Mason Daugherty
59407338dd docs: remove AI21 embeddings section (#32084)
// no longer exists
2025-07-17 11:32:34 -04:00
Mason Daugherty
a1519af513 fix(docs): fix broken links (#32083) 2025-07-17 10:38:51 -04:00
Christophe Bornet
b61ce9178c refactor(langchain): remove model_rebuild (#32080)
Since #29963 BaseCache and Callbacks are imported in BaseLanguageModel
so there's no need to import them and rebuild the models.
Note: fix is available since `langchain-core==0.3.39` and the current
langchain dependency on core is `>=0.3.66` so the fix will always be
there.
2025-07-17 10:34:41 -04:00
Mason Daugherty
9165cde538 feat(docs): add Slack community link to footer (#32053) 2025-07-17 10:12:09 -04:00
Kanav Bansal
2c0e8dce0d docs(docs): fix broken link in Google Gemini text embedding integration (#32082)
- **Description:** Corrected the `link` path in the Google Gemini
integration entry from
`/docs/integrations/text_embedding/google-generative-ai` to
`/docs/integrations/text_embedding/google_generative_ai` to align with
actual directory structure and prevent broken documentation links.
  - **Issue:** N/A
  - **Dependencies:** None
  - **Twitter handle:** N/A
2025-07-17 09:58:07 -04:00
Mason Daugherty
491f63ca82 release(ollama): release 0.3.5 (#32076) 2025-07-16 18:45:32 -04:00
Mason Daugherty
587c213760 bump lock 2025-07-16 18:44:56 -04:00
Copilot
98c3bbbaf0 fix(ollama): num_gpu parameter not working in async OllamaEmbeddings method (#32074)
The `num_gpu` parameter in `OllamaEmbeddings` was not being passed to
the Ollama client in the async embedding method, causing GPU
acceleration settings to be ignored when using async operations.

## Problem

The issue was in the `aembed_documents` method where the `options`
parameter (containing `num_gpu` and other configuration) was missing:

```python
# Sync method (working correctly)
return self._client.embed(
    self.model, texts, options=self._default_params, keep_alive=self.keep_alive
)["embeddings"]

# Async method (missing options parameter)
return (
    await self._async_client.embed(
        self.model, texts, keep_alive=self.keep_alive  # ❌ No options!
    )
)["embeddings"]
```

This meant that when users specified `num_gpu=4` (or any other GPU
configuration), it would work with sync calls but be ignored with async
calls.

## Solution

Added the missing `options=self._default_params` parameter to the async
embed call to match the sync version:

```python
# Fixed async method
return (
    await self._async_client.embed(
        self.model,
        texts,
        options=self._default_params,  # ✅ Now includes num_gpu!
        keep_alive=self.keep_alive,
    )
)["embeddings"]
```

## Validation

- ✅ Added unit test to verify options are correctly passed in both sync and async methods
- ✅ All existing tests continue to pass
- ✅ Manual testing confirms `num_gpu` parameter now works correctly
- ✅ Code passes linting and formatting checks

The fix ensures that GPU configuration works consistently across both
synchronous and asynchronous embedding operations.

Fixes #32059.


---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 18:42:52 -04:00
efj-amzn
d3072e2d2e feat(core): update _import_utils.py to not mask the thrown exception (#32071) 2025-07-16 17:11:56 -04:00
Lauren Hirata Singh
b49372595e docs: update LangSmith links (#32070) 2025-07-16 16:31:28 -04:00
Mason Daugherty
16664d3b68 fix(docs): make docs link absolute (#32068) 2025-07-16 20:15:28 +00:00
Inácio Nery
ea8f2a05ba feat(perplexity): expose search_results in chat model (#31468)
Description
The Perplexity chat model already returns a search_results field, but
LangChain dropped it when mapping Perplexity responses to
additional_kwargs.
This patch adds "search_results" to the allowed attribute lists in both
_stream and _generate, so downstream code can access it just like
images, citations, or related_questions.

Dependencies
None. The change is purely internal; no new imports or optional
dependencies required.


https://community.perplexity.ai/t/new-feature-search-results-field-with-richer-metadata/398
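
A minimal usage sketch, assuming the `langchain-perplexity` package
(model and prompt are illustrative):

```python
from langchain_perplexity import ChatPerplexity

llm = ChatPerplexity(model="sonar")
msg = llm.invoke("What is the tallest mountain in the world?")
# With this patch, the search metadata survives the response mapping:
print(msg.additional_kwargs.get("search_results"))
```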

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-16 15:16:35 -04:00
zygimantas-jac
2df05f6f6a docs: add oxylabs to web browsing table (#31931)
Added Oxylabs to the web browsing table

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 14:00:14 -04:00
nikk0o046
b1c7de98f5 fix(deepseek): convert tool output arrays to strings (#31913)
## Description
When ChatDeepSeek invokes a tool that returns a list, it results in an
openai.UnprocessableEntityError due to a failure in deserializing the
JSON body.

The root of the problem is that ChatDeepSeek uses BaseChatOpenAI
internally, but the APIs are not identical: OpenAI v1/chat/completions
accepts arrays as tool results, but Deepseek API does not.

As a solution, I added a `_get_request_payload` method to ChatDeepSeek,
which inherits the behavior from BaseChatOpenAI but adds a step to
stringify tool message content when the content is an array. I also
added a unit test for this.
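
Roughly, the added step behaves like this minimal sketch (not the exact
patch; it assumes the OpenAI chat-completions payload shape, and the
helper name here is hypothetical):

```python
import json


def _stringify_tool_content(payload: dict) -> dict:
    # Deepseek only accepts string tool results, so serialize list content
    for message in payload.get("messages", []):
        if message.get("role") == "tool" and isinstance(message.get("content"), list):
            message["content"] = json.dumps(message["content"])
    return payload
```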

In the linked issue you can find the full reproducible example provided
by the reporter. After the changes, it works as expected.

Source: [Deepseek
docs](https://api-docs.deepseek.com/api/create-chat-completion/)


![image](https://github.com/user-attachments/assets/a59ed3e7-6444-46d1-9dcf-97e40e4e8952)

Source: [OpenAI
docs](https://platform.openai.com/docs/api-reference/chat/create)


![image](https://github.com/user-attachments/assets/728f4fc6-e1a3-4897-b39f-6f1ade07d3dc)


## Issue
Fixes #31394

## Dependencies:
No new dependencies.

## Twitter handle:
Don't have one.

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 12:19:44 -04:00
Mohammad Mohtashim
96bf8262e2 fix: fixing missing Docstring Bug if no Docstring is provided in BaseModel class (#31608)
- **Description:** Ensure that the tool description is an empty string
when creating a Structured Tool from a Pydantic class in case no
description is provided
- **Issue:** Fixes #31606

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:56:05 -04:00
Mason Daugherty
15103b0520 chore: add closing keyword to PR template (#32065) 2025-07-16 11:54:26 -04:00
Casi
686a6b754c fix: issue a warning if np.nan or np.inf are in _cosine_similarity argument Matrices (#31532)
- **Description**: issues a warning if inf and nan are passed as inputs
to langchain_core.vectorstores.utils._cosine_similarity
- **Issue**: Fixes #31496
- **Dependencies**: no external dependencies added, only warnings module
imported
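
A minimal sketch of the added check (`_cosine_similarity` is a private
helper, so the function name here is a simplified stand-in):

```python
import warnings

import numpy as np


def _warn_on_invalid(matrix: np.ndarray) -> None:
    # Inf/NaN propagate silently through dot products, so surface them early
    if np.any(np.isnan(matrix)) or np.any(np.isinf(matrix)):
        warnings.warn("NaN or Inf values found in similarity input", stacklevel=2)
```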

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 11:50:09 -04:00
Michael Li
12d370a55a fix(cli): exception to prevent swallowing unexpected errors (#31983) 2025-07-16 10:23:43 -04:00
Michael Li
5a4c0c0816 fix(cli): handle exception in remove() (#31982) 2025-07-16 10:23:02 -04:00
Krishna Somani
e2dc36b126 chore: update SECURITY.md (#32060)
Made minor changes to keep it neat
2025-07-16 10:20:59 -04:00
Kanav Bansal
c133eff6c8 docs(docs): fix product name in Google SQL for MySQL description (#32062)
- **Description:** Corrected the service name from "Cloud Cloud SQL" to
"Google Cloud SQL" to accurately reflect the official product branding.
2025-07-16 10:17:59 -04:00
Ahmad Elmalah
1892a67eef docs: adding context for Textract linearization-config param (#32064)
Before jumping into the technical implementation, I added context for
the linearization-config param and explained what linearization means in
this context.
I also linked an AWS blog for more advanced use cases, as this single
example doesn't cover all of them.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-16 10:17:20 -04:00
Ahmad Elmalah
2ab2cab203 docs: update titles for Textract examples (#32063)
**On this PR I am doing two things:**

1. Adding titles to the 4 examples we have, to allow the reader to
capture the essence of each paragraph quickly
2. Replacing 'samples' with 'examples' for more clarity

**Why could 'examples' be a better term than 'samples' here?**
1. On the page, we were using 'samples' and 'examples' interchangeably,
which led to confusion; now 'examples' are the use cases, while
'samples' are the sample data being used
2. This is consistent with the rest of the docs, where we typically use
'examples' for examples, for example
https://python.langchain.com/docs/integrations/callbacks/fiddler/
2025-07-16 10:17:02 -04:00
Mason Daugherty
ad44f0688b release(core): release 0.3.69 (#32056) 2025-07-15 17:13:46 -04:00
Mason Daugherty
8ad12f3fcf docs: add missing js providers to table (#32055)
Update to show that Cerebras, xAI, and Cloudflare now have JS/TS
equivalents
2025-07-15 17:09:35 -04:00
Jacob Lee
535ba43b0d feat(core): add an option to make deserialization more permissive (#32054)
## Description

Currently when deserializing objects that contain non-deserializable
values, we throw an error. However, there are cases (e.g. proxies that
return response fields containing extra fields like Python datetimes),
where these values are not important and we just want to drop them.
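
A hypothetical sketch of the new option; note the flag name below is an
assumption, not confirmed by this log:

```python
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

serialized = dumps(AIMessage(content="hi"))
# Hypothetical flag name: opt in to dropping non-deserializable values
# instead of raising an error
msg = loads(serialized, ignore_unserializable_fields=True)
```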

Twitter handle: @hacubu

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-15 17:00:01 -04:00
Eugene Yurtsev
3628dccbf3 chore(docs): add gtm tag to docs (#32048)
Docusaurus GTM langchain v2
2025-07-15 15:58:11 -04:00
Mason Daugherty
8d2135ad8a chore: update links to LangChain how-to guides in issue templates (#32052) 2025-07-15 15:38:35 -04:00
Mason Daugherty
8199a5562a chore: update hyperlink (#32051) 2025-07-15 14:41:27 -04:00
Mason Daugherty
0807711dad chore: update contribution guidelines and templates to direct users to the LangChain Forum (#32050) 2025-07-15 14:39:40 -04:00
Eugene Yurtsev
7a36d6b99c chore(docs): bump langgraph in docs & reformat all docs (#32044)
Trying to unblock documentation build pipeline

* Bump langgraph dep in docs
* Update langgraph in lock file (resolves an issue in API reference
generation)
2025-07-15 15:06:59 +00:00
Mason Daugherty
3b9dd1eba0 docs(groq): cleanup (#32043) 2025-07-15 10:37:37 -04:00
Eugene Yurtsev
02d0a9af6c chore(core): unpin packaging dependency (#32032)
Unpin packaging dependency

---------

Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
2025-07-14 21:42:32 +00:00
Christophe Bornet
953592d4f7 feat(langchain): add ruff rules G (#32029)
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-07-14 15:19:36 -04:00
Christophe Bornet
19fff8cba9 feat(langchain): add ruff rules DTZ (#32021)
See https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-07-14 12:47:16 -04:00
Marco Vinciguerra
26c2c8f70a docs: update ScrapeGraphAI tools (#32026)
It was outdated

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 12:38:55 -04:00
Hunter Lovell
d96b75f9d3 chore: update readme with forum link (#32027) 2025-07-14 09:15:26 -07:00
Ahmad Elmalah
2fdccd789c docs: update Textract docs (#31992)
I am modifying two things:

1. Replacing "This sample demonstrates" with "The following samples
demonstrate", as we're talking about at least 4 samples
2. Moving the sentence to after the definition of Textract, to keep the
document organized (Textract definition, then samples)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:36:29 +00:00
董哥的黑板报
553ac1863b docs: add deprecation notice for PipelinePromptTemplate (#31999)
**PR title**: 
add deprecation notice for PipelinePromptTemplate

**PR message**: 
In the API documentation, PipelinePromptTemplate is marked as
deprecated, but this is not mentioned in the docs.

I'm submitting this PR to add a deprecation notice to the docs.

**Tests**:
N/A (documentation only)

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-14 15:27:29 +00:00
Andreas V. Jonsterhaug
6dcca35a34 fix(core): correct return type hints in BaseChatPromptTemplate (#32009)
This PR changes the return type hints of the `format_prompt` and
`aformat_prompt` methods in `BaseChatPromptTemplate` from `PromptValue`
to `ChatPromptValue`, since both methods always return a
`ChatPromptValue`.
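
A simplified sketch of the corrected signature (the class here is a
stand-in, not the real implementation):

```python
from typing import Any

from langchain_core.prompt_values import ChatPromptValue


class MyChatPromptTemplate:  # stand-in for BaseChatPromptTemplate
    def format_prompt(self, **kwargs: Any) -> ChatPromptValue:  # was -> PromptValue
        # The method always builds a ChatPromptValue from formatted messages
        return ChatPromptValue(messages=self.format_messages(**kwargs))
```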
2025-07-14 11:00:01 -04:00
Christophe Bornet
d57216c295 feat(core): add ruff rules D to tests except D1 (#32000)
Docs are not required for tests, but when there are docstrings, they
should be correctly formatted.
See https://docs.astral.sh/ruff/rules/#pydocstyle-d
2025-07-14 10:42:03 -04:00
Christophe Bornet
58d4261875 feat(langchain): add ruff rules PTH (#32008)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth
2025-07-14 10:41:37 -04:00
Fabio Fontana
fd168e1c11 feat(text-splitters): add Visual Basic 6 support (#31173)
### **Description**

Add Visual Basic 6 support.

---

### **Issue**

No specific issue addressed.

---

### **Dependencies**

No additional dependencies required.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-14 13:51:16 +00:00
Mason Daugherty
7e146a185b chore: add text splitters to PR linting (#32018) 2025-07-14 09:24:40 -04:00
ccurme
4b89483fe6 release(openai): 0.3.28 (#32015) 2025-07-14 10:38:11 +00:00
ccurme
de13f6ae4f fix(openai): support acknowledged safety checks in computer use (#31984) 2025-07-14 07:33:37 -03:00
Mason Daugherty
a5f1774b76 chore: update PR template (#32011) 2025-07-13 17:00:08 -04:00
Michael Li
a04131489e chore: update error message formatting (#31980) 2025-07-11 15:19:51 -04:00
Mason Daugherty
56d6d69ce9 release(groq): 0.3.6 (#31975) 2025-07-11 10:55:26 -04:00
Fazal
8197504fdc docs: fix typo (#31972)
**Description**:

Fixed a typo: "Hi, I'm Bob and I life in SF." → "Hi, I'm Bob and I live
in SF."

**Connect with me**: https://www.linkedin.com/in/0xFazal/

**More about me**: https://fazal.me/

-**Fazal**.
2025-07-11 09:47:23 -04:00
Akshara
103fd6ac0c docs: add Google-style docstrings to tools and llms modules (zapier, … (#31957)
**Description:**
Added standardized Google-style docstrings to improve documentation
consistency across key modules.

Updated files:
- `tools/zapier/tool.py`
- `tools/jira/tool.py`
- `tools/json/tool.py`
- `llms/base.py`

These changes enhance readability and maintain consistency with
LangChain’s documentation style guide.

**Issue:**
Fixes #21983

**Dependencies:**
None

**Twitter handle :**
@Akshara_p_
2025-07-11 09:46:21 -04:00
ccurme
612ccf847a chore: [openai] bump sdk (#31958) 2025-07-10 15:53:41 -04:00
Mason Daugherty
b5462b8979 chore: update pr_lint.yml to add description (#31954) 2025-07-10 14:45:28 -04:00
Mason Daugherty
6594eb8cc1 docs(xai): update for Grok 4 (#31953) 2025-07-10 11:06:37 -04:00
Christophe Bornet
060fc0e3c9 text-splitters: Add ruff rules FBT (#31935)
See [flake8-boolean-trap
(FBT)](https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt)
2025-07-09 18:36:58 -04:00
Azhagammal
4d9c0b0883 fix[core]: added error message if the query vector or embedding contains NaN values (#31822)
**Description:**  
Added an explicit validation step in
`langchain_core.vectorstores.utils._cosine_similarity` to raise a
`ValueError` if the input query or any embedding contains `NaN` values.
This prevents silent failures or unstable behavior during similarity
calculations, especially when using maximal_marginal_relevance.

**Issue**:
Fixes #31806 

**Dependencies:**  
None

---------

Co-authored-by: Azhagammal S C <azhagammal@kofluence.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-09 18:30:26 -04:00
Michael Li
a8998a1f57 chore[langchain]: fix broad base except in crawler.py (#31941)
2025-07-09 15:17:42 -04:00
Mason Daugherty
042da2a2b2 chore: clean up capitalization in workflow names (#31944) 2025-07-09 15:16:48 -04:00
Mason Daugherty
c026a71a06 chore: add PR title linting (#31943) 2025-07-09 15:04:25 -04:00
Eugene Yurtsev
059942b5fc ci: update issue template to refer Q&A to langchain forum (#31829)
Update issue template to link to the langchain forum
2025-07-09 09:49:16 -04:00
Mason Daugherty
8064d3bdc4 ollama: bump core (#31929) 2025-07-08 16:53:18 -04:00
Mason Daugherty
791c0e2e8f ollama: release 0.3.4 (#31928) 2025-07-08 16:44:36 -04:00
Mason Daugherty
0002b1dafa ollama[patch]: fix model validation, ensure per-call reasoning can be set, tests (#31927)
* update model validation due to change in [Ollama
client](https://github.com/ollama/ollama) - ensure you are running the
latest version (0.9.6) to use `validate_model_on_init`
* add code example and fix formatting for ChatOllama reasoning
* ensure that setting `reasoning` in invocation kwargs overrides
class-level setting
* tests
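
A usage sketch of the invocation-level override (model name is
illustrative):

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1:8b", reasoning=False)

# The per-call kwarg now overrides the class-level setting:
msg = llm.invoke("Why is the sky blue?", reasoning=True)
print(msg.additional_kwargs.get("reasoning_content"))
```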
2025-07-08 16:39:41 -04:00
Mason Daugherty
f33a25773e chroma[nit]: descriptions in pyproject.toml (#31925) 2025-07-08 13:52:12 -04:00
Mason Daugherty
1f829aacf4 ollama[patch]: ruff fixes and rules (#31924)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 13:42:19 -04:00
Mason Daugherty
4d9eefecab fix: bump lockfiles (#31923)
* bump lockfiles after upgrading ruff
* resolve resulting linting fixes
2025-07-08 13:27:55 -04:00
Mason Daugherty
e7f1ceee67 nomic[patch]: ruff fixes and rules (#31922)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
* fix makefile targets
2025-07-08 13:25:18 -04:00
Mason Daugherty
71b361936d ruff: restore stacklevels, disable autofixing (#31919) 2025-07-08 12:55:47 -04:00
Mason Daugherty
cbb418b4bf mistralai[patch]: ruff fixes and rules (#31918)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 12:44:42 -04:00
Mason Daugherty
ae210c1590 ruff: add bugbear across packages (#31917)
WIP, other packages will get in next PRs
2025-07-08 12:22:55 -04:00
Michael Li
5b3e29f809 text splitters: add chunk_size and chunk_overlap validations (#31916)
2025-07-08 12:22:33 -04:00
Michael Li
0a17a62548 exception: update Exception to ValueError for clearer error handling (#31915)
2025-07-08 11:58:53 -04:00
Michael Li
a1c1421bf4 cli: ensure the connection always gets closed in github.py (#31914)
2025-07-08 11:58:33 -04:00
Eugene Yurtsev
83d8be756a langchain[patch]: harden xml parser for xmloutput agent (#31859)
Harden the default implementation of the XML parser for the agent

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:57:21 -04:00
Christophe Bornet
3f839d566a langchain: Add ruff rules B (#31908)
See https://docs.astral.sh/ruff/rules/#flake8-bugbear-b

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:48:18 -04:00
Chris G
65b098325b core: docs: clarify where the kwargs in on_tool_start and on_tool_end go (#31909)
**Description:**  
I traced the kwargs starting at `.invoke()` and it was not clear where
they go; it only becomes clear two layers down. So I changed the docs to
spell this out for the next person.


**Issue:**  
No related issue.

**Dependencies:**  
No dependency changes.

**Twitter handle:**  
Nah. We're good.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:35:31 -04:00
burtenshaw
4e513539f8 docs: update huggingface inference to latest usage (#31906)
This PR updates the doc on Hugging Face's inference offering from
'inference API' to 'inference providers'

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:35:10 -04:00
Christophe Bornet
b8e2420865 langchain: Fix Evaluator's _check_evaluation_args (#31910)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:31:02 -04:00
Christophe Bornet
a3e3fd20f2 langchain: Use pytest.raises and pytest.fail to handle exceptions in tests (#31911)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-08 10:30:35 -04:00
ak2-lucky
2c7eafffec docs: update tbs parameter (#31905)
Description: update serper dev parameter example
2025-07-08 10:27:40 -04:00
Mason Daugherty
dd76209bbd groq[patch]: ruff fixes and rules (#31904)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:25:46 -04:00
Mason Daugherty
750721b4c3 huggingface[patch]: ruff fixes and rules (#31912)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 10:07:57 -04:00
Mason Daugherty
06ab2972e3 fireworks[patch]: ruff fixes and rules (#31903)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:14:59 +00:00
Mason Daugherty
63e3f2dea6 exa[patch]: ruff fixes and rules (#31902)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-08 02:02:42 +00:00
Mason Daugherty
231e8d0f43 deepseek[patch]: ruff fixes and rules (#31901)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:54:44 -04:00
Mason Daugherty
38bd1abb8c chroma[patch]: ruff fixes and rules (#31900)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 21:45:19 -04:00
Mason Daugherty
2a7645300c anthropic[patch]: ruff fixes and rules (#31899)
* bump ruff deps
* add more thorough ruff rules
* fix said rules
2025-07-07 18:32:27 -04:00
Mason Daugherty
e7eac27241 ruff: more rules across the board & fixes (#31898)
* standardizes ruff dep version across all `pyproject.toml` files
* cli: ruff rules and corrections
* langchain: rules and corrections
2025-07-07 17:48:01 -04:00
Mason Daugherty
706a66eccd fix: automatically fix issues with ruff (#31897)
* Perform safe automatic fixes instead of only selecting
[isort](https://docs.astral.sh/ruff/rules/#isort-i)
2025-07-07 14:13:10 -04:00
Mason Daugherty
e686a70ee0 ollama: thinking, tool streaming, docs, tests (#31772)
* New `reasoning` (bool) param to support toggling [Ollama
thinking](https://ollama.com/blog/thinking) (#31573, #31700). If
`reasoning=True`, Ollama's `thinking` content will be placed in the
model responses' `additional_kwargs.reasoning_content`.
  * Supported by:
    * ChatOllama (class level, invocation level TODO)
    * OllamaLLM (TODO)
* Added tests to ensure streaming tool calls is successful (#29129)
* Refactored tests that relied on `extract_reasoning()`
* Myriad docs additions and consistency/typo fixes
* Improved type safety in some spots

Closes #29129
Addresses #31573 and #31700
Supersedes #31701
2025-07-07 13:56:41 -04:00
ccurme
0eb10f31c1 mistralai: release 0.2.11 (#31896) 2025-07-07 17:47:36 +00:00
Michael Li
47d330f4e6 fix: fix file open with encoding in chat_history.py (#31884)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:30:59 -04:00
Christophe Bornet
4215261be1 core: Cleanup pyproject (#31857)
* Reorganize some toml properties
* Fix some E501: line too long

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:30:48 -04:00
Mason Daugherty
a751a23c4e fix: remove unused type ignore from three_values fixture in TestAsyncInMemoryStore (#31895) 2025-07-07 13:22:53 -04:00
Michael Li
8ef01f8444 fix: complete exception handling for UpstashRedisEntityStore (#31893)
2025-07-07 13:13:38 -04:00
Mason Daugherty
6c23c711fb fix: lint/format (#31894) 2025-07-07 13:11:06 -04:00
Christophe Bornet
03e8327e01 core: Ruff preview fixes (#31877)
Auto-fixes from `uv run ruff check --fix --unsafe-fixes --preview`

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 13:02:40 -04:00
Christophe Bornet
ba144c9d7f langchain: Add ruff rule RUF (#31874)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#ruff-specific-rules-ruf

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 12:49:38 -04:00
Christophe Bornet
ed35372580 langchain: Add ruff rules FBT (#31885)
* Fixed for private functions and in tests
* Added noqa escapes for public functions
See https://docs.astral.sh/ruff/rules/#flake8-boolean-trap-fbt

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 12:35:55 -04:00
Christophe Bornet
56bbfd9723 langchain: Add ruff rule RET (#31875)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-return-ret

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-07 15:33:18 +00:00
Christophe Bornet
fceebbb387 langchain: Add ruff rules C4 (#31879)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-comprehensions-c4

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:55:52 -04:00
Christophe Bornet
4134b36db8 core: make ruff rule PLW1510 unfixable (#31868)
See
https://github.com/astral-sh/ruff/discussions/17087#discussioncomment-12675815

The autofix is misleading: it chooses to add `check=False` to keep the
runtime behavior, but in reality it hides the fact that the user most
probably wants `check=True`.
2025-07-07 10:28:30 -04:00
Christophe Bornet
9368b92b2c standard-tests: Ruff autofixes (#31862)
Auto-fixes from ruff with rule ALL
2025-07-07 10:27:39 -04:00
Christophe Bornet
2df3fdf40d langchain: Add ruff rules SIM (#31881)
See https://docs.astral.sh/ruff/rules/#flake8-simplify-sim

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:27:04 -04:00
Christophe Bornet
f06380516f langchain: Add ruff rules A (#31888)
See https://docs.astral.sh/ruff/rules/#flake8-builtins-a

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:20:27 -04:00
Christophe Bornet
49c316667d langchain: Add ruff rules EM (#31873)
All auto-fixes.
See https://docs.astral.sh/ruff/rules/#flake8-errmsg-em

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-07-07 10:13:56 -04:00
Christophe Bornet
1276bf3e1d standard-tests: Add ruff rules PGH (#31869)
See https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
2025-07-07 10:07:39 -04:00
Christophe Bornet
53c75abba2 langchain: Add ruff rules PIE (#31880)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#flake8-pie-pie
2025-07-07 10:07:12 -04:00
Christophe Bornet
a46a2b8bda cli: Ruff autofixes (#31863)
Auto-fixes from ruff with rule ALL
2025-07-07 10:06:34 -04:00
Christophe Bornet
451c90fefa text-splitters: Ruff autofixes (#31858)
Auto-fixes from ruff with rule `ALL`
2025-07-07 10:06:08 -04:00
Christophe Bornet
8aed3b61a9 core: Bump ruff version to 0.12 (#31846) 2025-07-07 10:02:51 -04:00
Michael Li
73552883c3 cli: fix dockerfile incorrect copy (#31883) 2025-07-06 21:20:57 -04:00
m27315
013ce2c47f huggingface: fix HuggingFaceEndpoint._astream() got multiple values for argument 'stop' (#31385) 2025-07-06 15:18:53 +00:00
CuberMessenegr
e934788ca2 docs: Fix uppercase typo in the Similarity search section (#31886) 2025-07-06 10:24:13 -04:00
Christophe Bornet
bf05229029 langchain: Add ruff rule W (#31876)
All auto-fixes
See https://docs.astral.sh/ruff/rules/#warning-w

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-07-05 21:57:30 +00:00
ccurme
3f4b355eef anthropic[patch]: pass back in citations in multi-turn conversations (#31882)
Also adds VCR cassettes for some heavy tests.
2025-07-05 17:33:22 -04:00
Christophe Bornet
46fe09f013 cli: Bump ruff version to 0.12 (#31864) 2025-07-05 17:15:24 -04:00
Christophe Bornet
df5cc024fd langchain: Bump ruff version to 0.12 (#31867) 2025-07-05 17:13:55 -04:00
Christophe Bornet
a15c3e0856 text-splitters: Bump ruff version to 0.12 (#31866) 2025-07-05 17:13:08 -04:00
Christophe Bornet
e1eb3f8d6f standard-tests: Bump ruff version to 0.12 (#31865) 2025-07-05 17:12:00 -04:00
NeatGuyCoding
64815445e4 langchain[patch]: fix a bug where now.replace(day=now.day - 1) would raise a ValueError when now.day is equal to 1 (#31878) 2025-07-05 14:54:26 -04:00
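The failure mode this fixes, sketched alongside a safe alternative:

```python
from datetime import datetime, timedelta

now = datetime(2025, 7, 1)
# now.replace(day=now.day - 1) raises ValueError: day is out of range for month
yesterday = now - timedelta(days=1)  # crosses month boundaries correctly
print(yesterday)  # 2025-06-30 00:00:00
```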
Viet Hoang
15dc684d34 docs: Integration with GreenNode Serverless AI (#31836) 2025-07-05 13:48:35 -04:00
Nithish Raghunandanan
8bdb1de006 [docs] Update couchbase provider, vector store & features list (#31719) 2025-07-05 13:34:48 -04:00
Mohammad Mohtashim
b26d2250ba core[patch]: Int Combine when Merging Dicts (#31572)
- **Description:** Combining the int types by adding them, which makes
the most sense.
- **Issue:**  #31565
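
Illustrative behavior after the change (`merge_dicts` is a private
helper in `langchain_core.utils._merge`):

```python
from langchain_core.utils._merge import merge_dicts

# Int values are now combined by addition when merging:
print(merge_dicts({"tokens": 3}, {"tokens": 5}))  # {'tokens': 8}
```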
2025-07-04 14:44:16 -04:00
Mason Daugherty
6a5073b227 langchain[patch]: Add bandit rules (#31818)
Integrate Bandit for security analysis, suppress warnings for specific issues, and address potential vulnerabilities such as hardcoded passwords and SQL injection risks. Adjust documentation and formatting for clarity.
2025-07-03 14:20:33 -04:00
ccurme
df06041eb2 docs: Anthropic search_result nits (#31855) 2025-07-03 14:12:10 -04:00
ccurme
ade642b7c5 Revert "infra: temporarily skip tests" (#31854)
Reverts langchain-ai/langchain#31853
2025-07-03 13:55:29 -04:00
ccurme
c9f45dc323 infra: temporarily skip tests (#31853)
Tests failed twice with different timeout errors.
2025-07-03 13:39:14 -04:00
ccurme
f88fff0b8a anthropic: release 0.3.17 (#31852) 2025-07-03 13:18:43 -04:00
ccurme
7cb9388c33 Revert "infra: drop anthropic from core test matrix" (#31851)
Reverts langchain-ai/langchain#31850
2025-07-03 17:14:26 +00:00
ccurme
21664985c7 infra: drop anthropic from core test matrix (#31850)
Overloaded errors blocking release. Will revert after.
2025-07-03 12:52:25 -04:00
ccurme
b140d16696 docs: update ChatAnthropic guide (#31849) 2025-07-03 12:51:11 -04:00
ccurme
2090f85789 core: release 0.3.68 (#31848)
Also add `search_result` to recognized tool message block types.
2025-07-03 12:36:25 -04:00
Mason Daugherty
572020c4d8 ollama: add validate_model_on_init, catch more errors (#31784)
* Ensure access to local model during `ChatOllama` instantiation
(#27720). This adds a new param `validate_model_on_init` (default:
`true`)
* Catch a few more errors from the Ollama client to assist users
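
A usage sketch of the new flag (model name is illustrative):

```python
from langchain_ollama import ChatOllama

# Fails fast at construction time if the model isn't pulled locally:
llm = ChatOllama(model="llama3.1", validate_model_on_init=True)
```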
2025-07-03 11:07:11 -04:00
Mason Daugherty
1a3a8db3c9 docs: anthropic formatting cleanup (#31847)
inline URLs, capitalization, code blocks
2025-07-03 14:50:23 +00:00
Christophe Bornet
ee3709535d text-splitters: bump spacy version to 3.8.7 (#31834)
This allows using spacy with Python 3.13
2025-07-03 10:13:25 -04:00
Christophe Bornet
b8e9b4adfc cli: Add ruff rule UP (pyupgrade) (#31843)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-07-03 14:12:46 +00:00
Christophe Bornet
cd7dce687a standard-tests: Add ruff rule UP (pyupgrade) (#31842)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed
2025-07-03 10:12:31 -04:00
Christophe Bornet
802d2bf249 text-splitters: Add ruff rule UP (pyupgrade) (#31841)
See https://docs.astral.sh/ruff/rules/#pyupgrade-up
All auto-fixed except `typing.AbstractSet` -> `collections.abc.Set`
2025-07-03 10:11:35 -04:00
Mason Daugherty
911b0b69ea groq: Add service tier option to ChatGroq (#31801)
- Allows users to select a [flex
processing](https://console.groq.com/docs/flex-processing) service tier
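
A usage sketch (model name is illustrative; tier values are documented
in the linked Groq docs):

```python
from langchain_groq import ChatGroq

# Opt into Groq's flex processing tier for this model instance:
llm = ChatGroq(model="llama-3.3-70b-versatile", service_tier="flex")
```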
2025-07-03 10:11:18 -04:00
Eugene Yurtsev
10ec5c8f02 text-splitters: 0.3.9 (#31844)
Release langchain-text-splitters 0.3.9
2025-07-03 10:02:35 -04:00
Eugene Yurtsev
6dca787a9d ci: set explicit workflow permissions (#31830)
* Set explicit workflow permissions
* Should be a no-op since we're using restricted GITHUB_TOKENs by
default
2025-07-03 10:02:18 -04:00
Christophe Bornet
46745f91b5 core: Use parametric tests in test_openai_tools (#31839) 2025-07-03 08:43:46 -04:00
Eugene Yurtsev
181c22c512 update CODEOWNERS (#31831)
Update CODEOWNERS
2025-07-02 17:31:49 -04:00
Cole Murray
43eef43550 security: Remove xslt_path and harden XML parsers in HTMLSectionSplitter: package: langchain-text-splitters (#31819)
## Summary
- Removes the `xslt_path` parameter from HTMLSectionSplitter to
eliminate XXE attack vector
- Hardens XML/HTML parsers with secure configurations to prevent XXE
attacks
- Adds comprehensive security tests to ensure the vulnerability is fixed

  ## Context
This PR addresses a critical XXE vulnerability discovered in the
HTMLSectionSplitter component. The vulnerability allowed attackers to:
- Read sensitive local files (SSH keys, passwords, configuration files)
  - Perform Server-Side Request Forgery (SSRF) attacks
  - Exfiltrate data to attacker-controlled servers

  ## Changes Made
1. **Removed `xslt_path` parameter** - This eliminates the primary
attack vector where users could supply malicious XSLT files
2. **Hardened XML parsers** - Added security configurations to prevent
XXE attacks even with the default XSLT:
     - `no_network=True` - Blocks network access
     - `resolve_entities=False` - Prevents entity expansion
     - `load_dtd=False` - Disables DTD processing
     - `XSLTAccessControl.DENY_ALL` - Blocks all file/network I/O in XSLT
transformations

3. **Added security tests** - New test file `test_html_security.py` with
comprehensive tests for various XXE attack vectors
4. **Updated existing tests** - Modified tests that were using the
removed `xslt_path` parameter
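
For item 2, an illustrative lxml configuration along these lines (a
sketch, not the exact patch; the XSLT filename is hypothetical):

```python
from lxml import etree

parser = etree.XMLParser(
    no_network=True,         # block network access
    resolve_entities=False,  # prevent entity expansion
    load_dtd=False,          # disable DTD processing
)
xslt_doc = etree.parse("converting_to_header.xslt", parser)
# Deny all file/network I/O inside XSLT transformations:
transform = etree.XSLT(xslt_doc, access_control=etree.XSLTAccessControl.DENY_ALL)
```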

  ## Test Plan
  - [x] All existing tests pass
  - [x] New security tests verify XXE attacks are blocked
  - [x] Code passes linting and formatting checks
  - [x] Tested with both old and new versions of lxml


Twitter handle: @_colemurray
2025-07-02 15:24:08 -04:00
Mason Daugherty
815d11ed6a docs: Add PR info doc (#31833) 2025-07-02 19:20:27 +00:00
Eugene Yurtsev
73fefe0295 core[patch]: Use context manager for FileCallbackHandler (#31813)
Recommend using a context manager for FileCallbackHandler to avoid
opening too many file descriptors
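
A usage sketch of the recommended pattern (the runnable here is
illustrative):

```python
from langchain_core.callbacks import FileCallbackHandler
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Say {word}")

# The handler closes its file descriptor when the block exits:
with FileCallbackHandler("output.log") as handler:
    prompt.invoke({"word": "hi"}, config={"callbacks": [handler]})
```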

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-07-02 13:31:58 -04:00
ojumah20
377e5f5204 docs: Update agents.ipynb (#31820) 2025-07-02 10:42:28 -04:00
Mason Daugherty
eb12294583 langchain-xai[patch]: Add ruff bandit rules to linter (#31816)
- Add ruff bandit rules
- Some formatting
2025-07-01 18:59:06 +00:00
Mason Daugherty
86a698d1b6 langchain-qdrant[patch]: Add ruff bandit rules to linter (#31815)
- Add ruff bandit rules
- Address a few s101s
- Some formatting
2025-07-01 18:42:55 +00:00
Mason Daugherty
b03e326231 langchain-prompty[patch]: Add ruff bandit rules to linter (#31814)
- Add ruff bandit rules
- Address some s101 assertion warnings
- Address s506 by using `yaml.safe_load()`
2025-07-01 18:32:02 +00:00
Mason Daugherty
3190c4132f langchain-perplexity[patch]: Add ruff bandit rules to linter (#31812)
- Add ruff bandit rules
2025-07-01 18:17:28 +00:00
Mason Daugherty
f30fe07620 update pyproject.toml flake8 comment (#31810) 2025-07-01 18:16:38 +00:00
Mason Daugherty
d0dce5315f langchain-ollama[patch]: Add ruff bandit rules to linter (#31811)
- Add ruff bandit rules
2025-07-01 18:16:07 +00:00
Mason Daugherty
c9e1ce2966 groq: release 0.3.5 (#31809) 2025-07-01 13:21:23 -04:00
Mason Daugherty
404d8408f4 langchain-nomic[patch]: Add ruff bandit rules to linter (#31805)
- Add ruff bandit rules
- Some formatting
2025-07-01 11:39:11 -04:00
Mason Daugherty
0279af60b5 langchain-mistralai[patch]: Add ruff bandit rules to linter, formatting (#31803)
- Add ruff bandit rules
- Address a s101 error
- Formatting
2025-07-01 11:08:01 -04:00
Mason Daugherty
425ee52581 langchain-huggingface[patch]: Add ruff bandit rules to linter (#31798)
- Add ruff bandit rules
2025-07-01 11:07:52 -04:00
Mason Daugherty
0efaa483e4 langchain-groq[patch]: Add ruff bandit rules to linter (#31797)
- Add ruff bandit rules
- Address s105 errors
2025-07-01 11:07:42 -04:00
Mason Daugherty
479b6fd7c5 langchain-fireworks[patch]: Add ruff bandit rules to linter (#31796)
- Add ruff bandit rules
- Address a s113 error
2025-07-01 11:07:26 -04:00
Lauren Hirata Singh
625f7c3710 docs: Add forum link to footer (#31795)
2025-06-30 16:14:03 -04:00
Anush
2d3020f6cd docs: Update vectorstores feature matrix for Qdrant (#31786)
## Description

- `Qdrant` vector store supports `add_documents` with IDs.
- Multi-tenancy is supported via [payload
filters](https://qdrant.tech/documentation/guides/multiple-partitions/)
and
[JWT](https://qdrant.tech/documentation/guides/security/#granular-access-control-with-jwt)
if needed.
2025-06-30 14:02:07 -04:00
Mason Daugherty
33c9bf1adc langchain-openai[patch]: Add ruff bandit rules to linter (#31788) 2025-06-30 14:01:32 -04:00
Mason Daugherty
645e25f624 langchain-anthropic[patch]: Add ruff bandit rules (#31789) 2025-06-30 14:00:53 -04:00
Mason Daugherty
247673ddb8 chroma: add ruff bandit rules (#31790) 2025-06-30 14:00:08 -04:00
Mason Daugherty
1a5120dc9d langchain-deepseek[patch]: add ruff bandit rules (#31792)
add ruff bandit rules
2025-06-30 13:59:35 -04:00
Mason Daugherty
6572399174 langchain-exa: add ruff bandit rules (#31793)
Add ruff bandit rules
2025-06-30 13:58:38 -04:00
ccurme
04cc674e80 core: release 0.3.67 (#31791) 2025-06-30 12:00:39 -04:00
ccurme
46cef90f7b core: expose tool message recognized block types (#31787) 2025-06-30 11:19:34 -04:00
ccurme
428c276948 infra: skip notebook in CI (#31773) 2025-06-28 14:00:45 -04:00
Yiwei
375f53adac IBM DB2 vector store documentation addition (#31008) 2025-06-27 18:34:32 +00:00
ccurme
9f17fabc43 openai: release 0.3.27 (#31769)
To pick up https://github.com/langchain-ai/langchain/pull/31756.
2025-06-27 13:44:45 -04:00
Andrew Jaeger
0189c50570 openai[fix]: Correctly set usage metadata for OpenAI Responses API (#31756) 2025-06-27 15:35:14 +00:00
Mason Daugherty
9aa75eaef3 docs: enhance docstring for disable_streaming parameter in BaseChatModel (#31759)
Resolves #31758
2025-06-27 11:27:41 -04:00
ccurme
e8e89b0b82 docs: updates from langchain-openai 0.3.26 (#31764) 2025-06-27 11:27:25 -04:00
Eugene Yurtsev
eb08b064bb docs: Remove giscus comments (#31755)
Remove giscus comments from langchain
2025-06-27 09:56:55 -04:00
Mason Daugherty
e1aff00cc1 groq: support reasoning_effort, update docs for clarity (#31754)
- There was some ambiguous wording that has been updated to hopefully
clarify the functionality of `reasoning_format` in ChatGroq.
- Added support for `reasoning_effort`
- Added links to see models capable of `reasoning_format` and
`reasoning_effort`
- Other minor nits
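
A usage sketch combining both options (model and values are
illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(
    model="qwen/qwen3-32b",
    reasoning_format="parsed",
    reasoning_effort="default",
)
```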
2025-06-27 09:43:40 -04:00
ccurme
ea1345a58b openai[patch]: update cassette (#31752)
Following changes in `openai==1.92`.
2025-06-26 14:52:12 -04:00
ccurme
066be383e3 openai[patch]: update test following release of openai 1.92 (#31751)
Added new required fields for `ResponseFunctionWebSearch`
2025-06-26 18:22:58 +00:00
ccurme
61feaa4656 openai: release 0.3.26 (#31749) 2025-06-26 13:51:51 -04:00
ccurme
88d5f3edcc openai[patch]: allow specification of output format for Responses API (#31686) 2025-06-26 13:41:43 -04:00
Mason Daugherty
59c2b81627 docs: fix some inline links (#31748) 2025-06-26 13:35:14 -04:00
Lauren Hirata Singh
83774902e7 docs: Academy banner (#31745)
2025-06-26 10:03:06 -04:00
ccurme
0ae434be21 anthropic: release 0.3.16 (#31744) 2025-06-26 09:09:29 -04:00
Mason Daugherty
a08e73f07e docs: remove trailing backticks (#31740) 2025-06-26 01:23:39 -04:00
Martin Schaer
421554007f docs: Update surrealdb vectorestore documentation (#31199) 2025-06-25 20:16:43 +00:00
Mason Daugherty
2fb27b63f5 ollama: update tests, docs (#31736)
- docs: for the Ollama notebooks, improve the specificity of some links,
add `homebrew` install info, update some wording
- tests: reduce the number of local models needed to run by half, from 4
→ 2 (shedding 8 GB of required installs)
- bump deps (non-breaking) in anticipation of the upcoming "thinking" PR
2025-06-25 20:13:20 +00:00
ccurme
a1f3147989 docs: update sort order for integrations table (#31737)
Pull latest download statistics.
2025-06-25 16:03:36 -04:00
ccurme
84500704ab openai[patch]: fix bug where function call IDs were not populated (#31735)
(optional) IDs were getting dropped in some cases.
2025-06-25 19:08:27 +00:00
ccurme
0bf223d6cf openai[patch]: add attribute to always use previous_response_id (#31734) 2025-06-25 19:01:43 +00:00
ccurme
b02bd67788 anthropic[patch]: cache clients (#31659) 2025-06-25 14:49:02 -04:00
Michael Li
e3f1ce0ac5 docs: fix retriever typos (#31733) 2025-06-25 16:06:21 +00:00
Michael Li
5d734ac8a8 docs: fix typo in clarifai.ipynb (#31732) 2025-06-25 15:59:29 +00:00
Michael Li
a09583204c docs: fix typos in tool_feat_table.py (#31731) 2025-06-25 15:56:59 +00:00
Michael Li
df1a4c0085 docs: fix typo in timescalevector.ipynb (#31727) 2025-06-25 11:49:19 -04:00
Michael Li
990a69d9d7 docs: fix typo in globals.py (#31728) 2025-06-25 11:47:02 -04:00
Mason Daugherty
3c3320ae30 fix: update import paths for ChatOllama to use langchain_ollama instead of community (#31721) 2025-06-24 16:19:31 -04:00
1705 changed files with 64124 additions and 42178 deletions


@@ -5,26 +5,31 @@ This project includes a [dev container](https://containers.dev/), which lets you
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/langchain-ai/langchain.
1. Click the **Code** drop-down menu at the top of <https://github.com/langchain-ai/langchain>.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master**.
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: If you click the link above you will open the main repo (langchain-ai/langchain) and not your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
> [!NOTE]
> If you click the link above you will open the main repo (`langchain-ai/langchain`) and *not* your local cloned repo. This is fine if you only want to run and test the library, but if you want to contribute you can use the link below and replace with your username and cloned repo name:
```txt
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<YOUR_USERNAME>/<YOUR_CLONED_REPO_NAME>
```
Then you will have a local cloned repo where you can contribute and then create pull requests.
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
If you already have VS Code and Docker installed, you can use the button above to get started. This will use VSCode to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
Alternatively you can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
@@ -40,5 +45,5 @@ You can learn more in the [Dev Containers documentation](https://code.visualstud
## Tips and tricks
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
- If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
- If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.


@@ -1,36 +1,58 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
// Name for the dev container
"name": "langchain",
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/langchain",
"mounts": [
"source=langchain-workspaces,target=/workspaces/langchain,type=volume"
],
// Prevent the container from shutting down
"overrideCommand": true,
// Features to add to the dev container. More info: https://containers.dev/features
"features": {
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/github-cli:1": {}
},
"containerEnv": {
"UV_LINK_MODE": "copy"
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Run commands after the container is created
"postCreateCommand": "uv sync && echo 'LangChain (Python) dev environment ready!'",
// Configure tool-specific properties.
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.debugpy",
"ms-python.mypy-type-checker",
"ms-python.isort",
"unifiedjs.vscode-mdx",
"davidanson.vscode-markdownlint",
"ms-toolsai.jupyter",
"GitHub.copilot",
"GitHub.copilot-chat"
],
"settings": {
"python.defaultInterpreterPath": ".venv/bin/python",
"python.formatting.provider": "none",
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": true
}
}
}
}
}
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}

View File

@@ -4,26 +4,9 @@ services:
build:
dockerfile: libs/langchain/dev.Dockerfile
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:
# MONGO_ROOT_USERNAME: root
# MONGO_ROOT_PASSWORD: example123
# depends_on:
# - mongo
# mongo:
# image: mongo
# restart: unless-stopped
# environment:
# MONGO_INITDB_ROOT_USERNAME: root
# MONGO_INITDB_ROOT_PASSWORD: example123
# ports:
# - "27017:27017"
# networks:
# - langchain-network
networks:
langchain-network:

.editorconfig Normal file
View File

@@ -0,0 +1,52 @@
# top-most EditorConfig file
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88
# JSON files
[*.json]
indent_style = space
indent_size = 2
# YAML files
[*.{yml,yaml}]
indent_style = space
indent_size = 2
# Markdown files
[*.md]
indent_style = space
indent_size = 2
trim_trailing_whitespace = false
# Configuration files
[*.{toml,ini,cfg}]
indent_style = space
indent_size = 4
# Shell scripts
[*.sh]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 4
# Jupyter notebooks
[*.ipynb]
# Jupyter may include trailing whitespace in cell
# outputs that's semantically meaningful
trim_trailing_whitespace = false

.github/CODEOWNERS vendored
View File

@@ -1,2 +1,3 @@
/.github/ @baskaryan @ccurme
/.github/ @baskaryan @ccurme @eyurtsev
/libs/core/ @eyurtsev
/libs/packages.yml @ccurme

View File

@@ -3,4 +3,8 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
## New features
For new features, please start a new [discussion](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.

View File

@@ -1,38 +0,0 @@
labels: [idea]
body:
- type: checkboxes
id: checks
attributes:
label: Checked
description: Please confirm and check all the following options.
options:
- label: I searched existing ideas and did not find a similar one
required: true
- label: I added a very descriptive title
required: true
- label: I've clearly described the feature request and motivation for it
required: true
- type: textarea
id: feature-request
validations:
required: true
attributes:
label: Feature request
description: |
A clear and concise description of the feature proposal. Please provide links to any relevant GitHub repos, papers, or other resources if relevant.
- type: textarea
id: motivation
validations:
required: true
attributes:
label: Motivation
description: |
Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too.
- type: textarea
id: proposal
validations:
required: false
attributes:
label: Proposal (If applicable)
description: |
If you would like to propose a solution, please describe it here.

View File

@@ -1,122 +0,0 @@
labels: [Question]
body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier for us to help you.
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓
Relevant links to check before opening a question to see if your question has already been answered, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this question.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- type: checkboxes
id: help
attributes:
label: Commit to Help
description: |
After submitting this, I commit to one of:
* Read open questions until I find 2 where I can help someone and add a comment to help there.
* I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Once my question is answered, I will mark the answer as "accepted".
options:
- label: I commit to help with one of those options 👆
required: true
- type: textarea
id: example
attributes:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
raise NotImplementedError('For demo purpose')
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
render: python
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: |
What is the problem, question, or error?
Write a short description explaining what you are doing, what you expect to happen, and what is currently happening.
placeholder: |
* I'm trying to use the `langchain` library to do X.
* I expect to see Y.
* Instead, it does Z.
validations:
required: true
- type: textarea
id: system-info
attributes:
label: System Info
description: |
Please share your system info with us.
"pip freeze | grep langchain"
platform (windows / linux / mac)
python version
OR if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
placeholder: |
"pip freeze | grep langchain"
platform
python version
Alternatively, if you're on a recent version of langchain-core you can paste the output of:
python -m langchain_core.sys_info
These will only surface LangChain packages; don't forget to include any other relevant
packages you're using (if you're not sure what's relevant, you can paste the entire output of `pip freeze`).
validations:
required: true

View File

@@ -1,33 +1,33 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the GitHub Discussions.
labels: ["02 Bug Report"]
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
body:
- type: markdown
attributes:
value: >
value: |
Thank you for taking the time to file a bug report.
Use this to report bugs in LangChain.
If you're not certain that your issue is due to a bug in LangChain, please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions)
to ask for help with your issue.
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
* [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
* [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
- label: This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
required: true
- label: I added a clear and descriptive title that summarizes this issue.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
@@ -35,6 +35,8 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
@@ -45,19 +47,19 @@ body:
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
**Important!**
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
@@ -99,16 +101,18 @@ body:
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Run the following command in your terminal and paste the output here:
python -m langchain_core.sys_info
`python -m langchain_core.sys_info`
or if you have an existing python interpreter running:
```python
from langchain_core import sys_info
sys_info.print_sys_info()
```
alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info

View File

@@ -1,12 +1,6 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 🤔 Question or Problem
about: Ask a question or ask about a problem in GitHub Discussions.
url: https://www.github.com/langchain-ai/langchain/discussions/categories/q-a
- name: Feature Request
url: https://www.github.com/langchain-ai/langchain/discussions/categories/ideas
about: Suggest a feature or an idea
- name: Show and tell
about: Show what you built with LangChain
url: https://www.github.com/langchain-ai/langchain/discussions/categories/show-and-tell
- name: LangChain Forum
url: https://forum.langchain.com/
about: General community discussions, support, and feature requests

View File

@@ -1,58 +1,59 @@
name: Documentation
description: Report an issue related to the LangChain documentation.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
title: "docs: <Please write a comprehensive title after the 'docs: ' prefix>"
labels: [documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or to report issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://python.langchain.com/api_reference/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.
- type: markdown
attributes:
value: |
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or to report issues with your code.
If you have usage questions or need help solving some problem,
please use the [LangChain Forum](https://forum.langchain.com/).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
* [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
* [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.

View File

@@ -5,9 +5,9 @@ body:
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
- type: checkboxes

View File

@@ -1,28 +1,32 @@
Thank you for contributing to LangChain!
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes.
- Example: "core: add foobar LLM"
Thank you for contributing to LangChain! Follow these steps to mark your pull request as ready for review. **If any of these steps are not completed, your PR will not be considered for review.**
- [ ] **PR title**: Follows the format: {TYPE}({SCOPE}): {DESCRIPTION}
- Examples:
- feat(core): add multi-tenant support
- fix(cli): resolve flag parsing error
- docs(openai): update API usage examples
- Allowed `{TYPE}` values:
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- Note: the `{DESCRIPTION}` must not start with an uppercase letter.
- Once you've written the title, please delete this checklist item; do not include it in the PR.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- **Description:** a description of the change. Include a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) if applicable to a relevant issue.
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
2. An example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Add tests and docs**: If you're adding a new integration, please include
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://python.langchain.com/docs/contributing/) for more.
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.

.github/copilot-instructions.md vendored Normal file
View File

@@ -0,0 +1,325 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
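For illustration, here is a minimal sketch of a backward-compatible change (the `timeout` parameter and the lookup logic are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: str


def get_user(user_id: str, verbose: bool = False, *, timeout: float = 5.0) -> User:
    """Retrieve user by ID with optional verbose output.

    ``timeout`` is keyword-only, so existing positional callers such as
    ``get_user("abc", True)`` keep working unchanged.
    """
    # Hypothetical lookup; a real implementation would query a datastore.
    return User(user_id=user_id)
```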
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
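Where code depends on an external service, a sketch like the following keeps the test deterministic (`process_users` and its client are hypothetical; only the mocking pattern matters):

```python
from typing import Any
from unittest.mock import MagicMock


def process_users(client: Any) -> list[str]:
    """Hypothetical function that filters users fetched from a client."""
    return [u for u in client.fetch_users() if u != "admin"]


def test_process_users_with_mock_client() -> None:
    """The external client is mocked, so no network access occurs."""
    client = MagicMock()
    client.fetch_users.return_value = ["admin", "alice"]
    assert process_users(client) == ["alice"]
```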
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
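The `msg` convention and the narrow-`except` rule from the checklist might look like this in practice (a sketch building on `load_config` above):

```python
import json


def load_config_checked(path: str) -> dict:
    """Load a JSON config, raising a descriptive error on bad input."""
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as e:  # narrow except, never bare
        msg = f"Config file {path!r} is not valid JSON: {e}"
        raise ValueError(msg) from e
```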
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field


@dataclass
class ProcessingResult:
"""Result of data processing operation."""
items_processed: int
success: bool
    errors: list[str] = field(default_factory=list)
class DataProcessor:
"""Handles data validation, storage, and notification."""
def __init__(self, db_conn: Database, email_client: EmailClient):
self.db = db_conn
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
    # Placeholder implementation: query the datastore and return text results.
    results = f"Results for: {query}"
    return results
```
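Tools created with `@tool` are runnables, so a quick usage check can look like this (assuming the `search_database` sketch above):

```python
result = search_database.invoke({"query": "vector stores"})
print(result)
```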
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
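For example, in place of the legacy `LLMChain`, composition with the runnable `|` operator is the current pattern. A minimal sketch, using `FakeListChatModel` as a stand-in for a real model integration:

```python
from langchain_core.language_models import FakeListChatModel
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
model = FakeListChatModel(responses=["A short summary."])  # stand-in model

chain = prompt | model | StrOutputParser()
print(chain.invoke({"text": "LangChain composes runnables with `|`."}))
```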
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
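A minimal shape for such an integration might look like this sketch (`EchoChatModel` is hypothetical; only the required members are shown):

```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatResult


class EchoChatModel(BaseChatModel):
    """Toy chat model that echoes the last message back."""

    @property
    def _llm_type(self) -> str:
        return "echo-chat-model"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        reply = AIMessage(content=messages[-1].content)
        return ChatResult(generations=[ChatGeneration(message=reply)])
```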
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format

View File

@@ -16,6 +16,7 @@ LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
]
# when set to True, we are ignoring core dependents

View File

@@ -73,6 +73,7 @@ def main():
for p in package_yaml["packages"]
if (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
and p["name"] != "langchain-ai21" # Skip AI21 due to dependency conflicts
])
# Move libraries to their new locations
@@ -82,6 +83,7 @@ def main():
if not p.get("disabled", False)
and (p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref"))
and p["repo"] != "langchain-ai/langchain"
and p["name"] != "langchain-ai21" # Skip AI21 due to dependency conflicts
])
# Delete ones without a pyproject.toml

View File

@@ -1,6 +0,0 @@
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -1,4 +1,4 @@
name: compile-integration-test
name: '🔗 Compile Integration Tests'
on:
workflow_call:
@@ -12,6 +12,9 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -22,24 +25,24 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "uv run pytest -m compile tests/integration_tests #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: Install integration dependencies
- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
- name: Check integration tests compile
- name: '🔗 Check Integration Tests Compile'
shell: bash
run: uv run pytest -m compile tests/integration_tests
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu

View File

@@ -1,4 +1,5 @@
name: Integration tests
name: '🚀 Integration Tests'
run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'
on:
workflow_dispatch:
@@ -11,6 +12,10 @@ on:
required: true
type: string
description: "Python version to use"
default: "3.11"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -21,20 +26,20 @@ jobs:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: Python ${{ inputs.python-version }}
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
- name: Run integration tests
- name: '🚀 Run Integration Tests'
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}

View File

@@ -1,4 +1,6 @@
name: lint
name: '🧹 Code Linting'
# Runs code quality checks using ruff, mypy, and other linting tools
# Checks both package code and test code for consistency
on:
workflow_call:
@@ -12,6 +14,9 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
@@ -21,19 +26,21 @@ env:
UV_FROZEN: "true"
jobs:
# Linting job - runs quality checks on package and test code
build:
name: "make lint #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- uses: actions/checkout@v4
- name: '📋 Checkout Code'
uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
- name: '📦 Install Lint & Typing Dependencies'
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
@@ -46,12 +53,12 @@ jobs:
run: |
uv sync --group lint --group typing
- name: Analysing the code with our lint
- name: '🔍 Analyze Package Code with Linters'
working-directory: ${{ inputs.working-directory }}
run: |
make lint_package
- name: Install unit test dependencies
- name: '📦 Install Unit Test Dependencies'
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
@@ -64,13 +71,13 @@ jobs:
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test
- name: Install unit+integration test dependencies
- name: '📦 Install Unit + Integration Test Dependencies'
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test --group test_integration
- name: Analysing the code with our lint
- name: '🔍 Analyze Test Code with Linters'
working-directory: ${{ inputs.working-directory }}
run: |
make lint_tests

View File

@@ -1,5 +1,5 @@
name: release
run-name: Release ${{ inputs.working-directory }} by @${{ github.actor }}
name: '🚀 Package Release'
run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
on:
workflow_call:
inputs:
@@ -14,11 +14,16 @@ on:
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
release-version:
required: true
type: string
default: '0.1.0'
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
description: "Release from a non-master branch (danger!) - Only use for hotfixes"
env:
PYTHON_VERSION: "3.11"
@@ -26,6 +31,8 @@ env:
UV_NO_SYNC: "true"
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
@@ -64,7 +71,7 @@ jobs:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
- name: Check version
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
@@ -93,7 +100,7 @@ jobs:
${{ inputs.working-directory }}
ref: ${{ github.ref }} # this scopes to just ref'd branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
- name: Check tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
@@ -109,7 +116,7 @@ jobs:
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
@@ -120,7 +127,7 @@ jobs:
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
@@ -482,7 +489,7 @@ jobs:
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Tag
uses: ncipollo/release-action@v1
with:

View File

@@ -1,4 +1,6 @@
name: test
name: '🧪 Unit Testing'
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range
on:
workflow_call:
@@ -12,36 +14,41 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# Main test job - runs unit tests with current deps, then retests with minimum versions
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test #${{ inputs.python-version }}"
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v4
- name: '📋 Checkout Code'
uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
id: setup-python
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev
- name: Run core tests
- name: '🧪 Run Core Unit Tests'
shell: bash
run: |
make test
- name: Get minimum versions
- name: '🔍 Calculate Minimum Dependency Versions'
working-directory: ${{ inputs.working-directory }}
id: min-version
shell: bash
@@ -52,7 +59,7 @@ jobs:
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
- name: '🧪 Run Tests with Minimum Dependencies'
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
@@ -61,7 +68,7 @@ jobs:
make tests
working-directory: ${{ inputs.working-directory }}
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu

View File

@@ -1,4 +1,4 @@
name: test_doc_imports
name: '📑 Documentation Import Testing'
on:
workflow_call:
@@ -8,6 +8,9 @@ on:
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -15,29 +18,30 @@ jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: "check doc imports #${{ inputs.python-version }}"
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
- uses: actions/checkout@v4
- name: '📋 Checkout Code'
uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
- name: Install langchain editable
- name: '📦 Install LangChain in Editable Mode'
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: Check doc imports
- name: '🔍 Validate Documentation Import Statements'
shell: bash
run: |
uv run python docs/scripts/check_imports.py
- name: Ensure the test did not create any additional files
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu

View File

@@ -1,4 +1,4 @@
name: test pydantic intermediate versions
name: '🐍 Pydantic Version Testing'
on:
workflow_call:
@@ -17,6 +17,9 @@ on:
type: string
description: "Pydantic version to test."
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
@@ -28,29 +31,30 @@ jobs:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
timeout-minutes: 20
name: "make test # pydantic: ~=${{ inputs.pydantic-version }}, python: ${{ inputs.python-version }}, "
name: 'Pydantic ~=${{ inputs.pydantic-version }}'
steps:
- uses: actions/checkout@v4
- name: '📋 Checkout Code'
uses: actions/checkout@v4
- name: Set up Python ${{ inputs.python-version }} + uv
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
- name: Overwrite pydantic version
- name: '🔄 Install Specific Pydantic Version'
shell: bash
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
- name: Run core tests
- name: '🧪 Run Core Tests'
shell: bash
run: |
make test
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu

View File

@@ -1,4 +1,4 @@
name: test-release
name: '🧪 Test Release Package'
on:
workflow_call:
@@ -29,7 +29,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python + uv
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
@@ -45,17 +45,17 @@ jobs:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: Build project for distribution
- name: '📦 Build Project for Distribution'
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: Upload build
- name: '⬆️ Upload Build Artifacts'
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Check Version
- name: '🔍 Extract Version Information'
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}

View File

@@ -1,17 +1,21 @@
name: API docs build
name: '📚 API Docs'
run-name: 'Build & Deploy API Reference'
# Runs daily or can be triggered manually for immediate updates
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
- cron: '0 13 * * *' # Daily at 1PM UTC
env:
PYTHON_VERSION: "3.11"
jobs:
# Only runs on main repository to prevent unnecessary builds on forks
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions: write-all
permissions:
contents: read
steps:
- uses: actions/checkout@v4
with:
@@ -22,7 +26,7 @@ jobs:
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
- name: Get repos with yq
- name: '📋 Extract Repository List with yq'
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
@@ -41,56 +45,67 @@ jobs:
| .repo
' langchain/libs/packages.yml
- name: Parse YAML and checkout repos
- name: '📋 Parse YAML & Checkout Repositories'
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Get unique repositories
REPOS=$(echo "$REPOS_UNSORTED" | sort -u)
# Checkout each unique repository that is in langchain-ai org
# Checkout each unique repository
for repo in $REPOS; do
# Validate repository format (allow any org with proper format)
if [[ ! "$repo" =~ ^[a-zA-Z0-9_.-]+/[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository format: $repo"
exit 1
fi
REPO_NAME=$(echo $repo | cut -d'/' -f2)
# Additional validation for repo name
if [[ ! "$REPO_NAME" =~ ^[a-zA-Z0-9_.-]+$ ]]; then
echo "Error: Invalid repository name: $REPO_NAME"
exit 1
fi
echo "Checking out $repo to $REPO_NAME"
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: Setup python ${{ env.PYTHON_VERSION }}
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
uses: actions/setup-python@v5
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Install initial py deps
- name: '📦 Install Initial Python Dependencies'
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: Move libs with script
- name: '📦 Organize Library Directories'
run: python langchain/.github/scripts/prep_api_docs_build.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Rm old html
- name: '🧹 Remove Old HTML Files'
run:
rm -rf langchain-api-docs-html/api_reference_build/html
- name: Install dependencies
- name: '📦 Install Documentation Dependencies'
working-directory: langchain
run: |
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
python -m uv pip install -r docs/api_reference/requirements.txt
- name: Set Git config
- name: '🔧 Configure Git Settings'
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: Build docs
- name: '📚 Build API Documentation'
working-directory: langchain
run: |
python docs/api_reference/create_api_rst.py

View File

@@ -1,25 +1,28 @@
name: Check Broken Links
name: '🔗 Check Broken Links'
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
jobs:
check-links:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v4
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: Install dependencies
- name: '📦 Install Node Dependencies'
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: Check broken links
- name: '🔍 Scan Documentation for Broken Links'
run: yarn check-broken-links
working-directory: ./docs

View File

@@ -1,4 +1,6 @@
name: Check `langchain-core` version equality
name: '🔍 Check `core` Version Equality'
# Ensures version numbers in pyproject.toml and version.py stay in sync
# Prevents releases with mismatched version numbers
on:
pull_request:
@@ -6,6 +8,9 @@ on:
- 'libs/core/pyproject.toml'
- 'libs/core/langchain_core/version.py'
permissions:
contents: read
jobs:
check_version_equality:
runs-on: ubuntu-latest
@@ -13,7 +18,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Check version equality
- name: '✅ Verify pyproject.toml & version.py Match'
run: |
PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)

View File

@@ -1,4 +1,4 @@
name: CI
name: '🔧 CI'
on:
push:
@@ -6,6 +6,7 @@ on:
pull_request:
merge_group:
# Optimizes CI performance by canceling redundant workflow runs
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
@@ -16,21 +17,31 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: 'Detect Changes & Set Matrix'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
- name: '📋 Checkout Code'
uses: actions/checkout@v4
- name: '🐍 Setup Python 3.11'
uses: actions/setup-python@v5
with:
python-version: '3.11'
- id: files
- name: '📂 Get Changed Files'
id: files
uses: Ana06/get-changed-files@v2.3.0
- id: set-matrix
- name: '🔍 Analyze Changed Files & Generate Build Matrix'
id: set-matrix
run: |
python -m pip install packaging requests
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
@@ -42,8 +53,8 @@ jobs:
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
# Run linting only on packages that have changed files
lint:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
@@ -56,8 +67,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run unit tests only on packages that have changed files
test:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
@@ -70,8 +81,8 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
@@ -92,12 +103,12 @@ jobs:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
with:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
name: cd ${{ matrix.job-configs.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
@@ -110,8 +121,9 @@ jobs:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Run extended test suites that require additional dependencies
extended-tests:
name: "cd ${{ matrix.job-configs.working-directory }} / make extended_tests #${{ matrix.job-configs.python-version }}"
name: 'Extended Tests'
needs: [ build ]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
@@ -127,12 +139,12 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.job-configs.python-version }} + uv
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
- name: Install dependencies and run extended tests
- name: '📦 Install Dependencies & Run Extended Tests'
shell: bash
run: |
echo "Running extended tests, installing dependencies with uv..."
@@ -141,7 +153,7 @@ jobs:
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
@@ -153,8 +165,9 @@ jobs:
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: "CI Success"
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
if: |
always()
@@ -164,7 +177,7 @@ jobs:
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
- name: '🎉 All Checks Passed'
run: |
echo $JOBS_JSON
echo $RESULTS_JSON

View File

@@ -1,4 +1,4 @@
name: Integration docs lint
name: '📑 Integration Docs Lint'
on:
push:
@@ -15,6 +15,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
@@ -30,6 +33,6 @@ jobs:
*.ipynb
*.md
*.mdx
- name: Check new docs
- name: '🔍 Check New Documentation Templates'
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -1,35 +0,0 @@
name: CI / cd . / make spell_check
on:
push:
branches: [master, v0.1, v0.2]
pull_request:
permissions:
contents: read
jobs:
codespell:
name: (Check for spelling errors)
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Install Dependencies
run: |
pip install toml
- name: Extract Ignore Words List
run: |
# Use a Python script to extract the ignore words list from pyproject.toml
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

View File

@@ -1,4 +1,4 @@
name: CodSpeed
name: '⚡ CodSpeed'
on:
push:
@@ -7,6 +7,9 @@ on:
pull_request:
workflow_dispatch:
permissions:
contents: read
env:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
@@ -15,7 +18,7 @@ env:
jobs:
codspeed:
name: Run benchmarks
name: 'Benchmark'
runs-on: ubuntu-latest
strategy:
matrix:
@@ -35,7 +38,7 @@ jobs:
- uses: actions/checkout@v4
# We have to use 3.12 as 3.13 is not yet supported
- name: Install uv
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
@@ -44,19 +47,19 @@ jobs:
with:
python-version: "3.12"
- name: Install dependencies
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.working-directory }}
- name: Run benchmarks ${{ matrix.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.working-directory }}'
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.working-directory }}
if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed --codspeed-max-time=5
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed --codspeed-max-time=5
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.mode || 'instrumentation' }}

View File

@@ -1,5 +1,6 @@
name: LangChain People
name: '👥 LangChain People'
run-name: 'Update People Data'
# This workflow updates the LangChain People data by fetching the latest information from the LangChain Git
on:
schedule:
- cron: "0 14 1 * *"
@@ -11,16 +12,17 @@ jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions: write-all
permissions:
contents: write
steps:
- name: Dump GitHub context
- name: '📋 Dump GitHub Context'
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
- name: '🔧 Fix Git Safe Directory in Container'
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

.github/workflows/pr_lint.yml (new file, 111 lines)

@@ -0,0 +1,111 @@
# -----------------------------------------------------------------------------
# PR Title Lint Workflow
#
# Purpose:
# Enforces Conventional Commits format for pull request titles to maintain a
# clear, consistent, and machine-readable change history across our repository.
# This helps with automated changelog generation and semantic versioning.
#
# Enforced Commit Message Format (Conventional Commits 1.0.0):
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Allowed Types:
# • feat — a new feature (MINOR bump)
# • fix — a bug fix (PATCH bump)
# • docs — documentation only changes
# • style — formatting, missing semi-colons, etc.; no code change
# • refactor — code change that neither fixes a bug nor adds a feature
# • perf — code change that improves performance
# • test — adding missing tests or correcting existing tests
# • build — changes that affect the build system or external dependencies
# • ci — continuous integration/configuration changes
# • chore — other changes that don't modify src or test files
# • revert — reverts a previous commit
# • release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek,
# exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai,
# perplexity, prompty, qdrant, xai
#
# Rules & Tips for New Committers:
# 1. Subject (type) must start with a lowercase letter and, if possible, be
# followed by a scope wrapped in parenthesis `(scope)`
# 2. Breaking changes:
# Append "!" after type/scope (e.g., feat!: drop Node 12 support)
# Or include a footer "BREAKING CHANGE: <details>"
# 3. Example PR titles:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Resources:
# • Conventional Commits spec: https://www.conventionalcommits.org/en/v1.0.0/
# -----------------------------------------------------------------------------
name: '🏷️ PR Title Lint'
permissions:
pull-requests: read
on:
pull_request:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits specification
lint-pr-title:
name: 'Validate PR Title Format'
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
uses: amannn/action-semantic-pull-request@v5
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
types: |
feat
fix
docs
style
refactor
perf
test
build
ci
chore
revert
release
scopes: |
core
cli
langchain
langchain_v1
standard-tests
text-splitters
docs
anthropic
chroma
deepseek
exa
fireworks
groq
huggingface
mistralai
nomic
ollama
openai
perplexity
prompty
qdrant
xai
infra
requireScope: false
disallowScopes: |
release
[A-Z]+
ignoreLabels: |
ignore-lint-pr-title
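
For a concrete sense of what this workflow accepts, here is a minimal sketch of the title shape it enforces (a simplified regex for illustration only; the actual validation is delegated to `amannn/action-semantic-pull-request`, which implements the full Conventional Commits rules, the scope list, and the label escape hatch):

```python
import re

# Simplified sketch of the enforced PR title shape:
#   <type>[optional (scope)][optional !]: <description>
TITLE_RE = re.compile(
    r"^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert|release)"
    r"(\([a-z0-9_-]+\))?"  # optional lowercase scope, e.g. (core)
    r"!?"                  # optional breaking-change marker
    r": .+"                # required description
)

assert TITLE_RE.match("feat(core): add multitenant support")
assert TITLE_RE.match("feat!: drop Node 12 support")
assert not TITLE_RE.match("Update readme")  # missing type prefix
```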


@@ -1,5 +1,5 @@
name: Run notebooks
name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
workflow_dispatch:
inputs:
@@ -14,6 +14,9 @@ on:
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
env:
UV_FROZEN: "true"
@@ -21,43 +24,43 @@ jobs:
build:
runs-on: ubuntu-latest
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: "Test docs"
name: '📑 Test Documentation Notebooks'
steps:
- uses: actions/checkout@v4
- name: Set up Python + uv
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
- name: 'Authenticate to Google Cloud'
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies
- name: '📦 Install Dependencies'
run: |
uv sync --group dev --group test
- name: Pre-download files
- name: '📦 Pre-download Test Files'
run: |
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: Prepare notebooks
- name: '🔧 Prepare Notebooks for CI'
run: |
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: Run notebooks
- name: '🚀 Execute Notebooks'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}


@@ -1,7 +1,8 @@
name: Scheduled tests
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"
on:
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
workflow_dispatch: # Allows maintainers to trigger the workflow manually in GitHub UI
inputs:
working-directory-force:
type: string
@@ -10,7 +11,10 @@ on:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
schedule:
- cron: '0 13 * * *'
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
env:
POETRY_VERSION: "1.8.4"
@@ -19,14 +23,16 @@ env:
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
jobs:
# Generate dynamic test matrix based on input parameters or defaults
# Only runs on the main repo (for scheduled runs) or when manually triggered
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: Compute matrix
name: '📋 Compute Test Matrix'
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: Set matrix
- name: '🔢 Generate Python & Library Matrix'
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
@@ -47,9 +53,11 @@ jobs:
matrix="{\"python-version\": $python_version, \"working-directory\": $working_directory}"
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
# Tests are run with both Poetry and UV depending on the library's setup
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 20
@@ -72,7 +80,7 @@ jobs:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: Move libs
- name: '📦 Organize External Libraries'
run: |
rm -rf \
langchain/libs/partners/google-genai \
@@ -81,7 +89,7 @@ jobs:
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: Set up Python ${{ matrix.python-version }} with poetry
- name: '🐍 Set up Python ${{ matrix.python-version }} + Poetry'
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
@@ -90,40 +98,40 @@ jobs:
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: Set up Python ${{ matrix.python-version }} + uv
- name: '🐍 Set up Python ${{ matrix.python-version }} + UV'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: 'Authenticate to Google Cloud'
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Install dependencies (poetry)
- name: '📦 Install Dependencies (Poetry)'
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: Install dependencies (uv)
- name: '📦 Install Dependencies (UV)'
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: Run integration tests
- name: '🚀 Run Integration Tests'
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -152,14 +160,15 @@ jobs:
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: Remove external libraries
run: |
- name: '🧹 Clean up External Libraries'
# Clean up external libraries to avoid affecting git status check
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: Ensure the tests did not create any additional files
- name: '🧹 Verify Clean Working Directory'
working-directory: langchain
run: |
set -eu

.gitignore

@@ -1,5 +1,4 @@
.vs/
.vscode/
.idea/
# Byte-compiled / optimized / DLL files
__pycache__/

.markdownlint.json (new file, 14 lines)

@@ -0,0 +1,14 @@
{
"MD013": false,
"MD024": {
"siblings_only": true
},
"MD025": false,
"MD033": false,
"MD034": false,
"MD036": false,
"MD041": false,
"MD046": {
"style": "fenced"
}
}


@@ -1,111 +1,111 @@
repos:
- repo: local
hooks:
- id: core
name: format core
language: system
entry: make -C libs/core format
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format langchain
language: system
entry: make -C libs/langchain format
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format standard-tests
language: system
entry: make -C libs/standard-tests format
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format text-splitters
language: system
entry: make -C libs/text-splitters format
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format partners/anthropic
language: system
entry: make -C libs/partners/anthropic format
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format partners/chroma
language: system
entry: make -C libs/partners/chroma format
files: ^libs/partners/chroma/
pass_filenames: false
- id: couchbase
name: format partners/couchbase
language: system
entry: make -C libs/partners/couchbase format
files: ^libs/partners/couchbase/
pass_filenames: false
- id: exa
name: format partners/exa
language: system
entry: make -C libs/partners/exa format
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format partners/fireworks
language: system
entry: make -C libs/partners/fireworks format
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format partners/groq
language: system
entry: make -C libs/partners/groq format
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format partners/huggingface
language: system
entry: make -C libs/partners/huggingface format
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format partners/mistralai
language: system
entry: make -C libs/partners/mistralai format
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format partners/nomic
language: system
entry: make -C libs/partners/nomic format
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format partners/ollama
language: system
entry: make -C libs/partners/ollama format
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format partners/openai
language: system
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system
entry: make -C libs/partners/prompty format
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format partners/qdrant
language: system
entry: make -C libs/partners/qdrant format
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format docs, cookbook
language: system
entry: make format
files: ^(docs|cookbook)/
pass_filenames: false


@@ -13,7 +13,7 @@ build:
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
@@ -21,5 +21,5 @@ formats:
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt
install:
- requirements: docs/api_reference/requirements.txt

.vscode/extensions.json (new file, 21 lines)

@@ -0,0 +1,21 @@
{
"recommendations": [
"ms-python.python",
"charliermarsh.ruff",
"ms-python.mypy-type-checker",
"ms-toolsai.jupyter",
"ms-toolsai.jupyter-keymap",
"ms-toolsai.jupyter-renderers",
"ms-toolsai.vscode-jupyter-cell-tags",
"ms-toolsai.vscode-jupyter-slideshow",
"yzhang.markdown-all-in-one",
"davidanson.vscode-markdownlint",
"bierner.markdown-mermaid",
"bierner.markdown-preview-github-styles",
"eamodio.gitlens",
"github.vscode-pull-request-github",
"github.vscode-github-actions",
"redhat.vscode-yaml",
"editorconfig.editorconfig",
],
}

.vscode/settings.json (new file, 80 lines)

@@ -0,0 +1,80 @@
{
"python.analysis.include": [
"libs/**",
"docs/**",
"cookbook/**"
],
"python.analysis.exclude": [
"**/node_modules",
"**/__pycache__",
"**/.pytest_cache",
"**/.*",
"_dist/**",
"docs/_build/**",
"docs/api_reference/_build/**"
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
"python.testing.cwd": "${workspaceFolder}",
"python.linting.enabled": true,
"python.linting.ruffEnabled": true,
"[python]": {
"editor.formatOnSave": true,
"editor.codeActionsOnSave": {
"source.organizeImports": "explicit",
"source.fixAll": "explicit"
},
"editor.defaultFormatter": "charliermarsh.ruff"
},
"editor.rulers": [
88
],
"editor.tabSize": 4,
"editor.insertSpaces": true,
"editor.trimAutoWhitespace": true,
"files.trimTrailingWhitespace": true,
"files.insertFinalNewline": true,
"files.exclude": {
"**/__pycache__": true,
"**/.pytest_cache": true,
"**/*.pyc": true,
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": false
},
"search.exclude": {
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
"yarn.lock": true
},
"git.autofetch": true,
"git.enableSmartCommit": true,
"jupyter.askForKernelRestart": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
"[markdown]": {
"editor.wordWrap": "on",
"editor.quickSuggestions": {
"comments": "off",
"strings": "off",
"other": "off"
}
},
"[yaml]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
"[json]": {
"editor.tabSize": 2,
"editor.insertSpaces": true
},
}

CLAUDE.md (new file, 325 lines)

@@ -0,0 +1,325 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False): # Changed from `user_id`
pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
"""Retrieve user by ID with optional verbose output."""
pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
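
For example, a new option can be added without breaking existing positional callers by making it keyword-only (a sketch; `include_profile` is a hypothetical parameter):

```python
def get_user(user_id: str, verbose: bool = False, *, include_profile: bool = False) -> User:
    """Retrieve user by ID with optional verbose output.

    Args:
        user_id: Unique identifier of the user to retrieve.
        verbose: Whether to include verbose output.
        include_profile: Whether to also load the user's profile data.
    """
    ...

get_user("u-123", True)                  # existing call sites keep working
get_user("u-123", include_profile=True)  # new behavior is opt-in, by keyword
```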
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
"""Filter out users that are not in the known users set.
Args:
users: List of user identifiers to filter.
known_users: Set of known/valid user identifiers.
Returns:
List of users that are not in the known_users set.
"""
return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases, invalid input, and error conditions are tested
- [ ] Fixtures/mocks are used for external dependencies
- [ ] Tests are deterministic (no flaky tests)
```python
def test_filter_unknown_users():
"""Test filtering unknown users from a list."""
users = ["alice", "bob", "charlie"]
known_users = {"alice", "bob"}
result = filter_unknown_users(users, known_users)
assert result == ["charlie"]
assert len(result) == 1
```
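
Where a dependency would normally reach the network, a small fake keeps the test deterministic (a sketch; `fake_fetch_known_users` stands in for a hypothetical network-backed lookup):

```python
def fake_fetch_known_users() -> set[str]:
    """Stand-in for a hypothetical lookup that would hit the network."""
    return {"alice", "bob"}

def test_filter_unknown_users_is_deterministic():
    """Uses a fake data source so the test never touches the network."""
    known_users = fake_fetch_known_users()
    result = filter_unknown_users(["alice", "bob", "charlie"], known_users)
    assert result == ["charlie"]
```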
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`); build error messages in a `msg` variable before raising
- Remove unreachable/commented code before committing
- Watch for race conditions and resource leaks (file handles, sockets, threads)
- Ensure proper resource cleanup (file handles, connections)
**Bad:**
```python
def load_config(path):
with open(path) as f:
return eval(f.read()) # ⚠️ Never eval config
```
**Good:**
```python
import json
def load_config(path: str) -> dict:
with open(path) as f:
return json.load(f)
```
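
The `msg`-variable convention from the checklist looks like this in practice (a sketch; `ConfigError` is a hypothetical exception type):

```python
import json

class ConfigError(ValueError):
    """Raised when a configuration file cannot be loaded."""

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except json.JSONDecodeError as e:
        # Build the message in a `msg` variable, then raise from the cause.
        msg = f"Invalid JSON in config file {path!r}: {e}"
        raise ConfigError(msg) from e
```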
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
"""Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
"""
Send an email to a recipient with specified priority.
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
Returns:
True if email was sent successfully, False otherwise.
Raises:
InvalidEmailError: If the email address format is invalid.
SMTPConnectionError: If unable to connect to email server.
"""
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
# Function doing too many things
validated = validate_data(data)
result = db_conn.save(validated)
email_client.send_notification(result)
logger.log(f"Processed {len(data)} items")
return result
```
**Better Design:**
```python
from dataclasses import dataclass, field

@dataclass
class ProcessingResult:
    """Result of data processing operation."""
    items_processed: int
    success: bool
    errors: list[str] = field(default_factory=list)

class DataProcessor:
    """Handles data validation, storage, and notification."""
    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: list[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection (see the sketch after this list)
- Add clarity without adding complexity
- Prefer dataclasses for structured data
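
As a sketch of the dependency-injection payoff above, the `DataProcessor` can be unit tested with in-memory fakes (assuming its `_validate_data` and `_notify_completion` helpers are implemented; `FakeDatabase` and `FakeEmailClient` are hypothetical test doubles):

```python
class FakeDatabase:
    """In-memory stand-in for the real Database dependency."""

    def save(self, items: list[dict]) -> ProcessingResult:
        return ProcessingResult(items_processed=len(items), success=True)

class FakeEmailClient:
    """Records notifications instead of sending real email."""

    def __init__(self) -> None:
        self.sent: list[ProcessingResult] = []

    def send_notification(self, result: ProcessingResult) -> None:
        self.sent.append(result)

def test_process_saves_and_notifies():
    processor = DataProcessor(FakeDatabase(), FakeEmailClient())
    result = processor.process([{"id": 1}, {"id": 2}])
    assert result.success
```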
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool
@tool
def search_database(query: str) -> str:
"""Search the database for relevant information.
Args:
query: The search query string.
"""
    # Implementation here (placeholder so the example runs)
    return f"Results for: {query}"
```
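
Tools created with the decorator expose the standard runnable interface, so a quick smoke test might look like this (sketch):

```python
print(search_database.name)         # "search_database"
print(search_database.description)  # derived from the docstring
print(search_database.invoke({"query": "vector stores"}))
```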
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format


@@ -7,5 +7,5 @@ Please see the following guides for migrating LangChain code:
* Migrating from [LangChain 0.0.x Chains](https://python.langchain.com/docs/versions/migrating_chains/)
* Upgrade to [LangGraph Memory](https://python.langchain.com/docs/versions/migrating_memory/)
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
The [LangChain CLI](https://python.langchain.com/docs/versions/v0_3/#migrate-using-langchain-cli) can help you automatically upgrade your code to use non-deprecated imports.
This will be especially helpful if you're still on either version 0.0.x or 0.1.x of LangChain.


@@ -8,9 +8,6 @@ help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
@@ -19,49 +16,79 @@ clean: docs_clean api_docs_clean
######################
## docs_build: Build the documentation.
docs_build:
docs_build: docs_clean
@echo "📚 Building LangChain documentation..."
cd docs && make build
@echo "✅ Documentation build complete!"
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@echo "🧹 Cleaning documentation artifacts..."
cd docs && make clean
@echo "✅ LangChain documentation cleaned"
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
uv run --no-group test linkchecker _dist/docs/ --ignore-url node_modules
@echo "🔗 Checking documentation links..."
@if [ -d _dist/docs ]; then \
uv run --group test linkchecker _dist/docs/ --ignore-url node_modules; \
else \
echo "⚠️ Documentation not built. Run 'make docs_build' first."; \
exit 1; \
fi
@echo "✅ Link check complete"
## api_docs_build: Build the API Reference documentation.
api_docs_build:
uv run --no-group test python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --no-group test make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
api_docs_build: clean
@echo "📖 Building API Reference documentation..."
uv pip install -e libs/cli
uv run --group docs python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "✅ API documentation built"
@echo "🌐 Opening documentation in browser..."
open docs/api_reference/_build/html/reference.html
API_PKG ?= text-splitters
api_docs_quick_preview:
uv run --no-group test python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run make html
uv run --no-group test python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
api_docs_quick_preview: clean
@echo "⚡ Building quick API preview for $(API_PKG)..."
uv run --group docs python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "🌐 Opening preview in browser..."
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
@echo "🧹 Cleaning API documentation artifacts..."
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
@echo "✅ API documentation cleaned"
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
uv run --no-group test linkchecker docs/api_reference/_build/html/index.html
@echo "🔗 Checking API documentation links..."
@if [ -f docs/api_reference/_build/html/index.html ]; then \
uv run --group test linkchecker docs/api_reference/_build/html/index.html; \
else \
echo "⚠️ API documentation not built. Run 'make api_docs_build' first."; \
exit 1; \
fi
@echo "✅ API link check complete"
## spell_check: Run codespell on the project.
spell_check:
uv run --no-group test codespell --toml pyproject.toml
@echo "✏️ Checking spelling across project..."
uv run --group codespell codespell --toml pyproject.toml
@echo "✅ Spell check complete"
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
uv run --no-group test codespell --toml pyproject.toml -w
@echo "✏️ Fixing spelling errors across project..."
uv run --group codespell codespell --toml pyproject.toml -w
@echo "✅ Spelling errors fixed"
######################
# LINTING AND FORMATTING
@@ -69,19 +96,24 @@ spell_fix:
## lint: Run linting on the project.
lint lint_package lint_tests:
@echo "🔍 Running code linting and checks..."
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
uv run --group lint ruff check --select I docs cookbook
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
@echo "✅ Linting complete"
## format: Format the project files.
format format_diff:
@echo "🎨 Formatting project files..."
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --select I --fix docs cookbook
uv run --group lint ruff check --fix docs cookbook
@echo "✅ Formatting complete"
update-package-downloads:
@echo "📊 Updating package download statistics..."
uv run python docs/scripts/packages_yml_get_downloads.py
@echo "✅ Package downloads updated"


@@ -15,7 +15,7 @@
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
@@ -40,9 +40,10 @@ controllable agent workflows.
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum.
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
applications.
To improve your LLM application development, pair LangChain with:
@@ -73,11 +75,13 @@ teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.


@@ -11,6 +11,7 @@ When building such applications developers should remember to follow good securi
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that a Large Language Model (LLM) may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense to ensure security. For example: use both read-only permissions and sandboxing to ensure that LLMs are only able to access data that is explicitly meant for them to use.
Risks of not doing so include, but are not limited to:
* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.
@@ -27,19 +28,17 @@ design and secure your applications.
## Reporting OSS Vulnerabilities
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects by visiting the following link:
[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)
Please report security vulnerabilities associated with the LangChain
open source projects at [huntr](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true).
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) The [Best practices](#best-practices) above to
3) The [Best Practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.
@@ -47,39 +46,39 @@ Before reporting a vulnerability, please review:
The following packages and repositories are eligible for bug bounties:
- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve
* langchain-core
* langchain (see exceptions)
* langchain-community (see exceptions)
* langgraph
* langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
- **langchain-experimental**: This repository is for experimental code and is not
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
- libs/langchain/langchain/tools
- libs/community/langchain_community/tools
- Please review the [best practices](#best-practices)
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
- Code documented with security notices. This will be decided done on a case by
* Code documented with security notices. This will be decided on a case by
case basis, but likely will not be eligible for a bounty as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
- Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
* Any LangSmith related repositories or APIs (see [Reporting LangSmith Vulnerabilities](#reporting-langsmith-vulnerabilities)).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk
* LangSmith site: [https://smith.langchain.com](https://smith.langchain.com)
* SDK client: [https://github.com/langchain-ai/langsmith-sdk](https://github.com/langchain-ai/langsmith-sdk)
### Other Security Concerns


@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "6a75a5c6-34ee-4ab9-a664-d9b432d812ee",
"metadata": {},
"outputs": [
@@ -61,7 +61,7 @@
],
"source": [
"# Local\n",
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llama2_chat = ChatOllama(model=\"llama2:13b-chat\")\n",
"llama2_code = ChatOllama(model=\"codellama:7b-instruct\")\n",


@@ -144,7 +144,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "kWDWfSDBMPl8",
"metadata": {},
"outputs": [
@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-2.0-flash-lite-001\", max_tokens=1024\n",
" temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -235,7 +235,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "PeK9bzXv3olF",
"metadata": {},
"outputs": [],
@@ -254,7 +254,7 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model=\"gemini-2.0-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-2.5-flash\", max_tokens=1024)\n",
"\n",
" msg = model.invoke(\n",
" [\n",
@@ -431,7 +431,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "GlwCErBaCKQW",
"metadata": {},
"outputs": [],
@@ -553,7 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-2.0-flash\", max_tokens=1024)\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-2.5-flash\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",


@@ -204,14 +204,14 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "523e6ed2-2132-4748-bdb7-db765f20648d",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama import ChatOllama"
]
},
{


@@ -229,9 +229,9 @@
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
"assert all([column in df.columns for column in expected_columns]), (\n",
" \"DataFrame does not have the expected columns\"\n",
")"
]
},
{


@@ -487,7 +487,7 @@
" print(\"*\" * 40)\n",
" print(\n",
" colored(\n",
" f\"After {i+1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" f\"After {i + 1} observations, Tommie's summary is:\\n{tommie.get_summary(force_refresh=True)}\",\n",
" \"blue\",\n",
" )\n",
" )\n",


@@ -20,11 +20,7 @@
"cell_type": "markdown",
"id": "5939a54c-3198-4ba4-8346-1cc088c473c0",
"metadata": {},
"source": [
"##### You can embed text in the same VectorDB space as images, and retreive text and images as well based on input text or image.\n",
"##### Following link demonstrates that.\n",
"<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
]
"source": "##### You can embed text in the same VectorDB space as images, and retrieve text and images as well based on input text or image.\n##### Following link demonstrates that.\n<a> https://python.langchain.com/v0.2/docs/integrations/text_embedding/open_clip/ </a>"
},
{
"cell_type": "markdown",
@@ -389,7 +385,7 @@
" ax = axs[idx]\n",
" ax.imshow(img)\n",
" # Assuming similarity is not available in the new data, removed sim_score\n",
" ax.title.set_text(f\"\\nProduct ID: {data[\"id\"]}\\n Score: {score}\")\n",
" ax.title.set_text(f\"\\nProduct ID: {data['id']}\\n Score: {score}\")\n",
" ax.axis(\"off\") # Turn off axis\n",
"\n",
" # Hide any remaining empty subplots\n",
@@ -600,4 +596,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}


@@ -148,11 +148,11 @@
"\n",
" instructions = \"None\"\n",
" for i in range(max_meta_iters):\n",
" print(f\"[Episode {i+1}/{max_meta_iters}]\")\n",
" print(f\"[Episode {i + 1}/{max_meta_iters}]\")\n",
" chain = initialize_chain(instructions, memory=None)\n",
" output = chain.predict(human_input=task)\n",
" for j in range(max_iters):\n",
" print(f\"(Step {j+1}/{max_iters})\")\n",
" print(f\"(Step {j + 1}/{max_iters})\")\n",
" print(f\"Assistant: {output}\")\n",
" print(\"Human: \")\n",
" human_input = input()\n",


@@ -183,7 +183,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for a Dungeons & Dragons game: {quest}.\n",
" The characters are: {*character_names,}.\n",
" The characters are: {(*character_names,)}.\n",
" The story is narrated by the storyteller, {storyteller_name}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
@@ -334,7 +334,7 @@
" You are the storyteller, {storyteller_name}.\n",
" Please make the quest more specific. Be creative and imaginative.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the characters: {*character_names,}.\n",
" Speak directly to the characters: {(*character_names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",


@@ -200,7 +200,7 @@
"outputs": [],
"source": [
"game_description = f\"\"\"Here is the topic for the presidential debate: {topic}.\n",
"The presidential candidates are: {', '.join(character_names)}.\"\"\"\n",
"The presidential candidates are: {\", \".join(character_names)}.\"\"\"\n",
"\n",
"player_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of each presidential candidate.\"\n",
@@ -595,7 +595,7 @@
" Frame the debate topic as a problem to be solved.\n",
" Be creative and imaginative.\n",
" Please reply with the specified topic in {word_limit} words or less. \n",
" Speak directly to the presidential candidates: {*character_names,}.\n",
" Speak directly to the presidential candidates: {(*character_names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",


@@ -215,8 +215,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatOllama\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_ollama import ChatOllama\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Prompt\n",


@@ -395,8 +395,7 @@
"prompt_messages = [\n",
" SystemMessage(\n",
" content=(\n",
" \"You are a world class algorithm to answer \"\n",
" \"questions in a specific format.\"\n",
" \"You are a world class algorithm to answer questions in a specific format.\"\n",
" )\n",
" ),\n",
" HumanMessage(content=\"Answer question using the following context\"),\n",


@@ -552,9 +552,7 @@
"cell_type": "markdown",
"id": "77deb6a0-0950-450a-916a-f2a029676c20",
"metadata": {},
"source": [
"**Appending all retreived documents in a single document**"
]
"source": "**Appending all retrieved documents in a single document**"
},
{
"cell_type": "code",
@@ -758,4 +756,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}


@@ -34,7 +34,7 @@
"tools = [multiply, exponentiate, add]\n",
"\n",
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\").bind_tools(tools)\n",
"llm_with_tools = gpt35.configurable_alternatives(\n",
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
")"
@@ -113,14 +113,15 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'function': {'arguments': '{\"x\": 3, \"y\": 5}', 'name': 'add'}, 'type': 'function'}, {'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 75, 'prompt_tokens': 131, 'total_tokens': 206, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm2qxSWU3oTTSZQv64J4XQKZhA6', 'service_tier': 'default', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run--35fad027-47f7-44d3-aa8b-99f4fc24098c-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'call_xuNXwm2P6U2Pp2pAbC1sdIBz', 'type': 'tool_call'}, {'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_0pImUJUDlYa5zfBcxxuvWyYS', 'type': 'tool_call'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_yaownQ9TZK0dkqD1KSFyax4H', 'type': 'tool_call'}], usage_metadata={'input_tokens': 131, 'output_tokens': 75, 'total_tokens': 206, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}),\n",
" ToolMessage(content='8.0', tool_call_id='call_xuNXwm2P6U2Pp2pAbC1sdIBz'),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_0pImUJUDlYa5zfBcxxuvWyYS'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_yaownQ9TZK0dkqD1KSFyax4H'),\n",
" AIMessage(content='The results are:\\n1. 3 plus 5 is 8.\\n2. 5 raised to the power of 2.743 is approximately 300.04.\\n3. 17.24 minus 918.1241 is approximately -900.88.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 55, 'prompt_tokens': 236, 'total_tokens': 291, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'id': 'chatcmpl-ByJm345MYnpowGS90iAZAlSs7haed', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--5fa66d47-d80e-45d0-9c32-31348c735d72-0', usage_metadata={'input_tokens': 236, 'output_tokens': 55, 'total_tokens': 291, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})]}"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
@@ -146,17 +147,17 @@
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\", additional_kwargs={}, response_metadata={}),\n",
" AIMessage(content=[{'text': \"I'll solve these calculations for you.\\n\\nFor the first part, I need to calculate 3 plus 5 raised to the power of 2.743.\\n\\nLet me break this down:\\n1) First, I'll calculate 5 raised to the power of 2.743\\n2) Then add 3 to the result\", 'type': 'text'}, {'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'input': {'x': 5, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01HCbDmuzdg9ATMyKbnecbEE', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 563, 'output_tokens': 146, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--9f6469fb-bcbb-4c1c-9eec-79f6979c38e6-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 5, 'y': 2.743}, 'id': 'toolu_01L1mXysBQtpPLQ2AZTaCGmE', 'type': 'tool_call'}], usage_metadata={'input_tokens': 563, 'output_tokens': 146, 'total_tokens': 709, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='82.65606421491815', tool_call_id='toolu_01L1mXysBQtpPLQ2AZTaCGmE'),\n",
" AIMessage(content=[{'text': \"Now I'll add 3 to this result:\", 'type': 'text'}, {'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'input': {'x': 3, 'y': 82.65606421491815}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01ELwyCtVLeGC685PUFqmdz2', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 727, 'output_tokens': 87, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--d5af3d7c-e8b7-4cc2-997a-ad2dafd08751-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 82.65606421491815}, 'id': 'toolu_01NARC83e9obV35mZ6jYzBiN', 'type': 'tool_call'}], usage_metadata={'input_tokens': 727, 'output_tokens': 87, 'total_tokens': 814, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='85.65606421491815', tool_call_id='toolu_01NARC83e9obV35mZ6jYzBiN'),\n",
" AIMessage(content=[{'text': \"For the second part, you asked for 17.24 - 918.1241. I don't have a subtraction function available, but I can rewrite this as adding a negative number: 17.24 + (-918.1241)\", 'type': 'text'}, {'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], additional_kwargs={}, response_metadata={'id': 'msg_01WkmDwUxWjjaKGnTtdLGJnN', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 832, 'output_tokens': 130, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--39a6fbda-4c81-47a6-b361-524bd4ee5823-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01Q6fLcZkBWZpMPCZ55WXR3N', 'type': 'tool_call'}], usage_metadata={'input_tokens': 832, 'output_tokens': 130, 'total_tokens': 962, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01Q6fLcZkBWZpMPCZ55WXR3N'),\n",
" AIMessage(content='So, the answers are:\\n1) 3 plus 5 raised to the 2.743 = 85.65606421491815\\n2) 17.24 - 918.1241 = -900.8841', additional_kwargs={}, response_metadata={'id': 'msg_015Yoc62CvdJbANGFouiQ6AQ', 'model': 'claude-3-7-sonnet-20250219', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 978, 'output_tokens': 58, 'server_tool_use': None, 'service_tier': 'standard'}, 'model_name': 'claude-3-7-sonnet-20250219'}, id='run--174c0882-6180-47ea-8f63-d7b747302327-0', usage_metadata={'input_tokens': 978, 'output_tokens': 58, 'total_tokens': 1036, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}})]}"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
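The traces above show a tool-calling loop: each `AIMessage` carries `tool_calls` for `add` or `exponentiate`, a `ToolMessage` feeds the result back, and the final `AIMessage` answers in prose. A minimal sketch of how those two tools could be declared (names and signatures inferred from the trace; the notebook's real definitions may differ):

```python
from langchain_core.tools import tool


@tool
def add(x: float, y: float) -> float:
    """Add x and y."""
    return x + y


@tool
def exponentiate(x: float, y: float) -> float:
    """Raise x to the power of y."""
    return x**y
```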
@@ -177,7 +178,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain",
"language": "python",
"name": "python3"
},
@@ -191,7 +192,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.10.16"
}
},
"nbformat": 4,


@@ -227,7 +227,7 @@
"outputs": [],
"source": [
"conversation_description = f\"\"\"Here is the topic of conversation: {topic}\n",
"The participants are: {', '.join(names.keys())}\"\"\"\n",
"The participants are: {\", \".join(names.keys())}\"\"\"\n",
"\n",
"agent_descriptor_system_message = SystemMessage(\n",
" content=\"You can add detail to the description of the conversation participant.\"\n",
@@ -396,7 +396,7 @@
" You are the moderator.\n",
" Please make the topic more specific.\n",
" Please reply with the specified quest in {word_limit} words or less. \n",
" Speak directly to the participants: {*names,}.\n",
" Speak directly to the participants: {(*names,)}.\n",
" Do not add anything else.\"\"\"\n",
" ),\n",
"]\n",


@@ -1,9 +1,9 @@
# we build the docs in these stages:
# 1. install vercel and python dependencies
# 2. copy files from "source dir" to "intermediate dir"
# 2. generate files like model feat table, etc in "intermediate dir"
# 3. copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 4. build the docs from "intermediate dir" to "output dir"
# We build the docs in these stages:
# 1. Install vercel and python dependencies
# 2. Copy files from "source dir" to "intermediate dir"
# 3. Generate files like model feat table, etc. in "intermediate dir"
# 4. Copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 5. Build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
@@ -18,32 +18,45 @@ PORT ?= 3001
clean:
rm -rf build
clean-cache:
rm -rf build .venv/deps_installed
install-vercel-deps:
yum -y -q update
yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install -q --upgrade pip
$(PYTHON) -m pip install -q --upgrade uv
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt
@echo "📦 Installing Python dependencies..."
@if [ ! -d .venv ]; then python3 -m venv .venv; fi
@if [ ! -f .venv/deps_installed ]; then \
$(PYTHON) -m pip install -q --upgrade pip --disable-pip-version-check; \
$(PYTHON) -m pip install -q --upgrade uv; \
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt; \
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt; \
touch .venv/deps_installed; \
fi
@echo "✅ Dependencies installed"
generate-files:
@echo "📄 Generating documentation files..."
mkdir -p $(INTERMEDIATE_DIR)
cp -rp $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR)
curl https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > $(INTERMEDIATE_DIR)/langserve.md
@if [ ! -f build/langserve_readme_cache.md ] || [ $$(find build/langserve_readme_cache.md -mtime +1 -print) ]; then \
echo "🌐 Downloading LangServe README..."; \
curl -s https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > build/langserve_readme_cache.md; \
fi
cp build/langserve_readme_cache.md $(INTERMEDIATE_DIR)/langserve.md
cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
@echo "🔧 Generating feature tables and processing links..."
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/ & \
wait
@echo "✅ Files generated"
copy-infra:
@echo "📂 Copying infrastructure files..."
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
@@ -55,15 +68,22 @@ copy-infra:
cp -r static $(OUTPUT_NEW_DIR)
cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
cp yarn.lock $(OUTPUT_NEW_DIR)
@echo "✅ Infrastructure files copied"
render:
@echo "📓 Converting notebooks (this may take a while)..."
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Notebooks converted"
md-sync:
@echo "📝 Syncing markdown files..."
rsync -avmq --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Markdown files synced"
append-related:
@echo "🔗 Appending related links..."
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Related links appended"
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
@@ -71,6 +91,10 @@ generate-references:
update-md: generate-files md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
@echo ""
@echo "🎉 Documentation build complete!"
@echo "📖 To view locally, run: cd docs && make start"
@echo ""
vercel-build: install-vercel-deps build generate-references
rm -rf docs
@@ -84,4 +108,9 @@ vercel-build: install-vercel-deps build generate-references
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)
@echo "🚀 Starting documentation server on port $(PORT)..."
@echo "📖 Installing Node.js dependencies..."
cd $(OUTPUT_NEW_DIR) && yarn install --silent
@echo "🌐 Starting server at http://localhost:$(PORT)"
@echo "Press Ctrl+C to stop the server"
cd $(OUTPUT_NEW_DIR) && yarn start --port=$(PORT)


@@ -108,7 +108,7 @@ class GalleryGridDirective(SphinxDirective):
# Parse the template with Sphinx Design to create an output container
# Prep the options for the template grid
class_ = "gallery-directive" + f' {self.options.get("class-container", "")}'
class_ = "gallery-directive" + f" {self.options.get('class-container', '')}"
options = {"gutter": 2, "class-container": class_}
options_str = "\n".join(f":{k}: {v}" for k, v in options.items())
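The quote flip in `class_` is again purely stylistic; the interesting part of the surrounding code is how the options dict is rendered into reST directive option lines. A small reproduction of that pattern:

```python
# Render a directive's options mapping as ":key: value" lines.
options = {"gutter": 2, "class-container": "gallery-directive"}
options_str = "\n".join(f":{k}: {v}" for k, v in options.items())
print(options_str)
# :gutter: 2
# :class-container: gallery-directive
```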


@@ -262,6 +262,8 @@ myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True
# Don't fail on autosummary import warnings
autosummary_ignore_module_all = False
html_copy_source = False
html_show_sourcelink = False


@@ -202,6 +202,12 @@ def _load_package_modules(
if file_path.name.startswith("_"):
continue
if "integration_template" in file_path.parts:
continue
if "project_template" in file_path.parts:
continue
relative_module_name = file_path.relative_to(package_path)
# Skip if any module part starts with an underscore
@@ -267,7 +273,7 @@ def _construct_doc(
.. _{package_namespace}:
======================================
{package_namespace.replace('_', '-')}: {package_version}
{package_namespace.replace("_", "-")}: {package_version}
======================================
.. automodule:: {package_namespace}
@@ -325,7 +331,7 @@ def _construct_doc(
index_autosummary += f"""
:ref:`{package_namespace}_{module}`
{'^' * (len(package_namespace) + len(module) + 8)}
{"^" * (len(package_namespace) + len(module) + 8)}
"""
if classes:
@@ -364,7 +370,7 @@ def _construct_doc(
"""
index_autosummary += f"""
{class_['qualified_name']}
{class_["qualified_name"]}
"""
if functions:
@@ -427,7 +433,7 @@ def _construct_doc(
"""
index_autosummary += f"""
{class_['qualified_name']}
{class_["qualified_name"]}
"""
if deprecated_functions:
@@ -495,15 +501,7 @@ def _package_namespace(package_name: str) -> str:
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
if package_name in (
"langchain",
"experimental",
"community",
"core",
"cli",
"text-splitters",
"standard-tests",
):
if (ROOT_DIR / "libs" / package_name).exists():
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
@@ -592,7 +590,12 @@ For the legacy API reference hosted on ReadTheDocs see [https://api.python.langc
if integrations:
integration_headers = [
" ".join(
custom_names.get(x, x.title().replace("ai", "AI").replace("db", "DB"))
custom_names.get(
x,
x.title().replace("db", "DB")
if dir_ == "langchain_v1"
else x.title().replace("ai", "AI").replace("db", "DB"),
)
for x in dir_.split("-")
)
for dir_ in integrations
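This hunk adds a carve-out so that for `langchain_v1` only the `db` substitution is applied when building integration headers; elsewhere both `ai` and `db` are upper-cased after title-casing. A hypothetical helper showing the base transformation:

```python
def pretty_header(dir_: str) -> str:
    """Hypothetical helper: turn an integration dir name into a display header."""
    return " ".join(
        x.title().replace("ai", "AI").replace("db", "DB") for x in dir_.split("-")
    )


print(pretty_header("mongodb-atlas"))  # MongoDB Atlas
print(pretty_header("openai"))         # OpenAI
```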
@@ -660,18 +663,12 @@ def main(dirs: Optional[list] = None) -> None:
print("Starting to build API reference files.")
if not dirs:
dirs = [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners", "packages.yml")
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / dir_)
p.parent.name
for p in (ROOT_DIR / "libs").rglob("pyproject.toml")
# Exclude packages that are not directly under libs/ or libs/partners/
if p.parent.parent.name in ("libs", "partners")
]
dirs += [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
]
for dir_ in dirs:
for dir_ in sorted(dirs):
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# e.g., .pytest_cache from running tests from the wrong location.
@@ -682,7 +679,7 @@ def main(dirs: Optional[list] = None) -> None:
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
_build_index(dirs)
_build_index(sorted(dirs))
print("API reference files built.")

File diff suppressed because one or more lines are too long (repeated for several more files)

Some files were not shown because too many files have changed in this diff