Compare commits

...

205 Commits

Author SHA1 Message Date
Sydney Runkle
9f6a29c44d minimal changes 2025-10-03 12:11:05 -04:00
Sydney Runkle
ec1bd054f1 tool node fixes 2025-10-03 12:09:19 -04:00
Sydney Runkle
f0139330f7 refactoring tests 2025-10-03 11:59:22 -04:00
Sydney Runkle
984e1e984f Merge branch 'master' into sr/create-agent-api 2025-10-03 11:15:26 -04:00
Sydney Runkle
e529445b29 no printing 2025-10-03 11:12:57 -04:00
Sydney Runkle
5b972fb0e9 fixes 2025-10-03 11:11:52 -04:00
Eugene Yurtsev
1074ce5fe5 chore(langchain_v1)!: Remove ToolNode from agents (#33250)
Remove ToolNode from agents namespace. It should only be present in tools
2025-10-03 10:57:54 -04:00
Sydney Runkle
b8a20329d8 fixing up tests 2025-10-03 10:39:48 -04:00
Sydney Runkle
54e507f331 more fixes 2025-10-03 10:32:22 -04:00
Sydney Runkle
6001543093 fix tests + support builtins 2025-10-03 10:31:22 -04:00
Sydney Runkle
894ffa0be5 minor fixes 2025-10-03 10:04:47 -04:00
Sydney Runkle
56c93fa82f adding back some tests 2025-10-03 09:31:42 -04:00
Sydney Runkle
3743d596c7 beginnings of a refactor 2025-10-03 09:06:47 -04:00
Sydney Runkle
3d2f13a2f1 feat(langchain): model call limits (#33178)
This PR adds a model call limit middleware that helps manage:

* number of model calls during a run (helps w/ avoiding tool calling
loops) - implemented w/ `UntrackedValue`
* number of model calls on a thread (helps w/ avoiding lengthy convos) -
standard state

Concern here is w/ other middlewares overwriting the model call count...
we could use a `_` prefixed field?
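A hypothetical usage sketch of the feature described above; the middleware class and parameter names (`ModelCallLimitMiddleware`, `run_limit`, `thread_limit`) are assumptions, not taken from the PR:

```py
from langchain.agents import create_agent
from langchain.agents.middleware import ModelCallLimitMiddleware  # assumed name/path

agent = create_agent(
    "openai:gpt-4o",
    middleware=[
        # Cap calls per run (guards against tool-calling loops) and per
        # thread (guards against lengthy conversations).
        ModelCallLimitMiddleware(run_limit=5, thread_limit=25),  # assumed params
    ],
)
```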
2025-10-03 08:28:56 -04:00
SN
99361e623a feat(core): add optional include_id param to convert_to_openai_messages function (#33242) 2025-10-03 08:22:43 -04:00
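A minimal sketch of the new parameter from #33242 (message content is illustrative):

```py
from langchain_core.messages import AIMessage, convert_to_openai_messages

oai_msgs = convert_to_openai_messages(
    [AIMessage("hello", id="msg-1")],
    include_id=True,  # newly added optional param
)
```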
Mason Daugherty
5a016de53f chore: delete deprecated items (#33192)
Removed:
- `libs/core/langchain_core/chat_history.py`: `add_user_message` and
`add_ai_message` in favor of `add_messages` and `aadd_messages`
- `libs/core/langchain_core/language_models/base.py`: `predict`,
`predict_messages`, and async versions in favor of `invoke`. removed
`_all_required_field_names` since it was a wrapper on
`get_pydantic_field_names`
- `libs/core/langchain_core/language_models/chat_models.py`:
`callback_manager` param in favor of `callbacks`. `__call__` and
`call_as_llm` method in favor of `invoke`
- `libs/core/langchain_core/language_models/llms.py`: `callback_manager`
param in favor of `callbacks`. `__call__`, `predict`, `apredict`, and
`apredict_messages` methods in favor of `invoke`
- `libs/core/langchain_core/prompts/chat.py`: `from_role_strings` and
`from_strings` in favor of `from_messages`
- `libs/core/langchain_core/prompts/pipeline.py`: removed
`PipelinePromptTemplate`
- `libs/core/langchain_core/prompts/prompt.py`: `input_variables` param
on `from_file` as it wasn't used
- `libs/core/langchain_core/tools/base.py`: `callback_manager` param in
favor of `callbacks`
- `libs/core/langchain_core/tracers/context.py`: `tracing_enabled` in
favor of `tracing_enabled_v2`
- `libs/core/langchain_core/tracers/langchain_v1.py`: entire module
- `libs/core/langchain_core/utils/loading.py`: entire module,
`try_load_from_hub`
- `libs/core/langchain_core/vectorstores/in_memory.py`: `upsert` in
favor of `add_documents`
- `libs/standard-tests/langchain_tests/integration_tests/chat_models.py`
and `libs/standard-tests/langchain_tests/unit_tests/chat_models.py`:
`tool_choice_value` as models should accept `tool_choice="any"`
- `langchain` will consequently no longer expose these items if it was
previously

---------

Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-03 03:33:24 +00:00
Mason Daugherty
b541a56c66 chore(langchain): uncomment some optional deps (#33243)
remaining:
- azure-ai
- cohere
- huggingface
- community
2025-10-02 23:29:14 -04:00
Mason Daugherty
4a6890a4e5 chore(langchain_v1): uncomment some optional deps (#33244)
remaining:
- azure-ai
- cohere
- huggingface
- community
2025-10-02 23:18:06 -04:00
Mason Daugherty
e2e0327c90 ci: add workflow for manually building API ref for v0.3 (#33241) 2025-10-02 20:33:12 -04:00
Mason Daugherty
bba37bd6be chore: add libs/ note (#33238) 2025-10-02 19:57:50 -04:00
Mason Daugherty
b051ff4a84 chore(infra): remove formatting and linting hook for root (#33237) 2025-10-02 19:43:09 -04:00
Mason Daugherty
13812f0df8 release(qdrant): 1.0.0a1 (#33236) 2025-10-02 19:37:00 -04:00
Mason Daugherty
420dcf5c4a release(prompty): 1.0.0a1 (#33235) 2025-10-02 19:29:55 -04:00
Mason Daugherty
9f75e20d4f release(perplexity): 1.0.0a1 (#33234) 2025-10-02 19:23:22 -04:00
Mason Daugherty
743e9b2ad1 release(nomic): 1.0.0a1 (#33233) 2025-10-02 19:23:06 -04:00
Mason Daugherty
ea438f9e8a release(groq): 1.0.0a1 (#33231) 2025-10-02 19:04:27 -04:00
Mason Daugherty
86cf3fad4d release(chroma): 1.0.0a1 (#33227) 2025-10-02 19:04:14 -04:00
Mason Daugherty
79a12c8f27 release(mistralai): 1.0.0a1 (#33232) 2025-10-02 19:04:03 -04:00
Mason Daugherty
e85b03d5e4 release(fireworks): 1.0.0a1 (#33230) 2025-10-02 19:03:54 -04:00
Mason Daugherty
21ba7adbab release(exa): 1.0.0a1 (#33229) 2025-10-02 19:03:45 -04:00
Mason Daugherty
f9a87971ba release(deepseek): 1.0.0a1 (#33228) 2025-10-02 19:03:39 -04:00
Mason Daugherty
638d1ff912 release(cli): 1.0.0a1 (#33226) 2025-10-02 19:03:29 -04:00
Mason Daugherty
ae5b105d11 docs: v1 docs updates (#33173)
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-02 18:46:26 -04:00
Mason Daugherty
d07cb63c75 fix(xai): update langchain dependencies to latest alpha versions (#33224) 2025-10-02 17:08:16 -04:00
Mason Daugherty
b8c9b20db4 release(xai): 1.0.0a1 (#33223)
Drop Python 3.9
2025-10-02 17:00:14 -04:00
Mason Daugherty
89b4d7b6c1 fix(infra): _release.yml permissions (#33222) 2025-10-02 16:41:51 -04:00
Mason Daugherty
65cd214f67 chore(infra): more tweaks to PR linting (#33220) 2025-10-02 20:11:05 +00:00
Mason Daugherty
38a971cb3b release(standard-tests): 1.0.0a2 (#33219) 2025-10-02 16:09:57 -04:00
Mason Daugherty
9c9b80c70a docs(standard-tests): add clarity to docstrings (#33218) 2025-10-02 16:09:34 -04:00
Mason Daugherty
5fd4b192bc chore(infra): update integration test workflow (#33216) 2025-10-02 14:49:16 -04:00
Mason Daugherty
ae16392ada release(text-splitters): 1.0.0a1 (#33214) 2025-10-02 13:56:10 -04:00
Mason Daugherty
ccfea37d17 style(infra): update release guidelines for IDE autogen (#33215)
VSCode looks at this file. Should help auto-gen commits for releases.
2025-10-02 17:55:35 +00:00
Mason Daugherty
5e8cb58e6a refactor(text-splitters): drop python 3.9 (#33212) 2025-10-02 13:51:10 -04:00
Mason Daugherty
740ad00d36 chore(infra): add text-splitters labeling (#33213) 2025-10-02 13:50:34 -04:00
Mason Daugherty
9459ab189a docs(openai): use text property instead of method (#33211) 2025-10-02 13:50:25 -04:00
Mason Daugherty
63097db4fc fix(ollama): exclude None parameters from options dictionary (#33208) 2025-10-02 11:25:15 -04:00
Mason Daugherty
eaa6dcce9e release: v1.0.0 (#32567)
Co-authored-by: Mohammad Mohtashim <45242107+keenborder786@users.noreply.github.com>
Co-authored-by: Caspar Broekhuizen <caspar@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Sadra Barikbin <sadraqazvin1@yahoo.com>
Co-authored-by: Vadym Barda <vadim.barda@gmail.com>
2025-10-02 10:49:42 -04:00
ccurme
d7cce2f469 feat(langchain_v1): update messages namespace (#33207) 2025-10-02 10:35:00 -04:00
Mason Daugherty
48b77752d0 release(ollama): 0.3.9 (#33200) 2025-10-01 22:31:20 -04:00
Mason Daugherty
6f2d16e6be refactor(ollama): simplify options handling (#33199)
Fixes #32744

Don't restrict options; the client accepts any dict
2025-10-01 21:58:12 -04:00
Mason Daugherty
a9eda18e1e refactor(ollama): clean up tests (#33198) 2025-10-01 21:52:01 -04:00
Mason Daugherty
a89c549cb0 feat(ollama): add basic auth support (#32328)
Adds support for URL authentication in the format
`https://user:password@host:port` for all LangChain Ollama clients.

Related to #32327 and #25055
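A minimal sketch of the URL format described above, assuming a `ChatOllama` client (exact parameter handling may differ):

```py
from langchain_ollama import ChatOllama

# Credentials embedded directly in the base URL (https://user:password@host:port)
llm = ChatOllama(
    model="llama3.1",
    base_url="https://user:secret@ollama.example.com:11434",
)
```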
2025-10-01 20:46:37 -04:00
Sydney Runkle
a336afaecd feat(langchain): use decorators for jumps instead (#33179)
The old `before_model_jump_to` classvar approach was quite clunky; this
is nicer imo and easier to document. Also moving from `jump_to` to
`can_jump_to`, which is more idiomatic.

Before:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to: ClassVar[list[JumpTo]] = ["end"]

    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```

After:

```py
class MyMiddleware(AgentMiddleware):

    @hook_config(can_jump_to=["end"])
    def before_model(self, state, runtime) -> dict[str, Any]:
        return {"jump_to": "end"}
```
2025-10-01 16:49:27 -07:00
Lauren Hirata Singh
af07949d13 fix(docs): Redirects (#33190) 2025-10-01 16:28:47 -04:00
Sydney Runkle
a10e880c00 feat(langchain_v1): add async support for create_agent (#33175)
This makes branching **much** simpler internally and helps greatly
w/ type safety for users. It also allows for one signature on hooks
instead of multiple.

Opened after https://github.com/langchain-ai/langchain/pull/33164
ballooned more than expected, w/ branching for:
* sync vs async
* runtime vs no runtime (this is self imposed)

**This also removes support for nodes w/o `runtime` in the signature.**
We can always go back and add support for nodes w/o `runtime`.

I think @christian-bromann's idea to re-export `runtime` from
langchain's agents might make sense due to the abundance of imports
here.

Check out the value of the change based on this diff:
https://github.com/langchain-ai/langchain/pull/33176
2025-10-01 19:15:39 +00:00
Eugene Yurtsev
7b5e839be3 chore(langchain_v1): use list[str] for modifyModelRequest (#33166)
Update model request to return tools by name. This will decrease the
odds of misusing the API.

We'll need to extend the type for built-in tools later.
2025-10-01 14:46:19 -04:00
ccurme
740842485c fix(openai): bump min core version (#33188)
Required for new tests added in
https://github.com/langchain-ai/langchain/pull/32541 and
https://github.com/langchain-ai/langchain/pull/33183.
2025-10-01 11:01:15 -04:00
noeliecherrier
08bb74f148 fix(mistralai): handle HTTP errors in async embed documents (#33187)
The async embed function does not properly handle HTTP errors.

For instance with large batches, Mistral AI returns `Too many inputs in
request, split into more batches.` in a 400 error.

This leads to a KeyError at `response.json()["data"]` (line 288)

This PR fixes the issue by:
- calling `response.raise_for_status()` before returning
- adding a retry similarly to what is done in the synchronous
counterpart `embed_documents`

I also added an integration test, but am willing to move it to unit
tests if more relevant.
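A minimal sketch of the described pattern (using httpx directly; not the actual patch):

```py
import httpx

async def embed_batch(client: httpx.AsyncClient, texts: list[str]) -> list[list[float]]:
    response = await client.post("/v1/embeddings", json={"input": texts})
    # Surface 4xx/5xx errors (e.g. the 400 "Too many inputs in request")
    # instead of failing later with a KeyError on response.json()["data"].
    response.raise_for_status()
    return [item["embedding"] for item in response.json()["data"]]
```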
2025-10-01 10:57:47 -04:00
ccurme
7d78ed9b53 release(standard-tests): 0.3.22 (#33186) 2025-10-01 10:39:17 -04:00
ccurme
7ccff656eb release(core): 0.3.77 (#33185) 2025-10-01 10:24:07 -04:00
ccurme
002d623f2d feat: (core, standard-tests) support PDF inputs in ToolMessages (#33183) 2025-10-01 10:16:16 -04:00
Mohammad Mohtashim
34f8031bd9 feat(langchain): Using Structured Response as Key in Output Schema for Middleware Agent (#33159)
- **Description:** Changing the key from `response` to
`structured_response` for the middleware agent, to keep it in sync with
the agent without middleware. This is a breaking change.
 - **Issue:** #33154
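A minimal before/after sketch; `agent` is assumed to be a middleware agent configured for structured output:

```py
from langchain_core.messages import HumanMessage

result = agent.invoke({"messages": [HumanMessage("Extract the order details")]})

# Before: result["response"]
# After, consistent with the agent without middleware:
structured = result["structured_response"]
```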
2025-10-01 03:24:59 +00:00
Mason Daugherty
a541b5bee1 chore(infra): rfc README.md for better presentation (#33172) 2025-09-30 17:44:42 -04:00
Mason Daugherty
3e970506ba chore(core): remove runnable section from README.md (#33171) 2025-09-30 17:15:31 -04:00
Mason Daugherty
d1b0196faa chore(infra): whitespace fix (#33170) 2025-09-30 17:14:55 -04:00
ccurme
aac69839a9 release(openai): 0.3.34 (#33169) 2025-09-30 16:48:39 -04:00
ccurme
64141072a3 feat(openai): support openai sdk 2.0 (#33168) 2025-09-30 16:34:00 -04:00
Mason Daugherty
0795be2a04 docs(core): remove non-existent param from as_tool docstring (#33165) 2025-09-30 19:43:34 +00:00
Eugene Yurtsev
9c97597175 chore(langchain_v1): expose middleware decorators and selected messages (#33163)
* Make it easy to improve the middleware shortcuts
* Export the messages that we're confident we'll expose
2025-09-30 14:14:57 -04:00
Sydney Runkle
eed0f6c289 feat(langchain): todo middleware (#33152)
Porting the [planning
middleware](39c0138d0f/src/deepagents/middleware.py (L21))
over from deepagents.

Also adding the ability to configure:
* System prompt
* Tool description

```py
from langchain.agents.middleware.planning import PlanningMiddleware
from langchain.agents import create_agent
from langchain_core.messages import HumanMessage

agent = create_agent("openai:gpt-4o", middleware=[PlanningMiddleware()])

result = await agent.ainvoke({"messages": [HumanMessage("Help me refactor my codebase")]})

print(result["todos"])  # Array of todo items with status tracking
```
2025-09-30 02:23:26 +00:00
ccurme
729637a347 docs(anthropic): document support for memory tool and context management (#33149) 2025-09-29 16:38:01 -04:00
Mason Daugherty
3325196be1 fix(langchain): handle gpt-5 model name in init_chat_model (#33148)
Expand matching so any `gpt-*` model name maps to OpenAI
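A small sketch of the effect (model name is illustrative):

```py
from langchain.chat_models import init_chat_model

# Any gpt-* name now infers the openai provider, so this no longer
# requires an explicit "openai:" prefix.
llm = init_chat_model("gpt-5")
```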
2025-09-29 16:16:17 -04:00
Mason Daugherty
f402fdcea3 fix(langchain): add context_management to Anthropic chat model init (#33150) 2025-09-29 16:13:47 -04:00
ccurme
ca9217c02d release(anthropic): 0.3.21 (#33147) 2025-09-29 19:56:28 +00:00
ccurme
f9bae40475 feat(anthropic): support memory and context management features (#33146)
https://docs.claude.com/en/docs/build-with-claude/context-editing

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-29 15:42:38 -04:00
ccurme
839a18e112 fix(openai): remove __future__.annotations import from test files (#33144)
Breaks schema conversion in places.
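A small illustration of the underlying Python behavior: with the future import, annotations are stored as strings at runtime, which can trip up schema-conversion code that expects real type objects.

```py
from __future__ import annotations

from dataclasses import dataclass

@dataclass
class Args:
    query: str

# Annotations are strings, not types, under the future import.
print(Args.__annotations__)  # {'query': 'str'}
```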
2025-09-29 16:23:32 +00:00
Mohammad Mohtashim
33a6def762 fix(core): Support of 'reasoning' type in 'convert_to_openai_messages' (#33050) 2025-09-29 09:17:05 -04:00
nhuang-lc
c456c8ae51 fix(langchain): fix response action for HITL (#33131)
Multiple improvements to HITL flow:

* On a `response` type resume, we should still append the tool call to
the last AIMessage (otherwise we have a ToolResult without a
corresponding ToolCall)
* When all interrupts have `response` types (so there's no pending tool
calls), we should jump back to the first node (instead of end) as we
enforced in the previous `post_model_hook_router`
* Added comments to the `model_to_tools` router to clarify all of the
potential exit conditions

Additionally:
* Lockfile update to use latest LG alpha release
* Added a test for `jump_to` behaving ephemerally; this was fixed in LG
but surfaced as a bug w/ `jump_to`.
* Bump version to v1.0.0a10 to prep for alpha release

---------

Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-09-29 13:08:18 +00:00
Eugene Yurtsev
54ea62050b chore(langchain_v1): move tool node to tools namespace (#33132)
* Move ToolNode to tools namespace
* Expose injected variable as well in tools namespace
* Update doc-strings throughout
2025-09-26 15:23:57 -04:00
Mason Daugherty
986302322f docs: more standardization (#33124) 2025-09-25 20:46:20 -04:00
Mason Daugherty
a5137b0a3e refactor(langchain): resolve pydantic deprecation warnings (#33125) 2025-09-25 17:33:18 -04:00
Mason Daugherty
5bea28393d docs: standardize .. code-block directive usage (#33122)
and fix typos
2025-09-25 16:49:56 -04:00
Mason Daugherty
c3fed20940 docs: correct ported over directives (#33121)
Match rest of repo
2025-09-25 15:54:54 -04:00
Mason Daugherty
6d418ba983 test(mistralai): add xfail for structured output test (#33119)
In rare cases (difficult to reproduce), Mistral's API fails to return
valid bodies, leading to failures from `PydanticToolsParser`
2025-09-25 13:05:31 -04:00
Mason Daugherty
12daba63ff test(openai): raise token limit for o1 test (#33118)
`test_o1[False-False]` was sometimes failing because the OpenAI o1 model
was hitting a token limit with only 100 tokens
2025-09-25 12:57:33 -04:00
Christophe Bornet
eaf8dce7c2 chore: bump ruff version to 0.13 (#33043)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 12:27:39 -04:00
Mason Daugherty
f82de1a8d7 chore: bump locks (#33114) 2025-09-25 01:46:01 -04:00
Mason Daugherty
e3efd1e891 test(text-splitters): capture beta warnings (#33113) 2025-09-25 01:30:20 -04:00
Mason Daugherty
d6769cf032 test(text-splitters): resolve pytest marker warning (#33112)
#33111
2025-09-25 01:29:42 -04:00
Mason Daugherty
7ab2e0dd3b test(core): resolve pytest marker warning (#33111)
Remove redundant/outdated `@pytest.mark.requires("jinja2")` decorator

Pytest marks (like `@pytest.mark.requires(...)`) applied directly to
fixtures have no effect and are deprecated.
2025-09-25 01:08:54 -04:00
Mason Daugherty
81319ad3f0 test(core): resolve pydantic_v1 deprecation warning (#33110)
Excluded pydantic_v1 module from import testing

Acceptable since the pydantic_v1 module is explicitly deprecated. Testing its
importability at this stage serves little purpose since users should
migrate away from it.
2025-09-25 01:08:03 -04:00
Mason Daugherty
e3f3c13b75 refactor(core): use aadd_documents in vectorstores unit tests (#33109)
Don't use the deprecated `upsert()` and `aupsert()`

Instead use the recommended alternatives
2025-09-25 00:57:08 -04:00
Mason Daugherty
c30844fce4 fix(core): use version agnostic get_fields (#33108)
Resolves a warning
2025-09-25 00:54:29 -04:00
Mason Daugherty
c9eb3bdb2d test(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33107)
Finish work from #33101
2025-09-25 00:49:11 -04:00
Mason Daugherty
e97baeb9a6 test(core): suppress pydantic_v1 deprecation warnings during import tests (#33106)
We intentionally import these. Hide warnings to reduce testing noise.
2025-09-25 00:37:40 -04:00
Mason Daugherty
3a6046b157 test(core): don't use deprecated input_variables param in from_file (#33105)
finish #33104
2025-09-25 04:29:31 +00:00
Mason Daugherty
8fdc619f75 refactor(core): don't use deprecated input_variables param in from_file (#33104)
Missed a while back; causes warnings during tests
2025-09-25 00:14:17 -04:00
Ali Ismail
729bfe8369 test(core): enhance stringify_value test coverage for nested structures (#33099)
## Summary
Adds test coverage for the `stringify_value` utility function to handle
complex nested data structures that weren't previously tested.

## Changes
- Added `test_stringify_value_nested_structures()` to `test_strings.py`
- Tests nested dictionaries within lists
- Tests mixed-type lists with various data types
- Verifies proper stringification of complex nested structures

## Why This Matters
- Fills a gap in test coverage for edge cases
- Ensures `stringify_value` handles complex data structures correctly  
- Improves confidence in string utility functions used throughout the
codebase
- Low risk addition that strengthens existing test suite

## Testing
```bash
uv run --group test pytest libs/core/tests/unit_tests/utils/test_strings.py::test_stringify_value_nested_structures -v
```

This test addition follows the project's testing patterns and adds
meaningful coverage without introducing any breaking changes.
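A sketch in the spirit of the added test (not the merged code verbatim):

```py
from langchain_core.utils.strings import stringify_value

def test_stringify_value_nested_structures() -> None:
    # Nested dicts inside lists plus a mixed-type list
    value = {"items": [{"id": 1, "tags": ["a", "b"]}, 2, None, True]}
    result = stringify_value(value)
    assert isinstance(result, str)
    assert "id" in result
    assert "tags" in result
```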

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-25 00:04:47 -04:00
Mason Daugherty
9b624a79b2 test(core): suppress deprecation warnings in PipelinePromptTemplate (#33102)
We're intentionally testing this still so as not to regress. Reduce
warning noise.
2025-09-25 04:03:27 +00:00
Mason Daugherty
c60c5a91cb fix(core): use secure hash algorithm in indexing test to eliminate SHA-1 warning (#33101)
Use SHA-256 (collision-resistant) instead of the default SHA-1. No
functional changes to test behavior.
2025-09-25 00:02:11 -04:00
Mason Daugherty
d9e0c212e0 chore(infra): add tests to label mapping (#33103) 2025-09-25 00:01:53 -04:00
Sydney Runkle
f015526e42 release(langchain): v1.0.0a9 (#33098) 2025-09-24 21:02:53 +00:00
Sydney Runkle
57d931532f fix(langchain): extra arg for anthropic caching, __end__ -> end for jump_to (#33097)
Also updating `jump_to` to use `end` instead of `__end__`
2025-09-24 17:00:40 -04:00
Mason Daugherty
50012d95e2 chore: update pull_request_target types, harden (#33096)
Enhance the pull request workflows by updating the `pull_request_target`
types and ensuring safety by avoiding checkout of the PR's head. Update
the action to use a specific commit from the archived repository.
2025-09-24 16:37:16 -04:00
Mason Daugherty
33f06875cb fix(langchain_v1): version equality check (#33095) 2025-09-24 16:27:55 -04:00
dependabot[bot]
e5730307e7 chore: bump actions/setup-node from 4 to 5 (#32952)
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4
to 5.
Release notes for v5.0.0 (breaking changes):

- Automatic caching is now enabled when a valid `packageManager` field is
present in `package.json` (actions/setup-node#1348). To disable it, set
`package-manager-cache: false`:

```yaml
steps:
  - uses: actions/checkout@v5
  - uses: actions/setup-node@v5
    with:
      package-manager-cache: false
```

- The action now runs on node24 (actions/setup-node#1325); runners must be
on v2.327.1 or later.

Dependency upgrades: `@octokit/request-error` and `@actions/github`
(#1227), uuid 9.0.1 to 11.1.0 (#1273), undici 5.28.5 to 5.29.0 (#1295), a
form-data fix for a critical vulnerability (#1332), and actions/checkout 4
to 5 (#1345).

v4.4.0 highlights: eslint-compact matcher made compatible with Stylelint
(#98), support for indented eslint output (#1245), private mirror support
(#1240), and `@action/cache` bumped from 4.0.2 to 4.0.3 (#1262).

Full changelog: https://github.com/actions/setup-node/compare/v4...v5.0.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 16:26:05 -04:00
Mason Daugherty
4783a9c18e style: update workflow name for version equality check (#33094) 2025-09-24 20:11:30 +00:00
Mason Daugherty
ee4d84de7c style(core): typo/docs lint pass (#33093) 2025-09-24 16:11:21 -04:00
Mason Daugherty
092dd5e174 chore: update link for monorepo structure (#33091) 2025-09-24 19:55:00 +00:00
Sydney Runkle
dd81e1c3fb release(langchain): 1.0.0a8 (#33090) 2025-09-24 15:31:29 -04:00
Sydney Runkle
135a5b97e6 feat(langchain): improvements to anthropic prompt caching (#33058)
Adding an `unsupported_model_behavior` arg that can be `'ignore'`,
`'warn'`, or `'raise'`. Defaults to `'warn'`.
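A hypothetical usage sketch; the middleware class name and import path are assumptions:

```py
from langchain.agents.middleware import AnthropicPromptCachingMiddleware  # assumed

# Raise instead of the default 'warn' when the model doesn't support caching
caching = AnthropicPromptCachingMiddleware(unsupported_model_behavior="raise")
```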
2025-09-24 15:28:49 -04:00
Mason Daugherty
b92b394804 style: repo linting pass (#33089)
enable docstring-code-format
2025-09-24 15:25:55 -04:00
Sydney Runkle
083bb3cdd7 fix(langchain): need to inject all state for tools registered by middleware (#33087)
Type hints matter for conditional edges!
2025-09-24 15:25:51 -04:00
Mason Daugherty
2e9291cdd7 fix: lift openai version constraints across packages (#33088)
re: #33038 and https://github.com/openai/openai-python/issues/2644
2025-09-24 15:25:10 -04:00
Sydney Runkle
4f8a76b571 chore(langchain): renaming for HITL (#33067) 2025-09-24 07:19:44 -04:00
Mason Daugherty
05ba941230 style(cli): linting pass (#33078) 2025-09-24 01:24:52 -04:00
Mason Daugherty
ae4976896e chore: delete erroneous .readthedocs.yaml (#33079)
From the legacy docs/not needed here
2025-09-24 01:24:42 -04:00
Mason Daugherty
504ef96500 chore: add commit message generation instructions for VSCode (#33077) 2025-09-24 05:06:43 +00:00
Mason Daugherty
d99a02bb27 chore: add AGENTS.md (#33076)
it would be super cool if Anthropic supported this instead of
`CLAUDE.md` :/

https://agents.md/
2025-09-24 05:02:14 +00:00
Mason Daugherty
793de80429 chore: update label mapping in PR title labeler configuration (#33075) 2025-09-24 01:00:14 -04:00
Mason Daugherty
7d4e9d8cda revert(infra): put SECURITY.md at root (#33074) 2025-09-24 00:54:37 -04:00
Mason Daugherty
54dca494cf chore: delete erroneous poetry.toml configuration file (#33073)
- Not used by the current build system
- Potentially confusing for new contributors
- A leftover artifact from the Poetry to uv migration
2025-09-24 04:40:17 +00:00
Mason Daugherty
7b30e58386 chore: delete erroneous yarn.lock in root (#33072)
Appears to have had no purpose/was added by mistake and nobody
questioned it
2025-09-24 04:35:00 +00:00
Mason Daugherty
e62b541dfd chore(infra): move SECURITY.md to .github (#33071)
cleaning up top-level. `.github` folder placement will continue to show
on repo homepage:
https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository#about-security-policies
2025-09-24 00:27:48 -04:00
Mason Daugherty
8699980d09 chore(scripts): remove obsolete release and mypy/ruff update scripts (#33070)
Outdated scripts related to release management and mypy/ruff updates

Cleaning up the root-level
2025-09-24 04:24:38 +00:00
Mason Daugherty
79e536b0d6 chore(infra): further docs build cleanup (#33057)
Reorganize the requirements for better clarity and consistency. Improve
documentation on scripts and workflows.
2025-09-23 17:29:58 -04:00
Sydney Runkle
b5720ff17a chore(langchain): simplifying HITL condition (#33065)
Simplifying condition
2025-09-23 21:24:14 +00:00
nhuang-lc
48b05224ad fix(langchain_v1): only interrupt if at least one ToolConfig value is True (#33064)
**Description:** Right now, we interrupt even if the provided ToolConfig
has all false values. We should ignore ToolConfigs which do not have at
least one value marked as true (just as we would if `tool_name: False`
was passed into the dict).
2025-09-23 17:20:34 -04:00
Sydney Runkle
89079ad411 feat(langchain): new decorator pattern for dynamically generated middleware (#33053)
# Main Changes

1. Adding decorator utilities for dynamically defining middleware with
single hook functions (see an example below for dynamic system prompt)
2. Adding better conditional edge drawing with jump configuration
attached to middleware. Can be registered w/ the new decorator!

## Decorator Utilities

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import modify_model_request
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import InMemorySaver


@modify_model_request
def modify_system_prompt(request: ModelRequest, state: AgentState) -> ModelRequest:
    request.system_prompt = (
        "You are a helpful assistant."
        f"Please record the number of previous messages in your response: {len(state['messages'])}"
    )
    return request

agent = create_agent(
    model="openai:gpt-4o-mini", 
    middleware=[modify_system_prompt]
).compile(checkpointer=InMemorySaver())
```

## Visualization and Routing improvements

We now require that middlewares define the valid jumps for each hook.

If using the new decorator syntax, this can be done with:

```py
@before_model(jump_to=["__end__"])
@after_model(jump_to=["tools", "__end__"])
```

If using the subclassing syntax, you can use these two class vars:

```py
class MyMiddleware(AgentMiddleware):
    before_model_jump_to = ["__end__"]
    after_model_jump_to = ["tools", "__end__"]
```

Open for debate if we want to bundle these in a single jump map / config
for a middleware. Easy to migrate later if we decide to add more hooks.

We will need to **really clearly document** that these must be
explicitly set in order to enable conditional edges.

Notice for the below case, `Middleware2` does actually enable jumps.

Before (broken): conditional edges were added unconditionally. After
(fixed): conditional edges are added sparingly. [Before/after graph
screenshots omitted]

Snippet for the above:

```py
from typing import Any
from langchain.agents.tool_node import InjectedState
from langgraph.runtime import Runtime
from langchain.agents.middleware.types import AgentMiddleware, AgentState
from langchain.agents.middleware_agent import create_agent
from langchain_core.tools import tool
from typing import Annotated
from langchain_core.messages import HumanMessage
from typing_extensions import NotRequired

@tool
def simple_tool(input: str) -> str:
    """A simple tool."""
    return "successful tool call"


class Middleware1(AgentMiddleware):
    """Custom middleware that adds a simple tool."""

    tools = [simple_tool]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware2(AgentMiddleware):

    before_model_jump_to = ["tools", "__end__"]

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

class Middleware3(AgentMiddleware):

    def before_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

    def after_model(self, state: AgentState, runtime: Runtime) -> None:
        return None

builder = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[Middleware1(), Middleware2(), Middleware3()],
    system_prompt="You are a helpful assistant.",
)
agent = builder.compile()
```


## More Examples

### Guardrails `after_model`

[Screenshot: agent graph with the guardrails `after_model` jumps]

Code:

```py
from langchain.agents.middleware_agent import create_agent, AgentState, ModelRequest
from langchain.agents.middleware.types import after_model
from langchain_core.messages import HumanMessage, AIMessage
from langgraph.checkpoint.memory import InMemorySaver
from typing import cast, Any

@after_model(jump_to=["model", "__end__"])
def after_model_hook(state: AgentState) -> dict[str, Any]:
    """Check the last AI message for safety violations."""
    last_message_content = cast(AIMessage, state["messages"][-1]).content.lower()
    print(last_message_content)

    unsafe_keywords = ["pineapple"]
    if any(keyword in last_message_content for keyword in unsafe_keywords):

        # Jump back to model to regenerate response
        return {"jump_to": "model", "messages": [HumanMessage("Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.")]}

    return {"jump_to": "__end__"}

# Create agent with guardrails middleware
agent = create_agent(
    model="openai:gpt-4o-mini",
    middleware=[after_model_hook],
    system_prompt="Keep your responses to one sentence please!"
).compile()

# Test with potentially unsafe input
result = agent.invoke(
    {"messages": [HumanMessage("Tell me something about pineapples")]},
)

for msg in result["messages"]:
    msg.pretty_print()

"""
================================ Human Message =================================

Tell me something about pineapples
================================== Ai Message ==================================

Pineapples are tropical fruits known for their sweet, tangy flavor and distinctive spiky exterior.
================================ Human Message =================================

Please regenerate your response, and don't talk about pineapples. You can talk about apples instead.
================================== Ai Message ==================================

Apples are popular fruits that come in various varieties, known for their crisp texture and sweetness, and are often used in cooking and baking.
"""
```

2025-09-23 13:25:55 -04:00
Mason Daugherty
2c95586f2a chore(infra): audit workflows, scripts (#33055)
Mostly adding a descriptive frontmatter to workflow files. Also address
some formatting and outdated artifacts

No functional changes outside of
[d5457c3](d5457c39ee),
[90708a0](90708a0d99),
and
[338c82d](338c82d21e)
2025-09-23 17:08:19 +00:00
Mason Daugherty
9c1285cf5b chore(infra): fix ping pong pr labeler config (#33054)
The title-based labeler was clearing all pre-existing labels (including
the file-based ones) before adding its semantic labels.
2025-09-22 21:19:53 -04:00
Sydney Runkle
c3be45bf14 fix(langchain): HITL bug causing dupe interrupt (#33052)
Need to find **last** AI msg (not first). Getting too creative w/
generators.
2025-09-22 20:09:12 -04:00
Arman Tsaturian
8f488d62b2 docs: fix stripe toolkit import in the guide (#33044)
**Description:**
Stripe tools integration guide incorrectly referenced the `crewai`
toolkit. Updated the import to use the correct `langchain` toolkit.

Stripe docs reference:
https://docs.stripe.com/agents?framework=langchain&lang=python
2025-09-22 15:17:09 -04:00
Mason Daugherty
cdae9e4942 fix(infra): prevent labeler workflow from adding/removing same labels (#33039)
The file-based and title-based labeler workflows were conflicting,
causing the bot to add and remove identical labels in the same
operation. Hopefully this fixes it.
2025-09-21 04:37:59 +00:00
Mason Daugherty
7ddc798f95 fix(openai): pin upper bound to prevent Pydantic 2.7.0 issues (#33038)
https://github.com/openai/openai-python/issues/2644
2025-09-21 00:27:03 -04:00
Mason Daugherty
7dcf6a515e fix: update method calls from dict to model_dump in Chain (#33035) 2025-09-20 23:47:44 -04:00
Mason Daugherty
043a7560a5 test: use .get() for safe ls_params access (#33034) 2025-09-20 23:46:37 -04:00
Mason Daugherty
5b418d3f26 feat(infra): add PR labeler configurations and workflows (#33031) 2025-09-20 22:33:08 -04:00
Mason Daugherty
6b4054c795 chore(infra): update pre-commit hooks to include linting (#33029) 2025-09-21 02:26:19 +00:00
Mason Daugherty
30fde5af38 chore(infra): remove couchbase formatting hook from pre-commit (#33030)
Should've been done when it was removed from the monorepo
2025-09-20 22:09:57 -04:00
Mason Daugherty
781db9d892 chore: update pyproject.toml files, remove codespell (#33028)
- Removes Codespell from deps, docs, and `Makefile`s
- Python version requirements in all `pyproject.toml` files now use the
`~=` (compatible release) specifier
- All dependency groups and main dependencies now use explicit lower and
upper bounds, reducing potential for breaking changes
2025-09-20 22:09:33 -04:00
Sydney Runkle
f2b0afd0b7 release(langchain): 1.0.0a6 (#33024)
w/ improvements to HITL, state schema merging, dynamic system prompt
2025-09-19 18:47:41 +00:00
Sydney Runkle
c3654202a3 fix(langchain): use state schema as input schema to middleware nodes (#33023)
We want state schema as the input schema to middleware nodes because the
conditional edges after these nodes need access to the full state.

Also, we just generally want all state passed to middleware nodes, so we
should be specifying this explicitly. If we don't, the state annotations
used by users in their node signatures are used (so they might be
missing fields).
2025-09-19 18:43:33 +00:00
Sydney Runkle
4d118777bc feat(langchain): dynamic system prompt middleware (#33006)
# Changes

## Adds support for `DynamicSystemPromptMiddleware`

```py
from langchain.agents.middleware import DynamicSystemPromptMiddleware
from langchain.agents.middleware.types import AgentState
from langgraph.runtime import Runtime
from typing_extensions import TypedDict

class Context(TypedDict):
    user_name: str

def system_prompt(state: AgentState, runtime: Runtime[Context]) -> str:
    user_name = runtime.context.get("user_name", "n/a")
    return f"You are a helpful assistant. Always address the user by their name: {user_name}"

middleware = DynamicSystemPromptMiddleware(system_prompt)
```

## Adds support for `runtime` in middleware hooks

```py
class AgentMiddleware(Generic[StateT, ContextT]):
    def modify_model_request(
        self,
        request: ModelRequest,
        state: StateT,
        runtime: Runtime[ContextT],  # Optional runtime parameter
    ) -> ModelRequest:
        # upgrade model if runtime.context.subscription is `top-tier` or whatever
```

## Adds support for omitting state attributes from input / output
schemas

```py
from typing import Annotated, NotRequired
from langchain.agents.middleware.types import PrivateStateAttr, OmitFromInput, OmitFromOutput

class CustomState(AgentState):
    # Private field - not in input or output schemas
    internal_counter: NotRequired[Annotated[int, PrivateStateAttr]]
    
    # Input-only field - not in output schema
    user_input: NotRequired[Annotated[str, OmitFromOutput]]
    
    # Output-only field - not in input schema  
    computed_result: NotRequired[Annotated[str, OmitFromInput]]
```

## Additionally
* Removes filtering of state before passing into middleware hooks

Typing is not foolproof here; still need to figure out some of the
generics stuff w/ state and context schema extensions for middleware.

TODO:
* More docs for middleware, should hold off on this until other prios
like MCP and deepagents are met

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-18 16:07:16 -04:00
Mason Daugherty
f158cea1e8 release(mistralai): 0.2.12 (#33008) 2025-09-18 11:42:11 -04:00
Sadiq Khan
90280d1f58 docs(core): fix bugs and improve example code in chat_history.py (#32994)
## Summary

This PR fixes several bugs and improves the example code in
`BaseChatMessageHistory` docstring that would prevent it from working
correctly.

### Bugs Fixed
- **Critical bug**: Fixed `json.dump(messages, f)` →
`json.dump(serialized, f)` - was using wrong variable
- **NameError**: Fixed bare variable references to use
`self.storage_path` and `self.session_id`
- **Missing imports**: Added required imports (`json`, `os`, message
converters) to make example runnable

### Improvements
- Added missing type hints following project standards (`messages() ->
list[BaseMessage]`, `clear() -> None`)
- Added robust error handling with `FileNotFoundError` exception
handling
- Added directory creation with `os.makedirs(exist_ok=True)` to prevent
path errors
- Improved performance: `json.load(f)` instead of `json.loads(f.read())`
- Added explicit UTF-8 encoding to all file operations
- Updated stores.py to use modern union syntax (`int | None` vs
`Optional[int]`)

### Test Plan
- [x] Code passes linting (`ruff check`)
- [x] Example code now has all required imports and proper syntax
- [x] Fixed variable references prevent runtime errors
- [x] Follows project's type annotation standards

The example code in the docstring is now fully functional and follows
LangChain's coding standards.
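A compact sketch incorporating the fixes listed above (not the docstring verbatim):

```py
import json
import os

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, messages_from_dict, messages_to_dict


class FileChatMessageHistory(BaseChatMessageHistory):
    def __init__(self, storage_path: str, session_id: str) -> None:
        self.storage_path = storage_path
        self.session_id = session_id
        os.makedirs(storage_path, exist_ok=True)  # prevent path errors

    @property
    def _file_path(self) -> str:
        return os.path.join(self.storage_path, f"{self.session_id}.json")

    @property
    def messages(self) -> list[BaseMessage]:
        try:
            with open(self._file_path, encoding="utf-8") as f:
                return messages_from_dict(json.load(f))
        except FileNotFoundError:
            return []

    def add_message(self, message: BaseMessage) -> None:
        serialized = messages_to_dict(self.messages + [message])
        with open(self._file_path, "w", encoding="utf-8") as f:
            json.dump(serialized, f)  # the fixed variable: serialized, not messages

    def clear(self) -> None:
        try:
            os.remove(self._file_path)
        except FileNotFoundError:
            pass
```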

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
2025-09-18 09:34:19 -04:00
Dushmanta
ee340e0a3b fix(docs): update dead link to docling github and docs (#33001)
- **Description:** Updated the dead/unreachable links to Docling from
the additional resources section of the langchain-docling docs
  - **Issue:** Fixes langchain-ai/docs/issues/574
  - **Dependencies:** None
2025-09-18 09:30:29 -04:00
Sydney Runkle
d5ba5d3511 feat(langchain): improved HITL patterns (#32996)
# Main changes / new features

## Better support for parallel tool calls

1. Support for multiple tool calls requiring human input
2. Support for combination of tool calls requiring human input + those
that are auto-approved
3. Support structured output w/ tool calls requiring human input
4. Support structured output w/ standard tool calls

## Shortcut for allowed actions

Adds a shortcut where tool config can be specified as a `bool`, meaning
"all actions allowed"

```py
HumanInTheLoopMiddleware(tool_configs={"expensive_tool": True})
```

## A few design decisions here
* We only raise one interrupt w/ all `HumanInterrupt`s, currently we
won't be able to execute all tools until all of these are resolved. This
isn't super blocking bc we can't re-invoke the model until all tools
have finished execution. That being said, if you have a long running
auto-approved tool, this could slow things down.

## TODOs

* Ideally, we would rename `accept` -> `approve`
* Ideally, we would rename `respond` -> `reject`
* Docs update (@sydney-runkle to own)
* In another PR I'd like to refactor testing to have one file for each
prebuilt middleware :)

Fast follow to https://github.com/langchain-ai/langchain/pull/32962
which was deemed as too breaking
2025-09-17 16:53:01 -04:00
Mason Daugherty
76d0758007 fix(docs): json_mode -> json_schema (#32993) 2025-09-17 18:21:34 +00:00
Mason Daugherty
8b3f74012c docs: update GenAI structured output section to include JSON mode details (#32992) 2025-09-17 17:40:34 +00:00
Mason Daugherty
54a9556f5c chore(cli): update lock (#32986) 2025-09-17 02:08:20 +00:00
Mason Daugherty
66041a2778 refactor(cli): target ruff 310 (#32985)
Use union types for optional parameters
2025-09-16 22:04:28 -04:00
Mason Daugherty
ab1b822523 chore: update PR title lint (#32983) 2025-09-16 22:04:19 -04:00
Chase Lean
543d90e108 docs: add langchain-scraperapi (#31973)
Adds documentation for the integration langchain-scraperapi, which
contains 3 tools using the ScraperAPI service.

The tools give AI agents the ability to:

- Scrape the web and return HTML/text/markdown
- Perform Google searches and return JSON output
- Perform Amazon searches and return JSON output

For reference, here is the official repo for langchain_scraperapi:
https://github.com/scraperapi/langchain-scraperapi
2025-09-16 21:46:20 -04:00
Adam Deedman
f8640630d8 docs: fix memory for agents (#32979)
Replaced the `input_message` parameter with a directly passed message
tuple, e.g. `{"messages": [("user", "What is my name?")]}`

Before, memory wasn't working with the agent when using the
`input_message` parameter format.

Specifically, on page [Build an
Agent#adding-in-memory](https://python.langchain.com/docs/tutorials/agents/#adding-in-memory)

In the previous code, the query "What's my name?" wasn't working, as the
agent could not recall memory correctly.
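A minimal sketch of the working pattern; `agent` is assumed to be compiled with a checkpointer, and the thread ID is illustrative:

```py
# `agent` is assumed to be built with a checkpointer (e.g. InMemorySaver)
config = {"configurable": {"thread_id": "abc123"}}

agent.invoke({"messages": [("user", "Hi, I'm Bob!")]}, config)
result = agent.invoke({"messages": [("user", "What is my name?")]}, config)
```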

[Screenshot: agent correctly recalling the user's name]
2025-09-16 15:46:15 -04:00
Mason Daugherty
f9605c7438 chore(infra): update contribution guide link in CONTRIBUTING.md (#32976) 2025-09-16 15:15:53 +00:00
Mason Daugherty
ebd6f7d8a3 chore(infra): update security guidelines formatting (#32975) 2025-09-16 15:12:10 +00:00
ccurme
e63c1d7171 chore(langchain): drop cap on python version (#32974) 2025-09-16 10:44:21 -04:00
Mason Daugherty
8180020b93 chore: restore commented out optional deps (#32971)
langchain & langchain_v1
2025-09-16 10:10:49 -04:00
Username46786
435194acf6 docs: add cross-links between summarization how-to pages (#32968)
This PR improves navigation in the summarization how-to section by
adding
cross-links from the single-call guide to the related map-reduce and
refine
guides. This mirrors the docs style guide’s emphasis on clear
cross-references
and should help readers discover the appropriate pattern for longer
texts.

- Source edited: docs/docs/how_to/summarize_stuff.ipynb
- Links added:
  - /docs/how_to/summarize_map_reduce/
  - /docs/how_to/summarize_refine/

Type: docs-only (no code changes)
2025-09-16 09:59:03 -04:00
Mason Daugherty
244c699551 refactor(cli): drop Python 3.9 (#32964) 2025-09-15 19:22:53 -04:00
Mason Daugherty
369858de19 chore(infra): fix codspeed (#32963)
Related to #32950

CodSpeed v4 runs pytest inside its own runner process, which does not
automatically inherit environment variables from the job
2025-09-15 21:52:46 +00:00
Ali Ismail
4ebce80fbb docs(langchain): add docstring for _load_map_reduce_chain (#32961)
Description:
Add a docstring to _load_map_reduce_chain in chains/summarize/ to
explain the purpose of the prompt argument and document function
parameters. This addresses an existing TODO in the codebase.

Issue:
N/A (documentation improvement only)

Dependencies:
None
2025-09-15 17:19:20 -04:00
Mason Daugherty
8670b24c8e test(groq): xfail tool integration test (#32960)
Groq models have known issues with tool calling consistency,
[particularly with complex tools derived from
runnables](https://github.com/langchain-ai/langchain/discussions/19990).
[(more)](https://github.com/langchain-ai/langchain/discussions/24309)

xfail until we can dedicate time to wrangling their API/model handling
2025-09-15 14:23:22 -04:00
Ademílson Tonato
8d60ddba3a docs: update installation command for firecrawl-py package (#32958) 2025-09-15 14:10:08 -04:00
Mason Daugherty
9f6431924f feat(openai): add max_tokens to AzureChatOpenAI (#32959)
Fixes #32949

This pattern is [present in
`ChatOpenAI`](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L2821)
but wasn't carried over to Azure.


[CI](https://github.com/langchain-ai/langchain/actions/runs/17741751797/job/50417180998)
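A minimal sketch of the new parameter; the deployment name and API version are illustrative:

```py
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",  # hypothetical deployment name
    api_version="2024-06-01",   # hypothetical API version
    max_tokens=512,             # now accepted directly, mirroring ChatOpenAI
)
```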
2025-09-15 14:09:20 -04:00
Ali Ismail
569a3d9602 docs(langchain): add docstring for _load_stuff_chain (#32932)
**Description:**  
Add a docstring to `_load_stuff_chain` in `chains/summarize/` to explain
the purpose of the `prompt` argument and document function parameters.
This addresses an existing TODO in the codebase.

**Issue:**  
N/A (documentation improvement only)

**Dependencies:**  
None
2025-09-15 10:02:50 -04:00
dependabot[bot]
8ef4df903f chore(infra): bump CodSpeedHQ/action from 3 to 4 (#32950)
Bumps [CodSpeedHQ/action](https://github.com/codspeedhq/action) from 3
to 4.
Release notes for v4.0.0 (breaking): the runner mode must now be set
explicitly to `instrumentation` or `walltime`, using either the `mode`
argument or the `CODSPEED_RUNNER_MODE` environment variable. Previously
the mode defaulted to `instrumentation` on every runner except CodSpeed
macro runners, where it defaulted to `walltime`. See
https://codspeed.io/docs/instruments for details.

Features: perf profiling enabled by default (CodSpeedHQ/runner#110), the
runner mode argument made required, introspected node used in walltime
mode (#108), and an instrumented Go shell script (#102). Bug fixes:
proper load-bias computation (#107), increased timeout for the first
perf ping, and prevention of running with valgrind (#106).

v3.8.1: no longer shows an error when libpython is not found, plus
internal refactors.

Full runner changelog:
https://github.com/CodSpeedHQ/runner/blob/main/CHANGELOG.md
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=CodSpeedHQ/action&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

**Dependabot commands and options**

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)



Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:56:46 +00:00
doubleinfinity
b944bbc766 docs: add ZeusDB vector store integration (#32822)
## Description

This PR adds documentation for the new ZeusDB vector store integration
with LangChain.

## Motivation

ZeusDB is a high-performance vector database (Python/Rust backend)
designed for AI applications that need fast similarity search and
real-time vector ops. This integration brings ZeusDB's capabilities to
the LangChain ecosystem, giving developers another production-oriented
option for vector storage and retrieval.

**Key Features:**
- **User-Friendly Python API**: Intuitive interface that integrates
seamlessly with Python ML workflows
- **High Performance**: Powered by a robust Rust backend for
lightning-fast vector operations
- **Enterprise Logging**: Comprehensive logging capabilities for
monitoring and debugging production systems
- **Advanced Features**: Includes product quantization and persistence
capabilities
- **AI-Optimized**: Purpose-built for modern AI applications and RAG
pipelines

## Changes

- Added provider documentation:
`docs/docs/integrations/providers/zeusdb.mdx` (installation, setup).

- Added vector store documentation:
`docs/docs/integrations/vectorstores/zeusdb.ipynb` (quickstart for
creating/querying a ZeusDBVectorStore).

- Registered langchain-zeusdb in `libs/packages.yml` for discovery.
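For orientation, a minimal quickstart in the spirit of the notebook, assuming
`ZeusDBVectorStore` follows the standard LangChain `VectorStore` interface
(only the class name comes from this PR; the fake embedding stands in for a
real embedding model, and constructor details are illustrative):

```python
# Hypothetical quickstart sketch; exact constructor arguments may differ.
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_zeusdb import ZeusDBVectorStore

store = ZeusDBVectorStore.from_texts(
    [
        "ZeusDB pairs a Rust core with a Python API",
        "LangChain vector stores support similarity search",
    ],
    embedding=DeterministicFakeEmbedding(size=256),
)
docs = store.similarity_search("fast vector search", k=1)
print(docs[0].page_content)
```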

## Target users

- AI/ML engineers building RAG pipelines

- Data scientists working with large document collections

- Developers needing high-throughput vector search

- Teams requiring near real-time vector operations

## Testing

- Followed LangChain's "How to add standard tests to an integration"
guidance.
- Code passes format, lint, and test checks locally.
- Tested with LangChain Core 0.3.74
- Works with Python 3.10 to 3.13

## Package Information
**PyPI:** https://pypi.org/project/langchain-zeusdb
**Github:** https://github.com/ZeusDB/langchain-zeusdb
2025-09-15 09:55:14 -04:00
Filip Makraduli
0be7515abc docs: add superlinked retriever integration (#32433)
# feat(superlinked): add superlinked retriever integration

**Description:** 
Add Superlinked as a custom retriever with full LangChain compatibility.
This integration enables users to leverage Superlinked's multi-modal
vector search capabilities including text similarity, categorical
similarity, recency, and numerical spaces with flexible weighting
strategies. The implementation provides a `SuperlinkedRetriever` class
that extends LangChain's `BaseRetriever` with comprehensive error
handling, parameter validation, and support for various vector databases
(in-memory, Qdrant, Redis, MongoDB).

**Key Features:**
- Full LangChain `BaseRetriever` compatibility with `k` parameter
support
- Multi-modal search spaces (text, categorical, numerical, recency)
- Flexible weighting strategies for complex search scenarios
- Vector database agnostic implementation
- Comprehensive validation and error handling
- Complete test coverage (unit tests, integration tests)
- Detailed documentation with 6 practical usage examples

**Issue:** N/A (new integration)

**Dependencies:** 
- `superlinked==33.5.1` (peer dependency, imported within functions)
- `pandas^2.2.0` (required by superlinked)

**Linkedin handle:** https://www.linkedin.com/in/filipmakraduli/

## Implementation Details

### Files Added/Modified:
- `libs/partners/superlinked/` - Complete package structure
- `libs/partners/superlinked/langchain_superlinked/retrievers.py` - Main
retriever implementation
- `libs/partners/superlinked/tests/unit_tests/test_retrievers.py` - unit
tests
- `libs/partners/superlinked/tests/integration_tests/test_retrievers.py`
- Integration tests with mocking
- `docs/docs/integrations/retrievers/superlinked.ipynb` - Documentation with
a few usage examples

### Testing:
- `make format` - passing
- `make lint` - passing 
- `make test` - passing (16 unit tests, integration tests)
- Comprehensive test coverage including error handling, validation, and
edge cases

### Documentation:
- Example notebook with 6 practical scenarios:
  1. Simple text search
  2. Multi-space blog search (content + category + recency)
  3. E-commerce product search (price + brand + ratings)
  4. News article search (sentiment + topics + recency)
  5. LangChain RAG integration example
  6. Qdrant vector database integration

### Code Quality:
- Follows LangChain contribution guidelines
- Backwards compatible
- Optional dependencies imported within functions
- Comprehensive error handling and validation
- Type hints and docstrings throughout
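To illustrate the `BaseRetriever` pattern the integration follows, here is a
generic sketch (a toy stand-in built on the real `langchain_core` retriever
API, not the actual `SuperlinkedRetriever` implementation):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Illustrative stand-in for a backend-specific retriever."""

    documents: List[Document]
    k: int = 4

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # A real implementation would query the vector backend here.
        matches = [
            d for d in self.documents if query.lower() in d.page_content.lower()
        ]
        return matches[: self.k]


retriever = ToyRetriever(
    documents=[Document(page_content="vector search with Superlinked")]
)
print(retriever.invoke("vector search"))
```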

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:54:04 +00:00
Sadiq Khan
cc9a97a477 docs(core): add type hints to BaseStore example code (#32946)
## Summary
- Add comprehensive type hints to the MyInMemoryStore example code in
BaseStore docstring
- Improve documentation quality and educational value for developers
- Align with LangChain's coding standards requiring type hints on all
Python code

## Changes Made
- Added return type annotations to all methods (`__init__`, `mget`, `mset`,
`mdelete`, `yield_keys`)
- Added parameter type annotations using proper generic types (Sequence,
Iterator)
- Added instance variable type annotation for the store attribute
- Used modern Python union syntax (str | None) for optional types

## Test Plan
- Verified Python syntax validity with ast.parse()
- No functional changes to actual code, only documentation improvements
- Example code now follows best practices and coding standards

This change improves the educational value of the example code and
ensures consistency with LangChain's requirement that "All Python code
MUST include type hints and return types" as specified in the
development guidelines.
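For reference, the fully annotated example reads roughly like the sketch
below (reconstructed from the description above, not copied from the PR):

```python
from collections.abc import Iterator, Sequence

from langchain_core.stores import BaseStore


class MyInMemoryStore(BaseStore[str, int]):
    """Toy in-memory store with full type annotations."""

    def __init__(self) -> None:
        self.store: dict[str, int] = {}

    def mget(self, keys: Sequence[str]) -> list[int | None]:
        return [self.store.get(key) for key in keys]

    def mset(self, key_value_pairs: Sequence[tuple[str, int]]) -> None:
        for key, value in key_value_pairs:
            self.store[key] = value

    def mdelete(self, keys: Sequence[str]) -> None:
        for key in keys:
            self.store.pop(key, None)

    def yield_keys(self, *, prefix: str | None = None) -> Iterator[str]:
        for key in self.store:
            if prefix is None or key.startswith(prefix):
                yield key
```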

---------

Co-authored-by: sadiqkhzn <sadiqkhzn@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 13:45:34 +00:00
Dmitry
ee17adb022 docs: add AI/ML API integration (#32430)
**Description:**
Introduces documentation notebooks for AI/ML API integration covering
the following use cases:
- Chat models (`ChatAimlapi`)
- Text completion models (`AimlapiLLM`)
- Provider usage examples
- Text embedding models (`AimlapiEmbeddings`)

Additionally, adds the `langchain-aimlapi` package entry to
`libs/packages.yml` for package management.

This PR aims to provide a comprehensive starting point for developers
integrating AI/ML API models with LangChain via the new
`langchain-aimlapi` package.
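A hypothetical quickstart matching the class names above (the model id and
constructor arguments are placeholders, not taken from the notebooks):

```python
# Hypothetical usage sketch; only the class name comes from this PR.
from langchain_aimlapi import ChatAimlapi

chat = ChatAimlapi(model="example-model-id")  # placeholder model id
response = chat.invoke("Say hello in one sentence.")
print(response.content)
```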

**Issue:** N/A

**Dependencies:** None

**Twitter handle:** @aimlapi

---

### **To-Do Before Submitting PR:**

* [x] Run `make format`
* [x] Run `make lint`
* [x] Confirm all documentation notebooks are in
`docs/docs/integrations/`
* [x] Double-check `libs/packages.yml` has the correct repo path
* [x] Confirm no `pyproject.toml` modifications were made unnecessarily

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-15 09:41:40 -04:00
Noraina
6a43f140bc docs: update SerpApi free searches amount in tool feature table (#32945)
**Description:** 
This PR updates the free searches per month from **100** to **250** and
renames SerpAPI to [SerpApi](https://serpapi.com/) to prevent confusion.
It also adds API key imports and enhances the usage instructions in the
Jupyter notebook.

**Issue:** N/A

**Dependencies:** N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.
2025-09-14 21:42:59 -04:00
Youngho Kim
4619a2727f docs(anthropic): update documentation links (#32938)
**Description:**
This PR updates links to the latest Anthropic documentation, including
revised links for the model overview, tool usage, web search tool, text
editor tool, and more.

**Issue:**
N/A

**Dependencies:**
None

**Twitter handle:**
N/A
2025-09-14 21:38:51 -04:00
湛露先生
6487a7e2e5 chore(langchain): remove duplicate .pdf listing (#32929) 2025-09-14 21:33:40 -04:00
湛露先生
406ebc9141 chore(langchain): Fix typos in core docstrings (#32928)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-09-14 21:33:06 -04:00
Nikhil Chandrappa
e6b5ff213a docs: add YugabyteDB Distributed SQL database (#32571)
- **Description:** The `langchain-yugabytedb` package implements core
LangChain abstractions using the `YugabyteDB` distributed SQL database.
  
YugabyteDB is a cloud-native distributed PostgreSQL-compatible database
that combines strong consistency with ultra-resilience, seamless
scalability, geo-distribution, and highly flexible data locality to
deliver business-critical, transactional applications.

[YugabyteDB](https://www.yugabyte.com/ai/) combines the power of the
`pgvector` PostgreSQL extension with an inherently distributed
architecture. This future-proofed foundation helps you build GenAI
applications using RAG retrieval that demands high-performance vector
search.

- [ ] **tests and docs**: 
1. `langchain-yugabytedb`
[github](https://github.com/yugabyte/langchain-yugabytedb) repo.
2. YugabyteDB VectorStore example notebook showing its use. It lives at
`langchain/docs/docs/integrations/vectorstores/yugabytedb.ipynb`.
  3. Running `langchain-yugabytedb` unit tests 
  
- Setting up a Development Environment

This document details how to set up a local development environment that
will
allow you to contribute changes to the project.

Acquire sources and create virtualenv.
```shell
git clone https://github.com/yugabyte/langchain-yugabytedb
cd langchain-yugabytedb
uv venv --python=3.13
source .venv/bin/activate
```

Install package in editable mode.
```shell
uv pip install pipx  
pipx install poetry
poetry install
uv pip install pytest pytest_asyncio pytest-timeout langchain-core langchain_tests sqlalchemy psycopg psycopg-binary numpy pgvector
```

Start YugabyteDB RF-1 Universe.
```shell
docker run -d --name yugabyte_node01 --hostname yugabyte01 \
  -p 7000:7000 -p 9000:9000 -p 15433:15433 -p 5433:5433 -p 9042:9042 \
  yugabytedb/yugabyte:2.25.2.0-b359 bin/yugabyted start --background=false \
  --master_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true" \
  --tserver_flags="allowed_preview_flags_csv=ysql_yb_enable_advisory_locks,ysql_yb_enable_advisory_locks=true"

docker exec -it yugabyte_node01 bin/ysqlsh -h yugabyte01 -c "CREATE extension vector;"
```

Invoke test cases.
```shell
pytest -vvv tests/unit_tests/yugabytedb_tests
```
2025-09-12 16:55:09 -04:00
Michael Yilma
03f0ebd93e docs: add Bigtable Key-value Store and Vector Store Docs (#32598)

- [x] **feat(docs)**: add Bigtable Key-value store doc
- [X] **feat(docs)**: add Bigtable Vector store doc 

This PR adds a doc for Bigtable and LangChain Key-value store
integration. It contains guides on how to add, delete, get, and yield
key-value pairs from Bigtable Key-value Store for LangChain.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. **We will not consider
a PR unless these three are passing in CI.** See [contribution
guidelines](https://python.langchain.com/docs/contributing/) for more.


---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:53:59 -04:00
Bar Cohen
c9eed530ce docs: add Timbr tools integration (#32862)
# feat(integrations): Add Timbr tools integration

## DESCRIPTION

This PR adds comprehensive documentation and integration support for
Timbr's semantic layer tools in LangChain.

[Timbr](https://timbr.ai/) provides an ontology-driven semantic layer
that enables natural language querying of databases through
business-friendly concepts. It connects raw data to governed business
measures for consistent access across BI, APIs, and AI applications.

[`langchain-timbr`](https://pypi.org/project/langchain-timbr/) is a
Python SDK that extends
[LangChain](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangChain)
and
[LangGraph](https://github.com/WPSemantix/Timbr-GenAI/tree/main/LangGraph)
with custom agents, chains, and nodes for seamless integration with the
Timbr semantic layer. It enables converting natural language prompts
into optimized semantic-SQL queries and executing them directly against
your data.

**What's Added:**
- Complete integration documentation for `langchain-timbr` package
- Tool documentation page with usage examples and API reference

**Integration Components:**
- `IdentifyTimbrConceptChain` - Identify relevant concepts from user
prompts
- `GenerateTimbrSqlChain` - Generate SQL queries from natural language
- `ValidateTimbrSqlChain` - Validate queries against knowledge graph
schemas
- `ExecuteTimbrQueryChain` - Execute queries against semantic databases
- `GenerateAnswerChain` - Generate human-readable answers from results

## Documentation Added

- `/docs/integrations/providers/timbr.mdx` - Provider overview and
configuration
- `/docs/integrations/tools/timbr.ipynb` - Comprehensive tool usage
examples

## Links

- [PyPI Package](https://pypi.org/project/langchain-timbr/)
- [GitHub Repository](https://github.com/WPSemantix/langchain-timbr)
- [Official
Documentation](https://docs.timbr.ai/doc/docs/integration/langchain-sdk/)

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:51:42 -04:00
tbice
e6c38a043f docs: add Qwen integration guide and update qwq documentation (#32817)

**Description:**  
Add documentation for Qwen integration in LangChain, including setup
instructions, usage examples, and configuration details. Update related
qwq documentation to reflect current best practices and improve clarity
for users.

This PR enhances the documentation ecosystem by:
- Adding a new guide for integrating Qwen models
- Updating outdated or incomplete qwq documentation
- Improving structure and readability of relevant sections

**Issue:** N/A  
**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:49:20 -04:00
Elif Sema Balcioglu
dc47c2c598 docs: update langchain-oracledb documentation (#32805)
`Oracle AI Vector Search` integrations for LangChain have been moved to a
dedicated package, [langchain-oracledb](https://pypi.org/project/langchain-oracledb/),
and a new repository, [langchain-oracle](https://github.com/oracle/langchain-oracle/tree/main/libs/oracledb).
This PR updates the corresponding documentation, including installation
instructions and import statements, to reflect these changes.

This PR is complemented with:
https://github.com/langchain-ai/langchain-community/pull/283

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:47:10 -04:00
Yuvraj Chandra
3420ca1da2 docs: add ZenRows provider and tool integration docs (#31742)
**Description:** Adds documentation for ZenRows integration with
LangChain, including provider overview and detailed tool documentation.
ZenRows is an enterprise-grade web scraping solution that enables
LangChain agents to extract web content at scale with advanced features
like JavaScript rendering, anti-bot bypass, geo-targeting, and multiple
output formats.

This PR includes:
- Provider documentation
(`docs/docs/integrations/providers/zenrows.ipynb`)
- Tool documentation
(`docs/docs/integrations/tools/zenrows_universal_scraper.ipynb`)
- Complete usage examples and API reference links

**Issue:** N/A

**Dependencies:** 
- [langchain-zenrows](https://github.com/ZenRows-Hub/langchain-zenrows)
package (external, available on
[PyPI](https://pypi.org/project/langchain-zenrows/))
- No changes to core LangChain dependencies

**LinkedIn handle:** https://www.linkedin.com/company/zenrows/

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:37:49 -04:00
Vishal Karwande
f11dd177e9 docs: update oci documentation and examples. (#32749)
Adds Oracle Generative AI as one of the providers for LangChain and updates
the old documentation examples with new, working ones.

---------

Co-authored-by: Vishal Karwande <vishalkarwande@Vishals-MacBook-Pro.local>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-12 16:28:03 -04:00
Ali Ismail
d5a4abf960 docs(core): remove duplicate 'the' in indexing/api.py (#32924)
**Description:** Fixes a small typo in `_get_document_with_hash` inside 
`libs/core/langchain_core/indexing/api.py`.

**Issue:** N/A (no related issue)

**Dependencies:** None
2025-09-12 15:49:54 -04:00
Eugene Yurtsev
b1497bcea1 chore(core): test that default values in tool calls are preserved in json schema representation (#32921)
Add unit test coverage for this issue:
https://github.com/langchain-ai/langchain/issues/32232
2025-09-12 12:50:54 -04:00
Sydney Runkle
84f9824cc9 chore: use uv caches (#32919)
Especially helpful for the text splitters tests where we're installing
pytorch (expensive and slow slow slow). Should speed up CI by 5-10 mins.

w/o caches, CI taking 20 minutes 😨 
w/ caches, CI taking 3 minutes
2025-09-12 10:29:35 -04:00
Sydney Runkle
0814bfe5ed ci: use partial runs w/ codspeed (#32920)
Taking advantage of [partial
runs](https://codspeed.io/docs/features/partial-runs)!

This should save us minutes on every CI job: we only run CodSpeed for libs
with changes, and this doesn't affect detection of benchmarking drops.
2025-09-12 09:46:01 -04:00
Christophe Bornet
cbaf97ada4 chore: bump mypy version to 1.18 (#32914) 2025-09-12 09:19:23 -04:00
Sydney Runkle
dc2da95ac0 release(langchain): v1.0.0a5 (#32917) 2025-09-12 08:36:44 -04:00
Sydney Runkle
9e78ff19ab fix(langchain): use messages from model request (#32908)
Fixes an oversight from moving `modify_model_request` back to a basic
function call rather than implementing it as its own node.

A basic test is currently failing on main and passing on this branch.

This revealed a gap in testing; a more robust test suite for basic
middleware features will follow.
2025-09-12 08:18:02 -04:00
Mason Daugherty
649d8a8223 test(anthropic): enable VCR for web fetch test (#32913)
The API issues have been resolved; no longer xfailing
2025-09-12 03:19:55 +00:00
Mason Daugherty
338d3d2795 chore: remove infra tag from task issue template (#32912) 2025-09-11 22:02:14 -04:00
Mason Daugherty
31f641a11f chore(infra): issue template updates (#32911) 2025-09-11 22:00:44 -04:00
open-swe[bot]
91286b0b27 chore(infra): issue template updates (#32910)
Fixes: #32909

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:53:35 -04:00
dishaprakash
bea72bac3e docs: add hybrid search documentation to PGVectorStore (#32549)
Adding documentation for Hybrid Search in the PGVectorStore Notebook

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-09-11 21:12:58 -04:00
Caspar Broekhuizen
15d558ff16 fix(core): resolve mermaid node id collisions when special chars are used (#32857)
### Description

* Replace the Mermaid graph node label escaping logic
(`_escape_node_label`) with `_to_safe_id`, which converts a string into
a unique, Mermaid-compatible node id. Ensures nodes with special
characters always render correctly.

**Before**
* Invalid characters (e.g. `开`) were replaced with `_`, which causes
collisions between nodes whose names have the same length and consist
entirely of non-safe characters:
```python
_escape_node_label("开") # '_'
_escape_node_label("始") # '_'  same as above, but different character passed in. not a unique mapping.
```

**After**
```python
_to_safe_id("开") # \5f00
_to_safe_id("始") # \59cb  unique!
```
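A compact sketch of such a mapping (illustrative; the real `_to_safe_id` may
differ in detail):

```python
def _to_safe_id(label: str) -> str:
    # Keep ASCII-safe identifier characters; map everything else to its hex
    # code point so distinct labels can never collide.
    return "".join(
        ch if ch.isascii() and (ch.isalnum() or ch == "_") else f"\\{ord(ch):04x}"
        for ch in label
    )


assert _to_safe_id("开") == "\\5f00"
assert _to_safe_id("始") == "\\59cb"
```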

### Tests
* Rename `test_graph_mermaid_escape_node_label()` to
`test_graph_mermaid_to_safe_id()` and update function logic to use
`_to_safe_id`
* Add `test_graph_mermaid_special_chars()`

### Issue

Fixes langchain-ai/langgraph#6036
2025-09-11 14:15:17 -07:00
Hyunjoon Jeong
9cc85387d1 fix(text-splitters): add validation to prevent infinite loop and prevent empty token splitter (#32205)
### Description
1) Add validation enforcing `tokenizer.tokens_per_chunk > tokenizer.chunk_overlap`, preventing an infinite loop condition
2) Avoid emitting an empty decoded chunk when the splitter appends tokens
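In spirit, the new guard looks like the sketch below (a hypothetical helper;
the real check lives inside the token splitter):

```python
def _validate_tokenizer(tokens_per_chunk: int, chunk_overlap: int) -> None:
    # The split window advances by (tokens_per_chunk - chunk_overlap) tokens
    # per step, so an overlap >= tokens_per_chunk would never make progress.
    if chunk_overlap >= tokens_per_chunk:
        raise ValueError(
            f"chunk_overlap ({chunk_overlap}) must be smaller than "
            f"tokens_per_chunk ({tokens_per_chunk}); otherwise the splitter "
            "cannot advance and loops forever."
        )
```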

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:55:32 -04:00
Mason Daugherty
7e5180e2fa refactor: inline test release (#32903)
Reusable workflows are not currently supported by PyPI's Trusted
Publishing
functionality, and are subject to breakage. Users are strongly
encouraged
to avoid using reusable workflows for Trusted Publishing until support
becomes official. Please, do not report bugs if this breaks.
2025-09-11 16:20:07 -04:00
Mason Daugherty
bbb1b9085d release(prompty): 0.1.2 (#32907) 2025-09-11 16:19:07 -04:00
Vincent Min
ff9f17bc66 fix(core): preserve ordering in RunnableRetry batch/abatch results (#32526)
Description: Fixes a bug in `RunnableRetry` where `.batch` / `.abatch` could
return misordered outputs (e.g. inputs `[0, 1, 2]` yielding `[1, 1, 2]`) when
some items succeeded on an earlier attempt and others were retried. Root
cause: successful results were keyed by the index within the shrinking
“pending” subset rather than by the original input index, causing collisions
and reordered or duplicated outputs after retries. The fix updates `_batch`
and `_abatch` to:

- Track remaining original indices explicitly.
- Call underlying batch/abatch only on remaining inputs.
- Map results back to original indices.
- Preserve final ordering by reconstructing outputs in original
positional order.

Issue: Fixes #21326

Tests:

- Added regression tests: test_retry_batch_preserves_order and
test_async_retry_batch_preserves_order asserting correct ordering after
a single controlled failure + retry.
- Existing retry tests still pass.

Dependencies:

- None added or changed.
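In spirit, the fix works like the sketch below (a hypothetical standalone
helper, not the actual `RunnableRetry` code; it assumes `run_batch` returns
exceptions in place of failed outputs and that every item succeeds within
`max_attempts`):

```python
from typing import Any, Callable


def batch_with_retry(
    inputs: list[Any],
    run_batch: Callable[[list[Any]], list[Any]],
    max_attempts: int = 3,
) -> list[Any]:
    """Key results by original input index, not by shrinking-subset index."""
    results: dict[int, Any] = {}
    pending = list(range(len(inputs)))  # original indices not yet succeeded
    for _ in range(max_attempts):
        # Call the underlying batch only on the remaining inputs.
        outputs = run_batch([inputs[i] for i in pending])
        still_pending = []
        for original_index, output in zip(pending, outputs):
            if isinstance(output, Exception):
                still_pending.append(original_index)  # retry next attempt
            else:
                results[original_index] = output  # map back to original slot
        pending = still_pending
        if not pending:
            break
    # Reconstruct outputs in the original positional order.
    return [results[i] for i in range(len(inputs))]
```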

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-09-11 16:18:25 -04:00
Matthew Lapointe
b1f08467cd feat(core): allow overriding ls_model_name from kwargs (#32541) 2025-09-11 16:18:06 -04:00
Eugene Yurtsev
2903e08311 chore(docs): remove langchain_experimental from api reference (#32904)
This removes langchain-experimental from api reference.

We do not recommend it to users for production use cases, so let's also
deprecate it from documentation
2025-09-11 16:13:58 -04:00
Mason Daugherty
115e20a0bc release(ollama): 0.3.8 (#32906) 2025-09-11 16:00:41 -04:00
Mason Daugherty
0ea945d291 release(nomic): 0.1.5 (#32905) 2025-09-11 15:54:19 -04:00
Mason Daugherty
5795ec3c4d release(exa): 0.3.1 (#32902) 2025-09-11 15:53:13 -04:00
2865 changed files with 54929 additions and 490566 deletions

.github/CODEOWNERS vendored
View File

@@ -1,3 +1,3 @@
/.github/ @baskaryan @ccurme @eyurtsev
/libs/core/ @eyurtsev
/libs/packages.yml @ccurme
/libs/partners/ @ccurme @mdrxy

View File

@@ -3,8 +3,4 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).
## New features
For new features, please start a new [discussion on our forum](https://forum.langchain.com/), where the maintainers will help with scoping out the necessary changes.
To learn how to contribute to LangChain, please follow the [contribution guide here](https://docs.langchain.com/oss/python/contributing).

View File

@@ -1,6 +1,7 @@
name: "\U0001F41B Bug Report"
description: Report a bug in LangChain. To report a security issue, please instead use the security option below. For questions, please use the LangChain forum.
labels: ["bug"]
type: bug
body:
- type: markdown
attributes:
@@ -13,10 +14,8 @@ body:
if there's another way to solve your problem:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
* [LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
* [LangChain how-to guides](https://python.langchain.com/docs/how_to/),
* [API Reference](https://python.langchain.com/api_reference/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
@@ -25,7 +24,7 @@ body:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
- label: This is a bug, not a usage question.
required: true
- label: I added a clear and descriptive title that summarizes this issue.
required: true
@@ -35,6 +34,8 @@ body:
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

View File

@@ -1,9 +1,9 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: Documentation
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: LangChain Forum
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support

View File

@@ -0,0 +1,118 @@
name: "✨ Feature Request"
description: Request a new feature or enhancement for LangChain. For questions, please use the LangChain forum.
labels: ["feature request"]
type: feature
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to request a new feature.
Use this to request NEW FEATURES or ENHANCEMENTS in LangChain. For bug reports, please use the bug report template. For usage questions and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
- type: checkboxes
id: checks
attributes:
label: Checked other resources
description: Please confirm and check all the following options.
options:
- label: This is a feature request, not a bug report or usage question.
required: true
- label: I added a clear and descriptive title that summarizes the feature request.
required: true
- label: I used the GitHub search to find a similar feature request and didn't find it.
required: true
- label: I checked the LangChain documentation and API reference to see if this feature already exists.
required: true
- label: This is not related to the langchain-community package.
required: true
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: Feature Description
description: |
Please provide a clear and concise description of the feature you would like to see added to LangChain.
What specific functionality are you requesting? Be as detailed as possible.
placeholder: |
I would like LangChain to support...
This feature would allow users to...
- type: textarea
id: use-case
validations:
required: true
attributes:
label: Use Case
description: |
Describe the specific use case or problem this feature would solve.
Why do you need this feature? What problem does it solve for you or other users?
placeholder: |
I'm trying to build an application that...
Currently, I have to work around this by...
This feature would help me/users to...
- type: textarea
id: proposed-solution
validations:
required: false
attributes:
label: Proposed Solution
description: |
If you have ideas about how this feature could be implemented, please describe them here.
This is optional but can be helpful for maintainers to understand your vision.
placeholder: |
I think this could be implemented by...
The API could look like...
```python
# Example of how the feature might work
```
- type: textarea
id: alternatives
validations:
required: false
attributes:
label: Alternatives Considered
description: |
Have you considered any alternative solutions or workarounds?
What other approaches have you tried or considered?
placeholder: |
I've tried using...
Alternative approaches I considered:
1. ...
2. ...
But these don't work because...
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: Additional Context
description: |
Add any other context, screenshots, examples, or references that would help explain your feature request.
placeholder: |
Related issues: #...
Similar features in other libraries:
- ...
Additional context or examples:
- ...

View File

@@ -4,12 +4,7 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in LangChain! 🚀
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged pull requests.
If you are not a LangChain maintainer, employee, or were not asked directly by a maintainer to create an issue, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead.
- type: checkboxes
id: privileged
attributes:

.github/ISSUE_TEMPLATE/task.yml vendored Normal file
View File

@@ -0,0 +1,91 @@
name: "📋 Task"
description: Create a task for project management and tracking by LangChain maintainers. If you are not a maintainer, please use other templates or the forum.
labels: ["task"]
type: task
body:
- type: markdown
attributes:
value: |
Thanks for creating a task to help organize LangChain development.
This template is for **maintainer tasks** such as project management, development planning, refactoring, documentation updates, and other organizational work.
If you are not a LangChain maintainer or were not asked directly by a maintainer to create a task, then please start the conversation on the [LangChain Forum](https://forum.langchain.com/) instead or use the appropriate bug report or feature request templates on the previous page.
- type: checkboxes
id: maintainer
attributes:
label: Maintainer task
description: Confirm that you are allowed to create a task here.
options:
- label: I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create a task here.
required: true
- type: textarea
id: task-description
attributes:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
validations:
required: true
- type: textarea
id: acceptance-criteria
attributes:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
- [ ] ...
- [ ] ...
- [ ] ...
validations:
required: true
- type: textarea
id: context
attributes:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false

View File

@@ -10,8 +10,7 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
- Allowed `{TYPE}` values:
- feat, fix, docs, style, refactor, perf, test, build, ci, chore, revert, release
- Allowed `{SCOPE}` values (optional):
- core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai
- *Note:* the `{DESCRIPTION}` must not start with an uppercase letter.
- core, cli, langchain, standard-tests, text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant, xai, infra
- Once you've written the title, please delete this checklist item; do not include it in the PR.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
@@ -19,15 +18,11 @@ Thank you for contributing to LangChain! Follow these steps to mark your pull re
- **Issue:** the issue # it fixes, if applicable (e.g. Fixes #123)
- **Dependencies:** any dependencies required for this change
- [ ] **Add tests and docs**: If you're adding a new integration, you must include:
1. A test for the integration, preferably unit tests that do not rely on network access,
2. An example notebook showing its use. It lives in `docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://python.langchain.com/docs/contributing/) for more.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. **We will not consider a PR unless these three are passing in CI.** See [contribution guidelines](https://docs.langchain.com/oss/python/contributing) for more.
Additional guidelines:
- Most PRs should not touch more than one package.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests.
- Please do not add dependencies to `pyproject.toml` files (even optional ones) unless they are **required** for unit tests. Likewise, please do not update the `uv.lock` files unless you are adding a required dependency.
- Changes should be backwards compatible.
- Make sure optional dependencies are imported within a function.

View File

@@ -1,7 +0,0 @@
FROM python:3.9
RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"
COPY ./app /app
CMD ["python", "/app/main.py"]

View File

@@ -1,11 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: "User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}"
required: true
runs:
using: "docker"
image: "Dockerfile"

View File

@@ -1,646 +0,0 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py
import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union
import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings
github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"
# discussions_query = """
# query Q($after: String, $category_id: ID) {
# repository(name: "langchain", owner: "langchain-ai") {
# discussions(first: 100, after: $after, categoryId: $category_id) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# isAnswer
# replies(first: 10) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# }
# }
# """
# issues_query = """
# query Q($after: String) {
# repository(name: "langchain", owner: "langchain-ai") {
# issues(first: 100, after: $after) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# state
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# """
prs_query = """
query Q($after: String) {
repository(name: "langchain", owner: "langchain-ai") {
pullRequests(first: 100, after: $after, states: MERGED) {
edges {
cursor
node {
changedFiles
additions
deletions
number
labels(first: 100) {
nodes {
name
}
}
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
title
createdAt
state
reviews(first:100) {
nodes {
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
state
}
}
}
}
}
}
}
"""
class Author(BaseModel):
login: str
avatarUrl: str
url: str
twitterUsername: Union[str, None] = None
# Issues and Discussions
class CommentsNode(BaseModel):
createdAt: datetime
author: Union[Author, None] = None
class Replies(BaseModel):
nodes: List[CommentsNode]
class DiscussionsCommentsNode(CommentsNode):
replies: Replies
class Comments(BaseModel):
nodes: List[CommentsNode]
class DiscussionsComments(BaseModel):
nodes: List[DiscussionsCommentsNode]
class IssuesNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
state: str
comments: Comments
class DiscussionsNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
comments: DiscussionsComments
class IssuesEdge(BaseModel):
cursor: str
node: IssuesNode
class DiscussionsEdge(BaseModel):
cursor: str
node: DiscussionsNode
class Issues(BaseModel):
edges: List[IssuesEdge]
class Discussions(BaseModel):
edges: List[DiscussionsEdge]
class IssuesRepository(BaseModel):
issues: Issues
class DiscussionsRepository(BaseModel):
discussions: Discussions
class IssuesResponseData(BaseModel):
repository: IssuesRepository
class DiscussionsResponseData(BaseModel):
repository: DiscussionsRepository
class IssuesResponse(BaseModel):
data: IssuesResponseData
class DiscussionsResponse(BaseModel):
data: DiscussionsResponseData
# PRs
class LabelNode(BaseModel):
name: str
class Labels(BaseModel):
nodes: List[LabelNode]
class ReviewNode(BaseModel):
author: Union[Author, None] = None
state: str
class Reviews(BaseModel):
nodes: List[ReviewNode]
class PullRequestNode(BaseModel):
number: int
labels: Labels
author: Union[Author, None] = None
changedFiles: int
additions: int
deletions: int
title: str
createdAt: datetime
state: str
reviews: Reviews
# comments: Comments
class PullRequestEdge(BaseModel):
cursor: str
node: PullRequestNode
class PullRequests(BaseModel):
edges: List[PullRequestEdge]
class PRsRepository(BaseModel):
pullRequests: PullRequests
class PRsResponseData(BaseModel):
repository: PRsRepository
class PRsResponse(BaseModel):
data: PRsResponseData
class Settings(BaseSettings):
input_token: SecretStr
github_repository: str
httpx_timeout: int = 30
def get_graphql_response(
*,
settings: Settings,
query: str,
after: Union[str, None] = None,
category_id: Union[str, None] = None,
) -> Dict[str, Any]:
headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
# category_id is only used by one query, but GraphQL allows unused variables, so
# keep it here for simplicity
variables = {"after": after, "category_id": category_id}
response = httpx.post(
github_graphql_url,
headers=headers,
timeout=settings.httpx_timeout,
json={"query": query, "variables": variables, "operationName": "Q"},
)
if response.status_code != 200:
logging.error(
f"Response was not 200, after: {after}, category_id: {category_id}"
)
logging.error(response.text)
raise RuntimeError(response.text)
data = response.json()
if "errors" in data:
logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
logging.error(data["errors"])
logging.error(response.text)
raise RuntimeError(response.text)
return data
# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
# data = get_graphql_response(settings=settings, query=issues_query, after=after)
# graphql_response = IssuesResponse.model_validate(data)
# return graphql_response.data.repository.issues.edges
# def get_graphql_question_discussion_edges(
# *,
# settings: Settings,
# after: Union[str, None] = None,
# ):
# data = get_graphql_response(
# settings=settings,
# query=discussions_query,
# after=after,
# category_id=questions_category_id,
# )
# graphql_response = DiscussionsResponse.model_validate(data)
# return graphql_response.data.repository.discussions.edges
def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
if after is None:
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(settings=settings, query=prs_query, after=after)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
# def get_issues_experts(settings: Settings):
# issue_nodes: List[IssuesNode] = []
# issue_edges = get_graphql_issue_edges(settings=settings)
# while issue_edges:
# for edge in issue_edges:
# issue_nodes.append(edge.node)
# last_edge = issue_edges[-1]
# issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for issue in issue_nodes:
# issue_author_name = None
# if issue.author:
# authors[issue.author.login] = issue.author
# issue_author_name = issue.author.login
# issue_commentors = set()
# for comment in issue.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != issue_author_name:
# issue_commentors.add(comment.author.login)
# for author_name in issue_commentors:
# commentors[author_name] += 1
# if issue.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_discussions_experts(settings: Settings):
# discussion_nodes: List[DiscussionsNode] = []
# discussion_edges = get_graphql_question_discussion_edges(settings=settings)
# while discussion_edges:
# for discussion_edge in discussion_edges:
# discussion_nodes.append(discussion_edge.node)
# last_edge = discussion_edges[-1]
# discussion_edges = get_graphql_question_discussion_edges(
# settings=settings, after=last_edge.cursor
# )
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for discussion in discussion_nodes:
# discussion_author_name = None
# if discussion.author:
# authors[discussion.author.login] = discussion.author
# discussion_author_name = discussion.author.login
# discussion_commentors = set()
# for comment in discussion.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != discussion_author_name:
# discussion_commentors.add(comment.author.login)
# for reply in comment.replies.nodes:
# if reply.author:
# authors[reply.author.login] = reply.author
# if reply.author.login != discussion_author_name:
# discussion_commentors.add(reply.author.login)
# for author_name in discussion_commentors:
# commentors[author_name] += 1
# if discussion.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_experts(settings: Settings):
# (
# discussions_commentors,
# discussions_last_month_commentors,
# discussions_authors,
# ) = get_discussions_experts(settings=settings)
# commentors = discussions_commentors
# last_month_commentors = discussions_last_month_commentors
# authors = {**discussions_authors}
# return commentors, last_month_commentors, authors
def _logistic(x, k):
return x / (x + k)
def get_contributors(settings: Settings):
pr_nodes: List[PullRequestNode] = []
pr_edges = get_graphql_pr_edges(settings=settings)
while pr_edges:
for edge in pr_edges:
pr_nodes.append(edge.node)
last_edge = pr_edges[-1]
pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)
contributors = Counter()
contributor_scores = Counter()
recent_contributor_scores = Counter()
reviewers = Counter()
authors: Dict[str, Author] = {}
for pr in pr_nodes:
pr_reviewers: Set[str] = set()
for review in pr.reviews.nodes:
if review.author:
authors[review.author.login] = review.author
pr_reviewers.add(review.author.login)
for reviewer in pr_reviewers:
reviewers[reviewer] += 1
if pr.author:
authors[pr.author.login] = pr.author
contributors[pr.author.login] += 1
files_changed = pr.changedFiles
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = datetime.now(timezone.utc) - timedelta(days=3 * 30)
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return (
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
authors,
)
def get_top_users(
*,
counter: Counter,
min_count: int,
authors: Dict[str, Author],
skip_users: Container[str],
):
users = []
for commentor, count in counter.most_common():
if commentor in skip_users:
continue
if count >= min_count:
author = authors[commentor]
users.append(
{
"login": commentor,
"count": count,
"avatarUrl": author.avatarUrl,
"twitterUsername": author.twitterUsername,
"url": author.url,
}
)
return users
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
settings = Settings()
logging.info(f"Using config: {settings.model_dump_json()}")
g = Github(settings.input_token.get_secret_value())
repo = g.get_repo(settings.github_repository)
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
(
contributors,
contributor_scores,
recent_contributor_scores,
reviewers,
pr_authors,
) = get_contributors(settings=settings)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
"hwchase17",
"agola11",
"baskaryan",
"hinthornw",
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin",
"ccurme",
"vbarda",
}
hidden_logins = {
"dev2049",
"vowelparrot",
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"isahers1",
"dqbd",
"bracesproul",
"akira",
}
bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
maintainers = []
for login in maintainers_logins:
user = authors[login]
maintainers.append(
{
"login": login,
"count": contributors[login], # + question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
}
)
# min_count_expert = 10
# min_count_last_month = 3
min_score_contributor = 1
min_count_reviewer = 5
skip_users = maintainers_logins | bot_names | hidden_logins
# experts = get_top_users(
# counter=question_commentors,
# min_count=min_count_expert,
# authors=authors,
# skip_users=skip_users,
# )
# last_month_active = get_top_users(
# counter=question_last_month_commentors,
# min_count=min_count_last_month,
# authors=authors,
# skip_users=skip_users,
# )
top_recent_contributors = get_top_users(
counter=recent_contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_contributors = get_top_users(
counter=contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_reviewers = get_top_users(
counter=reviewers,
min_count=min_count_reviewer,
authors=authors,
skip_users=skip_users,
)
people = {
"maintainers": maintainers,
# "experts": experts,
# "last_month_active": last_month_active,
"top_recent_contributors": top_recent_contributors,
"top_contributors": top_contributors,
"top_reviewers": top_reviewers,
}
people_path = Path("./docs/data/people.yml")
people_old_content = people_path.read_text(encoding="utf-8")
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if people_old_content == new_people_content:
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
logging.info("Setting up GitHub Actions git user")
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
subprocess.run(
["git", "config", "user.email", "github-actions@github.com"], check=True
)
branch_name = "langchain/langchain-people"
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(["git", "add", str(people_path)], check=True)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
logging.info("Pushing branch")
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")

View File

@@ -1,12 +1,24 @@
# TODO: https://docs.astral.sh/uv/guides/integration/github/#caching
# Helper to set up Python and uv with caching
name: uv-install
description: Set up Python and uv
description: Set up Python and uv with caching
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
enable-cache:
description: Enable caching for uv dependencies
required: false
default: "true"
cache-suffix:
description: Custom cache key suffix for cache invalidation
required: false
default: ""
working-directory:
description: Working directory for cache glob scoping
required: false
default: "**"
env:
UV_VERSION: "0.5.25"
@@ -15,7 +27,13 @@ runs:
using: composite
steps:
- name: Install uv and set the python version
uses: astral-sh/setup-uv@v5
uses: astral-sh/setup-uv@v6
with:
version: ${{ env.UV_VERSION }}
python-version: ${{ inputs.python-version }}
enable-cache: ${{ inputs.enable-cache }}
cache-dependency-glob: |
${{ inputs.working-directory }}/pyproject.toml
${{ inputs.working-directory }}/uv.lock
${{ inputs.working-directory }}/requirements*.txt
cache-suffix: ${{ inputs.cache-suffix }}

View File

@@ -26,7 +26,7 @@ def get_user(user_id: str, verbose: bool = False) -> User:
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
- Mark experimental features clearly with docstring admonitions (using MkDocs Material, like `!!! warning`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
@@ -130,7 +130,7 @@ def load_config(path: str) -> dict:
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Use Google-style docstrings with Args and Returns sections for all public functions.**
**Insufficient Documentation:**
@@ -149,7 +149,7 @@ def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
Args:
to: The email address of the recipient.
msg: The message body to send.
priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).
priority: Email priority level.
Returns:
True if email was sent successfully, False otherwise.
@@ -166,7 +166,6 @@ def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
@@ -204,7 +203,14 @@ class DataProcessor:
self.email = email_client
def process(self, data: List[dict]) -> ProcessingResult:
"""Process and store data with notifications."""
"""Process and store data with notifications.
Args:
data: List of data items to process.
Returns:
ProcessingResult with details of the operation.
"""
validated = self._validate_data(data)
result = self.db.save(validated)
self._notify_completion(result)
@@ -291,16 +297,15 @@ def search_database(query: str) -> str:
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `fix(cli)!: resolve flag parsing error` (a breaking change is marked with an exclamation mark before the colon)
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Follow the existing patterns in `langchain_core` for base abstractions
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
- Avoid deprecated components
### Partner Integrations

(Two binary image files changed — before/after previews, 6.4 KiB each; no further detail recoverable.)

84
.github/pr-file-labeler.yml vendored Normal file
View File

@@ -0,0 +1,84 @@
# Label PRs (config)
# Automatically applies labels based on changed files and branch patterns
# Core packages
core:
- changed-files:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
v1:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
cli:
- changed-files:
- any-glob-to-any-file:
- "libs/cli/**/*"
standard-tests:
- changed-files:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
text-splitters:
- changed-files:
- any-glob-to-any-file:
- "libs/text-splitters/**/*"
# Partner integrations
integration:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/**/*"
# Infrastructure and DevOps
infra:
- changed-files:
- any-glob-to-any-file:
- ".github/**/*"
- "Makefile"
- ".pre-commit-config.yaml"
- "scripts/**/*"
- "docker/**/*"
- "Dockerfile*"
github_actions:
- changed-files:
- any-glob-to-any-file:
- ".github/workflows/**/*"
- ".github/actions/**/*"
dependencies:
- changed-files:
- any-glob-to-any-file:
- "**/pyproject.toml"
- "uv.lock"
- "**/requirements*.txt"
- "**/poetry.lock"
# Documentation
documentation:
- changed-files:
- any-glob-to-any-file:
- "**/*.md"
- "**/*.rst"
- "**/README*"
# Security related changes
security:
- changed-files:
- any-glob-to-any-file:
- "**/*security*"
- "**/*auth*"
- "**/*credential*"
- "**/*secret*"
- "**/*token*"
- ".github/workflows/security*"

41
.github/pr-title-labeler.yml vendored Normal file
View File

@@ -0,0 +1,41 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]

View File

@@ -1,3 +1,18 @@
"""Analyze git diffs to determine which directories need to be tested.
Intelligently determines which LangChain packages and directories need to be tested,
linted, or built based on the changes. Handles dependency relationships between
packages, maps file changes to appropriate CI job configurations, and outputs JSON
configurations for GitHub Actions.
- Maps changed files to affected package directories (libs/core, libs/partners/*, etc.)
- Builds dependency graph to include dependent packages when core components change
- Generates test matrix configurations with appropriate Python versions
- Handles special cases for Pydantic version testing and performance benchmarks
Used as part of the check_diffs workflow.
"""
import glob
import json
import os
@@ -17,7 +32,7 @@ LANGCHAIN_DIRS = [
"libs/langchain_v1",
]
# when set to True, we are ignoring core dependents
# When set to True, we are ignoring core dependents
# in order to be able to get CI to pass for each individual
# package that depends on core
# e.g. if you touch core, we don't then add textsplitters/etc to CI
@@ -35,10 +50,6 @@ IGNORED_PARTNERS = [
"prompty",
]
PY_312_MAX_PACKAGES = [
"libs/partners/chroma", # https://github.com/chroma-core/chroma/issues/4382
]
def all_package_dirs() -> Set[str]:
return {
@@ -49,9 +60,9 @@ def all_package_dirs() -> Set[str]:
def dependents_graph() -> dict:
"""
Construct a mapping of package -> dependents, such that we can
run tests on all dependents of a package when a change is made.
"""Construct a mapping of package -> dependents
This lets us run tests on all dependents of a package when a change is made.
"""
dependents = defaultdict(set)
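
For illustration, the inversion the docstring describes looks roughly like this (package names below are hypothetical; the real graph is read from each package's metadata):

from collections import defaultdict

# "package -> dependencies", as it might be read from pyproject metadata
DEPENDENCIES = {
    "libs/langchain": ["libs/core", "libs/text-splitters"],
    "libs/partners/openai": ["libs/core"],
}

# Invert into "package -> dependents" so a change fans out to every dependent
dependents = defaultdict(set)
for pkg, deps in DEPENDENCIES.items():
    for dep in deps:
        dependents[dep].add(pkg)

print(sorted(dependents["libs/core"]))  # ['libs/langchain', 'libs/partners/openai']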
@@ -121,25 +132,21 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
elif dir_ == "libs/core":
py_versions = ["3.9", "3.10", "3.11", "3.12", "3.13"]
py_versions = ["3.10", "3.11", "3.12", "3.13"]
# custom logic for specific directories
elif dir_ == "libs/partners/milvus":
# milvus doesn't allow 3.12 because they declare deps in a funny way
py_versions = ["3.9", "3.11"]
elif dir_ in PY_312_MAX_PACKAGES:
py_versions = ["3.9", "3.12"]
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
elif dir_ == "libs/langchain_v1":
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.9", "3.12"]
py_versions = ["3.10", "3.12"]
else:
py_versions = ["3.9", "3.13"]
py_versions = ["3.10", "3.13"]
return [{"working-directory": dir_, "python-version": py_v} for py_v in py_versions]
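
Concretely, after this change a libs/core test job expands into one matrix entry per supported interpreter:

# What _get_configs_for_single_dir("test", "libs/core") now returns:
[
    {"working-directory": "libs/core", "python-version": "3.10"},
    {"working-directory": "libs/core", "python-version": "3.11"},
    {"working-directory": "libs/core", "python-version": "3.12"},
    {"working-directory": "libs/core", "python-version": "3.13"},
]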
@@ -252,10 +259,9 @@ if __name__ == "__main__":
):
# add all LANGCHAIN_DIRS for infra changes
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if file.startswith("libs/core"):
dirs_to_run["codspeed"].add(f"libs/core")
dirs_to_run["codspeed"].add("libs/core")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
@@ -296,19 +302,21 @@ if __name__ == "__main__":
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file == "libs/packages.yml":
continue
elif file.startswith("libs/"):
# Check if this is a root-level file in libs/ (e.g., libs/README.md)
file_parts = file.split("/")
if len(file_parts) == 2:
# Root-level file in libs/, skip it (no tests needed)
continue
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif file.startswith("docs/") or file in [
elif file in [
"pyproject.toml",
"uv.lock",
]: # docs or root uv files
]: # root uv files
docs_edited = True
dirs_to_run["lint"].add(".")
dependents = dependents_graph()
@@ -326,9 +334,6 @@ if __name__ == "__main__":
"codspeed",
]
}
map_job_to_configs["test-doc-imports"] = (
[{"python-version": "3.12"}] if docs_edited else []
)
for key, value in map_job_to_configs.items():
json_output = json.dumps(value)
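
The tail of this loop is cut off in the diff; the conventional pattern (an assumption here, not shown in the hunk) is to append key=value lines to the file GitHub exposes via $GITHUB_OUTPUT, so downstream jobs can read each matrix with fromJson():

import json, os

map_job_to_configs = {"lint": [{"working-directory": "libs/core", "python-version": "3.13"}]}
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
    for key, value in map_job_to_configs.items():
        f.write(f"{key}={json.dumps(value)}\n")  # e.g. lint=[{...}]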

View File

@@ -1,3 +1,5 @@
"""Check that no dependencies allow prereleases unless we're releasing a prerelease."""
import sys
import tomllib
@@ -6,15 +8,14 @@ if __name__ == "__main__":
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# read toml file
with open(toml_file, "rb") as file:
toml_data = tomllib.load(file)
# see if we're releasing an rc
# See if we're releasing an rc or dev version
version = toml_data["project"]["version"]
releasing_rc = "rc" in version or "dev" in version
# if not, iterate through dependencies and make sure none allow prereleases
# If not, iterate through dependencies and make sure none allow prereleases
if not releasing_rc:
dependencies = toml_data["project"]["dependencies"]
for dep_version in dependencies:
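
The loop body is truncated in this view. One way to implement the check it describes — an assumption, not necessarily the script's exact logic — is to ask packaging whether a dependency's specifier set would admit prereleases:

from packaging.requirements import Requirement

def allows_prerelease(dep: str) -> bool:
    # True when the constraint itself references a prerelease version,
    # which makes pip consider prereleases for that dependency.
    return bool(Requirement(dep).specifier.prereleases)

print(allows_prerelease("langchain-core>=0.3.0rc1"))  # True
print(allows_prerelease("langchain-core>=0.3.0"))     # False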

View File

@@ -1,3 +1,5 @@
"""Get minimum versions of dependencies from a pyproject.toml file."""
import sys
from collections import defaultdict
from typing import Optional
@@ -5,7 +7,7 @@ from typing import Optional
if sys.version_info >= (3, 11):
import tomllib
else:
# for python 3.10 and below, which doesnt have stdlib tomllib
# For Python 3.10 and below, which doesn't have stdlib tomllib
import tomli as tomllib
import re
@@ -34,14 +36,13 @@ SKIP_IF_PULL_REQUEST = [
def get_pypi_versions(package_name: str) -> List[str]:
"""
Fetch all available versions for a package from PyPI.
"""Fetch all available versions for a package from PyPI.
Args:
package_name (str): Name of the package
package_name: Name of the package
Returns:
List[str]: List of all available versions
List of all available versions
Raises:
requests.exceptions.RequestException: If PyPI API request fails
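
The fetch described here maps onto PyPI's public JSON API; a minimal sketch (the script's error handling may be richer):

import requests

def get_pypi_versions(package_name: str) -> list[str]:
    # PyPI's JSON API lists every published version under "releases".
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    resp.raise_for_status()  # raises a requests.exceptions.RequestException subclass
    return list(resp.json()["releases"])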
@@ -54,24 +55,23 @@ def get_pypi_versions(package_name: str) -> List[str]:
def get_minimum_version(package_name: str, spec_string: str) -> Optional[str]:
"""
Find the minimum published version that satisfies the given constraints.
"""Find the minimum published version that satisfies the given constraints.
Args:
package_name (str): Name of the package
spec_string (str): Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
package_name: Name of the package
spec_string: Version specification string (e.g., ">=0.2.43,<0.4.0,!=0.3.0")
Returns:
Optional[str]: Minimum compatible version or None if no compatible version found
Minimum compatible version or None if no compatible version found
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
spec_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", spec_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1 (can be anywhere in constraint string)
for y in range(1, 10):
spec_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}", spec_string
)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
spec_string = re.sub(
rf"\^{x}\.(\d+)\.(\d+)", rf">={x}.\1.\2,<{x + 1}", spec_string
@@ -154,22 +154,25 @@ def get_min_version_from_toml(
def check_python_version(version_string, constraint_string):
"""
Check if the given Python version matches the given constraints.
"""Check if the given Python version matches the given constraints.
:param version_string: A string representing the Python version (e.g. "3.8.5").
:param constraint_string: A string representing the package's Python version constraints (e.g. ">=3.6, <4.0").
:return: True if the version matches the constraints, False otherwise.
Args:
version_string: A string representing the Python version (e.g. "3.8.5").
constraint_string: A string representing the package's Python version
constraints (e.g. ">=3.6, <4.0").
Returns:
True if the version matches the constraints
"""
# rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
# Rewrite occurrences of ^0.0.z to 0.0.z (can be anywhere in constraint string)
constraint_string = re.sub(r"\^0\.0\.(\d+)", r"0.0.\1", constraint_string)
# rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
# Rewrite occurrences of ^0.y.z to >=0.y.z,<0.y+1.0 (can be anywhere in constraint string)
for y in range(1, 10):
constraint_string = re.sub(
rf"\^0\.{y}\.(\d+)", rf">=0.{y}.\1,<0.{y + 1}.0", constraint_string
)
# rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
# Rewrite occurrences of ^x.y.z to >=x.y.z,<x+1.0.0 (can be anywhere in constraint string)
for x in range(1, 10):
constraint_string = re.sub(
rf"\^{x}\.0\.(\d+)", rf">={x}.0.\1,<{x + 1}.0.0", constraint_string

View File

@@ -1,112 +0,0 @@
#!/usr/bin/env python
"""Script to sync libraries from various repositories into the main langchain repository."""
import os
import shutil
from pathlib import Path
from typing import Any, Dict
import yaml
def load_packages_yaml() -> Dict[str, Any]:
"""Load and parse the packages.yml file."""
with open("langchain/libs/packages.yml", "r") as f:
return yaml.safe_load(f)
def get_target_dir(package_name: str) -> Path:
"""Get the target directory for a given package."""
package_name_short = package_name.replace("langchain-", "")
base_path = Path("langchain/libs")
if package_name_short == "experimental":
return base_path / "experimental"
if package_name_short == "community":
return base_path / "community"
return base_path / "partners" / package_name_short
def clean_target_directories(packages: list) -> None:
"""Remove old directories that will be replaced."""
for package in packages:
target_dir = get_target_dir(package["name"])
if target_dir.exists():
print(f"Removing {target_dir}")
shutil.rmtree(target_dir)
def move_libraries(packages: list) -> None:
"""Move libraries from their source locations to the target directories."""
for package in packages:
repo_name = package["repo"].split("/")[1]
source_path = package["path"]
target_dir = get_target_dir(package["name"])
# Handle root path case
if source_path == ".":
source_dir = repo_name
else:
source_dir = f"{repo_name}/{source_path}"
print(f"Moving {source_dir} to {target_dir}")
# Ensure target directory exists
os.makedirs(os.path.dirname(target_dir), exist_ok=True)
try:
# Move the directory
shutil.move(source_dir, target_dir)
except Exception as e:
print(f"Error moving {source_dir} to {target_dir}: {e}")
def main():
"""Main function to orchestrate the library sync process."""
try:
# Load packages configuration
package_yaml = load_packages_yaml()
# Clean target directories
clean_target_directories(
[
p
for p in package_yaml["packages"]
if (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Move libraries to their new locations
move_libraries(
[
p
for p in package_yaml["packages"]
if not p.get("disabled", False)
and (
p["repo"].startswith("langchain-ai/") or p.get("include_in_api_ref")
)
and p["repo"] != "langchain-ai/langchain"
and p["name"]
!= "langchain-ai21" # Skip AI21 due to dependency conflicts
]
)
# Delete ones without a pyproject.toml
for partner in Path("langchain/libs/partners").iterdir():
if partner.is_dir() and not (partner / "pyproject.toml").exists():
print(f"Removing {partner} as it does not have a pyproject.toml")
shutil.rmtree(partner)
print("Library sync completed successfully!")
except Exception as e:
print(f"Error during library sync: {e}")
raise
if __name__ == "__main__":
main()

View File

@@ -1,3 +1,11 @@
# Validates that a package's integration tests compile without syntax or import errors.
#
# (If an integration test fails to compile, it won't run.)
#
# Called as part of check_diffs.yml workflow
#
# Runs pytest with compile marker to check syntax/imports.
name: '🔗 Compile Integration Tests'
on:
@@ -33,6 +41,8 @@ jobs:
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: compile-integration-tests-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Integration Dependencies'
shell: bash

View File

@@ -1,94 +0,0 @@
name: '🚀 Integration Tests'
run-name: 'Test ${{ inputs.working-directory }} on Python ${{ inputs.python-version }}'
on:
workflow_dispatch:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
description: "Python version to use"
default: "3.11"
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
defaults:
run:
working-directory: ${{ inputs.working-directory }}
runs-on: ubuntu-latest
name: 'Python ${{ inputs.python-version }}'
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: '📦 Install Integration Dependencies'
shell: bash
run: uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
make integration_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -1,6 +1,11 @@
name: '🧹 Code Linting'
# Runs code quality checks using ruff, mypy, and other linting tools
# Checks both package code and test code for consistency
# Runs linting.
#
# Uses the package's Makefile to run the checks, specifically the
# `lint_package` and `lint_tests` targets.
#
# Called as part of check_diffs.yml workflow.
name: '🧹 Linting'
on:
workflow_call:
@@ -39,16 +44,10 @@ jobs:
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: lint-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Lint & Typing Dependencies'
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --group lint --group typing
@@ -58,20 +57,13 @@ jobs:
run: |
make lint_package
- name: '📦 Install Unit Test Dependencies'
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
# https://github.com/langchain-ai/langchain/pull/10249/files#diff-935185cd488d015f026dcd9e19616ff62863e8cde8c0bee70318d3ccbca98341
#
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
- name: '📦 Install Test Dependencies (non-partners)'
# (For directories NOT starting with libs/partners/)
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
uv sync --inexact --group test
- name: '📦 Install Unit + Integration Test Dependencies'
- name: '📦 Install Test Dependencies'
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |

View File

@@ -1,5 +1,11 @@
name: '🚀 Package Release'
run-name: 'Release ${{ inputs.working-directory }} ${{ inputs.release-version }}'
# Builds and publishes LangChain packages to PyPI.
#
# Manually triggered, though can be used as a reusable workflow (workflow_call).
#
# Handles version bumping, building, and publishing to PyPI with authentication.
name: "🚀 Package Release"
run-name: "Release ${{ inputs.working-directory }} ${{ inputs.release-version }}"
on:
workflow_call:
inputs:
@@ -13,11 +19,11 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
default: "libs/langchain"
release-version:
required: true
type: string
default: '0.1.0'
default: "0.1.0"
description: "New version of package being released"
dangerous-nonmaster-release:
required: false
@@ -30,6 +36,9 @@ env:
UV_FROZEN: "true"
UV_NO_SYNC: "true"
permissions:
contents: write # Required for creating GitHub releases
jobs:
# Build the distribution package and extract version info
# Runs in isolated environment with minimal permissions for security
@@ -37,6 +46,8 @@ jobs:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
@@ -52,8 +63,8 @@ jobs:
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# (Release stage has trusted publishing and GitHub repo contents write access.)
#
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
@@ -89,6 +100,8 @@ jobs:
needs:
- build
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
@@ -183,13 +196,36 @@ jobs:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
permissions: write-all
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false
pre-release-checks:
needs:
@@ -197,6 +233,8 @@ jobs:
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
permissions:
contents: read
timeout-minutes: 20
steps:
- uses: actions/checkout@v5
@@ -265,16 +303,19 @@ jobs:
run: |
VIRTUAL_ENV=.venv uv pip install dist/*.whl
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Check for prerelease versions
# Block release if any dependencies allow prerelease versions
# (unless this is itself a prerelease version)
working-directory: ${{ inputs.working-directory }}
run: |
uv run python $GITHUB_WORKSPACE/.github/scripts/check_prerelease_dependencies.py pyproject.toml
- name: Run unit tests
run: make tests
working-directory: ${{ inputs.working-directory }}
- name: Get minimum versions
# Find the minimum published versions that satisfies the given constraints
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
@@ -299,6 +340,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Run integration tests
# Uses the Makefile's `integration_tests` target for the specified package
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
@@ -339,17 +381,22 @@ jobs:
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
# Done when code changes are made to langchain-core
test-prior-published-packages-against-new-core:
# Installs the new, unreleased core alongside the previously published
# partner packages and runs integration tests
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
permissions:
contents: read
strategy:
matrix:
partner: [openai, anthropic]
fail-fast: false # Continue testing other partners if one fails
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
@@ -367,6 +414,7 @@ jobs:
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
# TODO: this seems to be resolved upstream, so we can probably remove this workaround
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
@@ -394,7 +442,7 @@ jobs:
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+([a-zA-Z]+[0-9]+)?$' \
| sort -Vr \
| head -n 1
)"
@@ -421,6 +469,7 @@ jobs:
make integration_tests
publish:
# Publishes the package to PyPI
needs:
- build
- release-notes
@@ -463,6 +512,7 @@ jobs:
attestations: false
mark-release:
# Marks the GitHub release with the new version tag
needs:
- build
- release-notes
@@ -472,7 +522,7 @@ jobs:
runs-on: ubuntu-latest
permissions:
# This permission is needed by `ncipollo/release-action` to
# create the GitHub release.
# create the GitHub release/tag
contents: write
defaults:

View File

@@ -1,6 +1,7 @@
name: '🧪 Unit Testing'
# Runs unit tests with both current and minimum supported dependency versions
# to ensure compatibility across the supported range
# to ensure compatibility across the supported range.
name: '🧪 Unit Testing'
on:
workflow_call:
@@ -39,6 +40,9 @@ jobs:
id: setup-python
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test --dev

View File

@@ -1,54 +0,0 @@
name: '📑 Documentation Import Testing'
on:
workflow_call:
inputs:
python-version:
required: true
type: string
description: "Python version to use"
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
runs-on: ubuntu-latest
timeout-minutes: 20
name: '🔍 Check Doc Imports (Python ${{ inputs.python-version }})'
steps:
- name: '📋 Checkout Code'
uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ inputs.python-version }} + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
- name: '📦 Install Test Dependencies'
shell: bash
run: uv sync --group test
- name: '📦 Install LangChain in Editable Mode'
run: |
VIRTUAL_ENV=.venv uv pip install langchain-experimental langchain-community -e libs/core libs/langchain
- name: '🔍 Validate Documentation Import Statements'
shell: bash
run: |
uv run python docs/scripts/check_imports.py
- name: '🧹 Verify Clean Working Directory'
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -1,3 +1,5 @@
# Facilitate unit testing against different Pydantic versions for a provided package.
name: '🐍 Pydantic Version Testing'
on:
@@ -40,6 +42,8 @@ jobs:
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ inputs.python-version }}
cache-suffix: test-pydantic-${{ inputs.working-directory }}
working-directory: ${{ inputs.working-directory }}
- name: '📦 Install Test Dependencies'
shell: bash

View File

@@ -1,106 +0,0 @@
name: '🧪 Test Release Package'
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
UV_FROZEN: "true"
jobs:
build:
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:
pkg-name: ${{ steps.check-version.outputs.pkg-name }}
version: ${{ steps.check-version.outputs.version }}
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
# We want to keep this build stage *separate* from the release stage,
# so that there's no sharing of permissions between them.
# The release stage has trusted publishing and GitHub repo contents write access,
# and we want to keep the scope of that access limited just to the release job.
# Otherwise, a malicious `build` step (e.g. via a compromised dependency)
# could get access to our GitHub or PyPI credentials.
#
# Per the trusted publishing GitHub Action:
# > It is strongly advised to separate jobs for building [...]
# > from the publish job.
# https://github.com/pypa/gh-action-pypi-publish#non-goals
- name: '📦 Build Project for Distribution'
run: uv build
working-directory: ${{ inputs.working-directory }}
- name: '⬆️ Upload Build Artifacts'
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: '🔍 Extract Version Information'
id: check-version
shell: python
working-directory: ${{ inputs.working-directory }}
run: |
import os
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pkg_name = data["project"]["name"]
version = data["project"]["version"]
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
f.write(f"pkg-name={pkg_name}\n")
f.write(f"version={version}\n")
publish:
needs:
- build
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:
# https://blog.pypi.org/posts/2023-04-20-introducing-trusted-publishers/
#
# Trusted publishing has to also be configured on PyPI for each package:
# https://docs.pypi.org/trusted-publishers/adding-a-publisher/
id-token: write
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
- name: Publish to test PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: ${{ inputs.working-directory }}/dist/
verbose: true
print-hash: true
repository-url: https://test.pypi.org/legacy/
# We overwrite any existing distributions with the same name and version.
# This is *only for CI use* and is *extremely dangerous* otherwise!
# https://github.com/pypa/gh-action-pypi-publish#tolerating-release-package-file-duplicates
skip-existing: true
# Temp workaround since attestations are on by default as of gh-action-pypi-publish v1.11.0
attestations: false

View File

@@ -1,16 +1,22 @@
name: '📚 API Docs'
run-name: 'Build & Deploy API Reference'
# Runs daily or can be triggered manually for immediate updates
# Build the API reference documentation for v0.3 branch.
#
# Manual trigger only.
#
# Built HTML pushed to langchain-ai/langchain-api-docs-html.
#
# Looks for langchain-ai org repos in packages.yml and checks them out.
# Calls prep_api_docs_build.py.
name: "📚 API Docs (v0.3)"
run-name: "Build & Deploy API Reference (v0.3)"
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *' # Daily at 1PM UTC
env:
PYTHON_VERSION: "3.11"
jobs:
# Only runs on main repository to prevent unnecessary builds on forks
build:
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
runs-on: ubuntu-latest
@@ -19,18 +25,22 @@ jobs:
steps:
- uses: actions/checkout@v5
with:
ref: v0.3
path: langchain
- uses: actions/checkout@v5
with:
repository: langchain-ai/langchain-api-docs-html
path: langchain-api-docs-html
token: ${{ secrets.TOKEN_GITHUB_API_DOCS_HTML }}
- name: '📋 Extract Repository List with yq'
- name: "📋 Extract Repository List with yq"
id: get-unsorted-repos
uses: mikefarah/yq@master
with:
cmd: |
# Extract repos from packages.yml that are in the langchain-ai org
# (excluding 'langchain' itself)
yq '
.packages[]
| select(
@@ -45,7 +55,7 @@ jobs:
| .repo
' langchain/libs/packages.yml
- name: '📋 Parse YAML & Checkout Repositories'
- name: "📋 Parse YAML & Checkout Repositories"
env:
REPOS_UNSORTED: ${{ steps.get-unsorted-repos.outputs.result }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -71,50 +81,72 @@ jobs:
git clone --depth 1 https://github.com/$repo.git $REPO_NAME
done
- name: '🐍 Setup Python ${{ env.PYTHON_VERSION }}'
- name: "🐍 Setup Python ${{ env.PYTHON_VERSION }}"
uses: actions/setup-python@v6
id: setup-python
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: '📦 Install Initial Python Dependencies'
- name: "📦 Install Initial Python Dependencies using uv"
working-directory: langchain
run: |
python -m pip install -U uv
python -m uv pip install --upgrade --no-cache-dir pip setuptools pyyaml
- name: '📦 Organize Library Directories'
- name: "📦 Organize Library Directories"
# Places cloned partner packages into libs/partners structure
run: python langchain/.github/scripts/prep_api_docs_build.py
- name: '🧹 Remove Old HTML Files'
- name: "🧹 Clear Prior Build"
run:
# Remove artifacts from prior docs build
rm -rf langchain-api-docs-html/api_reference_build/html
- name: '📦 Install Documentation Dependencies'
- name: "📦 Install Documentation Dependencies using uv"
working-directory: langchain
run: |
# Install all partner packages in editable mode with overrides
python -m uv pip install $(ls ./libs/partners | xargs -I {} echo "./libs/partners/{}") --overrides ./docs/vercel_overrides.txt
# Install core langchain and other main packages
python -m uv pip install libs/core libs/langchain libs/text-splitters libs/community libs/experimental libs/standard-tests
# Install Sphinx and related packages for building docs
python -m uv pip install -r docs/api_reference/requirements.txt
- name: '🔧 Configure Git Settings'
- name: "🔧 Configure Git Settings"
working-directory: langchain
run: |
git config --local user.email "actions@github.com"
git config --local user.name "Github Actions"
- name: '📚 Build API Documentation'
- name: "📚 Build API Documentation"
working-directory: langchain
run: |
# Generate the API reference RST files
python docs/api_reference/create_api_rst.py
# Build the HTML documentation using Sphinx
# -T: show full traceback on exception
# -E: don't use cached environment (force rebuild, ignore cached doctrees)
# -b html: build HTML docs (vs PDF, etc.)
# -d: path for the cached environment (parsed document trees / doctrees)
# - Separate from output dir for faster incremental builds
# -c: path to conf.py
# -j auto: parallel build using all available CPU cores
python -m sphinx -T -E -b html -d ../langchain-api-docs-html/_build/doctrees -c docs/api_reference docs/api_reference ../langchain-api-docs-html/api_reference_build/html -j auto
# Post-process the generated HTML
python docs/api_reference/scripts/custom_formatter.py ../langchain-api-docs-html/api_reference_build/html
# Default index page is blank so we copy in the actual home page.
cp ../langchain-api-docs-html/api_reference_build/html/{reference,index}.html
# Removes Sphinx's intermediate build artifacts after the build is complete.
rm -rf ../langchain-api-docs-html/_build/
# https://github.com/marketplace/actions/add-commit
# Commit and push changes to langchain-api-docs-html repo
- uses: EndBug/add-and-commit@v9
with:
cwd: langchain-api-docs-html
message: 'Update API docs build'
message: "Update API docs build from v0.3 branch"

View File

@@ -1,28 +0,0 @@
name: '🔗 Check Broken Links'
on:
workflow_dispatch:
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
jobs:
check-links:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- name: '🟢 Setup Node.js 18.x'
uses: actions/setup-node@v4
with:
node-version: 18.x
cache: "yarn"
cache-dependency-path: ./docs/yarn.lock
- name: '📦 Install Node Dependencies'
run: yarn install --immutable --mode=skip-build
working-directory: ./docs
- name: '🔍 Scan Documentation for Broken Links'
run: yarn check-broken-links
working-directory: ./docs

View File

@@ -1,6 +1,8 @@
name: '🔍 Check `core` Version Equality'
# Ensures version numbers in pyproject.toml and version.py stay in sync
# Prevents releases with mismatched version numbers
# Ensures version numbers in pyproject.toml and version.py stay in sync.
#
# (Prevents releases with mismatched version numbers)
name: '🔍 Check Version Equality'
on:
pull_request:

View File

@@ -1,4 +1,18 @@
name: '🔧 CI'
# Primary CI workflow.
#
# Only runs against packages that have changed files.
#
# Runs:
# - Linting (_lint.yml)
# - Unit Tests (_test.yml)
# - Pydantic compatibility tests (_test_pydantic.yml)
# - Integration test compilation checks (_compile_integration_test.yml)
# - Extended test suites that require additional dependencies
# - Codspeed benchmarks (if not labeled 'codspeed-ignore')
#
# Reports status to GitHub checks and PR status.
name: "🔧 CI"
on:
push:
@@ -11,8 +25,8 @@ on:
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
# a limited number of job runners to be active at the same time, so it's better to
# cancel pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -28,20 +42,20 @@ jobs:
# This job analyzes which files changed and creates a dynamic test matrix
# to only run tests/lints for the affected packages, improving CI efficiency
build:
name: 'Detect Changes & Set Matrix'
name: "Detect Changes & Set Matrix"
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'ci-ignore') }}
steps:
- name: '📋 Checkout Code'
- name: "📋 Checkout Code"
uses: actions/checkout@v5
- name: '🐍 Setup Python 3.11'
- name: "🐍 Setup Python 3.11"
uses: actions/setup-python@v6
with:
python-version: '3.11'
- name: '📂 Get Changed Files'
python-version: "3.11"
- name: "📂 Get Changed Files"
id: files
uses: Ana06/get-changed-files@v2.3.0
- name: '🔍 Analyze Changed Files & Generate Build Matrix'
- name: "🔍 Analyze Changed Files & Generate Build Matrix"
id: set-matrix
run: |
python -m pip install packaging requests
@@ -52,11 +66,11 @@ jobs:
extended-tests: ${{ steps.set-matrix.outputs.extended-tests }}
compile-integration-tests: ${{ steps.set-matrix.outputs.compile-integration-tests }}
dependencies: ${{ steps.set-matrix.outputs.dependencies }}
test-doc-imports: ${{ steps.set-matrix.outputs.test-doc-imports }}
test-pydantic: ${{ steps.set-matrix.outputs.test-pydantic }}
codspeed: ${{ steps.set-matrix.outputs.codspeed }}
# Run linting only on packages that have changed files
lint:
needs: [ build ]
needs: [build]
if: ${{ needs.build.outputs.lint != '[]' }}
strategy:
matrix:
@@ -70,7 +84,7 @@ jobs:
# Run unit tests only on packages that have changed files
test:
needs: [ build ]
needs: [build]
if: ${{ needs.build.outputs.test != '[]' }}
strategy:
matrix:
@@ -84,7 +98,7 @@ jobs:
# Test compatibility with different Pydantic versions for affected packages
test-pydantic:
needs: [ build ]
needs: [build]
if: ${{ needs.build.outputs.test-pydantic != '[]' }}
strategy:
matrix:
@@ -96,21 +110,10 @@ jobs:
pydantic-version: ${{ matrix.job-configs.pydantic-version }}
secrets: inherit
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.test-doc-imports != '[]' }}
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.test-doc-imports) }}
fail-fast: false
uses: ./.github/workflows/_test_doc_imports.yml
with:
python-version: ${{ matrix.job-configs.python-version }}
secrets: inherit
# Verify integration tests compile without actually running them (faster feedback)
compile-integration-tests:
needs: [ build ]
name: "Compile Integration Tests"
needs: [build]
if: ${{ needs.build.outputs.compile-integration-tests != '[]' }}
strategy:
matrix:
@@ -124,8 +127,8 @@ jobs:
# Run extended test suites that require additional dependencies
extended-tests:
name: 'Extended Tests'
needs: [ build ]
name: "Extended Tests"
needs: [build]
if: ${{ needs.build.outputs.extended-tests != '[]' }}
strategy:
matrix:
@@ -140,12 +143,14 @@ jobs:
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV'
- name: "🐍 Set up Python ${{ matrix.job-configs.python-version }} + UV"
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ matrix.job-configs.python-version }}
cache-suffix: extended-tests-${{ matrix.job-configs.working-directory }}
working-directory: ${{ matrix.job-configs.working-directory }}
- name: '📦 Install Dependencies & Run Extended Tests'
- name: "📦 Install Dependencies & Run Extended Tests"
shell: bash
run: |
echo "Running extended tests, installing dependencies with uv..."
@@ -154,7 +159,7 @@ jobs:
VIRTUAL_ENV=.venv uv pip install -r extended_testing_deps.txt
VIRTUAL_ENV=.venv make extended_tests
- name: '🧹 Verify Clean Working Directory'
- name: "🧹 Verify Clean Working Directory"
shell: bash
run: |
set -eu
@@ -166,10 +171,81 @@ jobs:
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
# Run codspeed benchmarks only on packages that have changed files
codspeed:
name: "⚡ CodSpeed Benchmarks"
needs: [build]
if: ${{ needs.build.outputs.codspeed != '[]' && !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
runs-on: ubuntu-latest
strategy:
matrix:
job-configs: ${{ fromJson(needs.build.outputs.codspeed) }}
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: "📦 Install UV Package Manager"
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: "📦 Install Test Dependencies"
run: uv sync --group test
working-directory: ${{ matrix.job-configs.working-directory }}
- name: "⚡ Run Benchmarks: ${{ matrix.job-configs.working-directory }}"
uses: CodSpeedHQ/action@v4
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.job-configs.working-directory }}
if [ "${{ matrix.job-configs.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.job-configs.working-directory == 'libs/core' && 'walltime' || 'instrumentation' }}
# Final status check - ensures all required jobs passed before allowing merge
ci_success:
name: '✅ CI Success'
needs: [build, lint, test, compile-integration-tests, extended-tests, test-doc-imports, test-pydantic]
name: "✅ CI Success"
needs:
[
build,
lint,
test,
compile-integration-tests,
extended-tests,
test-pydantic,
codspeed,
]
if: |
always()
runs-on: ubuntu-latest
@@ -178,7 +254,7 @@ jobs:
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: '🎉 All Checks Passed'
- name: "🎉 All Checks Passed"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON

View File

@@ -1,38 +0,0 @@
name: '📑 Integration Docs Lint'
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/setup-python@v6
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.3.0
with:
filter: |
*.ipynb
*.md
*.mdx
- name: '🔍 Check New Documentation Templates'
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -1,66 +0,0 @@
name: '⚡ CodSpeed'
on:
push:
branches:
- master
pull_request:
workflow_dispatch:
permissions:
contents: read
env:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: foo
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: foo
DEEPSEEK_API_KEY: foo
FIREWORKS_API_KEY: foo
jobs:
codspeed:
name: 'Benchmark'
runs-on: ubuntu-latest
if: ${{ !contains(github.event.pull_request.labels.*.name, 'codspeed-ignore') }}
strategy:
matrix:
include:
- working-directory: libs/core
mode: walltime
- working-directory: libs/partners/openai
- working-directory: libs/partners/anthropic
- working-directory: libs/partners/deepseek
- working-directory: libs/partners/fireworks
- working-directory: libs/partners/xai
- working-directory: libs/partners/mistralai
- working-directory: libs/partners/groq
fail-fast: false
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: '📦 Install UV Package Manager'
uses: astral-sh/setup-uv@v6
with:
python-version: "3.12"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
- name: '📦 Install Test Dependencies'
run: uv sync --group test
working-directory: ${{ matrix.working-directory }}
- name: '⚡ Run Benchmarks: ${{ matrix.working-directory }}'
uses: CodSpeedHQ/action@v3
with:
token: ${{ secrets.CODSPEED_TOKEN }}
run: |
cd ${{ matrix.working-directory }}
if [ "${{ matrix.working-directory }}" = "libs/core" ]; then
uv run --no-sync pytest ./tests/benchmarks --codspeed
else
uv run --no-sync pytest ./tests/ --codspeed
fi
mode: ${{ matrix.mode || 'instrumentation' }}

View File

@@ -1,10 +0,0 @@
import toml
pyproject_toml = toml.load("pyproject.toml")
# Extract the ignore words list (adjust the key as per your TOML structure)
ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")

View File

@@ -1,17 +1,23 @@
name: '⏰ Scheduled Integration Tests'
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.9, 3.11' }})"
# Routine integration tests against partner libraries with live API credentials.
#
# Uses `make integration_tests` for each library in the matrix.
#
# Runs daily. Can also be triggered manually for immediate updates.
name: "⏰ Integration Tests"
run-name: "Run Integration Tests - ${{ inputs.working-directory-force || 'all libs' }} (Python ${{ inputs.python-version-force || '3.10, 3.13' }})"
on:
workflow_dispatch: # Allows maintainers to trigger the workflow manually in GitHub UI
workflow_dispatch:
inputs:
working-directory-force:
type: string
description: "From which folder this pipeline executes - defaults to all in matrix - example value: libs/partners/anthropic"
python-version-force:
type: string
description: "Python version to use - defaults to 3.9 and 3.11 in matrix - example value: 3.9"
description: "Python version to use - defaults to 3.10 and 3.13 in matrix - example value: 3.11"
schedule:
- cron: '0 13 * * *' # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
- cron: "0 13 * * *" # Runs daily at 1PM UTC (9AM EDT/6AM PDT)
permissions:
contents: read
@@ -28,11 +34,11 @@ jobs:
compute-matrix:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
name: '📋 Compute Test Matrix'
name: "📋 Compute Test Matrix"
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}
steps:
- name: '🔢 Generate Python & Library Matrix'
- name: "🔢 Generate Python & Library Matrix"
id: set-matrix
env:
DEFAULT_LIBS: ${{ env.DEFAULT_LIBS }}
@@ -40,9 +46,9 @@ jobs:
PYTHON_VERSION_FORCE: ${{ github.event.inputs.python-version-force || '' }}
run: |
# echo "matrix=..." where matrix is a json formatted str with keys python-version and working-directory
# python-version should default to 3.9 and 3.11, but is overridden to [PYTHON_VERSION_FORCE] if set
# python-version should default to 3.10 and 3.13, but is overridden to [PYTHON_VERSION_FORCE] if set
# working-directory should default to DEFAULT_LIBS, but is overridden to [WORKING_DIRECTORY_FORCE] if set
python_version='["3.9", "3.11"]'
python_version='["3.10", "3.13"]'
working_directory="$DEFAULT_LIBS"
if [ -n "$PYTHON_VERSION_FORCE" ]; then
python_version="[\"$PYTHON_VERSION_FORCE\"]"
@@ -54,13 +60,13 @@ jobs:
echo $matrix
echo "matrix=$matrix" >> $GITHUB_OUTPUT
# Run integration tests against partner libraries with live API credentials
# Tests are run with both Poetry and UV depending on the library's setup
# Tests are run with Poetry or UV depending on the library's setup
build:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
name: '🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}'
name: "🐍 Python ${{ matrix.python-version }}: ${{ matrix.working-directory }}"
runs-on: ubuntu-latest
needs: [compute-matrix]
timeout-minutes: 20
timeout-minutes: 30
strategy:
fail-fast: false
matrix:
@@ -80,7 +86,7 @@ jobs:
repository: langchain-ai/langchain-aws
path: langchain-aws
- name: '📦 Organize External Libraries'
- name: "📦 Organize External Libraries"
run: |
rm -rf \
langchain/libs/partners/google-genai \
@@ -89,7 +95,7 @@ jobs:
mv langchain-google/libs/vertexai langchain/libs/partners/google-vertexai
mv langchain-aws/libs/aws langchain/libs/partners/aws
- name: '🐍 Set up Python ${{ matrix.python-version }} + Poetry'
- name: "🐍 Set up Python ${{ matrix.python-version }} + Poetry"
if: contains(env.POETRY_LIBS, matrix.working-directory)
uses: "./langchain/.github/actions/poetry_setup"
with:
@@ -98,45 +104,48 @@ jobs:
working-directory: langchain/${{ matrix.working-directory }}
cache-key: scheduled
- name: '🐍 Set up Python ${{ matrix.python-version }} + UV'
- name: "🐍 Set up Python ${{ matrix.python-version }} + UV"
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
uses: "./langchain/.github/actions/uv_setup"
with:
python-version: ${{ matrix.python-version }}
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
- name: "🔐 Authenticate to Google Cloud"
id: "auth"
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
credentials_json: "${{ secrets.GOOGLE_CREDENTIALS }}"
- name: '🔐 Configure AWS Credentials'
- name: "🔐 Configure AWS Credentials"
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies (Poetry)'
- name: "📦 Install Dependencies (Poetry)"
if: contains(env.POETRY_LIBS, matrix.working-directory)
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
cd langchain/${{ matrix.working-directory }}
poetry install --with=test_integration,test
- name: '📦 Install Dependencies (UV)'
- name: "📦 Install Dependencies (UV)"
if: "!contains(env.POETRY_LIBS, matrix.working-directory)"
run: |
echo "Running scheduled tests, installing dependencies with uv..."
cd langchain/${{ matrix.working-directory }}
uv sync --group test --group test_integration
- name: '🚀 Run Integration Tests'
- name: "🚀 Run Integration Tests"
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ANTHROPIC_FILES_API_IMAGE_ID: ${{ secrets.ANTHROPIC_FILES_API_IMAGE_ID }}
ANTHROPIC_FILES_API_PDF_ID: ${{ secrets.ANTHROPIC_FILES_API_PDF_ID }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
@@ -144,31 +153,42 @@ jobs:
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests
- name: '🧹 Clean up External Libraries'
# Clean up external libraries to avoid affecting git status check
- name: "🧹 Clean up External Libraries"
# Clean up external libraries to avoid affecting the following git status check
run: |
rm -rf \
langchain/libs/partners/google-genai \
langchain/libs/partners/google-vertexai \
langchain/libs/partners/aws
- name: '🧹 Verify Clean Working Directory'
- name: "🧹 Verify Clean Working Directory"
working-directory: langchain
run: |
set -eu


@@ -1,28 +0,0 @@
name: '👥 LangChain People'
run-name: 'Update People Data'
# This workflow updates the LangChain People data by fetching the latest information from the LangChain Git
on:
schedule:
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai' || github.event_name != 'schedule'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: '📋 Dump GitHub Context'
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v5
# Ref: https://github.com/actions/runner/issues/2033
- name: '🔧 Fix Git Safe Directory in Container'
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

.github/workflows/pr_labeler_file.yml

@@ -0,0 +1,28 @@
# Label PRs based on changed files.
#
# See `.github/pr-file-labeler.yml` to see rules for each label/directory.
name: "🏷️ Pull Request Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label Pull Request
uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
configuration-path: .github/pr-file-labeler.yml
sync-labels: false

.github/workflows/pr_labeler_title.yml

@@ -0,0 +1,28 @@
# Label PRs based on their titles.
#
# See `.github/pr-title-labeler.yml` to see rules for each label/title pattern.
name: "🏷️ PR Title Labeler"
on:
# Safe since we're not checking out or running the PR's code
# Never check out the PR's head in a pull_request_target job
pull_request_target:
types: [opened, synchronize, reopened, edited]
jobs:
pr-title-labeler:
name: 'label'
permissions:
contents: read
pull-requests: write
issues: write
runs-on: ubuntu-latest
steps:
- name: Label PR based on title
# Archived repo; latest commit (v0.1.0)
uses: grafana/pr-labeler-action@f19222d3ef883d2ca5f04420fdfe8148003763f0
with:
token: ${{ secrets.GITHUB_TOKEN }}
configuration-path: .github/pr-title-labeler.yml


@@ -1,52 +1,48 @@
# -----------------------------------------------------------------------------
# PR Title Lint Workflow
# PR title linting.
#
# Purpose:
# Enforces Conventional Commits format for pull request titles to maintain a
# clear, consistent, and machine-readable change history across our repository.
# This helps with automated changelog generation and semantic versioning.
# FORMAT (Conventional Commits 1.0.0):
#
# Enforced Commit Message Format (Conventional Commits 1.0.0):
# <type>[optional scope]: <description>
# [optional body]
# [optional footer(s)]
#
# Examples:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
#
# Allowed Types:
# feat — a new feature (MINOR bump)
# fix — a bug fix (PATCH bump)
# docs — documentation only changes
# style — formatting, missing semi-colons, etc.; no code change
# refactor — code change that neither fixes a bug nor adds a feature
# perf — code change that improves performance
# test — adding missing tests or correcting existing tests
# build — changes that affect the build system or external dependencies
# ci — continuous integration/configuration changes
# chore — other changes that don't modify src or test files
# revert — reverts a previous commit
# release — prepare a new release
# * feat — a new feature (MINOR)
# * fix — a bug fix (PATCH)
# * docs — documentation only changes
# * style — formatting, linting, etc.; no code change or typing refactors
# * refactor — code change that neither fixes a bug nor adds a feature
# * perf — code change that improves performance
# * test — adding tests or correcting existing tests
# * build — changes that affect the build system/external dependencies
# * ci — continuous integration/configuration changes
# * chore — other changes that don't modify source or test files
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, standard-tests, docs, anthropic, chroma, deepseek,
# exa, fireworks, groq, huggingface, mistralai, nomic, ollama, openai,
# perplexity, prompty, qdrant, xai
# core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra
#
# Rules & Tips for New Committers:
# 1. Subject (type) must start with a lowercase letter and, if possible, be
# followed by a scope wrapped in parenthesis `(scope)`
# 2. Breaking changes:
# Append "!" after type/scope (e.g., feat!: drop Node 12 support)
# Or include a footer "BREAKING CHANGE: <details>"
# 3. Example PR titles:
# feat(core): add multitenant support
# fix(cli): resolve flag parsing error
# docs: update API usage examples
# docs(openai): update API usage examples
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
# 3. When releasing (updating the pyproject.toml and uv.lock), the commit message
# should be: `release(scope): x.y.z` (e.g., `release(core): 1.2.0` with no
# body, footer, or preceding/following text).
#
# Resources:
# • Conventional Commits spec: https://www.conventionalcommits.org/en/v1.0.0/
# -----------------------------------------------------------------------------
# Enforces Conventional Commits format for pull request titles to maintain a clear and
# machine-readable change history.
name: '🏷️ PR Title Lint'
name: "🏷️ PR Title Lint"
permissions:
pull-requests: read
@@ -56,12 +52,12 @@ on:
types: [opened, edited, synchronize]
jobs:
# Validates that PR title follows Conventional Commits specification
# Validates that PR title follows Conventional Commits 1.0.0 specification
lint-pr-title:
name: 'Validate PR Title Format'
name: "validate format"
runs-on: ubuntu-latest
steps:
- name: '✅ Validate Conventional Commits Format'
- name: "✅ Validate Conventional Commits Format"
uses: amannn/action-semantic-pull-request@v6
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -84,6 +80,7 @@ jobs:
cli
langchain
langchain_v1
langchain_legacy
standard-tests
text-splitters
docs


@@ -1,75 +0,0 @@
name: '📓 Validate Documentation Notebooks'
run-name: 'Test notebooks in ${{ inputs.working-directory }}'
on:
workflow_dispatch:
inputs:
python_version:
description: 'Python version'
required: false
default: '3.11'
working-directory:
description: 'Working directory or subset (e.g., docs/docs/tutorials/llm_chain.ipynb or docs/docs/how_to)'
required: false
default: 'all'
schedule:
- cron: '0 13 * * *'
permissions:
contents: read
env:
UV_FROZEN: "true"
jobs:
build:
runs-on: ubuntu-latest
if: github.repository == 'langchain-ai/langchain' || github.event_name != 'schedule'
name: '📑 Test Documentation Notebooks'
steps:
- uses: actions/checkout@v5
- name: '🐍 Set up Python + UV'
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ github.event.inputs.python_version || '3.11' }}
- name: '🔐 Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v3
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: '🔐 Configure AWS Credentials'
uses: aws-actions/configure-aws-credentials@v5
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: '📦 Install Dependencies'
run: |
uv sync --group dev --group test
- name: '📦 Pre-download Test Files'
run: |
uv run python docs/scripts/cache_data.py
curl -s https://raw.githubusercontent.com/lerocha/chinook-database/master/ChinookDatabase/DataSources/Chinook_Sqlite.sql | sqlite3 docs/docs/how_to/Chinook.db
cp docs/docs/how_to/Chinook.db docs/docs/tutorials/Chinook.db
- name: '🔧 Prepare Notebooks for CI'
run: |
uv run python docs/scripts/prepare_notebooks_for_ci.py --comment-install-cells --working-directory ${{ github.event.inputs.working-directory || 'all' }}
- name: '🚀 Execute Notebooks'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
TAVILY_API_KEY: ${{ secrets.TAVILY_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
WORKING_DIRECTORY: ${{ github.event.inputs.working-directory || 'all' }}
run: |
./docs/scripts/execute_notebooks.sh $WORKING_DIRECTORY

.github/workflows/v1_changes.md

@@ -0,0 +1,8 @@
With the deprecation of v0 docs, the following files will need to be migrated/supported
in the new docs repo:
- run_notebooks.yml: New repo should run Integration tests on code snippets?
- people.yml: Need to fix and somehow display on the new docs site
- Subsequently, `.github/actions/people/`
- _test_doc_imports.yml
- check-broken-links.yml

.gitignore

@@ -77,10 +77,6 @@ instance/
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
docs/docs/_build/
# PyBuilder
target/
@@ -161,25 +157,6 @@ data_map*
*replit*
node_modules
docs/.yarn/
docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/*api_reference.rst
docs/api_reference/*.md
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/
!docs/api_reference/templates/
!docs/api_reference/themes/
!docs/api_reference/_extensions/
!docs/api_reference/scripts/
docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
prof
virtualenv/


@@ -2,110 +2,98 @@ repos:
- repo: local
hooks:
- id: core
name: format core
name: format and lint core
language: system
entry: make -C libs/core format
entry: make -C libs/core format lint
files: ^libs/core/
pass_filenames: false
- id: langchain
name: format langchain
name: format and lint langchain
language: system
entry: make -C libs/langchain format
entry: make -C libs/langchain format lint
files: ^libs/langchain/
pass_filenames: false
- id: standard-tests
name: format standard-tests
name: format and lint standard-tests
language: system
entry: make -C libs/standard-tests format
entry: make -C libs/standard-tests format lint
files: ^libs/standard-tests/
pass_filenames: false
- id: text-splitters
name: format text-splitters
name: format and lint text-splitters
language: system
entry: make -C libs/text-splitters format
entry: make -C libs/text-splitters format lint
files: ^libs/text-splitters/
pass_filenames: false
- id: anthropic
name: format partners/anthropic
name: format and lint partners/anthropic
language: system
entry: make -C libs/partners/anthropic format
entry: make -C libs/partners/anthropic format lint
files: ^libs/partners/anthropic/
pass_filenames: false
- id: chroma
name: format partners/chroma
name: format and lint partners/chroma
language: system
entry: make -C libs/partners/chroma format
entry: make -C libs/partners/chroma format lint
files: ^libs/partners/chroma/
pass_filenames: false
- id: couchbase
name: format partners/couchbase
language: system
entry: make -C libs/partners/couchbase format
files: ^libs/partners/couchbase/
pass_filenames: false
- id: exa
name: format partners/exa
name: format and lint partners/exa
language: system
entry: make -C libs/partners/exa format
entry: make -C libs/partners/exa format lint
files: ^libs/partners/exa/
pass_filenames: false
- id: fireworks
name: format partners/fireworks
name: format and lint partners/fireworks
language: system
entry: make -C libs/partners/fireworks format
entry: make -C libs/partners/fireworks format lint
files: ^libs/partners/fireworks/
pass_filenames: false
- id: groq
name: format partners/groq
name: format and lint partners/groq
language: system
entry: make -C libs/partners/groq format
entry: make -C libs/partners/groq format lint
files: ^libs/partners/groq/
pass_filenames: false
- id: huggingface
name: format partners/huggingface
name: format and lint partners/huggingface
language: system
entry: make -C libs/partners/huggingface format
entry: make -C libs/partners/huggingface format lint
files: ^libs/partners/huggingface/
pass_filenames: false
- id: mistralai
name: format partners/mistralai
name: format and lint partners/mistralai
language: system
entry: make -C libs/partners/mistralai format
entry: make -C libs/partners/mistralai format lint
files: ^libs/partners/mistralai/
pass_filenames: false
- id: nomic
name: format partners/nomic
name: format and lint partners/nomic
language: system
entry: make -C libs/partners/nomic format
entry: make -C libs/partners/nomic format lint
files: ^libs/partners/nomic/
pass_filenames: false
- id: ollama
name: format partners/ollama
name: format and lint partners/ollama
language: system
entry: make -C libs/partners/ollama format
entry: make -C libs/partners/ollama format lint
files: ^libs/partners/ollama/
pass_filenames: false
- id: openai
name: format partners/openai
name: format and lint partners/openai
language: system
entry: make -C libs/partners/openai format
entry: make -C libs/partners/openai format lint
files: ^libs/partners/openai/
pass_filenames: false
- id: prompty
name: format partners/prompty
name: format and lint partners/prompty
language: system
entry: make -C libs/partners/prompty format
entry: make -C libs/partners/prompty format lint
files: ^libs/partners/prompty/
pass_filenames: false
- id: qdrant
name: format partners/qdrant
name: format and lint partners/qdrant
language: system
entry: make -C libs/partners/qdrant format
entry: make -C libs/partners/qdrant format lint
files: ^libs/partners/qdrant/
pass_filenames: false
- id: root
name: format docs, cookbook
language: system
entry: make format
files: ^(docs|cookbook)/
pass_filenames: false


@@ -1,25 +0,0 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
version: 2
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
tools:
python: "3.11"
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt

.vscode/settings.json

@@ -1,7 +1,6 @@
{
"python.analysis.include": [
"libs/**",
"docs/**",
"cookbook/**"
],
"python.analysis.exclude": [
@@ -10,8 +9,6 @@
"**/.pytest_cache",
"**/.*",
"_dist/**",
"docs/_build/**",
"docs/api_reference/_build/**"
],
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic",
@@ -41,8 +38,6 @@
"**/.mypy_cache": true,
"**/.ruff_cache": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": false
},
@@ -50,8 +45,6 @@
"**/__pycache__": true,
"**/*.pyc": true,
"_dist/**": true,
"docs/_build/**": true,
"docs/api_reference/_build/**": true,
"**/node_modules": true,
"**/.git": true,
"uv.lock": true,
@@ -78,5 +71,10 @@
"editor.insertSpaces": true
},
"python.terminal.activateEnvironment": false,
"python.defaultInterpreterPath": "./.venv/bin/python"
"python.defaultInterpreterPath": "./.venv/bin/python",
"github.copilot.chat.commitMessageGeneration.instructions": [
{
"file": ".github/workflows/pr_lint.yml"
}
]
}

AGENTS.md

@@ -0,0 +1,324 @@
# Global Development Guidelines for LangChain Projects
## Core Development Principles
### 1. Maintain Stable Public Interfaces ⚠️ CRITICAL
**Always attempt to preserve function signatures, argument positions, and names for exported/public methods.**
**Bad - Breaking Change:**
```python
def get_user(id, verbose=False):  # Changed from `user_id`
    pass
```
**Good - Stable Interface:**
```python
def get_user(user_id: str, verbose: bool = False) -> User:
    """Retrieve user by ID with optional verbose output."""
    pass
```
**Before making ANY changes to public APIs:**
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
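As a concrete illustration, a minimal sketch of extending a public signature without breaking existing callers (the `include_inactive` parameter is hypothetical, reusing the `get_user` example above):
```python
# Hypothetical extension of the `get_user` example above: the new parameter is
# keyword-only, so existing positional callers keep working unchanged.
def get_user(user_id: str, verbose: bool = False, *, include_inactive: bool = False) -> User:
    """Retrieve user by ID, optionally including inactive accounts."""
    ...
```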
### 2. Code Quality Standards
**All Python code MUST include type hints and return types.**
**Bad:**
```python
def p(u, d):
    return [x for x in u if x not in d]
```
**Good:**
```python
def filter_unknown_users(users: list[str], known_users: set[str]) -> list[str]:
    """Filter out users that are not in the known users set.

    Args:
        users: List of user identifiers to filter.
        known_users: Set of known/valid user identifiers.

    Returns:
        List of users that are not in the known_users set.
    """
    return [user for user in users if user not in known_users]
```
**Style Requirements:**
- Use descriptive, **self-explanatory variable names**. Avoid overly short or cryptic identifiers.
- Attempt to break up complex functions (>20 lines) into smaller, focused functions where it makes sense
- Avoid unnecessary abstraction or premature optimization
- Follow existing patterns in the codebase you're modifying
### 3. Testing Requirements
**Every new feature or bugfix MUST be covered by unit tests.**
**Test Organization:**
- Unit tests: `tests/unit_tests/` (no network calls allowed)
- Integration tests: `tests/integration_tests/` (network calls permitted)
- Use `pytest` as the testing framework
**Test Quality Checklist:**
- [ ] Tests fail when your new logic is broken
- [ ] Happy path is covered
- [ ] Edge cases and error conditions are tested
- [ ] Use fixtures/mocks for external dependencies
- [ ] Tests are deterministic (no flaky tests)
Checklist questions:
- [ ] Does the test suite fail if your new logic is broken?
- [ ] Are all expected behaviors exercised (happy path, invalid input, etc)?
- [ ] Do tests use fixtures or mocks where needed?
```python
def test_filter_unknown_users():
    """Test filtering unknown users from a list."""
    users = ["alice", "bob", "charlie"]
    known_users = {"alice", "bob"}

    result = filter_unknown_users(users, known_users)

    assert result == ["charlie"]
    assert len(result) == 1
```
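The example above covers the happy path; an edge-case test in the same spirit might look like this (a minimal sketch, derived from the `filter_unknown_users` behavior shown earlier):
```python
def test_filter_unknown_users_edge_cases():
    """Edge cases: empty user list and empty known set."""
    assert filter_unknown_users([], {"alice"}) == []
    assert filter_unknown_users(["alice"], set()) == ["alice"]
```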
### 4. Security and Risk Assessment
**Security Checklist:**
- No `eval()`, `exec()`, or `pickle` on user-controlled input
- Proper exception handling (no bare `except:`) and use a `msg` variable for error messages
- Remove unreachable/commented code before committing
- Avoid race conditions and resource leaks; ensure proper cleanup of file handles, sockets, threads, and connections
**Bad:**
```python
def load_config(path):
    with open(path) as f:
        return eval(f.read())  # ⚠️ Never eval config
```
**Good:**
```python
import json


def load_config(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```
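The `msg`-variable convention called out in the checklist might look like this minimal sketch (hypothetical function):
```python
def parse_port(value: str) -> int:
    """Parse and validate a TCP port number."""
    port = int(value)
    if not 0 < port < 65536:
        # Build the message in a variable first, then raise it
        msg = f"Port must be in the range 1-65535, got {port}"
        raise ValueError(msg)
    return port
```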
### 5. Documentation Standards
**Use Google-style docstrings with Args section for all public functions.**
**Insufficient Documentation:**
```python
def send_email(to, msg):
    """Send an email to a recipient."""
```
**Complete Documentation:**
```python
def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
    """Send an email to a recipient with specified priority.

    Args:
        to: The email address of the recipient.
        msg: The message body to send.
        priority: Email priority level (``'low'``, ``'normal'``, ``'high'``).

    Returns:
        True if email was sent successfully, False otherwise.

    Raises:
        InvalidEmailError: If the email address format is invalid.
        SMTPConnectionError: If unable to connect to email server.
    """
```
**Documentation Guidelines:**
- Types go in function signatures, NOT in docstrings
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.
### 6. Architectural Improvements
**When you encounter code that could be improved, suggest better designs:**
**Poor Design:**
```python
def process_data(data, db_conn, email_client, logger):
    # Function doing too many things
    validated = validate_data(data)
    result = db_conn.save(validated)
    email_client.send_notification(result)
    logger.log(f"Processed {len(data)} items")
    return result
```
**Better Design:**
```python
from dataclasses import dataclass, field


@dataclass
class ProcessingResult:
    """Result of data processing operation."""

    items_processed: int
    success: bool
    errors: list[str] = field(default_factory=list)


class DataProcessor:
    """Handles data validation, storage, and notification."""

    def __init__(self, db_conn: Database, email_client: EmailClient):
        self.db = db_conn
        self.email = email_client

    def process(self, data: list[dict]) -> ProcessingResult:
        """Process and store data with notifications."""
        validated = self._validate_data(data)
        result = self.db.save(validated)
        self._notify_completion(result)
        return result
```
**Design Improvement Areas:**
If there's a **cleaner**, **more scalable**, or **simpler** design, highlight it and suggest improvements that would:
- Reduce code duplication through shared utilities
- Improve separation of concerns (single responsibility)
- Make unit testing easier through dependency injection
- Add clarity without adding complexity
- Prefer dataclasses for structured data
## Development Tools & Commands
### Package Management
```bash
# Add package
uv add package-name
# Sync project dependencies
uv sync
uv lock
```
### Testing
```bash
# Run unit tests (no network)
make test
# Don't run integration tests, as API keys must be set
# Run specific test file
uv run --group test pytest tests/unit_tests/test_specific.py
```
### Code Quality
```bash
# Lint code
make lint
# Format code
make format
# Type checking
uv run --group lint mypy .
```
### Dependency Management Patterns
**Local Development Dependencies:**
```toml
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain-tests = { path = "../standard-tests", editable = true }
```
**For tools, use the `@tool` decorator from `langchain_core.tools`:**
```python
from langchain_core.tools import tool


@tool
def search_database(query: str) -> str:
    """Search the database for relevant information.

    Args:
        query: The search query string.
    """
    # Implementation here
    return results
```
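Tools created with `@tool` are runnable on their own, which is handy for quick checks; a usage sketch (assuming the `search_database` tool above, with `results` defined in its body):
```python
# `search_database` is the tool defined above; `@tool` derives the name,
# description, and argument schema from the function and its docstring.
print(search_database.name)  # "search_database"
print(search_database.invoke({"query": "recent orders"}))
```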
## Commit Standards
**Use Conventional Commits format for PR titles:**
- `feat(core): add multi-tenant support`
- `fix(cli): resolve flag parsing error`
- `docs: update API usage examples`
- `docs(openai): update API usage examples`
## Framework-Specific Guidelines
- Follow the existing patterns in `langchain-core` for base abstractions
- Use `langchain_core.callbacks` for execution tracking
- Implement proper streaming support where applicable
- Avoid deprecated components like legacy `LLMChain`
### Partner Integrations
- Follow the established patterns in existing partner libraries
- Implement standard interfaces (`BaseChatModel`, `BaseEmbeddings`, etc.)
- Include comprehensive integration tests
- Document API key requirements and authentication
---
## Quick Reference Checklist
Before submitting code changes:
- [ ] **Breaking Changes**: Verified no public API changes
- [ ] **Type Hints**: All functions have complete type annotations
- [ ] **Tests**: New functionality is fully tested
- [ ] **Security**: No dangerous patterns (eval, silent failures, etc.)
- [ ] **Documentation**: Google-style docstrings for public functions
- [ ] **Code Quality**: `make lint` and `make format` pass
- [ ] **Architecture**: Suggested improvements where applicable
- [ ] **Commit Message**: Follows Conventional Commits format


@@ -26,7 +26,7 @@ def get_user(user_id: str, verbose: bool = False) -> User:
- Check if the function/class is exported in `__init__.py`
- Look for existing usage patterns in tests and examples
- Use keyword-only arguments for new parameters: `*, new_param: str = "default"`
- Mark experimental features clearly with docstring warnings (using reStructuredText, like `.. warning::`)
- Mark experimental features clearly with docstring warnings (using MkDocs Material admonitions, like `!!! warning`)
🧠 *Ask yourself:* "Would this change break someone's code if they used it last week?"
@@ -166,7 +166,6 @@ def send_email(to: str, msg: str, *, priority: str = "normal") -> bool:
- Focus on "why" rather than "what" in descriptions
- Document all parameters, return values, and exceptions
- Keep descriptions concise but clear
- Use reStructuredText for docstrings to enable rich formatting
📌 *Tip:* Keep descriptions concise but clear. Only document return values if non-obvious.

Makefile

@@ -1,119 +0,0 @@
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
######################
# DOCUMENTATION
######################
## docs_build: Build the documentation.
docs_build: docs_clean
@echo "📚 Building LangChain documentation..."
cd docs && make build
@echo "✅ Documentation build complete!"
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@echo "🧹 Cleaning documentation artifacts..."
cd docs && make clean
@echo "✅ LangChain documentation cleaned"
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
@echo "🔗 Checking documentation links..."
@if [ -d _dist/docs ]; then \
uv run --group test linkchecker _dist/docs/ --ignore-url node_modules; \
else \
echo "⚠️ Documentation not built. Run 'make docs_build' first."; \
exit 1; \
fi
@echo "✅ Link check complete"
## api_docs_build: Build the API Reference documentation.
api_docs_build: clean
@echo "📖 Building API Reference documentation..."
uv pip install -e libs/cli
uv run --group docs python docs/api_reference/create_api_rst.py
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "✅ API documentation built"
@echo "🌐 Opening documentation in browser..."
open docs/api_reference/_build/html/reference.html
API_PKG ?= text-splitters
api_docs_quick_preview: clean
@echo "⚡ Building quick API preview for $(API_PKG)..."
uv run --group docs python docs/api_reference/create_api_rst.py $(API_PKG)
cd docs/api_reference && uv run --group docs make html
uv run --group docs python docs/api_reference/scripts/custom_formatter.py docs/api_reference/_build/html/
@echo "🌐 Opening preview in browser..."
open docs/api_reference/_build/html/reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
@echo "🧹 Cleaning API documentation artifacts..."
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
rm -f docs/api_reference/index.md
@echo "✅ API documentation cleaned"
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
@echo "🔗 Checking API documentation links..."
@if [ -f docs/api_reference/_build/html/index.html ]; then \
uv run --group test linkchecker docs/api_reference/_build/html/index.html; \
else \
echo "⚠️ API documentation not built. Run 'make api_docs_build' first."; \
exit 1; \
fi
@echo "✅ API link check complete"
## spell_check: Run codespell on the project.
spell_check:
@echo "✏️ Checking spelling across project..."
uv run --group codespell codespell --toml pyproject.toml
@echo "✅ Spell check complete"
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
@echo "✏️ Fixing spelling errors across project..."
uv run --group codespell codespell --toml pyproject.toml -w
@echo "✅ Spelling errors fixed"
######################
# LINTING AND FORMATTING
######################
## lint: Run linting on the project.
lint lint_package lint_tests:
@echo "🔍 Running code linting and checks..."
uv run --group lint ruff check docs cookbook
uv run --group lint ruff format docs cookbook --diff
git --no-pager grep 'from langchain import' docs cookbook | grep -vE 'from langchain import (hub)' && echo "Error: no importing langchain from root in docs, except for hub" && exit 1 || exit 0
git --no-pager grep 'api.python.langchain.com' -- docs/docs ':!docs/docs/additional_resources/arxiv_references.mdx' ':!docs/docs/integrations/document_loaders/sitemap.ipynb' || exit 0 && \
echo "Error: you should link python.langchain.com/api_reference, not api.python.langchain.com in the docs" && \
exit 1
@echo "✅ Linting complete"
## format: Format the project files.
format format_diff:
@echo "🎨 Formatting project files..."
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --fix docs cookbook
@echo "✅ Formatting complete"
update-package-downloads:
@echo "📊 Updating package download statistics..."
uv run python docs/scripts/packages_yml_get_downloads.py
@echo "✅ Package downloads updated"

README.md

@@ -1,83 +1,77 @@
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
</p>
<div>
<br>
</div>
<p align="center">
The platform for reliable agents.
</p>
[![PyPI - License](https://img.shields.io/pypi/l/langchain-core?style=flat-square)](https://opensource.org/licenses/MIT)
[![PyPI - Downloads](https://img.shields.io/pepy/dt/langchain)](https://pypistats.org/packages/langchain-core)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
<p align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank">
<img src="https://img.shields.io/pypi/l/langchain-core?style=flat-square" alt="PyPI - License">
</a>
<a href="https://pypistats.org/packages/langchain-core" target="_blank">
<img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
</a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square" alt="Open in Dev Containers">
</a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
</a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
</a>
<a href="https://twitter.com/langchainai" target="_blank">
<img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
</a>
</p>
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install -U langchain
```
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
---
**Documentation**: To learn more about LangChain, check out [the docs](https://docs.langchain.com/).
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building controllable agent workflows.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external/internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your applications needs. As the industry
frontier evolves, adapt quickly — LangChains abstractions keep you moving without
losing momentum.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly — LangChain's abstractions keep you moving without losing momentum.
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
- [Conceptual Guides](https://docs.langchain.com/oss/python/langchain/overview): Explanations of key
concepts behind the LangChain framework.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
- [Tutorials](https://docs.langchain.com/oss/python/learn): Simple walkthroughs with
guided examples on getting started with LangChain.
- [API Reference](https://reference.langchain.com/python/): Detailed reference on
navigating base packages and integrations for LangChain.
- [LangChain Forum](https://forum.langchain.com/): Connect with the community and share all of your technical questions, ideas, and feedback.
- [Chat LangChain](https://chat.langchain.com/): Ask questions & chat with our documentation.


@@ -22,9 +22,7 @@ Example scenarios with mitigation strategies:
* A user may ask an agent with write access to an external API to write malicious data to the API, or delete data from that API. To mitigate, give the agent read-only API keys, or limit it to only use endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access and consider issuing READ-ONLY credentials.
If you're building applications that access external resources like file systems, APIs
or databases, consider speaking with your company's security team to determine how to best
design and secure your applications.
If you're building applications that access external resources like file systems, APIs or databases, consider speaking with your company's security team to determine how to best design and secure your applications.
## Reporting OSS Vulnerabilities
@@ -37,10 +35,8 @@ open source projects at [huntr](https://huntr.com/bounties/disclose/?target=http
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) The [Best Practices](#best-practices) above to
understand what we consider to be a security vulnerability vs. developer
responsibility.
2) The [langchain-ai/langchain](https://docs.langchain.com/oss/python/contributing/code#repository-structure) monorepo structure.
3) The [Best Practices](#best-practices) above to understand what we consider to be a security vulnerability vs. developer responsibility.
### In-Scope Targets


@@ -47,10 +47,12 @@
"source": [
"### Prerequisites\n",
"\n",
"Please install the Oracle Database [python-oracledb driver](https://pypi.org/project/oracledb/) to use LangChain with Oracle AI Vector Search:\n",
"You'll need to install `langchain-oracledb` with `python -m pip install -U langchain-oracledb` to use this integration.\n",
"\n",
"The `python-oracledb` driver is installed automatically as a dependency of langchain-oracledb.\n",
"\n",
"```\n",
"$ python -m pip install --upgrade oracledb\n",
"$ python -m pip install -U langchain-oracledb\n",
"```"
]
},
@@ -217,7 +219,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
@@ -296,7 +298,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_oracledb.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
@@ -354,7 +356,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
@@ -395,7 +397,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_oracledb.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
@@ -452,7 +454,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
@@ -498,14 +500,14 @@
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
"from langchain_oracledb.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_oracledb.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_oracledb.utilities.oracleai import OracleSummary\n",
"from langchain_oracledb.vectorstores import oraclevs\n",
"from langchain_oracledb.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
@@ -677,19 +679,19 @@
"outputs": [],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"db_filter = {\"document_id\": \"1\"}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search(query, 1, filter=db_filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=db_filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
@@ -697,7 +699,7 @@
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=db_filter\n",
" )\n",
")"
]

docs/.gitignore

@@ -1,3 +0,0 @@
/.quarto/
src/supabase.d.ts
build


@@ -1 +0,0 @@
nodeLinker: node-modules


@@ -1,116 +0,0 @@
# We build the docs in these stages:
# 1. Install vercel and python dependencies
# 2. Copy files from "source dir" to "intermediate dir"
# 3. Generate files like model feat table, etc in "intermediate dir"
# 4. Copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 5. Build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
OUTPUT_NEW_DIR = build/output-new
OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PORT ?= 3001
clean:
rm -rf build
clean-cache:
rm -rf build .venv/deps_installed
install-vercel-deps:
yum -y -q update
yum -y -q install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
@echo "📦 Installing Python dependencies..."
@if [ ! -d .venv ]; then python3 -m venv .venv; fi
@if [ ! -f .venv/deps_installed ]; then \
$(PYTHON) -m pip install -q --upgrade pip --disable-pip-version-check; \
$(PYTHON) -m pip install -q --upgrade uv; \
$(PYTHON) -m uv pip install -q --pre -r vercel_requirements.txt; \
$(PYTHON) -m uv pip install -q --pre $$($(PYTHON) scripts/partner_deps_list.py) --overrides vercel_overrides.txt; \
touch .venv/deps_installed; \
fi
@echo "✅ Dependencies installed"
generate-files:
@echo "📄 Generating documentation files..."
mkdir -p $(INTERMEDIATE_DIR)
cp -rp $(SOURCE_DIR)/* $(INTERMEDIATE_DIR)
@if [ ! -f build/langserve_readme_cache.md ] || [ $$(find build/langserve_readme_cache.md -mtime +1 -print) ]; then \
echo "🌐 Downloading LangServe README..."; \
curl -s https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md | sed 's/<=/\&lt;=/g' > build/langserve_readme_cache.md; \
fi
cp build/langserve_readme_cache.md $(INTERMEDIATE_DIR)/langserve.md
cp ../SECURITY.md $(INTERMEDIATE_DIR)/security.md
@echo "🔧 Generating feature tables and processing links..."
$(PYTHON) scripts/tool_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/kv_store_feat_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/partner_pkg_table.py $(INTERMEDIATE_DIR) & \
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/ & \
wait
@echo "✅ Files generated"
copy-infra:
@echo "📂 Copying infrastructure files..."
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
cp babel.config.js $(OUTPUT_NEW_DIR)
cp -r data $(OUTPUT_NEW_DIR)
cp docusaurus.config.js $(OUTPUT_NEW_DIR)
cp package.json $(OUTPUT_NEW_DIR)
cp sidebars.js $(OUTPUT_NEW_DIR)
cp -r static $(OUTPUT_NEW_DIR)
cp -r ../libs/cli/langchain_cli/integration_template $(OUTPUT_NEW_DIR)/src/theme
cp yarn.lock $(OUTPUT_NEW_DIR)
@echo "✅ Infrastructure files copied"
render:
@echo "📓 Converting notebooks (this may take a while)..."
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Notebooks converted"
md-sync:
@echo "📝 Syncing markdown files..."
rsync -avmq --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --include="*/_category_.yml" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Markdown files synced"
append-related:
@echo "🔗 Appending related links..."
$(PYTHON) scripts/append_related_links.py $(OUTPUT_NEW_DOCS_DIR)
@echo "✅ Related links appended"
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
update-md: generate-files md-sync
build: install-py-deps generate-files copy-infra render md-sync append-related
@echo ""
@echo "🎉 Documentation build complete!"
@echo "📖 To view locally, run: cd docs && make start"
@echo ""
vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
cp -r ../libs/cli/langchain_cli/integration_template src/theme
rm -rf build
mkdir static/api_reference
git clone --depth=1 https://github.com/langchain-ai/langchain-api-docs-html.git
mv langchain-api-docs-html/api_reference_build/html/* static/api_reference/
rm -rf langchain-api-docs-html
NODE_OPTIONS="--max-old-space-size=5000" yarn run docusaurus build
start:
@echo "🚀 Starting documentation server on port $(PORT)..."
@echo "📖 Installing Node.js dependencies..."
cd $(OUTPUT_NEW_DIR) && yarn install --silent
@echo "🌐 Starting server at http://localhost:$(PORT)"
@echo "Press Ctrl+C to stop the server"
cd $(OUTPUT_NEW_DIR) && yarn start --port=$(PORT)


@@ -1,3 +0,0 @@
# LangChain Documentation
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/how_to/documentation)


@@ -1,21 +0,0 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?= -j auto
SPHINXBUILD ?= sphinx-build
SPHINXAUTOBUILD ?= sphinx-autobuild
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)


@@ -1,144 +0,0 @@
"""A directive to generate a gallery of images from structured data.
Generating a gallery of images that are all the same size is a common
pattern in documentation, and this can be cumbersome if the gallery is
generated programmatically. This directive wraps this particular use-case
in a helper-directive to generate it with a single YAML configuration file.
It currently exists for maintainers of the pydata-sphinx-theme,
but might be abstracted into a standalone package if it proves useful.
"""
from pathlib import Path
from typing import Any, ClassVar, Dict, List
from docutils import nodes
from docutils.parsers.rst import directives
from sphinx.application import Sphinx
from sphinx.util import logging
from sphinx.util.docutils import SphinxDirective
from yaml import safe_load
logger = logging.getLogger(__name__)
TEMPLATE_GRID = """
`````{{grid}} {columns}
{options}
{content}
`````
"""
GRID_CARD = """
````{{grid-item-card}} {title}
{options}
{content}
````
"""
class GalleryGridDirective(SphinxDirective):
"""A directive to show a gallery of images and links in a Bootstrap grid.
The grid can be generated from a YAML file that contains a list of items, or
from the content of the directive (also formatted in YAML). Use the parameter
"class-card" to add an additional CSS class to all cards. When specifying the grid
items, you can use all parameters from "grid-item-card" directive to customize
individual cards + ["image", "header", "content", "title"].
Danger:
This directive can only be used in the context of a Myst documentation page as
the templates use Markdown flavored formatting.
"""
name = "gallery-grid"
has_content = True
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
option_spec: ClassVar[dict[str, Any]] = {
# A class to be added to the resulting container
"grid-columns": directives.unchanged,
"class-container": directives.unchanged,
"class-card": directives.unchanged,
}
def run(self) -> List[nodes.Node]:
"""Create the gallery grid."""
if self.arguments:
# If an argument is given, assume it's a path to a YAML file
# Parse it and load it into the directive content
path_data_rel = Path(self.arguments[0])
path_doc, _ = self.get_source_info()
path_doc = Path(path_doc).parent
path_data = (path_doc / path_data_rel).resolve()
if not path_data.exists():
logger.info(f"Could not find grid data at {path_data}.")
nodes.text("No grid data found at {path_data}.")
return
yaml_string = path_data.read_text()
else:
yaml_string = "\n".join(self.content)
# Use all the element with an img-bottom key as sites to show
# and generate a card item for each of them
grid_items = []
for item in safe_load(yaml_string):
# remove parameters that are not needed for the card options
title = item.pop("title", "")
# build the content of the card using some extra parameters
header = f"{item.pop('header')} \n^^^ \n" if "header" in item else ""
image = f"![image]({item.pop('image')}) \n" if "image" in item else ""
content = f"{item.pop('content')} \n" if "content" in item else ""
# optional parameter that influence all cards
if "class-card" in self.options:
item["class-card"] = self.options["class-card"]
loc_options_str = "\n".join(f":{k}: {v}" for k, v in item.items()) + " \n"
card = GRID_CARD.format(
options=loc_options_str, content=header + image + content, title=title
)
grid_items.append(card)
# Parse the template with Sphinx Design to create an output container
# Prep the options for the template grid
class_ = "gallery-directive" + f" {self.options.get('class-container', '')}"
options = {"gutter": 2, "class-container": class_}
options_str = "\n".join(f":{k}: {v}" for k, v in options.items())
# Create the directive string for the grid
grid_directive = TEMPLATE_GRID.format(
columns=self.options.get("grid-columns", "1 2 3 4"),
options=options_str,
content="\n".join(grid_items),
)
# Parse content as a directive so Sphinx Design processes it
container = nodes.container()
self.state.nested_parse([grid_directive], 0, container)
# Sphinx Design outputs a container too, so just use that
return [container.children[0]]
def setup(app: Sphinx) -> Dict[str, Any]:
"""Add custom configuration to sphinx app.
Args:
app: the Sphinx application
Returns:
the 2 parallel parameters set to ``True``.
"""
app.add_directive("gallery-grid", GalleryGridDirective)
return {
"parallel_read_safe": True,
"parallel_write_safe": True,
}
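For reference, a minimal sketch of how a single YAML item flows through the card template above (the item values are hypothetical, and GRID_CARD is assumed to be in scope from this module):

from yaml import safe_load

item = safe_load('{title: Core, header: "**langchain-core**", content: "Base abstractions", link: core/index.html}')
title = item.pop("title", "")
header = f"{item.pop('header')} \n^^^ \n" if "header" in item else ""
content = f"{item.pop('content')} \n" if "content" in item else ""
# Any remaining keys (here "link") pass through as grid-item-card options
options = "\n".join(f":{k}: {v}" for k, v in item.items()) + " \n"
print(GRID_CARD.format(options=options, content=header + content, title=title))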

View File

@@ -1,396 +0,0 @@
@import url('https://fonts.googleapis.com/css2?family=Inter:wght@400;700&display=swap');
/*******************************************************************************
* master color map. Only the colors that actually differ between light and dark
* themes are specified separately.
*
* To see the full list of colors see https://www.figma.com/file/rUrrHGhUBBIAAjQ82x6pz9/PyData-Design-system---proposal-for-implementation-(2)?node-id=1234%3A765&t=ifcFT1JtnrSshGfi-1
*/
/**
* Function to get items from nested maps
*/
/* Assign base colors for the PyData theme */
:root {
--pst-teal-50: #f4fbfc;
--pst-teal-100: #e9f6f8;
--pst-teal-200: #d0ecf1;
--pst-teal-300: #abdde6;
--pst-teal-400: #3fb1c5;
--pst-teal-500: #0a7d91;
--pst-teal-600: #085d6c;
--pst-teal-700: #064752;
--pst-teal-800: #042c33;
--pst-teal-900: #021b1f;
--pst-violet-50: #f4eefb;
--pst-violet-100: #e0c7ff;
--pst-violet-200: #d5b4fd;
--pst-violet-300: #b780ff;
--pst-violet-400: #9c5ffd;
--pst-violet-500: #8045e5;
--pst-violet-600: #6432bd;
--pst-violet-700: #4b258f;
--pst-violet-800: #341a61;
--pst-violet-900: #1e0e39;
--pst-gray-50: #f9f9fa;
--pst-gray-100: #f3f4f5;
--pst-gray-200: #e5e7ea;
--pst-gray-300: #d1d5da;
--pst-gray-400: #9ca4af;
--pst-gray-500: #677384;
--pst-gray-600: #48566b;
--pst-gray-700: #29313d;
--pst-gray-800: #222832;
--pst-gray-900: #14181e;
--pst-pink-50: #fcf8fd;
--pst-pink-100: #fcf0fa;
--pst-pink-200: #f8dff5;
--pst-pink-300: #f3c7ee;
--pst-pink-400: #e47fd7;
--pst-pink-500: #c132af;
--pst-pink-600: #912583;
--pst-pink-700: #6e1c64;
--pst-pink-800: #46123f;
--pst-pink-900: #2b0b27;
--pst-foundation-white: #ffffff;
--pst-foundation-black: #14181e;
--pst-green-10: #f1fdfd;
--pst-green-50: #E0F7F6;
--pst-green-100: #B3E8E6;
--pst-green-200: #80D6D3;
--pst-green-300: #4DC4C0;
--pst-green-400: #4FB2AD;
--pst-green-500: #287977;
--pst-green-600: #246161;
--pst-green-700: #204F4F;
--pst-green-800: #1C3C3C;
--pst-green-900: #0D2427;
--pst-lilac-50: #f4eefb;
--pst-lilac-100: #DAD6FE;
--pst-lilac-200: #BCB2FD;
--pst-lilac-300: #9F8BFA;
--pst-lilac-400: #7F5CF6;
--pst-lilac-500: #6F3AED;
--pst-lilac-600: #6028D9;
--pst-lilac-700: #5021B6;
--pst-lilac-800: #431D95;
--pst-lilac-900: #1e0e39;
--pst-header-height: 2.5rem;
}
html {
--pst-font-family-base: 'Inter';
--pst-font-family-heading: 'Inter Tight', sans-serif;
--pst-icon-versionmodified-deprecated: var(--pst-icon-exclamation-triangle);
}
/*******************************************************************************
* write the color rules for each theme (light/dark)
*/
/* NOTE:
* Mixins enable us to reuse the same definitions for the different modes
* https://sass-lang.com/documentation/at-rules/mixin
* #{something} inserts a variable into a CSS selector or property name
* https://sass-lang.com/documentation/interpolation
*/
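/* Illustrative sketch (not the actual source) of the SCSS pattern the note
 * above refers to, with a mixin holding the shared rules and interpolation
 * selecting the theme:
 *   @mixin theme-colors($mode) { color-scheme: $mode; }
 *   html[data-theme=#{$mode}] { @include theme-colors($mode); }
 * The compiled CSS below is the output of such mixins for light and dark.
 */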
/* Defaults to light mode if data-theme is not set */
html:not([data-theme]), html[data-theme=light] {
--pst-color-primary: #287977;
--pst-color-primary-bg: #80D6D3;
--pst-color-secondary: #6F3AED;
--pst-color-secondary-bg: #DAD6FE;
--pst-color-accent: #c132af;
--pst-color-accent-bg: #f8dff5;
--pst-color-info: #276be9;
--pst-color-info-bg: #dce7fc;
--pst-color-warning: #f66a0a;
--pst-color-warning-bg: #f8e3d0;
--pst-color-success: #00843f;
--pst-color-success-bg: #d6ece1;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #d72d47;
--pst-color-danger-bg: #f9e1e4;
--pst-color-text-base: #222832;
--pst-color-text-muted: #48566b;
--pst-color-heading-color: #ffffff;
--pst-color-shadow: rgba(0, 0, 0, 0.1);
--pst-color-border: #d1d5da;
--pst-color-border-muted: rgba(23, 23, 26, 0.2);
--pst-color-inline-code: #912583;
--pst-color-inline-code-links: #246161;
--pst-color-target: #f3cf95;
--pst-color-background: #ffffff;
--pst-color-on-background: #F4F9F8;
--pst-color-surface: #F4F9F8;
--pst-color-on-surface: #222832;
--pst-color-deprecated: #f47d2e;
--pst-color-deprecated-bg: #fff3e8;
}
html[data-theme=dark] {
--pst-color-primary: #4FB2AD;
--pst-color-primary-bg: #1C3C3C;
--pst-color-secondary: #7F5CF6;
--pst-color-secondary-bg: #431D95;
--pst-color-accent: #e47fd7;
--pst-color-accent-bg: #46123f;
--pst-color-info: #79a3f2;
--pst-color-info-bg: #06245d;
--pst-color-warning: #ff9245;
--pst-color-warning-bg: #652a02;
--pst-color-success: #5fb488;
--pst-color-success-bg: #002f17;
--pst-color-attention: var(--pst-color-warning);
--pst-color-attention-bg: var(--pst-color-warning-bg);
--pst-color-danger: #e78894;
--pst-color-danger-bg: #4e111b;
--pst-color-text-base: #ced6dd;
--pst-color-text-muted: #9ca4af;
--pst-color-heading-color: #14181e;
--pst-color-shadow: rgba(0, 0, 0, 0.2);
--pst-color-border: #48566b;
--pst-color-border-muted: #29313d;
--pst-color-inline-code: #f3c7ee;
--pst-color-inline-code-links: #4FB2AD;
--pst-color-target: #675c04;
--pst-color-background: #14181e;
--pst-color-on-background: #222832;
--pst-color-surface: #29313d;
--pst-color-on-surface: #f3f4f5;
--pst-color-deprecated: #b46f3e;
--pst-color-deprecated-bg: #341906;
/* Adjust images in dark mode (unless they have class .only-dark or
* .dark-light, in which case assume they're already optimized for dark
* mode).
*/
/* Give images a light background in dark mode in case they have
* transparency and black text (unless they have class .only-dark or .dark-light, in
* which case assume they're already optimized for dark mode).
*/
color-scheme: dark;
}
html:not([data-theme]) {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html:not([data-theme]) .only-dark,
html:not([data-theme]) .only-dark ~ figcaption {
display: none !important;
}
/* NOTE: @each {...} is like a for-loop
* https://sass-lang.com/documentation/at-rules/control/each
*/
html[data-theme=light] {
color-scheme: light;
}
html[data-theme=light] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=light] .only-dark,
html[data-theme=light] .only-dark ~ figcaption {
display: none !important;
}
html[data-theme=dark] {
--pst-color-link: var(--pst-color-primary);
--pst-color-link-hover: var(--pst-color-secondary);
}
html[data-theme=dark] .only-light,
html[data-theme=dark] .only-light ~ figcaption {
display: none !important;
}
html[data-theme=dark] img:not(.only-dark):not(.dark-light) {
filter: brightness(0.8) contrast(1.2);
}
html[data-theme=dark] .bd-content img:not(.only-dark):not(.dark-light) {
background: rgb(255, 255, 255);
border-radius: 0.25rem;
}
html[data-theme=dark] .MathJax_SVG * {
fill: var(--pst-color-text-base);
}
.pst-color-primary {
color: var(--pst-color-primary);
}
.pst-color-secondary {
color: var(--pst-color-secondary);
}
.pst-color-accent {
color: var(--pst-color-accent);
}
.pst-color-info {
color: var(--pst-color-info);
}
.pst-color-warning {
color: var(--pst-color-warning);
}
.pst-color-success {
color: var(--pst-color-success);
}
.pst-color-attention {
color: var(--pst-color-attention);
}
.pst-color-danger {
color: var(--pst-color-danger);
}
.pst-color-text-base {
color: var(--pst-color-text-base);
}
.pst-color-text-muted {
color: var(--pst-color-text-muted);
}
.pst-color-heading-color {
color: var(--pst-color-heading-color);
}
.pst-color-shadow {
color: var(--pst-color-shadow);
}
.pst-color-border {
color: var(--pst-color-border);
}
.pst-color-border-muted {
color: var(--pst-color-border-muted);
}
.pst-color-inline-code {
color: var(--pst-color-inline-code);
}
.pst-color-inline-code-links {
color: var(--pst-color-inline-code-links);
}
.pst-color-target {
color: var(--pst-color-target);
}
.pst-color-background {
color: var(--pst-color-background);
}
.pst-color-on-background {
color: var(--pst-color-on-background);
}
.pst-color-surface {
color: var(--pst-color-surface);
}
.pst-color-on-surface {
color: var(--pst-color-on-surface);
}
/* Adjust the height of the navbar */
.bd-header .bd-header__inner{
height: 52px; /* Adjust this value as needed */
}
.navbar-nav > li > a {
line-height: 52px; /* Vertically center the navbar links */
}
/* Make sure the navbar items align properly */
.navbar-nav {
display: flex;
}
.bd-header .navbar-header-items__start{
margin-left: 0rem;
}
.bd-header button.primary-toggle {
margin-right: 0rem;
}
.bd-header ul.navbar-nav .dropdown .dropdown-menu {
overflow-y: auto; /* Enable vertical scrolling */
max-height: 80vh;
}
.bd-sidebar-primary {
width: max-content; /* Adjust this value to your preference */
line-height: 1.4;
}
.bd-sidebar-secondary {
line-height: 1.4;
}
.toc-entry a.nav-link, .toc-entry a>code {
background-color: transparent;
border-color: transparent;
}
.bd-sidebar-primary code{
background-color: transparent;
border-color: transparent;
}
.toctree-wrapper li[class^=toctree-l1]>a {
font-size: 1.3em;
}
.toctree-wrapper li[class^=toctree-l1] {
margin-bottom: 2em;
}
.toctree-wrapper li[class^=toctree-l]>ul {
margin-top: 0.5em;
font-size: 0.9em;
}
*, :after, :before {
font-style: normal;
}
div.deprecated {
margin-top: 0.5em;
margin-bottom: 2em;
background-color: var(--pst-color-deprecated-bg);
border-color: var(--pst-color-deprecated);
}
span.versionmodified.deprecated:before {
color: var(--pst-color-deprecated);
}
.admonition-beta.admonition, div.admonition-beta.admonition {
border-color: var(--pst-color-warning);
margin-top:0.5em;
margin-bottom: 2em;
}
.admonition-beta>.admonition-title, div.admonition-beta>.admonition-title {
background-color: var(--pst-color-warning-bg);
}
dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dd {
margin-left: 1rem;
}
p {
font-size: 0.9rem;
margin-bottom: 0.5rem;
}

Binary file not shown.


View File

@@ -1,11 +0,0 @@
<svg width="72" height="19" viewBox="0 0 72 19" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_4019_2020)">
<path d="M29.4038 5.84477C30.1256 6.56657 30.1256 7.74117 29.4038 8.46296L27.7869 10.0538L27.7704 9.96259C27.6524 9.30879 27.3415 8.71552 26.8723 8.24627C26.5189 7.8936 26.1012 7.63282 25.6305 7.47143C25.3383 7.76508 25.1777 8.14989 25.1777 8.55487C25.1777 8.63706 25.1851 8.72224 25.2001 8.80742C25.4593 8.90082 25.6887 9.04503 25.8815 9.23781C26.6033 9.9596 26.6033 11.1342 25.8815 11.856L24.4738 13.2637C24.1129 13.6246 23.6392 13.8047 23.1647 13.8047C22.6902 13.8047 22.2165 13.6246 21.8556 13.2637C21.1338 12.5419 21.1338 11.3673 21.8556 10.6455L23.4725 9.05549L23.489 9.14665C23.6063 9.79896 23.9171 10.3922 24.3879 10.8622C24.742 11.2164 25.1343 11.4518 25.6043 11.6124L25.691 11.5257C25.954 11.2627 26.0982 10.913 26.0982 10.5402C26.0982 10.4572 26.0907 10.3743 26.0765 10.2929C25.8053 10.2032 25.5819 10.0754 25.3786 9.87218C25.0857 9.57928 24.9034 9.20493 24.8526 8.79024C24.8489 8.76035 24.8466 8.73121 24.8437 8.70132C24.8033 8.16109 24.9983 7.63357 25.3786 7.25399L26.7864 5.84627C27.1353 5.49733 27.6001 5.30455 28.0955 5.30455C28.5909 5.30455 29.0556 5.49658 29.4046 5.84627L29.4038 5.84477ZM36.7548 9.56583C36.7548 14.7163 32.5645 18.9058 27.4148 18.9058H9.34C4.1903 18.9058 0 14.7163 0 9.56583C0 4.41538 4.1903 0.22583 9.34 0.22583H27.4148C32.5652 0.22583 36.7548 4.41613 36.7548 9.56583ZM18 14.25C18.1472 14.0714 17.4673 13.5686 17.3283 13.384C17.0459 13.0777 17.0444 12.6368 16.8538 12.2789C16.3876 11.1985 15.8518 10.1262 15.1024 9.21166C14.3104 8.21116 13.333 7.38326 12.4745 6.44403C11.8371 5.78873 11.6668 4.85548 11.1041 4.15087C10.3285 3.00541 7.87624 2.69308 7.51683 4.31077C7.51833 4.36158 7.50264 4.39371 7.45855 4.42584C7.2598 4.57005 7.08271 4.73518 6.93402 4.93468C6.57013 5.44129 6.51409 6.30057 6.96839 6.75561C6.98333 6.51576 6.99155 6.28936 7.18134 6.1175C7.53252 6.41862 8.06304 6.52547 8.47026 6.30057C9.36989 7.585 9.14573 9.36184 9.86005 10.7457C10.0573 11.0729 10.2561 11.4069 10.5094 11.6939C10.7148 12.0137 11.4247 12.391 11.4665 12.6869C11.474 13.195 11.4142 13.7502 11.7475 14.1753C11.9044 14.4936 11.5188 14.8134 11.208 14.7738C10.8045 14.8291 10.3121 14.5026 9.95868 14.7036C9.8339 14.8388 9.58957 14.6894 9.48197 14.8769C9.44461 14.9741 9.24286 15.1108 9.36316 15.2042C9.49691 15.1026 9.62095 14.9965 9.80102 15.057C9.77412 15.2035 9.88994 15.2244 9.98184 15.267C9.97886 15.3663 9.92057 15.468 9.99679 15.5524C10.0857 15.4627 10.1388 15.3357 10.28 15.2983C10.7492 15.9238 11.2267 14.6655 12.2421 15.2318C12.0359 15.2214 11.8528 15.2475 11.7139 15.4172C11.6795 15.4553 11.6503 15.5001 11.7109 15.5494C12.2586 15.196 12.2556 15.6705 12.6112 15.5248C12.8847 15.382 13.1567 15.2035 13.4817 15.2543C13.1657 15.3454 13.153 15.5995 12.9677 15.8139C12.9363 15.8468 12.9213 15.8842 12.9579 15.9387C13.614 15.8834 13.6678 15.6652 14.1975 15.3977C14.5928 15.1564 14.9866 15.7414 15.3288 15.4082C15.4043 15.3357 15.5074 15.3604 15.6008 15.3507C15.4812 14.7133 14.1669 15.4672 14.1878 14.6124C14.6107 14.3247 14.5136 13.7741 14.542 13.3295C15.0284 13.5992 15.5694 13.7561 16.0461 14.0139C16.2867 14.4025 16.6641 14.9158 17.1669 14.8822C17.1804 14.8433 17.1923 14.8089 17.2065 14.7693C17.359 14.7955 17.5547 14.8964 17.6384 14.7036C17.8663 14.9419 18.201 14.93 18.4992 14.8687C18.7196 14.6894 18.0845 14.4338 17.9993 14.2493L18 14.25ZM31.3458 7.15387C31.3458 6.28413 31.0081 5.46744 30.3946 4.85399C29.7812 4.24054 28.9645 3.9028 28.094 3.9028C27.2235 3.9028 26.4068 4.24054 25.7933 4.85399L24.3856 6.26171C24.0569 6.59048 23.8073 6.97678 23.6436 7.40941L23.6339 7.43407L23.6085 7.44154C23.0974 7.5992 22.6469 7.86969 
22.2696 8.24702L20.8618 9.65475C19.5938 10.9235 19.5938 12.9873 20.8618 14.2553C21.4753 14.8687 22.292 15.2064 23.1617 15.2064C24.0314 15.2064 24.8489 14.8687 25.4623 14.2553L26.8701 12.8475C27.1973 12.5203 27.4454 12.1355 27.609 11.7036L27.6188 11.6789L27.6442 11.6707C28.1463 11.5168 28.6095 11.2373 28.9854 10.8622L30.3931 9.4545C31.0066 8.84105 31.3443 8.02436 31.3443 7.15387H31.3458ZM12.8802 13.1972C12.7592 13.6695 12.7196 14.4742 12.1054 14.4974C12.0546 14.7701 12.2944 14.8724 12.5119 14.785C12.7278 14.6856 12.8302 14.8635 12.9026 15.0406C13.2359 15.0891 13.7291 14.9292 13.7477 14.5347C13.2501 14.2478 13.0962 13.7023 12.8795 13.1972H12.8802Z" fill="#F4F3FF"/>
<path d="M43.5142 15.2258L47.1462 3.70583H49.9702L53.6022 15.2258H51.6182L48.3222 4.88983H48.7542L45.4982 15.2258H43.5142ZM45.5382 12.7298V10.9298H51.5862V12.7298H45.5382ZM55.0486 15.2258V3.70583H59.8086C59.9206 3.70583 60.0646 3.71116 60.2406 3.72183C60.4166 3.72716 60.5792 3.74316 60.7286 3.76983C61.3952 3.87116 61.9446 4.0925 62.3766 4.43383C62.8139 4.77516 63.1366 5.20716 63.3446 5.72983C63.5579 6.24716 63.6646 6.82316 63.6646 7.45783C63.6646 8.08716 63.5579 8.66316 63.3446 9.18583C63.1312 9.70316 62.8059 10.1325 62.3686 10.4738C61.9366 10.8152 61.3899 11.0365 60.7286 11.1378C60.5792 11.1592 60.4139 11.1752 60.2326 11.1858C60.0566 11.1965 59.9152 11.2018 59.8086 11.2018H56.9766V15.2258H55.0486ZM56.9766 9.40183H59.7286C59.8352 9.40183 59.9552 9.3965 60.0886 9.38583C60.2219 9.37516 60.3446 9.35383 60.4566 9.32183C60.7766 9.24183 61.0272 9.1005 61.2086 8.89783C61.3952 8.69516 61.5259 8.46583 61.6006 8.20983C61.6806 7.95383 61.7206 7.70316 61.7206 7.45783C61.7206 7.2125 61.6806 6.96183 61.6006 6.70583C61.5259 6.4445 61.3952 6.2125 61.2086 6.00983C61.0272 5.80716 60.7766 5.66583 60.4566 5.58583C60.3446 5.55383 60.2219 5.53516 60.0886 5.52983C59.9552 5.51916 59.8352 5.51383 59.7286 5.51383H56.9766V9.40183ZM65.4273 15.2258V3.70583H67.3553V15.2258H65.4273Z" fill="#F4F3FF"/>
</g>
<defs>
<clipPath id="clip0_4019_2020">
<rect width="71.0711" height="18.68" fill="white" transform="translate(0 0.22583)"/>
</clipPath>
</defs>
</svg>


View File

@@ -1,11 +0,0 @@
<svg width="72" height="20" viewBox="0 0 72 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_4019_689)">
<path d="M29.4038 5.97905C30.1256 6.70085 30.1256 7.87545 29.4038 8.59724L27.7869 10.188L27.7704 10.0969C27.6524 9.44307 27.3415 8.84979 26.8723 8.38055C26.5189 8.02787 26.1012 7.7671 25.6305 7.60571C25.3383 7.89936 25.1777 8.28416 25.1777 8.68915C25.1777 8.77134 25.1851 8.85652 25.2001 8.9417C25.4593 9.0351 25.6887 9.17931 25.8815 9.37209C26.6033 10.0939 26.6033 11.2685 25.8815 11.9903L24.4738 13.398C24.1129 13.7589 23.6392 13.939 23.1647 13.939C22.6902 13.939 22.2165 13.7589 21.8556 13.398C21.1338 12.6762 21.1338 11.5016 21.8556 10.7798L23.4725 9.18977L23.489 9.28093C23.6063 9.93323 23.9171 10.5265 24.3879 10.9965C24.742 11.3507 25.1343 11.586 25.6043 11.7467L25.691 11.66C25.954 11.397 26.0982 11.0473 26.0982 10.6745C26.0982 10.5915 26.0907 10.5086 26.0765 10.4271C25.8053 10.3375 25.5819 10.2097 25.3786 10.0065C25.0857 9.71356 24.9034 9.33921 24.8526 8.92451C24.8489 8.89463 24.8466 8.86549 24.8437 8.8356C24.8033 8.29537 24.9983 7.76785 25.3786 7.38827L26.7864 5.98055C27.1353 5.6316 27.6001 5.43883 28.0955 5.43883C28.5909 5.43883 29.0556 5.63086 29.4046 5.98055L29.4038 5.97905ZM36.7548 9.70011C36.7548 14.8506 32.5645 19.0401 27.4148 19.0401H9.34C4.1903 19.0401 0 14.8506 0 9.70011C0 4.54966 4.1903 0.360107 9.34 0.360107H27.4148C32.5652 0.360107 36.7548 4.55041 36.7548 9.70011ZM18 14.3843C18.1472 14.2057 17.4673 13.7029 17.3283 13.5183C17.0459 13.2119 17.0444 12.7711 16.8538 12.4132C16.3876 11.3327 15.8518 10.2605 15.1024 9.34594C14.3104 8.34543 13.333 7.51754 12.4745 6.57831C11.8371 5.92301 11.6668 4.98976 11.1041 4.28515C10.3285 3.13969 7.87624 2.82736 7.51683 4.44505C7.51833 4.49586 7.50264 4.52799 7.45855 4.56012C7.2598 4.70433 7.08271 4.86946 6.93402 5.06896C6.57013 5.57556 6.51409 6.43484 6.96839 6.88989C6.98333 6.65004 6.99155 6.42364 7.18134 6.25178C7.53252 6.5529 8.06304 6.65975 8.47026 6.43484C9.36989 7.71928 9.14573 9.49612 9.86005 10.8799C10.0573 11.2072 10.2561 11.5412 10.5094 11.8281C10.7148 12.1479 11.4247 12.5253 11.4665 12.8212C11.474 13.3293 11.4142 13.8844 11.7475 14.3096C11.9044 14.6279 11.5188 14.9477 11.208 14.9081C10.8045 14.9634 10.3121 14.6369 9.95868 14.8379C9.8339 14.9731 9.58957 14.8237 9.48197 15.0112C9.44461 15.1083 9.24286 15.2451 9.36316 15.3385C9.49691 15.2369 9.62095 15.1308 9.80102 15.1913C9.77412 15.3377 9.88994 15.3587 9.98184 15.4012C9.97886 15.5006 9.92057 15.6022 9.99679 15.6867C10.0857 15.597 10.1388 15.47 10.28 15.4326C10.7492 16.058 11.2267 14.7997 12.2421 15.3661C12.0359 15.3557 11.8528 15.3818 11.7139 15.5514C11.6795 15.5895 11.6503 15.6344 11.7109 15.6837C12.2586 15.3303 12.2556 15.8047 12.6112 15.659C12.8847 15.5163 13.1567 15.3377 13.4817 15.3885C13.1657 15.4797 13.153 15.7337 12.9677 15.9482C12.9363 15.9811 12.9213 16.0184 12.9579 16.073C13.614 16.0177 13.6678 15.7995 14.1975 15.532C14.5928 15.2907 14.9866 15.8757 15.3288 15.5425C15.4043 15.47 15.5074 15.4946 15.6008 15.4849C15.4812 14.8476 14.1669 15.6015 14.1878 14.7467C14.6107 14.459 14.5136 13.9083 14.542 13.4638C15.0284 13.7335 15.5694 13.8904 16.0461 14.1482C16.2867 14.5367 16.6641 15.0501 17.1669 15.0164C17.1804 14.9776 17.1923 14.9432 17.2065 14.9036C17.359 14.9298 17.5547 15.0306 17.6384 14.8379C17.8663 15.0762 18.201 15.0643 18.4992 15.003C18.7196 14.8237 18.0845 14.5681 17.9993 14.3836L18 14.3843ZM31.3458 7.28815C31.3458 6.41841 31.0081 5.60172 30.3946 4.98826C29.7812 4.37481 28.9645 4.03708 28.094 4.03708C27.2235 4.03708 26.4068 4.37481 25.7933 4.98826L24.3856 6.39599C24.0569 6.72476 23.8073 7.11106 23.6436 7.54369L23.6339 7.56835L23.6085 7.57582C23.0974 7.73348 22.6469 8.00396 
22.2696 8.3813L20.8618 9.78902C19.5938 11.0578 19.5938 13.1215 20.8618 14.3895C21.4753 15.003 22.292 15.3407 23.1617 15.3407C24.0314 15.3407 24.8489 15.003 25.4623 14.3895L26.8701 12.9818C27.1973 12.6545 27.4454 12.2697 27.609 11.8378L27.6188 11.8132L27.6442 11.805C28.1463 11.651 28.6095 11.3716 28.9854 10.9965L30.3931 9.58878C31.0066 8.97532 31.3443 8.15863 31.3443 7.28815H31.3458ZM12.8802 13.3315C12.7592 13.8037 12.7196 14.6085 12.1054 14.6316C12.0546 14.9044 12.2944 15.0067 12.5119 14.9193C12.7278 14.8199 12.8302 14.9978 12.9026 15.1748C13.2359 15.2234 13.7291 15.0635 13.7477 14.669C13.2501 14.3821 13.0962 13.8366 12.8795 13.3315H12.8802Z" fill="#246161"/>
<path d="M43.5142 15.3601L47.1462 3.84011H49.9702L53.6022 15.3601H51.6182L48.3222 5.02411H48.7542L45.4982 15.3601H43.5142ZM45.5382 12.8641V11.0641H51.5862V12.8641H45.5382ZM55.0486 15.3601V3.84011H59.8086C59.9206 3.84011 60.0646 3.84544 60.2406 3.85611C60.4166 3.86144 60.5792 3.87744 60.7286 3.90411C61.3952 4.00544 61.9446 4.22677 62.3766 4.56811C62.8139 4.90944 63.1366 5.34144 63.3446 5.86411C63.5579 6.38144 63.6646 6.95744 63.6646 7.59211C63.6646 8.22144 63.5579 8.79744 63.3446 9.32011C63.1312 9.83744 62.8059 10.2668 62.3686 10.6081C61.9366 10.9494 61.3899 11.1708 60.7286 11.2721C60.5792 11.2934 60.4139 11.3094 60.2326 11.3201C60.0566 11.3308 59.9152 11.3361 59.8086 11.3361H56.9766V15.3601H55.0486ZM56.9766 9.53611H59.7286C59.8352 9.53611 59.9552 9.53077 60.0886 9.52011C60.2219 9.50944 60.3446 9.48811 60.4566 9.45611C60.7766 9.37611 61.0272 9.23477 61.2086 9.03211C61.3952 8.82944 61.5259 8.60011 61.6006 8.34411C61.6806 8.08811 61.7206 7.83744 61.7206 7.59211C61.7206 7.34677 61.6806 7.09611 61.6006 6.84011C61.5259 6.57877 61.3952 6.34677 61.2086 6.14411C61.0272 5.94144 60.7766 5.80011 60.4566 5.72011C60.3446 5.68811 60.2219 5.66944 60.0886 5.66411C59.9552 5.65344 59.8352 5.64811 59.7286 5.64811H56.9766V9.53611ZM65.4273 15.3601V3.84011H67.3553V15.3601H65.4273Z" fill="#246161"/>
</g>
<defs>
<clipPath id="clip0_4019_689">
<rect width="71.0711" height="18.68" fill="white" transform="translate(0 0.360107)"/>
</clipPath>
</defs>
</svg>


View File

@@ -1,284 +0,0 @@
"""Configuration file for the Sphinx documentation builder."""
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
import json
import os
import sys
from datetime import datetime
from pathlib import Path
import toml
from docutils import nodes
from docutils.parsers.rst.directives.admonitions import BaseAdmonition
from docutils.statemachine import StringList
from sphinx.util.docutils import SphinxDirective
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
data = toml.load(f)
with (_DIR / "guide_imports.json").open("r") as f:
imported_classes = json.load(f)
class ExampleLinksDirective(SphinxDirective):
"""Directive to generate a list of links to examples.
We have a script that extracts links to API reference docs
from our notebook examples. This directive uses that information
to backlink to the examples from the API reference docs."""
has_content = False
required_arguments = 1
def run(self):
"""Run the directive.
Called any time ``.. example_links:: ClassName`` is used
in the template *.rst files."""
class_or_func_name = self.arguments[0]
links = imported_classes.get(class_or_func_name, {})
list_node = nodes.bullet_list()
for doc_name, link in sorted(links.items()):
item_node = nodes.list_item()
para_node = nodes.paragraph()
link_node = nodes.reference()
link_node["refuri"] = link
link_node.append(nodes.Text(doc_name))
para_node.append(link_node)
item_node.append(para_node)
list_node.append(item_node)
if list_node.children:
title_node = nodes.rubric()
title_node.append(nodes.Text(f"Examples using {class_or_func_name}"))
return [title_node, list_node]
return [list_node]
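# For reference, guide_imports.json maps each class or function name to a
# {page title: URL} dict; an illustrative (not actual) entry:
#   {"ChatOpenAI": {"Build a chatbot": "https://python.langchain.com/..."}}
# The directive is then invoked from the autosummary templates as:
#   .. example_links:: ChatOpenAI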
class Beta(BaseAdmonition):
required_arguments = 0
node_class = nodes.admonition
def run(self):
self.content = self.content or StringList(
[
(
"This feature is in beta. It is actively being worked on, so the "
"API may change."
)
]
)
self.arguments = self.arguments or ["Beta"]
return super().run()
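# Usage in the generated rst: ``.. beta::`` with no content renders the
# default message above; explicit content or an argument overrides it.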
def setup(app):
app.add_directive("example_links", ExampleLinksDirective)
app.add_directive("beta", Beta)
app.connect("autodoc-skip-member", skip_private_members)
def skip_private_members(app, what, name, obj, skip, options):
if skip:
return True
if hasattr(obj, "__doc__") and obj.__doc__ and ":private:" in obj.__doc__:
return True
if name == "__init__" and obj.__objclass__ is object:
# don't document default init
return True
return None
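# Illustrative example of the ``:private:`` convention checked above:
#   def helper():
#       """Internal utility. :private:"""
# would be skipped by autodoc even though its name is public.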
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = f"{datetime.now().year}, LangChain Inc"
author = "LangChain, Inc"
html_favicon = "_static/img/brand/favicon.png"
html_last_updated_fmt = "%b %d, %Y"
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.autodoc.typehints",
"sphinx.ext.autosummary",
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"IPython.sphinxext.ipython_console_highlighting",
"myst_parser",
"_extensions.gallery_directive",
"sphinx_design",
"sphinx_copybutton",
"sphinxcontrib.googleanalytics",
]
source_suffix = [".rst", ".md"]
# Some autodoc_pydantic options are repeated in the actual templates.
# This may be user error, but there may also be bugs in the Sphinx extension
# that prevent options from being passed through correctly from either location.
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
autodoc_pydantic_config_members = False
autodoc_pydantic_model_show_config_summary = False
autodoc_pydantic_model_show_validator_members = False
autodoc_pydantic_model_show_validator_summary = False
autodoc_pydantic_model_signature_prefix = "class"
autodoc_pydantic_field_signature_prefix = "param"
autodoc_member_order = "groupwise"
autoclass_content = "both"
autodoc_typehints_format = "short"
autodoc_typehints = "both"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# The theme to use for HTML and HTML Help pages.
html_theme = "pydata_sphinx_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
# # -- General configuration ------------------------------------------------
"sidebar_includehidden": True,
"use_edit_page_button": False,
# # "analytics": {
# # "plausible_analytics_domain": "scikit-learn.org",
# # "plausible_analytics_url": "https://views.scientific-python.org/js/script.js",
# # },
# # If "prev-next" is included in article_footer_items, then setting show_prev_next
# # to True would repeat prev and next links. See
# # https://github.com/pydata/pydata-sphinx-theme/blob/b731dc230bc26a3d1d1bb039c56c977a9b3d25d8/src/pydata_sphinx_theme/theme/pydata_sphinx_theme/layout.html#L118-L129
"show_prev_next": False,
"search_bar_text": "Search",
"navigation_with_keys": True,
"collapse_navigation": True,
"navigation_depth": 3,
"show_nav_level": 1,
"show_toc_level": 3,
"navbar_align": "left",
"header_links_before_dropdown": 5,
"header_dropdown_text": "Integrations",
"logo": {
"image_light": "_static/wordmark-api.svg",
"image_dark": "_static/wordmark-api-dark.svg",
},
"surface_warnings": True,
# # -- Template placement in theme layouts ----------------------------------
"navbar_start": ["navbar-logo"],
# # Note that the alignment of navbar_center is controlled by navbar_align
"navbar_center": ["navbar-nav"],
"navbar_end": ["langchain_docs", "theme-switcher", "navbar-icon-links"],
# # navbar_persistent is persistent right (even when on mobiles)
"navbar_persistent": ["search-field"],
"article_header_start": ["breadcrumbs"],
"article_header_end": [],
"article_footer_items": [],
"content_footer_items": [],
# # Use html_sidebars that map page patterns to list of sidebar templates
# "primary_sidebar_end": [],
"footer_start": ["copyright"],
"footer_center": [],
"footer_end": [],
# # When specified as a dictionary, the keys should follow glob-style patterns, as in
# # https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-exclude_patterns
# # In particular, "**" specifies the default for all pages
# # Use :html_theme.sidebar_secondary.remove: for file-wide removal
# "secondary_sidebar_items": {"**": ["page-toc", "sourcelink"]},
# "show_version_warning_banner": True,
# "announcement": None,
"icon_links": [
{
# Label for this link
"name": "GitHub",
# URL where the link will redirect
"url": "https://github.com/langchain-ai/langchain", # required
# Icon class (if "type": "fontawesome"), or path to local image (if "type": "local")
"icon": "fa-brands fa-square-github",
# The type of image to be used (see below for details)
"type": "fontawesome",
},
{
"name": "X / Twitter",
"url": "https://twitter.com/langchainai",
"icon": "fab fa-twitter-square",
},
],
"icon_links_label": "Quick Links",
"external_links": [],
}
html_context = {
"display_github": True, # Integrate GitHub
"github_user": "langchain-ai", # Username
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
# These paths are either relative to html_static_path
# or fully qualified paths (e.g. https://...)
html_css_files = ["css/custom.css"]
html_use_index = False
myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True
# Don't fail on autosummary import warnings
autosummary_ignore_module_all = False
html_copy_source = False
html_show_sourcelink = False
# Set canonical URL from the Read the Docs Domain
html_baseurl = os.environ.get("READTHEDOCS_CANONICAL_URL", "")
googleanalytics_id = "G-9B66JQQH2F"
# Tell Jinja2 templates the build is running on Read the Docs
if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True
master_doc = "index"
# If a signature's length in characters exceeds 60,
# each parameter within the signature will be displayed on an individual logical line
maximum_signature_line_length = 60

View File

@@ -1,695 +0,0 @@
"""Script for auto-generating api_reference.rst."""
import importlib
import inspect
import os
import sys
import typing
from enum import Enum
from pathlib import Path
from typing import Dict, List, Literal, Optional, Sequence, TypedDict, Union
import toml
import typing_extensions
from langchain_core.runnables import Runnable, RunnableSerializable
from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
ClassKind = Literal[
"TypedDict",
"Regular",
"Pydantic",
"enum",
"RunnablePydantic",
"RunnableNonPydantic",
]
class ClassInfo(TypedDict):
"""Information about a class."""
name: str
"""The name of the class."""
qualified_name: str
"""The fully qualified name of the class."""
kind: ClassKind
"""The kind of the class."""
is_public: bool
"""Whether the class is public or not."""
is_deprecated: bool
"""Whether the class is deprecated."""
class FunctionInfo(TypedDict):
"""Information about a function."""
name: str
"""The name of the function."""
qualified_name: str
"""The fully qualified name of the function."""
is_public: bool
"""Whether the function is public or not."""
is_deprecated: bool
"""Whether the function is deprecated."""
class ModuleMembers(TypedDict):
"""A dictionary of module members."""
classes_: Sequence[ClassInfo]
functions: Sequence[FunctionInfo]
def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
"""Load all members of a module.
Args:
module_path: Path to the module.
namespace: the namespace of the module.
Returns:
list: A list of loaded module objects.
"""
classes_: List[ClassInfo] = []
functions: List[FunctionInfo] = []
module = importlib.import_module(module_path)
if ":private:" in (module.__doc__ or ""):
return ModuleMembers(classes_=[], functions=[])
for name, type_ in inspect.getmembers(module):
if not hasattr(type_, "__module__"):
continue
if type_.__module__ != module_path:
continue
if ":private:" in (type_.__doc__ or ""):
continue
if inspect.isclass(type_):
# The type of the class is used to select a template
# for the object when rendering the documentation.
# See `templates` directory for defined templates.
# This is a hacky solution to distinguish between different
# kinds of thing that we want to render.
if type(type_) is typing_extensions._TypedDictMeta: # type: ignore
kind: ClassKind = "TypedDict"
elif type(type_) is typing._TypedDictMeta: # type: ignore
kind = "TypedDict"
elif (
issubclass(type_, Runnable)
and issubclass(type_, BaseModel)
and type_ is not Runnable
):
# RunnableSerializable subclasses from Pydantic,
# for which we use autodoc_pydantic for rendering.
# We need to distinguish these from regular Pydantic
# classes so we can hide inherited Runnable methods
# and provide a link to the Runnable interface from
# the template.
kind = "RunnablePydantic"
elif (
issubclass(type_, Runnable)
and not issubclass(type_, BaseModel)
and type_ is not Runnable
):
# These are not pydantic classes but are Runnable.
# We'll hide all the inherited methods from Runnable
# but use a regular class template to render.
kind = "RunnableNonPydantic"
elif issubclass(type_, Enum):
kind = "enum"
elif issubclass(type_, BaseModel):
kind = "Pydantic"
else:
kind = "Regular"
classes_.append(
ClassInfo(
name=name,
qualified_name=f"{namespace}.{name}",
kind=kind,
is_public=not name.startswith("_"),
is_deprecated=".. deprecated::" in (type_.__doc__ or ""),
)
)
elif inspect.isfunction(type_):
functions.append(
FunctionInfo(
name=name,
qualified_name=f"{namespace}.{name}",
is_public=not name.startswith("_"),
is_deprecated=".. deprecated::" in (type_.__doc__ or ""),
)
)
else:
continue
return ModuleMembers(
classes_=classes_,
functions=functions,
)
def _merge_module_members(
module_members: Sequence[ModuleMembers],
) -> ModuleMembers:
"""Merge module members."""
classes_: List[ClassInfo] = []
functions: List[FunctionInfo] = []
for module in module_members:
classes_.extend(module["classes_"])
functions.extend(module["functions"])
return ModuleMembers(
classes_=classes_,
functions=functions,
)
def _load_package_modules(
package_directory: Union[str, Path], submodule: Optional[str] = None
) -> Dict[str, ModuleMembers]:
"""Recursively load modules of a package based on the file system.
Traversal based on the file system makes it easy to determine which
of the modules/packages are part of the package vs. 3rd party or built-in.
Parameters:
package_directory (Union[str, Path]): Path to the package directory.
submodule (Optional[str]): Optional name of submodule to load.
Returns:
Dict[str, ModuleMembers]: A dictionary where keys are module names and values are ModuleMembers objects.
"""
package_path = (
Path(package_directory)
if isinstance(package_directory, str)
else package_directory
)
modules_by_namespace: Dict[str, ModuleMembers] = {}
# Get the high level package name
package_name = package_path.name
# If we are loading a submodule, add it in
if submodule is not None:
package_path = package_path / submodule
for file_path in package_path.rglob("*.py"):
if file_path.name.startswith("_"):
continue
if "integration_template" in file_path.parts:
continue
if "project_template" in file_path.parts:
continue
relative_module_name = file_path.relative_to(package_path)
# Skip if any module part starts with an underscore
if any(part.startswith("_") for part in relative_module_name.parts):
continue
# Get the full namespace of the module
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
# Keep only the top level namespace
top_namespace = namespace.split(".")[0]
try:
# If submodule is present, we need to construct the paths in a slightly
# different way
if submodule is not None:
module_members = _load_module_members(
f"{package_name}.{submodule}.{namespace}",
f"{submodule}.{namespace}",
)
else:
module_members = _load_module_members(
f"{package_name}.{namespace}", namespace
)
# Merge module members if the namespace already exists
if top_namespace in modules_by_namespace:
existing_module_members = modules_by_namespace[top_namespace]
_module_members = _merge_module_members(
[existing_module_members, module_members]
)
else:
_module_members = module_members
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}")
return modules_by_namespace
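# For illustration (hypothetical layout): a file
# <package_dir>/chat_models/base.py yields the namespace "chat_models.base",
# whose members are merged into the top-level "chat_models" entry of the
# returned dict.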
def _construct_doc(
package_namespace: str,
members_by_namespace: Dict[str, ModuleMembers],
package_version: str,
) -> List[typing.Tuple[str, str]]:
"""Construct the contents of the reference.rst file for the given package.
Args:
package_namespace: The package top level namespace
members_by_namespace: The members of the package, organized as a dict
keyed by top-level module; each entry lists the classes and functions
inside that namespace.
package_version: The version of the package.
Returns:
The contents of the reference.rst file.
"""
docs = []
index_doc = f"""\
:html_theme.sidebar_secondary.remove:
.. currentmodule:: {package_namespace}
.. _{package_namespace}:
======================================
{package_namespace.replace("_", "-")}: {package_version}
======================================
.. automodule:: {package_namespace}
:no-members:
:no-inherited-members:
.. toctree::
:hidden:
:maxdepth: 2
"""
index_autosummary = """
"""
namespaces = sorted(members_by_namespace)
for module in namespaces:
index_doc += f" {module}\n"
module_doc = f"""\
.. currentmodule:: {package_namespace}
.. _{package_namespace}_{module}:
"""
_members = members_by_namespace[module]
classes = [
el
for el in _members["classes_"]
if el["is_public"] and not el["is_deprecated"]
]
functions = [
el
for el in _members["functions"]
if el["is_public"] and not el["is_deprecated"]
]
deprecated_classes = [
el for el in _members["classes_"] if el["is_public"] and el["is_deprecated"]
]
deprecated_functions = [
el
for el in _members["functions"]
if el["is_public"] and el["is_deprecated"]
]
if not (classes or functions):
continue
section = f":mod:`{module}`"
underline = "=" * (len(section) + 1)
module_doc += f"""
{section}
{underline}
.. automodule:: {package_namespace}.{module}
:no-members:
:no-inherited-members:
"""
index_autosummary += f"""
:ref:`{package_namespace}_{module}`
{"^" * (len(package_namespace) + len(module) + 8)}
"""
if classes:
module_doc += f"""\
**Classes**
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
"""
index_autosummary += """
**Classes**
.. autosummary::
"""
for class_ in sorted(classes, key=lambda c: c["qualified_name"]):
if class_["kind"] == "TypedDict":
template = "typeddict.rst"
elif class_["kind"] == "enum":
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
elif class_["kind"] == "RunnablePydantic":
template = "runnable_pydantic.rst"
elif class_["kind"] == "RunnableNonPydantic":
template = "runnable_non_pydantic.rst"
else:
template = "class.rst"
module_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
index_autosummary += f"""
{class_["qualified_name"]}
"""
if functions:
_functions = [f["qualified_name"] for f in functions]
fstring = "\n ".join(sorted(_functions))
module_doc += f"""\
**Functions**
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
:template: function.rst
{fstring}
"""
index_autosummary += f"""
**Functions**
.. autosummary::
{fstring}
"""
if deprecated_classes:
module_doc += f"""\
**Deprecated classes**
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
"""
index_autosummary += """
**Deprecated classes**
.. autosummary::
"""
for class_ in sorted(deprecated_classes, key=lambda c: c["qualified_name"]):
if class_["kind"] == "TypedDict":
template = "typeddict.rst"
elif class_["kind"] == "enum":
template = "enum.rst"
elif class_["kind"] == "Pydantic":
template = "pydantic.rst"
elif class_["kind"] == "RunnablePydantic":
template = "runnable_pydantic.rst"
elif class_["kind"] == "RunnableNonPydantic":
template = "runnable_non_pydantic.rst"
else:
template = "class.rst"
module_doc += f"""\
:template: {template}
{class_["qualified_name"]}
"""
index_autosummary += f"""
{class_["qualified_name"]}
"""
if deprecated_functions:
_functions = [f["qualified_name"] for f in deprecated_functions]
fstring = "\n ".join(sorted(_functions))
module_doc += f"""\
**Deprecated functions**
.. currentmodule:: {package_namespace}
.. autosummary::
:toctree: {module}
:template: function.rst
{fstring}
"""
index_autosummary += f"""
**Deprecated functions**
.. autosummary::
{fstring}
"""
docs.append((f"{module}.rst", module_doc))
docs.append(("index.rst", index_doc + index_autosummary))
return docs
def _build_rst_file(package_name: str = "langchain") -> None:
"""Create a rst file for building of documentation.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
"""
package_dir = _package_dir(package_name)
package_members = _load_package_modules(package_dir)
package_version = _get_package_version(package_dir)
output_dir = _out_file_path(package_name)
os.mkdir(output_dir)
rsts = _construct_doc(
_package_namespace(package_name), package_members, package_version
)
for name, rst in rsts:
with open(output_dir / name, "w") as f:
f.write(rst)
def _package_namespace(package_name: str) -> str:
"""Returns the package name used.
Args:
package_name: Can be either "langchain" or "core" or "experimental".
Returns:
The namespace: "langchain" itself, "langchain_tests" for "standard-tests",
or "langchain_{package_name}" with dashes replaced by underscores.
"""
if package_name == "langchain":
return "langchain"
if package_name == "standard-tests":
return "langchain_tests"
return f"langchain_{package_name.replace('-', '_')}"
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
if (ROOT_DIR / "libs" / package_name).exists():
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
ROOT_DIR
/ "libs"
/ "partners"
/ package_name
/ _package_namespace(package_name)
)
def _get_package_version(package_dir: Path) -> str:
"""Return the version of the package."""
try:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
except FileNotFoundError:
print(
f"pyproject.toml not found in {package_dir.parent}.\n"
"You are either attempting to build a directory which is not a package "
"or the package is missing a pyproject.toml file which should be added.\n"
"Aborting the build."
)
sys.exit(1)
try:
# uses uv
return pyproject["project"]["version"]
except KeyError:
# uses poetry
return pyproject["tool"]["poetry"]["version"]
def _out_file_path(package_name: str) -> Path:
"""Return the path to the file containing the documentation."""
return HERE / f"{package_name.replace('-', '_')}"
def _build_index(dirs: List[str]) -> None:
custom_names = {
"aws": "AWS",
"ai21": "AI21",
"ibm": "IBM",
}
ordered = [
"core",
"langchain",
"text-splitters",
"community",
"experimental",
"standard-tests",
]
main_ = [dir_ for dir_ in ordered if dir_ in dirs]
integrations = sorted(dir_ for dir_ in dirs if dir_ not in main_)
doc = """# LangChain Python API Reference
Welcome to the LangChain Python API reference. This is a reference for all
`langchain-x` packages.
For user guides see [https://python.langchain.com](https://python.langchain.com).
For the legacy API reference hosted on ReadTheDocs see [https://api.python.langchain.com/](https://api.python.langchain.com/).
"""
if main_:
main_headers = [
" ".join(custom_names.get(x, x.title()) for x in dir_.split("-"))
for dir_ in main_
]
main_tree = "\n".join(
f"{header_name}<{dir_.replace('-', '_')}/index>"
for header_name, dir_ in zip(main_headers, main_)
)
main_grid = "\n".join(
f'- header: "**{header_name}**"\n content: "{_package_namespace(dir_).replace("_", "-")}: {_get_package_version(_package_dir(dir_))}"\n link: {dir_.replace("-", "_")}/index.html'
for header_name, dir_ in zip(main_headers, main_)
)
doc += f"""## Base packages
```{{gallery-grid}}
:grid-columns: "1 2 2 3"
{main_grid}
```
```{{toctree}}
:maxdepth: 2
:hidden:
:caption: Base packages
{main_tree}
```
"""
if integrations:
integration_headers = [
" ".join(
custom_names.get(
x,
x.title().replace("db", "DB")
if dir_ == "langchain_v1"
else x.title().replace("ai", "AI").replace("db", "DB"),
)
for x in dir_.split("-")
)
for dir_ in integrations
]
integration_tree = "\n".join(
f"{header_name}<{dir_.replace('-', '_')}/index>"
for header_name, dir_ in zip(integration_headers, integrations)
)
integration_grid = ""
integrations_to_show = [
"openai",
"anthropic",
"google-vertexai",
"aws",
"huggingface",
"mistralai",
]
for header_name, dir_ in sorted(
zip(integration_headers, integrations),
key=lambda h_d: (
integrations_to_show.index(h_d[1])
if h_d[1] in integrations_to_show
else len(integrations_to_show)
),
)[: len(integrations_to_show)]:
integration_grid += f'\n- header: "**{header_name}**"\n content: {_package_namespace(dir_).replace("_", "-")} {_get_package_version(_package_dir(dir_))}\n link: {dir_.replace("-", "_")}/index.html'
doc += f"""## Integrations
```{{gallery-grid}}
:grid-columns: "1 2 2 3"
{integration_grid}
```
See the full list of integrations in the Section Navigation.
```{{toctree}}
:maxdepth: 2
:hidden:
:caption: Integrations
{integration_tree}
```
"""
with open(HERE / "reference.md", "w") as f:
f.write(doc)
dummy_index = """\
# API reference
```{toctree}
:maxdepth: 3
:hidden:
Reference<reference>
```
"""
with open(HERE / "index.md", "w") as f:
f.write(dummy_index)
def main(dirs: Optional[list] = None) -> None:
"""Generate the api_reference.rst file for each package."""
print("Starting to build API reference files.")
if not dirs:
dirs = [
p.parent.name
for p in (ROOT_DIR / "libs").rglob("pyproject.toml")
# Exclude packages that are not directly under libs/ or libs/partners/
if p.parent.parent.name in ("libs", "partners")
]
for dir_ in sorted(dirs):
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# e.g., .pytest_cache from running tests from the wrong location.
if dir_.startswith("."):
print("Skipping dir:", dir_)
continue
else:
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
_build_index(sorted(dirs))
print("API reference files built.")
if __name__ == "__main__":
dirs = sys.argv[1:] or None
main(dirs=dirs)
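A usage note: package directory names passed on the command line (for example `core langchain`) restrict the build to those packages; with no arguments, the script discovers and builds every package found directly under libs/ and libs/partners/.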

File diff suppressed because one or more lines are too long

View File

@@ -1,35 +0,0 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd
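Usage follows the standard Sphinx pattern: running `make.bat <target>` (for example `make.bat html`) from this directory forwards the target to `sphinx-build -M`, with `.` as the source directory and `_build` as the build directory.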

View File

@@ -1,12 +0,0 @@
autodoc_pydantic>=2,<3
sphinx>=8,<9
myst-parser>=3
sphinx-autobuild>=2024
pydata-sphinx-theme>=0.15
toml>=0.10.2
myst-nb>=1.1.1
pyyaml
sphinx-design
sphinx-copybutton
beautifulsoup4
sphinxcontrib-googleanalytics

View File

@@ -1,44 +0,0 @@
import sys
from glob import glob
from pathlib import Path
from bs4 import BeautifulSoup
CUR_DIR = Path(__file__).parents[1]
def process_toc_h3_elements(html_content: str) -> str:
"""Update Class.method() TOC headers to just method()."""
# Create a BeautifulSoup object
soup = BeautifulSoup(html_content, "html.parser")
# Find all <li> elements with class "toc-h3"
toc_h3_elements = soup.find_all("li", class_="toc-h3")
# Process each element
for element in toc_h3_elements:
try:
element = element.a.code.span
except Exception:
# Entry lacks the expected <a><code><span> structure; leave it unchanged
continue
# Get the text content of the element
content = element.get_text()
# Keep only the final component (the method name) after the last dot
modified_content = content.split(".")[-1]
# Update the element's content
element.string = modified_content
# Return the modified HTML
return str(soup)
if __name__ == "__main__":
target_dir = sys.argv[1]
for fn in glob(f"{target_dir.rstrip('/')}/**/*.html", recursive=True):
with open(fn, "r") as f:
html = f.read()
processed_html = process_toc_h3_elements(html)
with open(fn, "w") as f:
f.write(processed_html)
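A minimal sketch of the transformation (the HTML snippet is illustrative, assuming the function above is importable):

snippet = '<li class="toc-h3"><a><code><span>ChatModel.invoke</span></code></a></li>'
print(process_toc_h3_elements(snippet))
# The span text is reduced from "ChatModel.invoke" to "invoke"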

View File

@@ -1,27 +0,0 @@
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,36 +0,0 @@
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -1,14 +0,0 @@
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -1,8 +0,0 @@
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autofunction:: {{ objname }}
.. example_links:: {{ objname }}

View File

@@ -1,12 +0,0 @@
<!-- This will display a link to LangChain docs -->
<head>
<style>
.text-link {
text-decoration: none; /* Remove underline */
color: inherit; /* Inherit color from parent element */
}
</style>
</head>
<body>
<a href="https://python.langchain.com/" class='text-link'>Docs</a>
</body>

View File

@@ -1,24 +0,0 @@
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autopydantic_model:: {{ objname }}
:model-show-json: False
:model-show-config-summary: False
:model-show-validator-members: False
:model-show-field-summary: False
:field-signature-prefix: param
:members:
:undoc-members:
:inherited-members:
:member-order: groupwise
:show-inheritance: True
:special-members: __call__
:exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, model_construct, model_copy, model_dump, model_dump_json, model_parametrized_name, model_post_init, model_rebuild, model_validate, model_validate_json, model_validate_strings, model_extra, model_fields_set, model_json_schema
{% block attributes %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -1,16 +0,0 @@
{% set redirect = pathto(redirects[pagename]) %}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Refresh" content="0; url={{ redirect }}" />
<meta name="robots" content="follow, index">
<meta name="Description" content="Python API reference for LangChain.">
<link rel="canonical" href="{{ redirect }}" />
<title>LangChain Python API Reference Documentation.</title>
</head>
<body>
<p>You will be automatically redirected to the <a href="{{ redirect }}">new location of this page</a>.</p>
</body>
</html>

View File

@@ -1,40 +0,0 @@
{{ objname }}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ item }}
{%- endfor %}
{% for item in methods %}
.. automethod:: {{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}

View File

@@ -1,24 +0,0 @@
{{ objname }}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autopydantic_model:: {{ objname }}
   :model-show-json: False
   :model-show-config-summary: False
   :model-show-validator-members: False
   :model-show-field-summary: False
   :field-signature-prefix: param
   :members:
   :undoc-members:
   :inherited-members:
   :member-order: groupwise
   :show-inheritance: True
   :special-members: __call__
   :exclude-members: construct, copy, dict, from_orm, parse_file, parse_obj, parse_raw, schema, schema_json, update_forward_refs, validate, json, is_lc_serializable, to_json_not_implemented, lc_secrets, lc_attributes, lc_id, get_lc_namespace, astream_log, transform, atransform, get_output_schema, get_prompts, config_schema, map, pick, pipe, InputType, OutputType, config_specs, output_schema, get_input_schema, get_graph, get_name, input_schema, name, assign, as_tool, get_config_jsonschema, get_input_jsonschema, get_output_jsonschema, model_construct, model_copy, model_dump, model_dump_json, model_parametrized_name, model_post_init, model_rebuild, model_validate, model_validate_json, model_validate_strings, to_json, model_extra, model_fields_set, model_json_schema, predict, apredict, predict_messages, apredict_messages, generate, generate_prompt, agenerate, agenerate_prompt, call_as_llm

.. NOTE:: {{ objname }} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃

   The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.

.. example_links:: {{ objname }}
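The autopydantic_model directive above comes from the autodoc-pydantic extension, and each per-directive option (:model-show-json:, :field-signature-prefix:, ...) has a global conf.py equivalent. A sketch of the global form, assuming the extension's documented option names:

# conf.py -- global equivalents of the per-directive options above (sketch;
# assumes the sphinxcontrib autodoc_pydantic extension and its option names).
extensions = ["sphinxcontrib.autodoc_pydantic"]

autodoc_pydantic_model_show_json = False
autodoc_pydantic_model_show_config_summary = False
autodoc_pydantic_model_show_validator_members = False
autodoc_pydantic_model_show_field_summary = False
autodoc_pydantic_field_signature_prefix = "param"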

@@ -1,14 +0,0 @@
{{ objname }}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}

.. example_links:: {{ objname }}

@@ -1,12 +0,0 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @format
*/
module.exports = {
  presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
};

Some files were not shown because too many files have changed in this diff.