Compare commits

...

212 Commits

Author SHA1 Message Date
Sydney Runkle
59f5faddb7 model settings in create agent 2025-11-19 11:14:23 -05:00
ccurme
328ba36601 chore(openai): skip Azure text completions tests (#34021) 2025-11-19 09:29:12 -05:00
Sydney Runkle
6f677ef5c1 chore: temporarily skip openai integration tests (#34020)
Getting around deprecated Azure model issues blocking the core release
2025-11-19 14:05:22 +00:00
Sydney Runkle
d47d41cbd3 release: langchain-core 1.0.6 (#34018) 2025-11-19 08:16:34 -05:00
William FH
32bbe99efc chore: Support tool runtime injection when custom args schema is prov… (#33999)
Support injection of injected args (like `InjectedToolCallId`,
`ToolRuntime`) when an `args_schema` is specified that doesn't contain
said args.

This allows for pydantic validation of other args while retaining the
ability to inject langchain specific arguments.
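
For illustration, a minimal sketch of the pattern this enables (the schema
and tool names here are hypothetical):

```python
from typing import Annotated

from langchain_core.tools import InjectedToolCallId, tool
from pydantic import BaseModel


class AddInput(BaseModel):
    """User-facing args only; injected args are intentionally omitted."""

    x: int


@tool(args_schema=AddInput)
def add_one(x: int, tool_call_id: Annotated[str, InjectedToolCallId]) -> str:
    """Add one to x."""
    # `x` is validated by the pydantic schema; `tool_call_id` is injected.
    return f"{x + 1} (tool call {tool_call_id})"
```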

fixes https://github.com/langchain-ai/langchain/issues/33646
fixes https://github.com/langchain-ai/langchain/issues/31688

Taking a deep dive here reminded me that we definitely need to revisit
our internal tooling logic, but I don't think we should do that in this
PR.

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
Co-authored-by: Sydney Runkle <sydneymarierunkle@gmail.com>
2025-11-18 17:09:59 +00:00
ccurme
990e346c46 release(anthropic): 1.1 (#33997) 2025-11-17 16:24:29 -05:00
ccurme
9b7792631d feat(anthropic): support native structured output feature and strict tool calling (#33980) 2025-11-17 16:14:20 -05:00
CKLogic
558a8fe25b feat(core): add proxy support for mermaid png rendering (#32400)
### Description

This PR adds support for configuring HTTP/HTTPS proxies when rendering
Mermaid diagrams as PNG images using the remote Mermaid.INK API. This
enhancement allows users in restricted network environments to access
the API via a proxy, making the remote rendering feature more robust and
accessible.

The changes include:
- Added optional `proxies` parameter to `draw_mermaid_png` and
`_render_mermaid_using_api` functions
- Updated `Graph.draw_mermaid_png` method to support and pass through
proxy configuration
- Enhanced docstrings with usage examples for the new parameter
- Maintained full backward compatibility with existing code

### Usage Example

```python
proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

display(Image(chain.get_graph().draw_mermaid_png(proxies=proxies)))
```

### Dependencies

No new dependencies required. Uses existing `requests` library for HTTP
requests.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-17 12:45:17 -06:00
Mason Daugherty
52b1516d44 style(langchain): fix some middleware ref syntax (#33988) 2025-11-16 00:33:17 -05:00
Mason Daugherty
8a3bb73c05 release(openai): 1.0.3 (#33981)
- Respect 300k token limit for embeddings API requests #33668
- fix create_agent / response_format for Responses API #33939
- fix `response.incomplete` event not being handled when using
`stream_mode=['messages']` #33871
2025-11-14 19:18:50 -05:00
Mason Daugherty
099c042395 refactor(openai): embedding utils and calculations (#33982)
Now returns (`_iter`, `tokens`, `indices`, `token_counts`). The
`token_counts` are calculated directly during tokenization, which is
more accurate and efficient than splitting strings later.
2025-11-14 19:18:37 -05:00
Kaparthy Reddy
2d4f00a451 fix(openai): Respect 300k token limit for embeddings API requests (#33668)
## Description

Fixes #31227 - Resolves the issue where `OpenAIEmbeddings` exceeds
OpenAI's 300,000 token per request limit, causing 400 BadRequest errors.

## Problem

When embedding large document sets, LangChain would send batches
containing more than 300,000 tokens in a single API request, causing
this error:
```
openai.BadRequestError: Error code: 400 - {'error': {'message': 'Requested 673477 tokens, max 300000 tokens per request'}}
```

The issue occurred because:
- The code chunks texts by `embedding_ctx_length` (8191 tokens per
chunk)
- Then batches chunks by `chunk_size` (default 1000 chunks per request)
- **But didn't check**: Total tokens per batch against OpenAI's 300k
limit
- Result: `1000 chunks × 8191 tokens = 8,191,000 tokens` → Exceeds
limit!

## Solution

This PR implements dynamic batching that respects the 300k token limit:

1. **Added constant**: `MAX_TOKENS_PER_REQUEST = 300000`
2. **Track token counts**: Calculate actual tokens for each chunk
3. **Dynamic batching**: Instead of fixed `chunk_size` batches,
accumulate chunks until approaching the 300k limit (see the sketch after
this list)
4. **Applied to both sync and async**: Fixed both
`_get_len_safe_embeddings` and `_aget_len_safe_embeddings`
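
A hedged sketch of the token-aware batching idea (illustrative only, not
the exact implementation):

```python
MAX_TOKENS_PER_REQUEST = 300_000


def batch_by_token_limit(chunks, token_counts, limit=MAX_TOKENS_PER_REQUEST):
    """Yield batches of chunks whose summed token counts stay under `limit`."""
    batch, batch_tokens = [], 0
    for chunk, n_tokens in zip(chunks, token_counts):
        # Flush the current batch before it would exceed the per-request cap.
        if batch and batch_tokens + n_tokens > limit:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(chunk)
        batch_tokens += n_tokens
    if batch:
        yield batch
```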

## Changes

- Modified `langchain_openai/embeddings/base.py`:
  - Added `MAX_TOKENS_PER_REQUEST` constant
  - Replaced fixed-size batching with token-aware dynamic batching
  - Applied to both sync (line ~478) and async (line ~527) methods
- Added test in `tests/unit_tests/embeddings/test_base.py`:
- `test_embeddings_respects_token_limit()` - Verifies large document
sets are properly batched

## Testing

All existing tests pass (280 passed, 4 xfailed, 1 xpassed).

New test verifies:
- Large document sets (500 texts × 1000 tokens = 500k tokens) are split
into multiple API calls
- Each API call respects the 300k token limit

## Usage

After this fix, users can embed large document sets without errors:
```python
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
from langchain_text_splitters import CharacterTextSplitter

# This will now work without exceeding token limits
embeddings = OpenAIEmbeddings()
documents = CharacterTextSplitter().split_documents(large_documents)
Chroma.from_documents(documents, embeddings)
```

Resolves #31227

---------

Co-authored-by: Kaparthy Reddy <kaparthyreddy@Kaparthys-MacBook-Air.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-14 18:12:07 -05:00
Sydney Runkle
9bd401a6d4 fix: resumable shell, works w/ interrupts (#33978)
fixes https://github.com/langchain-ai/langchain/issues/33684

Now able to run this minimal snippet successfully

```py
import os

from langchain.agents import create_agent
from langchain.agents.middleware import (
    HostExecutionPolicy,
    HumanInTheLoopMiddleware,
    ShellToolMiddleware,
)
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command


shell_middleware = ShellToolMiddleware(
    workspace_root=os.getcwd(),
    env=os.environ,  # danger
    execution_policy=HostExecutionPolicy()
)

hil_middleware = HumanInTheLoopMiddleware(interrupt_on={"shell": True})

checkpointer = InMemorySaver()

agent = create_agent(
    "openai:gpt-4.1-mini",
    middleware=[shell_middleware, hil_middleware],
    checkpointer=checkpointer,
)

input_message = {"role": "user", "content": "run `which python`"}

config = {"configurable": {"thread_id": "1"}}

result = agent.invoke(
    {"messages": [input_message]},
    config=config,
    durability="exit",
)
```
2025-11-14 15:32:25 -05:00
ccurme
6aa3794b74 feat(langchain): reference model profiles for provider strategy (#33974) 2025-11-14 19:24:18 +00:00
Sydney Runkle
189dcf7295 chore: increase coverage for shell, filesystem, and summarization middleware (#33928)
CC-generated; just a start here, but wanted to bump things up from ~70%.
2025-11-14 13:30:36 -05:00
Sydney Runkle
1bc88028e6 fix(anthropic): execute bash + file tools via tool node (#33960)
* use `override` instead of directly patching things on `ModelRequest`
* rely on `ToolNode` for execution of tools related to said middleware,
using `wrap_model_call` to inject the relevant claude tool specs +
allowing tool node to forward them along to corresponding langchain tool
implementations
* making the same change for the native shell tool middleware
* allowing shell tool middleware to specify a name for the shell tool
(negative diff then for claude bash middleware)


Long term, I think the solution might be to attach metadata to a tool to
map the provider spec to a LangChain implementation; we could also take
some lessons from the MCP front here.
2025-11-14 13:17:01 -05:00
Mason Daugherty
d2942351ce release(core): 1.0.5 (#33973) 2025-11-14 11:51:27 -05:00
Sydney Runkle
83c078f363 fix: adding missing async hooks (#33957)
* filling in missing async gaps
* using recommended tool runtime injection instead of injected state
  * updating tests to use helper function as well
2025-11-14 09:13:39 -05:00
ZhangShenao
26d39ffc4a docs: Fix doc links (#33964) 2025-11-14 09:07:32 -05:00
Mason Daugherty
421e2ceeee fix(core): don't mask exceptions (#33959) 2025-11-14 09:05:29 -05:00
Mason Daugherty
275dcbf69f docs(core): add clarity to base token counting methods (#33958)
Wasn't immediately obvious that `get_num_tokens_from_messages` adds
prefixes to represent user roles in the conversation, which increases
the overall token count.

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-flash")
num_tokens = llm.get_num_tokens("Hello, world!")
print(f"Number of tokens: {num_tokens}")
# Number of tokens: 4
```

```python
from langchain.messages import HumanMessage

messages = [HumanMessage(content="Hello, world!")]

num_tokens = llm.get_num_tokens_from_messages(messages)
print(f"Number of tokens: {num_tokens}")
# Number of tokens: 6
```
2025-11-13 17:15:47 -05:00
Sydney Runkle
9f87b27a5b fix: add filesystem middleware in init (#33955) 2025-11-13 15:07:33 -05:00
Mason Daugherty
b2e1196e29 chore(core,infra): nits (#33954) 2025-11-13 14:50:54 -05:00
Sydney Runkle
2dc1396380 chore(langchain): update deps (#33951) 2025-11-13 14:21:25 -05:00
Mason Daugherty
77941ab3ce feat(infra): add automatic issue labeling (#33952) 2025-11-13 14:13:52 -05:00
Mason Daugherty
ee19a30dde fix(groq): bump min ver for core dep (#33949)
Due to an issue with unit tests and the docs URL for exceptions
2025-11-13 11:46:54 -05:00
Mason Daugherty
5d799b3174 release(nomic): 1.0.1 (#33948)
support Python 3.14 #33655
2025-11-13 11:25:39 -05:00
Mason Daugherty
8f33a985a2 release(groq): 1.0.1 (#33947)
- fix: handle tool calls with no args #33896
- add prompt caching token usage details #33708
2025-11-13 11:25:00 -05:00
Mason Daugherty
78eeccef0e release(deepseek): 1.0.1 (#33946)
- support strict beta structured output #32727
2025-11-13 11:24:39 -05:00
ccurme
3d415441e8 fix(langchain, openai): backward compat for response_format (#33945) 2025-11-13 11:11:35 -05:00
ccurme
74385e0ebd fix(langchain, openai): fix create_agent / response_format for Responses API (#33939) 2025-11-13 10:18:15 -05:00
Christophe Bornet
2bfbc29ccc chore(core): fix some ruff TC rules (#33929)
Fix some ruff TC rules but still don't enforce them, as Pydantic model
fields use type annotations at runtime.
2025-11-12 14:07:19 -05:00
Christophe Bornet
ef79c26f18 chore(cli,standard-tests,text-splitters): fix some ruff TC rules (#33934)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-12 14:06:31 -05:00
ccurme
fbe32c8e89 release(anthropic): 1.0.3 (#33935) 2025-11-12 10:55:28 -05:00
Mohammad Mohtashim
2511c28f92 feat(anthropic): support code_execution_20250825 (#33925) 2025-11-12 10:44:51 -05:00
Sydney Runkle
637bb1cbbc feat: refactor tests coverage (#33927)
Middleware tests have gotten quite unwieldy; this is a major restructuring
that sets the stage for a coverage increase.

This is super hard to review -- as proof that we've retained important
tests, I ran coverage on `master` and this branch and confirmed
identical coverage.

* moving all middleware related tests to `agents/middleware` folder
* consolidating related test files
* adding coverage utility to makefile
2025-11-11 10:40:12 -05:00
Mason Daugherty
3dfea96ec1 chore: update README.md files (#33919) 2025-11-10 22:51:35 -05:00
ccurme
68643153e5 feat(langchain): support async summarization in SummarizationMiddleware (#33918) 2025-11-10 15:48:51 -05:00
Abbas Syed
462762f75b test(core): add comprehensive tests for groq block translator (#33906) 2025-11-10 15:45:36 -05:00
ccurme
4f3729c004 release(model-profiles): 0.0.4 (#33917) 2025-11-10 12:06:32 -05:00
Mason Daugherty
ba428cdf54 chore(infra): add note to pr linting workflow (#33916) 2025-11-10 11:49:31 -05:00
Mason Daugherty
69c7d1b01b test(groq,openai): add retries for flaky tests (#33914) 2025-11-10 10:36:11 -05:00
Mason Daugherty
733299ec13 revert(core): "applied secrets_map in load to plain string values" (#33913)
Reverts langchain-ai/langchain#33678

Breaking API change
2025-11-10 10:29:30 -05:00
ccurme
e1adf781c6 feat(langchain): (SummarizationMiddleware) support use of model context windows when triggering summarization (#33825) 2025-11-10 10:08:52 -05:00
Shahroz Ahmad
31b5e4810c feat(deepseek): support strict beta structured output (#32727)
**Description:** This PR adds support for DeepSeek's beta strict mode
feature for structured
outputs and tool calling. It overrides `bind_tools()` and
`with_structured_output()` to automatically use
DeepSeek's beta endpoint (https://api.deepseek.com/beta) when
`strict=True`. Both methods need overriding because they're independent
entry points and users can call either directly. When DeepSeek's strict
mode graduates from beta, we can just remove both overridden methods. You
can read more about the beta feature here:
https://api-docs.deepseek.com/guides/function_calling#strict-mode-beta
  
**Issue:** Implements #32670 


**Dependencies:** None


**Sample Code**

```python
from langchain_deepseek import ChatDeepSeek
from pydantic import BaseModel, Field
from typing import Optional
import os


# Enter your DeepSeek API Key here
API_KEY = "YOUR_API_KEY"


# location, temperature, condition are required fields
# humidity is optional field with default value
class WeatherInfo(BaseModel):
    location: str = Field(description="City name")
    temperature: int = Field(description="Temperature in Celsius")
    condition: str = Field(description="Weather condition (sunny, cloudy, rainy)")
    humidity: Optional[int] = Field(default=None, description="Humidity percentage")


llm = ChatDeepSeek(
    model="deepseek-chat",
    api_key=API_KEY,
)

# just to confirm that a new instance will use the default base url (instead of beta)
print(f"Default API base: {llm.api_base}")



# Test 1: bind_tools with strict=True should list all the tool calls
print("\nTest 1: bind_tools with strict=True")
llm_with_tools = llm.bind_tools([WeatherInfo], strict=True)
response = llm_with_tools.invoke("Tell me the weather in New York. It's 22 degrees, sunny.")
print(response.tool_calls)



# Test 2: with_structured_output with strict=True
print("\nTest 2: with_structured_output with strict=True")
structured_llm = llm.with_structured_output(WeatherInfo, strict=True)
result = structured_llm.invoke("Tell me the weather in New York.")
print(f"  Result: {result}")
assert isinstance(result, WeatherInfo), "Result should be a WeatherInfo instance"
```

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-09 22:24:33 -05:00
Mason Daugherty
c6801fe159 chore: fix URL underlining in README.md (#33905) 2025-11-09 22:22:56 -05:00
AmazingcatAndrew
1b563067f8 fix(chroma): resolve OpenCLIP + Chroma image embedding test regression (#33899)
**Description:**  
Fixes the OpenCLIP × Chroma regression that caused nested embedding
errors when adding or searching image data.
The test case `test_openclip_chroma_embed_no_nesting_error` has been
restored and verified to work correctly with the current LangChain core
dependencies.
Functional validation confirms that `similarity_search_by_image` now
returns correct, metadata‑preserving results.

**Issue:**  
Fixes #33851

**Dependencies:**  
No new dependencies introduced.  

**Testing:**
All tests under

```bash
uv run --group test pytest tests/unit_tests
```

pass using Python 3.13.9 and a uv-managed environment:

```
30 passed in 91.26s (0:01:31)
```

This confirms that the regression has been fixed.

Running  
```bash
make test
```  
still produces cleanup‑time `AttributeError: 'ProactorEventLoop' object
has no attribute '_ssock'` on Windows (Python 3.13+).
This is a benign asyncio teardown message rather than a functional
failure.
`uv run pytest` closes event loops immediately after tests, while `make
test` invokes pytest through a secondary process layer that leaves a
background loop alive at interpreter shutdown.
This difference in teardown behavior explains the extra messages seen
only when using `make test`.

**Summary:**  
- Verified the OpenCLIP + Chroma image pipeline works correctly.  
- `uv run --group test pytest` fully passes; the fix is complete.  
- The residual `_ssock` warnings occur only during
Windows asyncio cleanup and are not related to this code change.

This is my first time contributing code; please contact me with any
questions

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-09 21:24:33 -05:00
Mason Daugherty
1996d81d72 chore(langchain): pass on reference docstrings (middleware) (#33904) 2025-11-09 21:18:28 -05:00
Mason Daugherty
ab0677c6f1 fix(groq): handle tool calls with no args (#33896)
When Groq returns tool calls with no arguments, it sends arguments:
`'null'` (JSON null), but LangChain's core parsing expects either a dict
or converts null to Python None, which fails the `isinstance(args_,
dict)` check and incorrectly marks the tool call as invalid.
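
For context, a tiny sketch of the normalization this implies (illustrative,
not the exact patch):

```python
import json

raw_arguments = "null"  # what Groq may send for a no-arg tool call
args = json.loads(raw_arguments) or {}  # treat JSON null as an empty args dict
assert isinstance(args, dict)
```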

Related to #32017
2025-11-08 22:30:44 -05:00
artreimus
bdb53c93cc docs(langchain): correct IBM provider link in chat_models docstring (#33897)
**Description**
Fix broken link in the `chat_models` docstring. The **ibm** bullet
incorrectly linked to the DeepSeek provider page; update it to the
canonical IBM provider docs.

This only affects generated API reference content on
`reference.langchain.com`. No runtime behavior changes.

**Issue**
N/A (documentation-only).

**Dependencies**
None.

**Testing & quality**

* Ran `make format`, `make lint`, and `make test` in the package (no
code changes expected to affect tests).
2025-11-08 07:02:33 -06:00
Alazar Genene
94d5271cb5 fix(standard-tests): fix semantic typo in if statement (#33890) 2025-11-07 18:01:59 -05:00
ccurme
e499db4266 release(langchain): 1.0.5 (#33893) 2025-11-07 17:54:43 -05:00
npage902
cc3af82b47 fix(core): applied secrets_map in load to plain string values (#33678)
Replaces #33618 

**Description:** Fixes the bug in the `load()` function where secret
placeholders in plain dicts were not replaced, even if they match a key
in `secrets_map`, and adds a test case.

Example:
```py
obj = {"api_key": "__SECRET_API_KEY__"}
secret_key = "secret_key_1234"
secrets_map = {"__SECRET_API_KEY__": secret_key}
result = load(obj, secrets_map=secrets_map)
```
Before this change, printing `api_key` in `result` would output
`"__SECRET_API_KEY__"`. Now, it will properly output
`"secret_key_1234"`.

**Issue:** Fixes #31804 

**Dependencies:** None

`make format`, `make lint`, and `make test` have all passed on my
machine.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 17:14:13 -05:00
Mshari
9383b78be1 feat(groq): add prompt caching token usage details (#33708)
**Description:** 
Adds support for prompt caching usage metadata in ChatGroq. The
integration now captures cached token information from the Groq API
response and includes it in the `input_token_details` field of the
`usage_metadata`.

Changes:
- Created new `_create_usage_metadata()` helper function to centralize
usage metadata creation logic
- Extracts `cached_tokens` from `prompt_tokens_details` in API responses
and maps to `input_token_details.cache_read`
- Integrated the helper function in both streaming
(`_convert_chunk_to_message_chunk`) and non-streaming
(`_create_chat_result`) code paths
- Added comprehensive unit tests to verify caching metadata handling and
backward compatibility

This enables users to monitor prompt caching effectiveness when using
Groq models with prompt caching enabled.
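
A hedged usage sketch (the model id is an arbitrary assumption):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.3-70b-versatile")  # hypothetical model id
msg = llm.invoke("Hello!")
details = (msg.usage_metadata or {}).get("input_token_details", {})
print(details.get("cache_read", 0))  # cached prompt tokens read on this call
```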

**Issue:** N/A

**Dependencies:** None

---------

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 17:05:22 -05:00
ccurme
3c492571ab release(anthropic): 1.0.2 (#33888) 2025-11-07 16:47:25 -05:00
ccurme
f2410f7ea7 revert: Support for SystemMessage in create_agent (#33889)
Reverts langchain-ai/langchain#33640

Introduces lint errors into langchain-anthropic

Should be incorporated into 1.1 instead of a patch release.
2025-11-07 16:44:11 -05:00
Mason Daugherty
91560b6a7a chore(infra): expand PR labeling (#33887) 2025-11-07 16:37:35 -05:00
ccurme
b1dd448233 release(core): 1.0.4 (#33886) 2025-11-07 16:26:44 -05:00
dy93
904daf6f40 feat(core): support draw subgraph using pygraphviz (#32966)
The `draw_png()` method currently does not support drawing subgraphs.
This PR adds the ability to render subgraph outlines, improving
visualization clarity when working with nested structures.
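
A minimal usage sketch (requires `pygraphviz`; `chain` is a hypothetical
runnable whose graph contains subgraphs):

```python
# Renders the graph, now including subgraph outlines, to a PNG file.
chain.get_graph().draw_png("chain.png")
```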
2025-11-07 15:58:35 -05:00
Mohammad Mohtashim
8e31a5d7bd fix(core): Fix tool name check in name_dict for PydanticToolsParser (#33479)
- **Description:** The root cause of this issue is that when a user
defines `model_config` in a `BaseModel`, the `{"type": <tool_name>}`
value is derived from the title specified in `model_config` when the
results are parsed
[here](https://vscode.dev/github/keenborder786/langchain/blob/fix/tool_name_dict/libs/core/langchain_core/output_parsers/openai_tools.py#L199).
However,
[tool.__name__](https://vscode.dev/github/keenborder786/langchain/blob/fix/tool_name_dict/libs/core/langchain_core/output_parsers/openai_tools.py#L331)
uses the class name (in uppercase) of the `BaseModel`, resulting in a
`KeyError` when a custom title is provided in `model_config`.
 

The best solution is to use the title provided in the `model_config`
attribute, if one is provided, since that is what `type` will be parsed
to; otherwise, use `tool.__name__`. We also need to make sure that this
works only for Pydantic V2.
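
A small illustration of the mismatch (hypothetical model):

```python
from pydantic import BaseModel, ConfigDict


class WeatherTool(BaseModel):
    model_config = ConfigDict(title="get_weather")

    city: str


# Parsed tool calls carry {"type": "get_weather"} (the schema title), while
# `name_dict` was keyed by WeatherTool.__name__ == "WeatherTool", so lookups
# raised a KeyError when a custom title was set.
print(WeatherTool.model_json_schema()["title"])  # -> "get_weather"
```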

  - **Issue:** #27260

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 15:39:47 -05:00
Sydney Runkle
ee630b4539 fix: bump up default recursion limit (#33881)
Fixes https://github.com/langchain-ai/langchain/issues/33740

We don't want to depend on the recursion limit here; model call limit
middleware is more appropriate (see the sketch below)
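
A hedged sketch of the middleware approach (assumes
`ModelCallLimitMiddleware` from `langchain.agents.middleware`; the model id
is arbitrary):

```python
from langchain.agents import create_agent
from langchain.agents.middleware import ModelCallLimitMiddleware

# Cap model calls explicitly instead of leaning on the graph recursion limit.
agent = create_agent(
    "openai:gpt-4.1-mini",
    middleware=[ModelCallLimitMiddleware(run_limit=10)],
)
```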
2025-11-07 13:49:12 -06:00
Jacob Lee
46971447df fix(core): Filter empty content blocks from formatted prompts (#32519)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-07 14:39:25 -05:00
Azibek
d8b94007c1 fix(huggingface): pass llm params to ChatHuggingFace (#32368)
This PR fixes #32234 and improves the HuggingFace chat model integration by:

- Ensuring `ChatHuggingFace` inherits key parameters (temperature,
max_tokens, top_p, streaming, etc.) from the underlying LLM when not
explicitly set (see the sketch below).
- Adding and updating unit tests to verify property inheritance.

No breaking changes; these updates enhance reliability and maintainability.
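
A hedged sketch of the inheritance behavior (the repo id is illustrative):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",  # illustrative model
    temperature=0.2,
    max_new_tokens=256,
)
chat = ChatHuggingFace(llm=llm)
# temperature / token settings are inherited from `llm` unless set explicitly
```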

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 14:29:15 -05:00
Mohammad Mohtashim
cf595dcc38 chore(langchain): Support for SystemMessage in create_agent (#33640)
- **Description:** Updated the function signature of `create_agent` so the
system prompt can be either a list or a string. I see no harm in doing
this, since `SystemMessage` accepts both.
- **Issue:** #33630

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-11-07 13:00:38 -06:00
Copilot
d27211cfa7 fix(core): context preservation in shielded async callbacks (#32163)
The `@shielded` decorator in async callback managers was not preserving
context variables, breaking OpenTelemetry instrumentation and other
context-dependent functionality.

## Problem

When using async callbacks with the `@shielded` decorator (applied to
methods like `on_llm_end`, `on_chain_end`, etc.), context variables were
not being preserved across the shield boundary. This caused issues with:

- OpenTelemetry span context propagation
- Other instrumentation that relies on context variables
- Inconsistent context behavior between sync and async execution

The issue was reproducible with:

```python
from contextvars import copy_context
import asyncio
from langgraph.graph import StateGraph

# Sync case: context remains consistent
print("SYNC")
print(copy_context())  # Same object
graph.invoke({"result": "init"})
print(copy_context())  # Same object

# Async case: context was inconsistent (before fix)
print("ASYNC") 
asyncio.run(graph.ainvoke({"result": "init"}))
print(copy_context())  # Different object than expected
```

## Root Cause

The original `shielded` decorator implementation:

```python
async def wrapped(*args: Any, **kwargs: Any) -> Any:
    return await asyncio.shield(func(*args, **kwargs))
```

Used `asyncio.shield()` directly without preserving the current
execution context, causing context variables to be lost.

## Solution

Modified the `shielded` decorator to:

1. Capture the current context using `copy_context()`
2. Create a task with explicit context using `asyncio.create_task(coro,
context=ctx)` for Python 3.11+
3. Shield the context-aware task
4. Fallback to regular task creation for Python < 3.11

```python
async def wrapped(*args: Any, **kwargs: Any) -> Any:
    # Capture the current context to preserve context variables
    ctx = copy_context()
    coro = func(*args, **kwargs)
    
    try:
        # Create a task with the captured context to preserve context variables
        task = asyncio.create_task(coro, context=ctx)
        return await asyncio.shield(task)
    except TypeError:
        # Python < 3.11 fallback
        task = asyncio.create_task(coro)
        return await asyncio.shield(task)
```

## Testing

- Added comprehensive test
`test_shielded_callback_context_preservation()` that validates context
variables are preserved across shielded callback boundaries
- Verified the fix resolves the original LangGraph context consistency
issue
- Confirmed all existing callback manager tests still pass
- Validated OpenTelemetry-like instrumentation scenarios work correctly

The fix is minimal, maintains backward compatibility, and ensures proper
context preservation for both modern Python versions and older ones.

Fixes #31398.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-07 13:09:47 -05:00
Swastik-Swarup-Dash
ca1a3fbe88 fix(core): RunnablePick may not return a dict if keys is a string (#31321)
Change made From:
```python
class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
```
To:
```python
class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
```
As suggested by @cbornet 
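
For context, a small example of the behavior the widened annotation reflects:

```python
from langchain_core.runnables import RunnablePick

pick = RunnablePick("name")
# With a string key, the picked value itself is returned, not a dict.
assert pick.invoke({"name": "Ada", "age": 36}) == "Ada"
```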

Fixes #31309

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 13:04:20 -05:00
williamzhu54
c955b53aed fix(core): fix Runnable parallel schema being empty when children runnable input schemas use TypedDict (#28196)
# Description
This submission is a part of a school project from our team of 4
@EminGul @williamzhu54 @annay54 @donttouch22.

Our pull request fixes the issue of the `RunnableParallel` schema being
empty by returning the correct schema output when child runnables' input
schemas use TypedDicts.

# Issue
Fixes #24326


# Dependencies
No extra dependencies required for this fix.

# Feedback
Any feedback and advice is gladly welcomed. Please feel free to let us
know what we can change or improve upon regarding this issue.

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
2025-11-07 12:01:21 -05:00
Christophe Bornet
2a626d9608 refactor(langchain): use create_importer for HypotheticalDocumentEmbedder (#32078) 2025-11-07 11:16:00 -05:00
Abhinav
0861cba04b fix(chroma): pydantic validation error when using retriever.invoke() (#31377) 2025-11-07 10:59:16 -05:00
Lê Nam Khánh
88246f45b3 docs: fix typos in libs/core/langchain_core/utils/function_calling.py (#33873) 2025-11-07 10:34:28 -05:00
Lê Nam Khánh
1d04514354 docs: fix typos in libs/core/tests/unit_tests/utils/test_strings.py (#33875) 2025-11-07 10:34:12 -05:00
Lê Nam Khánh
c2324b8f3e docs: fix typos in libs/langchain/langchain_classic/chains/summarize/chain.py (#33877) 2025-11-07 10:33:53 -05:00
Lê Nam Khánh
957ea65d12 docs: fix typos in libs/core/tests/unit_tests/indexing/test_hashed_document.py (#33874) 2025-11-07 10:32:20 -05:00
Lê Nam Khánh
00fa38a295 docs: fix typos in libs/core/tests/unit_tests/test_tools.py (#33876) 2025-11-07 10:31:57 -05:00
Lê Nam Khánh
9d98c1b669 docs: fix typos in libs/partners/groq/langchain_groq/chat_models.py (#33878) 2025-11-07 10:31:35 -05:00
Mahmut CAVDAR
00cc9d421f fix(langchain): Update langchain-core dependency version (#33775) 2025-11-07 10:31:06 -05:00
Mohammad Mohtashim
65716cf590 feat(perplexity): Created Dedicated Output Parser to Support Reasoning Model Output for perplexity (#33670) 2025-11-07 10:17:35 -05:00
riunyfir
1b77a191f4 feat: The response.incomplete event is not handled when using stream_mode=['messages'] (#33871) 2025-11-07 09:46:11 -05:00
repeat-Q
ebfde9173c docs: expand "Why use LangChain?" section in README (#33846) 2025-11-07 09:09:05 -05:00
Lê Nam Khánh
2fe0369049 docs: fix typos in some files (#33867) 2025-11-07 09:04:29 -05:00
Mason Daugherty
e023201d42 style: some cleanup (#33857) 2025-11-06 23:50:46 -05:00
Mason Daugherty
d40e340479 chore: attribute package change versions (#33854)
Needed to disambiguate within inherited docs
2025-11-06 16:57:30 -05:00
Sydney Runkle
9a09ed0659 fix: don't trace conditional edges and no todos in input state (#33842)
while experimenting w/ todo middleware

| Before | After |
|--------|-------|
| ![Screenshot 2025-11-05 at 1 56 21 PM](https://github.com/user-attachments/assets/63195ae4-8122-4662-8246-0fbc16cb1e22) | ![Screenshot 2025-11-05 at 1 56 03 PM](https://github.com/user-attachments/assets/255e2fa8-e52d-4d1a-949a-33df52ee6668) |
| Tracing conditional edges (verbose) | Not tracing conditional edges (cleaner) |
| ![Screenshot 2025-11-05 at 1 57 56 PM](https://github.com/user-attachments/assets/449ccfe9-4c21-4c87-8e0e-6e89d7a97611) | ![Screenshot 2025-11-05 at 1 56 58 PM](https://github.com/user-attachments/assets/c5c28d0e-2153-4572-af29-b2528761fec6) |
| Todos in input state (cluttered) | No todos in input state (cleaner) |
2025-11-05 14:25:57 -05:00
Mason Daugherty
5f27b546dd chore: update README.md with deepagents (#33843) 2025-11-05 14:22:20 -05:00
Mason Daugherty
022fdd52c3 fix(core): handle missing dependency version information (#33844)
Follow up to #33347

This continues to make searching issues difficult
2025-11-05 14:19:55 -05:00
Sydney Runkle
7946a8f64e release: langchain v1.0.4 (#33839) 2025-11-05 12:37:58 -05:00
Sydney Runkle
7af79039fc fix: only increment thread count on successful executions (#33837)
* for run count + thread count overflow, we should warn the model not to
call again
* don't tally mocked tool calls in the thread limit -- consider the
following:
  * run limit is 1
  * thread limit is 3
  * first run calls the tool 2 times: 1 executes, 1 is blocked
  * we should only count the successful execution above towards the total
thread count
* raise more helpful warnings on invalid config
2025-11-05 10:00:07 -05:00
Sydney Runkle
1755750ca1 fix: more robust tool call limit middleware (#33817)
* improving typing (covariance)
* adding in support for continuing w/ tool calls not yet at threshold,
switching default to continue
* moving all logic into after model

```py
ExitBehavior = Literal["continue", "error", "end"]
"""How to handle execution when tool call limits are exceeded.
- `"continue"`: Block exceeded tools with error messages, let other tools continue (default)
- `"error"`: Raise a `ToolCallLimitExceededError` exception
- `"end"`: Stop execution immediately, injecting a ToolMessage and an AI message
    for the single tool call that exceeded the limit. Raises `NotImplementedError`
    if there are multiple tool calls
"""
```
2025-11-05 09:18:21 -05:00
Mason Daugherty
ddb53672e2 chore(infra): remove unused pr-title-labeler.yml (#33831) 2025-11-04 20:06:52 -05:00
Mason Daugherty
eeae34972f chore(infra): drop langchain_v1 pr lint (#33830)
Just use `langchain`
2025-11-04 19:46:05 -05:00
Mason Daugherty
47d89b1e47 fix(langchain): remove Tigris (#33829)
Removing this code as there is no possible way for it to work.

See https://github.com/langchain-ai/langchain-community/pull/159
2025-11-04 19:45:52 -05:00
Mason Daugherty
ee0bdaeb79 chore: correct langchain-community references (#33827)
fix docstrings that referenced community versions of now-native packages
2025-11-04 17:01:35 -05:00
Christophe Bornet
915c446c48 chore(core): add ruff rule PLR2004 (#33706)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-11-04 13:33:37 -05:00
Mason Daugherty
d1e2099408 chore(core): clean pyproject formatting (#33821) 2025-11-04 18:21:15 +00:00
Mason Daugherty
6ea15b9efa docs(model-profiles): fix typo (#33820) 2025-11-04 18:19:55 +00:00
Mason Daugherty
69f33aaff5 chore(infra): remove unused poetry_setup action (#33819) 2025-11-04 13:18:55 -05:00
Mason Daugherty
3f66f102d2 chore: update issue template xref url (#33818) 2025-11-04 13:17:42 -05:00
Mason Daugherty
c6547f58b7 style(standard-tests): refs pass (#33814) 2025-11-04 00:01:16 -05:00
Mason Daugherty
dfb05a7fa0 style: refs pass (#33813) 2025-11-03 22:11:10 -05:00
ccurme
2f67f9ddcb release(huggingface): 1.0.1 (#33803) 2025-11-03 14:49:52 -05:00
Hyejeong Jo
0e36185933 fix(huggingface): add stream_usage support for ChatHuggingFace invoke/stream (#32708) 2025-11-03 14:44:32 -05:00
Michael Li
6617865440 fix(core): add no colors check (#33780)
Patch edge case in get_color_mapping
2025-11-03 13:23:23 -05:00
ccurme
6dba4912be release(model-profiles): 0.0.3 (#33798) 2025-11-03 11:17:08 -05:00
ccurme
7a3827471b fix(model-profiles): fix pdf_inputs field (#33797) 2025-11-03 11:10:33 -05:00
ccurme
f006bc4c7e feat(langchain): add model-profiles as optional dependency (#33794) 2025-11-03 10:13:58 -05:00
Mason Daugherty
0a442644e3 test(anthropic): add vcr to test_search_result_tool_message (#33793)
To fix nondeterministic results causing integration testing to sometimes
fail

Also speeds up the test from 10s to 0.5s

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-11-03 15:13:30 +00:00
repeat-Q
4960663546 docs: add Code of Conduct link to README (#33782)
**Description:** Add link to Code of Conduct in the Additional resources
section to make community guidelines more accessible for all
contributors.

**Rationale:** 
- **Community Health:** Making the Code of Conduct easily discoverable
helps set clear expectations for community behavior and fosters a more
inclusive, respectful environment
- **New Contributor Experience:** Many new contributors look to the
README as the primary source of project information. Having the Code of
Conduct readily available helps onboard them properly
- **Best Practices:** Prominent Code of Conduct links are considered a
best practice in open source projects and improve project accessibility
- **Low Impact:** This is a simple, non-breaking change that
significantly improves documentation completeness

**Issue:** N/A

**Dependencies:** None
2025-11-03 09:50:47 -05:00
ccurme
1381137c37 release(standard-tests): 1.0.1 (#33792) 2025-11-03 09:46:39 -05:00
ccurme
b4a042dfc4 release(core): 1.0.3 (#33768) 2025-11-03 09:19:32 -05:00
ccurme
81c4f21b52 fix(standard-tests): update multimodal tests (#33781) 2025-11-01 16:38:20 -04:00
Mason Daugherty
f2dab562a8 style: misc refs work (#33771) 2025-10-31 18:29:53 -04:00
ccurme
61196a8280 release(openai): 1.0.2 (#33769) 2025-10-31 14:21:32 -04:00
ccurme
7a97c31ac0 release(model-profiles): 0.0.2 (#33767) 2025-10-31 13:58:04 -04:00
ccurme
424214041e feat(model-profiles): support more providers (#33766) 2025-10-31 13:48:56 -04:00
ccurme
b06bd6a913 fix(model-profiles): add typing-extensions as explicit dep (#33762) 2025-10-31 11:21:55 -04:00
ccurme
1c762187e8 fix(model-profiles): remove langchain-core as a dependency (#33761) 2025-10-31 11:04:14 -04:00
Mason Daugherty
90aefc607f docs(core): improve tools module docstrings (#33755)
styling in `base.py`, content updates in
`libs/core/langchain_core/tools/convert.py`
2025-10-31 10:54:30 -04:00
ccurme
2ca73c479b fix(infra): fix release workflow for new packages (#33760) 2025-10-31 10:38:38 -04:00
ccurme
17c7c273b8 fix(infra): fix release workflow for new packages (#33759) 2025-10-31 10:21:12 -04:00
ccurme
493be259c3 feat(core): mint langchain-model-profiles and add profile property to BaseChatModel (#33728) 2025-10-31 09:44:46 -04:00
Mason Daugherty
106c6ac273 revert: "chore: skip anthropic tests while waiting on new anthropic release" (#33753)
Reverts langchain-ai/langchain#33739
2025-10-30 16:37:12 -04:00
Mason Daugherty
7aaaa371e7 release(anthropic): 1.0.1 (#33752) 2025-10-30 16:19:44 -04:00
Mason Daugherty
468dad1780 chore: use model IDs, latest anthropic models (#33747)
- standardize on using model IDs, no more aliases - makes future
maintenance easier
- use latest models in docstrings to highlight support
- remove remaining sonnet 3-7 usage due to deprecation

Depends on #33751
2025-10-30 16:13:28 -04:00
Mason Daugherty
32d294b89a fix(anthropic): clean up tests, update default model to use ID (#33751)
- use latest models in examples to highlight support
- standardize on using IDs in examples - no more aliases to improve
determinism in future tests
- bump lock
- in integration tests, fix stale cassettes and use `MODEL_NAME`
uniformly where possible
- add case for default max tokens for sonnet-4-5 (was missing)
2025-10-30 16:08:18 -04:00
Mason Daugherty
dc5b7dace8 test(openai): mark tests flaky (#33750)
see:
https://github.com/langchain-ai/langchain/actions/runs/18921929210/job/54020065079#step:10:560
2025-10-30 16:07:58 -04:00
Mason Daugherty
e00b7233cf chore(langchain): fix lint_imports paths (#33749) 2025-10-30 16:06:08 -04:00
Mason Daugherty
91f7e73c27 fix(langchain): use system_prompt in integration tests (#33748) 2025-10-30 16:05:57 -04:00
Shagun Gupta
75fff151e8 fix(openai): replace pytest.warns(None) with warnings.catch_warnings in ChatOpenAI test to resolve TypeError . Resolves issue #33705 (#33741) 2025-10-30 09:22:34 -04:00
Sydney Runkle
d05a0cb80d chore: skip anthropic tests while waiting on new anthropic release (#33739)
like https://github.com/langchain-ai/langchain/pull/33312/files

temporarily skip while waiting on new anthropic release

dependent on https://github.com/langchain-ai/langchain/pull/33737
2025-10-29 16:10:42 -07:00
Sydney Runkle
d24aa69ceb chore: don't pick up alphas for testing (#33738)
reverting change made in
eaa6dcce9e
2025-10-29 16:04:57 -07:00
Sydney Runkle
fabcacc3e5 chore: remove mentions of sonnet 3.5 (#33737)
see
https://docs.claude.com/en/docs/about-claude/model-deprecations#2025-08-13%3A-claude-sonnet-3-5-models
2025-10-29 15:49:27 -07:00
Christian Bromann
ac58d75113 fix(langchain_v1): remove thread_model_call_count and run_model_call_count from tool node test (#33725)
While working on ToolRuntime in TS I discovered that Python still uses
`thread_model_call_count` and `run_model_call_count` in ToolNode tests,
which AFAIK we removed.
2025-10-29 15:36:18 -07:00
Sydney Runkle
28564ef94e release: core 1.0.2 and langchain 1.0.3 (#33736) 2025-10-29 15:30:17 -07:00
Christian Bromann
b62a9b57f3 fix(langchain_v1): removed unused functions in tool_call_limit middleware (#33735)
These functions seem unused and can be removed.
2025-10-29 15:21:38 -07:00
Sydney Runkle
76dd656f2a fix: filter out injected args from tracing (#33729)
This is CC-generated and I want to do a thorough review + update the
tests, but should be able to ship today.

before eek

<img width="637" height="485" alt="Screenshot 2025-10-29 at 12 34 52 PM"
src="https://github.com/user-attachments/assets/121def87-fb7b-4847-b9e2-74f37b3b4763"
/>

now, woo

<img width="651" height="158" alt="Screenshot 2025-10-29 at 12 36 09 PM"
src="https://github.com/user-attachments/assets/1fc0e19e-a83f-417c-81e2-3aa0028630d6"
/>
2025-10-29 22:20:53 +00:00
ccurme
d218936763 fix(openai): update model used in test (#33733)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-29 17:09:18 -04:00
Mason Daugherty
123e29dc26 style: more refs fixes (#33730) 2025-10-29 16:34:46 -04:00
Sydney Runkle
6a1dca113e chore: move ToolNode improvements back to langgraph (#33634)
Moving all `ToolNode`-related improvements back to LangGraph and
importing them in LC!
Pairing w/ https://github.com/langchain-ai/langgraph/pull/6321

this fixes a couple of things:
1. `InjectedState`, store etc will continue to work as expected no
matter where the import is from
2. `ToolRuntime` is now usable w/in langgraph, woohoo! (see the sketch below)
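
A hedged sketch of `ToolRuntime` injection (assuming the v1
`langchain.tools` exports):

```python
from langchain.tools import ToolRuntime, tool


@tool
def whoami(runtime: ToolRuntime) -> str:
    """Return the id of the current tool call."""
    # `runtime` is injected by ToolNode and hidden from the model's schema.
    return runtime.tool_call_id
```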
2025-10-29 11:44:23 -07:00
Sydney Runkle
8aea6dd23a feat: support structured output retry middleware (#33663)
* attach the latest `AIMessage` to all `StructuredOutputError`s so that
relevant middleware can use it as desired
* raise `StructuredOutputError` from `ProviderStrategy` logic in case of
failed parsing (so that we can retry from middleware)
* added a test suite w/ example custom middleware that retries for tool
+ provider strategy

Long term, we could add our own opinionated structured output retry
middleware, but this at least unblocks folks who want to use custom
retry logic in the short term :)

```py
class StructuredOutputRetryMiddleware(AgentMiddleware):
    """Retries model calls when structured output parsing fails."""

    def __init__(self, max_retries: int) -> None:
        self.max_retries = max_retries

    def wrap_model_call(
        self, request: ModelRequest, handler: Callable[[ModelRequest], ModelResponse]
    ) -> ModelResponse:
        for attempt in range(self.max_retries + 1):
            try:
                return handler(request)
            except StructuredOutputError as exc:
                if attempt == self.max_retries:
                    raise

                ai_content = exc.ai_message.content
                error_message = (
                    f"Your previous response was:\n{ai_content}\n\n"
                    f"Error: {exc}. Please try again with a valid response."
                )
                request.messages.append(HumanMessage(content=error_message))
```
2025-10-29 08:41:44 -07:00
Vincent Koc
78a2f86f70 fix(core): improve JSON get_format_instructions using Opik Agent Optimizer (#33718) 2025-10-29 11:05:24 -04:00
Mason Daugherty
b5e23e5823 fix(langchain_v1): correct ref url (#33715) 2025-10-28 23:29:19 -04:00
Mason Daugherty
7872643910 chore(standard-tests): Update API reference link in README (#33714) 2025-10-28 23:29:02 -04:00
Mason Daugherty
f15391f4fc chore(text-splitters): API reference link in README (#33713) 2025-10-28 23:28:48 -04:00
Mason Daugherty
ca9b81cc2e chore(infra): update README (#33712)
Updated the README to clarify LangChain's focus on building agents and
LLM-powered applications. Added a section for community discussions and
refined the ecosystem description.
2025-10-28 23:22:18 -04:00
Mason Daugherty
a2a9a02ecb style(core): more cleanup all around (#33711) 2025-10-28 22:58:19 -04:00
Mason Daugherty
e5e1d6c705 style: more refs work (#33707) 2025-10-28 14:43:28 -04:00
dependabot[bot]
6ee19473ba chore(infra): bump actions/download-artifact from 5 to 6 (#33682)
Bumps
[actions/download-artifact](https://github.com/actions/download-artifact)
from 5 to 6. Per the release notes, v6.0.0 supports Node v24.x (treated as
a breaking change) and bumps `@actions/artifact` to `v4.0.0`.

Full changelog:
https://github.com/actions/download-artifact/compare/v5...v6.0.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-28 14:07:16 -04:00
dependabot[bot]
a59551f3b4 chore(infra): bump actions/upload-artifact from 4 to 5 (#33681)
Bumps
[actions/upload-artifact](https://github.com/actions/upload-artifact)
from 4 to 5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's
releases</a>.</em></p>
<blockquote>
<h2>v5.0.0</h2>
<h2>What's Changed</h2>
<p><strong>BREAKING CHANGE:</strong> this update supports Node
<code>v24.x</code>. This is not a breaking change per-se but we're
treating it as such.</p>
<ul>
<li>Update README.md by <a
href="https://github.com/GhadimiR"><code>@​GhadimiR</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/681">actions/upload-artifact#681</a></li>
<li>Update README.md by <a
href="https://github.com/nebuk89"><code>@​nebuk89</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/712">actions/upload-artifact#712</a></li>
<li>Readme: spell out the first use of GHES by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/upload-artifact/pull/727">actions/upload-artifact#727</a></li>
<li>Update GHES guidance to include reference to Node 20 version by <a
href="https://github.com/patrikpolyak"><code>@​patrikpolyak</code></a>
in <a
href="https://redirect.github.com/actions/upload-artifact/pull/725">actions/upload-artifact#725</a></li>
<li>Bump <code>@actions/artifact</code> to <code>v4.0.0</code></li>
<li>Prepare <code>v5.0.0</code> by <a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a> in
<a
href="https://redirect.github.com/actions/upload-artifact/pull/734">actions/upload-artifact#734</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/GhadimiR"><code>@​GhadimiR</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/681">actions/upload-artifact#681</a></li>
<li><a href="https://github.com/nebuk89"><code>@​nebuk89</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/712">actions/upload-artifact#712</a></li>
<li><a
href="https://github.com/danwkennedy"><code>@​danwkennedy</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/727">actions/upload-artifact#727</a></li>
<li><a
href="https://github.com/patrikpolyak"><code>@​patrikpolyak</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/725">actions/upload-artifact#725</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v4...v5.0.0">https://github.com/actions/upload-artifact/compare/v4...v5.0.0</a></p>
<h2>v4.6.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Update to use artifact 2.3.2 package &amp; prepare for new
upload-artifact release by <a
href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/685">actions/upload-artifact#685</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/salmanmkc"><code>@​salmanmkc</code></a>
made their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/685">actions/upload-artifact#685</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v4...v4.6.2">https://github.com/actions/upload-artifact/compare/v4...v4.6.2</a></p>
<h2>v4.6.1</h2>
<h2>What's Changed</h2>
<ul>
<li>Update to use artifact 2.2.2 package by <a
href="https://github.com/yacaovsnc"><code>@​yacaovsnc</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/673">actions/upload-artifact#673</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v4...v4.6.1">https://github.com/actions/upload-artifact/compare/v4...v4.6.1</a></p>
<h2>v4.6.0</h2>
<h2>What's Changed</h2>
<ul>
<li>Expose env vars to control concurrency and timeout by <a
href="https://github.com/yacaovsnc"><code>@​yacaovsnc</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/662">actions/upload-artifact#662</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a
href="https://github.com/actions/upload-artifact/compare/v4...v4.6.0">https://github.com/actions/upload-artifact/compare/v4...v4.6.0</a></p>
<h2>v4.5.0</h2>
<h2>What's Changed</h2>
<ul>
<li>fix: deprecated <code>Node.js</code> version in action by <a
href="https://github.com/hamirmahal"><code>@​hamirmahal</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/578">actions/upload-artifact#578</a></li>
<li>Add new <code>artifact-digest</code> output by <a
href="https://github.com/bdehamer"><code>@​bdehamer</code></a> in <a
href="https://redirect.github.com/actions/upload-artifact/pull/656">actions/upload-artifact#656</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a
href="https://github.com/hamirmahal"><code>@​hamirmahal</code></a> made
their first contribution in <a
href="https://redirect.github.com/actions/upload-artifact/pull/578">actions/upload-artifact#578</a></li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="330a01c490"><code>330a01c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/734">#734</a>
from actions/danwkennedy/prepare-5.0.0</li>
<li><a
href="03f2824452"><code>03f2824</code></a>
Update <code>github.dep.yml</code></li>
<li><a
href="905a1ecb59"><code>905a1ec</code></a>
Prepare <code>v5.0.0</code></li>
<li><a
href="2d9f9cdfa9"><code>2d9f9cd</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/725">#725</a>
from patrikpolyak/patch-1</li>
<li><a
href="9687587dec"><code>9687587</code></a>
Merge branch 'main' into patch-1</li>
<li><a
href="2848b2cda0"><code>2848b2c</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/727">#727</a>
from danwkennedy/patch-1</li>
<li><a
href="9b511775fd"><code>9b51177</code></a>
Spell out the first use of GHES</li>
<li><a
href="cd231ca1ed"><code>cd231ca</code></a>
Update GHES guidance to include reference to Node 20 version</li>
<li><a
href="de65e23aa2"><code>de65e23</code></a>
Merge pull request <a
href="https://redirect.github.com/actions/upload-artifact/issues/712">#712</a>
from actions/nebuk89-patch-1</li>
<li><a
href="8747d8cd76"><code>8747d8c</code></a>
Update README.md</li>
<li>Additional commits viewable in <a
href="https://github.com/actions/upload-artifact/compare/v4...v5">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=4&new-version=5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)


</details>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-28 14:07:03 -04:00
ccurme
3286a98b27 fix(core): translate Google GenAI text blocks to v1 (#33699) 2025-10-28 09:53:01 -04:00
Mason Daugherty
62769a0dac feat(langchain): export UsageMetadata (#33692)
as well as `InputTokenDetails` and `OutputTokenDetails` from
`langchain_core.messages`
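
A minimal sketch of what the new exports enable, assuming the names are
re-exported under `langchain.messages` (the types themselves originate in
`langchain_core.messages`):

```py
# Hedged sketch, not canonical usage: construct an AIMessage carrying
# usage metadata and read the token counts back off it.
from langchain.messages import AIMessage, UsageMetadata

usage = UsageMetadata(input_tokens=10, output_tokens=5, total_tokens=15)
msg = AIMessage(content="hi", usage_metadata=usage)
print(msg.usage_metadata["total_tokens"])  # 15
```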
2025-10-27 19:47:41 -04:00
Mason Daugherty
f94108b4bc fix: links (#33691)
* X-ref to new docs
* Formatting updates
2025-10-27 19:04:29 -04:00
ccurme
60a0ff8217 fix(standard-tests): fix tool description in agent loop test (#33690) 2025-10-27 15:02:13 -04:00
Christophe Bornet
b3dffc70e2 fix(core): fix PydanticOutputParser's get_format_instructions for v1 models (#32479) 2025-10-27 13:44:20 -04:00
Arun Prasad
86ac39e11f refactor(core): Minor refactor for code readability (#33674) 2025-10-27 11:39:36 -04:00
John Eismeier
6e036d38b2 fix(infra): add emacs backup files to gitignore (#33675) 2025-10-27 11:26:47 -04:00
Shanto Mathew
2d30ebb53b docs(langchain): clarify create_tool_calling_agent system_prompt formatting and add troubleshooting (#33679) 2025-10-27 11:18:10 -04:00
Arun Prasad
b3934b9580 refactor(anthropic): remove unnecessary url check (#33671)
if "url" in annotation: in Line 15 , already ensures "url" is key in
annotation , so no need to check again to set "url" key in out object
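
A hedged before/after sketch of the simplification (variable names are
placeholders, not the actual langchain-anthropic source):

```py
annotation = {"url": "https://example.com", "title": "Example"}
out = {}

# Before: a redundant second membership test inside the guarded branch.
if "url" in annotation:
    out["url"] = annotation["url"] if "url" in annotation else None

# After: the outer guard already proves the key exists.
if "url" in annotation:
    out["url"] = annotation["url"]
```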

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-27 11:13:54 -04:00
Mason Daugherty
09102a634a fix: update some links (#33686) 2025-10-27 11:12:11 -04:00
ccurme
95ff5901a1 chore(anthropic): update integration test cassette (#33685) 2025-10-27 10:43:36 -04:00
Mason Daugherty
f3d7152074 style(core): more refs work (#33664) 2025-10-24 16:06:24 -04:00
Christophe Bornet
dff37f6048 fix(nomic): support Python 3.14 (#33655)
PyArrow just published Python 3.14 binaries.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-24 13:32:07 -04:00
ccurme
832036ef0f chore(infra): remove openai from langchain-core release test matrix (#33661) 2025-10-24 11:55:33 -04:00
ccurme
f1742954ab fix(core): make handling of schemas more defensive (#33660) 2025-10-24 11:10:06 -04:00
ccurme
6ab0476676 fix(openai): update test (#33659) 2025-10-24 11:04:33 -04:00
ccurme
d36413c821 release(mistralai): 1.0.1 (#33657) 2025-10-24 09:50:23 -04:00
Romi45
99097f799c fix(mistralai): resolve duplicate tool calls when converting to mistral chat message (#33648) 2025-10-24 09:40:31 -04:00
Mohammad Mohtashim
0666571519 chore(perplexity): Added all keys for usage metadata (#33480) 2025-10-24 09:32:35 -04:00
ccurme
ef85161525 release(core): 1.0.1 (#33639) 2025-10-22 14:25:21 -04:00
ccurme
079eb808f8 release(qdrant): 1.1.0 (#33638) 2025-10-22 13:24:36 -04:00
Anush
39fb2d1a3b feat(qdrant): Use Qdrant's built-in MMR search (#32302) 2025-10-22 13:19:32 -04:00
Mason Daugherty
db7f2db1ae feat(infra): langchain docs MCP (#33636) 2025-10-22 11:50:35 -04:00
Yu Zhong
df46c82ae2 feat(core): automatic set required to include all properties in strict mode (#32930) 2025-10-22 11:31:08 -04:00
Eugene Yurtsev
f8adbbc461 chore(langchain_v1): bump version from 1.0.1 to 1.0.2 (#33629)
Release 1.0.2
2025-10-21 17:05:51 -04:00
Eugene Yurtsev
17f0716d6c fix(langchain_v1): remove non llm controllable params from tool message on invocation failure (#33625)
The LLM shouldn't see parameters it cannot control in the ToolMessage
error it gets when it invokes a tool with incorrect args.

This fixes the behavior within langchain to address the immediate issue.

We may want to change the behavior in langchain_core as well, to prevent
validation of injected arguments, but that would be done in a separate
change.
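
A hedged illustration of the behavior (the tool is made up;
`InjectedToolCallId` and `tool_call_schema` are from `langchain_core` as I
understand them):

```py
from typing import Annotated

from langchain_core.tools import InjectedToolCallId, tool

@tool
def lookup(query: str, tool_call_id: Annotated[str, InjectedToolCallId]) -> str:
    """Look up a record by query string."""
    return f"result for {query} (call {tool_call_id})"

# The model-facing schema hides the injected parameter, and, per this fix,
# a validation error for a bad `query` should no longer surface
# `tool_call_id` in the ToolMessage the model sees.
print(lookup.tool_call_schema.model_json_schema()["properties"].keys())  # only 'query'
```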
2025-10-21 15:40:30 -04:00
Ali Ismail
5acd34ae92 feat(openai): add unit test for streaming error in _generate (#33134) 2025-10-21 15:08:37 -04:00
Aaron Sequeira
84dbebac4f fix(langchain): correctly initialize huggingface models in init_chat_model (#33167) 2025-10-21 14:21:46 -04:00
Mohammad Mohtashim
eddfcd2c88 docs(core): Updated docs for mustache_template_vars (#33481) 2025-10-21 13:01:25 -04:00
noeliecherrier
9f470d297f feat(mistralai): remove tenacity retries for embeddings (#33491) 2025-10-21 12:35:10 -04:00
ccurme
2222470f69 release(openai): 1.0.1 (#33624) 2025-10-21 11:37:47 -04:00
Marlene
78175fcb96 feat(openai): add callable support for openai_api_key parameter (#33532) 2025-10-21 11:16:02 -04:00
Mason Daugherty
d9e659ca4f style: even more refs work (#33619) 2025-10-21 01:09:52 -04:00
Mason Daugherty
e731ba1e47 style: more refs work (#33616) 2025-10-20 18:40:19 -04:00
Cole Murray
557fc9a817 fix(infra): harden pydantic test workflow against command injection (#33446) 2025-10-20 10:35:48 -04:00
Christophe Bornet
965dac74e5 chore(infra): test pydantic with python 3.12 (#33421) 2025-10-20 10:28:41 -04:00
Sydney Runkle
7d7a50d4cc release(langchain_v1): 1.0.1 (#33610) 2025-10-20 13:03:16 +00:00
Sydney Runkle
9319eecaba fix(langchain_v1): ToolRuntime default for args (#33606)
Added some noqas; this is a quick patch to work around a bug uncovered in
the quickstart. Will resolve fully depending on where we centralize
ToolNode stuff.
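
For context, a hedged sketch of the injection pattern involved (import path
and attribute names assumed from the v1 docs; treat them as assumptions):

```py
# Sketch only: the framework, not the model, supplies `runtime` at invocation
# time; the fix above concerns how a default is provided for this parameter.
from langchain.tools import ToolRuntime, tool

@tool
def current_call(runtime: ToolRuntime) -> str:
    """Report the id of the tool call being executed."""
    return f"tool_call_id={runtime.tool_call_id}"
```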
2025-10-20 08:45:50 -04:00
Mason Daugherty
a47386f6dc style: more refs polishing (#33601) 2025-10-20 00:52:52 -04:00
Mason Daugherty
aaf88c157f docs(langchain): update reference documentation to note moved embeddings modules (#33600) 2025-10-19 20:10:25 -04:00
Christophe Bornet
3dcf4ae1e9 fix(cli): support Python 3.14 (#33598)
Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-19 19:37:34 -04:00
Christophe Bornet
3391168777 ci(infra): test CodSpeed with Python 3.13 (#33599) 2025-10-19 19:33:20 -04:00
repeat-Q
28728dca9f docs: add contributing guide to README (#33490)
**Description:** Added a beginner-friendly tip to the README to help
first-time contributors find a starting point. This is a documentation
improvement aimed at lowering the barrier for newcomers to participate
in open source.

**Issue:** No related issue

**Dependencies:** None

---

## Note to maintainers

I'm new to open source and this is my first PR! If there's anything that
needs improvement, please guide me and I'll be happy to learn and make
changes. Thank you for your patience! 😊

## What does this PR do?
- Added a noticeable beginner tip box after the badges section in README
- Provided specific guidance (Good First Issues link)
- Encourages newcomers to start with documentation fixes

## Why is this change needed?
- Makes it easier for new contributors to get started
- Provides clear direction and reduces confusion
- Creates a more welcoming open source community environment

---------

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-19 00:01:21 -04:00
Christophe Bornet
1ae7fb7694 chore(langchain-classic): remove unused duckdb dependency (#33582)
* The dependency is not used.
* It takes a long time to build in Python 3.14 as there are no prebuilt
binaries yet. This slows down CI a lot.

Co-authored-by: Mason Daugherty <mason@langchain.dev>
2025-10-17 18:45:30 -04:00
Mason Daugherty
7aef3388d9 release(xai): 1.0.0 (#33591) 2025-10-17 17:42:29 -04:00
Mason Daugherty
1d056487c7 style(anthropic): use aliases for model names (#33590) 2025-10-17 21:40:22 +00:00
Mason Daugherty
64e6798a39 chore: update pyproject.toml url entries (#33587) 2025-10-17 17:16:55 -04:00
Sydney Runkle
4a65e827f7 release(langchain_v1): v1.0.0 (#33588)
waiting on langgraph bump
2025-10-17 16:49:07 -04:00
Sydney Runkle
35b89b8b10 fix: shell tool middleware (#33589)
The fact that this was broken showcases that we need significantly
better test coverage; this is literally the most minimalistic usage of
this middleware there could be 😿

will document these two gotchas better for custom middleware

```py
from langchain.agents.middleware.shell_tool import ShellToolMiddleware
from langchain.agents import create_agent

agent = create_agent(model="openai:gpt-4", middleware=[ShellToolMiddleware()])
agent.invoke({"messages": [{"role": "user", "content": "hi"}]})
```
2025-10-17 16:48:30 -04:00
Mason Daugherty
8efa75d04c fix(xai): inject model_provider in response_metadata (#33543)
plus minor test refactors
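
A hedged sketch of the fixed behavior (model name illustrative; the metadata
key is taken from the commit title, and a real `XAI_API_KEY` is needed to
actually verify it):

```py
from langchain_xai import ChatXAI

msg = ChatXAI(model="grok-4").invoke("Say hi")
print(msg.response_metadata.get("model_provider"))  # expected: "xai"
```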
2025-10-17 16:11:03 -04:00
Sydney Runkle
8fd54f13b5 feat(langchain_v1): Python 3.14 support (#33560)
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
2025-10-17 15:10:01 -04:00
ccurme
952fa8aa99 fix(langchain,langchain_v1): enable huggingface optional dep (#33586) 2025-10-17 18:42:53 +00:00
Mason Daugherty
3948273350 release(prompty): 1.0.0 (#33584) 2025-10-17 14:10:01 -04:00
Eugene Yurtsev
a16307fe84 chore(infra): change scope names (#33580)
Change scope names
2025-10-17 15:55:58 +00:00
Eugene Yurtsev
af6f2cf366 chore(langchain_legacy): bump version 1.0 (#33579)
Bump version for langchain-classic
2025-10-17 11:55:13 -04:00
Mason Daugherty
6997867f0e release(deepseek): 1.0.0 (#33581) 2025-10-17 11:52:08 -04:00
Mason Daugherty
de791bc3ef fix(deepseek): inject model_provider in response_metadata (#33544)
& slight tests rfc
2025-10-17 11:47:59 -04:00
Mason Daugherty
69c6e7de59 release(ollama): 1.0.0 (#33567) 2025-10-17 11:39:24 -04:00
Mason Daugherty
10cee59f2e release(mistralai): 1.0.0 (#33573) 2025-10-17 11:33:17 -04:00
Mason Daugherty
58f521ea4f release(fireworks): 1.0.0 (#33571) 2025-10-17 11:32:57 -04:00
Mason Daugherty
a194ae6959 release(huggingface): 1.0.0 (#33572) 2025-10-17 11:26:48 -04:00
ccurme
4d623133a5 release(openai): 1.0.0 (#33578) 2025-10-17 11:25:25 -04:00
Mason Daugherty
8fbf192c2a release(perplexity): 1.0.0 (#33576) 2025-10-17 11:18:43 -04:00
Mason Daugherty
241a382fba docs: fix Anthropic, OpenAI docstrings (#33566)
minor
2025-10-17 11:18:32 -04:00
513 changed files with 41211 additions and 17936 deletions

View File

@@ -8,16 +8,15 @@ body:
value: |
Thank you for taking the time to file a bug report.
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
Check these before submitting to see if your issue has already been reported, fixed or if there's another way to solve your problem:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
@@ -36,16 +35,48 @@ body:
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this bug related to? Select at least one.
Note that if the package you are reporting for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in [`langchain-ai/langchain-google`](https://github.com/langchain-ai/langchain-google/)).
Please report issues for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code
label: Example Code (Python)
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
@@ -53,15 +84,12 @@ body:
**Important!**
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
* Avoid screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible.
(This will be automatically formatted into code, so no need for backticks.)
render: python
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
@@ -69,17 +97,14 @@ body:
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
If you are reporting an error, please copy and paste the full error message and
stack trace.
(This will be automatically formatted into code, so no need for backticks.)
render: shell
- type: textarea
id: description
attributes:
@@ -99,9 +124,7 @@ body:
attributes:
label: System Info
description: |
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Please share your system info with us.
Run the following command in your terminal and paste the output here:
@@ -113,8 +136,6 @@ body:
from langchain_core import sys_info
sys_info.print_sys_info()
```
alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info
validations:

View File

@@ -1,9 +1,18 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
- name: 📚 Documentation issue
url: https://github.com/langchain-ai/docs/issues/new?template=01-langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support
- name: 📚 LangChain Documentation
url: https://docs.langchain.com/oss/python/langchain/overview
about: View the official LangChain documentation
- name: 📚 API Reference Documentation
url: https://reference.langchain.com/python/
about: View the official LangChain API reference documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: Ask questions and get help from the community

View File

@@ -13,11 +13,11 @@ body:
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
@@ -34,6 +34,40 @@ body:
required: true
- label: This is not related to the langchain-community package.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this request related to? Select at least one.
Note that if the package you are requesting for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in `langchain-ai/langchain`).
Please submit feature requests for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: feature-description
validations:

View File

@@ -18,3 +18,33 @@ body:
attributes:
label: Issue Content
description: Add the content of the issue here.
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this issue is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

View File

@@ -25,13 +25,13 @@ body:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
@@ -43,7 +43,7 @@ body:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
@@ -58,15 +58,15 @@ body:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
@@ -77,15 +77,45 @@ body:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this task is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

View File

@@ -1,93 +0,0 @@
# An action for setting up poetry install with caching.
# Using a custom action since the default action does not
# take poetry install groups into account.
# Action code from:
# https://github.com/actions/setup-python/issues/505#issuecomment-1273013236
name: poetry-install-with-caching
description: Poetry install with support for caching of dependency groups.
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
poetry-version:
description: Poetry version
required: true
cache-key:
description: Cache key to use for manual handling of caching
required: true
working-directory:
description: Directory whose poetry.lock file should be cached
required: true
runs:
using: composite
steps:
- uses: actions/setup-python@v5
name: Setup python ${{ inputs.python-version }}
id: setup-python
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v4
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "1"
with:
path: |
/opt/pipx/venvs/poetry
# This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}
- name: Refresh shell hashtable and fixup softlinks
if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
run: |
set -eux
# Refresh the shell hashtable, to ensure correct `which` output.
hash -r
# `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
# Delete and recreate the softlinks pipx expects to have.
rm /opt/pipx/venvs/poetry/bin/python
cd /opt/pipx/venvs/poetry/bin
ln -s "$(which "python$PYTHON_VERSION")" python
chmod +x python
cd /opt/pipx_bin/
ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
chmod +x poetry
# Ensure everything got set up correctly.
/opt/pipx/venvs/poetry/bin/python --version
/opt/pipx_bin/poetry --version
- name: Install poetry
if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
with:
path: |
~/.cache/pip
~/.cache/pypoetry/virtualenvs
~/.cache/pypoetry/cache
~/.cache/pypoetry/artifacts
${{ env.WORKDIR }}/.venv
key: py-deps-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles(format('{0}/**/poetry.lock', env.WORKDIR)) }}

View File

@@ -7,13 +7,12 @@ core:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain:
langchain-classic:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
v1:
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
@@ -28,6 +27,11 @@ standard-tests:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
model-profiles:
- changed-files:
- any-glob-to-any-file:
- "libs/model-profiles/**/*"
text-splitters:
- changed-files:
- any-glob-to-any-file:
@@ -39,6 +43,81 @@ integration:
- any-glob-to-any-file:
- "libs/partners/**/*"
anthropic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/anthropic/**/*"
chroma:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/chroma/**/*"
deepseek:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/deepseek/**/*"
exa:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/exa/**/*"
fireworks:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/fireworks/**/*"
groq:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/groq/**/*"
huggingface:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/huggingface/**/*"
mistralai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/mistralai/**/*"
nomic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/nomic/**/*"
ollama:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/ollama/**/*"
openai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/openai/**/*"
perplexity:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/perplexity/**/*"
prompty:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/prompty/**/*"
qdrant:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/qdrant/**/*"
xai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/xai/**/*"
# Infrastructure and DevOps
infra:
- changed-files:

View File

@@ -1,41 +0,0 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]

View File

@@ -30,6 +30,7 @@ LANGCHAIN_DIRS = [
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
"libs/model-profiles",
]
# When set to True, we are ignoring core dependents
@@ -130,21 +131,12 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
return _get_pydantic_test_configs(dir_)
if job == "codspeed":
py_versions = ["3.12"] # 3.13 is not yet supported
py_versions = ["3.13"]
elif dir_ == "libs/core":
py_versions = ["3.10", "3.11", "3.12", "3.13", "3.14"]
# custom logic for specific directories
elif dir_ == "libs/langchain" and job == "extended-tests":
py_versions = ["3.10", "3.14"]
elif dir_ == "libs/langchain_v1":
elif dir_ in {"libs/partners/chroma"}:
py_versions = ["3.10", "3.13"]
elif dir_ in {"libs/cli", "libs/partners/chroma", "libs/partners/nomic"}:
py_versions = ["3.10", "3.13"]
elif dir_ == ".":
# unable to install with 3.13 because tokenizers doesn't support 3.13 yet
py_versions = ["3.10", "3.12"]
else:
py_versions = ["3.10", "3.14"]
@@ -152,7 +144,7 @@ def _get_configs_for_single_dir(job: str, dir_: str) -> List[Dict[str, str]]:
def _get_pydantic_test_configs(
dir_: str, *, python_version: str = "3.11"
dir_: str, *, python_version: str = "3.12"
) -> List[Dict[str, str]]:
with open("./libs/core/uv.lock", "rb") as f:
core_uv_lock_data = tomllib.load(f)
@@ -306,7 +298,9 @@ if __name__ == "__main__":
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip codspeed for partners without benchmarks or in IGNORED_PARTNERS
if partner_dir not in IGNORED_PARTNERS:
dirs_to_run["codspeed"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file.startswith("libs/"):
# Check if this is a root-level file in libs/ (e.g., libs/README.md)

View File

@@ -98,7 +98,7 @@ def _check_python_version_from_requirement(
return True
else:
marker_str = str(requirement.marker)
if "python_version" or "python_full_version" in marker_str:
if "python_version" in marker_str or "python_full_version" in marker_str:
python_version_str = "".join(
char
for char in marker_str
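
The hunk above fixes a classic Python precedence bug: `in` binds tighter than
`or`, so the old condition tested the truthiness of the non-empty string
`"python_version"` and therefore always passed. A quick demonstration:

```py
marker_str = "platform_system == 'Linux'"

# Old check: parses as `"python_version" or ("python_full_version" in marker_str)`;
# a non-empty string literal is always truthy, so this is True for every marker.
old = bool("python_version" or "python_full_version" in marker_str)

# Fixed check: each substring is tested against the marker explicitly.
new = "python_version" in marker_str or "python_full_version" in marker_str

print(old, new)  # True False
```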

View File

@@ -77,7 +77,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -149,8 +149,8 @@ jobs:
fi
fi
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
# if PREV_TAG is empty or came out to 0.0.0, let it be empty
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
@@ -179,8 +179,8 @@ jobs:
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
# if PREV_TAG is empty or 0.0.0, then we are releasing the first version
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
@@ -208,7 +208,7 @@ jobs:
steps:
- uses: actions/checkout@v5
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -258,7 +258,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -377,6 +377,7 @@ jobs:
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
@@ -395,7 +396,7 @@ jobs:
contents: read
strategy:
matrix:
partner: [openai, anthropic]
partner: [anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
@@ -409,6 +410,7 @@ jobs:
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
steps:
- uses: actions/checkout@v5
@@ -428,7 +430,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
@@ -442,7 +444,7 @@ jobs:
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+([a-zA-Z]+[0-9]+)?$' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| sort -Vr \
| head -n 1
)"
@@ -497,7 +499,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -537,7 +539,7 @@ jobs:
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v5
- uses: actions/download-artifact@v6
with:
name: dist
path: ${{ inputs.working-directory }}/dist/

View File

@@ -13,7 +13,7 @@ on:
required: false
type: string
description: "Python version to use"
default: "3.11"
default: "3.12"
pydantic-version:
required: true
type: string
@@ -51,7 +51,9 @@ jobs:
- name: "🔄 Install Specific Pydantic Version"
shell: bash
run: VIRTUAL_ENV=.venv uv pip install pydantic~=${{ inputs.pydantic-version }}
env:
PYDANTIC_VERSION: ${{ inputs.pydantic-version }}
run: VIRTUAL_ENV=.venv uv pip install "pydantic~=$PYDANTIC_VERSION"
- name: "🧪 Run Core Tests"
shell: bash

View File

@@ -0,0 +1,107 @@
name: Auto Label Issues by Package
on:
issues:
types: [opened, edited]
jobs:
label-by-package:
permissions:
issues: write
runs-on: ubuntu-latest
steps:
- name: Sync package labels
uses: actions/github-script@v6
with:
script: |
const body = context.payload.issue.body || "";
// Extract text under "### Package"
const match = body.match(/### Package\s+([\s\S]*?)\n###/i);
if (!match) return;
const packageSection = match[1].trim();
// Mapping table for package names to labels
const mapping = {
"langchain": "langchain",
"langchain-openai": "openai",
"langchain-anthropic": "anthropic",
"langchain-classic": "langchain-classic",
"langchain-core": "core",
"langchain-cli": "cli",
"langchain-model-profiles": "model-profiles",
"langchain-tests": "standard-tests",
"langchain-text-splitters": "text-splitters",
"langchain-chroma": "chroma",
"langchain-deepseek": "deepseek",
"langchain-exa": "exa",
"langchain-fireworks": "fireworks",
"langchain-groq": "groq",
"langchain-huggingface": "huggingface",
"langchain-mistralai": "mistralai",
"langchain-nomic": "nomic",
"langchain-ollama": "ollama",
"langchain-perplexity": "perplexity",
"langchain-prompty": "prompty",
"langchain-qdrant": "qdrant",
"langchain-xai": "xai",
};
// All possible package labels we manage
const allPackageLabels = Object.values(mapping);
const selectedLabels = [];
// Check if this is checkbox format (multiple selection)
const checkboxMatches = packageSection.match(/- \[x\]\s+([^\n\r]+)/gi);
if (checkboxMatches) {
// Handle checkbox format
for (const match of checkboxMatches) {
const packageName = match.replace(/- \[x\]\s+/i, '').trim();
const label = mapping[packageName];
if (label && !selectedLabels.includes(label)) {
selectedLabels.push(label);
}
}
} else {
// Handle dropdown format (single selection)
const label = mapping[packageSection];
if (label) {
selectedLabels.push(label);
}
}
// Get current issue labels
const issue = await github.rest.issues.get({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
const currentLabels = issue.data.labels.map(label => label.name);
const currentPackageLabels = currentLabels.filter(label => allPackageLabels.includes(label));
// Determine labels to add and remove
const labelsToAdd = selectedLabels.filter(label => !currentPackageLabels.includes(label));
const labelsToRemove = currentPackageLabels.filter(label => !selectedLabels.includes(label));
// Add new labels
if (labelsToAdd.length > 0) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: labelsToAdd
});
}
// Remove old labels
for (const label of labelsToRemove) {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: label
});
}

View File

@@ -184,15 +184,14 @@ jobs:
steps:
- uses: actions/checkout@v5
# We have to use 3.12 as 3.13 is not yet supported
- name: "📦 Install UV Package Manager"
uses: astral-sh/setup-uv@v7
with:
python-version: "3.12"
python-version: "3.13"
- uses: actions/setup-python@v6
with:
python-version: "3.12"
python-version: "3.13"
- name: "📦 Install Test Dependencies"
run: uv sync --group test

View File

@@ -155,6 +155,7 @@ jobs:
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests

View File

@@ -26,11 +26,13 @@
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scopes (optional):
# core, cli, langchain, langchain_v1, langchain_legacy, standard-tests,
# Allowed Scope(s) (optional):
# core, cli, langchain, langchain_v1, langchain-classic, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra
# xai, infra, deps
#
# Multiple scopes can be used by separating them with a comma.
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
@@ -79,8 +81,8 @@ jobs:
core
cli
langchain
langchain_v1
langchain_legacy
langchain-classic
model-profiles
standard-tests
text-splitters
docs

2
.gitignore vendored
View File

@@ -1,6 +1,8 @@
.vs/
.claude/
.idea/
#Emacs backup
*~
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]

8
.mcp.json Normal file
View File

@@ -0,0 +1,8 @@
{
"mcpServers": {
"docs-langchain": {
"type": "http",
"url": "https://docs.langchain.com/mcp"
}
}
}

View File

@@ -1,50 +1,43 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
</p>
<div align="center">
<a href="https://www.langchain.com/">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
</a>
</div>
<p align="center">
The platform for reliable agents.
</p>
<div align="center">
<h3>The platform for reliable agents.</h3>
</div>
<p align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank">
<img src="https://img.shields.io/pypi/l/langchain" alt="PyPI - License">
</a>
<a href="https://pypistats.org/packages/langchain" target="_blank">
<img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
</a>
<a href="https://pypi.org/project/langchain/#history" target="_blank">
<img src="https://img.shields.io/pypi/v/langchain?label=%20" alt="Version">
</a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers">
</a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
</a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
</a>
<a href="https://twitter.com/langchainai" target="_blank">
<img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
</a>
</p>
<div align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank"><img src="https://img.shields.io/pypi/l/langchain" alt="PyPI - License"></a>
<a href="https://pypistats.org/packages/langchain" target="_blank"><img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads"></a>
<a href="https://pypi.org/project/langchain/#history" target="_blank"><img src="https://img.shields.io/pypi/v/langchain?label=%20" alt="Version"></a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank"><img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20"></a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge"></a>
<a href="https://twitter.com/langchainai" target="_blank"><img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X"></a>
</div>
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development all while future-proofing decisions as the underlying technology evolves.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development all while future-proofing decisions as the underlying technology evolves.
```bash
pip install langchain
```
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our framework for building controllable agent workflows.
---
**Documentation**: To learn more about LangChain, check out [the docs](https://docs.langchain.com/oss/python/langchain/overview).
**Documentation**:
If you're looking for more advanced customization or agent orchestration, check out [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview), our framework for building controllable agent workflows.
- [docs.langchain.com](https://docs.langchain.com/oss/python/langchain/overview) - Comprehensive documentation, including conceptual overviews and guides
- [reference.langchain.com/python](https://reference.langchain.com/python) - API reference docs for LangChain packages
**Discussions**: Visit the [LangChain Forum](https://forum.langchain.com) to connect with the community and share all of your technical questions, ideas, and feedback.
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
@@ -55,23 +48,27 @@ LangChain helps developers build applications powered by LLMs through a standard
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly; LangChain's abstractions keep you moving without losing momentum.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly; LangChain's abstractions keep you moving without losing momentum.
- **Rapid prototyping**. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
- **Production-ready features**. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
- **Vibrant community and ecosystem**. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
- **Flexible abstraction layers**. Work at the level of abstraction that suits your needs - from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.
## LangChain's ecosystem
## LangChain ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangGraph Platform](https://docs.langchain.com/langgraph-platform) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio).
- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows, and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [Integrations](https://docs.langchain.com/oss/python/integrations/providers/overview) - List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams, and iterate quickly with visual prototyping in [LangSmith Studio](https://docs.langchain.com/langsmith/studio).
- [Deep Agents](https://github.com/langchain-ai/deepagents) *(new!)* - Build agents that can plan, use subagents, and leverage file systems for complex tasks
## Additional resources
- [Learn](https://docs.langchain.com/oss/python/learn): Use cases, conceptual overviews, and more.
- [API Reference](https://reference.langchain.com/python): Detailed reference on
navigating base packages and integrations for LangChain.
- [LangChain Forum](https://forum.langchain.com): Connect with the community and share all of your technical questions, ideas, and feedback.
- [Chat LangChain](https://chat.langchain.com): Ask questions & chat with our documentation.
- [API Reference](https://reference.langchain.com/python) - Detailed reference on navigating base packages and integrations for LangChain.
- [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview) - Learn how to contribute to LangChain projects and find good first issues.
- [Code of Conduct](https://github.com/langchain-ai/langchain/blob/master/.github/CODE_OF_CONDUCT.md) - Our community guidelines and standards for participation.

View File

@@ -55,10 +55,10 @@ All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
* **tools**: Tools in either `langchain` or `langchain-community` are not eligible for bug
bounties. This includes the following directories
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* `libs/langchain/langchain/tools`
* `libs/community/langchain_community/tools`
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible

View File

@@ -295,7 +295,7 @@
"source": [
"## TODO: Any functionality specific to this vector store\n",
"\n",
"E.g. creating a persisten database to save to your disk, etc."
"E.g. creating a persistent database to save to your disk, etc."
]
},
{

View File

@@ -6,9 +6,8 @@ import hashlib
import logging
import re
import shutil
from collections.abc import Sequence
from pathlib import Path
from typing import Any, TypedDict
from typing import TYPE_CHECKING, Any, TypedDict
from git import Repo
@@ -18,6 +17,9 @@ from langchain_cli.constants import (
DEFAULT_GIT_SUBDIRECTORY,
)
if TYPE_CHECKING:
from collections.abc import Sequence
logger = logging.getLogger(__name__)
@@ -182,7 +184,7 @@ def parse_dependencies(
inner_branches = _list_arg_to_length(branch, num_deps)
return list(
map( # type: ignore[call-overload]
map( # type: ignore[call-overload, unused-ignore]
parse_dependency_string,
inner_deps,
inner_repos,

View File

@@ -20,12 +20,13 @@ description = "CLI for interacting with LangChain"
readme = "README.md"
[project.urls]
homepage = "https://docs.langchain.com/"
repository = "https://github.com/langchain-ai/langchain/tree/master/libs/cli"
changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-cli%3D%3D1%22"
twitter = "https://x.com/LangChainAI"
slack = "https://www.langchain.com/join-community"
reddit = "https://www.reddit.com/r/LangChain/"
Homepage = "https://docs.langchain.com/"
Documentation = "https://docs.langchain.com/"
Source = "https://github.com/langchain-ai/langchain/tree/master/libs/cli"
Changelog = "https://github.com/langchain-ai/langchain/releases?q=%22langchain-cli%3D%3D1%22"
Twitter = "https://x.com/LangChainAI"
Slack = "https://www.langchain.com/join-community"
Reddit = "https://www.reddit.com/r/LangChain/"
[project.scripts]
langchain = "langchain_cli.cli:app"
@@ -42,14 +43,14 @@ lint = [
]
test = [
"langchain-core",
"langchain"
"langchain-classic"
]
typing = ["langchain"]
typing = ["langchain-classic"]
test_integration = []
[tool.uv.sources]
langchain-core = { path = "../core", editable = true }
langchain = { path = "../langchain", editable = true }
langchain-classic = { path = "../langchain", editable = true }
[tool.ruff.format]
docstring-code-format = true

View File

@@ -1,9 +1,11 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING
from .file import File
from .folder import Folder
if TYPE_CHECKING:
from .file import File
from .folder import Folder
@dataclass

View File

@@ -1,9 +1,12 @@
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
from .file import File
if TYPE_CHECKING:
from pathlib import Path
class Folder:
def __init__(self, name: str, *files: Folder | File) -> None:

View File

@@ -1,5 +1,5 @@
import pytest
from langchain._api import suppress_langchain_deprecation_warning as sup2
from langchain_classic._api import suppress_langchain_deprecation_warning as sup2
from langchain_core._api import suppress_langchain_deprecation_warning as sup1
from langchain_cli.namespaces.migrate.generate.generic import (

466
libs/cli/uv.lock generated
View File

@@ -327,7 +327,21 @@ wheels = [
[[package]]
name = "langchain"
version = "0.3.27"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "langgraph" },
{ name = "pydantic" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/b8/36078257ba52351608129ee983079a4d77ee69eb1470ee248cd8f5728a31/langchain-1.0.0.tar.gz", hash = "sha256:56bf90d935ac1dda864519372d195ca58757b755dd4c44b87840b67d069085b7", size = 466932, upload-time = "2025-10-17T20:53:20.319Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c4/4d/2758a16ad01716c0fb3fe9ec205fd530eae4528b35a27ff44837c399e032/langchain-1.0.0-py3-none-any.whl", hash = "sha256:8c95e41250fc86d09a978fbdf999f86c18d50a28a2addc5da88546af00a1ad15", size = 106202, upload-time = "2025-10-17T20:53:18.685Z" },
]
[[package]]
name = "langchain-classic"
version = "1.0.0"
source = { editable = "../langchain" }
dependencies = [
{ name = "async-timeout", marker = "python_full_version < '3.11'" },
@@ -344,20 +358,28 @@ dependencies = [
requires-dist = [
{ name = "async-timeout", marker = "python_full_version < '3.11'", specifier = ">=4.0.0,<5.0.0" },
{ name = "langchain-anthropic", marker = "extra == 'anthropic'" },
{ name = "langchain-community", marker = "extra == 'community'" },
{ name = "langchain-aws", marker = "extra == 'aws'" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-deepseek", marker = "extra == 'deepseek'" },
{ name = "langchain-fireworks", marker = "extra == 'fireworks'" },
{ name = "langchain-google-genai", marker = "extra == 'google-genai'" },
{ name = "langchain-google-vertexai", marker = "extra == 'google-vertexai'" },
{ name = "langchain-groq", marker = "extra == 'groq'" },
{ name = "langchain-huggingface", marker = "extra == 'huggingface'" },
{ name = "langchain-mistralai", marker = "extra == 'mistralai'" },
{ name = "langchain-ollama", marker = "extra == 'ollama'" },
{ name = "langchain-openai", marker = "extra == 'openai'", editable = "../partners/openai" },
{ name = "langchain-perplexity", marker = "extra == 'perplexity'" },
{ name = "langchain-text-splitters", editable = "../text-splitters" },
{ name = "langchain-together", marker = "extra == 'together'" },
{ name = "langchain-xai", marker = "extra == 'xai'" },
{ name = "langsmith", specifier = ">=0.1.17,<1.0.0" },
{ name = "pydantic", specifier = ">=2.7.4,<3.0.0" },
{ name = "pyyaml", specifier = ">=5.3.0,<7.0.0" },
{ name = "requests", specifier = ">=2.0.0,<3.0.0" },
{ name = "sqlalchemy", specifier = ">=1.4.0,<3.0.0" },
]
provides-extras = ["community", "anthropic", "openai", "google-vertexai", "google-genai", "together"]
provides-extras = ["anthropic", "openai", "google-vertexai", "google-genai", "fireworks", "ollama", "together", "mistralai", "huggingface", "groq", "aws", "deepseek", "xai", "perplexity"]
[package.metadata.requires-dev]
dev = [
@@ -376,7 +398,6 @@ test = [
{ name = "blockbuster", specifier = ">=1.5.18,<1.6.0" },
{ name = "cffi", marker = "python_full_version < '3.10'", specifier = "<1.17.1" },
{ name = "cffi", marker = "python_full_version >= '3.10'" },
{ name = "duckdb-engine", specifier = ">=0.9.2,<1.0.0" },
{ name = "freezegun", specifier = ">=1.2.2,<2.0.0" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-openai", editable = "../partners/openai" },
@@ -411,9 +432,10 @@ test-integration = [
{ name = "wrapt", specifier = ">=1.15.0,<2.0.0" },
]
typing = [
{ name = "fastapi", specifier = ">=0.116.1,<1.0.0" },
{ name = "langchain-core", editable = "../core" },
{ name = "langchain-text-splitters", editable = "../text-splitters" },
{ name = "mypy", specifier = ">=1.15.0,<1.16.0" },
{ name = "mypy", specifier = ">=1.18.2,<1.19.0" },
{ name = "mypy-protobuf", specifier = ">=3.0.0,<4.0.0" },
{ name = "numpy", marker = "python_full_version < '3.13'", specifier = ">=1.26.4" },
{ name = "numpy", marker = "python_full_version >= '3.13'", specifier = ">=2.1.0" },
@@ -448,11 +470,11 @@ lint = [
{ name = "ruff" },
]
test = [
{ name = "langchain" },
{ name = "langchain-classic" },
{ name = "langchain-core" },
]
typing = [
{ name = "langchain" },
{ name = "langchain-classic" },
]

[package.metadata]
@@ -475,15 +497,15 @@ lint = [
{ name = "ruff", specifier = ">=0.13.1,<0.14" },
]
test = [
{ name = "langchain", editable = "../langchain" },
{ name = "langchain-classic", editable = "../langchain" },
{ name = "langchain-core", editable = "../core" },
]
test-integration = []
typing = [{ name = "langchain", editable = "../langchain" }]
typing = [{ name = "langchain-classic", editable = "../langchain" }]

[[package]]
name = "langchain-core"
version = "1.0.0a6"
version = "1.0.0"
source = { editable = "../core" }
dependencies = [
{ name = "jsonpatch" },
@@ -541,7 +563,7 @@ typing = [
[[package]]
name = "langchain-text-splitters"
version = "1.0.0a1"
version = "1.0.0"
source = { editable = "../text-splitters" }
dependencies = [
{ name = "langchain-core" },
@@ -574,8 +596,8 @@ test-integration = [
{ name = "nltk", specifier = ">=3.9.1,<4.0.0" },
{ name = "scipy", marker = "python_full_version == '3.12.*'", specifier = ">=1.7.0,<2.0.0" },
{ name = "scipy", marker = "python_full_version >= '3.13'", specifier = ">=1.14.1,<2.0.0" },
{ name = "sentence-transformers", specifier = ">=3.0.1,<4.0.0" },
{ name = "spacy", specifier = ">=3.8.7,<4.0.0" },
{ name = "sentence-transformers", marker = "python_full_version < '3.14'", specifier = ">=3.0.1,<4.0.0" },
{ name = "spacy", marker = "python_full_version < '3.14'", specifier = ">=3.8.7,<4.0.0" },
{ name = "thinc", specifier = ">=8.3.6,<9.0.0" },
{ name = "tiktoken", specifier = ">=0.8.0,<1.0.0" },
{ name = "transformers", specifier = ">=4.51.3,<5.0.0" },
@@ -588,6 +610,62 @@ typing = [
{ name = "types-requests", specifier = ">=2.31.0.20240218,<3.0.0.0" },
]

[[package]]
name = "langgraph"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "langgraph-checkpoint" },
{ name = "langgraph-prebuilt" },
{ name = "langgraph-sdk" },
{ name = "pydantic" },
{ name = "xxhash" },
]
sdist = { url = "https://files.pythonhosted.org/packages/57/f7/7ae10f1832ab1a6a402f451e54d6dab277e28e7d4e4204e070c7897ca71c/langgraph-1.0.0.tar.gz", hash = "sha256:5f83ed0e9bbcc37635bc49cbc9b3d9306605fa07504f955b7a871ed715f9964c", size = 472835, upload-time = "2025-10-17T20:23:38.263Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/07/42/6f6d0fe4eb661b06da8e6c59e58044e9e4221fdbffdcacae864557de961e/langgraph-1.0.0-py3-none-any.whl", hash = "sha256:4d478781832a1bc67e06c3eb571412ec47d7c57a5467d1f3775adf0e9dd4042c", size = 155416, upload-time = "2025-10-17T20:23:36.978Z" },
]

[[package]]
name = "langgraph-checkpoint"
version = "2.1.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "ormsgpack" },
]
sdist = { url = "https://files.pythonhosted.org/packages/29/83/6404f6ed23a91d7bc63d7df902d144548434237d017820ceaa8d014035f2/langgraph_checkpoint-2.1.2.tar.gz", hash = "sha256:112e9d067a6eff8937caf198421b1ffba8d9207193f14ac6f89930c1260c06f9", size = 142420, upload-time = "2025-10-07T17:45:17.129Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c4/f2/06bf5addf8ee664291e1b9ffa1f28fc9d97e59806dc7de5aea9844cbf335/langgraph_checkpoint-2.1.2-py3-none-any.whl", hash = "sha256:911ebffb069fd01775d4b5184c04aaafc2962fcdf50cf49d524cd4367c4d0c60", size = 45763, upload-time = "2025-10-07T17:45:16.19Z" },
]

[[package]]
name = "langgraph-prebuilt"
version = "1.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "langchain-core" },
{ name = "langgraph-checkpoint" },
]
sdist = { url = "https://files.pythonhosted.org/packages/02/2d/934b1129e217216a0dfaf0f7df0a10cedf2dfafe6cc8e1ee238cafaaa4a7/langgraph_prebuilt-1.0.0.tar.gz", hash = "sha256:eb75dad9aca0137451ca0395aa8541a665b3f60979480b0431d626fd195dcda2", size = 119927, upload-time = "2025-10-17T20:15:21.429Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/33/2e/ffa698eedc4c355168a9207ee598b2cc74ede92ce2b55c3469ea06978b6e/langgraph_prebuilt-1.0.0-py3-none-any.whl", hash = "sha256:ceaae4c5cee8c1f9b6468f76c114cafebb748aed0c93483b7c450e5a89de9c61", size = 28455, upload-time = "2025-10-17T20:15:20.043Z" },
]

[[package]]
name = "langgraph-sdk"
version = "0.2.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
{ name = "orjson" },
]
sdist = { url = "https://files.pythonhosted.org/packages/23/d8/40e01190a73c564a4744e29a6c902f78d34d43dad9b652a363a92a67059c/langgraph_sdk-0.2.9.tar.gz", hash = "sha256:b3bd04c6be4fa382996cd2be8fbc1e7cc94857d2bc6b6f4599a7f2a245975303", size = 99802, upload-time = "2025-09-20T18:49:14.734Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/66/05/b2d34e16638241e6f27a6946d28160d4b8b641383787646d41a3727e0896/langgraph_sdk-0.2.9-py3-none-any.whl", hash = "sha256:fbf302edadbf0fb343596f91c597794e936ef68eebc0d3e1d358b6f9f72a1429", size = 56752, upload-time = "2025-09-20T18:49:13.346Z" },
]

[[package]]
name = "langserve"
version = "0.0.51"
@@ -780,6 +858,61 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/28/01/d6b274a0635be0468d4dbd9cafe80c47105937a0d42434e805e67cd2ed8b/orjson-3.11.3-cp314-cp314-win_arm64.whl", hash = "sha256:e8f6a7a27d7b7bec81bd5924163e9af03d49bbb63013f107b48eb5d16db711bc", size = 125985, upload-time = "2025-08-26T17:46:16.67Z" },
]

[[package]]
name = "ormsgpack"
version = "1.11.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/65/f8/224c342c0e03e131aaa1a1f19aa2244e167001783a433f4eed10eedd834b/ormsgpack-1.11.0.tar.gz", hash = "sha256:7c9988e78fedba3292541eb3bb274fa63044ef4da2ddb47259ea70c05dee4206", size = 49357, upload-time = "2025-10-08T17:29:15.621Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ff/3d/6996193cb2babc47fc92456223bef7d141065357ad4204eccf313f47a7b3/ormsgpack-1.11.0-cp310-cp310-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:03d4e658dd6e1882a552ce1d13cc7b49157414e7d56a4091fbe7823225b08cba", size = 367965, upload-time = "2025-10-08T17:28:06.736Z" },
{ url = "https://files.pythonhosted.org/packages/35/89/c83b805dd9caebb046f4ceeed3706d0902ed2dbbcf08b8464e89f2c52e05/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1bb67eb913c2b703f0ed39607fc56e50724dd41f92ce080a586b4d6149eb3fe4", size = 195209, upload-time = "2025-10-08T17:28:08.395Z" },
{ url = "https://files.pythonhosted.org/packages/3a/17/427d9c4f77b120f0af01d7a71d8144771c9388c2a81f712048320e31353b/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1e54175b92411f73a238e5653a998627f6660de3def37d9dd7213e0fd264ca56", size = 205868, upload-time = "2025-10-08T17:28:09.688Z" },
{ url = "https://files.pythonhosted.org/packages/82/32/a9ce218478bdbf3fee954159900e24b314ab3064f7b6a217ccb1e3464324/ormsgpack-1.11.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ca2b197f4556e1823d1319869d4c5dc278be335286d2308b0ed88b59a5afcc25", size = 207391, upload-time = "2025-10-08T17:28:11.031Z" },
{ url = "https://files.pythonhosted.org/packages/7a/d3/4413fe7454711596fdf08adabdfa686580e4656702015108e4975f00a022/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:bc62388262f58c792fe1e450e1d9dbcc174ed2fb0b43db1675dd7c5ff2319d6a", size = 377078, upload-time = "2025-10-08T17:28:12.39Z" },
{ url = "https://files.pythonhosted.org/packages/f0/ad/13fae555a45e35ca1ca929a27c9ee0a3ecada931b9d44454658c543f9b9c/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_armv7l.whl", hash = "sha256:c48bc10af74adfbc9113f3fb160dc07c61ad9239ef264c17e449eba3de343dc2", size = 470776, upload-time = "2025-10-08T17:28:13.484Z" },
{ url = "https://files.pythonhosted.org/packages/36/60/51178b093ffc4e2ef3381013a67223e7d56224434fba80047249f4a84b26/ormsgpack-1.11.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a608d3a1d4fa4acdc5082168a54513cff91f47764cef435e81a483452f5f7647", size = 380862, upload-time = "2025-10-08T17:28:14.747Z" },
{ url = "https://files.pythonhosted.org/packages/a6/e3/1cb6c161335e2ae7d711ecfb007a31a3936603626e347c13e5e53b7c7cf8/ormsgpack-1.11.0-cp310-cp310-win_amd64.whl", hash = "sha256:97217b4f7f599ba45916b9c4c4b1d5656e8e2a4d91e2e191d72a7569d3c30923", size = 112058, upload-time = "2025-10-08T17:28:15.777Z" },
{ url = "https://files.pythonhosted.org/packages/a4/7c/90164d00e8e94b48eff8a17bc2f4be6b71ae356a00904bc69d5e8afe80fb/ormsgpack-1.11.0-cp311-cp311-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:c7be823f47d8e36648d4bc90634b93f02b7d7cc7480081195f34767e86f181fb", size = 367964, upload-time = "2025-10-08T17:28:16.778Z" },
{ url = "https://files.pythonhosted.org/packages/7b/c2/fb6331e880a3446c1341e72c77bd5a46da3e92a8e2edf7ea84a4c6c14fff/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:68accf15d1b013812755c0eb7a30e1fc2f81eb603a1a143bf0cda1b301cfa797", size = 195209, upload-time = "2025-10-08T17:28:17.796Z" },
{ url = "https://files.pythonhosted.org/packages/18/50/4943fb5df8cc02da6b7b1ee2c2a7fb13aebc9f963d69280b1bb02b1fb178/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:805d06fb277d9a4e503c0c707545b49cde66cbb2f84e5cf7c58d81dfc20d8658", size = 205869, upload-time = "2025-10-08T17:28:19.01Z" },
{ url = "https://files.pythonhosted.org/packages/1c/fa/e7e06835bfea9adeef43915143ce818098aecab0cbd3df584815adf3e399/ormsgpack-1.11.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1e57cdf003e77acc43643bda151dc01f97147a64b11cdee1380bb9698a7601c", size = 207391, upload-time = "2025-10-08T17:28:20.352Z" },
{ url = "https://files.pythonhosted.org/packages/33/f0/f28a19e938a14ec223396e94f4782fbcc023f8c91f2ab6881839d3550f32/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:37fc05bdaabd994097c62e2f3e08f66b03f856a640ede6dc5ea340bd15b77f4d", size = 377081, upload-time = "2025-10-08T17:28:21.926Z" },
{ url = "https://files.pythonhosted.org/packages/4f/e3/73d1d7287637401b0b6637e30ba9121e1aa1d9f5ea185ed9834ca15d512c/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:a6e9db6c73eb46b2e4d97bdffd1368a66f54e6806b563a997b19c004ef165e1d", size = 470779, upload-time = "2025-10-08T17:28:22.993Z" },
{ url = "https://files.pythonhosted.org/packages/9c/46/7ba7f9721e766dd0dfe4cedf444439447212abffe2d2f4538edeeec8ccbd/ormsgpack-1.11.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:e9c44eae5ac0196ffc8b5ed497c75511056508f2303fa4d36b208eb820cf209e", size = 380865, upload-time = "2025-10-08T17:28:24.012Z" },
{ url = "https://files.pythonhosted.org/packages/a7/7d/bb92a0782bbe0626c072c0320001410cf3f6743ede7dc18f034b1a18edef/ormsgpack-1.11.0-cp311-cp311-win_amd64.whl", hash = "sha256:11d0dfaf40ae7c6de4f7dbd1e4892e2e6a55d911ab1774357c481158d17371e4", size = 112058, upload-time = "2025-10-08T17:28:25.015Z" },
{ url = "https://files.pythonhosted.org/packages/28/1a/f07c6f74142815d67e1d9d98c5b2960007100408ade8242edac96d5d1c73/ormsgpack-1.11.0-cp311-cp311-win_arm64.whl", hash = "sha256:0c63a3f7199a3099c90398a1bdf0cb577b06651a442dc5efe67f2882665e5b02", size = 105894, upload-time = "2025-10-08T17:28:25.93Z" },
{ url = "https://files.pythonhosted.org/packages/1e/16/2805ebfb3d2cbb6c661b5fae053960fc90a2611d0d93e2207e753e836117/ormsgpack-1.11.0-cp312-cp312-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:3434d0c8d67de27d9010222de07fb6810fb9af3bb7372354ffa19257ac0eb83b", size = 368474, upload-time = "2025-10-08T17:28:27.532Z" },
{ url = "https://files.pythonhosted.org/packages/6f/39/6afae47822dca0ce4465d894c0bbb860a850ce29c157882dbdf77a5dd26e/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d2da5bd097e8dbfa4eb0d4ccfe79acd6f538dee4493579e2debfe4fc8f4ca89b", size = 195321, upload-time = "2025-10-08T17:28:28.573Z" },
{ url = "https://files.pythonhosted.org/packages/f6/54/11eda6b59f696d2f16de469bfbe539c9f469c4b9eef5a513996b5879c6e9/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:fdbaa0a5a8606a486960b60c24f2d5235d30ac7a8b98eeaea9854bffef14dc3d", size = 206036, upload-time = "2025-10-08T17:28:29.785Z" },
{ url = "https://files.pythonhosted.org/packages/1e/86/890430f704f84c4699ddad61c595d171ea2fd77a51fbc106f83981e83939/ormsgpack-1.11.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3682f24f800c1837017ee90ce321086b2cbaef88db7d4cdbbda1582aa6508159", size = 207615, upload-time = "2025-10-08T17:28:31.076Z" },
{ url = "https://files.pythonhosted.org/packages/b6/b9/77383e16c991c0ecb772205b966fc68d9c519e0b5f9c3913283cbed30ffe/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:fcca21202bb05ccbf3e0e92f560ee59b9331182e4c09c965a28155efbb134993", size = 377195, upload-time = "2025-10-08T17:28:32.436Z" },
{ url = "https://files.pythonhosted.org/packages/20/e2/15f9f045d4947f3c8a5e0535259fddf027b17b1215367488b3565c573b9d/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:c30e5c4655ba46152d722ec7468e8302195e6db362ec1ae2c206bc64f6030e43", size = 470960, upload-time = "2025-10-08T17:28:33.556Z" },
{ url = "https://files.pythonhosted.org/packages/b8/61/403ce188c4c495bc99dff921a0ad3d9d352dd6d3c4b629f3638b7f0cf79b/ormsgpack-1.11.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:7138a341f9e2c08c59368f03d3be25e8b87b3baaf10d30fb1f6f6b52f3d47944", size = 381174, upload-time = "2025-10-08T17:28:34.781Z" },
{ url = "https://files.pythonhosted.org/packages/14/a8/94c94bc48c68da4374870a851eea03fc5a45eb041182ad4c5ed9acfc05a4/ormsgpack-1.11.0-cp312-cp312-win_amd64.whl", hash = "sha256:d4bd8589b78a11026d47f4edf13c1ceab9088bb12451f34396afe6497db28a27", size = 112314, upload-time = "2025-10-08T17:28:36.259Z" },
{ url = "https://files.pythonhosted.org/packages/19/d0/aa4cf04f04e4cc180ce7a8d8ddb5a7f3af883329cbc59645d94d3ba157a5/ormsgpack-1.11.0-cp312-cp312-win_arm64.whl", hash = "sha256:e5e746a1223e70f111d4001dab9585ac8639eee8979ca0c8db37f646bf2961da", size = 106072, upload-time = "2025-10-08T17:28:37.518Z" },
{ url = "https://files.pythonhosted.org/packages/8b/35/e34722edb701d053cf2240f55974f17b7dbfd11fdef72bd2f1835bcebf26/ormsgpack-1.11.0-cp313-cp313-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:0e7b36ab7b45cb95217ae1f05f1318b14a3e5ef73cb00804c0f06233f81a14e8", size = 368502, upload-time = "2025-10-08T17:28:38.547Z" },
{ url = "https://files.pythonhosted.org/packages/2f/6a/c2fc369a79d6aba2aa28c8763856c95337ac7fcc0b2742185cd19397212a/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:43402d67e03a9a35cc147c8c03f0c377cad016624479e1ee5b879b8425551484", size = 195344, upload-time = "2025-10-08T17:28:39.554Z" },
{ url = "https://files.pythonhosted.org/packages/8b/6a/0f8e24b7489885534c1a93bdba7c7c434b9b8638713a68098867db9f254c/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:64fd992f932764d6306b70ddc755c1bc3405c4c6a69f77a36acf7af1c8f5ada4", size = 206045, upload-time = "2025-10-08T17:28:40.561Z" },
{ url = "https://files.pythonhosted.org/packages/99/71/8b460ba264f3c6f82ef5b1920335720094e2bd943057964ce5287d6df83a/ormsgpack-1.11.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0362fb7fe4a29c046c8ea799303079a09372653a1ce5a5a588f3bbb8088368d0", size = 207641, upload-time = "2025-10-08T17:28:41.736Z" },
{ url = "https://files.pythonhosted.org/packages/50/cf/f369446abaf65972424ed2651f2df2b7b5c3b735c93fc7fa6cfb81e34419/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:de2f7a65a9d178ed57be49eba3d0fc9b833c32beaa19dbd4ba56014d3c20b152", size = 377211, upload-time = "2025-10-08T17:28:43.12Z" },
{ url = "https://files.pythonhosted.org/packages/2f/3f/948bb0047ce0f37c2efc3b9bb2bcfdccc61c63e0b9ce8088d4903ba39dcf/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:f38cfae95461466055af966fc922d06db4e1654966385cda2828653096db34da", size = 470973, upload-time = "2025-10-08T17:28:44.465Z" },
{ url = "https://files.pythonhosted.org/packages/31/a4/92a8114d1d017c14aaa403445060f345df9130ca532d538094f38e535988/ormsgpack-1.11.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:c88396189d238f183cea7831b07a305ab5c90d6d29b53288ae11200bd956357b", size = 381161, upload-time = "2025-10-08T17:28:46.063Z" },
{ url = "https://files.pythonhosted.org/packages/d0/64/5b76447da654798bfcfdfd64ea29447ff2b7f33fe19d0e911a83ad5107fc/ormsgpack-1.11.0-cp313-cp313-win_amd64.whl", hash = "sha256:5403d1a945dd7c81044cebeca3f00a28a0f4248b33242a5d2d82111628043725", size = 112321, upload-time = "2025-10-08T17:28:47.393Z" },
{ url = "https://files.pythonhosted.org/packages/46/5e/89900d06db9ab81e7ec1fd56a07c62dfbdcda398c435718f4252e1dc52a0/ormsgpack-1.11.0-cp313-cp313-win_arm64.whl", hash = "sha256:c57357b8d43b49722b876edf317bdad9e6d52071b523fdd7394c30cd1c67d5a0", size = 106084, upload-time = "2025-10-08T17:28:48.305Z" },
{ url = "https://files.pythonhosted.org/packages/4c/0b/c659e8657085c8c13f6a0224789f422620cef506e26573b5434defe68483/ormsgpack-1.11.0-cp314-cp314-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:d390907d90fd0c908211592c485054d7a80990697ef4dff4e436ac18e1aab98a", size = 368497, upload-time = "2025-10-08T17:28:49.297Z" },
{ url = "https://files.pythonhosted.org/packages/1b/0e/451e5848c7ed56bd287e8a2b5cb5926e54466f60936e05aec6cb299f9143/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6153c2e92e789509098e04c9aa116b16673bd88ec78fbe0031deeb34ab642d10", size = 195385, upload-time = "2025-10-08T17:28:50.314Z" },
{ url = "https://files.pythonhosted.org/packages/4c/28/90f78cbbe494959f2439c2ec571f08cd3464c05a6a380b0d621c622122a9/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:c2b2c2a065a94d742212b2018e1fecd8f8d72f3c50b53a97d1f407418093446d", size = 206114, upload-time = "2025-10-08T17:28:51.336Z" },
{ url = "https://files.pythonhosted.org/packages/fb/db/34163f4c0923bea32dafe42cd878dcc66795a3e85669bc4b01c1e2b92a7b/ormsgpack-1.11.0-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:110e65b5340f3d7ef8b0009deae3c6b169437e6b43ad5a57fd1748085d29d2ac", size = 207679, upload-time = "2025-10-08T17:28:53.627Z" },
{ url = "https://files.pythonhosted.org/packages/b6/14/04ee741249b16f380a9b4a0cc19d4134d0b7c74bab27a2117da09e525eb9/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c27e186fca96ab34662723e65b420919910acbbc50fc8e1a44e08f26268cb0e0", size = 377237, upload-time = "2025-10-08T17:28:56.12Z" },
{ url = "https://files.pythonhosted.org/packages/89/ff/53e588a6aaa833237471caec679582c2950f0e7e1a8ba28c1511b465c1f4/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:d56b1f877c13d499052d37a3db2378a97d5e1588d264f5040b3412aee23d742c", size = 471021, upload-time = "2025-10-08T17:28:57.299Z" },
{ url = "https://files.pythonhosted.org/packages/a6/f9/f20a6d9ef2be04da3aad05e8f5699957e9a30c6d5c043a10a296afa7e890/ormsgpack-1.11.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:c88e28cd567c0a3269f624b4ade28142d5e502c8e826115093c572007af5be0a", size = 381205, upload-time = "2025-10-08T17:28:58.872Z" },
{ url = "https://files.pythonhosted.org/packages/f8/64/96c07d084b479ac8b7821a77ffc8d3f29d8b5c95ebfdf8db1c03dff02762/ormsgpack-1.11.0-cp314-cp314-win_amd64.whl", hash = "sha256:8811160573dc0a65f62f7e0792c4ca6b7108dfa50771edb93f9b84e2d45a08ae", size = 112374, upload-time = "2025-10-08T17:29:00Z" },
{ url = "https://files.pythonhosted.org/packages/88/a5/5dcc18b818d50213a3cadfe336bb6163a102677d9ce87f3d2f1a1bee0f8c/ormsgpack-1.11.0-cp314-cp314-win_arm64.whl", hash = "sha256:23e30a8d3c17484cf74e75e6134322255bd08bc2b5b295cc9c442f4bae5f3c2d", size = 106056, upload-time = "2025-10-08T17:29:01.29Z" },
{ url = "https://files.pythonhosted.org/packages/19/2b/776d1b411d2be50f77a6e6e94a25825cca55dcacfe7415fd691a144db71b/ormsgpack-1.11.0-cp314-cp314t-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:2905816502adfaf8386a01dd85f936cd378d243f4f5ee2ff46f67f6298dc90d5", size = 368661, upload-time = "2025-10-08T17:29:02.382Z" },
{ url = "https://files.pythonhosted.org/packages/a9/0c/81a19e6115b15764db3d241788f9fac093122878aaabf872cc545b0c4650/ormsgpack-1.11.0-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c04402fb9a0a9b9f18fbafd6d5f8398ee99b3ec619fb63952d3a954bc9d47daa", size = 195539, upload-time = "2025-10-08T17:29:03.472Z" },
{ url = "https://files.pythonhosted.org/packages/97/86/e5b50247a61caec5718122feb2719ea9d451d30ac0516c288c1dbc6408e8/ormsgpack-1.11.0-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a025ec07ac52056ecfd9e57b5cbc6fff163f62cb9805012b56cda599157f8ef2", size = 207718, upload-time = "2025-10-08T17:29:04.545Z" },
]

[[package]]
name = "packaging"
version = "25.0"
@@ -809,7 +942,7 @@ wheels = [
[[package]]
name = "pydantic"
version = "2.11.9"
version = "2.12.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "annotated-types" },
@@ -817,96 +950,123 @@ dependencies = [
{ name = "typing-extensions" },
{ name = "typing-inspection" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ff/5d/09a551ba512d7ca404d785072700d3f6727a02f6f3c24ecfd081c7cf0aa8/pydantic-2.11.9.tar.gz", hash = "sha256:6b8ffda597a14812a7975c90b82a8a2e777d9257aba3453f973acd3c032a18e2", size = 788495, upload-time = "2025-09-13T11:26:39.325Z" }
sdist = { url = "https://files.pythonhosted.org/packages/f3/1e/4f0a3233767010308f2fd6bd0814597e3f63f1dc98304a9112b8759df4ff/pydantic-2.12.3.tar.gz", hash = "sha256:1da1c82b0fc140bb0103bc1441ffe062154c8d38491189751ee00fd8ca65ce74", size = 819383, upload-time = "2025-10-17T15:04:21.222Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3e/d3/108f2006987c58e76691d5ae5d200dd3e0f532cb4e5fa3560751c3a1feba/pydantic-2.11.9-py3-none-any.whl", hash = "sha256:c42dd626f5cfc1c6950ce6205ea58c93efa406da65f479dcb4029d5934857da2", size = 444855, upload-time = "2025-09-13T11:26:36.909Z" },
{ url = "https://files.pythonhosted.org/packages/a1/6b/83661fa77dcefa195ad5f8cd9af3d1a7450fd57cc883ad04d65446ac2029/pydantic-2.12.3-py3-none-any.whl", hash = "sha256:6986454a854bc3bc6e5443e1369e06a3a456af9d339eda45510f517d9ea5c6bf", size = 462431, upload-time = "2025-10-17T15:04:19.346Z" },
]

[[package]]
name = "pydantic-core"
version = "2.33.2"
version = "2.41.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ad/88/5f2260bdfae97aabf98f1778d43f69574390ad787afb646292a638c923d4/pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc", size = 435195, upload-time = "2025-04-23T18:33:52.104Z" }
sdist = { url = "https://files.pythonhosted.org/packages/df/18/d0944e8eaaa3efd0a91b0f1fc537d3be55ad35091b6a87638211ba691964/pydantic_core-2.41.4.tar.gz", hash = "sha256:70e47929a9d4a1905a67e4b687d5946026390568a8e952b92824118063cee4d5", size = 457557, upload-time = "2025-10-14T10:23:47.909Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/e5/92/b31726561b5dae176c2d2c2dc43a9c5bfba5d32f96f8b4c0a600dd492447/pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8", size = 2028817, upload-time = "2025-04-23T18:30:43.919Z" },
{ url = "https://files.pythonhosted.org/packages/a3/44/3f0b95fafdaca04a483c4e685fe437c6891001bf3ce8b2fded82b9ea3aa1/pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d", size = 1861357, upload-time = "2025-04-23T18:30:46.372Z" },
{ url = "https://files.pythonhosted.org/packages/30/97/e8f13b55766234caae05372826e8e4b3b96e7b248be3157f53237682e43c/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d", size = 1898011, upload-time = "2025-04-23T18:30:47.591Z" },
{ url = "https://files.pythonhosted.org/packages/9b/a3/99c48cf7bafc991cc3ee66fd544c0aae8dc907b752f1dad2d79b1b5a471f/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572", size = 1982730, upload-time = "2025-04-23T18:30:49.328Z" },
{ url = "https://files.pythonhosted.org/packages/de/8e/a5b882ec4307010a840fb8b58bd9bf65d1840c92eae7534c7441709bf54b/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02", size = 2136178, upload-time = "2025-04-23T18:30:50.907Z" },
{ url = "https://files.pythonhosted.org/packages/e4/bb/71e35fc3ed05af6834e890edb75968e2802fe98778971ab5cba20a162315/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b", size = 2736462, upload-time = "2025-04-23T18:30:52.083Z" },
{ url = "https://files.pythonhosted.org/packages/31/0d/c8f7593e6bc7066289bbc366f2235701dcbebcd1ff0ef8e64f6f239fb47d/pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2", size = 2005652, upload-time = "2025-04-23T18:30:53.389Z" },
{ url = "https://files.pythonhosted.org/packages/d2/7a/996d8bd75f3eda405e3dd219ff5ff0a283cd8e34add39d8ef9157e722867/pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a", size = 2113306, upload-time = "2025-04-23T18:30:54.661Z" },
{ url = "https://files.pythonhosted.org/packages/ff/84/daf2a6fb2db40ffda6578a7e8c5a6e9c8affb251a05c233ae37098118788/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac", size = 2073720, upload-time = "2025-04-23T18:30:56.11Z" },
{ url = "https://files.pythonhosted.org/packages/77/fb/2258da019f4825128445ae79456a5499c032b55849dbd5bed78c95ccf163/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a", size = 2244915, upload-time = "2025-04-23T18:30:57.501Z" },
{ url = "https://files.pythonhosted.org/packages/d8/7a/925ff73756031289468326e355b6fa8316960d0d65f8b5d6b3a3e7866de7/pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b", size = 2241884, upload-time = "2025-04-23T18:30:58.867Z" },
{ url = "https://files.pythonhosted.org/packages/0b/b0/249ee6d2646f1cdadcb813805fe76265745c4010cf20a8eba7b0e639d9b2/pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22", size = 1910496, upload-time = "2025-04-23T18:31:00.078Z" },
{ url = "https://files.pythonhosted.org/packages/66/ff/172ba8f12a42d4b552917aa65d1f2328990d3ccfc01d5b7c943ec084299f/pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640", size = 1955019, upload-time = "2025-04-23T18:31:01.335Z" },
{ url = "https://files.pythonhosted.org/packages/3f/8d/71db63483d518cbbf290261a1fc2839d17ff89fce7089e08cad07ccfce67/pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7", size = 2028584, upload-time = "2025-04-23T18:31:03.106Z" },
{ url = "https://files.pythonhosted.org/packages/24/2f/3cfa7244ae292dd850989f328722d2aef313f74ffc471184dc509e1e4e5a/pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246", size = 1855071, upload-time = "2025-04-23T18:31:04.621Z" },
{ url = "https://files.pythonhosted.org/packages/b3/d3/4ae42d33f5e3f50dd467761304be2fa0a9417fbf09735bc2cce003480f2a/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f", size = 1897823, upload-time = "2025-04-23T18:31:06.377Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f3/aa5976e8352b7695ff808599794b1fba2a9ae2ee954a3426855935799488/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc", size = 1983792, upload-time = "2025-04-23T18:31:07.93Z" },
{ url = "https://files.pythonhosted.org/packages/d5/7a/cda9b5a23c552037717f2b2a5257e9b2bfe45e687386df9591eff7b46d28/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de", size = 2136338, upload-time = "2025-04-23T18:31:09.283Z" },
{ url = "https://files.pythonhosted.org/packages/2b/9f/b8f9ec8dd1417eb9da784e91e1667d58a2a4a7b7b34cf4af765ef663a7e5/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a", size = 2730998, upload-time = "2025-04-23T18:31:11.7Z" },
{ url = "https://files.pythonhosted.org/packages/47/bc/cd720e078576bdb8255d5032c5d63ee5c0bf4b7173dd955185a1d658c456/pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef", size = 2003200, upload-time = "2025-04-23T18:31:13.536Z" },
{ url = "https://files.pythonhosted.org/packages/ca/22/3602b895ee2cd29d11a2b349372446ae9727c32e78a94b3d588a40fdf187/pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e", size = 2113890, upload-time = "2025-04-23T18:31:15.011Z" },
{ url = "https://files.pythonhosted.org/packages/ff/e6/e3c5908c03cf00d629eb38393a98fccc38ee0ce8ecce32f69fc7d7b558a7/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d", size = 2073359, upload-time = "2025-04-23T18:31:16.393Z" },
{ url = "https://files.pythonhosted.org/packages/12/e7/6a36a07c59ebefc8777d1ffdaf5ae71b06b21952582e4b07eba88a421c79/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30", size = 2245883, upload-time = "2025-04-23T18:31:17.892Z" },
{ url = "https://files.pythonhosted.org/packages/16/3f/59b3187aaa6cc0c1e6616e8045b284de2b6a87b027cce2ffcea073adf1d2/pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf", size = 2241074, upload-time = "2025-04-23T18:31:19.205Z" },
{ url = "https://files.pythonhosted.org/packages/e0/ed/55532bb88f674d5d8f67ab121a2a13c385df382de2a1677f30ad385f7438/pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51", size = 1910538, upload-time = "2025-04-23T18:31:20.541Z" },
{ url = "https://files.pythonhosted.org/packages/fe/1b/25b7cccd4519c0b23c2dd636ad39d381abf113085ce4f7bec2b0dc755eb1/pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab", size = 1952909, upload-time = "2025-04-23T18:31:22.371Z" },
{ url = "https://files.pythonhosted.org/packages/49/a9/d809358e49126438055884c4366a1f6227f0f84f635a9014e2deb9b9de54/pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65", size = 1897786, upload-time = "2025-04-23T18:31:24.161Z" },
{ url = "https://files.pythonhosted.org/packages/18/8a/2b41c97f554ec8c71f2a8a5f85cb56a8b0956addfe8b0efb5b3d77e8bdc3/pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc", size = 2009000, upload-time = "2025-04-23T18:31:25.863Z" },
{ url = "https://files.pythonhosted.org/packages/a1/02/6224312aacb3c8ecbaa959897af57181fb6cf3a3d7917fd44d0f2917e6f2/pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7", size = 1847996, upload-time = "2025-04-23T18:31:27.341Z" },
{ url = "https://files.pythonhosted.org/packages/d6/46/6dcdf084a523dbe0a0be59d054734b86a981726f221f4562aed313dbcb49/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025", size = 1880957, upload-time = "2025-04-23T18:31:28.956Z" },
{ url = "https://files.pythonhosted.org/packages/ec/6b/1ec2c03837ac00886ba8160ce041ce4e325b41d06a034adbef11339ae422/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011", size = 1964199, upload-time = "2025-04-23T18:31:31.025Z" },
{ url = "https://files.pythonhosted.org/packages/2d/1d/6bf34d6adb9debd9136bd197ca72642203ce9aaaa85cfcbfcf20f9696e83/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f", size = 2120296, upload-time = "2025-04-23T18:31:32.514Z" },
{ url = "https://files.pythonhosted.org/packages/e0/94/2bd0aaf5a591e974b32a9f7123f16637776c304471a0ab33cf263cf5591a/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88", size = 2676109, upload-time = "2025-04-23T18:31:33.958Z" },
{ url = "https://files.pythonhosted.org/packages/f9/41/4b043778cf9c4285d59742281a769eac371b9e47e35f98ad321349cc5d61/pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1", size = 2002028, upload-time = "2025-04-23T18:31:39.095Z" },
{ url = "https://files.pythonhosted.org/packages/cb/d5/7bb781bf2748ce3d03af04d5c969fa1308880e1dca35a9bd94e1a96a922e/pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b", size = 2100044, upload-time = "2025-04-23T18:31:41.034Z" },
{ url = "https://files.pythonhosted.org/packages/fe/36/def5e53e1eb0ad896785702a5bbfd25eed546cdcf4087ad285021a90ed53/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1", size = 2058881, upload-time = "2025-04-23T18:31:42.757Z" },
{ url = "https://files.pythonhosted.org/packages/01/6c/57f8d70b2ee57fc3dc8b9610315949837fa8c11d86927b9bb044f8705419/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6", size = 2227034, upload-time = "2025-04-23T18:31:44.304Z" },
{ url = "https://files.pythonhosted.org/packages/27/b9/9c17f0396a82b3d5cbea4c24d742083422639e7bb1d5bf600e12cb176a13/pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea", size = 2234187, upload-time = "2025-04-23T18:31:45.891Z" },
{ url = "https://files.pythonhosted.org/packages/b0/6a/adf5734ffd52bf86d865093ad70b2ce543415e0e356f6cacabbc0d9ad910/pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290", size = 1892628, upload-time = "2025-04-23T18:31:47.819Z" },
{ url = "https://files.pythonhosted.org/packages/43/e4/5479fecb3606c1368d496a825d8411e126133c41224c1e7238be58b87d7e/pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2", size = 1955866, upload-time = "2025-04-23T18:31:49.635Z" },
{ url = "https://files.pythonhosted.org/packages/0d/24/8b11e8b3e2be9dd82df4b11408a67c61bb4dc4f8e11b5b0fc888b38118b5/pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab", size = 1888894, upload-time = "2025-04-23T18:31:51.609Z" },
{ url = "https://files.pythonhosted.org/packages/46/8c/99040727b41f56616573a28771b1bfa08a3d3fe74d3d513f01251f79f172/pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f", size = 2015688, upload-time = "2025-04-23T18:31:53.175Z" },
{ url = "https://files.pythonhosted.org/packages/3a/cc/5999d1eb705a6cefc31f0b4a90e9f7fc400539b1a1030529700cc1b51838/pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6", size = 1844808, upload-time = "2025-04-23T18:31:54.79Z" },
{ url = "https://files.pythonhosted.org/packages/6f/5e/a0a7b8885c98889a18b6e376f344da1ef323d270b44edf8174d6bce4d622/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef", size = 1885580, upload-time = "2025-04-23T18:31:57.393Z" },
{ url = "https://files.pythonhosted.org/packages/3b/2a/953581f343c7d11a304581156618c3f592435523dd9d79865903272c256a/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a", size = 1973859, upload-time = "2025-04-23T18:31:59.065Z" },
{ url = "https://files.pythonhosted.org/packages/e6/55/f1a813904771c03a3f97f676c62cca0c0a4138654107c1b61f19c644868b/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916", size = 2120810, upload-time = "2025-04-23T18:32:00.78Z" },
{ url = "https://files.pythonhosted.org/packages/aa/c3/053389835a996e18853ba107a63caae0b9deb4a276c6b472931ea9ae6e48/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a", size = 2676498, upload-time = "2025-04-23T18:32:02.418Z" },
{ url = "https://files.pythonhosted.org/packages/eb/3c/f4abd740877a35abade05e437245b192f9d0ffb48bbbbd708df33d3cda37/pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d", size = 2000611, upload-time = "2025-04-23T18:32:04.152Z" },
{ url = "https://files.pythonhosted.org/packages/59/a7/63ef2fed1837d1121a894d0ce88439fe3e3b3e48c7543b2a4479eb99c2bd/pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56", size = 2107924, upload-time = "2025-04-23T18:32:06.129Z" },
{ url = "https://files.pythonhosted.org/packages/04/8f/2551964ef045669801675f1cfc3b0d74147f4901c3ffa42be2ddb1f0efc4/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5", size = 2063196, upload-time = "2025-04-23T18:32:08.178Z" },
{ url = "https://files.pythonhosted.org/packages/26/bd/d9602777e77fc6dbb0c7db9ad356e9a985825547dce5ad1d30ee04903918/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e", size = 2236389, upload-time = "2025-04-23T18:32:10.242Z" },
{ url = "https://files.pythonhosted.org/packages/42/db/0e950daa7e2230423ab342ae918a794964b053bec24ba8af013fc7c94846/pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162", size = 2239223, upload-time = "2025-04-23T18:32:12.382Z" },
{ url = "https://files.pythonhosted.org/packages/58/4d/4f937099c545a8a17eb52cb67fe0447fd9a373b348ccfa9a87f141eeb00f/pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849", size = 1900473, upload-time = "2025-04-23T18:32:14.034Z" },
{ url = "https://files.pythonhosted.org/packages/a0/75/4a0a9bac998d78d889def5e4ef2b065acba8cae8c93696906c3a91f310ca/pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9", size = 1955269, upload-time = "2025-04-23T18:32:15.783Z" },
{ url = "https://files.pythonhosted.org/packages/f9/86/1beda0576969592f1497b4ce8e7bc8cbdf614c352426271b1b10d5f0aa64/pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9", size = 1893921, upload-time = "2025-04-23T18:32:18.473Z" },
{ url = "https://files.pythonhosted.org/packages/a4/7d/e09391c2eebeab681df2b74bfe6c43422fffede8dc74187b2b0bf6fd7571/pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac", size = 1806162, upload-time = "2025-04-23T18:32:20.188Z" },
{ url = "https://files.pythonhosted.org/packages/f1/3d/847b6b1fed9f8ed3bb95a9ad04fbd0b212e832d4f0f50ff4d9ee5a9f15cf/pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5", size = 1981560, upload-time = "2025-04-23T18:32:22.354Z" },
{ url = "https://files.pythonhosted.org/packages/6f/9a/e73262f6c6656262b5fdd723ad90f518f579b7bc8622e43a942eec53c938/pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9", size = 1935777, upload-time = "2025-04-23T18:32:25.088Z" },
{ url = "https://files.pythonhosted.org/packages/30/68/373d55e58b7e83ce371691f6eaa7175e3a24b956c44628eb25d7da007917/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa", size = 2023982, upload-time = "2025-04-23T18:32:53.14Z" },
{ url = "https://files.pythonhosted.org/packages/a4/16/145f54ac08c96a63d8ed6442f9dec17b2773d19920b627b18d4f10a061ea/pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29", size = 1858412, upload-time = "2025-04-23T18:32:55.52Z" },
{ url = "https://files.pythonhosted.org/packages/41/b1/c6dc6c3e2de4516c0bb2c46f6a373b91b5660312342a0cf5826e38ad82fa/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d", size = 1892749, upload-time = "2025-04-23T18:32:57.546Z" },
{ url = "https://files.pythonhosted.org/packages/12/73/8cd57e20afba760b21b742106f9dbdfa6697f1570b189c7457a1af4cd8a0/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e", size = 2067527, upload-time = "2025-04-23T18:32:59.771Z" },
{ url = "https://files.pythonhosted.org/packages/e3/d5/0bb5d988cc019b3cba4a78f2d4b3854427fc47ee8ec8e9eaabf787da239c/pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c", size = 2108225, upload-time = "2025-04-23T18:33:04.51Z" },
{ url = "https://files.pythonhosted.org/packages/f1/c5/00c02d1571913d496aabf146106ad8239dc132485ee22efe08085084ff7c/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec", size = 2069490, upload-time = "2025-04-23T18:33:06.391Z" },
{ url = "https://files.pythonhosted.org/packages/22/a8/dccc38768274d3ed3a59b5d06f59ccb845778687652daa71df0cab4040d7/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052", size = 2237525, upload-time = "2025-04-23T18:33:08.44Z" },
{ url = "https://files.pythonhosted.org/packages/d4/e7/4f98c0b125dda7cf7ccd14ba936218397b44f50a56dd8c16a3091df116c3/pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c", size = 2238446, upload-time = "2025-04-23T18:33:10.313Z" },
{ url = "https://files.pythonhosted.org/packages/ce/91/2ec36480fdb0b783cd9ef6795753c1dea13882f2e68e73bce76ae8c21e6a/pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808", size = 2066678, upload-time = "2025-04-23T18:33:12.224Z" },
{ url = "https://files.pythonhosted.org/packages/7b/27/d4ae6487d73948d6f20dddcd94be4ea43e74349b56eba82e9bdee2d7494c/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8", size = 2025200, upload-time = "2025-04-23T18:33:14.199Z" },
{ url = "https://files.pythonhosted.org/packages/f1/b8/b3cb95375f05d33801024079b9392a5ab45267a63400bf1866e7ce0f0de4/pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593", size = 1859123, upload-time = "2025-04-23T18:33:16.555Z" },
{ url = "https://files.pythonhosted.org/packages/05/bc/0d0b5adeda59a261cd30a1235a445bf55c7e46ae44aea28f7bd6ed46e091/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612", size = 1892852, upload-time = "2025-04-23T18:33:18.513Z" },
{ url = "https://files.pythonhosted.org/packages/3e/11/d37bdebbda2e449cb3f519f6ce950927b56d62f0b84fd9cb9e372a26a3d5/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7", size = 2067484, upload-time = "2025-04-23T18:33:20.475Z" },
{ url = "https://files.pythonhosted.org/packages/8c/55/1f95f0a05ce72ecb02a8a8a1c3be0579bbc29b1d5ab68f1378b7bebc5057/pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e", size = 2108896, upload-time = "2025-04-23T18:33:22.501Z" },
{ url = "https://files.pythonhosted.org/packages/53/89/2b2de6c81fa131f423246a9109d7b2a375e83968ad0800d6e57d0574629b/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8", size = 2069475, upload-time = "2025-04-23T18:33:24.528Z" },
{ url = "https://files.pythonhosted.org/packages/b8/e9/1f7efbe20d0b2b10f6718944b5d8ece9152390904f29a78e68d4e7961159/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf", size = 2239013, upload-time = "2025-04-23T18:33:26.621Z" },
{ url = "https://files.pythonhosted.org/packages/3c/b2/5309c905a93811524a49b4e031e9851a6b00ff0fb668794472ea7746b448/pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb", size = 2238715, upload-time = "2025-04-23T18:33:28.656Z" },
{ url = "https://files.pythonhosted.org/packages/32/56/8a7ca5d2cd2cda1d245d34b1c9a942920a718082ae8e54e5f3e5a58b7add/pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1", size = 2066757, upload-time = "2025-04-23T18:33:30.645Z" },
{ url = "https://files.pythonhosted.org/packages/a7/3d/9b8ca77b0f76fcdbf8bc6b72474e264283f461284ca84ac3fde570c6c49a/pydantic_core-2.41.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2442d9a4d38f3411f22eb9dd0912b7cbf4b7d5b6c92c4173b75d3e1ccd84e36e", size = 2111197, upload-time = "2025-10-14T10:19:43.303Z" },
{ url = "https://files.pythonhosted.org/packages/59/92/b7b0fe6ed4781642232755cb7e56a86e2041e1292f16d9ae410a0ccee5ac/pydantic_core-2.41.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:30a9876226dda131a741afeab2702e2d127209bde3c65a2b8133f428bc5d006b", size = 1917909, upload-time = "2025-10-14T10:19:45.194Z" },
{ url = "https://files.pythonhosted.org/packages/52/8c/3eb872009274ffa4fb6a9585114e161aa1a0915af2896e2d441642929fe4/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d55bbac04711e2980645af68b97d445cdbcce70e5216de444a6c4b6943ebcccd", size = 1969905, upload-time = "2025-10-14T10:19:46.567Z" },
{ url = "https://files.pythonhosted.org/packages/f4/21/35adf4a753bcfaea22d925214a0c5b880792e3244731b3f3e6fec0d124f7/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:e1d778fb7849a42d0ee5927ab0f7453bf9f85eef8887a546ec87db5ddb178945", size = 2051938, upload-time = "2025-10-14T10:19:48.237Z" },
{ url = "https://files.pythonhosted.org/packages/7d/d0/cdf7d126825e36d6e3f1eccf257da8954452934ede275a8f390eac775e89/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1b65077a4693a98b90ec5ad8f203ad65802a1b9b6d4a7e48066925a7e1606706", size = 2250710, upload-time = "2025-10-14T10:19:49.619Z" },
{ url = "https://files.pythonhosted.org/packages/2e/1c/af1e6fd5ea596327308f9c8d1654e1285cc3d8de0d584a3c9d7705bf8a7c/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:62637c769dee16eddb7686bf421be48dfc2fae93832c25e25bc7242e698361ba", size = 2367445, upload-time = "2025-10-14T10:19:51.269Z" },
{ url = "https://files.pythonhosted.org/packages/d3/81/8cece29a6ef1b3a92f956ea6da6250d5b2d2e7e4d513dd3b4f0c7a83dfea/pydantic_core-2.41.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2dfe3aa529c8f501babf6e502936b9e8d4698502b2cfab41e17a028d91b1ac7b", size = 2072875, upload-time = "2025-10-14T10:19:52.671Z" },
{ url = "https://files.pythonhosted.org/packages/e3/37/a6a579f5fc2cd4d5521284a0ab6a426cc6463a7b3897aeb95b12f1ba607b/pydantic_core-2.41.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ca2322da745bf2eeb581fc9ea3bbb31147702163ccbcbf12a3bb630e4bf05e1d", size = 2191329, upload-time = "2025-10-14T10:19:54.214Z" },
{ url = "https://files.pythonhosted.org/packages/ae/03/505020dc5c54ec75ecba9f41119fd1e48f9e41e4629942494c4a8734ded1/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:e8cd3577c796be7231dcf80badcf2e0835a46665eaafd8ace124d886bab4d700", size = 2151658, upload-time = "2025-10-14T10:19:55.843Z" },
{ url = "https://files.pythonhosted.org/packages/cb/5d/2c0d09fb53aa03bbd2a214d89ebfa6304be7df9ed86ee3dc7770257f41ee/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:1cae8851e174c83633f0833e90636832857297900133705ee158cf79d40f03e6", size = 2316777, upload-time = "2025-10-14T10:19:57.607Z" },
{ url = "https://files.pythonhosted.org/packages/ea/4b/c2c9c8f5e1f9c864b57d08539d9d3db160e00491c9f5ee90e1bfd905e644/pydantic_core-2.41.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a26d950449aae348afe1ac8be5525a00ae4235309b729ad4d3399623125b43c9", size = 2320705, upload-time = "2025-10-14T10:19:59.016Z" },
{ url = "https://files.pythonhosted.org/packages/28/c3/a74c1c37f49c0a02c89c7340fafc0ba816b29bd495d1a31ce1bdeacc6085/pydantic_core-2.41.4-cp310-cp310-win32.whl", hash = "sha256:0cf2a1f599efe57fa0051312774280ee0f650e11152325e41dfd3018ef2c1b57", size = 1975464, upload-time = "2025-10-14T10:20:00.581Z" },
{ url = "https://files.pythonhosted.org/packages/d6/23/5dd5c1324ba80303368f7569e2e2e1a721c7d9eb16acb7eb7b7f85cb1be2/pydantic_core-2.41.4-cp310-cp310-win_amd64.whl", hash = "sha256:a8c2e340d7e454dc3340d3d2e8f23558ebe78c98aa8f68851b04dcb7bc37abdc", size = 2024497, upload-time = "2025-10-14T10:20:03.018Z" },
{ url = "https://files.pythonhosted.org/packages/62/4c/f6cbfa1e8efacd00b846764e8484fe173d25b8dab881e277a619177f3384/pydantic_core-2.41.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:28ff11666443a1a8cf2a044d6a545ebffa8382b5f7973f22c36109205e65dc80", size = 2109062, upload-time = "2025-10-14T10:20:04.486Z" },
{ url = "https://files.pythonhosted.org/packages/21/f8/40b72d3868896bfcd410e1bd7e516e762d326201c48e5b4a06446f6cf9e8/pydantic_core-2.41.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:61760c3925d4633290292bad462e0f737b840508b4f722247d8729684f6539ae", size = 1916301, upload-time = "2025-10-14T10:20:06.857Z" },
{ url = "https://files.pythonhosted.org/packages/94/4d/d203dce8bee7faeca791671c88519969d98d3b4e8f225da5b96dad226fc8/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:eae547b7315d055b0de2ec3965643b0ab82ad0106a7ffd29615ee9f266a02827", size = 1968728, upload-time = "2025-10-14T10:20:08.353Z" },
{ url = "https://files.pythonhosted.org/packages/65/f5/6a66187775df87c24d526985b3a5d78d861580ca466fbd9d4d0e792fcf6c/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ef9ee5471edd58d1fcce1c80ffc8783a650e3e3a193fe90d52e43bb4d87bff1f", size = 2050238, upload-time = "2025-10-14T10:20:09.766Z" },
{ url = "https://files.pythonhosted.org/packages/5e/b9/78336345de97298cf53236b2f271912ce11f32c1e59de25a374ce12f9cce/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:15dd504af121caaf2c95cb90c0ebf71603c53de98305621b94da0f967e572def", size = 2249424, upload-time = "2025-10-14T10:20:11.732Z" },
{ url = "https://files.pythonhosted.org/packages/99/bb/a4584888b70ee594c3d374a71af5075a68654d6c780369df269118af7402/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3a926768ea49a8af4d36abd6a8968b8790f7f76dd7cbd5a4c180db2b4ac9a3a2", size = 2366047, upload-time = "2025-10-14T10:20:13.647Z" },
{ url = "https://files.pythonhosted.org/packages/5f/8d/17fc5de9d6418e4d2ae8c675f905cdafdc59d3bf3bf9c946b7ab796a992a/pydantic_core-2.41.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6916b9b7d134bff5440098a4deb80e4cb623e68974a87883299de9124126c2a8", size = 2071163, upload-time = "2025-10-14T10:20:15.307Z" },
{ url = "https://files.pythonhosted.org/packages/54/e7/03d2c5c0b8ed37a4617430db68ec5e7dbba66358b629cd69e11b4d564367/pydantic_core-2.41.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5cf90535979089df02e6f17ffd076f07237efa55b7343d98760bde8743c4b265", size = 2190585, upload-time = "2025-10-14T10:20:17.3Z" },
{ url = "https://files.pythonhosted.org/packages/be/fc/15d1c9fe5ad9266a5897d9b932b7f53d7e5cfc800573917a2c5d6eea56ec/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:7533c76fa647fade2d7ec75ac5cc079ab3f34879626dae5689b27790a6cf5a5c", size = 2150109, upload-time = "2025-10-14T10:20:19.143Z" },
{ url = "https://files.pythonhosted.org/packages/26/ef/e735dd008808226c83ba56972566138665b71477ad580fa5a21f0851df48/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:37e516bca9264cbf29612539801ca3cd5d1be465f940417b002905e6ed79d38a", size = 2315078, upload-time = "2025-10-14T10:20:20.742Z" },
{ url = "https://files.pythonhosted.org/packages/90/00/806efdcf35ff2ac0f938362350cd9827b8afb116cc814b6b75cf23738c7c/pydantic_core-2.41.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:0c19cb355224037c83642429b8ce261ae108e1c5fbf5c028bac63c77b0f8646e", size = 2318737, upload-time = "2025-10-14T10:20:22.306Z" },
{ url = "https://files.pythonhosted.org/packages/41/7e/6ac90673fe6cb36621a2283552897838c020db343fa86e513d3f563b196f/pydantic_core-2.41.4-cp311-cp311-win32.whl", hash = "sha256:09c2a60e55b357284b5f31f5ab275ba9f7f70b7525e18a132ec1f9160b4f1f03", size = 1974160, upload-time = "2025-10-14T10:20:23.817Z" },
{ url = "https://files.pythonhosted.org/packages/e0/9d/7c5e24ee585c1f8b6356e1d11d40ab807ffde44d2db3b7dfd6d20b09720e/pydantic_core-2.41.4-cp311-cp311-win_amd64.whl", hash = "sha256:711156b6afb5cb1cb7c14a2cc2c4a8b4c717b69046f13c6b332d8a0a8f41ca3e", size = 2021883, upload-time = "2025-10-14T10:20:25.48Z" },
{ url = "https://files.pythonhosted.org/packages/33/90/5c172357460fc28b2871eb4a0fb3843b136b429c6fa827e4b588877bf115/pydantic_core-2.41.4-cp311-cp311-win_arm64.whl", hash = "sha256:6cb9cf7e761f4f8a8589a45e49ed3c0d92d1d696a45a6feaee8c904b26efc2db", size = 1968026, upload-time = "2025-10-14T10:20:27.039Z" },
{ url = "https://files.pythonhosted.org/packages/e9/81/d3b3e95929c4369d30b2a66a91db63c8ed0a98381ae55a45da2cd1cc1288/pydantic_core-2.41.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:ab06d77e053d660a6faaf04894446df7b0a7e7aba70c2797465a0a1af00fc887", size = 2099043, upload-time = "2025-10-14T10:20:28.561Z" },
{ url = "https://files.pythonhosted.org/packages/58/da/46fdac49e6717e3a94fc9201403e08d9d61aa7a770fab6190b8740749047/pydantic_core-2.41.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c53ff33e603a9c1179a9364b0a24694f183717b2e0da2b5ad43c316c956901b2", size = 1910699, upload-time = "2025-10-14T10:20:30.217Z" },
{ url = "https://files.pythonhosted.org/packages/1e/63/4d948f1b9dd8e991a5a98b77dd66c74641f5f2e5225fee37994b2e07d391/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:304c54176af2c143bd181d82e77c15c41cbacea8872a2225dd37e6544dce9999", size = 1952121, upload-time = "2025-10-14T10:20:32.246Z" },
{ url = "https://files.pythonhosted.org/packages/b2/a7/e5fc60a6f781fc634ecaa9ecc3c20171d238794cef69ae0af79ac11b89d7/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:025ba34a4cf4fb32f917d5d188ab5e702223d3ba603be4d8aca2f82bede432a4", size = 2041590, upload-time = "2025-10-14T10:20:34.332Z" },
{ url = "https://files.pythonhosted.org/packages/70/69/dce747b1d21d59e85af433428978a1893c6f8a7068fa2bb4a927fba7a5ff/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b9f5f30c402ed58f90c70e12eff65547d3ab74685ffe8283c719e6bead8ef53f", size = 2219869, upload-time = "2025-10-14T10:20:35.965Z" },
{ url = "https://files.pythonhosted.org/packages/83/6a/c070e30e295403bf29c4df1cb781317b6a9bac7cd07b8d3acc94d501a63c/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dd96e5d15385d301733113bcaa324c8bcf111275b7675a9c6e88bfb19fc05e3b", size = 2345169, upload-time = "2025-10-14T10:20:37.627Z" },
{ url = "https://files.pythonhosted.org/packages/f0/83/06d001f8043c336baea7fd202a9ac7ad71f87e1c55d8112c50b745c40324/pydantic_core-2.41.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98f348cbb44fae6e9653c1055db7e29de67ea6a9ca03a5fa2c2e11a47cff0e47", size = 2070165, upload-time = "2025-10-14T10:20:39.246Z" },
{ url = "https://files.pythonhosted.org/packages/14/0a/e567c2883588dd12bcbc110232d892cf385356f7c8a9910311ac997ab715/pydantic_core-2.41.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ec22626a2d14620a83ca583c6f5a4080fa3155282718b6055c2ea48d3ef35970", size = 2189067, upload-time = "2025-10-14T10:20:41.015Z" },
{ url = "https://files.pythonhosted.org/packages/f4/1d/3d9fca34273ba03c9b1c5289f7618bc4bd09c3ad2289b5420481aa051a99/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3a95d4590b1f1a43bf33ca6d647b990a88f4a3824a8c4572c708f0b45a5290ed", size = 2132997, upload-time = "2025-10-14T10:20:43.106Z" },
{ url = "https://files.pythonhosted.org/packages/52/70/d702ef7a6cd41a8afc61f3554922b3ed8d19dd54c3bd4bdbfe332e610827/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:f9672ab4d398e1b602feadcffcdd3af44d5f5e6ddc15bc7d15d376d47e8e19f8", size = 2307187, upload-time = "2025-10-14T10:20:44.849Z" },
{ url = "https://files.pythonhosted.org/packages/68/4c/c06be6e27545d08b802127914156f38d10ca287a9e8489342793de8aae3c/pydantic_core-2.41.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:84d8854db5f55fead3b579f04bda9a36461dab0730c5d570e1526483e7bb8431", size = 2305204, upload-time = "2025-10-14T10:20:46.781Z" },
{ url = "https://files.pythonhosted.org/packages/b0/e5/35ae4919bcd9f18603419e23c5eaf32750224a89d41a8df1a3704b69f77e/pydantic_core-2.41.4-cp312-cp312-win32.whl", hash = "sha256:9be1c01adb2ecc4e464392c36d17f97e9110fbbc906bcbe1c943b5b87a74aabd", size = 1972536, upload-time = "2025-10-14T10:20:48.39Z" },
{ url = "https://files.pythonhosted.org/packages/1e/c2/49c5bb6d2a49eb2ee3647a93e3dae7080c6409a8a7558b075027644e879c/pydantic_core-2.41.4-cp312-cp312-win_amd64.whl", hash = "sha256:d682cf1d22bab22a5be08539dca3d1593488a99998f9f412137bc323179067ff", size = 2031132, upload-time = "2025-10-14T10:20:50.421Z" },
{ url = "https://files.pythonhosted.org/packages/06/23/936343dbcba6eec93f73e95eb346810fc732f71ba27967b287b66f7b7097/pydantic_core-2.41.4-cp312-cp312-win_arm64.whl", hash = "sha256:833eebfd75a26d17470b58768c1834dfc90141b7afc6eb0429c21fc5a21dcfb8", size = 1969483, upload-time = "2025-10-14T10:20:52.35Z" },
{ url = "https://files.pythonhosted.org/packages/13/d0/c20adabd181a029a970738dfe23710b52a31f1258f591874fcdec7359845/pydantic_core-2.41.4-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:85e050ad9e5f6fe1004eec65c914332e52f429bc0ae12d6fa2092407a462c746", size = 2105688, upload-time = "2025-10-14T10:20:54.448Z" },
{ url = "https://files.pythonhosted.org/packages/00/b6/0ce5c03cec5ae94cca220dfecddc453c077d71363b98a4bbdb3c0b22c783/pydantic_core-2.41.4-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:e7393f1d64792763a48924ba31d1e44c2cfbc05e3b1c2c9abb4ceeadd912cced", size = 1910807, upload-time = "2025-10-14T10:20:56.115Z" },
{ url = "https://files.pythonhosted.org/packages/68/3e/800d3d02c8beb0b5c069c870cbb83799d085debf43499c897bb4b4aaff0d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:94dab0940b0d1fb28bcab847adf887c66a27a40291eedf0b473be58761c9799a", size = 1956669, upload-time = "2025-10-14T10:20:57.874Z" },
{ url = "https://files.pythonhosted.org/packages/60/a4/24271cc71a17f64589be49ab8bd0751f6a0a03046c690df60989f2f95c2c/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:de7c42f897e689ee6f9e93c4bec72b99ae3b32a2ade1c7e4798e690ff5246e02", size = 2051629, upload-time = "2025-10-14T10:21:00.006Z" },
{ url = "https://files.pythonhosted.org/packages/68/de/45af3ca2f175d91b96bfb62e1f2d2f1f9f3b14a734afe0bfeff079f78181/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:664b3199193262277b8b3cd1e754fb07f2c6023289c815a1e1e8fb415cb247b1", size = 2224049, upload-time = "2025-10-14T10:21:01.801Z" },
{ url = "https://files.pythonhosted.org/packages/af/8f/ae4e1ff84672bf869d0a77af24fd78387850e9497753c432875066b5d622/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d95b253b88f7d308b1c0b417c4624f44553ba4762816f94e6986819b9c273fb2", size = 2342409, upload-time = "2025-10-14T10:21:03.556Z" },
{ url = "https://files.pythonhosted.org/packages/18/62/273dd70b0026a085c7b74b000394e1ef95719ea579c76ea2f0cc8893736d/pydantic_core-2.41.4-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a1351f5bbdbbabc689727cb91649a00cb9ee7203e0a6e54e9f5ba9e22e384b84", size = 2069635, upload-time = "2025-10-14T10:21:05.385Z" },
{ url = "https://files.pythonhosted.org/packages/30/03/cf485fff699b4cdaea469bc481719d3e49f023241b4abb656f8d422189fc/pydantic_core-2.41.4-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1affa4798520b148d7182da0615d648e752de4ab1a9566b7471bc803d88a062d", size = 2194284, upload-time = "2025-10-14T10:21:07.122Z" },
{ url = "https://files.pythonhosted.org/packages/f9/7e/c8e713db32405dfd97211f2fc0a15d6bf8adb7640f3d18544c1f39526619/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:7b74e18052fea4aa8dea2fb7dbc23d15439695da6cbe6cfc1b694af1115df09d", size = 2137566, upload-time = "2025-10-14T10:21:08.981Z" },
{ url = "https://files.pythonhosted.org/packages/04/f7/db71fd4cdccc8b75990f79ccafbbd66757e19f6d5ee724a6252414483fb4/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:285b643d75c0e30abda9dc1077395624f314a37e3c09ca402d4015ef5979f1a2", size = 2316809, upload-time = "2025-10-14T10:21:10.805Z" },
{ url = "https://files.pythonhosted.org/packages/76/63/a54973ddb945f1bca56742b48b144d85c9fc22f819ddeb9f861c249d5464/pydantic_core-2.41.4-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:f52679ff4218d713b3b33f88c89ccbf3a5c2c12ba665fb80ccc4192b4608dbab", size = 2311119, upload-time = "2025-10-14T10:21:12.583Z" },
{ url = "https://files.pythonhosted.org/packages/f8/03/5d12891e93c19218af74843a27e32b94922195ded2386f7b55382f904d2f/pydantic_core-2.41.4-cp313-cp313-win32.whl", hash = "sha256:ecde6dedd6fff127c273c76821bb754d793be1024bc33314a120f83a3c69460c", size = 1981398, upload-time = "2025-10-14T10:21:14.584Z" },
{ url = "https://files.pythonhosted.org/packages/be/d8/fd0de71f39db91135b7a26996160de71c073d8635edfce8b3c3681be0d6d/pydantic_core-2.41.4-cp313-cp313-win_amd64.whl", hash = "sha256:d081a1f3800f05409ed868ebb2d74ac39dd0c1ff6c035b5162356d76030736d4", size = 2030735, upload-time = "2025-10-14T10:21:16.432Z" },
{ url = "https://files.pythonhosted.org/packages/72/86/c99921c1cf6650023c08bfab6fe2d7057a5142628ef7ccfa9921f2dda1d5/pydantic_core-2.41.4-cp313-cp313-win_arm64.whl", hash = "sha256:f8e49c9c364a7edcbe2a310f12733aad95b022495ef2a8d653f645e5d20c1564", size = 1973209, upload-time = "2025-10-14T10:21:18.213Z" },
{ url = "https://files.pythonhosted.org/packages/36/0d/b5706cacb70a8414396efdda3d72ae0542e050b591119e458e2490baf035/pydantic_core-2.41.4-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:ed97fd56a561f5eb5706cebe94f1ad7c13b84d98312a05546f2ad036bafe87f4", size = 1877324, upload-time = "2025-10-14T10:21:20.363Z" },
{ url = "https://files.pythonhosted.org/packages/de/2d/cba1fa02cfdea72dfb3a9babb067c83b9dff0bbcb198368e000a6b756ea7/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a870c307bf1ee91fc58a9a61338ff780d01bfae45922624816878dce784095d2", size = 1884515, upload-time = "2025-10-14T10:21:22.339Z" },
{ url = "https://files.pythonhosted.org/packages/07/ea/3df927c4384ed9b503c9cc2d076cf983b4f2adb0c754578dfb1245c51e46/pydantic_core-2.41.4-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d25e97bc1f5f8f7985bdc2335ef9e73843bb561eb1fa6831fdfc295c1c2061cf", size = 2042819, upload-time = "2025-10-14T10:21:26.683Z" },
{ url = "https://files.pythonhosted.org/packages/6a/ee/df8e871f07074250270a3b1b82aad4cd0026b588acd5d7d3eb2fcb1471a3/pydantic_core-2.41.4-cp313-cp313t-win_amd64.whl", hash = "sha256:d405d14bea042f166512add3091c1af40437c2e7f86988f3915fabd27b1e9cd2", size = 1995866, upload-time = "2025-10-14T10:21:28.951Z" },
{ url = "https://files.pythonhosted.org/packages/fc/de/b20f4ab954d6d399499c33ec4fafc46d9551e11dc1858fb7f5dca0748ceb/pydantic_core-2.41.4-cp313-cp313t-win_arm64.whl", hash = "sha256:19f3684868309db5263a11bace3c45d93f6f24afa2ffe75a647583df22a2ff89", size = 1970034, upload-time = "2025-10-14T10:21:30.869Z" },
{ url = "https://files.pythonhosted.org/packages/54/28/d3325da57d413b9819365546eb9a6e8b7cbd9373d9380efd5f74326143e6/pydantic_core-2.41.4-cp314-cp314-macosx_10_12_x86_64.whl", hash = "sha256:e9205d97ed08a82ebb9a307e92914bb30e18cdf6f6b12ca4bedadb1588a0bfe1", size = 2102022, upload-time = "2025-10-14T10:21:32.809Z" },
{ url = "https://files.pythonhosted.org/packages/9e/24/b58a1bc0d834bf1acc4361e61233ee217169a42efbdc15a60296e13ce438/pydantic_core-2.41.4-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:82df1f432b37d832709fbcc0e24394bba04a01b6ecf1ee87578145c19cde12ac", size = 1905495, upload-time = "2025-10-14T10:21:34.812Z" },
{ url = "https://files.pythonhosted.org/packages/fb/a4/71f759cc41b7043e8ecdaab81b985a9b6cad7cec077e0b92cff8b71ecf6b/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fc3b4cc4539e055cfa39a3763c939f9d409eb40e85813257dcd761985a108554", size = 1956131, upload-time = "2025-10-14T10:21:36.924Z" },
{ url = "https://files.pythonhosted.org/packages/b0/64/1e79ac7aa51f1eec7c4cda8cbe456d5d09f05fdd68b32776d72168d54275/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:b1eb1754fce47c63d2ff57fdb88c351a6c0150995890088b33767a10218eaa4e", size = 2052236, upload-time = "2025-10-14T10:21:38.927Z" },
{ url = "https://files.pythonhosted.org/packages/e9/e3/a3ffc363bd4287b80f1d43dc1c28ba64831f8dfc237d6fec8f2661138d48/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:e6ab5ab30ef325b443f379ddb575a34969c333004fca5a1daa0133a6ffaad616", size = 2223573, upload-time = "2025-10-14T10:21:41.574Z" },
{ url = "https://files.pythonhosted.org/packages/28/27/78814089b4d2e684a9088ede3790763c64693c3d1408ddc0a248bc789126/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:31a41030b1d9ca497634092b46481b937ff9397a86f9f51bd41c4767b6fc04af", size = 2342467, upload-time = "2025-10-14T10:21:44.018Z" },
{ url = "https://files.pythonhosted.org/packages/92/97/4de0e2a1159cb85ad737e03306717637842c88c7fd6d97973172fb183149/pydantic_core-2.41.4-cp314-cp314-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a44ac1738591472c3d020f61c6df1e4015180d6262ebd39bf2aeb52571b60f12", size = 2063754, upload-time = "2025-10-14T10:21:46.466Z" },
{ url = "https://files.pythonhosted.org/packages/0f/50/8cb90ce4b9efcf7ae78130afeb99fd1c86125ccdf9906ef64b9d42f37c25/pydantic_core-2.41.4-cp314-cp314-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d72f2b5e6e82ab8f94ea7d0d42f83c487dc159c5240d8f83beae684472864e2d", size = 2196754, upload-time = "2025-10-14T10:21:48.486Z" },
{ url = "https://files.pythonhosted.org/packages/34/3b/ccdc77af9cd5082723574a1cc1bcae7a6acacc829d7c0a06201f7886a109/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_aarch64.whl", hash = "sha256:c4d1e854aaf044487d31143f541f7aafe7b482ae72a022c664b2de2e466ed0ad", size = 2137115, upload-time = "2025-10-14T10:21:50.63Z" },
{ url = "https://files.pythonhosted.org/packages/ca/ba/e7c7a02651a8f7c52dc2cff2b64a30c313e3b57c7d93703cecea76c09b71/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_armv7l.whl", hash = "sha256:b568af94267729d76e6ee5ececda4e283d07bbb28e8148bb17adad93d025d25a", size = 2317400, upload-time = "2025-10-14T10:21:52.959Z" },
{ url = "https://files.pythonhosted.org/packages/2c/ba/6c533a4ee8aec6b812c643c49bb3bd88d3f01e3cebe451bb85512d37f00f/pydantic_core-2.41.4-cp314-cp314-musllinux_1_1_x86_64.whl", hash = "sha256:6d55fb8b1e8929b341cc313a81a26e0d48aa3b519c1dbaadec3a6a2b4fcad025", size = 2312070, upload-time = "2025-10-14T10:21:55.419Z" },
{ url = "https://files.pythonhosted.org/packages/22/ae/f10524fcc0ab8d7f96cf9a74c880243576fd3e72bd8ce4f81e43d22bcab7/pydantic_core-2.41.4-cp314-cp314-win32.whl", hash = "sha256:5b66584e549e2e32a1398df11da2e0a7eff45d5c2d9db9d5667c5e6ac764d77e", size = 1982277, upload-time = "2025-10-14T10:21:57.474Z" },
{ url = "https://files.pythonhosted.org/packages/b4/dc/e5aa27aea1ad4638f0c3fb41132f7eb583bd7420ee63204e2d4333a3bbf9/pydantic_core-2.41.4-cp314-cp314-win_amd64.whl", hash = "sha256:557a0aab88664cc552285316809cab897716a372afaf8efdbef756f8b890e894", size = 2024608, upload-time = "2025-10-14T10:21:59.557Z" },
{ url = "https://files.pythonhosted.org/packages/3e/61/51d89cc2612bd147198e120a13f150afbf0bcb4615cddb049ab10b81b79e/pydantic_core-2.41.4-cp314-cp314-win_arm64.whl", hash = "sha256:3f1ea6f48a045745d0d9f325989d8abd3f1eaf47dd00485912d1a3a63c623a8d", size = 1967614, upload-time = "2025-10-14T10:22:01.847Z" },
{ url = "https://files.pythonhosted.org/packages/0d/c2/472f2e31b95eff099961fa050c376ab7156a81da194f9edb9f710f68787b/pydantic_core-2.41.4-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:6c1fe4c5404c448b13188dd8bd2ebc2bdd7e6727fa61ff481bcc2cca894018da", size = 1876904, upload-time = "2025-10-14T10:22:04.062Z" },
{ url = "https://files.pythonhosted.org/packages/4a/07/ea8eeb91173807ecdae4f4a5f4b150a520085b35454350fc219ba79e66a3/pydantic_core-2.41.4-cp314-cp314t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:523e7da4d43b113bf8e7b49fa4ec0c35bf4fe66b2230bfc5c13cc498f12c6c3e", size = 1882538, upload-time = "2025-10-14T10:22:06.39Z" },
{ url = "https://files.pythonhosted.org/packages/1e/29/b53a9ca6cd366bfc928823679c6a76c7a4c69f8201c0ba7903ad18ebae2f/pydantic_core-2.41.4-cp314-cp314t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5729225de81fb65b70fdb1907fcf08c75d498f4a6f15af005aabb1fdadc19dfa", size = 2041183, upload-time = "2025-10-14T10:22:08.812Z" },
{ url = "https://files.pythonhosted.org/packages/c7/3d/f8c1a371ceebcaf94d6dd2d77c6cf4b1c078e13a5837aee83f760b4f7cfd/pydantic_core-2.41.4-cp314-cp314t-win_amd64.whl", hash = "sha256:de2cfbb09e88f0f795fd90cf955858fc2c691df65b1f21f0aa00b99f3fbc661d", size = 1993542, upload-time = "2025-10-14T10:22:11.332Z" },
{ url = "https://files.pythonhosted.org/packages/8a/ac/9fc61b4f9d079482a290afe8d206b8f490e9fd32d4fc03ed4fc698214e01/pydantic_core-2.41.4-cp314-cp314t-win_arm64.whl", hash = "sha256:d34f950ae05a83e0ede899c595f312ca976023ea1db100cd5aa188f7005e3ab0", size = 1973897, upload-time = "2025-10-14T10:22:13.444Z" },
{ url = "https://files.pythonhosted.org/packages/b0/12/5ba58daa7f453454464f92b3ca7b9d7c657d8641c48e370c3ebc9a82dd78/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_10_12_x86_64.whl", hash = "sha256:a1b2cfec3879afb742a7b0bcfa53e4f22ba96571c9e54d6a3afe1052d17d843b", size = 2122139, upload-time = "2025-10-14T10:22:47.288Z" },
{ url = "https://files.pythonhosted.org/packages/21/fb/6860126a77725c3108baecd10fd3d75fec25191d6381b6eb2ac660228eac/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-macosx_11_0_arm64.whl", hash = "sha256:d175600d975b7c244af6eb9c9041f10059f20b8bbffec9e33fdd5ee3f67cdc42", size = 1936674, upload-time = "2025-10-14T10:22:49.555Z" },
{ url = "https://files.pythonhosted.org/packages/de/be/57dcaa3ed595d81f8757e2b44a38240ac5d37628bce25fb20d02c7018776/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f184d657fa4947ae5ec9c47bd7e917730fa1cbb78195037e32dcbab50aca5ee", size = 1956398, upload-time = "2025-10-14T10:22:52.19Z" },
{ url = "https://files.pythonhosted.org/packages/2f/1d/679a344fadb9695f1a6a294d739fbd21d71fa023286daeea8c0ed49e7c2b/pydantic_core-2.41.4-graalpy311-graalpy242_311_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1ed810568aeffed3edc78910af32af911c835cc39ebbfacd1f0ab5dd53028e5c", size = 2138674, upload-time = "2025-10-14T10:22:54.499Z" },
{ url = "https://files.pythonhosted.org/packages/c4/48/ae937e5a831b7c0dc646b2ef788c27cd003894882415300ed21927c21efa/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_10_12_x86_64.whl", hash = "sha256:4f5d640aeebb438517150fdeec097739614421900e4a08db4a3ef38898798537", size = 2112087, upload-time = "2025-10-14T10:22:56.818Z" },
{ url = "https://files.pythonhosted.org/packages/5e/db/6db8073e3d32dae017da7e0d16a9ecb897d0a4d92e00634916e486097961/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-macosx_11_0_arm64.whl", hash = "sha256:4a9ab037b71927babc6d9e7fc01aea9e66dc2a4a34dff06ef0724a4049629f94", size = 1920387, upload-time = "2025-10-14T10:22:59.342Z" },
{ url = "https://files.pythonhosted.org/packages/0d/c1/dd3542d072fcc336030d66834872f0328727e3b8de289c662faa04aa270e/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e4dab9484ec605c3016df9ad4fd4f9a390bc5d816a3b10c6550f8424bb80b18c", size = 1951495, upload-time = "2025-10-14T10:23:02.089Z" },
{ url = "https://files.pythonhosted.org/packages/2b/c6/db8d13a1f8ab3f1eb08c88bd00fd62d44311e3456d1e85c0e59e0a0376e7/pydantic_core-2.41.4-graalpy312-graalpy250_312_native-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bd8a5028425820731d8c6c098ab642d7b8b999758e24acae03ed38a66eca8335", size = 2139008, upload-time = "2025-10-14T10:23:04.539Z" },
{ url = "https://files.pythonhosted.org/packages/5d/d4/912e976a2dd0b49f31c98a060ca90b353f3b73ee3ea2fd0030412f6ac5ec/pydantic_core-2.41.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:1e5ab4fc177dd41536b3c32b2ea11380dd3d4619a385860621478ac2d25ceb00", size = 2106739, upload-time = "2025-10-14T10:23:06.934Z" },
{ url = "https://files.pythonhosted.org/packages/71/f0/66ec5a626c81eba326072d6ee2b127f8c139543f1bf609b4842978d37833/pydantic_core-2.41.4-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:3d88d0054d3fa11ce936184896bed3c1c5441d6fa483b498fac6a5d0dd6f64a9", size = 1932549, upload-time = "2025-10-14T10:23:09.24Z" },
{ url = "https://files.pythonhosted.org/packages/c4/af/625626278ca801ea0a658c2dcf290dc9f21bb383098e99e7c6a029fccfc0/pydantic_core-2.41.4-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7b2a054a8725f05b4b6503357e0ac1c4e8234ad3b0c2ac130d6ffc66f0e170e2", size = 2135093, upload-time = "2025-10-14T10:23:11.626Z" },
{ url = "https://files.pythonhosted.org/packages/20/f6/2fba049f54e0f4975fef66be654c597a1d005320fa141863699180c7697d/pydantic_core-2.41.4-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b0d9db5a161c99375a0c68c058e227bee1d89303300802601d76a3d01f74e258", size = 2187971, upload-time = "2025-10-14T10:23:14.437Z" },
{ url = "https://files.pythonhosted.org/packages/0e/80/65ab839a2dfcd3b949202f9d920c34f9de5a537c3646662bdf2f7d999680/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:6273ea2c8ffdac7b7fda2653c49682db815aebf4a89243a6feccf5e36c18c347", size = 2147939, upload-time = "2025-10-14T10:23:16.831Z" },
{ url = "https://files.pythonhosted.org/packages/44/58/627565d3d182ce6dfda18b8e1c841eede3629d59c9d7cbc1e12a03aeb328/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:4c973add636efc61de22530b2ef83a65f39b6d6f656df97f678720e20de26caa", size = 2311400, upload-time = "2025-10-14T10:23:19.234Z" },
{ url = "https://files.pythonhosted.org/packages/24/06/8a84711162ad5a5f19a88cead37cca81b4b1f294f46260ef7334ae4f24d3/pydantic_core-2.41.4-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:b69d1973354758007f46cf2d44a4f3d0933f10b6dc9bf15cf1356e037f6f731a", size = 2316840, upload-time = "2025-10-14T10:23:21.738Z" },
{ url = "https://files.pythonhosted.org/packages/aa/8b/b7bb512a4682a2f7fbfae152a755d37351743900226d29bd953aaf870eaa/pydantic_core-2.41.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:3619320641fd212aaf5997b6ca505e97540b7e16418f4a241f44cdf108ffb50d", size = 2149135, upload-time = "2025-10-14T10:23:24.379Z" },
{ url = "https://files.pythonhosted.org/packages/7e/7d/138e902ed6399b866f7cfe4435d22445e16fff888a1c00560d9dc79a780f/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:491535d45cd7ad7e4a2af4a5169b0d07bebf1adfd164b0368da8aa41e19907a5", size = 2104721, upload-time = "2025-10-14T10:23:26.906Z" },
{ url = "https://files.pythonhosted.org/packages/47/13/0525623cf94627f7b53b4c2034c81edc8491cbfc7c28d5447fa318791479/pydantic_core-2.41.4-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:54d86c0cada6aba4ec4c047d0e348cbad7063b87ae0f005d9f8c9ad04d4a92a2", size = 1931608, upload-time = "2025-10-14T10:23:29.306Z" },
{ url = "https://files.pythonhosted.org/packages/d6/f9/744bc98137d6ef0a233f808bfc9b18cf94624bf30836a18d3b05d08bf418/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:eca1124aced216b2500dc2609eade086d718e8249cb9696660ab447d50a758bd", size = 2132986, upload-time = "2025-10-14T10:23:32.057Z" },
{ url = "https://files.pythonhosted.org/packages/17/c8/629e88920171173f6049386cc71f893dff03209a9ef32b4d2f7e7c264bcf/pydantic_core-2.41.4-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6c9024169becccf0cb470ada03ee578d7348c119a0d42af3dcf9eda96e3a247c", size = 2187516, upload-time = "2025-10-14T10:23:34.871Z" },
{ url = "https://files.pythonhosted.org/packages/2e/0f/4f2734688d98488782218ca61bcc118329bf5de05bb7fe3adc7dd79b0b86/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:26895a4268ae5a2849269f4991cdc97236e4b9c010e51137becf25182daac405", size = 2146146, upload-time = "2025-10-14T10:23:37.342Z" },
{ url = "https://files.pythonhosted.org/packages/ed/f2/ab385dbd94a052c62224b99cf99002eee99dbec40e10006c78575aead256/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:ca4df25762cf71308c446e33c9b1fdca2923a3f13de616e2a949f38bf21ff5a8", size = 2311296, upload-time = "2025-10-14T10:23:40.145Z" },
{ url = "https://files.pythonhosted.org/packages/fc/8e/e4f12afe1beeb9823bba5375f8f258df0cc61b056b0195fb1cf9f62a1a58/pydantic_core-2.41.4-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:5a28fcedd762349519276c36634e71853b4541079cab4acaaac60c4421827308", size = 2315386, upload-time = "2025-10-14T10:23:42.624Z" },
{ url = "https://files.pythonhosted.org/packages/48/f7/925f65d930802e3ea2eb4d5afa4cb8730c8dc0d2cb89a59dc4ed2fcb2d74/pydantic_core-2.41.4-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:c173ddcd86afd2535e2b695217e82191580663a1d1928239f877f5a1649ef39f", size = 2147775, upload-time = "2025-10-14T10:23:45.406Z" },
]

[[package]]
@@ -1327,6 +1487,124 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/33/e8/e40370e6d74ddba47f002a32919d91310d6074130fe4e17dabcafc15cbf1/watchdog-6.0.0-py3-none-win_ia64.whl", hash = "sha256:a1914259fa9e1454315171103c6a30961236f508b9b623eae470268bbcc6a22f", size = 79067, upload-time = "2024-11-01T14:07:11.845Z" },
]

[[package]]
name = "xxhash"
version = "3.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/02/84/30869e01909fb37a6cc7e18688ee8bf1e42d57e7e0777636bd47524c43c7/xxhash-3.6.0.tar.gz", hash = "sha256:f0162a78b13a0d7617b2845b90c763339d1f1d82bb04a4b07f4ab535cc5e05d6", size = 85160, upload-time = "2025-10-02T14:37:08.097Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/34/ee/f9f1d656ad168681bb0f6b092372c1e533c4416b8069b1896a175c46e484/xxhash-3.6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:87ff03d7e35c61435976554477a7f4cd1704c3596a89a8300d5ce7fc83874a71", size = 32845, upload-time = "2025-10-02T14:33:51.573Z" },
{ url = "https://files.pythonhosted.org/packages/a3/b1/93508d9460b292c74a09b83d16750c52a0ead89c51eea9951cb97a60d959/xxhash-3.6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f572dfd3d0e2eb1a57511831cf6341242f5a9f8298a45862d085f5b93394a27d", size = 30807, upload-time = "2025-10-02T14:33:52.964Z" },
{ url = "https://files.pythonhosted.org/packages/07/55/28c93a3662f2d200c70704efe74aab9640e824f8ce330d8d3943bf7c9b3c/xxhash-3.6.0-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:89952ea539566b9fed2bbd94e589672794b4286f342254fad28b149f9615fef8", size = 193786, upload-time = "2025-10-02T14:33:54.272Z" },
{ url = "https://files.pythonhosted.org/packages/c1/96/fec0be9bb4b8f5d9c57d76380a366f31a1781fb802f76fc7cda6c84893c7/xxhash-3.6.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:48e6f2ffb07a50b52465a1032c3cf1f4a5683f944acaca8a134a2f23674c2058", size = 212830, upload-time = "2025-10-02T14:33:55.706Z" },
{ url = "https://files.pythonhosted.org/packages/c4/a0/c706845ba77b9611f81fd2e93fad9859346b026e8445e76f8c6fd057cc6d/xxhash-3.6.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b5b848ad6c16d308c3ac7ad4ba6bede80ed5df2ba8ed382f8932df63158dd4b2", size = 211606, upload-time = "2025-10-02T14:33:57.133Z" },
{ url = "https://files.pythonhosted.org/packages/67/1e/164126a2999e5045f04a69257eea946c0dc3e86541b400d4385d646b53d7/xxhash-3.6.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a034590a727b44dd8ac5914236a7b8504144447a9682586c3327e935f33ec8cc", size = 444872, upload-time = "2025-10-02T14:33:58.446Z" },
{ url = "https://files.pythonhosted.org/packages/2d/4b/55ab404c56cd70a2cf5ecfe484838865d0fea5627365c6c8ca156bd09c8f/xxhash-3.6.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a8f1972e75ebdd161d7896743122834fe87378160c20e97f8b09166213bf8cc", size = 193217, upload-time = "2025-10-02T14:33:59.724Z" },
{ url = "https://files.pythonhosted.org/packages/45/e6/52abf06bac316db33aa269091ae7311bd53cfc6f4b120ae77bac1b348091/xxhash-3.6.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:ee34327b187f002a596d7b167ebc59a1b729e963ce645964bbc050d2f1b73d07", size = 210139, upload-time = "2025-10-02T14:34:02.041Z" },
{ url = "https://files.pythonhosted.org/packages/34/37/db94d490b8691236d356bc249c08819cbcef9273a1a30acf1254ff9ce157/xxhash-3.6.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:339f518c3c7a850dd033ab416ea25a692759dc7478a71131fe8869010d2b75e4", size = 197669, upload-time = "2025-10-02T14:34:03.664Z" },
{ url = "https://files.pythonhosted.org/packages/b7/36/c4f219ef4a17a4f7a64ed3569bc2b5a9c8311abdb22249ac96093625b1a4/xxhash-3.6.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:bf48889c9630542d4709192578aebbd836177c9f7a4a2778a7d6340107c65f06", size = 210018, upload-time = "2025-10-02T14:34:05.325Z" },
{ url = "https://files.pythonhosted.org/packages/fd/06/bfac889a374fc2fc439a69223d1750eed2e18a7db8514737ab630534fa08/xxhash-3.6.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:5576b002a56207f640636056b4160a378fe36a58db73ae5c27a7ec8db35f71d4", size = 413058, upload-time = "2025-10-02T14:34:06.925Z" },
{ url = "https://files.pythonhosted.org/packages/c9/d1/555d8447e0dd32ad0930a249a522bb2e289f0d08b6b16204cfa42c1f5a0c/xxhash-3.6.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:af1f3278bd02814d6dedc5dec397993b549d6f16c19379721e5a1d31e132c49b", size = 190628, upload-time = "2025-10-02T14:34:08.669Z" },
{ url = "https://files.pythonhosted.org/packages/d1/15/8751330b5186cedc4ed4b597989882ea05e0408b53fa47bcb46a6125bfc6/xxhash-3.6.0-cp310-cp310-win32.whl", hash = "sha256:aed058764db109dc9052720da65fafe84873b05eb8b07e5e653597951af57c3b", size = 30577, upload-time = "2025-10-02T14:34:10.234Z" },
{ url = "https://files.pythonhosted.org/packages/bb/cc/53f87e8b5871a6eb2ff7e89c48c66093bda2be52315a8161ddc54ea550c4/xxhash-3.6.0-cp310-cp310-win_amd64.whl", hash = "sha256:e82da5670f2d0d98950317f82a0e4a0197150ff19a6df2ba40399c2a3b9ae5fb", size = 31487, upload-time = "2025-10-02T14:34:11.618Z" },
{ url = "https://files.pythonhosted.org/packages/9f/00/60f9ea3bb697667a14314d7269956f58bf56bb73864f8f8d52a3c2535e9a/xxhash-3.6.0-cp310-cp310-win_arm64.whl", hash = "sha256:4a082ffff8c6ac07707fb6b671caf7c6e020c75226c561830b73d862060f281d", size = 27863, upload-time = "2025-10-02T14:34:12.619Z" },
{ url = "https://files.pythonhosted.org/packages/17/d4/cc2f0400e9154df4b9964249da78ebd72f318e35ccc425e9f403c392f22a/xxhash-3.6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b47bbd8cf2d72797f3c2772eaaac0ded3d3af26481a26d7d7d41dc2d3c46b04a", size = 32844, upload-time = "2025-10-02T14:34:14.037Z" },
{ url = "https://files.pythonhosted.org/packages/5e/ec/1cc11cd13e26ea8bc3cb4af4eaadd8d46d5014aebb67be3f71fb0b68802a/xxhash-3.6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2b6821e94346f96db75abaa6e255706fb06ebd530899ed76d32cd99f20dc52fa", size = 30809, upload-time = "2025-10-02T14:34:15.484Z" },
{ url = "https://files.pythonhosted.org/packages/04/5f/19fe357ea348d98ca22f456f75a30ac0916b51c753e1f8b2e0e6fb884cce/xxhash-3.6.0-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d0a9751f71a1a65ce3584e9cae4467651c7e70c9d31017fa57574583a4540248", size = 194665, upload-time = "2025-10-02T14:34:16.541Z" },
{ url = "https://files.pythonhosted.org/packages/90/3b/d1f1a8f5442a5fd8beedae110c5af7604dc37349a8e16519c13c19a9a2de/xxhash-3.6.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b29ee68625ab37b04c0b40c3fafdf24d2f75ccd778333cfb698f65f6c463f62", size = 213550, upload-time = "2025-10-02T14:34:17.878Z" },
{ url = "https://files.pythonhosted.org/packages/c4/ef/3a9b05eb527457d5db13a135a2ae1a26c80fecd624d20f3e8dcc4cb170f3/xxhash-3.6.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:6812c25fe0d6c36a46ccb002f40f27ac903bf18af9f6dd8f9669cb4d176ab18f", size = 212384, upload-time = "2025-10-02T14:34:19.182Z" },
{ url = "https://files.pythonhosted.org/packages/0f/18/ccc194ee698c6c623acbf0f8c2969811a8a4b6185af5e824cd27b9e4fd3e/xxhash-3.6.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4ccbff013972390b51a18ef1255ef5ac125c92dc9143b2d1909f59abc765540e", size = 445749, upload-time = "2025-10-02T14:34:20.659Z" },
{ url = "https://files.pythonhosted.org/packages/a5/86/cf2c0321dc3940a7aa73076f4fd677a0fb3e405cb297ead7d864fd90847e/xxhash-3.6.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:297b7fbf86c82c550e12e8fb71968b3f033d27b874276ba3624ea868c11165a8", size = 193880, upload-time = "2025-10-02T14:34:22.431Z" },
{ url = "https://files.pythonhosted.org/packages/82/fb/96213c8560e6f948a1ecc9a7613f8032b19ee45f747f4fca4eb31bb6d6ed/xxhash-3.6.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:dea26ae1eb293db089798d3973a5fc928a18fdd97cc8801226fae705b02b14b0", size = 210912, upload-time = "2025-10-02T14:34:23.937Z" },
{ url = "https://files.pythonhosted.org/packages/40/aa/4395e669b0606a096d6788f40dbdf2b819d6773aa290c19e6e83cbfc312f/xxhash-3.6.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:7a0b169aafb98f4284f73635a8e93f0735f9cbde17bd5ec332480484241aaa77", size = 198654, upload-time = "2025-10-02T14:34:25.644Z" },
{ url = "https://files.pythonhosted.org/packages/67/74/b044fcd6b3d89e9b1b665924d85d3f400636c23590226feb1eb09e1176ce/xxhash-3.6.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:08d45aef063a4531b785cd72de4887766d01dc8f362a515693df349fdb825e0c", size = 210867, upload-time = "2025-10-02T14:34:27.203Z" },
{ url = "https://files.pythonhosted.org/packages/bc/fd/3ce73bf753b08cb19daee1eb14aa0d7fe331f8da9c02dd95316ddfe5275e/xxhash-3.6.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:929142361a48ee07f09121fe9e96a84950e8d4df3bb298ca5d88061969f34d7b", size = 414012, upload-time = "2025-10-02T14:34:28.409Z" },
{ url = "https://files.pythonhosted.org/packages/ba/b3/5a4241309217c5c876f156b10778f3ab3af7ba7e3259e6d5f5c7d0129eb2/xxhash-3.6.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:51312c768403d8540487dbbfb557454cfc55589bbde6424456951f7fcd4facb3", size = 191409, upload-time = "2025-10-02T14:34:29.696Z" },
{ url = "https://files.pythonhosted.org/packages/c0/01/99bfbc15fb9abb9a72b088c1d95219fc4782b7d01fc835bd5744d66dd0b8/xxhash-3.6.0-cp311-cp311-win32.whl", hash = "sha256:d1927a69feddc24c987b337ce81ac15c4720955b667fe9b588e02254b80446fd", size = 30574, upload-time = "2025-10-02T14:34:31.028Z" },
{ url = "https://files.pythonhosted.org/packages/65/79/9d24d7f53819fe301b231044ea362ce64e86c74f6e8c8e51320de248b3e5/xxhash-3.6.0-cp311-cp311-win_amd64.whl", hash = "sha256:26734cdc2d4ffe449b41d186bbeac416f704a482ed835d375a5c0cb02bc63fef", size = 31481, upload-time = "2025-10-02T14:34:32.062Z" },
{ url = "https://files.pythonhosted.org/packages/30/4e/15cd0e3e8772071344eab2961ce83f6e485111fed8beb491a3f1ce100270/xxhash-3.6.0-cp311-cp311-win_arm64.whl", hash = "sha256:d72f67ef8bf36e05f5b6c65e8524f265bd61071471cd4cf1d36743ebeeeb06b7", size = 27861, upload-time = "2025-10-02T14:34:33.555Z" },
{ url = "https://files.pythonhosted.org/packages/9a/07/d9412f3d7d462347e4511181dea65e47e0d0e16e26fbee2ea86a2aefb657/xxhash-3.6.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:01362c4331775398e7bb34e3ab403bc9ee9f7c497bc7dee6272114055277dd3c", size = 32744, upload-time = "2025-10-02T14:34:34.622Z" },
{ url = "https://files.pythonhosted.org/packages/79/35/0429ee11d035fc33abe32dca1b2b69e8c18d236547b9a9b72c1929189b9a/xxhash-3.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b7b2df81a23f8cb99656378e72501b2cb41b1827c0f5a86f87d6b06b69f9f204", size = 30816, upload-time = "2025-10-02T14:34:36.043Z" },
{ url = "https://files.pythonhosted.org/packages/b7/f2/57eb99aa0f7d98624c0932c5b9a170e1806406cdbcdb510546634a1359e0/xxhash-3.6.0-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:dc94790144e66b14f67b10ac8ed75b39ca47536bf8800eb7c24b50271ea0c490", size = 194035, upload-time = "2025-10-02T14:34:37.354Z" },
{ url = "https://files.pythonhosted.org/packages/4c/ed/6224ba353690d73af7a3f1c7cdb1fc1b002e38f783cb991ae338e1eb3d79/xxhash-3.6.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:93f107c673bccf0d592cdba077dedaf52fe7f42dcd7676eba1f6d6f0c3efffd2", size = 212914, upload-time = "2025-10-02T14:34:38.6Z" },
{ url = "https://files.pythonhosted.org/packages/38/86/fb6b6130d8dd6b8942cc17ab4d90e223653a89aa32ad2776f8af7064ed13/xxhash-3.6.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2aa5ee3444c25b69813663c9f8067dcfaa2e126dc55e8dddf40f4d1c25d7effa", size = 212163, upload-time = "2025-10-02T14:34:39.872Z" },
{ url = "https://files.pythonhosted.org/packages/ee/dc/e84875682b0593e884ad73b2d40767b5790d417bde603cceb6878901d647/xxhash-3.6.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:f7f99123f0e1194fa59cc69ad46dbae2e07becec5df50a0509a808f90a0f03f0", size = 445411, upload-time = "2025-10-02T14:34:41.569Z" },
{ url = "https://files.pythonhosted.org/packages/11/4f/426f91b96701ec2f37bb2b8cec664eff4f658a11f3fa9d94f0a887ea6d2b/xxhash-3.6.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:49e03e6fe2cac4a1bc64952dd250cf0dbc5ef4ebb7b8d96bce82e2de163c82a2", size = 193883, upload-time = "2025-10-02T14:34:43.249Z" },
{ url = "https://files.pythonhosted.org/packages/53/5a/ddbb83eee8e28b778eacfc5a85c969673e4023cdeedcfcef61f36731610b/xxhash-3.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:bd17fede52a17a4f9a7bc4472a5867cb0b160deeb431795c0e4abe158bc784e9", size = 210392, upload-time = "2025-10-02T14:34:45.042Z" },
{ url = "https://files.pythonhosted.org/packages/1e/c2/ff69efd07c8c074ccdf0a4f36fcdd3d27363665bcdf4ba399abebe643465/xxhash-3.6.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:6fb5f5476bef678f69db04f2bd1efbed3030d2aba305b0fc1773645f187d6a4e", size = 197898, upload-time = "2025-10-02T14:34:46.302Z" },
{ url = "https://files.pythonhosted.org/packages/58/ca/faa05ac19b3b622c7c9317ac3e23954187516298a091eb02c976d0d3dd45/xxhash-3.6.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:843b52f6d88071f87eba1631b684fcb4b2068cd2180a0224122fe4ef011a9374", size = 210655, upload-time = "2025-10-02T14:34:47.571Z" },
{ url = "https://files.pythonhosted.org/packages/d4/7a/06aa7482345480cc0cb597f5c875b11a82c3953f534394f620b0be2f700c/xxhash-3.6.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:7d14a6cfaf03b1b6f5f9790f76880601ccc7896aff7ab9cd8978a939c1eb7e0d", size = 414001, upload-time = "2025-10-02T14:34:49.273Z" },
{ url = "https://files.pythonhosted.org/packages/23/07/63ffb386cd47029aa2916b3d2f454e6cc5b9f5c5ada3790377d5430084e7/xxhash-3.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:418daf3db71e1413cfe211c2f9a528456936645c17f46b5204705581a45390ae", size = 191431, upload-time = "2025-10-02T14:34:50.798Z" },
{ url = "https://files.pythonhosted.org/packages/0f/93/14fde614cadb4ddf5e7cebf8918b7e8fac5ae7861c1875964f17e678205c/xxhash-3.6.0-cp312-cp312-win32.whl", hash = "sha256:50fc255f39428a27299c20e280d6193d8b63b8ef8028995323bf834a026b4fbb", size = 30617, upload-time = "2025-10-02T14:34:51.954Z" },
{ url = "https://files.pythonhosted.org/packages/13/5d/0d125536cbe7565a83d06e43783389ecae0c0f2ed037b48ede185de477c0/xxhash-3.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:c0f2ab8c715630565ab8991b536ecded9416d615538be8ecddce43ccf26cbc7c", size = 31534, upload-time = "2025-10-02T14:34:53.276Z" },
{ url = "https://files.pythonhosted.org/packages/54/85/6ec269b0952ec7e36ba019125982cf11d91256a778c7c3f98a4c5043d283/xxhash-3.6.0-cp312-cp312-win_arm64.whl", hash = "sha256:eae5c13f3bc455a3bbb68bdc513912dc7356de7e2280363ea235f71f54064829", size = 27876, upload-time = "2025-10-02T14:34:54.371Z" },
{ url = "https://files.pythonhosted.org/packages/33/76/35d05267ac82f53ae9b0e554da7c5e281ee61f3cad44c743f0fcd354f211/xxhash-3.6.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:599e64ba7f67472481ceb6ee80fa3bd828fd61ba59fb11475572cc5ee52b89ec", size = 32738, upload-time = "2025-10-02T14:34:55.839Z" },
{ url = "https://files.pythonhosted.org/packages/31/a8/3fbce1cd96534a95e35d5120637bf29b0d7f5d8fa2f6374e31b4156dd419/xxhash-3.6.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7d8b8aaa30fca4f16f0c84a5c8d7ddee0e25250ec2796c973775373257dde8f1", size = 30821, upload-time = "2025-10-02T14:34:57.219Z" },
{ url = "https://files.pythonhosted.org/packages/0c/ea/d387530ca7ecfa183cb358027f1833297c6ac6098223fd14f9782cd0015c/xxhash-3.6.0-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:d597acf8506d6e7101a4a44a5e428977a51c0fadbbfd3c39650cca9253f6e5a6", size = 194127, upload-time = "2025-10-02T14:34:59.21Z" },
{ url = "https://files.pythonhosted.org/packages/ba/0c/71435dcb99874b09a43b8d7c54071e600a7481e42b3e3ce1eb5226a5711a/xxhash-3.6.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:858dc935963a33bc33490128edc1c12b0c14d9c7ebaa4e387a7869ecc4f3e263", size = 212975, upload-time = "2025-10-02T14:35:00.816Z" },
{ url = "https://files.pythonhosted.org/packages/84/7a/c2b3d071e4bb4a90b7057228a99b10d51744878f4a8a6dd643c8bd897620/xxhash-3.6.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:ba284920194615cb8edf73bf52236ce2e1664ccd4a38fdb543506413529cc546", size = 212241, upload-time = "2025-10-02T14:35:02.207Z" },
{ url = "https://files.pythonhosted.org/packages/81/5f/640b6eac0128e215f177df99eadcd0f1b7c42c274ab6a394a05059694c5a/xxhash-3.6.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:4b54219177f6c6674d5378bd862c6aedf64725f70dd29c472eaae154df1a2e89", size = 445471, upload-time = "2025-10-02T14:35:03.61Z" },
{ url = "https://files.pythonhosted.org/packages/5e/1e/3c3d3ef071b051cc3abbe3721ffb8365033a172613c04af2da89d5548a87/xxhash-3.6.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:42c36dd7dbad2f5238950c377fcbf6811b1cdb1c444fab447960030cea60504d", size = 193936, upload-time = "2025-10-02T14:35:05.013Z" },
{ url = "https://files.pythonhosted.org/packages/2c/bd/4a5f68381939219abfe1c22a9e3a5854a4f6f6f3c4983a87d255f21f2e5d/xxhash-3.6.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f22927652cba98c44639ffdc7aaf35828dccf679b10b31c4ad72a5b530a18eb7", size = 210440, upload-time = "2025-10-02T14:35:06.239Z" },
{ url = "https://files.pythonhosted.org/packages/eb/37/b80fe3d5cfb9faff01a02121a0f4d565eb7237e9e5fc66e73017e74dcd36/xxhash-3.6.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:b45fad44d9c5c119e9c6fbf2e1c656a46dc68e280275007bbfd3d572b21426db", size = 197990, upload-time = "2025-10-02T14:35:07.735Z" },
{ url = "https://files.pythonhosted.org/packages/d7/fd/2c0a00c97b9e18f72e1f240ad4e8f8a90fd9d408289ba9c7c495ed7dc05c/xxhash-3.6.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:6f2580ffab1a8b68ef2b901cde7e55fa8da5e4be0977c68f78fc80f3c143de42", size = 210689, upload-time = "2025-10-02T14:35:09.438Z" },
{ url = "https://files.pythonhosted.org/packages/93/86/5dd8076a926b9a95db3206aba20d89a7fc14dd5aac16e5c4de4b56033140/xxhash-3.6.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:40c391dd3cd041ebc3ffe6f2c862f402e306eb571422e0aa918d8070ba31da11", size = 414068, upload-time = "2025-10-02T14:35:11.162Z" },
{ url = "https://files.pythonhosted.org/packages/af/3c/0bb129170ee8f3650f08e993baee550a09593462a5cddd8e44d0011102b1/xxhash-3.6.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:f205badabde7aafd1a31e8ca2a3e5a763107a71c397c4481d6a804eb5063d8bd", size = 191495, upload-time = "2025-10-02T14:35:12.971Z" },
{ url = "https://files.pythonhosted.org/packages/e9/3a/6797e0114c21d1725e2577508e24006fd7ff1d8c0c502d3b52e45c1771d8/xxhash-3.6.0-cp313-cp313-win32.whl", hash = "sha256:2577b276e060b73b73a53042ea5bd5203d3e6347ce0d09f98500f418a9fcf799", size = 30620, upload-time = "2025-10-02T14:35:14.129Z" },
{ url = "https://files.pythonhosted.org/packages/86/15/9bc32671e9a38b413a76d24722a2bf8784a132c043063a8f5152d390b0f9/xxhash-3.6.0-cp313-cp313-win_amd64.whl", hash = "sha256:757320d45d2fbcce8f30c42a6b2f47862967aea7bf458b9625b4bbe7ee390392", size = 31542, upload-time = "2025-10-02T14:35:15.21Z" },
{ url = "https://files.pythonhosted.org/packages/39/c5/cc01e4f6188656e56112d6a8e0dfe298a16934b8c47a247236549a3f7695/xxhash-3.6.0-cp313-cp313-win_arm64.whl", hash = "sha256:457b8f85dec5825eed7b69c11ae86834a018b8e3df5e77783c999663da2f96d6", size = 27880, upload-time = "2025-10-02T14:35:16.315Z" },
{ url = "https://files.pythonhosted.org/packages/f3/30/25e5321c8732759e930c555176d37e24ab84365482d257c3b16362235212/xxhash-3.6.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a42e633d75cdad6d625434e3468126c73f13f7584545a9cf34e883aa1710e702", size = 32956, upload-time = "2025-10-02T14:35:17.413Z" },
{ url = "https://files.pythonhosted.org/packages/9f/3c/0573299560d7d9f8ab1838f1efc021a280b5ae5ae2e849034ef3dee18810/xxhash-3.6.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:568a6d743219e717b07b4e03b0a828ce593833e498c3b64752e0f5df6bfe84db", size = 31072, upload-time = "2025-10-02T14:35:18.844Z" },
{ url = "https://files.pythonhosted.org/packages/7a/1c/52d83a06e417cd9d4137722693424885cc9878249beb3a7c829e74bf7ce9/xxhash-3.6.0-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bec91b562d8012dae276af8025a55811b875baace6af510412a5e58e3121bc54", size = 196409, upload-time = "2025-10-02T14:35:20.31Z" },
{ url = "https://files.pythonhosted.org/packages/e3/8e/c6d158d12a79bbd0b878f8355432075fc82759e356ab5a111463422a239b/xxhash-3.6.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:78e7f2f4c521c30ad5e786fdd6bae89d47a32672a80195467b5de0480aa97b1f", size = 215736, upload-time = "2025-10-02T14:35:21.616Z" },
{ url = "https://files.pythonhosted.org/packages/bc/68/c4c80614716345d55071a396cf03d06e34b5f4917a467faf43083c995155/xxhash-3.6.0-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:3ed0df1b11a79856df5ffcab572cbd6b9627034c1c748c5566fa79df9048a7c5", size = 214833, upload-time = "2025-10-02T14:35:23.32Z" },
{ url = "https://files.pythonhosted.org/packages/7e/e9/ae27c8ffec8b953efa84c7c4a6c6802c263d587b9fc0d6e7cea64e08c3af/xxhash-3.6.0-cp313-cp313t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:0e4edbfc7d420925b0dd5e792478ed393d6e75ff8fc219a6546fb446b6a417b1", size = 448348, upload-time = "2025-10-02T14:35:25.111Z" },
{ url = "https://files.pythonhosted.org/packages/d7/6b/33e21afb1b5b3f46b74b6bd1913639066af218d704cc0941404ca717fc57/xxhash-3.6.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fba27a198363a7ef87f8c0f6b171ec36b674fe9053742c58dd7e3201c1ab30ee", size = 196070, upload-time = "2025-10-02T14:35:26.586Z" },
{ url = "https://files.pythonhosted.org/packages/96/b6/fcabd337bc5fa624e7203aa0fa7d0c49eed22f72e93229431752bddc83d9/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:794fe9145fe60191c6532fa95063765529770edcdd67b3d537793e8004cabbfd", size = 212907, upload-time = "2025-10-02T14:35:28.087Z" },
{ url = "https://files.pythonhosted.org/packages/4b/d3/9ee6160e644d660fcf176c5825e61411c7f62648728f69c79ba237250143/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:6105ef7e62b5ac73a837778efc331a591d8442f8ef5c7e102376506cb4ae2729", size = 200839, upload-time = "2025-10-02T14:35:29.857Z" },
{ url = "https://files.pythonhosted.org/packages/0d/98/e8de5baa5109394baf5118f5e72ab21a86387c4f89b0e77ef3e2f6b0327b/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:f01375c0e55395b814a679b3eea205db7919ac2af213f4a6682e01220e5fe292", size = 213304, upload-time = "2025-10-02T14:35:31.222Z" },
{ url = "https://files.pythonhosted.org/packages/7b/1d/71056535dec5c3177eeb53e38e3d367dd1d16e024e63b1cee208d572a033/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_s390x.whl", hash = "sha256:d706dca2d24d834a4661619dcacf51a75c16d65985718d6a7d73c1eeeb903ddf", size = 416930, upload-time = "2025-10-02T14:35:32.517Z" },
{ url = "https://files.pythonhosted.org/packages/dc/6c/5cbde9de2cd967c322e651c65c543700b19e7ae3e0aae8ece3469bf9683d/xxhash-3.6.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:5f059d9faeacd49c0215d66f4056e1326c80503f51a1532ca336a385edadd033", size = 193787, upload-time = "2025-10-02T14:35:33.827Z" },
{ url = "https://files.pythonhosted.org/packages/19/fa/0172e350361d61febcea941b0cc541d6e6c8d65d153e85f850a7b256ff8a/xxhash-3.6.0-cp313-cp313t-win32.whl", hash = "sha256:1244460adc3a9be84731d72b8e80625788e5815b68da3da8b83f78115a40a7ec", size = 30916, upload-time = "2025-10-02T14:35:35.107Z" },
{ url = "https://files.pythonhosted.org/packages/ad/e6/e8cf858a2b19d6d45820f072eff1bea413910592ff17157cabc5f1227a16/xxhash-3.6.0-cp313-cp313t-win_amd64.whl", hash = "sha256:b1e420ef35c503869c4064f4a2f2b08ad6431ab7b229a05cce39d74268bca6b8", size = 31799, upload-time = "2025-10-02T14:35:36.165Z" },
{ url = "https://files.pythonhosted.org/packages/56/15/064b197e855bfb7b343210e82490ae672f8bc7cdf3ddb02e92f64304ee8a/xxhash-3.6.0-cp313-cp313t-win_arm64.whl", hash = "sha256:ec44b73a4220623235f67a996c862049f375df3b1052d9899f40a6382c32d746", size = 28044, upload-time = "2025-10-02T14:35:37.195Z" },
{ url = "https://files.pythonhosted.org/packages/7e/5e/0138bc4484ea9b897864d59fce9be9086030825bc778b76cb5a33a906d37/xxhash-3.6.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:a40a3d35b204b7cc7643cbcf8c9976d818cb47befcfac8bbefec8038ac363f3e", size = 32754, upload-time = "2025-10-02T14:35:38.245Z" },
{ url = "https://files.pythonhosted.org/packages/18/d7/5dac2eb2ec75fd771957a13e5dda560efb2176d5203f39502a5fc571f899/xxhash-3.6.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a54844be970d3fc22630b32d515e79a90d0a3ddb2644d8d7402e3c4c8da61405", size = 30846, upload-time = "2025-10-02T14:35:39.6Z" },
{ url = "https://files.pythonhosted.org/packages/fe/71/8bc5be2bb00deb5682e92e8da955ebe5fa982da13a69da5a40a4c8db12fb/xxhash-3.6.0-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:016e9190af8f0a4e3741343777710e3d5717427f175adfdc3e72508f59e2a7f3", size = 194343, upload-time = "2025-10-02T14:35:40.69Z" },
{ url = "https://files.pythonhosted.org/packages/e7/3b/52badfb2aecec2c377ddf1ae75f55db3ba2d321c5e164f14461c90837ef3/xxhash-3.6.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4f6f72232f849eb9d0141e2ebe2677ece15adfd0fa599bc058aad83c714bb2c6", size = 213074, upload-time = "2025-10-02T14:35:42.29Z" },
{ url = "https://files.pythonhosted.org/packages/a2/2b/ae46b4e9b92e537fa30d03dbc19cdae57ed407e9c26d163895e968e3de85/xxhash-3.6.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:63275a8aba7865e44b1813d2177e0f5ea7eadad3dd063a21f7cf9afdc7054063", size = 212388, upload-time = "2025-10-02T14:35:43.929Z" },
{ url = "https://files.pythonhosted.org/packages/f5/80/49f88d3afc724b4ac7fbd664c8452d6db51b49915be48c6982659e0e7942/xxhash-3.6.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cd01fa2aa00d8b017c97eb46b9a794fbdca53fc14f845f5a328c71254b0abb7", size = 445614, upload-time = "2025-10-02T14:35:45.216Z" },
{ url = "https://files.pythonhosted.org/packages/ed/ba/603ce3961e339413543d8cd44f21f2c80e2a7c5cfe692a7b1f2cccf58f3c/xxhash-3.6.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0226aa89035b62b6a86d3c68df4d7c1f47a342b8683da2b60cedcddb46c4d95b", size = 194024, upload-time = "2025-10-02T14:35:46.959Z" },
{ url = "https://files.pythonhosted.org/packages/78/d1/8e225ff7113bf81545cfdcd79eef124a7b7064a0bba53605ff39590b95c2/xxhash-3.6.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:c6e193e9f56e4ca4923c61238cdaced324f0feac782544eb4c6d55ad5cc99ddd", size = 210541, upload-time = "2025-10-02T14:35:48.301Z" },
{ url = "https://files.pythonhosted.org/packages/6f/58/0f89d149f0bad89def1a8dd38feb50ccdeb643d9797ec84707091d4cb494/xxhash-3.6.0-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:9176dcaddf4ca963d4deb93866d739a343c01c969231dbe21680e13a5d1a5bf0", size = 198305, upload-time = "2025-10-02T14:35:49.584Z" },
{ url = "https://files.pythonhosted.org/packages/11/38/5eab81580703c4df93feb5f32ff8fa7fe1e2c51c1f183ee4e48d4bb9d3d7/xxhash-3.6.0-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:c1ce4009c97a752e682b897aa99aef84191077a9433eb237774689f14f8ec152", size = 210848, upload-time = "2025-10-02T14:35:50.877Z" },
{ url = "https://files.pythonhosted.org/packages/5e/6b/953dc4b05c3ce678abca756416e4c130d2382f877a9c30a20d08ee6a77c0/xxhash-3.6.0-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:8cb2f4f679b01513b7adbb9b1b2f0f9cdc31b70007eaf9d59d0878809f385b11", size = 414142, upload-time = "2025-10-02T14:35:52.15Z" },
{ url = "https://files.pythonhosted.org/packages/08/a9/238ec0d4e81a10eb5026d4a6972677cbc898ba6c8b9dbaec12ae001b1b35/xxhash-3.6.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:653a91d7c2ab54a92c19ccf43508b6a555440b9be1bc8be553376778be7f20b5", size = 191547, upload-time = "2025-10-02T14:35:53.547Z" },
{ url = "https://files.pythonhosted.org/packages/f1/ee/3cf8589e06c2164ac77c3bf0aa127012801128f1feebf2a079272da5737c/xxhash-3.6.0-cp314-cp314-win32.whl", hash = "sha256:a756fe893389483ee8c394d06b5ab765d96e68fbbfe6fde7aa17e11f5720559f", size = 31214, upload-time = "2025-10-02T14:35:54.746Z" },
{ url = "https://files.pythonhosted.org/packages/02/5d/a19552fbc6ad4cb54ff953c3908bbc095f4a921bc569433d791f755186f1/xxhash-3.6.0-cp314-cp314-win_amd64.whl", hash = "sha256:39be8e4e142550ef69629c9cd71b88c90e9a5db703fecbcf265546d9536ca4ad", size = 32290, upload-time = "2025-10-02T14:35:55.791Z" },
{ url = "https://files.pythonhosted.org/packages/b1/11/dafa0643bc30442c887b55baf8e73353a344ee89c1901b5a5c54a6c17d39/xxhash-3.6.0-cp314-cp314-win_arm64.whl", hash = "sha256:25915e6000338999236f1eb68a02a32c3275ac338628a7eaa5a269c401995679", size = 28795, upload-time = "2025-10-02T14:35:57.162Z" },
{ url = "https://files.pythonhosted.org/packages/2c/db/0e99732ed7f64182aef4a6fb145e1a295558deec2a746265dcdec12d191e/xxhash-3.6.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:c5294f596a9017ca5a3e3f8884c00b91ab2ad2933cf288f4923c3fd4346cf3d4", size = 32955, upload-time = "2025-10-02T14:35:58.267Z" },
{ url = "https://files.pythonhosted.org/packages/55/f4/2a7c3c68e564a099becfa44bb3d398810cc0ff6749b0d3cb8ccb93f23c14/xxhash-3.6.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:1cf9dcc4ab9cff01dfbba78544297a3a01dafd60f3bde4e2bfd016cf7e4ddc67", size = 31072, upload-time = "2025-10-02T14:35:59.382Z" },
{ url = "https://files.pythonhosted.org/packages/c6/d9/72a29cddc7250e8a5819dad5d466facb5dc4c802ce120645630149127e73/xxhash-3.6.0-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:01262da8798422d0685f7cef03b2bd3f4f46511b02830861df548d7def4402ad", size = 196579, upload-time = "2025-10-02T14:36:00.838Z" },
{ url = "https://files.pythonhosted.org/packages/63/93/b21590e1e381040e2ca305a884d89e1c345b347404f7780f07f2cdd47ef4/xxhash-3.6.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:51a73fb7cb3a3ead9f7a8b583ffd9b8038e277cdb8cb87cf890e88b3456afa0b", size = 215854, upload-time = "2025-10-02T14:36:02.207Z" },
{ url = "https://files.pythonhosted.org/packages/ce/b8/edab8a7d4fa14e924b29be877d54155dcbd8b80be85ea00d2be3413a9ed4/xxhash-3.6.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b9c6df83594f7df8f7f708ce5ebeacfc69f72c9fbaaababf6cf4758eaada0c9b", size = 214965, upload-time = "2025-10-02T14:36:03.507Z" },
{ url = "https://files.pythonhosted.org/packages/27/67/dfa980ac7f0d509d54ea0d5a486d2bb4b80c3f1bb22b66e6a05d3efaf6c0/xxhash-3.6.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:627f0af069b0ea56f312fd5189001c24578868643203bca1abbc2c52d3a6f3ca", size = 448484, upload-time = "2025-10-02T14:36:04.828Z" },
{ url = "https://files.pythonhosted.org/packages/8c/63/8ffc2cc97e811c0ca5d00ab36604b3ea6f4254f20b7bc658ca825ce6c954/xxhash-3.6.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa912c62f842dfd013c5f21a642c9c10cd9f4c4e943e0af83618b4a404d9091a", size = 196162, upload-time = "2025-10-02T14:36:06.182Z" },
{ url = "https://files.pythonhosted.org/packages/4b/77/07f0e7a3edd11a6097e990f6e5b815b6592459cb16dae990d967693e6ea9/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:b465afd7909db30168ab62afe40b2fcf79eedc0b89a6c0ab3123515dc0df8b99", size = 213007, upload-time = "2025-10-02T14:36:07.733Z" },
{ url = "https://files.pythonhosted.org/packages/ae/d8/bc5fa0d152837117eb0bef6f83f956c509332ce133c91c63ce07ee7c4873/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:a881851cf38b0a70e7c4d3ce81fc7afd86fbc2a024f4cfb2a97cf49ce04b75d3", size = 200956, upload-time = "2025-10-02T14:36:09.106Z" },
{ url = "https://files.pythonhosted.org/packages/26/a5/d749334130de9411783873e9b98ecc46688dad5db64ca6e04b02acc8b473/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:9b3222c686a919a0f3253cfc12bb118b8b103506612253b5baeaac10d8027cf6", size = 213401, upload-time = "2025-10-02T14:36:10.585Z" },
{ url = "https://files.pythonhosted.org/packages/89/72/abed959c956a4bfc72b58c0384bb7940663c678127538634d896b1195c10/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_s390x.whl", hash = "sha256:c5aa639bc113e9286137cec8fadc20e9cd732b2cc385c0b7fa673b84fc1f2a93", size = 417083, upload-time = "2025-10-02T14:36:12.276Z" },
{ url = "https://files.pythonhosted.org/packages/0c/b3/62fd2b586283b7d7d665fb98e266decadf31f058f1cf6c478741f68af0cb/xxhash-3.6.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:5c1343d49ac102799905e115aee590183c3921d475356cb24b4de29a4bc56518", size = 193913, upload-time = "2025-10-02T14:36:14.025Z" },
{ url = "https://files.pythonhosted.org/packages/9a/9a/c19c42c5b3f5a4aad748a6d5b4f23df3bed7ee5445accc65a0fb3ff03953/xxhash-3.6.0-cp314-cp314t-win32.whl", hash = "sha256:5851f033c3030dd95c086b4a36a2683c2ff4a799b23af60977188b057e467119", size = 31586, upload-time = "2025-10-02T14:36:15.603Z" },
{ url = "https://files.pythonhosted.org/packages/03/d6/4cc450345be9924fd5dc8c590ceda1db5b43a0a889587b0ae81a95511360/xxhash-3.6.0-cp314-cp314t-win_amd64.whl", hash = "sha256:0444e7967dac37569052d2409b00a8860c2135cff05502df4da80267d384849f", size = 32526, upload-time = "2025-10-02T14:36:16.708Z" },
{ url = "https://files.pythonhosted.org/packages/0f/c9/7243eb3f9eaabd1a88a5a5acadf06df2d83b100c62684b7425c6a11bcaa8/xxhash-3.6.0-cp314-cp314t-win_arm64.whl", hash = "sha256:bb79b1e63f6fd84ec778a4b1916dfe0a7c3fdb986c06addd5db3a0d413819d95", size = 28898, upload-time = "2025-10-02T14:36:17.843Z" },
{ url = "https://files.pythonhosted.org/packages/93/1e/8aec23647a34a249f62e2398c42955acd9b4c6ed5cf08cbea94dc46f78d2/xxhash-3.6.0-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0f7b7e2ec26c1666ad5fc9dbfa426a6a3367ceaf79db5dd76264659d509d73b0", size = 30662, upload-time = "2025-10-02T14:37:01.743Z" },
{ url = "https://files.pythonhosted.org/packages/b8/0b/b14510b38ba91caf43006209db846a696ceea6a847a0c9ba0a5b1adc53d6/xxhash-3.6.0-pp311-pypy311_pp73-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:5dc1e14d14fa0f5789ec29a7062004b5933964bb9b02aae6622b8f530dc40296", size = 41056, upload-time = "2025-10-02T14:37:02.879Z" },
{ url = "https://files.pythonhosted.org/packages/50/55/15a7b8a56590e66ccd374bbfa3f9ffc45b810886c8c3b614e3f90bd2367c/xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:881b47fc47e051b37d94d13e7455131054b56749b91b508b0907eb07900d1c13", size = 36251, upload-time = "2025-10-02T14:37:04.44Z" },
{ url = "https://files.pythonhosted.org/packages/62/b2/5ac99a041a29e58e95f907876b04f7067a0242cb85b5f39e726153981503/xxhash-3.6.0-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c6dc31591899f5e5666f04cc2e529e69b4072827085c1ef15294d91a004bc1bd", size = 32481, upload-time = "2025-10-02T14:37:05.869Z" },
{ url = "https://files.pythonhosted.org/packages/7b/d9/8d95e906764a386a3d3b596f3c68bb63687dfca806373509f51ce8eea81f/xxhash-3.6.0-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:15e0dac10eb9309508bfc41f7f9deaa7755c69e35af835db9cb10751adebc35d", size = 31565, upload-time = "2025-10-02T14:37:06.966Z" },
]
[[package]]
name = "zstandard"
version = "0.25.0"

View File

@@ -34,7 +34,7 @@ The LangChain ecosystem is built on top of `langchain-core`. Some of the benefit
## 📖 Documentation
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/).
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview).
## 📕 Releases & Versioning

View File

@@ -5,12 +5,10 @@
!!! warning
New agents should be built using the
[langgraph library](https://github.com/langchain-ai/langgraph), which provides a
[`langchain` library](https://pypi.org/project/langchain/), which provides a
simpler and more flexible way to define agents.
Please see the
[migration guide](https://python.langchain.com/docs/how_to/migrate_agent/) for
information on how to migrate existing agents to modern langgraph agents.
See docs on [building agents](https://docs.langchain.com/oss/python/langchain/agents).
Agents use language models to choose a sequence of actions to take.
@@ -54,31 +52,33 @@ class AgentAction(Serializable):
"""The input to pass in to the Tool."""
log: str
"""Additional information to log about the action.
This log can be used in a few ways. First, it can be used to audit
what exactly the LLM predicted to lead to this (tool, tool_input).
Second, it can be used in future iterations to show the LLM's prior
thoughts. This is useful when (tool, tool_input) does not contain
full information about the LLM prediction (for example, any `thought`
before the tool/tool_input)."""
This log can be used in a few ways. First, it can be used to audit what exactly the
LLM predicted to lead to this `(tool, tool_input)`.
Second, it can be used in future iterations to show the LLM's prior thoughts. This is
useful when `(tool, tool_input)` does not contain full information about the LLM
prediction (for example, any `thought` before the tool/tool_input).
"""
type: Literal["AgentAction"] = "AgentAction"
# Override init to support instantiation by position for backward compat.
def __init__(self, tool: str, tool_input: str | dict, log: str, **kwargs: Any):
"""Create an AgentAction.
"""Create an `AgentAction`.
Args:
tool: The name of the tool to execute.
tool_input: The input to pass in to the Tool.
tool_input: The input to pass in to the `Tool`.
log: Additional information to log about the action.
"""
super().__init__(tool=tool, tool_input=tool_input, log=log, **kwargs)
@classmethod
def is_lc_serializable(cls) -> bool:
"""AgentAction is serializable.
"""`AgentAction` is serializable.
Returns:
True
`True`
"""
return True
@@ -100,19 +100,23 @@ class AgentAction(Serializable):
class AgentActionMessageLog(AgentAction):
"""Representation of an action to be executed by an agent.
This is similar to AgentAction, but includes a message log consisting of
chat messages. This is useful when working with ChatModels, and is used
to reconstruct conversation history from the agent's perspective.
This is similar to `AgentAction`, but includes a message log consisting of
chat messages.
This is useful when working with `ChatModels`, and is used to reconstruct
conversation history from the agent's perspective.
"""
message_log: Sequence[BaseMessage]
"""Similar to log, this can be used to pass along extra
information about what exact messages were predicted by the LLM
before parsing out the (tool, tool_input). This is again useful
if (tool, tool_input) cannot be used to fully recreate the LLM
prediction, and you need that LLM prediction (for future agent iteration).
"""Similar to log, this can be used to pass along extra information about what exact
messages were predicted by the LLM before parsing out the `(tool, tool_input)`.
This is again useful if `(tool, tool_input)` cannot be used to fully recreate the
LLM prediction, and you need that LLM prediction (for future agent iteration).
Compared to `log`, this is useful when the underlying LLM is a
chat model (and therefore returns messages rather than a string)."""
chat model (and therefore returns messages rather than a string).
"""
# Ignoring type because we're overriding the type from AgentAction.
# And this is the correct thing to do in this case.
# The type literal is used for serialization purposes.
@@ -120,12 +124,12 @@ class AgentActionMessageLog(AgentAction):
class AgentStep(Serializable):
"""Result of running an AgentAction."""
"""Result of running an `AgentAction`."""
action: AgentAction
"""The AgentAction that was executed."""
"""The `AgentAction` that was executed."""
observation: Any
"""The result of the AgentAction."""
"""The result of the `AgentAction`."""
@property
def messages(self) -> Sequence[BaseMessage]:
@@ -134,19 +138,22 @@ class AgentStep(Serializable):
class AgentFinish(Serializable):
"""Final return value of an ActionAgent.
"""Final return value of an `ActionAgent`.
Agents return an AgentFinish when they have reached a stopping condition.
Agents return an `AgentFinish` when they have reached a stopping condition.
"""
return_values: dict
"""Dictionary of return values."""
log: str
"""Additional information to log about the return value.
This is used to pass along the full LLM prediction, not just the parsed out
return value. For example, if the full LLM prediction was
`Final Answer: 2` you may want to just return `2` as a return value, but pass
along the full string as a `log` (for debugging or observability purposes).
return value.
For example, if the full LLM prediction was `Final Answer: 2` you may want to just
return `2` as a return value, but pass along the full string as a `log` (for
debugging or observability purposes).
"""
type: Literal["AgentFinish"] = "AgentFinish"
@@ -156,7 +163,7 @@ class AgentFinish(Serializable):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -204,7 +211,7 @@ def _convert_agent_observation_to_messages(
observation: Observation to convert to a message.
Returns:
AIMessage that corresponds to the original tool invocation.
`AIMessage` that corresponds to the original tool invocation.
"""
if isinstance(agent_action, AgentActionMessageLog):
return [_create_function_message(agent_action, observation)]
@@ -227,7 +234,7 @@ def _create_function_message(
observation: the result of the tool invocation.
Returns:
FunctionMessage that corresponds to the original tool invocation.
`FunctionMessage` that corresponds to the original tool invocation.
"""
if not isinstance(observation, str):
try:

View File
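A quick illustration of the `AgentAction` and `AgentFinish` contracts documented in the diff above; the tool name, input, and log strings are invented for the example:

```python
from langchain_core.agents import AgentAction, AgentFinish

# Positional construction is supported for backward compatibility,
# matching the overridden __init__(tool, tool_input, log) shown above.
action = AgentAction(
    "search",                    # tool: name of the tool to execute
    {"query": "weather in SF"},  # tool_input: str | dict
    "Thought: I should look up the weather.",  # log: raw LLM text
)

# AgentFinish pairs the parsed return value with the full LLM output in `log`,
# e.g. returning "2" while keeping "Final Answer: 2" for observability.
finish = AgentFinish(return_values={"output": "2"}, log="Final Answer: 2")

print(action.tool, finish.return_values["output"])
```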

@@ -1,7 +1,9 @@
"""`caches` provides an optional caching layer for language models.
"""Optional caching layer for language models.
!!! warning
This is a beta feature! Please be wary of deploying experimental code to production
Distinct from provider-based [prompt caching](https://docs.langchain.com/oss/python/langchain/models#prompt-caching).
!!! warning "Beta feature"
This is a beta feature. Please be wary of deploying experimental code to production
unless you've taken appropriate precautions.
A cache is useful for two reasons:
@@ -47,17 +49,18 @@ class BaseCache(ABC):
"""Look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
of prompt and llm_string (e.g., by concatenating them with a delimiter).
of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
Args:
prompt: A string representation of the prompt.
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
These invocation parameters are serialized into a string representation.
Returns:
On a cache miss, return `None`. On a cache hit, return the cached value.
@@ -76,8 +79,10 @@ class BaseCache(ABC):
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
return_val: The value to be cached. The value is a list of `Generation`
@@ -92,15 +97,17 @@ class BaseCache(ABC):
"""Async look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
of prompt and llm_string (e.g., by concatenating them with a delimiter).
of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
Args:
prompt: A string representation of the prompt.
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
@@ -123,8 +130,10 @@ class BaseCache(ABC):
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
return_val: The value to be cached. The value is a list of `Generation`

View File
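To make the `BaseCache` contract above concrete, here is a minimal in-memory sketch keyed on the `(prompt, llm_string)` 2-tuple. The class name and dict storage are illustrative only (`langchain_core.caches` ships a real `InMemoryCache`), and the async `alookup`/`aupdate` fall back to executor-wrapped versions of these methods:

```python
from typing import Any

from langchain_core.caches import RETURN_VAL_TYPE, BaseCache


class DictCache(BaseCache):
    """Toy cache keyed on the (prompt, llm_string) 2-tuple."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], RETURN_VAL_TYPE] = {}

    def lookup(self, prompt: str, llm_string: str) -> RETURN_VAL_TYPE | None:
        # Cache hit returns the cached generations; miss returns None.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        self._store[(prompt, llm_string)] = return_val

    def clear(self, **kwargs: Any) -> None:
        self._store.clear()
```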

@@ -5,13 +5,12 @@ from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Any
from typing_extensions import Self
if TYPE_CHECKING:
from collections.abc import Sequence
from uuid import UUID
from tenacity import RetryCallState
from typing_extensions import Self
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.documents import Document
@@ -420,8 +419,6 @@ class RunManagerMixin:
(includes inherited tags).
metadata: The metadata associated with the custom event
(includes inherited metadata).
!!! version-added "Added in version 0.2.15"
"""
@@ -882,8 +879,6 @@ class AsyncCallbackHandler(BaseCallbackHandler):
(includes inherited tags).
metadata: The metadata associated with the custom event
(includes inherited metadata).
!!! version-added "Added in version 0.2.15"
"""

View File

@@ -39,7 +39,6 @@ from langchain_core.tracers.context import (
tracing_v2_callback_var,
)
from langchain_core.tracers.langchain import LangChainTracer
from langchain_core.tracers.schemas import Run
from langchain_core.tracers.stdout import ConsoleCallbackHandler
from langchain_core.utils.env import env_var_is_set
@@ -52,6 +51,7 @@ if TYPE_CHECKING:
from langchain_core.documents import Document
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk, LLMResult
from langchain_core.runnables.config import RunnableConfig
from langchain_core.tracers.schemas import Run
logger = logging.getLogger(__name__)
@@ -229,7 +229,24 @@ def shielded(func: Func) -> Func:
@functools.wraps(func)
async def wrapped(*args: Any, **kwargs: Any) -> Any:
return await asyncio.shield(func(*args, **kwargs))
# Capture the current context to preserve context variables
ctx = copy_context()
# Create the coroutine
coro = func(*args, **kwargs)
# For Python 3.11+, create task with explicit context
# For older versions, fallback to original behavior
try:
# Create a task with the captured context to preserve context variables
task = asyncio.create_task(coro, context=ctx) # type: ignore[call-arg, unused-ignore]
# `call-arg` is ignored so that Python 3.9 and 3.10 tests don't fail
return await asyncio.shield(task)
except TypeError:
# Python < 3.11 fallback - create task normally then shield
# This won't preserve context perfectly but is better than nothing
task = asyncio.create_task(coro)
return await asyncio.shield(task)
return cast("Func", wrapped)
@@ -1566,9 +1583,6 @@ class CallbackManager(BaseCallbackManager):
Raises:
ValueError: If additional keyword arguments are passed.
!!! version-added "Added in version 0.2.14"
"""
if not self.handlers:
return
@@ -2042,8 +2056,6 @@ class AsyncCallbackManager(BaseCallbackManager):
Raises:
ValueError: If additional keyword arguments are passed.
!!! version-added "Added in version 0.2.14"
"""
if not self.handlers:
return
@@ -2555,9 +2567,6 @@ async def adispatch_custom_event(
This is due to a limitation in asyncio for python <= 3.10 that prevents
LangChain from automatically propagating the config object on the user's
behalf.
!!! version-added "Added in version 0.2.15"
"""
# Import locally to prevent circular imports.
from langchain_core.runnables.config import ( # noqa: PLC0415
@@ -2630,9 +2639,6 @@ def dispatch_custom_event(
foo_ = RunnableLambda(foo)
foo_.invoke({"a": "1"}, {"callbacks": [CustomCallbackManager()]})
```
!!! version-added "Added in version 0.2.15"
"""
# Import locally to prevent circular imports.
from langchain_core.runnables.config import ( # noqa: PLC0415

View File
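The `dispatch_custom_event` snippet in the diff above is truncated; a self-contained sketch of the same pattern (handler class, event name, and payload are invented):

```python
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler, dispatch_custom_event
from langchain_core.runnables import RunnableLambda


class CustomCallbackManager(BaseCallbackHandler):
    def on_custom_event(self, name: str, data: Any, **kwargs: Any) -> None:
        # run_id, tags, and metadata arrive as keyword arguments.
        print(f"event={name!r} data={data}")


def foo(inputs: dict) -> dict:
    # The config (and its callbacks) is read from the run context, so no
    # explicit config argument is needed here on Python 3.11+.
    dispatch_custom_event("my_progress_event", {"step": 1})
    return inputs


foo_ = RunnableLambda(foo)
foo_.invoke({"a": "1"}, {"callbacks": [CustomCallbackManager()]})
```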

@@ -24,7 +24,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
from langchain_core.callbacks import UsageMetadataCallbackHandler
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
callback = UsageMetadataCallbackHandler()
result_1 = llm_1.invoke("Hello", config={"callbacks": [callback]})
@@ -43,7 +43,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}
```
!!! version-added "Added in version 0.3.49"
!!! version-added "Added in `langchain-core` 0.3.49"
"""
@@ -109,7 +109,7 @@ def get_usage_metadata_callback(
from langchain_core.callbacks import get_usage_metadata_callback
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
with get_usage_metadata_callback() as cb:
llm_1.invoke("Hello")
@@ -134,7 +134,7 @@ def get_usage_metadata_callback(
}
```
!!! version-added "Added in version 0.3.49"
!!! version-added "Added in `langchain-core` 0.3.49"
"""
usage_metadata_callback_var: ContextVar[UsageMetadataCallbackHandler | None] = (

View File
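Filling out the truncated `get_usage_metadata_callback` example above (assumes the OpenAI and Anthropic provider packages and API keys are configured; the printed values are abbreviated):

```python
from langchain.chat_models import init_chat_model
from langchain_core.callbacks import get_usage_metadata_callback

llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")

with get_usage_metadata_callback() as cb:
    llm_1.invoke("Hello")
    llm_2.invoke("Hello")

# Token counts aggregated per model name, e.g.
# {'gpt-4o-mini': {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...},
#  'claude-3-5-haiku-20241022': {...}}
print(cb.usage_metadata)
```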

@@ -121,7 +121,7 @@ class BaseChatMessageHistory(ABC):
This method may be deprecated in a future release.
Args:
message: The human message to add to the store.
message: The `HumanMessage` to add to the store.
"""
if isinstance(message, HumanMessage):
self.add_message(message)
@@ -129,7 +129,7 @@ class BaseChatMessageHistory(ABC):
self.add_message(HumanMessage(content=message))
def add_ai_message(self, message: AIMessage | str) -> None:
"""Convenience method for adding an AI message string to the store.
"""Convenience method for adding an `AIMessage` string to the store.
!!! note
This is a convenience method. Code should favor the bulk `add_messages`
@@ -138,7 +138,7 @@ class BaseChatMessageHistory(ABC):
This method may be deprecated in a future release.
Args:
message: The AI message to add.
message: The `AIMessage` to add.
"""
if isinstance(message, AIMessage):
self.add_message(message)
@@ -173,7 +173,7 @@ class BaseChatMessageHistory(ABC):
in an efficient manner to avoid unnecessary round-trips to the underlying store.
Args:
messages: A sequence of BaseMessage objects to store.
messages: A sequence of `BaseMessage` objects to store.
"""
for message in messages:
self.add_message(message)
@@ -182,7 +182,7 @@ class BaseChatMessageHistory(ABC):
"""Async add a list of messages.
Args:
messages: A sequence of BaseMessage objects to store.
messages: A sequence of `BaseMessage` objects to store.
"""
await run_in_executor(None, self.add_messages, messages)

View File
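A minimal sketch of a `BaseChatMessageHistory` implementation that favors the bulk `add_messages` path described above; the class name and list backing are illustrative:

```python
from collections.abc import Sequence

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage


class ListChatHistory(BaseChatMessageHistory):
    """Toy in-memory history backed by a plain list."""

    def __init__(self) -> None:
        self._messages: list[BaseMessage] = []

    @property
    def messages(self) -> list[BaseMessage]:
        return self._messages

    def add_messages(self, messages: Sequence[BaseMessage]) -> None:
        # One bulk call instead of N round-trips to the store.
        self._messages.extend(messages)

    def clear(self) -> None:
        self._messages.clear()


history = ListChatHistory()
history.add_user_message("hi")   # wrapped into a HumanMessage, per the diff above
history.add_ai_message("hello")  # wrapped into an AIMessage
```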

@@ -27,7 +27,7 @@ class BaseLoader(ABC): # noqa: B024
"""Interface for Document Loader.
Implementations should implement the lazy-loading method using generators
to avoid loading all Documents into memory at once.
to avoid loading all documents into memory at once.
`load` is provided just for user convenience and should not be overridden.
"""
@@ -53,9 +53,11 @@ class BaseLoader(ABC): # noqa: B024
def load_and_split(
self, text_splitter: TextSplitter | None = None
) -> list[Document]:
"""Load Documents and split into chunks. Chunks are returned as `Document`.
"""Load `Document` and split into chunks. Chunks are returned as `Document`.
Do not override this method. It should be considered to be deprecated!
!!! danger
Do not override this method. It should be considered to be deprecated!
Args:
text_splitter: `TextSplitter` instance to use for splitting documents.
@@ -135,7 +137,7 @@ class BaseBlobParser(ABC):
"""
def parse(self, blob: Blob) -> list[Document]:
"""Eagerly parse the blob into a `Document` or `Document` objects.
"""Eagerly parse the blob into a `Document` or list of `Document` objects.
This is a convenience method for interactive development environments.

View File
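As a concrete reading of the lazy-loading guidance above, a toy `BaseLoader` whose `lazy_load` is a generator (the loader name and one-document-per-line scheme are invented):

```python
from collections.abc import Iterator

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document


class LineLoader(BaseLoader):
    """Toy loader yielding one Document per line of a text file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def lazy_load(self) -> Iterator[Document]:
        # A generator keeps memory flat: documents stream one at a time.
        with open(self.path, encoding="utf-8") as f:
            for i, line in enumerate(f):
                yield Document(
                    page_content=line.rstrip("\n"),
                    metadata={"source": self.path, "line": i},
                )
```

`load()` and `load_and_split()` then come for free and, per the docstring above, should not be overridden.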

@@ -28,7 +28,7 @@ class BlobLoader(ABC):
def yield_blobs(
self,
) -> Iterable[Blob]:
"""A lazy loader for raw data represented by LangChain's Blob object.
"""A lazy loader for raw data represented by LangChain's `Blob` object.
Returns:
A generator over blobs

View File

@@ -14,13 +14,13 @@ from langchain_core.documents import Document
class LangSmithLoader(BaseLoader):
"""Load LangSmith Dataset examples as Documents.
"""Load LangSmith Dataset examples as `Document` objects.
Loads the example inputs as the Document page content and places the entire example
into the Document metadata. This allows you to easily create few-shot example
retrievers from the loaded documents.
Loads the example inputs as the `Document` page content and places the entire
example into the `Document` metadata. This allows you to easily create few-shot
example retrievers from the loaded documents.
??? note "Lazy load"
??? note "Lazy loading example"
```python
from langchain_core.document_loaders import LangSmithLoader
@@ -34,9 +34,6 @@ class LangSmithLoader(BaseLoader):
```python
# -> [Document("...", metadata={"inputs": {...}, "outputs": {...}, ...}), ...]
```
!!! version-added "Added in version 0.2.34"
"""
def __init__(
@@ -69,12 +66,11 @@ class LangSmithLoader(BaseLoader):
format_content: Function for converting the content extracted from the example
inputs into a string. Defaults to JSON-encoding the contents.
example_ids: The IDs of the examples to filter by.
as_of: The dataset version tag OR
timestamp to retrieve the examples as of.
Response examples will only be those that were present at the time
of the tagged (or timestamped) version.
as_of: The dataset version tag or timestamp to retrieve the examples as of.
Response examples will only be those that were present at the time of
the tagged (or timestamped) version.
splits: A list of dataset splits, which are
divisions of your dataset such as 'train', 'test', or 'validation'.
divisions of your dataset such as `train`, `test`, or `validation`.
Returns examples only from the specified splits.
inline_s3_urls: Whether to inline S3 URLs.
offset: The offset to start from.

View File

@@ -1,7 +1,28 @@
"""Documents module.
"""Documents module for data retrieval and processing workflows.
**Document** module is a collection of classes that handle documents
and their transformations.
This module provides core abstractions for handling data in retrieval-augmented
generation (RAG) pipelines, vector stores, and document processing workflows.
!!! warning "Documents vs. message content"
This module is distinct from `langchain_core.messages.content`, which provides
multimodal content blocks for **LLM chat I/O** (text, images, audio, etc. within
messages).
**Key distinction:**
- **Documents** (this module): For **data retrieval and processing workflows**
- Vector stores, retrievers, RAG pipelines
- Text chunking, embedding, and semantic search
- Example: Chunks of a PDF stored in a vector database
- **Content Blocks** (`messages.content`): For **LLM conversational I/O**
- Multimodal message content sent to/from models
- Tool calls, reasoning, citations within chat
- Example: An image sent to a vision model in a chat message (via
[`ImageContentBlock`][langchain.messages.ImageContentBlock])
While both can represent similar data types (text, files), they serve different
architectural purposes in LangChain applications.
"""
from typing import TYPE_CHECKING

View File

@@ -1,4 +1,16 @@
"""Base classes for media and documents."""
"""Base classes for media and documents.
This module contains core abstractions for **data retrieval and processing workflows**:
- `BaseMedia`: Base class providing `id` and `metadata` fields
- `Blob`: Raw data loading (files, binary data) - used by document loaders
- `Document`: Text content for retrieval (RAG, vector stores, semantic search)
!!! note "Not for LLM chat messages"
These classes are for data processing pipelines, not LLM I/O. For multimodal
content in chat messages (images, audio in conversations), see
`langchain.messages` content blocks instead.
"""
from __future__ import annotations
@@ -19,27 +31,23 @@ PathLike = str | PurePath
class BaseMedia(Serializable):
"""Use to represent media content.
"""Base class for content used in retrieval and data processing workflows.
Media objects can be used to represent raw data, such as text or binary data.
Provides common fields for content that needs to be stored, indexed, or searched.
LangChain Media objects allow associating metadata and an optional identifier
with the content.
The presence of an ID and metadata make it easier to store, index, and search
over the content in a structured way.
!!! note
For multimodal content in **chat messages** (images, audio sent to/from LLMs),
use `langchain.messages` content blocks instead.
"""
# The ID field is optional at the moment.
# It will likely become required in a future major release after
# it has been adopted by enough vectorstore implementations.
# it has been adopted by enough VectorStore implementations.
id: str | None = Field(default=None, coerce_numbers_to_str=True)
"""An optional identifier for the document.
Ideally this should be unique across the document collection and formatted
as a UUID, but this will not be enforced.
!!! version-added "Added in version 0.2.11"
"""
metadata: dict = Field(default_factory=dict)
@@ -47,71 +55,70 @@ class BaseMedia(Serializable):
class Blob(BaseMedia):
"""Blob represents raw data by either reference or value.
"""Raw data abstraction for document loading and file processing.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Represents raw bytes or text, either in-memory or by file reference. Used
primarily by document loaders to decouple data loading from parsing.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Inspired by [Mozilla's `Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob)
Example: Initialize a blob from in-memory data
???+ example "Initialize a blob from in-memory data"
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_data("Hello, world!")
blob = Blob.from_data("Hello, world!")
# Read the blob as a string
print(blob.as_string())
# Read the blob as a string
print(blob.as_string())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
Example: Load from memory and specify mime-type and metadata
??? example "Load from memory and specify MIME type and metadata"
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_data(
data="Hello, world!",
mime_type="text/plain",
metadata={"source": "https://example.com"},
)
```
blob = Blob.from_data(
data="Hello, world!",
mime_type="text/plain",
metadata={"source": "https://example.com"},
)
```
Example: Load the blob from a file
??? example "Load the blob from a file"
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_path("path/to/file.txt")
blob = Blob.from_path("path/to/file.txt")
# Read the blob as a string
print(blob.as_string())
# Read the blob as a string
print(blob.as_string())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
"""
data: bytes | str | None = None
"""Raw data associated with the blob."""
"""Raw data associated with the `Blob`."""
mimetype: str | None = None
"""MimeType not to be confused with a file extension."""
"""MIME type, not to be confused with a file extension."""
encoding: str = "utf-8"
"""Encoding to use if decoding the bytes into a string.
Use `utf-8` as default encoding, if decoding to string.
Uses `utf-8` as default encoding if decoding to string.
"""
path: PathLike | None = None
"""Location where the original content was found."""
@@ -125,9 +132,9 @@ class Blob(BaseMedia):
def source(self) -> str | None:
"""The source location of the blob as string if known otherwise none.
If a path is associated with the blob, it will default to the path location.
If a path is associated with the `Blob`, it will default to the path location.
Unless explicitly set via a metadata field called `"source"`, in which
Unless explicitly set via a metadata field called `'source'`, in which
case that value will be used instead.
"""
if self.metadata and "source" in self.metadata:
@@ -213,13 +220,13 @@ class Blob(BaseMedia):
Args:
path: Path-like object to file to be read
encoding: Encoding to use if decoding the bytes into a string
mime_type: If provided, will be set as the mime-type of the data
guess_type: If `True`, the mimetype will be guessed from the file extension,
if a mime-type was not provided
metadata: Metadata to associate with the blob
mime_type: If provided, will be set as the MIME type of the data
guess_type: If `True`, the MIME type will be guessed from the file
extension, if a MIME type was not provided
metadata: Metadata to associate with the `Blob`
Returns:
Blob instance
`Blob` instance
"""
if mime_type is None and guess_type:
mimetype = mimetypes.guess_type(path)[0] if guess_type else None
@@ -245,17 +252,17 @@ class Blob(BaseMedia):
path: str | None = None,
metadata: dict | None = None,
) -> Blob:
"""Initialize the blob from in-memory data.
"""Initialize the `Blob` from in-memory data.
Args:
data: The in-memory data associated with the blob
data: The in-memory data associated with the `Blob`
encoding: Encoding to use if decoding the bytes into a string
mime_type: If provided, will be set as the mime-type of the data
mime_type: If provided, will be set as the MIME type of the data
path: If provided, will be set as the source from which the data came
metadata: Metadata to associate with the blob
metadata: Metadata to associate with the `Blob`
Returns:
Blob instance
`Blob` instance
"""
return cls(
data=data,
@@ -276,6 +283,10 @@ class Blob(BaseMedia):
class Document(BaseMedia):
"""Class for storing a piece of text and associated metadata.
!!! note
`Document` is for **retrieval workflows**, not chat I/O. For sending text
to an LLM in a conversation, use message types from `langchain.messages`.
Example:
```python
from langchain_core.documents import Document
@@ -298,7 +309,7 @@ class Document(BaseMedia):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -311,10 +322,10 @@ class Document(BaseMedia):
return ["langchain", "schema", "document"]
def __str__(self) -> str:
"""Override __str__ to restrict it to page_content and metadata.
"""Override `__str__` to restrict it to page_content and metadata.
Returns:
A string representation of the Document.
A string representation of the `Document`.
"""
# The format matches pydantic format for __str__.
#

View File
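Completing the truncated `Document` example above, and round-tripping a `Blob`, with invented content:

```python
from langchain_core.documents import Blob, Document

# A retrieval-oriented record: text plus metadata (and an optional id).
doc = Document(
    page_content="LangChain provides abstractions for LLM applications.",
    metadata={"source": "https://example.com", "page": 1},
)
print(str(doc))  # __str__ is restricted to page_content and metadata

# Blob round-trips raw data, as in the examples above.
blob = Blob.from_data("Hello, world!", mime_type="text/plain")
assert blob.as_string() == "Hello, world!"
assert blob.as_bytes() == b"Hello, world!"
```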

@@ -21,14 +21,14 @@ class BaseDocumentCompressor(BaseModel, ABC):
This abstraction is primarily used for post-processing of retrieved documents.
Documents matching a given query are first retrieved.
`Document` objects matching a given query are first retrieved.
Then the list of documents can be further processed.
For example, one could re-rank the retrieved documents using an LLM.
!!! note
Users should favor using a RunnableLambda instead of sub-classing from this
Users should favor using a `RunnableLambda` instead of sub-classing from this
interface.
"""
@@ -43,9 +43,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Compress retrieved documents given the query context.
Args:
documents: The retrieved documents.
documents: The retrieved `Document` objects.
query: The query context.
callbacks: Optional callbacks to run during compression.
callbacks: Optional `Callbacks` to run during compression.
Returns:
The compressed documents.
@@ -61,9 +61,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Async compress retrieved documents given the query context.
Args:
documents: The retrieved documents.
documents: The retrieved `Document` objects.
query: The query context.
callbacks: Optional callbacks to run during compression.
callbacks: Optional `Callbacks` to run during compression.
Returns:
The compressed documents.

View File
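A minimal sketch of the compressor interface above; the keyword filter is a stand-in for a real re-ranker:

```python
from collections.abc import Sequence

from langchain_core.callbacks import Callbacks
from langchain_core.documents import Document
from langchain_core.documents.compressor import BaseDocumentCompressor


class KeywordFilter(BaseDocumentCompressor):
    """Toy post-processor: keep only documents that mention the query."""

    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Callbacks = None,
    ) -> Sequence[Document]:
        return [d for d in documents if query.lower() in d.page_content.lower()]
```

The async `acompress_documents` falls back to running this in an executor, so only the sync method needs overriding.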

@@ -16,8 +16,8 @@ if TYPE_CHECKING:
class BaseDocumentTransformer(ABC):
"""Abstract base class for document transformation.
A document transformation takes a sequence of Documents and returns a
sequence of transformed Documents.
A document transformation takes a sequence of `Document` objects and returns a
sequence of transformed `Document` objects.
Example:
```python
@@ -57,10 +57,10 @@ class BaseDocumentTransformer(ABC):
"""Transform a list of documents.
Args:
documents: A sequence of Documents to be transformed.
documents: A sequence of `Document` objects to be transformed.
Returns:
A sequence of transformed Documents.
A sequence of transformed `Document` objects.
"""
async def atransform_documents(
@@ -69,10 +69,10 @@ class BaseDocumentTransformer(ABC):
"""Asynchronously transform a list of documents.
Args:
documents: A sequence of Documents to be transformed.
documents: A sequence of `Document` objects to be transformed.
Returns:
A sequence of transformed Documents.
A sequence of transformed `Document` objects.
"""
return await run_in_executor(
None, self.transform_documents, documents, **kwargs

View File
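And a matching sketch for `BaseDocumentTransformer`; the metadata-tagging behavior is invented for illustration:

```python
from collections.abc import Sequence
from typing import Any

from langchain_core.documents import BaseDocumentTransformer, Document


class MetadataTagger(BaseDocumentTransformer):
    """Toy transformer stamping every Document with a static tag."""

    def __init__(self, tag: str) -> None:
        self.tag = tag

    def transform_documents(
        self, documents: Sequence[Document], **kwargs: Any
    ) -> Sequence[Document]:
        # Inputs are treated as immutable; new Document objects are returned.
        return [
            Document(
                page_content=d.page_content,
                metadata={**d.metadata, "tag": self.tag},
            )
            for d in documents
        ]
```

As the diff shows, `atransform_documents` defaults to `run_in_executor` over this method.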

@@ -18,7 +18,7 @@ class FakeEmbeddings(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution.
!!! warning
!!! danger "Toy model"
Do not use this outside of testing, as it is not a real embedding model.
Instantiate:
@@ -73,7 +73,7 @@ class DeterministicFakeEmbedding(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution
with a seed based on the hash of the text.
!!! warning
!!! danger "Toy model"
Do not use this outside of testing, as it is not a real embedding model.
Instantiate:

View File
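The two fake models above differ only in determinism; a quick sketch of what that means in practice:

```python
from langchain_core.embeddings import DeterministicFakeEmbedding, FakeEmbeddings

random_emb = FakeEmbeddings(size=8)
print(len(random_emb.embed_query("hello")))  # 8 floats from a normal distribution

det_emb = DeterministicFakeEmbedding(size=8)
# Seeded by the hash of the text, so equal inputs give equal vectors.
assert det_emb.embed_query("hello") == det_emb.embed_query("hello")
```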

@@ -29,7 +29,7 @@ class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):
max_length: int = 2048
"""Max length for the prompt, beyond which examples are cut."""
example_text_lengths: list[int] = Field(default_factory=list) # :meta private:
example_text_lengths: list[int] = Field(default_factory=list)
"""Length of each example."""
def add_example(self, example: dict[str, str]) -> None:

View File

@@ -41,7 +41,7 @@ class _VectorStoreExampleSelector(BaseExampleSelector, BaseModel, ABC):
"""Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables."""
vectorstore_kwargs: dict[str, Any] | None = None
"""Extra arguments passed to similarity_search function of the vectorstore."""
"""Extra arguments passed to similarity_search function of the `VectorStore`."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -159,7 +159,7 @@ class SemanticSimilarityExampleSelector(_VectorStoreExampleSelector):
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
of the vectorstore.
of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -203,7 +203,7 @@ class SemanticSimilarityExampleSelector(_VectorStoreExampleSelector):
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
of the vectorstore.
of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -286,12 +286,12 @@ class MaxMarginalRelevanceExampleSelector(_VectorStoreExampleSelector):
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
k: Number of examples to select.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
of the vectorstore.
of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:
@@ -333,12 +333,12 @@ class MaxMarginalRelevanceExampleSelector(_VectorStoreExampleSelector):
embeddings: An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls: A vector store DB interface class, e.g. FAISS.
k: Number of examples to select.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
input_keys: If provided, the search is based on the input variables
instead of all variables.
example_keys: If provided, keys to filter examples to.
vectorstore_kwargs: Extra arguments passed to similarity_search function
of the vectorstore.
of the `VectorStore`.
vectorstore_cls_kwargs: optional kwargs containing url for vector store
Returns:

View File
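A runnable sketch of `SemanticSimilarityExampleSelector.from_examples` wired to in-memory components so it needs no external services; the example data is invented:

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.example_selectors import SemanticSimilarityExampleSelector
from langchain_core.vectorstores import InMemoryVectorStore

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]

selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    DeterministicFakeEmbedding(size=16),  # toy embeddings; use a real model in practice
    InMemoryVectorStore,                  # vectorstore_cls; its from_texts is used
    k=1,
)
print(selector.select_examples({"input": "cheerful"}))
```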

@@ -16,9 +16,10 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
"""Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser. `OutputParserException` will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
that also may arise inside the output parser.
`OutputParserException` will be available to catch and handle in ways to fix the
parsing error, while other errors will be raised.
"""
def __init__(
@@ -32,18 +33,19 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
Args:
error: The error that's being re-raised or an error message.
observation: String explanation of error which can be passed to a
model to try and remediate the issue.
observation: String explanation of error which can be passed to a model to
try and remediate the issue.
llm_output: String model output which is erroring.
send_to_llm: Whether to send the observation and llm_output back to an Agent
after an `OutputParserException` has been raised.
This gives the underlying model driving the agent the context that the
previous output was improperly structured, in the hopes that it will
update the output to the correct format.
Raises:
ValueError: If `send_to_llm` is True but either observation or
ValueError: If `send_to_llm` is `True` but either `observation` or
`llm_output` is not provided.
"""
if isinstance(error, str):
@@ -66,11 +68,11 @@ class ErrorCode(Enum):
"""Error codes."""
INVALID_PROMPT_INPUT = "INVALID_PROMPT_INPUT"
INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS"
INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS" # Used in JS; not Py (yet)
MESSAGE_COERCION_FAILURE = "MESSAGE_COERCION_FAILURE"
MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION"
MODEL_NOT_FOUND = "MODEL_NOT_FOUND"
MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT"
MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION" # Used in JS; not Py (yet)
MODEL_NOT_FOUND = "MODEL_NOT_FOUND" # Used in JS; not Py (yet)
MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT" # Used in JS; not Py (yet)
OUTPUT_PARSING_FAILURE = "OUTPUT_PARSING_FAILURE"
@@ -86,6 +88,6 @@ def create_message(*, message: str, error_code: ErrorCode) -> str:
"""
return (
f"{message}\n"
"For troubleshooting, visit: https://python.langchain.com/docs/"
f"troubleshooting/errors/{error_code.value} "
"For troubleshooting, visit: https://docs.langchain.com/oss/python/langchain"
f"/errors/{error_code.value} "
)

View File
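Putting `create_message` and `OutputParserException` from the file above together (the message text and fake model output are invented):

```python
from langchain_core.exceptions import ErrorCode, OutputParserException, create_message

msg = create_message(
    message="Model output was not valid JSON.",
    error_code=ErrorCode.OUTPUT_PARSING_FAILURE,
)

try:
    # Parsers raise this so callers can catch parsing failures specifically,
    # optionally feeding llm_output back to the model for a retry.
    raise OutputParserException(msg, llm_output="{not json", send_to_llm=False)
except OutputParserException as e:
    print(e)  # message plus the troubleshooting URL from create_message
```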

@@ -1,7 +1,7 @@
"""Code to help indexing data into a vectorstore.
This package contains helper logic to help deal with indexing data into
a vectorstore while avoiding duplicated content and over-writing content
a `VectorStore` while avoiding duplicated content and over-writing content
if it's unchanged.
"""

View File

@@ -6,16 +6,9 @@ import hashlib
import json
import uuid
import warnings
from collections.abc import (
AsyncIterable,
AsyncIterator,
Callable,
Iterable,
Iterator,
Sequence,
)
from itertools import islice
from typing import (
TYPE_CHECKING,
Any,
Literal,
TypedDict,
@@ -29,6 +22,16 @@ from langchain_core.exceptions import LangChainException
from langchain_core.indexing.base import DocumentIndex, RecordManager
from langchain_core.vectorstores import VectorStore
if TYPE_CHECKING:
from collections.abc import (
AsyncIterable,
AsyncIterator,
Callable,
Iterable,
Iterator,
Sequence,
)
# Magic UUID to use as a namespace for hashing.
# Used to try and generate a unique UUID for each document
# from hashing the document content and metadata.
@@ -298,48 +301,48 @@ def index(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in 0.3.25"
!!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should be returning
the entire dataset, and not just a subset of the dataset.
Otherwise, the auto_cleanup will remove documents that it is not
supposed to.
the entire dataset, and not just a subset of the dataset.
Otherwise, the auto_cleanup will remove documents that it is not
supposed to.
* In incremental mode, if documents associated with a particular
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work select as big a batch size as possible.
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work select as big a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
vector_store: VectorStore or DocumentIndex to index the documents into.
vector_store: `VectorStore` or `DocumentIndex` to index the documents into.
batch_size: Batch size to use when indexing.
cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
that are associated with source ids that were seen during indexing.
Clean up is done continuously during indexing helping to minimize the
probability of users seeing duplicated content.
that are associated with source IDs that were seen during indexing.
Clean up is done continuously during indexing helping to minimize the
probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
- scoped_full: Similar to Full, but only deletes all documents
that haven't been updated AND that are associated with
source ids that were seen during indexing.
that haven't been updated AND that are associated with
source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
of the document.
@@ -349,7 +352,7 @@ def index(
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options include "blake2b", "sha256", and "sha512".
!!! version-added "Added in version 0.3.66"
!!! version-added "Added in `langchain-core` 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -363,10 +366,10 @@ def index(
When changing the key encoder, you must change the
index as well to avoid duplicated documents in the cache.
upsert_kwargs: Additional keyword arguments to pass to the add_documents
method of the VectorStore or the upsert method of the DocumentIndex.
method of the `VectorStore` or the `upsert` method of the `DocumentIndex`.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
!!! version-added "Added in version 0.3.10"
!!! version-added "Added in `langchain-core` 0.3.10"
Returns:
Indexing result which contains information about how many documents
@@ -375,10 +378,10 @@ def index(
Raises:
ValueError: If cleanup mode is not one of 'incremental', 'full', 'scoped_full' or None
ValueError: If cleanup mode is incremental and source_id_key is None.
ValueError: If vectorstore does not have
ValueError: If `VectorStore` does not have
"delete" and "add_documents" required methods.
ValueError: If source_id_key is not None, but is not a string or callable.
TypeError: If `vectorstore` is not a VectorStore or a DocumentIndex.
TypeError: If `vectorstore` is not a `VectorStore` or a `DocumentIndex`.
AssertionError: If `source_id` is None when cleanup mode is incremental.
(should be unreachable code).
"""
@@ -415,7 +418,7 @@ def index(
raise ValueError(msg)
if type(destination).delete == VectorStore.delete:
# Checking if the vectorstore has overridden the default delete method
# Checking if the VectorStore has overridden the default delete method
# implementation which just raises a NotImplementedError
msg = "Vectorstore has not implemented the delete method"
raise ValueError(msg)
@@ -466,11 +469,11 @@ def index(
]
if cleanup in {"incremental", "scoped_full"}:
# source ids are required.
# Source IDs are required.
for source_id, hashed_doc in zip(source_ids, hashed_docs, strict=False):
if source_id is None:
msg = (
f"Source ids are required when cleanup mode is "
f"Source IDs are required when cleanup mode is "
f"incremental or scoped_full. "
f"Document that starts with "
f"content: {hashed_doc.page_content[:100]} "
@@ -479,7 +482,7 @@ def index(
raise ValueError(msg)
if cleanup == "scoped_full":
scoped_full_cleanup_source_ids.add(source_id)
# source ids cannot be None after for loop above.
# Source IDs cannot be None after for loop above.
source_ids = cast("Sequence[str]", source_ids)
exists_batch = record_manager.exists(
@@ -538,7 +541,7 @@ def index(
# If source IDs are provided, we can do the deletion incrementally!
if cleanup == "incremental":
# Get the uids of the documents that were not returned by the loader.
# mypy isn't good enough to determine that source ids cannot be None
# mypy isn't good enough to determine that source IDs cannot be None
# here due to a check that's happening above, so we check again.
for source_id in source_ids:
if source_id is None:
@@ -636,48 +639,48 @@ async def aindex(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in 0.3.25"
!!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should be returning
the entire dataset, and not just a subset of the dataset.
Otherwise, the auto_cleanup will remove documents that it is not
supposed to.
the entire dataset, and not just a subset of the dataset.
Otherwise, the auto_cleanup will remove documents that it is not
supposed to.
* In incremental mode, if documents associated with a particular
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work select as big a batch size as possible.
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work select as big a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
vector_store: VectorStore or DocumentIndex to index the documents into.
vector_store: `VectorStore` or `DocumentIndex` to index the documents into.
batch_size: Batch size to use when indexing.
cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
that are associated with source ids that were seen during indexing.
Clean up is done continuously during indexing helping to minimize the
probability of users seeing duplicated content.
that are associated with source IDs that were seen during indexing.
Clean up is done continuously during indexing helping to minimize the
probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
- scoped_full: Similar to Full, but only deletes all documents
that haven't been updated AND that are associated with
source ids that were seen during indexing.
that haven't been updated AND that are associated with
source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
of the document.
@@ -687,7 +690,7 @@ async def aindex(
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options include "blake2b", "sha256", and "sha512".
!!! version-added "Added in version 0.3.66"
!!! version-added "Added in `langchain-core` 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -701,10 +704,10 @@ async def aindex(
When changing the key encoder, you must change the
index as well to avoid duplicated documents in the cache.
upsert_kwargs: Additional keyword arguments to pass to the add_documents
method of the VectorStore or the upsert method of the DocumentIndex.
method of the `VectorStore` or the `upsert` method of the `DocumentIndex`.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
!!! version-added "Added in version 0.3.10"
!!! version-added "Added in `langchain-core` 0.3.10"
Returns:
Indexing result which contains information about how many documents
@@ -713,10 +716,10 @@ async def aindex(
Raises:
ValueError: If cleanup mode is not one of 'incremental', 'full', 'scoped_full' or None
ValueError: If cleanup mode is incremental and source_id_key is None.
ValueError: If vectorstore does not have
ValueError: If `VectorStore` does not have
"adelete" and "aadd_documents" required methods.
ValueError: If source_id_key is not None, but is not a string or callable.
TypeError: If `vector_store` is not a VectorStore or DocumentIndex.
TypeError: If `vector_store` is not a `VectorStore` or `DocumentIndex`.
AssertionError: If `source_id_key` is None when cleanup mode is
incremental or `scoped_full` (should be unreachable).
"""
@@ -757,7 +760,7 @@ async def aindex(
type(destination).adelete == VectorStore.adelete
and type(destination).delete == VectorStore.delete
):
# Checking if the vectorstore has overridden the default adelete or delete
# Checking if the VectorStore has overridden the default adelete or delete
# methods implementation which just raises a NotImplementedError
msg = "Vectorstore has not implemented the adelete or delete method"
raise ValueError(msg)
@@ -815,11 +818,11 @@ async def aindex(
]
if cleanup in {"incremental", "scoped_full"}:
# If the cleanup mode is incremental, source ids are required.
# If the cleanup mode is incremental, source IDs are required.
for source_id, hashed_doc in zip(source_ids, hashed_docs, strict=False):
if source_id is None:
msg = (
f"Source ids are required when cleanup mode is "
f"Source IDs are required when cleanup mode is "
f"incremental or scoped_full. "
f"Document that starts with "
f"content: {hashed_doc.page_content[:100]} "
@@ -828,7 +831,7 @@ async def aindex(
raise ValueError(msg)
if cleanup == "scoped_full":
scoped_full_cleanup_source_ids.add(source_id)
# source ids cannot be None after for loop above.
# Source IDs cannot be None after for loop above.
source_ids = cast("Sequence[str]", source_ids)
exists_batch = await record_manager.aexists(
@@ -888,7 +891,7 @@ async def aindex(
if cleanup == "incremental":
# Get the uids of the documents that were not returned by the loader.
# mypy isn't good enough to determine that source ids cannot be None
# mypy isn't good enough to determine that source IDs cannot be None
# here due to a check that's happening above, so we check again.
for source_id in source_ids:
if source_id is None:

View File
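An end-to-end sketch of `index()` with the in-memory components from `langchain_core`, so the dedup behavior above can be seen without a real database; document contents are invented:

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=16))

docs = [
    Document(page_content="kitty", metadata={"source": "pets.txt"}),
    Document(page_content="doggy", metadata={"source": "pets.txt"}),
]

result = index(
    docs, record_manager, vector_store,
    cleanup="incremental", source_id_key="source",
)
print(result)  # e.g. {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}

# Re-running with unchanged docs skips them instead of re-indexing.
print(index(docs, record_manager, vector_store,
            cleanup="incremental", source_id_key="source"))
```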

@@ -25,7 +25,7 @@ class RecordManager(ABC):
The record manager abstraction is used by the langchain indexing API.
The record manager keeps track of which documents have been
written into a vectorstore and when they were written.
written into a `VectorStore` and when they were written.
The indexing API computes hashes for each document and stores the hash
together with the write time and the source id in the record manager.
@@ -37,7 +37,7 @@ class RecordManager(ABC):
already been indexed, and to only index new documents.
The main benefit of this abstraction is that it works across many vectorstores.
To be supported, a vectorstore needs to only support the ability to add and
To be supported, a `VectorStore` needs to only support the ability to add and
delete documents by ID. Using the record manager, the indexing API will
be able to delete outdated documents and avoid redundant indexing of documents
that have already been indexed.
@@ -45,13 +45,13 @@ class RecordManager(ABC):
The main constraints of this abstraction are:
1. It relies on the time-stamps to determine which documents have been
indexed and which have not. This means that the time-stamps must be
monotonically increasing. The timestamp should be the timestamp
as measured by the server to minimize issues.
indexed and which have not. This means that the time-stamps must be
monotonically increasing. The timestamp should be the timestamp
as measured by the server to minimize issues.
2. The record manager is currently implemented separately from the
vectorstore, which means that the overall system becomes distributed
and may create issues with consistency. For example, writing to
record manager succeeds, but corresponding writing to vectorstore fails.
vectorstore, which means that the overall system becomes distributed
and may create issues with consistency. For example, writing to
record manager succeeds, but corresponding writing to `VectorStore` fails.
"""
def __init__(
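A minimal sketch of the bookkeeping described above, using the in-memory implementation (the key and group names are illustrative):

```python
from langchain_core.indexing import InMemoryRecordManager

rm = InMemoryRecordManager(namespace="demo/index")
rm.create_schema()

# Record that two document hashes were written, grouped by source.
rm.update(["hash-1", "hash-2"], group_ids=["a.txt", "a.txt"])

print(rm.exists(["hash-1", "hash-3"]))    # [True, False]
print(rm.list_keys(group_ids=["a.txt"]))  # ['hash-1', 'hash-2']
```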
@@ -460,7 +460,7 @@ class UpsertResponse(TypedDict):
class DeleteResponse(TypedDict, total=False):
"""A generic response for delete operation.
The fields in this response are optional and whether the vectorstore
The fields in this response are optional and whether the `VectorStore`
returns them or not is up to the implementation.
"""
@@ -508,8 +508,6 @@ class DocumentIndex(BaseRetriever):
1. Storing document in the index.
2. Fetching document by ID.
3. Searching for document using a query.
!!! version-added "Added in version 0.2.29"
"""
@abc.abstractmethod
@@ -520,7 +518,7 @@ class DocumentIndex(BaseRetriever):
if it is provided. If the ID is not provided, the upsert method is free
to generate an ID for the content.
When an ID is specified and the content already exists in the vectorstore,
When an ID is specified and the content already exists in the `VectorStore`,
the upsert method should update the content with the new data. If the content
does not exist, the upsert method should add the item to the `VectorStore`.
@@ -530,20 +528,20 @@ class DocumentIndex(BaseRetriever):
Returns:
A response object that contains the list of IDs that were
successfully added or updated in the vectorstore and the list of IDs that
successfully added or updated in the `VectorStore` and the list of IDs that
failed to be added or updated.
"""
async def aupsert(
self, items: Sequence[Document], /, **kwargs: Any
) -> UpsertResponse:
"""Add or update documents in the vectorstore. Async version of upsert.
"""Add or update documents in the `VectorStore`. Async version of `upsert`.
The upsert functionality should utilize the ID field of the item
if it is provided. If the ID is not provided, the upsert method is free
to generate an ID for the item.
When an ID is specified and the item already exists in the vectorstore,
When an ID is specified and the item already exists in the `VectorStore`,
the upsert method should update the item with the new data. If the item
does not exist, the upsert method should add the item to the `VectorStore`.
@@ -553,7 +551,7 @@ class DocumentIndex(BaseRetriever):
Returns:
A response object that contains the list of IDs that were
successfully added or updated in the vectorstore and the list of IDs that
successfully added or updated in the `VectorStore` and the list of IDs that
failed to be added or updated.
"""
return await run_in_executor(
@@ -570,7 +568,7 @@ class DocumentIndex(BaseRetriever):
Calling delete without any input parameters should raise a ValueError!
Args:
ids: List of ids to delete.
ids: List of IDs to delete.
**kwargs: Additional keyword arguments. This is up to the implementation.
For example, can include an option to delete the entire index,
or else issue a non-blocking delete etc.
@@ -588,7 +586,7 @@ class DocumentIndex(BaseRetriever):
Calling adelete without any input parameters should raise a ValueError!
Args:
ids: List of ids to delete.
ids: List of IDs to delete.
**kwargs: Additional keyword arguments. This is up to the implementation.
For example, can include an option to delete the entire index.

View File

@@ -23,8 +23,6 @@ class InMemoryDocumentIndex(DocumentIndex):
It provides a simple search API that ranks documents by the number of
times the given query appears in the document.
!!! version-added "Added in version 0.2.29"
"""
store: dict[str, Document] = Field(default_factory=dict)
@@ -64,10 +62,10 @@ class InMemoryDocumentIndex(DocumentIndex):
"""Delete by IDs.
Args:
ids: List of ids to delete.
ids: List of IDs to delete.
Raises:
ValueError: If ids is None.
ValueError: If IDs is None.
Returns:
A response object that contains the list of IDs that were successfully
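A rough sketch of the `DocumentIndex` contract (upsert by ID, search, delete) using this in-memory implementation; the output comments are illustrative:

```python
from langchain_core.documents import Document
from langchain_core.indexing.in_memory import InMemoryDocumentIndex

idx = InMemoryDocumentIndex()

response = idx.upsert([Document(page_content="hello hello world", id="1")])
print(response["succeeded"])  # ['1']

# Retrieval ranks documents by how often the query appears in them.
print(idx.invoke("hello"))

print(idx.delete(["1"]))  # DeleteResponse listing the deleted IDs
```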

View File

@@ -6,12 +6,13 @@ LangChain has two main classes to work with language models: chat models and
**Chat models**
Language models that use a sequence of messages as inputs and return chat messages
as outputs (as opposed to using plain text). Chat models support the assignment of
distinct roles to conversation messages, helping to distinguish messages from the AI,
users, and instructions such as system messages.
as outputs (as opposed to using plain text).
The key abstraction for chat models is `BaseChatModel`. Implementations
should inherit from this class.
Chat models support the assignment of distinct roles to conversation messages, helping
to distinguish messages from the AI, users, and instructions such as system messages.
The key abstraction for chat models is `BaseChatModel`. Implementations should inherit
from this class.
See existing [chat model integrations](https://docs.langchain.com/oss/python/integrations/chat).

View File

@@ -139,7 +139,7 @@ def _normalize_messages(
directly; this may change in the future
- LangChain v0 standard content blocks for backward compatibility
!!! warning "Behavior changed in 1.0.0"
!!! warning "Behavior changed in `langchain-core` 1.0.0"
In previous versions, this function returned messages in LangChain v0 format.
Now, it returns messages in LangChain v1 format, which upgraded chat models now
expect to receive when passing back in message history. For backward

View File

@@ -131,14 +131,19 @@ class BaseLanguageModel(
Caching is not currently supported for streaming methods of models.
"""
verbose: bool = Field(default_factory=_get_verbosity, exclude=True, repr=False)
"""Whether to print out response text."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to add to the run trace."""
tags: list[str] | None = Field(default=None, exclude=True)
"""Tags to add to the run trace."""
metadata: dict[str, Any] | None = Field(default=None, exclude=True)
"""Metadata to add to the run trace."""
custom_get_token_ids: Callable[[str], list[int]] | None = Field(
default=None, exclude=True
)
@@ -195,15 +200,22 @@ class BaseLanguageModel(
type (e.g., pure text completion models vs chat models).
Args:
prompts: List of `PromptValue` objects. A `PromptValue` is an object that
can be converted to match the format of any language model (string for
pure text generation models and `BaseMessage` objects for chat models).
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
prompts: List of `PromptValue` objects.
A `PromptValue` is an object that can be converted to match the format
of any language model (string for pure text generation models and
`BaseMessage` objects for chat models).
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generation` objects for
@@ -232,15 +244,22 @@ class BaseLanguageModel(
type (e.g., pure text completion models vs chat models).
Args:
prompts: List of `PromptValue` objects. A `PromptValue` is an object that
can be converted to match the format of any language model (string for
pure text generation models and `BaseMessage` objects for chat models).
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
prompts: List of `PromptValue` objects.
A `PromptValue` is an object that can be converted to match the format
of any language model (string for pure text generation models and
`BaseMessage` objects for chat models).
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generation` objects for
@@ -262,13 +281,13 @@ class BaseLanguageModel(
return self.lc_attributes
def get_token_ids(self, text: str) -> list[int]:
"""Return the ordered ids of the tokens in a text.
"""Return the ordered IDs of the tokens in a text.
Args:
text: The string input to tokenize.
Returns:
A list of ids corresponding to the tokens in the text, in order they occur
A list of IDs corresponding to the tokens in the text, in order they occur
in the text.
"""
if self.custom_get_token_ids is not None:
@@ -280,6 +299,9 @@ class BaseLanguageModel(
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate
token counts via model-specific tokenizers.
Args:
text: The string input to tokenize.
@@ -298,9 +320,17 @@ class BaseLanguageModel(
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate
token counts via model-specific tokenizers.
!!! note
The base implementation of `get_num_tokens_from_messages` ignores tool
schemas.
* The base implementation of `get_num_tokens_from_messages` ignores tool
schemas.
* The base implementation of `get_num_tokens_from_messages` adds additional
prefixes to messages to represent user roles, which will add to the
overall token count. Model-specific implementations may choose to
handle this differently.
Args:
messages: The message inputs to tokenize.
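To make the token-counting hooks concrete, here is a small sketch that plugs a toy tokenizer into `custom_get_token_ids`; the fake chat model and the word-level tokenizer are purely illustrative:

```python
from langchain_core.language_models import FakeListChatModel


def word_token_ids(text: str) -> list[int]:
    # Toy tokenizer: one "token" per whitespace-separated word.
    return [hash(word) % 50_000 for word in text.split()]


model = FakeListChatModel(responses=["ok"], custom_get_token_ids=word_token_ids)

print(model.get_num_tokens("hello brave new world"))  # 4
```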

View File

@@ -15,6 +15,7 @@ from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import override
from langchain_core._api.beta_decorator import beta
from langchain_core.caches import BaseCache
from langchain_core.callbacks import (
AsyncCallbackManager,
@@ -75,6 +76,8 @@ from langchain_core.utils.utils import LC_ID_PREFIX, from_env
if TYPE_CHECKING:
import uuid
from langchain_model_profiles import ModelProfile # type: ignore[import-untyped]
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool
@@ -88,7 +91,10 @@ def _generate_response_from_error(error: BaseException) -> list[ChatGeneration]:
try:
metadata["body"] = response.json()
except Exception:
metadata["body"] = getattr(response, "text", None)
try:
metadata["body"] = getattr(response, "text", None)
except Exception:
metadata["body"] = None
if hasattr(response, "headers"):
try:
metadata["headers"] = dict(response.headers)
@@ -329,7 +335,7 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
[`langchain-openai`](https://pypi.org/project/langchain-openai)) can also use this
field to roll out new content formats in a backward-compatible way.
!!! version-added "Added in version 1.0"
!!! version-added "Added in `langchain-core` 1.0"
"""
@@ -842,16 +848,21 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
messages: List of list of messages.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
@@ -960,16 +971,21 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
messages: List of list of messages.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
@@ -1502,10 +1518,10 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
schema: The output schema. Can be passed in as:
- an OpenAI function/tool schema,
- a JSON Schema,
- a `TypedDict` class,
- or a Pydantic class.
- An OpenAI function/tool schema,
- A JSON Schema,
- A `TypedDict` class,
- Or a Pydantic class.
If `schema` is a Pydantic class then the model output will be a
Pydantic instance of that class, and the model-generated fields will be
@@ -1517,11 +1533,15 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
when specifying a Pydantic or `TypedDict` class.
include_raw:
If `False` then only the parsed structured output is returned. If
an error occurs during model output parsing it will be raised. If `True`
then both the raw model response (a `BaseMessage`) and the parsed model
response will be returned. If an error occurs during output parsing it
will be caught and returned as well.
If `False` then only the parsed structured output is returned.
If an error occurs during model output parsing it will be raised.
If `True` then both the raw model response (a `BaseMessage`) and the
parsed model response will be returned.
If an error occurs during output parsing it will be caught and returned
as well.
The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
`'parsing_error'`.
@@ -1626,8 +1646,8 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
# }
```
!!! warning "Behavior changed in 0.2.26"
Added support for TypedDict class.
!!! warning "Behavior changed in `langchain-core` 0.2.26"
Added support for `TypedDict` class.
""" # noqa: E501
_ = kwargs.pop("method", None)
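For reference, a minimal sketch of the `include_raw=True` contract documented above; `model` is assumed to be any chat model that supports structured output (for example, one from an integration package):

```python
from pydantic import BaseModel, Field


class Joke(BaseModel):
    """A joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


structured_model = model.with_structured_output(Joke, include_raw=True)
out = structured_model.invoke("Tell me a joke about cats")
# out == {"raw": <AIMessage>, "parsed": Joke(...), "parsing_error": None}
```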
@@ -1668,6 +1688,40 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
return RunnableMap(raw=llm) | parser_with_fallback
return llm | output_parser
@property
@beta()
def profile(self) -> ModelProfile:
"""Return profiling information for the model.
This property relies on the `langchain-model-profiles` package to retrieve chat
model capabilities, such as context window sizes and supported features.
Raises:
ImportError: If `langchain-model-profiles` is not installed.
Returns:
A `ModelProfile` object containing profiling information for the model.
"""
try:
from langchain_model_profiles import get_model_profile # noqa: PLC0415
except ImportError as err:
informative_error_message = (
"To access model profiling information, please install the "
"`langchain-model-profiles` package: "
"`pip install langchain-model-profiles`."
)
raise ImportError(informative_error_message) from err
provider_id = self._llm_type
model_name = (
# Model name is not standardized across integrations. New integrations
# should prefer `model`.
getattr(self, "model", None)
or getattr(self, "model_name", None)
or getattr(self, "model_id", "")
)
return get_model_profile(provider_id, model_name) or {}
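Usage is then a one-liner; the concrete model class below is an assumption for illustration and requires the optional dependency:

```python
# Requires: pip install langchain-model-profiles langchain-openai
from langchain_openai import ChatOpenAI  # illustrative integration choice

model = ChatOpenAI(model="gpt-4o-mini")
profile = model.profile  # ModelProfile with context window / feature info
```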
class SimpleChatModel(BaseChatModel):
"""Simplified implementation for a chat model to inherit from.
@@ -1726,9 +1780,12 @@ def _gen_info_and_msg_metadata(
}
_MAX_CLEANUP_DEPTH = 100
def _cleanup_llm_representation(serialized: Any, depth: int) -> None:
"""Remove non-serializable objects from a serialized object."""
if depth > 100: # Don't cooperate for pathological cases
if depth > _MAX_CLEANUP_DEPTH: # Don't cooperate for pathological cases
return
if not isinstance(serialized, dict):

View File

@@ -1,4 +1,4 @@
"""Fake chat model for testing purposes."""
"""Fake chat models for testing purposes."""
import asyncio
import re

View File

@@ -1,4 +1,7 @@
"""Base interface for large language models to expose."""
"""Base interface for traditional large language models (LLMs) to expose.
These are traditionally older models (newer models generally are chat models).
"""
from __future__ import annotations
@@ -648,9 +651,12 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
Returns:
@@ -668,9 +674,12 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
Returns:
@@ -702,11 +711,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Yields:
Generation chunks.
@@ -728,11 +740,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Yields:
Generation chunks.
@@ -843,10 +858,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -856,8 +875,9 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Raises:
ValueError: If prompts is not a list.
@@ -1113,10 +1133,14 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -1126,8 +1150,9 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Raises:
ValueError: If the length of `callbacks`, `tags`, `metadata`, or
@@ -1391,11 +1416,6 @@ class LLM(BaseLLM):
`astream` will use `_astream` if provided, otherwise it will implement
a fallback behavior that will use `_stream` if `_stream` is implemented,
and use `_acall` if `_stream` is not implemented.
Please see the following guide for more information on how to
implement a custom LLM:
https://python.langchain.com/docs/how_to/custom_llm/
"""
@abstractmethod
@@ -1412,12 +1432,16 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
@@ -1438,12 +1462,16 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.

View File

@@ -17,7 +17,7 @@ def default(obj: Any) -> Any:
obj: The object to serialize to json if it is a Serializable object.
Returns:
A json serializable object or a SerializedNotImplemented object.
A JSON serializable object or a SerializedNotImplemented object.
"""
if isinstance(obj, Serializable):
return obj.to_json()
@@ -38,7 +38,7 @@ def _dump_pydantic_models(obj: Any) -> Any:
def dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) -> str:
"""Return a json string representation of an object.
"""Return a JSON string representation of an object.
Args:
obj: The object to dump.
@@ -47,7 +47,7 @@ def dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) -> str:
**kwargs: Additional arguments to pass to `json.dumps`
Returns:
A json string representation of the object.
A JSON string representation of the object.
Raises:
ValueError: If `default` is passed as a kwarg.
@@ -71,14 +71,12 @@ def dumps(obj: Any, *, pretty: bool = False, **kwargs: Any) -> str:
def dumpd(obj: Any) -> Any:
"""Return a dict representation of an object.
!!! note
Unfortunately this function is not as efficient as it could be because it first
dumps the object to a json string and then loads it back into a dictionary.
Args:
obj: The object to dump.
Returns:
dictionary that can be serialized to json using json.dumps
Dictionary that can be serialized to json using `json.dumps`.
"""
# Unfortunately this function is not as efficient as it could be because it first
# dumps the object to a json string and then loads it back into a dictionary.
return json.loads(dumps(obj))
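A quick round-trip sketch of these helpers on a serializable object:

```python
from langchain_core.load import dumpd, dumps, load
from langchain_core.messages import HumanMessage

msg = HumanMessage(content="hello")

as_dict = dumpd(msg)               # JSON-serializable dict (via dumps + loads)
as_json = dumps(msg, pretty=True)  # JSON string
restored = load(as_dict)           # reconstructs the HumanMessage

assert restored == msg
```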

View File

@@ -61,13 +61,15 @@ class Reviver:
"""Initialize the reviver.
Args:
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the
environment if `secrets_from_env` is `True`.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.
"""
@@ -195,13 +197,15 @@ def loads(
Args:
text: The string to load.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the environment
if `secrets_from_env` is `True`.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.
@@ -237,13 +241,15 @@ def load(
Args:
obj: The object to load.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the environment
if `secrets_from_env` is `True`.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.

View File

@@ -97,11 +97,14 @@ class Serializable(BaseModel, ABC):
by default. This is to prevent accidental serialization of objects that should
not be serialized.
- `get_lc_namespace`: Get the namespace of the LangChain object.
During deserialization, this namespace is used to identify
the correct class to instantiate.
Please see the `Reviver` class in `langchain_core.load.load` for more details.
During deserialization, an additional mapping is used to handle classes that
have moved or been renamed across package versions.
- `lc_secrets`: A map of constructor argument names to secret ids.
- `lc_attributes`: List of additional attribute names that should be included
as part of the serialized representation.
@@ -194,7 +197,7 @@ class Serializable(BaseModel, ABC):
ValueError: If the class has deprecated attributes.
Returns:
A json serializable object or a `SerializedNotImplemented` object.
A JSON serializable object or a `SerializedNotImplemented` object.
"""
if not self.is_lc_serializable():
return self.to_json_not_implemented()

View File

@@ -9,6 +9,9 @@ if TYPE_CHECKING:
from langchain_core.messages.ai import (
AIMessage,
AIMessageChunk,
InputTokenDetails,
OutputTokenDetails,
UsageMetadata,
)
from langchain_core.messages.base import (
BaseMessage,
@@ -87,10 +90,12 @@ __all__ = (
"HumanMessage",
"HumanMessageChunk",
"ImageContentBlock",
"InputTokenDetails",
"InvalidToolCall",
"MessageLikeRepresentation",
"NonStandardAnnotation",
"NonStandardContentBlock",
"OutputTokenDetails",
"PlainTextContentBlock",
"ReasoningContentBlock",
"RemoveMessage",
@@ -104,6 +109,7 @@ __all__ = (
"ToolCallChunk",
"ToolMessage",
"ToolMessageChunk",
"UsageMetadata",
"VideoContentBlock",
"_message_from_dict",
"convert_to_messages",
@@ -145,6 +151,7 @@ _dynamic_imports = {
"HumanMessageChunk": "human",
"NonStandardAnnotation": "content",
"NonStandardContentBlock": "content",
"OutputTokenDetails": "ai",
"PlainTextContentBlock": "content",
"ReasoningContentBlock": "content",
"RemoveMessage": "modifier",
@@ -154,12 +161,14 @@ _dynamic_imports = {
"SystemMessage": "system",
"SystemMessageChunk": "system",
"ImageContentBlock": "content",
"InputTokenDetails": "ai",
"InvalidToolCall": "tool",
"TextContentBlock": "content",
"ToolCall": "tool",
"ToolCallChunk": "tool",
"ToolMessage": "tool",
"ToolMessageChunk": "tool",
"UsageMetadata": "ai",
"VideoContentBlock": "content",
"AnyMessage": "utils",
"MessageLikeRepresentation": "utils",

View File

@@ -48,10 +48,10 @@ class InputTokenDetails(TypedDict, total=False):
}
```
!!! version-added "Added in version 0.3.9"
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
"""
audio: int
@@ -83,7 +83,9 @@ class OutputTokenDetails(TypedDict, total=False):
}
```
!!! version-added "Added in version 0.3.9"
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
"""
@@ -121,9 +123,13 @@ class UsageMetadata(TypedDict):
}
```
!!! warning "Behavior changed in 0.3.9"
!!! warning "Behavior changed in `langchain-core` 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
The LangSmith SDK also has a `UsageMetadata` class. While the two share fields,
LangSmith's `UsageMetadata` has additional fields to capture cost information
used by the LangSmith platform.
"""
input_tokens: int
@@ -131,7 +137,7 @@ class UsageMetadata(TypedDict):
output_tokens: int
"""Count of output (or completion) tokens. Sum of all output token types."""
total_tokens: int
"""Total token count. Sum of input_tokens + output_tokens."""
"""Total token count. Sum of `input_tokens` + `output_tokens`."""
input_token_details: NotRequired[InputTokenDetails]
"""Breakdown of input token counts.
@@ -141,7 +147,6 @@ class UsageMetadata(TypedDict):
"""Breakdown of output token counts.
Does *not* need to sum to full output token count. Does *not* need to have all keys.
"""
@@ -153,7 +158,6 @@ class AIMessage(BaseMessage):
This message represents the output of the model and consists of both
the raw output as returned by the model and standardized fields
(e.g., tool calls, usage metadata) added by the LangChain framework.
"""
tool_calls: list[ToolCall] = []
@@ -651,13 +655,13 @@ def add_ai_message_chunks(
chunk_id = id_
break
else:
# second pass: prefer lc_run-* ids over lc_* ids
# second pass: prefer lc_run-* IDs over lc_* IDs
for id_ in candidates:
if id_ and id_.startswith(LC_ID_PREFIX):
chunk_id = id_
break
else:
# third pass: take any remaining id (auto-generated lc_* ids)
# third pass: take any remaining ID (auto-generated lc_* IDs)
for id_ in candidates:
if id_:
chunk_id = id_

View File

@@ -5,11 +5,9 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Any, cast, overload
from pydantic import ConfigDict, Field
from typing_extensions import Self
from langchain_core._api.deprecation import warn_deprecated
from langchain_core.load.serializable import Serializable
from langchain_core.messages import content as types
from langchain_core.utils import get_bolded_text
from langchain_core.utils._merge import merge_dicts, merge_lists
from langchain_core.utils.interactive_env import is_interactive_env
@@ -17,6 +15,9 @@ from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from collections.abc import Sequence
from typing_extensions import Self
from langchain_core.messages import content as types
from langchain_core.prompts.chat import ChatPromptTemplate
@@ -93,6 +94,10 @@ class BaseMessage(Serializable):
"""Base abstract message class.
Messages are the inputs and outputs of a chat model.
Examples include [`HumanMessage`][langchain.messages.HumanMessage],
[`AIMessage`][langchain.messages.AIMessage], and
[`SystemMessage`][langchain.messages.SystemMessage].
"""
content: str | list[str | dict]
@@ -195,7 +200,7 @@ class BaseMessage(Serializable):
def content_blocks(self) -> list[types.ContentBlock]:
r"""Load content blocks from the message content.
!!! version-added "Added in version 1.0.0"
!!! version-added "Added in `langchain-core` 1.0.0"
"""
# Needed here to avoid circular import, as these classes import BaseMessages

View File

@@ -12,10 +12,11 @@ the implementation in `BaseMessage`.
from __future__ import annotations
from collections.abc import Callable
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Callable
from langchain_core.messages import AIMessage, AIMessageChunk
from langchain_core.messages import content as types

View File

@@ -368,7 +368,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
else:
# Assume it's raw base64 without data URI
try:
# Validate base64 and decode for mime type detection
# Validate base64 and decode for MIME type detection
decoded_bytes = base64.b64decode(url, validate=True)
image_url_b64_block = {
@@ -379,7 +379,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
try:
import filetype # type: ignore[import-not-found] # noqa: PLC0415
# Guess mime type based on file bytes
# Guess MIME type based on file bytes
mime_type = None
kind = filetype.guess(decoded_bytes)
if kind:
@@ -458,6 +458,8 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
if outcome is not None:
server_tool_result_block["extras"]["outcome"] = outcome
converted_blocks.append(server_tool_result_block)
elif item_type == "text":
converted_blocks.append(cast("types.TextContentBlock", item))
else:
# Unknown type, preserve as non-standard
converted_blocks.append({"type": "non_standard", "value": item})

View File

@@ -4,7 +4,6 @@ from __future__ import annotations
import json
import warnings
from collections.abc import Iterable
from typing import TYPE_CHECKING, Any, Literal, cast
from langchain_core.language_models._utils import (
@@ -14,6 +13,8 @@ from langchain_core.language_models._utils import (
from langchain_core.messages import content as types
if TYPE_CHECKING:
from collections.abc import Iterable
from langchain_core.messages import AIMessage, AIMessageChunk

View File

@@ -644,7 +644,7 @@ class AudioContentBlock(TypedDict):
class PlainTextContentBlock(TypedDict):
"""Plaintext data (e.g., from a document).
"""Plaintext data (e.g., from a `.txt` or `.md` document).
!!! note
A `PlainTextContentBlock` existed in `langchain-core<1.0.0`. Although the
@@ -767,7 +767,7 @@ class FileContentBlock(TypedDict):
class NonStandardContentBlock(TypedDict):
"""Provider-specific data.
"""Provider-specific content data.
This block contains data for which there is not yet a standard type.
@@ -802,7 +802,7 @@ class NonStandardContentBlock(TypedDict):
"""
value: dict[str, Any]
"""Provider-specific data."""
"""Provider-specific content data."""
index: NotRequired[int | str]
"""Index of block in aggregate response. Used during streaming."""
@@ -867,7 +867,7 @@ def _get_data_content_block_types() -> tuple[str, ...]:
Example: ("image", "video", "audio", "text-plain", "file")
Note that old style multimodal blocks share type literals with new style blocks.
Speficially, "image", "audio", and "file".
Specifically, "image", "audio", and "file".
See the docstring of `_normalize_messages` in `language_models._utils` for details.
"""
@@ -906,7 +906,7 @@ def is_data_content_block(block: dict) -> bool:
# 'text' is checked to support v0 PlainTextContentBlock types
# We must guard against new style TextContentBlock which also has 'text' `type`
# by ensuring the presense of `source_type`
# by ensuring the presence of `source_type`
if block["type"] == "text" and "source_type" not in block: # noqa: SIM103 # This is more readable
return False
@@ -1399,7 +1399,7 @@ def create_non_standard_block(
"""Create a `NonStandardContentBlock`.
Args:
value: Provider-specific data.
value: Provider-specific content data.
id: Content block identifier. Generated automatically if not provided.
index: Index of block in aggregate response. Used during streaming.

View File

@@ -86,7 +86,7 @@ AnyMessage = Annotated[
| Annotated[ToolMessageChunk, Tag(tag="ToolMessageChunk")],
Field(discriminator=Discriminator(_get_type)),
]
""""A type representing any defined `Message` or `MessageChunk` type."""
"""A type representing any defined `Message` or `MessageChunk` type."""
def get_buffer_string(
@@ -328,12 +328,16 @@ def _convert_to_message(message: MessageLikeRepresentation) -> BaseMessage:
"""
if isinstance(message, BaseMessage):
message_ = message
elif isinstance(message, str):
message_ = _create_message_from_message_type("human", message)
elif isinstance(message, Sequence) and len(message) == 2:
# mypy doesn't realise this can't be a string given the previous branch
message_type_str, template = message # type: ignore[misc]
message_ = _create_message_from_message_type(message_type_str, template)
elif isinstance(message, Sequence):
if isinstance(message, str):
message_ = _create_message_from_message_type("human", message)
else:
try:
message_type_str, template = message
except ValueError as e:
msg = "Message as a sequence must be (role string, template)"
raise NotImplementedError(msg) from e
message_ = _create_message_from_message_type(message_type_str, template)
elif isinstance(message, dict):
msg_kwargs = message.copy()
try:
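The sequence branch above supports the familiar `(role, template)` two-tuple shorthand, e.g.:

```python
from langchain_core.messages import convert_to_messages

msgs = convert_to_messages(
    [
        ("system", "You are terse."),
        ("human", "What is LangChain?"),
    ]
)
print([type(m).__name__ for m in msgs])  # ['SystemMessage', 'HumanMessage']
```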
@@ -439,8 +443,8 @@ def filter_messages(
exclude_ids: Message IDs to exclude.
exclude_tool_calls: Tool call IDs to exclude.
Can be one of the following:
- `True`: all `AIMessage`s with tool calls and all
`ToolMessage` objects will be excluded.
- `True`: All `AIMessage` objects with tool calls and all `ToolMessage`
objects will be excluded.
- a sequence of tool call IDs to exclude:
- `ToolMessage` objects with the corresponding tool call ID will be
excluded.
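A short sketch of the `exclude_tool_calls=True` behavior documented above (the message contents are illustrative):

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    ToolMessage,
    filter_messages,
)

messages = [
    HumanMessage("What is 2 + 2?"),
    AIMessage("", tool_calls=[{"name": "add", "args": {"a": 2, "b": 2}, "id": "call_1"}]),
    ToolMessage("4", tool_call_id="call_1"),
    AIMessage("2 + 2 = 4"),
]

# Drops the tool-calling AIMessage and the ToolMessage, keeps the rest.
print(filter_messages(messages, exclude_tool_calls=True))
```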
@@ -1025,18 +1029,18 @@ def convert_to_openai_messages(
messages: Message-like object or iterable of objects whose contents are
in OpenAI, Anthropic, Bedrock Converse, or VertexAI formats.
text_format: How to format string or text block contents:
- `'string'`:
If a message has a string content, this is left as a string. If
a message has content blocks that are all of type `'text'`, these
are joined with a newline to make a single string. If a message has
content blocks and at least one isn't of type `'text'`, then
all blocks are left as dicts.
- `'block'`:
If a message has a string content, this is turned into a list
with a single content block of type `'text'`. If a message has
content blocks these are left as is.
include_id: Whether to include message ids in the openai messages, if they
are present in the source messages.
- `'string'`:
If a message has a string content, this is left as a string. If
a message has content blocks that are all of type `'text'`, these
are joined with a newline to make a single string. If a message has
content blocks and at least one isn't of type `'text'`, then
all blocks are left as dicts.
- `'block'`:
If a message has a string content, this is turned into a list
with a single content block of type `'text'`. If a message has
content blocks these are left as is.
include_id: Whether to include message IDs in the openai messages, if they
are present in the source messages.
Raises:
ValueError: if an unrecognized `text_format` is specified, or if a message
@@ -1097,7 +1101,7 @@ def convert_to_openai_messages(
# ]
```
!!! version-added "Added in version 0.3.11"
!!! version-added "Added in `langchain-core` 0.3.11"
""" # noqa: E501
if text_format not in {"string", "block"}:
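To illustrate the two `text_format` modes on the same input (expected behavior per the rules above, shown as comments):

```python
from langchain_core.messages import AIMessage, HumanMessage, convert_to_openai_messages

messages = [
    HumanMessage("hello"),
    AIMessage([{"type": "text", "text": "hi"}, {"type": "text", "text": "there"}]),
]

print(convert_to_openai_messages(messages, text_format="string"))
# all-'text' blocks are joined with a newline -> assistant content 'hi\nthere'

print(convert_to_openai_messages(messages, text_format="block"))
# string contents become a single 'text' block; block contents are left as-is
```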
@@ -1697,7 +1701,7 @@ def count_tokens_approximately(
Warning:
This function does not currently support counting image tokens.
!!! version-added "Added in version 0.3.46"
!!! version-added "Added in `langchain-core` 0.3.46"
"""
token_count = 0.0

View File

@@ -1,4 +1,20 @@
"""**OutputParser** classes parse the output of an LLM call."""
"""`OutputParser` classes parse the output of an LLM call into structured data.
!!! tip "Structured output"
Output parsers emerged as an early solution to the challenge of obtaining structured
output from LLMs.
Today, most LLMs support [structured output](https://docs.langchain.com/oss/python/langchain/models#structured-outputs)
natively. In such cases, using output parsers may be unnecessary, and you should
leverage the model's built-in capabilities for structured output. Refer to the
[documentation of your chosen model](https://docs.langchain.com/oss/python/integrations/providers/overview)
for guidance on how to achieve structured output directly.
Output parsers remain valuable when working with models that do not support
structured output natively, or when you require additional processing or validation
of the model's output beyond its inherent capabilities.
"""
from typing import TYPE_CHECKING

View File

@@ -135,6 +135,9 @@ class BaseOutputParser(
Example:
```python
# Implement a simple boolean output parser
class BooleanOutputParser(BaseOutputParser[bool]):
true_val: str = "YES"
false_val: str = "NO"

View File

@@ -1,11 +1,16 @@
"""Format instructions."""
JSON_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
JSON_FORMAT_INSTRUCTIONS = """STRICT OUTPUT FORMAT:
- Return only the JSON value that conforms to the schema. Do not include any additional text, explanations, headings, or separators.
- Do not wrap the JSON in Markdown or code fences (no ``` or ```json).
- Do not prepend or append any text (e.g., do not write "Here is the JSON:").
- The response must be a single top-level JSON value exactly as required by the schema (object/array/etc.), with no trailing commas or comments.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
The output should be formatted as a JSON instance that conforms to the JSON schema below.
Here is the output schema:
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}} the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema (shown in a code block for readability only — do not include any backticks or Markdown in your output):
```
{schema}
```""" # noqa: E501

View File

@@ -31,11 +31,14 @@ TBaseModel = TypeVar("TBaseModel", bound=PydanticBaseModel)
class JsonOutputParser(BaseCumulativeTransformOutputParser[Any]):
"""Parse the output of an LLM call to a JSON object.
Probably the most reliable output parser for getting structured data that does *not*
use function calling.
When used in streaming mode, it will yield partial JSON objects containing
all the keys that have been returned so far.
In streaming, if `diff` is set to `True`, yields JSONPatch operations
describing the difference between the previous and the current object.
In streaming, if `diff` is set to `True`, yields JSONPatch operations describing the
difference between the previous and the current object.
"""
pydantic_object: Annotated[type[TBaseModel] | None, SkipValidation()] = None # type: ignore[valid-type]
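A small sketch of the streaming behavior: feeding string chunks through `transform` yields progressively more complete partial objects (the exact intermediate states depend on chunk boundaries):

```python
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()
chunks = ['{"setup": "Why', ' did...", "punchline"', ': "Because..."}']

for partial in parser.transform(iter(chunks)):
    print(partial)
# e.g. {'setup': 'Why'} ... {'setup': 'Why did...', 'punchline': 'Because...'}
```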

View File

@@ -41,7 +41,7 @@ def droplastn(
class ListOutputParser(BaseTransformOutputParser[list[str]]):
"""Parse the output of an LLM call to a list."""
"""Parse the output of a model to a list."""
@property
def _type(self) -> str:
@@ -74,30 +74,30 @@ class ListOutputParser(BaseTransformOutputParser[list[str]]):
buffer = ""
for chunk in input:
if isinstance(chunk, BaseMessage):
# extract text
# Extract text
chunk_content = chunk.content
if not isinstance(chunk_content, str):
continue
buffer += chunk_content
else:
# add current chunk to buffer
# Add current chunk to buffer
buffer += chunk
# parse buffer into a list of parts
# Parse buffer into a list of parts
try:
done_idx = 0
# yield only complete parts
# Yield only complete parts
for m in droplastn(self.parse_iter(buffer), 1):
done_idx = m.end()
yield [m.group(1)]
buffer = buffer[done_idx:]
except NotImplementedError:
parts = self.parse(buffer)
# yield only complete parts
# Yield only complete parts
if len(parts) > 1:
for part in parts[:-1]:
yield [part]
buffer = parts[-1]
# yield the last part
# Yield the last part
for part in self.parse(buffer):
yield [part]
@@ -108,40 +108,40 @@ class ListOutputParser(BaseTransformOutputParser[list[str]]):
buffer = ""
async for chunk in input:
if isinstance(chunk, BaseMessage):
# extract text
# Extract text
chunk_content = chunk.content
if not isinstance(chunk_content, str):
continue
buffer += chunk_content
else:
# add current chunk to buffer
# Add current chunk to buffer
buffer += chunk
# parse buffer into a list of parts
# Parse buffer into a list of parts
try:
done_idx = 0
# yield only complete parts
# Yield only complete parts
for m in droplastn(self.parse_iter(buffer), 1):
done_idx = m.end()
yield [m.group(1)]
buffer = buffer[done_idx:]
except NotImplementedError:
parts = self.parse(buffer)
# yield only complete parts
# Yield only complete parts
if len(parts) > 1:
for part in parts[:-1]:
yield [part]
buffer = parts[-1]
# yield the last part
# Yield the last part
for part in self.parse(buffer):
yield [part]
class CommaSeparatedListOutputParser(ListOutputParser):
"""Parse the output of an LLM call to a comma-separated list."""
"""Parse the output of a model to a comma-separated list."""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -177,7 +177,7 @@ class CommaSeparatedListOutputParser(ListOutputParser):
)
return [item for sublist in reader for item in sublist]
except csv.Error:
# keep old logic for backup
# Keep old logic for backup
return [part.strip() for part in text.split(",")]
@property
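For instance, under the csv-based parsing (with the string-split fallback above); outputs shown are illustrative:

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
print(parser.parse("red, green, blue"))        # ['red', 'green', 'blue']
print(parser.parse('"a, with comma", plain'))  # quoting handled by the csv reader
```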

View File

@@ -15,7 +15,11 @@ from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.output_parsers.transform import BaseCumulativeTransformOutputParser
from langchain_core.outputs import ChatGeneration, Generation
from langchain_core.utils.json import parse_partial_json
from langchain_core.utils.pydantic import TypeBaseModel
from langchain_core.utils.pydantic import (
TypeBaseModel,
is_pydantic_v1_subclass,
is_pydantic_v2_subclass,
)
logger = logging.getLogger(__name__)
@@ -224,7 +228,7 @@ class JsonOutputKeyToolsParser(JsonOutputToolsParser):
result: The result of the LLM call.
partial: Whether to parse partial JSON.
If `True`, the output will be a JSON object containing
all the keys that have been returned so far.
all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
Raises:
@@ -307,7 +311,7 @@ class PydanticToolsParser(JsonOutputToolsParser):
result: The result of the LLM call.
partial: Whether to parse partial JSON.
If `True`, the output will be a JSON object containing
all the keys that have been returned so far.
all the keys that have been returned so far.
If `False`, the output will be the full JSON object.
Returns:
@@ -323,7 +327,15 @@ class PydanticToolsParser(JsonOutputToolsParser):
return None if self.first_tool_only else []
json_results = [json_results] if self.first_tool_only else json_results
name_dict = {tool.__name__: tool for tool in self.tools}
name_dict_v2: dict[str, TypeBaseModel] = {
tool.model_config.get("title") or tool.__name__: tool
for tool in self.tools
if is_pydantic_v2_subclass(tool)
}
name_dict_v1: dict[str, TypeBaseModel] = {
tool.__name__: tool for tool in self.tools if is_pydantic_v1_subclass(tool)
}
name_dict: dict[str, TypeBaseModel] = {**name_dict_v2, **name_dict_v1}
pydantic_objects = []
for res in json_results:
if not isinstance(res["args"], dict):

View File

@@ -86,7 +86,7 @@ class PydanticOutputParser(JsonOutputParser, Generic[TBaseModel]):
The format instructions for the JSON output.
"""
# Copy schema to avoid altering original Pydantic schema.
schema = dict(self.pydantic_object.model_json_schema().items())
schema = dict(self._get_schema(self.pydantic_object).items())
# Remove extraneous fields.
reduced_schema = schema

View File

@@ -6,14 +6,14 @@ from langchain_core.output_parsers.transform import BaseTransformOutputParser
class StrOutputParser(BaseTransformOutputParser[str]):
"""OutputParser that parses LLMResult into the top likely string."""
"""OutputParser that parses `LLMResult` into the top likely string."""
@classmethod
def is_lc_serializable(cls) -> bool:
"""StrOutputParser is serializable.
"""`StrOutputParser` is serializable.
Returns:
True
`True`
"""
return True

View File

@@ -43,19 +43,19 @@ class _StreamingParser:
"""Streaming parser for XML.
This implementation is pulled into a class to avoid implementation
drift between transform and atransform of the XMLOutputParser.
drift between transform and atransform of the `XMLOutputParser`.
"""
def __init__(self, parser: Literal["defusedxml", "xml"]) -> None:
"""Initialize the streaming parser.
Args:
parser: Parser to use for XML parsing. Can be either 'defusedxml' or 'xml'.
See documentation in XMLOutputParser for more information.
parser: Parser to use for XML parsing. Can be either `'defusedxml'` or
`'xml'`. See documentation in `XMLOutputParser` for more information.
Raises:
ImportError: If defusedxml is not installed and the defusedxml
parser is requested.
ImportError: If `defusedxml` is not installed and the `defusedxml` parser is
requested.
"""
if parser == "defusedxml":
if not _HAS_DEFUSEDXML:
@@ -79,10 +79,10 @@ class _StreamingParser:
"""Parse a chunk of text.
Args:
chunk: A chunk of text to parse. This can be a string or a BaseMessage.
chunk: A chunk of text to parse. This can be a `str` or a `BaseMessage`.
Yields:
A dictionary representing the parsed XML element.
A `dict` representing the parsed XML element.
Raises:
xml.etree.ElementTree.ParseError: If the XML is not well-formed.
@@ -147,46 +147,49 @@ class _StreamingParser:
class XMLOutputParser(BaseTransformOutputParser):
"""Parse an output using xml format."""
"""Parse an output using xml format.
Returns a dictionary of tags.
"""
tags: list[str] | None = None
"""Tags to tell the LLM to expect in the XML output.
Note this may not be perfect depending on the LLM implementation.
For example, with tags=["foo", "bar", "baz"]:
For example, with `tags=["foo", "bar", "baz"]`:
1. A well-formatted XML instance:
"<foo>\n <bar>\n <baz></baz>\n </bar>\n</foo>"
`"<foo>\n <bar>\n <baz></baz>\n </bar>\n</foo>"`
2. A badly-formatted XML instance (missing closing tag for 'bar'):
"<foo>\n <bar>\n </foo>"
`"<foo>\n <bar>\n </foo>"`
3. A badly-formatted XML instance (unexpected 'tag' element):
"<foo>\n <tag>\n </tag>\n</foo>"
`"<foo>\n <tag>\n </tag>\n</foo>"`
"""
encoding_matcher: re.Pattern = re.compile(
r"<([^>]*encoding[^>]*)>\n(.*)", re.MULTILINE | re.DOTALL
)
parser: Literal["defusedxml", "xml"] = "defusedxml"
"""Parser to use for XML parsing. Can be either 'defusedxml' or 'xml'.
"""Parser to use for XML parsing. Can be either `'defusedxml'` or `'xml'`.
* 'defusedxml' is the default parser and is used to prevent XML vulnerabilities
present in some distributions of Python's standard library xml.
`defusedxml` is a wrapper around the standard library parser that
sets up the parser with secure defaults.
* 'xml' is the standard library parser.
* `'defusedxml'` is the default parser and is used to prevent XML vulnerabilities
present in some distributions of Python's standard library xml.
`defusedxml` is a wrapper around the standard library parser that
sets up the parser with secure defaults.
* `'xml'` is the standard library parser.
Use `xml` only if you are sure that your distribution of the standard library
is not vulnerable to XML vulnerabilities.
Use `xml` only if you are sure that your distribution of the standard library is not
vulnerable to XML vulnerabilities.
Please review the following resources for more information:
* https://docs.python.org/3/library/xml.html#xml-vulnerabilities
* https://github.com/tiran/defusedxml
The standard library relies on libexpat for parsing XML:
https://github.com/libexpat/libexpat
The standard library relies on [`libexpat`](https://github.com/libexpat/libexpat)
for parsing XML.
"""
def get_format_instructions(self) -> str:
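Concretely, parsing the well-formatted instance from the `tags` example above yields nested dictionaries of tags (the output shape is shown as an illustration):

```python
from langchain_core.output_parsers import XMLOutputParser

parser = XMLOutputParser(tags=["foo", "bar", "baz"])
print(parser.parse("<foo>\n  <bar>\n    <baz>hi</baz>\n  </bar>\n</foo>"))
# e.g. {'foo': [{'bar': [{'baz': 'hi'}]}]}
```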
@@ -200,12 +203,12 @@ class XMLOutputParser(BaseTransformOutputParser):
text: The output of an LLM call.
Returns:
A dictionary representing the parsed XML.
A `dict` representing the parsed XML.
Raises:
OutputParserException: If the XML is not well-formed.
ImportError: If defusedxml is not installed and the defusedxml
parser is requested.
ImportError: If `defusedxml` is not installed and the `defusedxml` parser is
requested.
"""
# Try to find XML string within triple backticks
# Imports are temporarily placed here to avoid issue with caching on CI

View File

@@ -2,15 +2,17 @@
from __future__ import annotations
from typing import Literal
from typing import TYPE_CHECKING, Literal
from pydantic import model_validator
from typing_extensions import Self
from langchain_core.messages import BaseMessage, BaseMessageChunk
from langchain_core.outputs.generation import Generation
from langchain_core.utils._merge import merge_dicts
if TYPE_CHECKING:
from typing_extensions import Self
class ChatGeneration(Generation):
"""A single chat generation output.

View File

@@ -11,9 +11,8 @@ from langchain_core.utils._merge import merge_dicts
class Generation(Serializable):
"""A single text generation output.
Generation represents the response from an
`"old-fashioned" LLM <https://python.langchain.com/docs/concepts/text_llms/>__` that
generates regular text (not chat messages).
Generation represents the response from an "old-fashioned" LLM (string-in,
string-out) that generates regular text (not chat messages).
This model is used internally by chat models and will eventually
be mapped to a more general `LLMResult` object, and then projected into
@@ -21,8 +20,7 @@ class Generation(Serializable):
LangChain users working with chat models will usually access information via
`AIMessage` (returned from runnable interfaces) or `LLMResult` (available
via callbacks). Please refer the `AIMessage` and `LLMResult` schema documentation
for more information.
via callbacks). Please refer to `AIMessage` and `LLMResult` for more information.
"""
text: str
@@ -35,11 +33,13 @@ class Generation(Serializable):
"""
type: Literal["Generation"] = "Generation"
"""Type is used exclusively for serialization purposes.
Set to "Generation" for this class."""
Set to "Generation" for this class.
"""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -53,7 +53,7 @@ class Generation(Serializable):
class GenerationChunk(Generation):
"""Generation chunk, which can be concatenated with other Generation chunks."""
"""`GenerationChunk`, which can be concatenated with other Generation chunks."""
def __add__(self, other: GenerationChunk) -> GenerationChunk:
"""Concatenate two `GenerationChunk`s.

View File

@@ -30,15 +30,13 @@ class PromptValue(Serializable, ABC):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "schema", "prompt"]`
"""
@@ -50,7 +48,7 @@ class PromptValue(Serializable, ABC):
@abstractmethod
def to_messages(self) -> list[BaseMessage]:
"""Return prompt as a list of Messages."""
"""Return prompt as a list of messages."""
class StringPromptValue(PromptValue):
@@ -64,8 +62,6 @@ class StringPromptValue(PromptValue):
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "prompts", "base"]`
"""
@@ -101,8 +97,6 @@ class ChatPromptValue(PromptValue):
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "prompts", "chat"]`
"""

View File

@@ -6,7 +6,7 @@ import contextlib
import json
import typing
from abc import ABC, abstractmethod
from collections.abc import Callable, Mapping
from collections.abc import Mapping
from functools import cached_property
from pathlib import Path
from typing import (
@@ -33,6 +33,8 @@ from langchain_core.runnables.config import ensure_config
from langchain_core.utils.pydantic import create_model_v2
if TYPE_CHECKING:
from collections.abc import Callable
from langchain_core.documents import Document
@@ -46,21 +48,27 @@ class BasePromptTemplate(
input_variables: list[str]
"""A list of the names of the variables whose values are required as inputs to the
prompt."""
prompt.
"""
optional_variables: list[str] = Field(default=[])
"""optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional. These variables are auto inferred
from the prompt and user need not provide them."""
"""A list of the names of the variables for placeholder or `MessagePlaceholder` that
are optional.
These variables are automatically inferred from the prompt, and the user need not
provide them.
"""
input_types: typing.Dict[str, Any] = Field(default_factory=dict, exclude=True) # noqa: UP006
"""A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings."""
If not provided, all variables are assumed to be strings.
"""
output_parser: BaseOutputParser | None = None
"""How to parse the output of calling an LLM on this formatted prompt."""
partial_variables: Mapping[str, Any] = Field(default_factory=dict)
"""A dictionary of the partial variables the prompt template carries.
Partial variables populate the template so that you don't need to
pass them in every time you call the prompt."""
Partial variables populate the template so that you don't need to pass them in every
time you call the prompt.
"""
metadata: typing.Dict[str, Any] | None = None # noqa: UP006
"""Metadata to be used for tracing."""
tags: list[str] | None = None
@@ -105,7 +113,7 @@ class BasePromptTemplate(
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
model_config = ConfigDict(
@@ -127,7 +135,7 @@ class BasePromptTemplate(
"""Get the input schema for the prompt.
Args:
config: configuration for the prompt.
config: Configuration for the prompt.
Returns:
The input schema for the prompt.
@@ -195,8 +203,8 @@ class BasePromptTemplate(
"""Invoke the prompt.
Args:
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
input: Input to the prompt.
config: Configuration for the prompt.
Returns:
The output of the prompt.
@@ -221,8 +229,8 @@ class BasePromptTemplate(
"""Async invoke the prompt.
Args:
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
input: Input to the prompt.
config: Configuration for the prompt.
Returns:
The output of the prompt.
@@ -242,7 +250,7 @@ class BasePromptTemplate(
@abstractmethod
def format_prompt(self, **kwargs: Any) -> PromptValue:
"""Create Prompt Value.
"""Create `PromptValue`.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -252,7 +260,7 @@ class BasePromptTemplate(
"""
async def aformat_prompt(self, **kwargs: Any) -> PromptValue:
"""Async create Prompt Value.
"""Async create `PromptValue`.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -266,7 +274,7 @@ class BasePromptTemplate(
"""Return a partial of the prompt template.
Args:
**kwargs: partial variables to set.
**kwargs: Partial variables to set.
Returns:
A partial of the prompt template.
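A minimal sketch of partialing in practice, using the public `PromptTemplate` API:

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{greeting}, {name}!")
partial_prompt = prompt.partial(greeting="Hello")  # pre-fill one variable
partial_prompt.format(name="Bob")  # "Hello, Bob!"
```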
@@ -296,9 +304,9 @@ class BasePromptTemplate(
A formatted string.
Example:
```python
prompt.format(variable1="foo")
```
```python
prompt.format(variable1="foo")
```
"""
async def aformat(self, **kwargs: Any) -> FormatOutputType:
@@ -311,9 +319,9 @@ class BasePromptTemplate(
A formatted string.
Example:
```python
await prompt.aformat(variable1="foo")
```
```python
await prompt.aformat(variable1="foo")
```
"""
return self.format(**kwargs)
@@ -348,9 +356,9 @@ class BasePromptTemplate(
NotImplementedError: If the prompt type is not implemented.
Example:
```python
prompt.save(file_path="path/prompt.yaml")
```
```python
prompt.save(file_path="path/prompt.yaml")
```
"""
if self.partial_variables:
msg = "Cannot save prompt with partial variables."
@@ -402,23 +410,23 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
First, this pulls information from the document from two sources:
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
1. `page_content`:
This takes the information from the `document.page_content` and assigns it to a
variable named `page_content`.
2. `metadata`:
This takes information from `document.metadata` and assigns it to variables of
the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: Document, the page_content and metadata will be used to create
doc: `Document`, the `page_content` and `metadata` will be used to create
the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
prompt: `BasePromptTemplate`, will be used to format the `page_content`
and `metadata` into the final string.
Returns:
string of the document formatted.
The document formatted as a string.
Example:
```python
@@ -429,7 +437,6 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"
```
"""
return prompt.format(**_get_document_info(doc, prompt))
@@ -440,22 +447,22 @@ async def aformat_document(doc: Document, prompt: BasePromptTemplate[str]) -> st
First, this pulls information from the document from two sources:
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
1. `page_content`:
This takes the information from the `document.page_content` and assigns it to a
variable named `page_content`.
2. `metadata`:
This takes information from `document.metadata` and assigns it to variables of
the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: Document, the page_content and metadata will be used to create
doc: `Document`, the `page_content` and `metadata` will be used to create
the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
prompt: `BasePromptTemplate`, will be used to format the `page_content`
and `metadata` into the final string.
Returns:
string of the document formatted.
The document formatted as a string.
"""
return await prompt.aformat(**_get_document_info(doc, prompt))
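Since the async variant carries no example of its own, here is a minimal sketch mirroring the sync one; it imports `aformat_document` from `langchain_core.prompts.base`, where this diff defines it:

```python
import asyncio

from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
from langchain_core.prompts.base import aformat_document

doc = Document(page_content="This is a joke", metadata={"page": 1})
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
asyncio.run(aformat_document(doc, prompt))  # "Page 1: This is a joke"
```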

View File

@@ -587,14 +587,15 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
for prompt in self.prompt:
inputs = {var: kwargs[var] for var in prompt.input_variables}
if isinstance(prompt, StringPromptTemplate):
formatted: str | ImageURL | dict[str, Any] = prompt.format(**inputs)
content.append({"type": "text", "text": formatted})
formatted_text: str = prompt.format(**inputs)
if formatted_text != "":
content.append({"type": "text", "text": formatted_text})
elif isinstance(prompt, ImagePromptTemplate):
formatted = prompt.format(**inputs)
content.append({"type": "image_url", "image_url": formatted})
formatted_image: ImageURL = prompt.format(**inputs)
content.append({"type": "image_url", "image_url": formatted_image})
elif isinstance(prompt, DictPromptTemplate):
formatted = prompt.format(**inputs)
content.append(formatted)
formatted_dict: dict[str, Any] = prompt.format(**inputs)
content.append(formatted_dict)
return self._msg_class(
content=content, additional_kwargs=self.additional_kwargs
)
@@ -617,16 +618,15 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
for prompt in self.prompt:
inputs = {var: kwargs[var] for var in prompt.input_variables}
if isinstance(prompt, StringPromptTemplate):
formatted: str | ImageURL | dict[str, Any] = await prompt.aformat(
**inputs
)
content.append({"type": "text", "text": formatted})
formatted_text: str = await prompt.aformat(**inputs)
if formatted_text != "":
content.append({"type": "text", "text": formatted_text})
elif isinstance(prompt, ImagePromptTemplate):
formatted = await prompt.aformat(**inputs)
content.append({"type": "image_url", "image_url": formatted})
formatted_image: ImageURL = await prompt.aformat(**inputs)
content.append({"type": "image_url", "image_url": formatted_image})
elif isinstance(prompt, DictPromptTemplate):
formatted = prompt.format(**inputs)
content.append(formatted)
formatted_dict: dict[str, Any] = prompt.format(**inputs)
content.append(formatted_dict)
return self._msg_class(
content=content, additional_kwargs=self.additional_kwargs
)
@@ -776,42 +776,36 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
Use to create flexible templated prompts for chat models.
Examples:
!!! warning "Behavior changed in 0.2.24"
You can pass any message-like format supported by
`ChatPromptTemplate.from_messages()` directly to the `ChatPromptTemplate()`
initializer.
```python
from langchain_core.prompts import ChatPromptTemplate
```python
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate(
[
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
]
)
template = ChatPromptTemplate(
[
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
]
)
prompt_value = template.invoke(
{
"name": "Bob",
"user_input": "What is your name?",
}
)
# Output:
# ChatPromptValue(
# messages=[
# SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
# HumanMessage(content='Hello, how are you doing?'),
# AIMessage(content="I'm doing well, thanks!"),
# HumanMessage(content='What is your name?')
# ]
# )
```
prompt_value = template.invoke(
{
"name": "Bob",
"user_input": "What is your name?",
}
)
# Output:
# ChatPromptValue(
# messages=[
# SystemMessage(content='You are a helpful AI bot. Your name is Bob.'),
# HumanMessage(content='Hello, how are you doing?'),
# AIMessage(content="I'm doing well, thanks!"),
# HumanMessage(content='What is your name?')
# ]
# )
```
Messages Placeholder:
!!! note "Messages Placeholder"
```python
# In addition to Human/AI/Tool/Function messages,
@@ -852,13 +846,12 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
# )
```
Single-variable template:
!!! note "Single-variable template"
If your prompt has only a single input variable (i.e., one instance of "{variable_name}"),
and you invoke the template with a non-dict object, the prompt template will
inject the provided argument into that variable location.
```python
from langchain_core.prompts import ChatPromptTemplate
@@ -898,25 +891,35 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Create a chat prompt template from a variety of message formats.
Args:
messages: sequence of message representations.
messages: Sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
template_format: format of the template.
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
template_format: Format of the template.
input_variables: A list of the names of the variables whose values are
required as inputs to the prompt.
optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional.
These variables are automatically inferred from the prompt, and the user
need not provide them.
partial_variables: A dictionary of the partial variables the prompt
template carries. Partial variables populate the template so that you
don't need to pass them in every time you call the prompt.
template carries.
Partial variables populate the template so that you don't need to pass
them in every time you call the prompt.
validate_template: Whether to validate the template.
input_types: A dictionary of the types of the variables the prompt template
expects. If not provided, all variables are assumed to be strings.
expects.
If not provided, all variables are assumed to be strings.
Examples:
Instantiation from a list of message templates:
@@ -1121,12 +1124,17 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
)
```
Args:
messages: sequence of message representations.
messages: Sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
template_format: Format of the template.
Returns:
@@ -1238,7 +1246,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Extend the chat template with a sequence of messages.
Args:
messages: sequence of message representations to append.
messages: Sequence of message representations to append.
"""
self.messages.extend(
[_convert_to_message_template(message) for message in messages]
@@ -1335,11 +1343,25 @@ def _create_template_from_message_type(
raise ValueError(msg)
var_name = template[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=True)
elif len(template) == 2 and isinstance(template[1], bool):
var_name_wrapped, is_optional = template
else:
try:
var_name_wrapped, is_optional = template
except ValueError as e:
msg = (
"Unexpected arguments for placeholder message type."
" Expected either a single string variable name"
" or a list of [variable_name: str, is_optional: bool]."
f" Got: {template}"
)
raise ValueError(msg) from e
if not isinstance(is_optional, bool):
msg = f"Expected is_optional to be a boolean. Got: {is_optional}"
raise ValueError(msg) # noqa: TRY004
if not isinstance(var_name_wrapped, str):
msg = f"Expected variable name to be a string. Got: {var_name_wrapped}"
raise ValueError(msg) # noqa:TRY004
raise ValueError(msg) # noqa: TRY004
if var_name_wrapped[0] != "{" or var_name_wrapped[-1] != "}":
msg = (
f"Invalid placeholder template: {var_name_wrapped}."
@@ -1349,14 +1371,6 @@ def _create_template_from_message_type(
var_name = var_name_wrapped[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=is_optional)
else:
msg = (
"Unexpected arguments for placeholder message type."
" Expected either a single string variable name"
" or a list of [variable_name: str, is_optional: bool]."
f" Got: {template}"
)
raise ValueError(msg)
else:
msg = (
f"Unexpected message type: {message_type}. Use one of 'human',"
@@ -1410,10 +1424,11 @@ def _convert_to_message_template(
)
raise ValueError(msg)
message = (message["role"], message["content"])
if len(message) != 2:
try:
message_type_str, template = message
except ValueError as e:
msg = f"Expected 2-tuple of (role, template), got {message}"
raise ValueError(msg)
message_type_str, template = message
raise ValueError(msg) from e
if isinstance(message_type_str, str):
message_ = _create_template_from_message_type(
message_type_str, template, template_format=template_format
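Putting the placeholder handling above together, a minimal sketch: a bare `"{var}"` placeholder is optional, while the two-element `["{var}", False]` form makes it required.

```python
from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("placeholder", "{history}"),  # bare string form: optional=True
        ("human", "{question}"),
    ]
)
# The optional placeholder may be omitted at invoke time:
template.invoke({"question": "What is 2 + 2?"})
```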

View File

@@ -69,7 +69,7 @@ class DictPromptTemplate(RunnableSerializable[dict, dict]):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod

View File

@@ -6,10 +6,10 @@ from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any
from langchain_core.load import Serializable
from langchain_core.messages import BaseMessage
from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from langchain_core.messages import BaseMessage
from langchain_core.prompts.chat import ChatPromptTemplate
@@ -18,7 +18,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -32,13 +32,13 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@abstractmethod
def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Format messages from kwargs. Should return a list of BaseMessages.
"""Format messages from kwargs. Should return a list of `BaseMessage` objects.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
List of BaseMessages.
List of `BaseMessage` objects.
"""
async def aformat_messages(self, **kwargs: Any) -> list[BaseMessage]:
@@ -48,7 +48,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
**kwargs: Keyword arguments to use for formatting.
Returns:
List of BaseMessages.
List of `BaseMessage` objects.
"""
return self.format_messages(**kwargs)

View File

@@ -4,9 +4,8 @@ from __future__ import annotations
import warnings
from abc import ABC
from collections.abc import Callable, Sequence
from string import Formatter
from typing import Any, Literal
from typing import TYPE_CHECKING, Any, Literal
from pydantic import BaseModel, create_model
@@ -16,6 +15,9 @@ from langchain_core.utils import get_colored_text, mustache
from langchain_core.utils.formatting import formatter
from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from collections.abc import Callable, Sequence
try:
from jinja2 import Environment, meta
from jinja2.sandbox import SandboxedEnvironment
@@ -122,13 +124,16 @@ def mustache_formatter(template: str, /, **kwargs: Any) -> str:
def mustache_template_vars(
template: str,
) -> set[str]:
"""Get the variables from a mustache template.
"""Get the top-level variables from a mustache template.
For nested variables like `{{person.name}}`, only the top-level
key (`person`) is returned.
Args:
template: The template string.
Returns:
The variables from the template.
The top-level variables from the template.
"""
variables: set[str] = set()
section_depth = 0
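A minimal sketch of the top-level-only behavior described in the docstring, assuming `mustache_template_vars` is importable from `langchain_core.prompts.string`, where it is defined:

```python
from langchain_core.prompts.string import mustache_template_vars

mustache_template_vars("Hello {{person.name}}, welcome to {{city}}!")
# {"person", "city"} -- only the top-level key of the nested path is kept
```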

View File

@@ -104,19 +104,23 @@ class StructuredPrompt(ChatPromptTemplate):
)
```
Args:
messages: sequence of message representations.
messages: Sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}"
schema: a dictionary representation of function call, or a Pydantic model.
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
schema: A dictionary representation of a function call, or a Pydantic model.
**kwargs: Any additional kwargs to pass through to
`ChatModel.with_structured_output(schema, **kwargs)`.
Returns:
a structured prompt template
A structured prompt template.
"""
return cls(messages, schema, **kwargs)

View File

@@ -50,65 +50,65 @@ class LangSmithRetrieverParams(TypedDict, total=False):
class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
"""Abstract base class for a Document retrieval system.
"""Abstract base class for a document retrieval system.
A retrieval system is defined as something that can take string queries and return
the most 'relevant' Documents from some source.
the most 'relevant' documents from some source.
Usage:
A retriever follows the standard Runnable interface, and should be used
via the standard Runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
A retriever follows the standard `Runnable` interface, and should be used via the
standard `Runnable` methods of `invoke`, `ainvoke`, `batch`, `abatch`.
Implementation:
When implementing a custom retriever, the class should implement
the `_get_relevant_documents` method to define the logic for retrieving documents.
When implementing a custom retriever, the class should implement the
`_get_relevant_documents` method to define the logic for retrieving documents.
Optionally, a native async implementation can be provided by overriding the
`_aget_relevant_documents` method.
Example: A retriever that returns the first 5 documents from a list of documents
!!! example "Retriever that returns the first 5 documents from a list of documents"
```python
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
```python
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
class SimpleRetriever(BaseRetriever):
docs: list[Document]
k: int = 5
class SimpleRetriever(BaseRetriever):
docs: list[Document]
k: int = 5
def _get_relevant_documents(self, query: str) -> list[Document]:
\"\"\"Return the first k documents from the list of documents\"\"\"
return self.docs[:self.k]
def _get_relevant_documents(self, query: str) -> list[Document]:
\"\"\"Return the first k documents from the list of documents\"\"\"
return self.docs[:self.k]
async def _aget_relevant_documents(self, query: str) -> list[Document]:
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
```
async def _aget_relevant_documents(self, query: str) -> list[Document]:
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
```
Example: A simple retriever based on a scikit-learn vectorizer
!!! example "Simple retriever based on a scikit-learn vectorizer"
```python
from sklearn.metrics.pairwise import cosine_similarity
```python
from sklearn.metrics.pairwise import cosine_similarity
class TFIDFRetriever(BaseRetriever, BaseModel):
vectorizer: Any
docs: list[Document]
tfidf_array: Any
k: int = 4
class TFIDFRetriever(BaseRetriever, BaseModel):
vectorizer: Any
docs: list[Document]
tfidf_array: Any
k: int = 4
class Config:
arbitrary_types_allowed = True
class Config:
arbitrary_types_allowed = True
def _get_relevant_documents(self, query: str) -> list[Document]:
# Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
query_vec = self.vectorizer.transform([query])
# Op -- (n_docs,1) -- Cosine Sim with each doc
results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]
```
def _get_relevant_documents(self, query: str) -> list[Document]:
# Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
query_vec = self.vectorizer.transform([query])
# Op -- (n_docs,1) -- Cosine Sim with each doc
results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]
```
"""
model_config = ConfigDict(
@@ -119,15 +119,19 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
_expects_other_args: bool = False
tags: list[str] | None = None
"""Optional list of tags associated with the retriever.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""
metadata: dict[str, Any] | None = None
"""Optional metadata associated with the retriever.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""

View File

@@ -118,6 +118,8 @@ if TYPE_CHECKING:
Other = TypeVar("Other")
_RUNNABLE_GENERIC_NUM_ARGS = 2 # Input and Output
class Runnable(ABC, Generic[Input, Output]):
"""A unit of work that can be invoked, batched, streamed, transformed and composed.
@@ -147,11 +149,11 @@ class Runnable(ABC, Generic[Input, Output]):
the `input_schema` property, the `output_schema` property and `config_schema`
method.
LCEL and Composition
====================
Composition
===========
Runnable objects can be composed together to create chains in a declarative way.
The LangChain Expression Language (LCEL) is a declarative way to compose
`Runnable` objects into chains.
Any chain constructed this way will automatically have sync, async, batch, and
streaming support.
@@ -235,21 +237,21 @@ class Runnable(ABC, Generic[Input, Output]):
You can set the global debug flag to True to enable debug output for all chains:
```python
from langchain_core.globals import set_debug
```python
from langchain_core.globals import set_debug
set_debug(True)
```
set_debug(True)
```
Alternatively, you can pass existing or custom callbacks to any given chain:
```python
from langchain_core.tracers import ConsoleCallbackHandler
```python
from langchain_core.tracers import ConsoleCallbackHandler
chain.invoke(..., config={"callbacks": [ConsoleCallbackHandler()]})
```
chain.invoke(..., config={"callbacks": [ConsoleCallbackHandler()]})
```
For a UI (and much more) checkout [LangSmith](https://docs.smith.langchain.com/).
For a UI (and much more) check out [LangSmith](https://docs.langchain.com/langsmith/home).
"""
@@ -309,7 +311,10 @@ class Runnable(ABC, Generic[Input, Output]):
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if "args" in metadata and len(metadata["args"]) == 2:
if (
"args" in metadata
and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
):
return metadata["args"][0]
# If we didn't find a Pydantic model in the parent classes,
@@ -317,7 +322,7 @@ class Runnable(ABC, Generic[Input, Output]):
# Runnables that are not pydantic models.
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == 2:
if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
return type_args[0]
msg = (
@@ -340,12 +345,15 @@ class Runnable(ABC, Generic[Input, Output]):
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if "args" in metadata and len(metadata["args"]) == 2:
if (
"args" in metadata
and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
):
return metadata["args"][1]
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == 2:
if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
return type_args[1]
msg = (
@@ -424,7 +432,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_input_jsonschema())
```
!!! version-added "Added in version 0.3.0"
!!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.get_input_schema(config).model_json_schema()
@@ -502,7 +510,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_output_jsonschema())
```
!!! version-added "Added in version 0.3.0"
!!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.get_output_schema(config).model_json_schema()
@@ -566,7 +574,7 @@ class Runnable(ABC, Generic[Input, Output]):
Returns:
A JSON schema that represents the config of the `Runnable`.
!!! version-added "Added in version 0.3.0"
!!! version-added "Added in `langchain-core` 0.3.0"
"""
return self.config_schema(include=include).model_json_schema()
@@ -766,7 +774,7 @@ class Runnable(ABC, Generic[Input, Output]):
"""Assigns new fields to the `dict` output of this `Runnable`.
```python
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
@@ -818,10 +826,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
Returns:
The output of the `Runnable`.
@@ -838,10 +848,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
Returns:
The output of the `Runnable`.
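A minimal sketch of passing the standard config keys mentioned above:

```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x * 2)
runnable.invoke(
    3,
    config={"tags": ["docs-demo"], "metadata": {"source": "example"}},
)  # 6 -- tags/metadata are attached to the traced run
```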
@@ -868,8 +880,9 @@ class Runnable(ABC, Generic[Input, Output]):
config: A config to use when invoking the `Runnable`. The config supports
standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work
to do in parallel, and other keys. Please refer to the
`RunnableConfig` for more details.
to do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -932,10 +945,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -998,10 +1013,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -1061,10 +1078,12 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
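A minimal sketch of `return_exceptions`: failing inputs yield the exception object in place instead of aborting the whole batch.

```python
from langchain_core.runnables import RunnableLambda

def fragile(x: int) -> int:
    if x == 0:
        raise ValueError("zero not allowed")
    return 10 // x

runnable = RunnableLambda(fragile)
runnable.batch([5, 0, 2], return_exceptions=True)
# [2, ValueError("zero not allowed"), 5]
```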
@@ -1742,46 +1761,52 @@ class Runnable(ABC, Generic[Input, Output]):
import time
import asyncio
def format_t(timestamp: float) -> str:
return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
async def test_runnable(time_to_sleep: int):
print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
await asyncio.sleep(time_to_sleep)
print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")
async def fn_start(run_obj: Runnable):
print(f"on start callback starts at {format_t(time.time())}")
await asyncio.sleep(3)
print(f"on start callback ends at {format_t(time.time())}")
async def fn_end(run_obj: Runnable):
print(f"on end callback starts at {format_t(time.time())}")
await asyncio.sleep(2)
print(f"on end callback ends at {format_t(time.time())}")
runnable = RunnableLambda(test_runnable).with_alisteners(
on_start=fn_start,
on_end=fn_end
on_start=fn_start, on_end=fn_end
)
async def concurrent_runs():
await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))
asyncio.run(concurrent_runs())
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00
asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
"""
return RunnableBinding(
@@ -1843,7 +1868,7 @@ class Runnable(ABC, Generic[Input, Output]):
`exp_base`, and `jitter` (all `float` values).
Returns:
A new Runnable that retries the original Runnable on exceptions.
A new `Runnable` that retries the original `Runnable` on exceptions.
Example:
```python
@@ -1927,7 +1952,9 @@ class Runnable(ABC, Generic[Input, Output]):
exceptions_to_handle: A tuple of exception types to handle.
exception_key: If `string` is specified then handled exceptions will be
passed to fallbacks as part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a
dictionary as input.
@@ -1963,7 +1990,9 @@ class Runnable(ABC, Generic[Input, Output]):
exceptions_to_handle: A tuple of exception types to handle.
exception_key: If `string` is specified then handled exceptions will be
passed to fallbacks as part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a
dictionary as input.
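A minimal sketch of `exception_key` in action; note that both the base runnable and the fallback accept a `dict`, as required:

```python
from langchain_core.runnables import RunnableLambda

def primary(inputs: dict) -> str:
    raise RuntimeError("primary failed")

def fallback(inputs: dict) -> str:
    # The handled exception is injected under the configured key.
    return f"recovered from: {inputs['error']}"

chain = RunnableLambda(primary).with_fallbacks(
    [RunnableLambda(fallback)], exception_key="error"
)
chain.invoke({"question": "hi"})  # "recovered from: primary failed"
```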
@@ -2429,10 +2458,14 @@ class Runnable(ABC, Generic[Input, Output]):
`as_tool` will instantiate a `BaseTool` with a name, description, and
`args_schema` from a `Runnable`. Where possible, schemas are inferred
from `runnable.get_input_schema`. Alternatively (e.g., if the
`Runnable` takes a dict as input and the specific dict keys are not typed),
the schema can be specified directly with `args_schema`. You can also
pass `arg_types` to just specify the required arguments and their types.
from `runnable.get_input_schema`.
Alternatively (e.g., if the `Runnable` takes a dict as input and the specific
`dict` keys are not typed), the schema can be specified directly with
`args_schema`.
You can also pass `arg_types` to just specify the required arguments and their
types.
Args:
args_schema: The schema for the tool.
@@ -2501,7 +2534,7 @@ class Runnable(ABC, Generic[Input, Output]):
as_tool.invoke({"a": 3, "b": [1, 2]})
```
String input:
`str` input:
```python
from langchain_core.runnables import RunnableLambda
@@ -2519,9 +2552,6 @@ class Runnable(ABC, Generic[Input, Output]):
as_tool = runnable.as_tool()
as_tool.invoke("b")
```
!!! version-added "Added in version 0.2.14"
"""
# Avoid circular import
from langchain_core.tools import convert_runnable_to_tool # noqa: PLC0415
@@ -2640,7 +2670,7 @@ class RunnableSerializable(Serializable, Runnable[Input, Output]):
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
model_name="claude-3-7-sonnet-20250219"
model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
ConfigurableField(id="llm"),
default_key="anthropic",
@@ -2753,6 +2783,9 @@ def _seq_output_schema(
return last.get_output_schema(config)
_RUNNABLE_SEQUENCE_MIN_STEPS = 2
class RunnableSequence(RunnableSerializable[Input, Output]):
"""Sequence of `Runnable` objects, where the output of one is the input of the next.
@@ -2862,7 +2895,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
name: The name of the `Runnable`.
first: The first `Runnable` in the sequence.
middle: The middle `Runnable` objects in the sequence.
last: The last Runnable in the sequence.
last: The last `Runnable` in the sequence.
Raises:
ValueError: If the sequence has less than 2 steps.
@@ -2875,8 +2908,11 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
steps_flat.extend(step.steps)
else:
steps_flat.append(coerce_to_runnable(step))
if len(steps_flat) < 2:
msg = f"RunnableSequence must have at least 2 steps, got {len(steps_flat)}"
if len(steps_flat) < _RUNNABLE_SEQUENCE_MIN_STEPS:
msg = (
f"RunnableSequence must have at least {_RUNNABLE_SEQUENCE_MIN_STEPS} "
f"steps, got {len(steps_flat)}"
)
raise ValueError(msg)
super().__init__(
first=steps_flat[0],
@@ -2907,7 +2943,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
model_config = ConfigDict(
@@ -3503,7 +3539,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
Returns a mapping of their outputs.
`RunnableParallel` is one of the two main composition primitives for the LCEL,
`RunnableParallel` is one of the two main composition primitives,
alongside `RunnableSequence`. It invokes `Runnable`s concurrently, providing the
same input to each.
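A minimal sketch of the fan-out behavior: each step receives the same input, and the outputs are collected into a `dict` keyed by step name.

```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

parallel = RunnableParallel(
    doubled=RunnableLambda(lambda x: x * 2),
    squared=RunnableLambda(lambda x: x**2),
)
parallel.invoke(3)  # {"doubled": 6, "squared": 9}
```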
@@ -3613,7 +3649,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -3671,6 +3707,12 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
== "object"
for s in self.steps__.values()
):
for step in self.steps__.values():
fields = step.get_input_schema(config).model_fields
root_field = fields.get("root")
if root_field is not None and root_field.annotation != Any:
return super().get_input_schema(config)
# This is correct, but pydantic typings/mypy don't think so.
return create_model_v2(
self.get_name("Input"),
@@ -4480,7 +4522,7 @@ class RunnableLambda(Runnable[Input, Output]):
# on itemgetter objects, so we have to parse the repr
items = str(func).replace("operator.itemgetter(", "")[:-1].split(", ")
if all(
item[0] == "'" and item[-1] == "'" and len(item) > 2 for item in items
item[0] == "'" and item[-1] == "'" and item != "''" for item in items
):
fields = {item[1:-1]: (Any, ...) for item in items}
# It's a dict, lol
@@ -5142,7 +5184,7 @@ class RunnableEachBase(RunnableSerializable[list[Input], list[Output]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -5325,7 +5367,7 @@ class RunnableEach(RunnableEachBase[Input, Output]):
class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[no-redef]
"""`Runnable` that delegates calls to another `Runnable` with a set of kwargs.
"""`Runnable` that delegates calls to another `Runnable` with a set of `**kwargs`.
Use only if creating a new `RunnableBinding` subclass with different `__init__`
args.
@@ -5465,7 +5507,7 @@ class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -5755,7 +5797,7 @@ class RunnableBinding(RunnableBindingBase[Input, Output]): # type: ignore[no-re
```python
# Create a Runnable binding that invokes the chat model with the
# additional kwarg `stop=['-']` when running it.
from langchain_community.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
model.invoke('Say "Parrot-MAGIC"', stop=["-"]) # Should return `Parrot`

View File

@@ -36,17 +36,19 @@ from langchain_core.runnables.utils import (
get_unique_config_specs,
)
_MIN_BRANCHES = 2
class RunnableBranch(RunnableSerializable[Input, Output]):
"""Runnable that selects which branch to run based on a condition.
"""`Runnable` that selects which branch to run based on a condition.
The Runnable is initialized with a list of (condition, Runnable) pairs and
The `Runnable` is initialized with a list of `(condition, Runnable)` pairs and
a default branch.
When operating on an input, the first condition that evaluates to True is
selected, and the corresponding Runnable is run on the input.
selected, and the corresponding `Runnable` is run on the input.
If no condition evaluates to True, the default branch is run on the input.
If no condition evaluates to `True`, the default branch is run on the input.
Examples:
```python
@@ -65,9 +67,9 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
"""
branches: Sequence[tuple[Runnable[Input, bool], Runnable[Input, Output]]]
"""A list of (condition, Runnable) pairs."""
"""A list of `(condition, Runnable)` pairs."""
default: Runnable[Input, Output]
"""A Runnable to run if no condition is met."""
"""A `Runnable` to run if no condition is met."""
def __init__(
self,
@@ -79,19 +81,19 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
]
| RunnableLike,
) -> None:
"""A Runnable that runs one of two branches based on a condition.
"""A `Runnable` that runs one of two branches based on a condition.
Args:
*branches: A list of (condition, Runnable) pairs.
Defaults a Runnable to run if no condition is met.
*branches: A list of `(condition, Runnable)` pairs.
Defaults a `Runnable` to run if no condition is met.
Raises:
ValueError: If the number of branches is less than 2.
TypeError: If the default branch is not Runnable, Callable or Mapping.
TypeError: If a branch is not a tuple or list.
ValueError: If a branch is not of length 2.
ValueError: If the number of branches is less than `2`.
TypeError: If the default branch is not `Runnable`, `Callable` or `Mapping`.
TypeError: If a branch is not a `tuple` or `list`.
ValueError: If a branch is not of length `2`.
"""
if len(branches) < 2:
if len(branches) < _MIN_BRANCHES:
msg = "RunnableBranch requires at least two branches"
raise ValueError(msg)
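A minimal sketch of branch selection: plain callables are coerced to `Runnable` objects, and the trailing positional argument is the default branch.

```python
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: isinstance(x, str), lambda x: x.upper()),
    (lambda x: isinstance(x, int), lambda x: x + 1),
    lambda x: "default",  # run when no condition matches
)
branch.invoke("hello")  # "HELLO"
branch.invoke(41)       # 42
branch.invoke(3.14)     # "default"
```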
@@ -118,7 +120,7 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
)
raise TypeError(msg)
if len(branch) != 2:
if len(branch) != _MIN_BRANCHES:
msg = (
f"RunnableBranch branches must be "
f"tuples or lists of length 2, not {len(branch)}"
@@ -140,7 +142,7 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -187,12 +189,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
def invoke(
self, input: Input, config: RunnableConfig | None = None, **kwargs: Any
) -> Output:
"""First evaluates the condition, then delegate to true or false branch.
"""First evaluates the condition, then delegate to `True` or `False` branch.
Args:
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Returns:
The output of the branch that was run.
@@ -297,12 +299,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> Iterator[Output]:
"""First evaluates the condition, then delegate to true or false branch.
"""First evaluates the condition, then delegate to `True` or `False` branch.
Args:
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
The output of the branch that was run.
@@ -381,12 +383,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]:
"""First evaluates the condition, then delegate to true or false branch.
"""First evaluates the condition, then delegate to `True` or `False` branch.
Args:
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
Yields:
The output of the branch that was run.

View File

@@ -1,4 +1,4 @@
"""Runnables that can be dynamically configured."""
"""`Runnable` objects that can be dynamically configured."""
from __future__ import annotations
@@ -47,14 +47,14 @@ if TYPE_CHECKING:
class DynamicRunnable(RunnableSerializable[Input, Output]):
"""Serializable Runnable that can be dynamically configured.
"""Serializable `Runnable` that can be dynamically configured.
A DynamicRunnable should be initiated using the `configurable_fields` or
`configurable_alternatives` method of a Runnable.
A `DynamicRunnable` should be initiated using the `configurable_fields` or
`configurable_alternatives` method of a `Runnable`.
"""
default: RunnableSerializable[Input, Output]
"""The default Runnable to use."""
"""The default `Runnable` to use."""
config: RunnableConfig | None = None
"""The configuration to use."""
@@ -66,7 +66,7 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -120,13 +120,13 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
def prepare(
self, config: RunnableConfig | None = None
) -> tuple[Runnable[Input, Output], RunnableConfig]:
"""Prepare the Runnable for invocation.
"""Prepare the `Runnable` for invocation.
Args:
config: The configuration to use.
Returns:
The prepared Runnable and configuration.
The prepared `Runnable` and configuration.
"""
runnable: Runnable[Input, Output] = self
while isinstance(runnable, DynamicRunnable):
@@ -316,12 +316,12 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
"""Runnable that can be dynamically configured.
"""`Runnable` that can be dynamically configured.
A RunnableConfigurableFields should be initiated using the
`configurable_fields` method of a Runnable.
A `RunnableConfigurableFields` should be initiated using the
`configurable_fields` method of a `Runnable`.
Here is an example of using a RunnableConfigurableFields with LLMs:
Here is an example of using a `RunnableConfigurableFields` with LLMs:
```python
from langchain_core.prompts import PromptTemplate
@@ -348,7 +348,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
chain.invoke({"x": 0}, config={"configurable": {"temperature": 0.9}})
```
Here is an example of using a RunnableConfigurableFields with HubRunnables:
Here is an example of using a `RunnableConfigurableFields` with `HubRunnables`:
```python
from langchain_core.prompts import PromptTemplate
@@ -380,7 +380,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
@property
def config_specs(self) -> list[ConfigurableFieldSpec]:
"""Get the configuration specs for the RunnableConfigurableFields.
"""Get the configuration specs for the `RunnableConfigurableFields`.
Returns:
The configuration specs.
@@ -473,13 +473,13 @@ _enums_for_spec_lock = threading.Lock()
class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""Runnable that can be dynamically configured.
"""`Runnable` that can be dynamically configured.
A RunnableConfigurableAlternatives should be initiated using the
`configurable_alternatives` method of a Runnable or can be
A `RunnableConfigurableAlternatives` should be initiated using the
`configurable_alternatives` method of a `Runnable` or can be
initiated directly as well.
Here is an example of using a RunnableConfigurableAlternatives that uses
Here is an example of using a `RunnableConfigurableAlternatives` that uses
alternative prompts to illustrate its functionality:
```python
@@ -506,7 +506,7 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
chain.with_config(configurable={"prompt": "poem"}).invoke({"topic": "bears"})
```
Equivalently, you can initialize RunnableConfigurableAlternatives directly
Equivalently, you can initialize `RunnableConfigurableAlternatives` directly
and use in LCEL in the same way:
```python
@@ -531,7 +531,7 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""
which: ConfigurableField
"""The ConfigurableField to use to choose between alternatives."""
"""The `ConfigurableField` to use to choose between alternatives."""
alternatives: dict[
str,
@@ -544,8 +544,9 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
prefix_keys: bool
"""Whether to prefix configurable fields of each alternative with a namespace
of the form <which.id>==<alternative_key>, eg. a key named "temperature" used by
the alternative named "gpt3" becomes "model==gpt3/temperature"."""
of the form <which.id>==<alternative_key>, e.g. a key named "temperature" used by
the alternative named "gpt3" becomes "model==gpt3/temperature".
"""
@property
@override
@@ -638,24 +639,24 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
def _strremoveprefix(s: str, prefix: str) -> str:
"""str.removeprefix() is only available in Python 3.9+."""
"""`str.removeprefix()` is only available in Python 3.9+."""
return s.replace(prefix, "", 1) if s.startswith(prefix) else s
def prefix_config_spec(
spec: ConfigurableFieldSpec, prefix: str
) -> ConfigurableFieldSpec:
"""Prefix the id of a ConfigurableFieldSpec.
"""Prefix the id of a `ConfigurableFieldSpec`.
This is useful when a RunnableConfigurableAlternatives is used as a
ConfigurableField of another RunnableConfigurableAlternatives.
This is useful when a `RunnableConfigurableAlternatives` is used as a
`ConfigurableField` of another `RunnableConfigurableAlternatives`.
Args:
spec: The ConfigurableFieldSpec to prefix.
spec: The `ConfigurableFieldSpec` to prefix.
prefix: The prefix to add.
Returns:
The prefixed ConfigurableFieldSpec.
The prefixed `ConfigurableFieldSpec`.
"""
return (
ConfigurableFieldSpec(
@@ -677,15 +678,15 @@ def make_options_spec(
) -> ConfigurableFieldSpec:
"""Make options spec.
Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or
ConfigurableFieldMultiOption.
Make a `ConfigurableFieldSpec` for a `ConfigurableFieldSingleOption` or
`ConfigurableFieldMultiOption`.
Args:
spec: The ConfigurableFieldSingleOption or ConfigurableFieldMultiOption.
spec: The `ConfigurableFieldSingleOption` or `ConfigurableFieldMultiOption`.
description: The description to use if the spec does not have one.
Returns:
The ConfigurableFieldSpec.
The `ConfigurableFieldSpec`.
"""
with _enums_for_spec_lock:
if enum := _enums_for_spec.get(spec):

View File

@@ -35,20 +35,20 @@ if TYPE_CHECKING:
class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""Runnable that can fallback to other Runnables if it fails.
"""`Runnable` that can fallback to other `Runnable`s if it fails.
External APIs (e.g., APIs for a language model) may at times experience
degraded performance or even downtime.
In these cases, it can be useful to have a fallback Runnable that can be
used in place of the original Runnable (e.g., fallback to another LLM provider).
In these cases, it can be useful to have a fallback `Runnable` that can be
used in place of the original `Runnable` (e.g., fallback to another LLM provider).
Fallbacks can be defined at the level of a single Runnable, or at the level
of a chain of Runnables. Fallbacks are tried in order until one succeeds or
Fallbacks can be defined at the level of a single `Runnable`, or at the level
of a chain of `Runnable`s. Fallbacks are tried in order until one succeeds or
all fail.
While you can instantiate a `RunnableWithFallbacks` directly, it is usually
more convenient to use the `with_fallbacks` method on a Runnable.
more convenient to use the `with_fallbacks` method on a `Runnable`.
Example:
```python
@@ -87,7 +87,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""
runnable: Runnable[Input, Output]
"""The Runnable to run first."""
"""The `Runnable` to run first."""
fallbacks: Sequence[Runnable[Input, Output]]
"""A sequence of fallbacks to try."""
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,)
@@ -97,9 +97,12 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""
exception_key: str | None = None
"""If `string` is specified then handled exceptions will be passed to fallbacks as
part of the input under the specified key. If `None`, exceptions
will not be passed to fallbacks. If used, the base Runnable and its fallbacks
must accept a dictionary as input."""
part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a dictionary as input.
"""
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -137,7 +140,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -152,10 +155,10 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@property
def runnables(self) -> Iterator[Runnable[Input, Output]]:
"""Iterator over the Runnable and its fallbacks.
"""Iterator over the `Runnable` and its fallbacks.
Yields:
The Runnable then its fallbacks.
The `Runnable` then its fallbacks.
"""
yield self.runnable
yield from self.fallbacks
@@ -589,14 +592,14 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
await run_manager.on_chain_end(output)
def __getattr__(self, name: str) -> Any:
"""Get an attribute from the wrapped Runnable and its fallbacks.
"""Get an attribute from the wrapped `Runnable` and its fallbacks.
Returns:
If the attribute is anything other than a method that outputs a Runnable,
returns getattr(self.runnable, name). If the attribute is a method that
does return a new Runnable (e.g. model.bind_tools([...]) outputs a new
RunnableBinding) then self.runnable and each of the runnables in
self.fallbacks is replaced with getattr(x, name).
If the attribute is anything other than a method that outputs a `Runnable`,
returns `getattr(self.runnable, name)`. If the attribute is a method that
does return a new `Runnable` (e.g. `model.bind_tools([...])` outputs a new
`RunnableBinding`) then `self.runnable` and each of the runnables in
`self.fallbacks` is replaced with `getattr(x, name)`.
Example:
```python
@@ -604,7 +607,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
from langchain_anthropic import ChatAnthropic
gpt_4o = ChatOpenAI(model="gpt-4o")
claude_3_sonnet = ChatAnthropic(model="claude-3-7-sonnet-20250219")
claude_3_sonnet = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model = gpt_4o.with_fallbacks([claude_3_sonnet])
model.model_name
@@ -618,7 +621,6 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
runnable=RunnableBinding(bound=ChatOpenAI(...), kwargs={"tools": [...]}),
fallbacks=[RunnableBinding(bound=ChatAnthropic(...), kwargs={"tools": [...]})],
)
```
""" # noqa: E501
attr = getattr(self.runnable, name)

View File
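The `exception_key` behavior documented above can be exercised with plain `RunnableLambda` stand-ins; a minimal sketch (function and key names are illustrative):

```python
# Sketch of exception_key: when set, the handled exception is passed to
# fallbacks inside the input dict, so all runnables must take a dict.
from langchain_core.runnables import RunnableLambda

def flaky(inputs: dict) -> str:
    raise ValueError("primary failed")

def recover(inputs: dict) -> str:
    # The exception raised by the primary runnable arrives under "exception".
    return f"fallback saw: {inputs['exception']!r}"

chain = RunnableLambda(flaky).with_fallbacks(
    [RunnableLambda(recover)],
    exception_key="exception",
)

print(chain.invoke({"question": "hi"}))
```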

@@ -4,7 +4,6 @@ from __future__ import annotations
import inspect
from collections import defaultdict
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum
from typing import (
@@ -22,7 +21,7 @@ from langchain_core.runnables.base import Runnable, RunnableSerializable
from langchain_core.utils.pydantic import _IgnoreUnserializable, is_basemodel_subclass
if TYPE_CHECKING:
from collections.abc import Sequence
from collections.abc import Callable, Sequence
from pydantic import BaseModel
@@ -132,7 +131,7 @@ class Branch(NamedTuple):
condition: Callable[..., str]
"""A callable that returns a string representation of the condition."""
ends: dict[str, str] | None
"""Optional dictionary of end node ids for the branches. """
"""Optional dictionary of end node IDs for the branches. """
class CurveStyle(Enum):
@@ -642,6 +641,7 @@ class Graph:
retry_delay: float = 1.0,
frontmatter_config: dict[str, Any] | None = None,
base_url: str | None = None,
proxies: dict[str, str] | None = None,
) -> bytes:
"""Draw the graph as a PNG image using Mermaid.
@@ -674,11 +674,10 @@ class Graph:
}
```
base_url: The base URL of the Mermaid server for rendering via API.
proxies: HTTP/HTTPS proxies for requests (e.g. `{"http": "http://127.0.0.1:7890"}`).
Returns:
The PNG image as bytes.
"""
# Import locally to prevent circular import
from langchain_core.runnables.graph_mermaid import ( # noqa: PLC0415
@@ -699,6 +698,7 @@ class Graph:
padding=padding,
max_retries=max_retries,
retry_delay=retry_delay,
proxies=proxies,
base_url=base_url,
)
@@ -706,8 +706,10 @@ class Graph:
def _first_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
"""Find the single node that is not a target of any edge.
Exclude nodes/sources with ids in the exclude list.
Exclude nodes/sources with IDs in the exclude list.
If there is no such node, or there are multiple, return `None`.
When drawing the graph, this node would be the origin.
"""
targets = {edge.target for edge in graph.edges if edge.source not in exclude}
@@ -722,8 +724,10 @@ def _first_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
def _last_node(graph: Graph, exclude: Sequence[str] = ()) -> Node | None:
"""Find the single node that is not a source of any edge.
Exclude nodes/targets with ids in the exclude list.
Exclude nodes/targets with IDs in the exclude list.
If there is no such node, or there are multiple, return `None`.
When drawing the graph, this node would be the destination.
"""
sources = {edge.source for edge in graph.edges if edge.target not in exclude}

View File
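A hedged usage sketch for the new `proxies` parameter added above; the proxy address is hypothetical, and rendering goes through the remote Mermaid.INK API, so this needs network access:

```python
# Hypothetical proxy address; any Runnable's graph can be rendered.
from langchain_core.prompts import PromptTemplate

chain = PromptTemplate.from_template("Say {topic}")

png_bytes = chain.get_graph().draw_mermaid_png(
    proxies={"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"},
)
with open("graph.png", "wb") as f:
    f.write(png_bytes)
```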

@@ -7,7 +7,6 @@ from __future__ import annotations
import math
import os
from collections.abc import Mapping, Sequence
from typing import TYPE_CHECKING, Any
try:
@@ -20,6 +19,8 @@ except ImportError:
_HAS_GRANDALF = False
if TYPE_CHECKING:
from collections.abc import Mapping, Sequence
from langchain_core.runnables.graph import Edge as LangEdge

View File

@@ -281,6 +281,7 @@ def draw_mermaid_png(
max_retries: int = 1,
retry_delay: float = 1.0,
base_url: str | None = None,
proxies: dict[str, str] | None = None,
) -> bytes:
"""Draws a Mermaid graph as PNG using provided syntax.
@@ -293,6 +294,7 @@ def draw_mermaid_png(
max_retries: Maximum number of retries (MermaidDrawMethod.API).
retry_delay: Delay between retries (MermaidDrawMethod.API).
base_url: Base URL for the Mermaid.ink API.
proxies: HTTP/HTTPS proxies for requests (e.g. `{"http": "http://127.0.0.1:7890"}`).
Returns:
PNG image bytes.
@@ -314,6 +316,7 @@ def draw_mermaid_png(
max_retries=max_retries,
retry_delay=retry_delay,
base_url=base_url,
proxies=proxies,
)
else:
supported_methods = ", ".join([m.value for m in MermaidDrawMethod])
@@ -405,6 +408,7 @@ def _render_mermaid_using_api(
file_type: Literal["jpeg", "png", "webp"] | None = "png",
max_retries: int = 1,
retry_delay: float = 1.0,
proxies: dict[str, str] | None = None,
base_url: str | None = None,
) -> bytes:
"""Renders Mermaid graph using the Mermaid.INK API."""
@@ -445,7 +449,7 @@ def _render_mermaid_using_api(
for attempt in range(max_retries + 1):
try:
response = requests.get(image_url, timeout=10)
response = requests.get(image_url, timeout=10, proxies=proxies)
if response.status_code == requests.codes.ok:
img_bytes = response.content
if output_file_path is not None:
@@ -454,7 +458,10 @@ def _render_mermaid_using_api(
return img_bytes
# If we get a server error (5xx), retry
if 500 <= response.status_code < 600 and attempt < max_retries:
if (
requests.codes.internal_server_error <= response.status_code
and attempt < max_retries
):
# Exponential backoff with jitter
sleep_time = retry_delay * (2**attempt) * (0.5 + 0.5 * random.random()) # noqa: S311 not used for crypto
time.sleep(sleep_time)

View File
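For reference, the retry schedule used above (exponential backoff with jitter) can be sketched in isolation; the parameters are illustrative:

```python
# Stand-alone sketch of the retry schedule: each retry waits
# retry_delay * 2**attempt, scaled by a random factor in [0.5, 1.0)
# so concurrent clients do not retry in lockstep.
import random

def backoff_schedule(retry_delay: float, max_retries: int) -> list[float]:
    return [
        retry_delay * (2**attempt) * (0.5 + 0.5 * random.random())
        for attempt in range(max_retries)
    ]

print(backoff_schedule(retry_delay=1.0, max_retries=3))
```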

@@ -1,5 +1,6 @@
"""Helper class to draw a state graph into a PNG file."""
from itertools import groupby
from typing import Any
from langchain_core.runnables.graph import Graph, LabelsDict
@@ -141,6 +142,7 @@ class PngDrawer:
# Add nodes, conditional edges, and edges to the graph
self.add_nodes(viz, graph)
self.add_edges(viz, graph)
self.add_subgraph(viz, [node.split(":") for node in graph.nodes])
# Update entrypoint and END styles
self.update_styles(viz, graph)
@@ -161,6 +163,32 @@ class PngDrawer:
for node in graph.nodes:
self.add_node(viz, node)
def add_subgraph(
self,
viz: Any,
nodes: list[list[str]],
parent_prefix: list[str] | None = None,
) -> None:
"""Add subgraphs to the graph.
Args:
viz: The graphviz object.
nodes: The nodes to add.
parent_prefix: The prefix of the parent subgraph.
"""
for prefix, grouped in groupby(
[node[:] for node in sorted(nodes)],
key=lambda x: x.pop(0),
):
current_prefix = (parent_prefix or []) + [prefix]
grouped_nodes = list(grouped)
if len(grouped_nodes) > 1:
subgraph = viz.add_subgraph(
[":".join(current_prefix + node) for node in grouped_nodes],
name="cluster_" + ":".join(current_prefix),
)
self.add_subgraph(subgraph, grouped_nodes, current_prefix)
def add_edges(self, viz: Any, graph: Graph) -> None:
"""Add edges to the graph.

View File
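The prefix-grouping recursion in the new `add_subgraph` can be sketched without graphviz; this stand-alone version replaces the subgraph calls with `print()` so the grouping itself is visible (node names are illustrative):

```python
# Nodes named "parent:child" are clustered under their shared prefix.
from itertools import groupby

def group_by_prefix(nodes: list[list[str]], parent: list[str] | None = None) -> None:
    for prefix, grouped in groupby(
        [node[:] for node in sorted(nodes)],  # copies: the key fn mutates
        key=lambda x: x.pop(0),
    ):
        current = (parent or []) + [prefix]
        grouped_nodes = list(grouped)
        if len(grouped_nodes) > 1:
            print("cluster_" + ":".join(current), "->", grouped_nodes)
            group_by_prefix(grouped_nodes, current)

group_by_prefix([n.split(":") for n in ["sub:a", "sub:b", "top"]])
# cluster_sub -> [['a'], ['b']]
```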

@@ -36,23 +36,23 @@ GetSessionHistoryCallable = Callable[..., BaseChatMessageHistory]
class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
"""Runnable that manages chat message history for another Runnable.
"""`Runnable` that manages chat message history for another `Runnable`.
A chat message history is a sequence of messages that represent a conversation.
RunnableWithMessageHistory wraps another Runnable and manages the chat message
`RunnableWithMessageHistory` wraps another `Runnable` and manages the chat message
history for it; it is responsible for reading and updating the chat message
history.
The formats supported for the inputs and outputs of the wrapped Runnable
The formats supported for the inputs and outputs of the wrapped `Runnable`
are described below.
RunnableWithMessageHistory must always be called with a config that contains
`RunnableWithMessageHistory` must always be called with a config that contains
the appropriate parameters for the chat message history factory.
By default, the Runnable is expected to take a single configuration parameter
By default, the `Runnable` is expected to take a single configuration parameter
called `session_id` which is a string. This parameter is used to create a new
or look up an existing chat message history that matches the given session_id.
or look up an existing chat message history that matches the given `session_id`.
In this case, the invocation would look like this:
@@ -117,12 +117,12 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
```
Example where the wrapped Runnable takes a dictionary input:
Example where the wrapped `Runnable` takes a dictionary input:
```python
from typing import Optional
from langchain_community.chat_models import ChatAnthropic
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
@@ -166,7 +166,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
print(store) # noqa: T201
```
Example where the session factory takes two keys, user_id and conversation id):
Example where the session factory takes two keys (`user_id` and `conversation_id`):
```python
store = {}
@@ -223,21 +223,28 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
"""
get_session_history: GetSessionHistoryCallable
"""Function that returns a new BaseChatMessageHistory.
"""Function that returns a new `BaseChatMessageHistory`.
This function should either take a single positional argument `session_id` of type
string and return a corresponding chat message history instance"""
string and return a corresponding chat message history instance.
"""
input_messages_key: str | None = None
"""Must be specified if the base runnable accepts a dict as input.
The key in the input dict that contains the messages."""
"""Must be specified if the base `Runnable` accepts a `dict` as input.
The key in the input `dict` that contains the messages.
"""
output_messages_key: str | None = None
"""Must be specified if the base Runnable returns a dict as output.
The key in the output dict that contains the messages."""
"""Must be specified if the base `Runnable` returns a `dict` as output.
The key in the output `dict` that contains the messages.
"""
history_messages_key: str | None = None
"""Must be specified if the base runnable accepts a dict as input and expects a
separate key for historical messages."""
"""Must be specified if the base `Runnable` accepts a `dict` as input and expects a
separate key for historical messages.
"""
history_factory_config: Sequence[ConfigurableFieldSpec]
"""Configure fields that should be passed to the chat history factory.
See `ConfigurableFieldSpec` for more details."""
See `ConfigurableFieldSpec` for more details.
"""
def __init__(
self,
@@ -254,15 +261,16 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
history_factory_config: Sequence[ConfigurableFieldSpec] | None = None,
**kwargs: Any,
) -> None:
"""Initialize RunnableWithMessageHistory.
"""Initialize `RunnableWithMessageHistory`.
Args:
runnable: The base Runnable to be wrapped.
runnable: The base `Runnable` to be wrapped.
Must take as input one of:
1. A list of `BaseMessage`
2. A dict with one key for all messages
3. A dict with one key for the current input string/message(s) and
2. A `dict` with one key for all messages
3. A `dict` with one key for the current input string/message(s) and
a separate key for historical messages. If the input key points
to a string, it will be treated as a `HumanMessage` in history.
@@ -270,13 +278,15 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
1. A string which can be treated as an `AIMessage`
2. A `BaseMessage` or sequence of `BaseMessage`
3. A dict with a key for a `BaseMessage` or sequence of
3. A `dict` with a key for a `BaseMessage` or sequence of
`BaseMessage`
get_session_history: Function that returns a new BaseChatMessageHistory.
get_session_history: Function that returns a new `BaseChatMessageHistory`.
This function should either take a single positional argument
`session_id` of type string and return a corresponding
chat message history instance.
```python
def get_session_history(
session_id: str, *, user_id: str | None = None
@@ -295,16 +305,17 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
) -> BaseChatMessageHistory: ...
```
input_messages_key: Must be specified if the base runnable accepts a dict
input_messages_key: Must be specified if the base runnable accepts a `dict`
as input.
output_messages_key: Must be specified if the base runnable returns a dict
output_messages_key: Must be specified if the base runnable returns a `dict`
as output.
history_messages_key: Must be specified if the base runnable accepts a dict
as input and expects a separate key for historical messages.
history_messages_key: Must be specified if the base runnable accepts a
`dict` as input and expects a separate key for historical messages.
history_factory_config: Configure fields that should be passed to the
chat history factory. See `ConfigurableFieldSpec` for more details.
Specifying these allows you to pass multiple config keys
into the get_session_history factory.
Specifying these allows you to pass multiple config keys into the
`get_session_history` factory.
**kwargs: Arbitrary additional kwargs to pass to parent class
`RunnableBindingBase` init.
@@ -364,7 +375,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
@property
@override
def config_specs(self) -> list[ConfigurableFieldSpec]:
"""Get the configuration specs for the RunnableWithMessageHistory."""
"""Get the configuration specs for the `RunnableWithMessageHistory`."""
return get_unique_config_specs(
super().config_specs + list(self.history_factory_config)
)
@@ -606,6 +617,6 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
def _get_parameter_names(callable_: GetSessionHistoryCallable) -> list[str]:
"""Get the parameter names of the callable."""
"""Get the parameter names of the `Callable`."""
sig = inspect.signature(callable_)
return list(sig.parameters.keys())

View File
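A condensed usage sketch of the wrapper documented above, with an in-memory history keyed by `session_id` and a lambda standing in for a chat model (names are illustrative):

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import AIMessage
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

store: dict[str, InMemoryChatMessageHistory] = {}

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    return store.setdefault(session_id, InMemoryChatMessageHistory())

# The wrapped runnable receives the history plus the new input messages.
echo = RunnableLambda(lambda messages: AIMessage(f"{len(messages)} messages seen"))
chain = RunnableWithMessageHistory(echo, get_session_history)

config = {"configurable": {"session_id": "abc"}}
print(chain.invoke("hi", config=config).content)        # 1 messages seen
print(chain.invoke("hi again", config=config).content)  # 3 messages seen
```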

@@ -51,10 +51,10 @@ def identity(x: Other) -> Other:
"""Identity function.
Args:
x: input.
x: Input.
Returns:
output.
Output.
"""
return x
@@ -63,10 +63,10 @@ async def aidentity(x: Other) -> Other:
"""Async identity function.
Args:
x: input.
x: Input.
Returns:
output.
Output.
"""
return x
@@ -74,11 +74,11 @@ async def aidentity(x: Other) -> Other:
class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Runnable to passthrough inputs unchanged or with additional keys.
This Runnable behaves almost like the identity function, except that it
This `Runnable` behaves almost like the identity function, except that it
can be configured to add additional keys to the output, if the input is a
dict.
The examples below demonstrate this Runnable works using a few simple
The examples below demonstrate this `Runnable` works using a few simple
chains. The chains rely on simple lambdas to make the examples easy to execute
and experiment with.
@@ -164,7 +164,7 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
input_type: type[Other] | None = None,
**kwargs: Any,
) -> None:
"""Create e RunnablePassthrough.
"""Create a `RunnablePassthrough`.
Args:
func: Function to be called with the input.
@@ -180,7 +180,7 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -213,11 +213,11 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Merge the Dict input with the output produced by the mapping argument.
Args:
**kwargs: Runnable, Callable or a Mapping from keys to Runnables
or Callables.
**kwargs: `Runnable`, `Callable` or a `Mapping` from keys to `Runnable`
objects or `Callable`s.
Returns:
A Runnable that merges the Dict input with the output produced by the
A `Runnable` that merges the `dict` input with the output produced by the
mapping argument.
"""
return RunnableAssign(RunnableParallel[dict[str, Any]](kwargs))
@@ -350,7 +350,7 @@ _graph_passthrough: RunnablePassthrough = RunnablePassthrough()
class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"""Runnable that assigns key-value pairs to dict[str, Any] inputs.
"""Runnable that assigns key-value pairs to `dict[str, Any]` inputs.
The `RunnableAssign` class takes input dictionaries and, through a
`RunnableParallel` instance, applies transformations, then combines
@@ -392,7 +392,7 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
mapper: RunnableParallel
def __init__(self, mapper: RunnableParallel[dict[str, Any]], **kwargs: Any) -> None:
"""Create a RunnableAssign.
"""Create a `RunnableAssign`.
Args:
mapper: A `RunnableParallel` instance that will be used to transform the
@@ -403,7 +403,7 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod
@@ -668,13 +668,19 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
yield chunk
class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"""Runnable that picks keys from dict[str, Any] inputs.
class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
"""`Runnable` that picks keys from `dict[str, Any]` inputs.
RunnablePick class represents a Runnable that selectively picks keys from a
`RunnablePick` class represents a `Runnable` that selectively picks keys from a
dictionary input. It allows you to specify one or more keys to extract
from the input dictionary. It returns a new dictionary containing only
the selected keys.
from the input dictionary.
!!! note "Return Type Behavior"
The return type depends on the `keys` parameter:
- When `keys` is a `str`: Returns the single value associated with that key
- When `keys` is a `list`: Returns a dictionary containing only the selected
keys
Example:
```python
@@ -687,18 +693,22 @@ class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"country": "USA",
}
runnable = RunnablePick(keys=["name", "age"])
# Single key - returns the value directly
runnable_single = RunnablePick(keys="name")
result_single = runnable_single.invoke(input_data)
print(result_single) # Output: "John"
output_data = runnable.invoke(input_data)
print(output_data) # Output: {'name': 'John', 'age': 30}
# Multiple keys - returns a dictionary
runnable_multiple = RunnablePick(keys=["name", "age"])
result_multiple = runnable_multiple.invoke(input_data)
print(result_multiple) # Output: {'name': 'John', 'age': 30}
```
"""
keys: str | list[str]
def __init__(self, keys: str | list[str], **kwargs: Any) -> None:
"""Create a RunnablePick.
"""Create a `RunnablePick`.
Args:
keys: A single key or a list of keys to pick from the input dictionary.
@@ -708,7 +718,7 @@ class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod

View File
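A small sketch combining the two classes documented above: `RunnablePassthrough.assign` merges a computed key into the input `dict`, and `RunnablePick` with a single string key returns the bare value per the new return-type note (key names are illustrative):

```python
from langchain_core.runnables import RunnablePassthrough, RunnablePick

chain = RunnablePassthrough.assign(
    doubled=lambda x: x["num"] * 2,  # coerced to a Runnable internally
) | RunnablePick("doubled")

print(chain.invoke({"num": 3}))  # 6
```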

@@ -40,11 +40,11 @@ class RouterInput(TypedDict):
key: str
"""The key to route on."""
input: Any
"""The input to pass to the selected Runnable."""
"""The input to pass to the selected `Runnable`."""
class RouterRunnable(RunnableSerializable[RouterInput, Output]):
"""Runnable that routes to a set of Runnables based on Input['key'].
"""`Runnable` that routes to a set of `Runnable` based on `Input['key']`.
Returns the output of the selected Runnable.
@@ -74,10 +74,10 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
self,
runnables: Mapping[str, Runnable[Any, Output] | Callable[[Any], Output]],
) -> None:
"""Create a RouterRunnable.
"""Create a `RouterRunnable`.
Args:
runnables: A mapping of keys to Runnables.
runnables: A mapping of keys to `Runnable` objects.
"""
super().__init__(
runnables={key: coerce_to_runnable(r) for key, r in runnables.items()}
@@ -90,7 +90,7 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return True as this class is serializable."""
"""Return `True` as this class is serializable."""
return True
@classmethod

View File
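Routing on `Input['key']` as documented above, with callables standing in for real runnables (they are coerced on init):

```python
from langchain_core.runnables import RouterRunnable

router = RouterRunnable(
    {
        "upper": lambda x: x.upper(),
        "lower": lambda x: x.lower(),
    }
)

print(router.invoke({"key": "upper", "input": "Hello"}))  # HELLO
```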

@@ -28,7 +28,7 @@ class EventData(TypedDict, total=False):
This field is only available if the `Runnable` raised an exception.
!!! version-added "Added in version 1.0.0"
!!! version-added "Added in `langchain-core` 1.0.0"
"""
output: Any
"""The output of the `Runnable` that generated the event.
@@ -168,10 +168,7 @@ class StandardStreamEvent(BaseStreamEvent):
class CustomStreamEvent(BaseStreamEvent):
"""Custom stream event created by the user.
!!! version-added "Added in version 0.2.15"
"""
"""Custom stream event created by the user."""
# Overwrite the event field to be more specific.
event: Literal["on_custom_event"] # type: ignore[misc]

View File

@@ -7,8 +7,7 @@ import asyncio
import inspect
import sys
import textwrap
from collections.abc import Callable, Mapping, Sequence
from contextvars import Context
from collections.abc import Mapping, Sequence
from functools import lru_cache
from inspect import signature
from itertools import groupby
@@ -31,9 +30,11 @@ if TYPE_CHECKING:
AsyncIterable,
AsyncIterator,
Awaitable,
Callable,
Coroutine,
Iterable,
)
from contextvars import Context
from langchain_core.runnables.schema import StreamEvent

View File

@@ -86,7 +86,7 @@ class BaseStore(ABC, Generic[K, V]):
Returns:
A sequence of optional values associated with the keys.
If a key is not found, the corresponding value will be None.
If a key is not found, the corresponding value will be `None`.
"""
async def amget(self, keys: Sequence[K]) -> list[V | None]:
@@ -97,7 +97,7 @@ class BaseStore(ABC, Generic[K, V]):
Returns:
A sequence of optional values associated with the keys.
If a key is not found, the corresponding value will be None.
If a key is not found, the corresponding value will be `None`.
"""
return await run_in_executor(None, self.mget, keys)
@@ -243,8 +243,7 @@ class InMemoryStore(InMemoryBaseStore[Any]):
"""In-memory store for any type of data.
Attributes:
store (dict[str, Any]): The underlying dictionary that stores
the key-value pairs.
store: The underlying dictionary that stores the key-value pairs.
Examples:
```python
@@ -267,8 +266,7 @@ class InMemoryByteStore(InMemoryBaseStore[bytes]):
"""In-memory store for bytes.
Attributes:
store (dict[str, bytes]): The underlying dictionary that stores
the key-value pairs.
store: The underlying dictionary that stores the key-value pairs.
Examples:
```python

View File
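A quick sketch of the `BaseStore` contract with the in-memory backend; note that `mget` returns `None` for missing keys rather than raising (keys and values are illustrative):

```python
from langchain_core.stores import InMemoryStore

store = InMemoryStore()
store.mset([("k1", 1), ("k2", "two")])
print(store.mget(["k1", "missing", "k2"]))  # [1, None, 'two']
store.mdelete(["k1"])
print(list(store.yield_keys()))  # ['k2']
```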

@@ -125,9 +125,11 @@ def print_sys_info(*, additional_pkgs: Sequence[str] = ()) -> None:
for dep in sub_dependencies:
try:
dep_version = metadata.version(dep)
print(f"> {dep}: {dep_version}")
except Exception:
print(f"> {dep}: Installed. No version info available.")
dep_version = None
if dep_version is not None:
print(f"> {dep}: {dep_version}")
if __name__ == "__main__":

Some files were not shown because too many files have changed in this diff.