Compare commits


6 Commits

Author SHA1 Message Date
Chester Curme  ca462b71dc  typo  2025-10-29 11:10:38 -04:00
Chester Curme  33a91fdf9a  update snapshots for xai  2025-10-29 11:10:29 -04:00
Chester Curme  8b05cb4522  fix snapshot  2025-10-29 11:04:07 -04:00
Chester Curme  9982e28aaa  update some snapshots  2025-10-29 10:52:32 -04:00
Chester Curme  e53e91bcb2  update openai  2025-10-29 10:50:25 -04:00
Chester Curme  8797b167f5  update core  2025-10-29 10:41:16 -04:00
419 changed files with 16979 additions and 37452 deletions

View File

@@ -8,15 +8,16 @@ body:
value: |
Thank you for taking the time to file a bug report.
For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Use this to report BUGS in LangChain. For usage questions, feature requests and general design questions, please use the [LangChain Forum](https://forum.langchain.com/).
Check these before submitting to see if your issue has already been reported, fixed or if there's another way to solve your problem:
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
if there's another way to solve your problem:
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
@@ -35,48 +36,16 @@ body:
required: true
- label: This is not related to the langchain-community package.
required: true
- label: I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this bug related to? Select at least one.
Note that if the package you are reporting for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in [`langchain-ai/langchain-google`](https://github.com/langchain-ai/langchain-google/)).
Please report issues for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: reproduction
validations:
required: true
attributes:
label: Example Code (Python)
label: Example Code
description: |
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
@@ -84,12 +53,15 @@ body:
**Important!**
* Avoid screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible.
* Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
* Reduce your code to the minimum required to reproduce the issue if possible. This makes it much easier for others to help you.
* Use code tags (e.g., ```python ... ```) to correctly [format your code](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting).
* INCLUDE the language label (e.g. `python`) after the first three backticks to enable syntax highlighting. (e.g., ```python rather than ```).
(This will be automatically formatted into code, so no need for backticks.)
render: python
placeholder: |
The following code:
```python
from langchain_core.runnables import RunnableLambda
def bad_code(inputs) -> int:
@@ -97,14 +69,17 @@ body:
chain = RunnableLambda(bad_code)
chain.invoke('Hello!')
```
- type: textarea
id: error
validations:
required: false
attributes:
label: Error Message and Stack Trace (if applicable)
description: |
If you are reporting an error, please copy and paste the full error message and
stack trace.
(This will be automatically formatted into code, so no need for backticks.)
render: shell
If you are reporting an error, please include the full error message and stack trace.
placeholder: |
Exception + full stack trace
- type: textarea
id: description
attributes:
@@ -124,7 +99,9 @@ body:
attributes:
label: System Info
description: |
Please share your system info with us.
Please share your system info with us. Do NOT skip this step and please don't trim
the output. Most users don't include enough information here and it makes it harder
for us to help you.
Run the following command in your terminal and paste the output here:
@@ -136,6 +113,8 @@ body:
from langchain_core import sys_info
sys_info.print_sys_info()
```
alternatively, put the entire output of `pip freeze` here.
placeholder: |
python -m langchain_core.sys_info
validations:

View File

@@ -1,18 +1,9 @@
blank_issues_enabled: false
version: 2.1
contact_links:
- name: 📚 Documentation issue
url: https://github.com/langchain-ai/docs/issues/new?template=01-langchain.yml
- name: 📚 Documentation
url: https://github.com/langchain-ai/docs/issues/new?template=langchain.yml
about: Report an issue related to the LangChain documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: General community discussions and support
- name: 📚 LangChain Documentation
url: https://docs.langchain.com/oss/python/langchain/overview
about: View the official LangChain documentation
- name: 📚 API Reference Documentation
url: https://reference.langchain.com/python/
about: View the official LangChain API reference documentation
- name: 💬 LangChain Forum
url: https://forum.langchain.com/
about: Ask questions and get help from the community

View File

@@ -13,11 +13,11 @@ body:
Relevant links to check before filing a feature request to see if your request has already been made or
if there's another way to achieve what you want:
* [Documentation](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference Documentation](https://reference.langchain.com/python/),
* [LangChain Forum](https://forum.langchain.com/),
* [LangChain documentation with the integrated search](https://docs.langchain.com/oss/python/langchain/overview),
* [API Reference](https://reference.langchain.com/python/),
* [LangChain ChatBot](https://chat.langchain.com/)
* [GitHub search](https://github.com/langchain-ai/langchain),
* [LangChain Forum](https://forum.langchain.com/),
- type: checkboxes
id: checks
attributes:
@@ -34,40 +34,6 @@ body:
required: true
- label: This is not related to the langchain-community package.
required: true
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Which `langchain` package(s) is this request related to? Select at least one.
Note that if the package you are requesting for is not listed here, it is not in this repository (e.g. `langchain-google-genai` is in `langchain-ai/langchain`).
Please submit feature requests for other packages to their respective repositories.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general
- type: textarea
id: feature-description
validations:

View File

@@ -18,33 +18,3 @@ body:
attributes:
label: Issue Content
description: Add the content of the issue here.
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this issue is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

View File

@@ -25,13 +25,13 @@ body:
label: Task Description
description: |
Provide a clear and detailed description of the task.
What needs to be done? Be specific about the scope and requirements.
placeholder: |
This task involves...
The goal is to...
Specific requirements:
- ...
- ...
@@ -43,7 +43,7 @@ body:
label: Acceptance Criteria
description: |
Define the criteria that must be met for this task to be considered complete.
What are the specific deliverables or outcomes expected?
placeholder: |
This task will be complete when:
@@ -58,15 +58,15 @@ body:
label: Context and Background
description: |
Provide any relevant context, background information, or links to related issues/PRs.
Why is this task needed? What problem does it solve?
placeholder: |
Background:
- ...
Related issues/PRs:
- #...
Additional context:
- ...
validations:
@@ -77,45 +77,15 @@ body:
label: Dependencies
description: |
List any dependencies or blockers for this task.
Are there other tasks, issues, or external factors that need to be completed first?
placeholder: |
This task depends on:
- [ ] Issue #...
- [ ] PR #...
- [ ] External dependency: ...
Blocked by:
- ...
validations:
required: false
- type: checkboxes
id: package
attributes:
label: Package (Required)
description: |
Please select package(s) that this task is related to.
options:
- label: langchain
- label: langchain-openai
- label: langchain-anthropic
- label: langchain-classic
- label: langchain-core
- label: langchain-cli
- label: langchain-model-profiles
- label: langchain-tests
- label: langchain-text-splitters
- label: langchain-chroma
- label: langchain-deepseek
- label: langchain-exa
- label: langchain-fireworks
- label: langchain-groq
- label: langchain-huggingface
- label: langchain-mistralai
- label: langchain-nomic
- label: langchain-ollama
- label: langchain-perplexity
- label: langchain-prompty
- label: langchain-qdrant
- label: langchain-xai
- label: Other / not sure / general

.github/actions/poetry_setup/action.yml (vendored, new file, 93 lines added)
View File

@@ -0,0 +1,93 @@
# An action for setting up poetry install with caching.
# Using a custom action since the default action does not
# take poetry install groups into account.
# Action code from:
# https://github.com/actions/setup-python/issues/505#issuecomment-1273013236
name: poetry-install-with-caching
description: Poetry install with support for caching of dependency groups.
inputs:
python-version:
description: Python version, supporting MAJOR.MINOR only
required: true
poetry-version:
description: Poetry version
required: true
cache-key:
description: Cache key to use for manual handling of caching
required: true
working-directory:
description: Directory whose poetry.lock file should be cached
required: true
runs:
using: composite
steps:
- uses: actions/setup-python@v5
name: Setup python ${{ inputs.python-version }}
id: setup-python
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v4
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "1"
with:
path: |
/opt/pipx/venvs/poetry
# This step caches the poetry installation, so make sure it's keyed on the poetry version as well.
key: bin-poetry-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-${{ inputs.poetry-version }}
- name: Refresh shell hashtable and fixup softlinks
if: steps.cache-bin-poetry.outputs.cache-hit == 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
run: |
set -eux
# Refresh the shell hashtable, to ensure correct `which` output.
hash -r
# `actions/cache@v3` doesn't always seem able to correctly unpack softlinks.
# Delete and recreate the softlinks pipx expects to have.
rm /opt/pipx/venvs/poetry/bin/python
cd /opt/pipx/venvs/poetry/bin
ln -s "$(which "python$PYTHON_VERSION")" python
chmod +x python
cd /opt/pipx_bin/
ln -s /opt/pipx/venvs/poetry/bin/poetry poetry
chmod +x poetry
# Ensure everything got set up correctly.
/opt/pipx/venvs/poetry/bin/python --version
/opt/pipx_bin/poetry --version
- name: Install poetry
if: steps.cache-bin-poetry.outputs.cache-hit != 'true'
shell: bash
env:
POETRY_VERSION: ${{ inputs.poetry-version }}
PYTHON_VERSION: ${{ inputs.python-version }}
# Install poetry using the python version installed by setup-python step.
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}
with:
path: |
~/.cache/pip
~/.cache/pypoetry/virtualenvs
~/.cache/pypoetry/cache
~/.cache/pypoetry/artifacts
${{ env.WORKDIR }}/.venv
key: py-deps-${{ runner.os }}-${{ runner.arch }}-py-${{ inputs.python-version }}-poetry-${{ inputs.poetry-version }}-${{ inputs.cache-key }}-${{ hashFiles(format('{0}/**/poetry.lock', env.WORKDIR)) }}

View File

@@ -7,12 +7,13 @@ core:
- any-glob-to-any-file:
- "libs/core/**/*"
langchain-classic:
langchain:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain/**/*"
- "libs/langchain_v1/**/*"
langchain:
v1:
- changed-files:
- any-glob-to-any-file:
- "libs/langchain_v1/**/*"
@@ -27,11 +28,6 @@ standard-tests:
- any-glob-to-any-file:
- "libs/standard-tests/**/*"
model-profiles:
- changed-files:
- any-glob-to-any-file:
- "libs/model-profiles/**/*"
text-splitters:
- changed-files:
- any-glob-to-any-file:
@@ -43,81 +39,6 @@ integration:
- any-glob-to-any-file:
- "libs/partners/**/*"
anthropic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/anthropic/**/*"
chroma:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/chroma/**/*"
deepseek:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/deepseek/**/*"
exa:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/exa/**/*"
fireworks:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/fireworks/**/*"
groq:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/groq/**/*"
huggingface:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/huggingface/**/*"
mistralai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/mistralai/**/*"
nomic:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/nomic/**/*"
ollama:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/ollama/**/*"
openai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/openai/**/*"
perplexity:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/perplexity/**/*"
prompty:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/prompty/**/*"
qdrant:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/qdrant/**/*"
xai:
- changed-files:
- any-glob-to-any-file:
- "libs/partners/xai/**/*"
# Infrastructure and DevOps
infra:
- changed-files:

.github/pr-title-labeler.yml (vendored, new file, 41 lines added)
View File

@@ -0,0 +1,41 @@
# PR title labeler config
#
# Labels PRs based on conventional commit patterns in titles
#
# Format: type(scope): description or type!: description (breaking)
add-missing-labels: true
clear-prexisting: false
include-commits: false
include-title: true
label-for-breaking-changes: breaking
label-mapping:
documentation: ["docs"]
feature: ["feat"]
fix: ["fix"]
infra: ["build", "ci", "chore"]
integration:
[
"anthropic",
"chroma",
"deepseek",
"exa",
"fireworks",
"groq",
"huggingface",
"mistralai",
"nomic",
"ollama",
"openai",
"perplexity",
"prompty",
"qdrant",
"xai",
]
linting: ["style"]
performance: ["perf"]
refactor: ["refactor"]
release: ["release"]
revert: ["revert"]
tests: ["test"]

View File

@@ -30,7 +30,6 @@ LANGCHAIN_DIRS = [
"libs/text-splitters",
"libs/langchain",
"libs/langchain_v1",
"libs/model-profiles",
]
# When set to True, we are ignoring core dependents

View File

@@ -98,7 +98,7 @@ def _check_python_version_from_requirement(
return True
else:
marker_str = str(requirement.marker)
if "python_version" in marker_str or "python_full_version" in marker_str:
if "python_version" or "python_full_version" in marker_str:
python_version_str = "".join(
char
for char in marker_str
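
A note on the two forms of the condition shown in this hunk: in Python, `"python_version" or "python_full_version" in marker_str` parses as `"python_version" or ("python_full_version" in marker_str)`, and because a non-empty string literal is truthy, that expression is truthy for every marker string. A quick self-contained illustration (the variable value is made up for the example):

```python
marker_str = 'platform_system == "Windows"'  # no version marker at all

# Membership test on both names: correctly False for this marker.
print("python_version" in marker_str or "python_full_version" in marker_str)  # False

# Without the first `in`, the non-empty literal "python_version" is truthy,
# so the whole expression is truthy regardless of marker_str.
print("python_version" or "python_full_version" in marker_str)  # 'python_version'
```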

View File

@@ -149,8 +149,8 @@ jobs:
fi
fi
# if PREV_TAG is empty or came out to 0.0.0, let it be empty
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
# if PREV_TAG is empty, let it be empty
if [ -z "$PREV_TAG" ]; then
echo "No previous tag found - first release"
else
# confirm prev-tag actually exists in git repo with git tag
@@ -179,8 +179,8 @@ jobs:
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty or 0.0.0, then we are releasing the first version
if [ -z "$PREV_TAG" ] || [ "$PREV_TAG" = "$PKG_NAME==0.0.0" ]; then
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
@@ -377,7 +377,6 @@ jobs:
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
@@ -410,7 +409,6 @@ jobs:
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
steps:
- uses: actions/checkout@v5
@@ -444,7 +442,7 @@ jobs:
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+$' \
| grep -E '[0-9]+\.[0-9]+\.[0-9]+([a-zA-Z]+[0-9]+)?$' \
| sort -Vr \
| head -n 1
)"

View File

@@ -1,107 +0,0 @@
name: Auto Label Issues by Package
on:
issues:
types: [opened, edited]
jobs:
label-by-package:
permissions:
issues: write
runs-on: ubuntu-latest
steps:
- name: Sync package labels
uses: actions/github-script@v6
with:
script: |
const body = context.payload.issue.body || "";
// Extract text under "### Package"
const match = body.match(/### Package\s+([\s\S]*?)\n###/i);
if (!match) return;
const packageSection = match[1].trim();
// Mapping table for package names to labels
const mapping = {
"langchain": "langchain",
"langchain-openai": "openai",
"langchain-anthropic": "anthropic",
"langchain-classic": "langchain-classic",
"langchain-core": "core",
"langchain-cli": "cli",
"langchain-model-profiles": "model-profiles",
"langchain-tests": "standard-tests",
"langchain-text-splitters": "text-splitters",
"langchain-chroma": "chroma",
"langchain-deepseek": "deepseek",
"langchain-exa": "exa",
"langchain-fireworks": "fireworks",
"langchain-groq": "groq",
"langchain-huggingface": "huggingface",
"langchain-mistralai": "mistralai",
"langchain-nomic": "nomic",
"langchain-ollama": "ollama",
"langchain-perplexity": "perplexity",
"langchain-prompty": "prompty",
"langchain-qdrant": "qdrant",
"langchain-xai": "xai",
};
// All possible package labels we manage
const allPackageLabels = Object.values(mapping);
const selectedLabels = [];
// Check if this is checkbox format (multiple selection)
const checkboxMatches = packageSection.match(/- \[x\]\s+([^\n\r]+)/gi);
if (checkboxMatches) {
// Handle checkbox format
for (const match of checkboxMatches) {
const packageName = match.replace(/- \[x\]\s+/i, '').trim();
const label = mapping[packageName];
if (label && !selectedLabels.includes(label)) {
selectedLabels.push(label);
}
}
} else {
// Handle dropdown format (single selection)
const label = mapping[packageSection];
if (label) {
selectedLabels.push(label);
}
}
// Get current issue labels
const issue = await github.rest.issues.get({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number
});
const currentLabels = issue.data.labels.map(label => label.name);
const currentPackageLabels = currentLabels.filter(label => allPackageLabels.includes(label));
// Determine labels to add and remove
const labelsToAdd = selectedLabels.filter(label => !currentPackageLabels.includes(label));
const labelsToRemove = currentPackageLabels.filter(label => !selectedLabels.includes(label));
// Add new labels
if (labelsToAdd.length > 0) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
labels: labelsToAdd
});
}
// Remove old labels
for (const label of labelsToRemove) {
await github.rest.issues.removeLabel({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: context.issue.number,
name: label
});
}

View File

@@ -155,7 +155,6 @@ jobs:
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
LANGCHAIN_TESTS_USER_AGENT: ${{ secrets.LANGCHAIN_TESTS_USER_AGENT }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests

View File

@@ -26,14 +26,12 @@
# * revert — reverts a previous commit
# * release — prepare a new release
#
# Allowed Scope(s) (optional):
# Allowed Scopes (optional):
# core, cli, langchain, langchain_v1, langchain-classic, standard-tests,
# text-splitters, docs, anthropic, chroma, deepseek, exa, fireworks, groq,
# huggingface, mistralai, nomic, ollama, openai, perplexity, prompty, qdrant,
# xai, infra, deps
#
# Multiple scopes can be used by separating them with a comma.
#
# Rules:
# 1. The 'Type' must start with a lowercase letter.
# 2. Breaking changes: append "!" after type/scope (e.g., feat!: drop x support)
@@ -81,8 +79,8 @@ jobs:
core
cli
langchain
langchain_v1
langchain-classic
model-profiles
standard-tests
text-splitters
docs

View File

@@ -1,28 +1,40 @@
<div align="center">
<a href="https://www.langchain.com/">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg">
<img alt="LangChain Logo" src=".github/images/logo-dark.svg" width="80%">
</picture>
</p>
<p align="center">
The platform for reliable agents.
</p>
<p align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank">
<img src="https://img.shields.io/pypi/l/langchain" alt="PyPI - License">
</a>
</div>
<a href="https://pypistats.org/packages/langchain" target="_blank">
<img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads">
</a>
<a href="https://pypi.org/project/langchain/#history" target="_blank">
<img src="https://img.shields.io/pypi/v/langchain?label=%20" alt="Version">
</a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers">
</a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank">
<img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20">
</a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank">
<img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge">
</a>
<a href="https://twitter.com/langchainai" target="_blank">
<img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X">
</a>
</p>
<div align="center">
<h3>The platform for reliable agents.</h3>
</div>
<div align="center">
<a href="https://opensource.org/licenses/MIT" target="_blank"><img src="https://img.shields.io/pypi/l/langchain" alt="PyPI - License"></a>
<a href="https://pypistats.org/packages/langchain" target="_blank"><img src="https://img.shields.io/pepy/dt/langchain" alt="PyPI - Downloads"></a>
<a href="https://pypi.org/project/langchain/#history" target="_blank"><img src="https://img.shields.io/pypi/v/langchain?label=%20" alt="Version"></a>
<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
<a href="https://codespaces.new/langchain-ai/langchain" target="_blank"><img src="https://github.com/codespaces/badge.svg" alt="Open in Github Codespace" title="Open in Github Codespace" width="150" height="20"></a>
<a href="https://codspeed.io/langchain-ai/langchain" target="_blank"><img src="https://img.shields.io/endpoint?url=https://codspeed.io/badge.json" alt="CodSpeed Badge"></a>
<a href="https://twitter.com/langchainai" target="_blank"><img src="https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI" alt="Twitter / X"></a>
</div>
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development all while future-proofing decisions as the underlying technology evolves.
LangChain is a framework for building agents and LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install langchain
@@ -32,10 +44,7 @@ If you're looking for more advanced customization or agent orchestration, check
---
**Documentation**:
- [docs.langchain.com](https://docs.langchain.com/oss/python/langchain/overview) Comprehensive documentation, including conceptual overviews and guides
- [reference.langchain.com/python](https://reference.langchain.com/python) API reference docs for LangChain packages
**Documentation**: To learn more about LangChain, check out [the docs](https://docs.langchain.com/oss/python/langchain/overview).
**Discussions**: Visit the [LangChain Forum](https://forum.langchain.com) to connect with the community and share all of your technical questions, ideas, and feedback.
@@ -48,12 +57,8 @@ LangChain helps developers build applications powered by LLMs through a standard
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly LangChain's abstractions keep you moving without losing momentum.
- **Rapid prototyping**. Quickly build and iterate on LLM applications with LangChain's modular, component-based architecture. Test different approaches and workflows without rebuilding from scratch, accelerating your development cycle.
- **Production-ready features**. Deploy reliable applications with built-in support for monitoring, evaluation, and debugging through integrations like LangSmith. Scale with confidence using battle-tested patterns and best practices.
- **Vibrant community and ecosystem**. Leverage a rich ecosystem of integrations, templates, and community-contributed components. Benefit from continuous improvements and stay up-to-date with the latest AI developments through an active open-source community.
- **Flexible abstraction layers**. Work at the level of abstraction that suits your needs - from high-level chains for quick starts to low-level components for fine-grained control. LangChain grows with your application's complexity.
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and external/internal systems, drawing from LangChain's vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team experiments to find the best choice for your application's needs. As the industry frontier evolves, adapt quickly LangChain's abstractions keep you moving without losing momentum.
## LangChain ecosystem
@@ -61,14 +66,12 @@ While the LangChain framework can be used standalone, it also integrates seamles
To improve your LLM application development, pair LangChain with:
- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [Integrations](https://docs.langchain.com/oss/python/integrations/providers/overview) List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
- [LangSmith](https://www.langchain.com/langsmith) Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams and iterate quickly with visual prototyping in [LangSmith Studio](https://docs.langchain.com/langsmith/studio).
- [Deep Agents](https://github.com/langchain-ai/deepagents) *(new!)* Build agents that can plan, use subagents, and leverage file systems for complex tasks
- [LangGraph](https://docs.langchain.com/oss/python/langgraph/overview) - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- [LangSmith](https://www.langchain.com/langsmith) - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- [LangSmith Deployment](https://docs.langchain.com/langsmith/deployments) - Deploy and scale agents effortlessly with a purpose-built deployment platform for long-running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in [LangSmith Studio](https://docs.langchain.com/langsmith/studio).
## Additional resources
- [API Reference](https://reference.langchain.com/python) Detailed reference on navigating base packages and integrations for LangChain.
- [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview) Learn how to contribute to LangChain projects and find good first issues.
- [Code of Conduct](https://github.com/langchain-ai/langchain/blob/master/.github/CODE_OF_CONDUCT.md) Our community guidelines and standards for participation.
- [API Reference](https://reference.langchain.com/python): Detailed reference on navigating base packages and integrations for LangChain.
- [Integrations](https://docs.langchain.com/oss/python/integrations/providers/overview): List of LangChain integrations, including chat & embedding models, tools & toolkits, and more
- [Contributing Guide](https://docs.langchain.com/oss/python/contributing/overview): Learn how to contribute to LangChain and find good first issues.

View File

@@ -55,10 +55,10 @@ All out of scope targets defined by huntr as well as:
* **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties (see [package warning](https://pypi.org/project/langchain-experimental/)), bug reports to it will be marked as interesting or waste of
time and published with no bounty attached.
* **tools**: Tools in either `langchain` or `langchain-community` are not eligible for bug
* **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
* `libs/langchain/langchain/tools`
* `libs/community/langchain_community/tools`
* libs/langchain/langchain/tools
* libs/community/langchain_community/tools
* Please review the [Best Practices](#best-practices)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible

View File

@@ -295,7 +295,7 @@
"source": [
"## TODO: Any functionality specific to this vector store\n",
"\n",
"E.g. creating a persistent database to save to your disk, etc."
"E.g. creating a persisten database to save to your disk, etc."
]
},
{

View File

@@ -6,8 +6,9 @@ import hashlib
import logging
import re
import shutil
from collections.abc import Sequence
from pathlib import Path
from typing import TYPE_CHECKING, Any, TypedDict
from typing import Any, TypedDict
from git import Repo
@@ -17,9 +18,6 @@ from langchain_cli.constants import (
DEFAULT_GIT_SUBDIRECTORY,
)
if TYPE_CHECKING:
from collections.abc import Sequence
logger = logging.getLogger(__name__)

View File

@@ -1,11 +1,9 @@
from __future__ import annotations
from dataclasses import dataclass
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from .file import File
from .folder import Folder
from .file import File
from .folder import Folder
@dataclass

View File

@@ -1,12 +1,9 @@
from __future__ import annotations
from typing import TYPE_CHECKING
from pathlib import Path
from .file import File
if TYPE_CHECKING:
from pathlib import Path
class Folder:
def __init__(self, name: str, *files: Folder | File) -> None:

View File

@@ -34,7 +34,7 @@ The LangChain ecosystem is built on top of `langchain-core`. Some of the benefit
## 📖 Documentation
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/). For conceptual guides, tutorials, and examples on using LangChain, see the [LangChain Docs](https://docs.langchain.com/oss/python/langchain/overview).
For full documentation, see the [API reference](https://reference.langchain.com/python/langchain_core/).
## 📕 Releases & Versioning

View File

@@ -52,33 +52,31 @@ class AgentAction(Serializable):
"""The input to pass in to the Tool."""
log: str
"""Additional information to log about the action.
This log can be used in a few ways. First, it can be used to audit what exactly the
LLM predicted to lead to this `(tool, tool_input)`.
Second, it can be used in future iterations to show the LLMs prior thoughts. This is
useful when `(tool, tool_input)` does not contain full information about the LLM
prediction (for example, any `thought` before the tool/tool_input).
"""
This log can be used in a few ways. First, it can be used to audit
what exactly the LLM predicted to lead to this (tool, tool_input).
Second, it can be used in future iterations to show the LLMs prior
thoughts. This is useful when (tool, tool_input) does not contain
full information about the LLM prediction (for example, any `thought`
before the tool/tool_input)."""
type: Literal["AgentAction"] = "AgentAction"
# Override init to support instantiation by position for backward compat.
def __init__(self, tool: str, tool_input: str | dict, log: str, **kwargs: Any):
"""Create an `AgentAction`.
"""Create an AgentAction.
Args:
tool: The name of the tool to execute.
tool_input: The input to pass in to the `Tool`.
tool_input: The input to pass in to the Tool.
log: Additional information to log about the action.
"""
super().__init__(tool=tool, tool_input=tool_input, log=log, **kwargs)
@classmethod
def is_lc_serializable(cls) -> bool:
"""`AgentAction` is serializable.
"""AgentAction is serializable.
Returns:
`True`
True
"""
return True
@@ -100,23 +98,19 @@ class AgentAction(Serializable):
class AgentActionMessageLog(AgentAction):
"""Representation of an action to be executed by an agent.
This is similar to `AgentAction`, but includes a message log consisting of
chat messages.
This is useful when working with `ChatModels`, and is used to reconstruct
conversation history from the agent's perspective.
This is similar to AgentAction, but includes a message log consisting of
chat messages. This is useful when working with ChatModels, and is used
to reconstruct conversation history from the agent's perspective.
"""
message_log: Sequence[BaseMessage]
"""Similar to log, this can be used to pass along extra information about what exact
messages were predicted by the LLM before parsing out the `(tool, tool_input)`.
This is again useful if `(tool, tool_input)` cannot be used to fully recreate the
LLM prediction, and you need that LLM prediction (for future agent iteration).
"""Similar to log, this can be used to pass along extra
information about what exact messages were predicted by the LLM
before parsing out the (tool, tool_input). This is again useful
if (tool, tool_input) cannot be used to fully recreate the LLM
prediction, and you need that LLM prediction (for future agent iteration).
Compared to `log`, this is useful when the underlying LLM is a
chat model (and therefore returns messages rather than a string).
"""
chat model (and therefore returns messages rather than a string)."""
# Ignoring type because we're overriding the type from AgentAction.
# And this is the correct thing to do in this case.
# The type literal is used for serialization purposes.
@@ -124,12 +118,12 @@ class AgentActionMessageLog(AgentAction):
class AgentStep(Serializable):
"""Result of running an `AgentAction`."""
"""Result of running an AgentAction."""
action: AgentAction
"""The `AgentAction` that was executed."""
"""The AgentAction that was executed."""
observation: Any
"""The result of the `AgentAction`."""
"""The result of the AgentAction."""
@property
def messages(self) -> Sequence[BaseMessage]:
@@ -138,22 +132,19 @@ class AgentStep(Serializable):
class AgentFinish(Serializable):
"""Final return value of an `ActionAgent`.
"""Final return value of an ActionAgent.
Agents return an `AgentFinish` when they have reached a stopping condition.
Agents return an AgentFinish when they have reached a stopping condition.
"""
return_values: dict
"""Dictionary of return values."""
log: str
"""Additional information to log about the return value.
This is used to pass along the full LLM prediction, not just the parsed out
return value.
For example, if the full LLM prediction was `Final Answer: 2` you may want to just
return `2` as a return value, but pass along the full string as a `log` (for
debugging or observability purposes).
return value. For example, if the full LLM prediction was
`Final Answer: 2` you may want to just return `2` as a return value, but pass
along the full string as a `log` (for debugging or observability purposes).
"""
type: Literal["AgentFinish"] = "AgentFinish"
@@ -163,7 +154,7 @@ class AgentFinish(Serializable):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -211,7 +202,7 @@ def _convert_agent_observation_to_messages(
observation: Observation to convert to a message.
Returns:
`AIMessage` that corresponds to the original tool invocation.
AIMessage that corresponds to the original tool invocation.
"""
if isinstance(agent_action, AgentActionMessageLog):
return [_create_function_message(agent_action, observation)]
@@ -234,7 +225,7 @@ def _create_function_message(
observation: the result of the tool invocation.
Returns:
`FunctionMessage` that corresponds to the original tool invocation.
FunctionMessage that corresponds to the original tool invocation.
"""
if not isinstance(observation, str):
try:
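
The docstrings above describe how `AgentAction`, `AgentStep`, and `AgentFinish` relate: the action carries `(tool, tool_input)` plus a `log`, and the finish carries parsed `return_values` plus the full LLM text. A minimal sketch of constructing them (the tool name and values are invented for illustration):

```python
from langchain_core.agents import AgentAction, AgentFinish

# Positional construction is kept for backward compatibility, per the
# __init__ override shown above.
action = AgentAction("search", "weather in SF", "Thought: I should look up the weather")

# AgentFinish carries the parsed return value plus the full LLM text in `log`.
finish = AgentFinish(return_values={"output": "2"}, log="Final Answer: 2")

print(action.tool, action.tool_input)
print(finish.return_values["output"])
```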

View File

@@ -2,8 +2,8 @@
Distinct from provider-based [prompt caching](https://docs.langchain.com/oss/python/langchain/models#prompt-caching).
!!! warning "Beta feature"
This is a beta feature. Please be wary of deploying experimental code to production
!!! warning
This is a beta feature! Please be wary of deploying experimental code to production
unless you've taken appropriate precautions.
A cache is useful for two reasons:
@@ -49,18 +49,17 @@ class BaseCache(ABC):
"""Look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
of prompt and llm_string (e.g., by concatenating them with a delimiter).
Args:
prompt: A string representation of the prompt.
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string representation.
These invocation parameters are serialized into a string
representation.
Returns:
On a cache miss, return `None`. On a cache hit, return the cached value.
@@ -79,10 +78,8 @@ class BaseCache(ABC):
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
return_val: The value to be cached. The value is a list of `Generation`
@@ -97,17 +94,15 @@ class BaseCache(ABC):
"""Async look up based on `prompt` and `llm_string`.
A cache implementation is expected to generate a key from the 2-tuple
of `prompt` and `llm_string` (e.g., by concatenating them with a delimiter).
of prompt and llm_string (e.g., by concatenating them with a delimiter).
Args:
prompt: A string representation of the prompt.
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
@@ -130,10 +125,8 @@ class BaseCache(ABC):
In the case of a chat model, the prompt is a non-trivial
serialization of the prompt into the language model.
llm_string: A string representation of the LLM configuration.
This is used to capture the invocation parameters of the LLM
(e.g., model name, temperature, stop tokens, max tokens, etc.).
These invocation parameters are serialized into a string
representation.
return_val: The value to be cached. The value is a list of `Generation`
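
Per the docstrings above, a cache implementation is expected to key entries on the `(prompt, llm_string)` pair and return `None` on a miss. A minimal in-memory sketch of that contract (the class name is invented; this is not the library's implementation):

```python
class SimpleDictCache:
    """Toy cache keyed on (prompt, llm_string), per the BaseCache contract above."""

    def __init__(self) -> None:
        self._store: dict[tuple[str, str], list] = {}

    def lookup(self, prompt: str, llm_string: str):
        # Return None on a cache miss, the cached generations on a hit.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: list) -> None:
        self._store[(prompt, llm_string)] = return_val
```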

View File

@@ -5,12 +5,13 @@ from __future__ import annotations
import logging
from typing import TYPE_CHECKING, Any
from typing_extensions import Self
if TYPE_CHECKING:
from collections.abc import Sequence
from uuid import UUID
from tenacity import RetryCallState
from typing_extensions import Self
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.documents import Document

View File

@@ -39,6 +39,7 @@ from langchain_core.tracers.context import (
tracing_v2_callback_var,
)
from langchain_core.tracers.langchain import LangChainTracer
from langchain_core.tracers.schemas import Run
from langchain_core.tracers.stdout import ConsoleCallbackHandler
from langchain_core.utils.env import env_var_is_set
@@ -51,7 +52,6 @@ if TYPE_CHECKING:
from langchain_core.documents import Document
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk, LLMResult
from langchain_core.runnables.config import RunnableConfig
from langchain_core.tracers.schemas import Run
logger = logging.getLogger(__name__)
@@ -229,24 +229,7 @@ def shielded(func: Func) -> Func:
@functools.wraps(func)
async def wrapped(*args: Any, **kwargs: Any) -> Any:
# Capture the current context to preserve context variables
ctx = copy_context()
# Create the coroutine
coro = func(*args, **kwargs)
# For Python 3.11+, create task with explicit context
# For older versions, fallback to original behavior
try:
# Create a task with the captured context to preserve context variables
task = asyncio.create_task(coro, context=ctx) # type: ignore[call-arg, unused-ignore]
# `call-arg` used to not fail 3.9 or 3.10 tests
return await asyncio.shield(task)
except TypeError:
# Python < 3.11 fallback - create task normally then shield
# This won't preserve context perfectly but is better than nothing
task = asyncio.create_task(coro)
return await asyncio.shield(task)
return await asyncio.shield(func(*args, **kwargs))
return cast("Func", wrapped)

View File

@@ -24,7 +24,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
from langchain_core.callbacks import UsageMetadataCallbackHandler
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
callback = UsageMetadataCallbackHandler()
result_1 = llm_1.invoke("Hello", config={"callbacks": [callback]})
@@ -43,7 +43,7 @@ class UsageMetadataCallbackHandler(BaseCallbackHandler):
'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}
```
!!! version-added "Added in `langchain-core` 0.3.49"
!!! version-added "Added in version 0.3.49"
"""
@@ -109,7 +109,7 @@ def get_usage_metadata_callback(
from langchain_core.callbacks import get_usage_metadata_callback
llm_1 = init_chat_model(model="openai:gpt-4o-mini")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-20241022")
llm_2 = init_chat_model(model="anthropic:claude-3-5-haiku-latest")
with get_usage_metadata_callback() as cb:
llm_1.invoke("Hello")
@@ -134,7 +134,7 @@ def get_usage_metadata_callback(
}
```
!!! version-added "Added in `langchain-core` 0.3.49"
!!! version-added "Added in version 0.3.49"
"""
usage_metadata_callback_var: ContextVar[UsageMetadataCallbackHandler | None] = (

View File

@@ -1,28 +1,7 @@
"""Documents module for data retrieval and processing workflows.
"""Documents module.
This module provides core abstractions for handling data in retrieval-augmented
generation (RAG) pipelines, vector stores, and document processing workflows.
!!! warning "Documents vs. message content"
This module is distinct from `langchain_core.messages.content`, which provides
multimodal content blocks for **LLM chat I/O** (text, images, audio, etc. within
messages).
**Key distinction:**
- **Documents** (this module): For **data retrieval and processing workflows**
- Vector stores, retrievers, RAG pipelines
- Text chunking, embedding, and semantic search
- Example: Chunks of a PDF stored in a vector database
- **Content Blocks** (`messages.content`): For **LLM conversational I/O**
- Multimodal message content sent to/from models
- Tool calls, reasoning, citations within chat
- Example: An image sent to a vision model in a chat message (via
[`ImageContentBlock`][langchain.messages.ImageContentBlock])
While both can represent similar data types (text, files), they serve different
architectural purposes in LangChain applications.
**Document** module is a collection of classes that handle documents
and their transformations.
"""
from typing import TYPE_CHECKING

View File

@@ -1,16 +1,4 @@
"""Base classes for media and documents.
This module contains core abstractions for **data retrieval and processing workflows**:
- `BaseMedia`: Base class providing `id` and `metadata` fields
- `Blob`: Raw data loading (files, binary data) - used by document loaders
- `Document`: Text content for retrieval (RAG, vector stores, semantic search)
!!! note "Not for LLM chat messages"
These classes are for data processing pipelines, not LLM I/O. For multimodal
content in chat messages (images, audio in conversations), see
`langchain.messages` content blocks instead.
"""
"""Base classes for media and documents."""
from __future__ import annotations
@@ -31,13 +19,15 @@ PathLike = str | PurePath
class BaseMedia(Serializable):
"""Base class for content used in retrieval and data processing workflows.
"""Use to represent media content.
Provides common fields for content that needs to be stored, indexed, or searched.
Media objects can be used to represent raw data, such as text or binary data.
!!! note
For multimodal content in **chat messages** (images, audio sent to/from LLMs),
use `langchain.messages` content blocks instead.
LangChain Media objects allow associating metadata and an optional identifier
with the content.
The presence of an ID and metadata make it easier to store, index, and search
over the content in a structured way.
"""
# The ID field is optional at the moment.
@@ -55,70 +45,71 @@ class BaseMedia(Serializable):
class Blob(BaseMedia):
"""Raw data abstraction for document loading and file processing.
"""Blob represents raw data by either reference or value.
Represents raw bytes or text, either in-memory or by file reference. Used
primarily by document loaders to decouple data loading from parsing.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by [Mozilla's `Blob`](https://developer.mozilla.org/en-US/docs/Web/API/Blob)
???+ example "Initialize a blob from in-memory data"
Example: Initialize a blob from in-memory data
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_data("Hello, world!")
blob = Blob.from_data("Hello, world!")
# Read the blob as a string
print(blob.as_string())
# Read the blob as a string
print(blob.as_string())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
??? example "Load from memory and specify MIME type and metadata"
Example: Load from memory and specify mime-type and metadata
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_data(
data="Hello, world!",
mime_type="text/plain",
metadata={"source": "https://example.com"},
)
```
blob = Blob.from_data(
data="Hello, world!",
mime_type="text/plain",
metadata={"source": "https://example.com"},
)
```
??? example "Load the blob from a file"
Example: Load the blob from a file
```python
from langchain_core.documents import Blob
```python
from langchain_core.documents import Blob
blob = Blob.from_path("path/to/file.txt")
blob = Blob.from_path("path/to/file.txt")
# Read the blob as a string
print(blob.as_string())
# Read the blob as a string
print(blob.as_string())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as bytes
print(blob.as_bytes())
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
# Read the blob as a byte stream
with blob.as_bytes_io() as f:
print(f.read())
```
"""
data: bytes | str | None = None
"""Raw data associated with the `Blob`."""
mimetype: str | None = None
"""MIME type, not to be confused with a file extension."""
"""MimeType not to be confused with a file extension."""
encoding: str = "utf-8"
"""Encoding to use if decoding the bytes into a string.
Uses `utf-8` as default encoding if decoding to string.
Use `utf-8` as default encoding, if decoding to string.
"""
path: PathLike | None = None
"""Location where the original content was found."""
@@ -134,7 +125,7 @@ class Blob(BaseMedia):
If a path is associated with the `Blob`, it will default to the path location.
Unless explicitly set via a metadata field called `'source'`, in which
Unless explicitly set via a metadata field called `"source"`, in which
case that value will be used instead.
"""
if self.metadata and "source" in self.metadata:
@@ -222,7 +213,7 @@ class Blob(BaseMedia):
encoding: Encoding to use if decoding the bytes into a string
mime_type: If provided, will be set as the MIME type of the data
guess_type: If `True`, the MIME type will be guessed from the file
extension, if a MIME type was not provided
extension, if a mime-type was not provided
metadata: Metadata to associate with the `Blob`
Returns:
@@ -283,10 +274,6 @@ class Blob(BaseMedia):
class Document(BaseMedia):
"""Class for storing a piece of text and associated metadata.
!!! note
`Document` is for **retrieval workflows**, not chat I/O. For sending text
to an LLM in a conversation, use message types from `langchain.messages`.
Example:
```python
from langchain_core.documents import Document
@@ -309,7 +296,7 @@ class Document(BaseMedia):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -322,10 +309,10 @@ class Document(BaseMedia):
return ["langchain", "schema", "document"]
def __str__(self) -> str:
"""Override `__str__` to restrict it to page_content and metadata.
"""Override __str__ to restrict it to page_content and metadata.
Returns:
A string representation of the `Document`.
A string representation of the Document.
"""
# The format matches pydantic format for __str__.
#

View File

@@ -21,14 +21,14 @@ class BaseDocumentCompressor(BaseModel, ABC):
This abstraction is primarily used for post-processing of retrieved documents.
`Document` objects matching a given query are first retrieved.
Documents matching a given query are first retrieved.
Then the list of documents can be further processed.
For example, one could re-rank the retrieved documents using an LLM.
!!! note
Users should favor using a `RunnableLambda` instead of sub-classing from this
Users should favor using a RunnableLambda instead of sub-classing from this
interface.
"""
@@ -43,9 +43,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Compress retrieved documents given the query context.
Args:
documents: The retrieved `Document` objects.
documents: The retrieved documents.
query: The query context.
callbacks: Optional `Callbacks` to run during compression.
callbacks: Optional callbacks to run during compression.
Returns:
The compressed documents.
@@ -61,9 +61,9 @@ class BaseDocumentCompressor(BaseModel, ABC):
"""Async compress retrieved documents given the query context.
Args:
documents: The retrieved `Document` objects.
documents: The retrieved documents.
query: The query context.
callbacks: Optional `Callbacks` to run during compression.
callbacks: Optional callbacks to run during compression.
Returns:
The compressed documents.

View File

@@ -16,8 +16,8 @@ if TYPE_CHECKING:
class BaseDocumentTransformer(ABC):
"""Abstract base class for document transformation.
A document transformation takes a sequence of `Document` objects and returns a
sequence of transformed `Document` objects.
A document transformation takes a sequence of Documents and returns a
sequence of transformed Documents.
Example:
```python
View File
@@ -18,7 +18,7 @@ class FakeEmbeddings(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution.
!!! danger "Toy model"
!!! warning
Do not use this outside of testing, as it is not a real embedding model.
Instantiate:
@@ -73,7 +73,7 @@ class DeterministicFakeEmbedding(Embeddings, BaseModel):
This embedding model creates embeddings by sampling from a normal distribution
with a seed based on the hash of the text.
!!! danger "Toy model"
!!! warning
Do not use this outside of testing, as it is not a real embedding model.
Instantiate:
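Since the `Instantiate:` examples are truncated in this view, a minimal usage sketch of both fake embedding models (test-only, per the warnings above) might look like:

```python
from langchain_core.embeddings.fake import DeterministicFakeEmbedding, FakeEmbeddings

embedder = FakeEmbeddings(size=256)                   # random vectors
deterministic = DeterministicFakeEmbedding(size=256)  # seeded by a hash of the text

assert len(embedder.embed_query("hello")) == 256
assert deterministic.embed_query("hello") == deterministic.embed_query("hello")
```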
View File
@@ -29,7 +29,7 @@ class LengthBasedExampleSelector(BaseExampleSelector, BaseModel):
max_length: int = 2048
"""Max length for the prompt, beyond which examples are cut."""
example_text_lengths: list[int] = Field(default_factory=list)
example_text_lengths: list[int] = Field(default_factory=list) # :meta private:
"""Length of each example."""
def add_example(self, example: dict[str, str]) -> None:
View File
@@ -16,10 +16,9 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
"""Exception that output parsers should raise to signify a parsing error.
This exists to differentiate parsing errors from other code or execution errors
that also may arise inside the output parser.
`OutputParserException` will be available to catch and handle in ways to fix the
parsing error, while other errors will be raised.
that also may arise inside the output parser. `OutputParserException` will be
available to catch and handle in ways to fix the parsing error, while other
errors will be raised.
"""
def __init__(
@@ -33,19 +32,18 @@ class OutputParserException(ValueError, LangChainException): # noqa: N818
Args:
error: The error that's being re-raised or an error message.
observation: String explanation of error which can be passed to a model to
try and remediate the issue.
observation: String explanation of error which can be passed to a
model to try and remediate the issue.
llm_output: String model output which is error-ing.
send_to_llm: Whether to send the observation and llm_output back to an Agent
after an `OutputParserException` has been raised.
This gives the underlying model driving the agent the context that the
previous output was improperly structured, in the hopes that it will
update the output to the correct format.
Raises:
ValueError: If `send_to_llm` is `True` but either observation or
ValueError: If `send_to_llm` is True but either observation or
`llm_output` are not provided.
"""
if isinstance(error, str):
@@ -68,11 +66,11 @@ class ErrorCode(Enum):
"""Error codes."""
INVALID_PROMPT_INPUT = "INVALID_PROMPT_INPUT"
INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS" # Used in JS; not Py (yet)
INVALID_TOOL_RESULTS = "INVALID_TOOL_RESULTS"
MESSAGE_COERCION_FAILURE = "MESSAGE_COERCION_FAILURE"
MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION" # Used in JS; not Py (yet)
MODEL_NOT_FOUND = "MODEL_NOT_FOUND" # Used in JS; not Py (yet)
MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT" # Used in JS; not Py (yet)
MODEL_AUTHENTICATION = "MODEL_AUTHENTICATION"
MODEL_NOT_FOUND = "MODEL_NOT_FOUND"
MODEL_RATE_LIMIT = "MODEL_RATE_LIMIT"
OUTPUT_PARSING_FAILURE = "OUTPUT_PARSING_FAILURE"
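A short sketch of how a parser raises `OutputParserException` so callers can handle parsing failures separately from other errors; the parser itself is a toy example.

```python
from langchain_core.exceptions import OutputParserException
from langchain_core.output_parsers import BaseOutputParser


class YesNoParser(BaseOutputParser[bool]):
    """Toy parser that accepts only 'yes' or 'no'."""

    def parse(self, text: str) -> bool:
        cleaned = text.strip().lower()
        if cleaned not in {"yes", "no"}:
            # Raised so callers can catch parsing errors and attempt remediation.
            raise OutputParserException(
                f"Expected 'yes' or 'no', got {text!r}",
                llm_output=text,
            )
        return cleaned == "yes"


try:
    YesNoParser().parse("maybe")
except OutputParserException as exc:
    print("recoverable parsing error:", exc)
```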
View File
@@ -6,9 +6,16 @@ import hashlib
import json
import uuid
import warnings
from collections.abc import (
AsyncIterable,
AsyncIterator,
Callable,
Iterable,
Iterator,
Sequence,
)
from itertools import islice
from typing import (
TYPE_CHECKING,
Any,
Literal,
TypedDict,
@@ -22,16 +29,6 @@ from langchain_core.exceptions import LangChainException
from langchain_core.indexing.base import DocumentIndex, RecordManager
from langchain_core.vectorstores import VectorStore
if TYPE_CHECKING:
from collections.abc import (
AsyncIterable,
AsyncIterator,
Callable,
Iterable,
Iterator,
Sequence,
)
# Magic UUID to use as a namespace for hashing.
# Used to try and generate a unique UUID for each document
# from hashing the document content and metadata.
@@ -301,7 +298,7 @@ def index(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in `langchain-core` 0.3.25"
!!! warning "Behavior changed in 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
@@ -352,7 +349,7 @@ def index(
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options include "blake2b", "sha256", and "sha512".
!!! version-added "Added in `langchain-core` 0.3.66"
!!! version-added "Added in version 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -369,7 +366,7 @@ def index(
method of the `VectorStore` or the upsert method of the DocumentIndex.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
!!! version-added "Added in `langchain-core` 0.3.10"
!!! version-added "Added in version 0.3.10"
Returns:
Indexing result which contains information about how many documents
@@ -639,7 +636,7 @@ async def aindex(
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in `langchain-core` 0.3.25"
!!! warning "Behavior changed in 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
@@ -690,7 +687,7 @@ async def aindex(
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options include "blake2b", "sha256", and "sha512".
!!! version-added "Added in `langchain-core` 0.3.66"
!!! version-added "Added in version 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
@@ -707,7 +704,7 @@ async def aindex(
method of the `VectorStore` or the upsert method of the DocumentIndex.
For example, you can use this to specify a custom vector_field:
upsert_kwargs={"vector_field": "embedding"}
!!! version-added "Added in `langchain-core` 0.3.10"
!!! version-added "Added in version 0.3.10"
Returns:
Indexing result which contains information about how many documents
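Putting the arguments described above together, a minimal in-memory indexing run might look like the following sketch; the document contents and the `demo` namespace are illustrative, and the in-memory classes assume a recent `langchain-core`.

```python
from langchain_core.documents import Document
from langchain_core.embeddings.fake import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=32))

docs = [Document(page_content="hello", metadata={"source": "a.txt"})]

# "incremental" cleanup de-duplicates by hash and prunes stale documents per source.
result = index(docs, record_manager, vector_store, cleanup="incremental", source_id_key="source")
print(result)  # counts of added / updated / skipped / deleted documents
```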
View File
@@ -6,13 +6,12 @@ LangChain has two main classes to work with language models: chat models and
**Chat models**
Language models that use a sequence of messages as inputs and return chat messages
as outputs (as opposed to using plain text).
as outputs (as opposed to using plain text). Chat models support the assignment of
distinct roles to conversation messages, helping to distinguish messages from the AI,
users, and instructions such as system messages.
Chat models support the assignment of distinct roles to conversation messages, helping
to distinguish messages from the AI, users, and instructions such as system messages.
The key abstraction for chat models is `BaseChatModel`. Implementations should inherit
from this class.
The key abstraction for chat models is `BaseChatModel`. Implementations
should inherit from this class.
See existing [chat model integrations](https://docs.langchain.com/oss/python/integrations/chat).
View File
@@ -139,7 +139,7 @@ def _normalize_messages(
directly; this may change in the future
- LangChain v0 standard content blocks for backward compatibility
!!! warning "Behavior changed in `langchain-core` 1.0.0"
!!! warning "Behavior changed in 1.0.0"
In previous versions, this function returned messages in LangChain v0 format.
Now, it returns messages in LangChain v1 format, which upgraded chat models now
expect to receive when passing back in message history. For backward
View File
@@ -131,19 +131,14 @@ class BaseLanguageModel(
Caching is not currently supported for streaming methods of models.
"""
verbose: bool = Field(default_factory=_get_verbosity, exclude=True, repr=False)
"""Whether to print out response text."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to add to the run trace."""
tags: list[str] | None = Field(default=None, exclude=True)
"""Tags to add to the run trace."""
metadata: dict[str, Any] | None = Field(default=None, exclude=True)
"""Metadata to add to the run trace."""
custom_get_token_ids: Callable[[str], list[int]] | None = Field(
default=None, exclude=True
)
@@ -200,22 +195,15 @@ class BaseLanguageModel(
type (e.g., pure text completion models vs chat models).
Args:
prompts: List of `PromptValue` objects.
A `PromptValue` is an object that can be converted to match the format
of any language model (string for pure text generation models and
`BaseMessage` objects for chat models).
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
prompts: List of `PromptValue` objects. A `PromptValue` is an object that
can be converted to match the format of any language model (string for
pure text generation models and `BaseMessage` objects for chat models).
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generation` objects for
@@ -244,22 +232,15 @@ class BaseLanguageModel(
type (e.g., pure text completion models vs chat models).
Args:
prompts: List of `PromptValue` objects.
A `PromptValue` is an object that can be converted to match the format
of any language model (string for pure text generation models and
`BaseMessage` objects for chat models).
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
prompts: List of `PromptValue` objects. A `PromptValue` is an object that
can be converted to match the format of any language model (string for
pure text generation models and `BaseMessage` objects for chat models).
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generation` objects for
@@ -299,9 +280,6 @@ class BaseLanguageModel(
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate
token counts via model-specific tokenizers.
Args:
text: The string input to tokenize.
@@ -320,17 +298,9 @@ class BaseLanguageModel(
Useful for checking if an input fits in a model's context window.
This should be overridden by model-specific implementations to provide accurate
token counts via model-specific tokenizers.
!!! note
* The base implementation of `get_num_tokens_from_messages` ignores tool
schemas.
* The base implementation of `get_num_tokens_from_messages` adds additional
prefixes to messages to represent user roles, which will add to the
overall token count. Model-specific implementations may choose to
handle this differently.
The base implementation of `get_num_tokens_from_messages` ignores tool
schemas.
Args:
messages: The message inputs to tokenize.
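For the `generate_prompt` signature described above, a self-contained sketch using the in-repo fake chat model (so no provider credentials are needed):

```python
from langchain_core.language_models.fake_chat_models import FakeListChatModel
from langchain_core.prompts import ChatPromptTemplate

model = FakeListChatModel(responses=["Paris"])
prompt_value = ChatPromptTemplate.from_messages(
    [("system", "Answer in one word."), ("human", "Capital of {country}?")]
).format_prompt(country="France")

# Accepts PromptValue objects and returns an LLMResult with candidate Generations.
result = model.generate_prompt([prompt_value], stop=["\n"])
print(result.generations[0][0].text)  # "Paris"
```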
View File
@@ -15,7 +15,6 @@ from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import BaseModel, ConfigDict, Field
from typing_extensions import override
from langchain_core._api.beta_decorator import beta
from langchain_core.caches import BaseCache
from langchain_core.callbacks import (
AsyncCallbackManager,
@@ -76,8 +75,6 @@ from langchain_core.utils.utils import LC_ID_PREFIX, from_env
if TYPE_CHECKING:
import uuid
from langchain_model_profiles import ModelProfile # type: ignore[import-untyped]
from langchain_core.output_parsers.base import OutputParserLike
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.tools import BaseTool
@@ -91,10 +88,7 @@ def _generate_response_from_error(error: BaseException) -> list[ChatGeneration]:
try:
metadata["body"] = response.json()
except Exception:
try:
metadata["body"] = getattr(response, "text", None)
except Exception:
metadata["body"] = None
metadata["body"] = getattr(response, "text", None)
if hasattr(response, "headers"):
try:
metadata["headers"] = dict(response.headers)
@@ -316,6 +310,13 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
does not properly support streaming.
"""
model_provider: str | None = None
"""The model provider name, e.g., 'openai', 'anthropic', etc.
Used to assign provenance on messages generated by the model, and to look up
model capabilities (e.g., context window sizes and feature support).
"""
output_version: str | None = Field(
default_factory=from_env("LC_OUTPUT_VERSION", default=None)
)
@@ -335,7 +336,7 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
[`langchain-openai`](https://pypi.org/project/langchain-openai)) can also use this
field to roll out new content formats in a backward-compatible way.
!!! version-added "Added in `langchain-core` 1.0"
!!! version-added "Added in version 1.0"
"""
@@ -523,7 +524,9 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
for chunk in self._stream(input_messages, stop=stop, **kwargs):
if chunk.message.id is None:
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
chunk.message.response_metadata = _gen_info_and_msg_metadata(
chunk, model_provider=self.model_provider
)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
@@ -655,7 +658,9 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
):
if chunk.message.id is None:
chunk.message.id = run_id
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
chunk.message.response_metadata = _gen_info_and_msg_metadata(
chunk, model_provider=self.model_provider
)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
@@ -848,21 +853,16 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
messages: List of list of messages.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
@@ -971,21 +971,16 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
messages: List of list of messages.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
tags: The tags to apply.
metadata: The metadata to apply.
run_name: The name of the run.
run_id: The ID of the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An `LLMResult`, which contains a list of candidate `Generations` for each
@@ -1163,7 +1158,9 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
index = -1
index_type = ""
for chunk in self._stream(messages, stop=stop, **kwargs):
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
chunk.message.response_metadata = _gen_info_and_msg_metadata(
chunk, model_provider=self.model_provider
)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
@@ -1211,6 +1208,20 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
else:
result = self._generate(messages, stop=stop, **kwargs)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation, model_provider=self.model_provider
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if self.output_version == "v1":
# Overwrite .content with .content_blocks
for generation in result.generations:
@@ -1218,18 +1229,6 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
generation.message, "v1"
)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if check_cache and llm_cache:
llm_cache.update(prompt, llm_string, result.generations)
return result
@@ -1281,7 +1280,9 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
index = -1
index_type = ""
async for chunk in self._astream(messages, stop=stop, **kwargs):
chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
chunk.message.response_metadata = _gen_info_and_msg_metadata(
chunk, model_provider=self.model_provider
)
if self.output_version == "v1":
# Overwrite .content with .content_blocks
chunk.message = _update_message_content_to_blocks(
@@ -1329,6 +1330,19 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
else:
result = await self._agenerate(messages, stop=stop, **kwargs)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation, model_provider=self.model_provider
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if self.output_version == "v1":
# Overwrite .content with .content_blocks
for generation in result.generations:
@@ -1336,18 +1350,6 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
generation.message, "v1"
)
# Add response metadata to each generation
for idx, generation in enumerate(result.generations):
if run_manager and generation.message.id is None:
generation.message.id = f"{LC_ID_PREFIX}-{run_manager.run_id}-{idx}"
generation.message.response_metadata = _gen_info_and_msg_metadata(
generation
)
if len(result.generations) == 1 and result.llm_output is not None:
result.generations[0].message.response_metadata = {
**result.llm_output,
**result.generations[0].message.response_metadata,
}
if check_cache and llm_cache:
await llm_cache.aupdate(prompt, llm_string, result.generations)
return result
@@ -1518,10 +1520,10 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
Args:
schema: The output schema. Can be passed in as:
- An OpenAI function/tool schema,
- A JSON Schema,
- A `TypedDict` class,
- Or a Pydantic class.
- an OpenAI function/tool schema,
- a JSON Schema,
- a `TypedDict` class,
- or a Pydantic class.
If `schema` is a Pydantic class then the model output will be a
Pydantic instance of that class, and the model-generated fields will be
@@ -1533,15 +1535,11 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
when specifying a Pydantic or `TypedDict` class.
include_raw:
If `False` then only the parsed structured output is returned.
If an error occurs during model output parsing it will be raised.
If `True` then both the raw model response (a `BaseMessage`) and the
parsed model response will be returned.
If an error occurs during output parsing it will be caught and returned
as well.
If `False` then only the parsed structured output is returned. If
an error occurs during model output parsing it will be raised. If `True`
then both the raw model response (a `BaseMessage`) and the parsed model
response will be returned. If an error occurs during output parsing it
will be caught and returned as well.
The final output is always a `dict` with keys `'raw'`, `'parsed'`, and
`'parsing_error'`.
@@ -1646,8 +1644,8 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
# }
```
!!! warning "Behavior changed in `langchain-core` 0.2.26"
Added support for `TypedDict` class.
!!! warning "Behavior changed in 0.2.26"
Added support for TypedDict class.
""" # noqa: E501
_ = kwargs.pop("method", None)
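A usage sketch for the `with_structured_output` behaviour documented above; `ChatOpenAI` and the model name are illustrative stand-ins for any integration that supports structured output, and running it requires provider credentials.

```python
from pydantic import BaseModel, Field

from langchain_openai import ChatOpenAI  # illustrative; any capable chat model works


class Joke(BaseModel):
    """Joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


model = ChatOpenAI(model="gpt-4o-mini")
structured = model.with_structured_output(Joke, include_raw=True)
out = structured.invoke("Tell me a joke about cats")

# With include_raw=True the result is a dict with "raw", "parsed" and "parsing_error".
print(out["parsed"])
```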
@@ -1688,40 +1686,6 @@ class BaseChatModel(BaseLanguageModel[AIMessage], ABC):
return RunnableMap(raw=llm) | parser_with_fallback
return llm | output_parser
@property
@beta()
def profile(self) -> ModelProfile:
"""Return profiling information for the model.
This property relies on the `langchain-model-profiles` package to retrieve chat
model capabilities, such as context window sizes and supported features.
Raises:
ImportError: If `langchain-model-profiles` is not installed.
Returns:
A `ModelProfile` object containing profiling information for the model.
"""
try:
from langchain_model_profiles import get_model_profile # noqa: PLC0415
except ImportError as err:
informative_error_message = (
"To access model profiling information, please install the "
"`langchain-model-profiles` package: "
"`pip install langchain-model-profiles`."
)
raise ImportError(informative_error_message) from err
provider_id = self._llm_type
model_name = (
# Model name is not standardized across integrations. New integrations
# should prefer `model`.
getattr(self, "model", None)
or getattr(self, "model_name", None)
or getattr(self, "model_id", "")
)
return get_model_profile(provider_id, model_name) or {}
class SimpleChatModel(BaseChatModel):
"""Simplified implementation for a chat model to inherit from.
@@ -1773,19 +1737,20 @@ class SimpleChatModel(BaseChatModel):
def _gen_info_and_msg_metadata(
generation: ChatGeneration | ChatGenerationChunk,
model_provider: str | None = None,
) -> dict:
return {
metadata = {
**(generation.generation_info or {}),
**generation.message.response_metadata,
}
_MAX_CLEANUP_DEPTH = 100
if model_provider and "model_provider" not in metadata:
metadata["model_provider"] = model_provider
return metadata
def _cleanup_llm_representation(serialized: Any, depth: int) -> None:
"""Remove non-serializable objects from a serialized object."""
if depth > _MAX_CLEANUP_DEPTH: # Don't cooperate for pathological cases
if depth > 100: # Don't cooperate for pathological cases
return
if not isinstance(serialized, dict):
View File
@@ -1,4 +1,4 @@
"""Fake chat models for testing purposes."""
"""Fake chat model for testing purposes."""
import asyncio
import re
View File
@@ -1,7 +1,4 @@
"""Base interface for traditional large language models (LLMs) to expose.
These are traditionally older models (newer models generally are chat models).
"""
"""Base interface for large language models to expose."""
from __future__ import annotations
@@ -651,12 +648,9 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
run_manager: Callback manager for the run.
Returns:
@@ -674,12 +668,9 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: The prompts to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
run_manager: Callback manager for the run.
Returns:
@@ -711,14 +702,11 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Yields:
Generation chunks.
@@ -740,14 +728,11 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Yields:
Generation chunks.
@@ -858,14 +843,10 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -875,9 +856,8 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Raises:
ValueError: If prompts is not a list.
@@ -1133,14 +1113,10 @@ class BaseLLM(BaseLanguageModel[str], ABC):
Args:
prompts: List of string prompts.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
callbacks: `Callbacks` to pass through.
Used for executing additional functionality, such as logging or
streaming, throughout generation.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: `Callbacks` to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
tags: List of tags to associate with each prompt. If provided, the length
of the list must match the length of the prompts list.
metadata: List of metadata dictionaries to associate with each prompt. If
@@ -1150,9 +1126,8 @@ class BaseLLM(BaseLanguageModel[str], ABC):
length of the list must match the length of the prompts list.
run_id: List of run IDs to associate with each prompt. If provided, the
length of the list must match the length of the prompts list.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Raises:
ValueError: If the length of `callbacks`, `tags`, `metadata`, or
@@ -1432,16 +1407,12 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
@@ -1462,16 +1433,12 @@ class LLM(BaseLLM):
Args:
prompt: The prompt to generate from.
stop: Stop words to use when generating.
Model output is cut off at the first occurrence of any of these
substrings.
If stop tokens are not supported consider raising `NotImplementedError`.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of the stop substrings.
If stop tokens are not supported consider raising NotImplementedError.
run_manager: Callback manager for the run.
**kwargs: Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
The model output as a string. SHOULD NOT include the prompt.
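To illustrate the `_call` contract spelled out above (stop handling, output that does not echo the prompt), here is a toy `LLM` subclass; the class name and canned text are invented.

```python
from typing import Any

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CannedLLM(LLM):
    """Toy text-completion model returning a fixed string."""

    @property
    def _llm_type(self) -> str:
        return "canned"

    def _call(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        text = "This is a canned completion."
        # Cut output at the first occurrence of any stop substring.
        if stop:
            for token in stop:
                idx = text.find(token)
                if idx != -1:
                    text = text[:idx]
        return text  # never includes the prompt


print(CannedLLM().invoke("hi", stop=["completion"]))  # "This is a canned "
```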
View File
@@ -61,15 +61,13 @@ class Reviver:
"""Initialize the reviver.
Args:
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the
environment if `secrets_from_env` is `True`.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.
"""
@@ -197,15 +195,13 @@ def loads(
Args:
text: The string to load.
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the environment
if `secrets_from_env` is `True`.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.
@@ -241,15 +237,13 @@ def load(
Args:
obj: The object to load.
secrets_map: A map of secrets to load.
If a secret is not found in the map, it will be loaded from the environment
if `secrets_from_env` is `True`.
secrets_map: A map of secrets to load. If a secret is not found in
the map, it will be loaded from the environment if `secrets_from_env`
is True.
valid_namespaces: A list of additional namespaces (modules)
to allow to be deserialized.
secrets_from_env: Whether to load secrets from the environment.
additional_import_mappings: A dictionary of additional namespace mappings
You can use this to override default mappings or add new mappings.
ignore_unserializable_fields: Whether to ignore unserializable fields.
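A round-trip sketch for the loader described above; the secret id and value passed to `secrets_map` are placeholders.

```python
from langchain_core.load import dumps, loads
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("human", "Tell me about {topic}")])

serialized = dumps(prompt)  # JSON string in the LangChain serialization format

# secrets_map supplies secret values by id instead of reading them from the environment.
restored = loads(serialized, secrets_map={"OPENAI_API_KEY": "sk-placeholder"})
assert restored == prompt
```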
View File
@@ -97,14 +97,11 @@ class Serializable(BaseModel, ABC):
by default. This is to prevent accidental serialization of objects that should
not be serialized.
- `get_lc_namespace`: Get the namespace of the LangChain object.
During deserialization, this namespace is used to identify
the correct class to instantiate.
Please see the `Reviver` class in `langchain_core.load.load` for more details.
During deserialization an additional mapping is used to handle classes that have moved
or been renamed across package versions.
- `lc_secrets`: A map of constructor argument names to secret ids.
- `lc_attributes`: List of additional attribute names that should be included
as part of the serialized representation.
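A sketch of a `Serializable` subclass wiring up the hooks listed above; the package namespace and secret id are invented for illustration.

```python
from langchain_core.load.serializable import Serializable


class MyComponent(Serializable):
    """Toy serializable object."""

    endpoint: str
    api_key: str

    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True  # opt in; subclasses are not serialized by default

    @classmethod
    def get_lc_namespace(cls) -> list[str]:
        return ["my_pkg", "components"]  # used to locate the class on deserialization

    @property
    def lc_secrets(self) -> dict[str, str]:
        return {"api_key": "MY_API_KEY"}  # constructor argument -> secret id
```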
View File
@@ -50,7 +50,7 @@ class InputTokenDetails(TypedDict, total=False):
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
"""
@@ -85,7 +85,7 @@ class OutputTokenDetails(TypedDict, total=False):
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
"""
@@ -123,7 +123,7 @@ class UsageMetadata(TypedDict):
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
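The example dict is clipped in this view; a complete `UsageMetadata` value, assuming the detail keys currently defined on the typed dicts, looks roughly like:

```python
from langchain_core.messages.ai import UsageMetadata

usage: UsageMetadata = {
    "input_tokens": 350,
    "output_tokens": 240,
    "total_tokens": 590,
    "input_token_details": {"cache_read": 100},
    "output_token_details": {"reasoning": 200},
}
```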
View File
@@ -5,9 +5,11 @@ from __future__ import annotations
from typing import TYPE_CHECKING, Any, cast, overload
from pydantic import ConfigDict, Field
from typing_extensions import Self
from langchain_core._api.deprecation import warn_deprecated
from langchain_core.load.serializable import Serializable
from langchain_core.messages import content as types
from langchain_core.utils import get_bolded_text
from langchain_core.utils._merge import merge_dicts, merge_lists
from langchain_core.utils.interactive_env import is_interactive_env
@@ -15,9 +17,6 @@ from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from collections.abc import Sequence
from typing_extensions import Self
from langchain_core.messages import content as types
from langchain_core.prompts.chat import ChatPromptTemplate
@@ -200,7 +199,7 @@ class BaseMessage(Serializable):
def content_blocks(self) -> list[types.ContentBlock]:
r"""Load content blocks from the message content.
!!! version-added "Added in `langchain-core` 1.0.0"
!!! version-added "Added in version 1.0.0"
"""
# Needed here to avoid circular import, as these classes import BaseMessages
View File
@@ -12,11 +12,10 @@ the implementation in `BaseMessage`.
from __future__ import annotations
from collections.abc import Callable
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Callable
from langchain_core.messages import AIMessage, AIMessageChunk
from langchain_core.messages import content as types
View File
@@ -368,7 +368,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
else:
# Assume it's raw base64 without data URI
try:
# Validate base64 and decode for MIME type detection
# Validate base64 and decode for mime type detection
decoded_bytes = base64.b64decode(url, validate=True)
image_url_b64_block = {
@@ -379,7 +379,7 @@ def _convert_to_v1_from_genai(message: AIMessage) -> list[types.ContentBlock]:
try:
import filetype # type: ignore[import-not-found] # noqa: PLC0415
# Guess MIME type based on file bytes
# Guess mime type based on file bytes
mime_type = None
kind = filetype.guess(decoded_bytes)
if kind:
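The MIME-sniffing fallback shown in this hunk boils down to the following standalone sketch; `filetype` is an optional third-party package and the GIF payload is a stand-in for real image bytes.

```python
import base64

import filetype  # optional dependency; sniffs magic bytes rather than extensions

payload = base64.b64encode(b"GIF89a" + b"\x00" * 16).decode()

raw = base64.b64decode(payload, validate=True)  # raises binascii.Error on bad input
kind = filetype.guess(raw)
mime_type = kind.mime if kind else None
print(mime_type)  # "image/gif"
```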
View File
@@ -4,6 +4,7 @@ from __future__ import annotations
import json
import warnings
from collections.abc import Iterable
from typing import TYPE_CHECKING, Any, Literal, cast
from langchain_core.language_models._utils import (
@@ -13,8 +14,6 @@ from langchain_core.language_models._utils import (
from langchain_core.messages import content as types
if TYPE_CHECKING:
from collections.abc import Iterable
from langchain_core.messages import AIMessage, AIMessageChunk
View File
@@ -644,7 +644,7 @@ class AudioContentBlock(TypedDict):
class PlainTextContentBlock(TypedDict):
"""Plaintext data (e.g., from a `.txt` or `.md` document).
"""Plaintext data (e.g., from a document).
!!! note
A `PlainTextContentBlock` existed in `langchain-core<1.0.0`. Although the
@@ -767,7 +767,7 @@ class FileContentBlock(TypedDict):
class NonStandardContentBlock(TypedDict):
"""Provider-specific content data.
"""Provider-specific data.
This block contains data for which there is not yet a standard type.
@@ -802,7 +802,7 @@ class NonStandardContentBlock(TypedDict):
"""
value: dict[str, Any]
"""Provider-specific content data."""
"""Provider-specific data."""
index: NotRequired[int | str]
"""Index of block in aggregate response. Used during streaming."""
@@ -867,7 +867,7 @@ def _get_data_content_block_types() -> tuple[str, ...]:
Example: ("image", "video", "audio", "text-plain", "file")
Note that old style multimodal blocks share type literals with new style blocks.
Specifically, "image", "audio", and "file".
Speficially, "image", "audio", and "file".
See the docstring of `_normalize_messages` in `language_models._utils` for details.
"""
@@ -906,7 +906,7 @@ def is_data_content_block(block: dict) -> bool:
# 'text' is checked to support v0 PlainTextContentBlock types
# We must guard against new style TextContentBlock which also has 'text' `type`
# by ensuring the presence of `source_type`
# by ensuring the presense of `source_type`
if block["type"] == "text" and "source_type" not in block: # noqa: SIM103 # This is more readable
return False
@@ -1399,7 +1399,7 @@ def create_non_standard_block(
"""Create a `NonStandardContentBlock`.
Args:
value: Provider-specific content data.
value: Provider-specific data.
id: Content block identifier. Generated automatically if not provided.
index: Index of block in aggregate response. Used during streaming.
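A small sketch of the factory documented above, assuming it keeps the `non_standard` type literal:

```python
from langchain_core.messages.content import create_non_standard_block

# Wrap provider-specific payloads for which no standard block type exists yet.
block = create_non_standard_block({"provider_feature": {"foo": "bar"}})
print(block["type"])   # "non_standard"
print(block["value"])  # the provider-specific payload, unchanged
```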
View File
@@ -86,7 +86,7 @@ AnyMessage = Annotated[
| Annotated[ToolMessageChunk, Tag(tag="ToolMessageChunk")],
Field(discriminator=Discriminator(_get_type)),
]
"""A type representing any defined `Message` or `MessageChunk` type."""
""""A type representing any defined `Message` or `MessageChunk` type."""
def get_buffer_string(
@@ -328,16 +328,12 @@ def _convert_to_message(message: MessageLikeRepresentation) -> BaseMessage:
"""
if isinstance(message, BaseMessage):
message_ = message
elif isinstance(message, Sequence):
if isinstance(message, str):
message_ = _create_message_from_message_type("human", message)
else:
try:
message_type_str, template = message
except ValueError as e:
msg = "Message as a sequence must be (role string, template)"
raise NotImplementedError(msg) from e
message_ = _create_message_from_message_type(message_type_str, template)
elif isinstance(message, str):
message_ = _create_message_from_message_type("human", message)
elif isinstance(message, Sequence) and len(message) == 2:
# mypy doesn't realise this can't be a string given the previous branch
message_type_str, template = message # type: ignore[misc]
message_ = _create_message_from_message_type(message_type_str, template)
elif isinstance(message, dict):
msg_kwargs = message.copy()
try:
@@ -1101,7 +1097,7 @@ def convert_to_openai_messages(
# ]
```
!!! version-added "Added in `langchain-core` 0.3.11"
!!! version-added "Added in version 0.3.11"
""" # noqa: E501
if text_format not in {"string", "block"}:
@@ -1701,7 +1697,7 @@ def count_tokens_approximately(
Warning:
This function does not currently support counting image tokens.
!!! version-added "Added in `langchain-core` 0.3.46"
!!! version-added "Added in version 0.3.46"
"""
token_count = 0.0
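For the two utilities documented in this hunk, a minimal sketch (the messages are illustrative):

```python
from langchain_core.messages import (
    AIMessage,
    HumanMessage,
    SystemMessage,
    convert_to_openai_messages,
)
from langchain_core.messages.utils import count_tokens_approximately

messages = [
    SystemMessage("You are terse."),
    HumanMessage("Summarise LangChain in one line."),
    AIMessage("A framework for building LLM applications."),
]

print(convert_to_openai_messages(messages))  # list of {"role": ..., "content": ...} dicts

# Rough character-based heuristic; image tokens are not counted.
print(count_tokens_approximately(messages))
```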
View File
@@ -1,16 +1,11 @@
"""Format instructions."""
JSON_FORMAT_INSTRUCTIONS = """STRICT OUTPUT FORMAT:
- Return only the JSON value that conforms to the schema. Do not include any additional text, explanations, headings, or separators.
- Do not wrap the JSON in Markdown or code fences (no ``` or ```json).
- Do not prepend or append any text (e.g., do not write "Here is the JSON:").
- The response must be a single top-level JSON value exactly as required by the schema (object/array/etc.), with no trailing commas or comments.
JSON_FORMAT_INSTRUCTIONS = """The output should be formatted as a JSON instance that conforms to the JSON schema below.
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}}
the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
As an example, for the schema {{"properties": {{"foo": {{"title": "Foo", "description": "a list of strings", "type": "array", "items": {{"type": "string"}}}}}}, "required": ["foo"]}} the object {{"foo": ["bar", "baz"]}} is a well-formatted instance of the schema. The object {{"properties": {{"foo": ["bar", "baz"]}}}} is not well-formatted.
Here is the output schema (shown in a code block for readability only — do not include any backticks or Markdown in your output):
Here is the output schema:
```
{schema}
```""" # noqa: E501
View File
@@ -15,11 +15,7 @@ from langchain_core.messages.tool import tool_call as create_tool_call
from langchain_core.output_parsers.transform import BaseCumulativeTransformOutputParser
from langchain_core.outputs import ChatGeneration, Generation
from langchain_core.utils.json import parse_partial_json
from langchain_core.utils.pydantic import (
TypeBaseModel,
is_pydantic_v1_subclass,
is_pydantic_v2_subclass,
)
from langchain_core.utils.pydantic import TypeBaseModel
logger = logging.getLogger(__name__)
@@ -327,15 +323,7 @@ class PydanticToolsParser(JsonOutputToolsParser):
return None if self.first_tool_only else []
json_results = [json_results] if self.first_tool_only else json_results
name_dict_v2: dict[str, TypeBaseModel] = {
tool.model_config.get("title") or tool.__name__: tool
for tool in self.tools
if is_pydantic_v2_subclass(tool)
}
name_dict_v1: dict[str, TypeBaseModel] = {
tool.__name__: tool for tool in self.tools if is_pydantic_v1_subclass(tool)
}
name_dict: dict[str, TypeBaseModel] = {**name_dict_v2, **name_dict_v1}
name_dict = {tool.__name__: tool for tool in self.tools}
pydantic_objects = []
for res in json_results:
if not isinstance(res["args"], dict):
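A usage sketch for `PydanticToolsParser`, which matches tool calls to Pydantic classes by name as in the hunk above; the `Add` tool and the tool-call payload are invented.

```python
from pydantic import BaseModel

from langchain_core.messages import AIMessage
from langchain_core.output_parsers.openai_tools import PydanticToolsParser


class Add(BaseModel):
    a: int
    b: int


parser = PydanticToolsParser(tools=[Add], first_tool_only=True)

msg = AIMessage(
    content="",
    tool_calls=[{"name": "Add", "args": {"a": 1, "b": 2}, "id": "call_1", "type": "tool_call"}],
)
print(parser.invoke(msg))  # a=1 b=2 (a validated Add instance)
```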
View File
@@ -2,17 +2,15 @@
from __future__ import annotations
from typing import TYPE_CHECKING, Literal
from typing import Literal
from pydantic import model_validator
from typing_extensions import Self
from langchain_core.messages import BaseMessage, BaseMessageChunk
from langchain_core.outputs.generation import Generation
from langchain_core.utils._merge import merge_dicts
if TYPE_CHECKING:
from typing_extensions import Self
class ChatGeneration(Generation):
"""A single chat generation output.
View File
@@ -20,7 +20,8 @@ class Generation(Serializable):
LangChain users working with chat models will usually access information via
`AIMessage` (returned from runnable interfaces) or `LLMResult` (available
via callbacks). Please refer to `AIMessage` and `LLMResult` for more information.
via callbacks). Please refer to the `AIMessage` and `LLMResult` schema documentation
for more information.
"""
text: str
@@ -33,13 +34,11 @@ class Generation(Serializable):
"""
type: Literal["Generation"] = "Generation"
"""Type is used exclusively for serialization purposes.
Set to "Generation" for this class.
"""
Set to "Generation" for this class."""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -53,7 +52,7 @@ class Generation(Serializable):
class GenerationChunk(Generation):
"""`GenerationChunk`, which can be concatenated with other Generation chunks."""
"""Generation chunk, which can be concatenated with other Generation chunks."""
def __add__(self, other: GenerationChunk) -> GenerationChunk:
"""Concatenate two `GenerationChunk`s.
View File
@@ -30,13 +30,15 @@ class PromptValue(Serializable, ABC):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "schema", "prompt"]`
"""
@@ -48,7 +50,7 @@ class PromptValue(Serializable, ABC):
@abstractmethod
def to_messages(self) -> list[BaseMessage]:
"""Return prompt as a list of messages."""
"""Return prompt as a list of Messages."""
class StringPromptValue(PromptValue):
@@ -62,6 +64,8 @@ class StringPromptValue(PromptValue):
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "prompts", "base"]`
"""
@@ -97,6 +101,8 @@ class ChatPromptValue(PromptValue):
def get_lc_namespace(cls) -> list[str]:
"""Get the namespace of the LangChain object.
This is used to determine the namespace of the object when serializing.
Returns:
`["langchain", "prompts", "chat"]`
"""
View File
@@ -6,7 +6,7 @@ import contextlib
import json
import typing
from abc import ABC, abstractmethod
from collections.abc import Mapping
from collections.abc import Callable, Mapping
from functools import cached_property
from pathlib import Path
from typing import (
@@ -33,8 +33,6 @@ from langchain_core.runnables.config import ensure_config
from langchain_core.utils.pydantic import create_model_v2
if TYPE_CHECKING:
from collections.abc import Callable
from langchain_core.documents import Document
@@ -48,27 +46,21 @@ class BasePromptTemplate(
input_variables: list[str]
"""A list of the names of the variables whose values are required as inputs to the
prompt.
"""
prompt."""
optional_variables: list[str] = Field(default=[])
"""A list of the names of the variables for placeholder or `MessagePlaceholder` that
are optional.
These variables are auto inferred from the prompt and user need not provide them.
"""
"""optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional. These variables are auto inferred
from the prompt and user need not provide them."""
input_types: typing.Dict[str, Any] = Field(default_factory=dict, exclude=True) # noqa: UP006
"""A dictionary of the types of the variables the prompt template expects.
If not provided, all variables are assumed to be strings.
"""
If not provided, all variables are assumed to be strings."""
output_parser: BaseOutputParser | None = None
"""How to parse the output of calling an LLM on this formatted prompt."""
partial_variables: Mapping[str, Any] = Field(default_factory=dict)
"""A dictionary of the partial variables the prompt template carries.
Partial variables populate the template so that you don't need to pass them in every
time you call the prompt.
"""
Partial variables populate the template so that you don't need to
pass them in every time you call the prompt."""
metadata: typing.Dict[str, Any] | None = None # noqa: UP006
"""Metadata to be used for tracing."""
tags: list[str] | None = None
@@ -113,7 +105,7 @@ class BasePromptTemplate(
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
model_config = ConfigDict(
@@ -135,7 +127,7 @@ class BasePromptTemplate(
"""Get the input schema for the prompt.
Args:
config: Configuration for the prompt.
config: configuration for the prompt.
Returns:
The input schema for the prompt.
@@ -203,8 +195,8 @@ class BasePromptTemplate(
"""Invoke the prompt.
Args:
input: Input to the prompt.
config: Configuration for the prompt.
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
Returns:
The output of the prompt.
@@ -229,8 +221,8 @@ class BasePromptTemplate(
"""Async invoke the prompt.
Args:
input: Input to the prompt.
config: Configuration for the prompt.
input: Dict, input to the prompt.
config: RunnableConfig, configuration for the prompt.
Returns:
The output of the prompt.
@@ -250,7 +242,7 @@ class BasePromptTemplate(
@abstractmethod
def format_prompt(self, **kwargs: Any) -> PromptValue:
"""Create `PromptValue`.
"""Create Prompt Value.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -260,7 +252,7 @@ class BasePromptTemplate(
"""
async def aformat_prompt(self, **kwargs: Any) -> PromptValue:
"""Async create `PromptValue`.
"""Async create Prompt Value.
Args:
**kwargs: Any arguments to be passed to the prompt template.
@@ -274,7 +266,7 @@ class BasePromptTemplate(
"""Return a partial of the prompt template.
Args:
**kwargs: Partial variables to set.
**kwargs: partial variables to set.
Returns:
A partial of the prompt template.
@@ -304,9 +296,9 @@ class BasePromptTemplate(
A formatted string.
Example:
```python
prompt.format(variable1="foo")
```
```python
prompt.format(variable1="foo")
```
"""
async def aformat(self, **kwargs: Any) -> FormatOutputType:
@@ -319,9 +311,9 @@ class BasePromptTemplate(
A formatted string.
Example:
```python
await prompt.aformat(variable1="foo")
```
```python
await prompt.aformat(variable1="foo")
```
"""
return self.format(**kwargs)
@@ -356,9 +348,9 @@ class BasePromptTemplate(
NotImplementedError: If the prompt type is not implemented.
Example:
```python
prompt.save(file_path="path/prompt.yaml")
```
```python
prompt.save(file_path="path/prompt.yaml")
```
"""
if self.partial_variables:
msg = "Cannot save prompt with partial variables."
@@ -410,23 +402,23 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
First, this pulls information from the document from two sources:
1. `page_content`:
This takes the information from the `document.page_content` and assigns it to a
variable named `page_content`.
2. `metadata`:
This takes information from `document.metadata` and assigns it to variables of
the same name.
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: `Document`, the `page_content` and `metadata` will be used to create
doc: Document, the page_content and metadata will be used to create
the final string.
prompt: `BasePromptTemplate`, will be used to format the `page_content`
and `metadata` into the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
Returns:
String of the document formatted.
string of the document formatted.
Example:
```python
@@ -437,6 +429,7 @@ def format_document(doc: Document, prompt: BasePromptTemplate[str]) -> str:
prompt = PromptTemplate.from_template("Page {page}: {page_content}")
format_document(doc, prompt)
>>> "Page 1: This is a joke"
```
"""
return prompt.format(**_get_document_info(doc, prompt))
@@ -447,22 +440,22 @@ async def aformat_document(doc: Document, prompt: BasePromptTemplate[str]) -> st
First, this pulls information from the document from two sources:
1. `page_content`:
This takes the information from the `document.page_content` and assigns it to a
variable named `page_content`.
2. `metadata`:
This takes information from `document.metadata` and assigns it to variables of
the same name.
1. page_content:
This takes the information from the `document.page_content`
and assigns it to a variable named `page_content`.
2. metadata:
This takes information from `document.metadata` and assigns
it to variables of the same name.
Those variables are then passed into the `prompt` to produce a formatted string.
Args:
doc: `Document`, the `page_content` and `metadata` will be used to create
doc: Document, the page_content and metadata will be used to create
the final string.
prompt: `BasePromptTemplate`, will be used to format the `page_content`
and `metadata` into the final string.
prompt: BasePromptTemplate, will be used to format the page_content
and metadata into the final string.
Returns:
String of the document formatted.
string of the document formatted.
"""
return await prompt.aformat(**_get_document_info(doc, prompt))
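Tying together `partial`, `invoke`, and formatting as documented above (the template text is illustrative):

```python
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{greeting}, {name}! Today is {day}.")

# partial() pre-fills variables so callers only supply the rest at invoke time.
partial_prompt = prompt.partial(greeting="Hello")

value = partial_prompt.invoke({"name": "Ada", "day": "Tuesday"})
print(value.to_string())  # "Hello, Ada! Today is Tuesday."
```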
View File
@@ -587,15 +587,14 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
for prompt in self.prompt:
inputs = {var: kwargs[var] for var in prompt.input_variables}
if isinstance(prompt, StringPromptTemplate):
formatted_text: str = prompt.format(**inputs)
if formatted_text != "":
content.append({"type": "text", "text": formatted_text})
formatted: str | ImageURL | dict[str, Any] = prompt.format(**inputs)
content.append({"type": "text", "text": formatted})
elif isinstance(prompt, ImagePromptTemplate):
formatted_image: ImageURL = prompt.format(**inputs)
content.append({"type": "image_url", "image_url": formatted_image})
formatted = prompt.format(**inputs)
content.append({"type": "image_url", "image_url": formatted})
elif isinstance(prompt, DictPromptTemplate):
formatted_dict: dict[str, Any] = prompt.format(**inputs)
content.append(formatted_dict)
formatted = prompt.format(**inputs)
content.append(formatted)
return self._msg_class(
content=content, additional_kwargs=self.additional_kwargs
)
@@ -618,15 +617,16 @@ class _StringImageMessagePromptTemplate(BaseMessagePromptTemplate):
for prompt in self.prompt:
inputs = {var: kwargs[var] for var in prompt.input_variables}
if isinstance(prompt, StringPromptTemplate):
formatted_text: str = await prompt.aformat(**inputs)
if formatted_text != "":
content.append({"type": "text", "text": formatted_text})
formatted: str | ImageURL | dict[str, Any] = await prompt.aformat(
**inputs
)
content.append({"type": "text", "text": formatted})
elif isinstance(prompt, ImagePromptTemplate):
formatted_image: ImageURL = await prompt.aformat(**inputs)
content.append({"type": "image_url", "image_url": formatted_image})
formatted = await prompt.aformat(**inputs)
content.append({"type": "image_url", "image_url": formatted})
elif isinstance(prompt, DictPromptTemplate):
formatted_dict: dict[str, Any] = prompt.format(**inputs)
content.append(formatted_dict)
formatted = prompt.format(**inputs)
content.append(formatted)
return self._msg_class(
content=content, additional_kwargs=self.additional_kwargs
)
@@ -776,6 +776,11 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
Use to create flexible templated prompts for chat models.
!!! warning "Behavior changed in 0.2.24"
You can pass any Message-like formats supported by
`ChatPromptTemplate.from_messages()` directly to `ChatPromptTemplate()`
init.
```python
from langchain_core.prompts import ChatPromptTemplate
@@ -891,35 +896,25 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Create a chat prompt template from a variety of message formats.
Args:
messages: Sequence of message representations.
messages: sequence of message representations.
A message can be represented using the following formats:
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
template_format: Format of the template.
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
template_format: format of the template.
input_variables: A list of the names of the variables whose values are
required as inputs to the prompt.
optional_variables: A list of the names of the variables for placeholder
or MessagePlaceholder that are optional.
These variables are automatically inferred from the prompt, so the user need not
provide them.
partial_variables: A dictionary of the partial variables the prompt
template carries.
Partial variables populate the template so that you don't need to pass
them in every time you call the prompt.
template carries. Partial variables populate the template so that you
don't need to pass them in every time you call the prompt.
validate_template: Whether to validate the template.
input_types: A dictionary of the types of the variables the prompt template
expects.
If not provided, all variables are assumed to be strings.
expects. If not provided, all variables are assumed to be strings.
Examples:
Instantiation from a list of message templates:
@@ -1124,17 +1119,12 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
)
```
Args:
messages: Sequence of message representations.
messages: sequence of message representations.
A message can be represented using the following formats:
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}".
template_format: format of the template.
Returns:
@@ -1246,7 +1236,7 @@ class ChatPromptTemplate(BaseChatPromptTemplate):
"""Extend the chat template with a sequence of messages.
Args:
messages: Sequence of message representations to append.
messages: sequence of message representations to append.
"""
self.messages.extend(
[_convert_to_message_template(message) for message in messages]
@@ -1343,25 +1333,11 @@ def _create_template_from_message_type(
raise ValueError(msg)
var_name = template[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=True)
else:
try:
var_name_wrapped, is_optional = template
except ValueError as e:
msg = (
"Unexpected arguments for placeholder message type."
" Expected either a single string variable name"
" or a list of [variable_name: str, is_optional: bool]."
f" Got: {template}"
)
raise ValueError(msg) from e
if not isinstance(is_optional, bool):
msg = f"Expected is_optional to be a boolean. Got: {is_optional}"
raise ValueError(msg) # noqa: TRY004
elif len(template) == 2 and isinstance(template[1], bool):
var_name_wrapped, is_optional = template
if not isinstance(var_name_wrapped, str):
msg = f"Expected variable name to be a string. Got: {var_name_wrapped}"
raise ValueError(msg) # noqa: TRY004
raise ValueError(msg) # noqa:TRY004
if var_name_wrapped[0] != "{" or var_name_wrapped[-1] != "}":
msg = (
f"Invalid placeholder template: {var_name_wrapped}."
@@ -1371,6 +1347,14 @@ def _create_template_from_message_type(
var_name = var_name_wrapped[1:-1]
message = MessagesPlaceholder(variable_name=var_name, optional=is_optional)
else:
msg = (
"Unexpected arguments for placeholder message type."
" Expected either a single string variable name"
" or a list of [variable_name: str, is_optional: bool]."
f" Got: {template}"
)
raise ValueError(msg)
else:
msg = (
f"Unexpected message type: {message_type}. Use one of 'human',"
@@ -1424,11 +1408,10 @@ def _convert_to_message_template(
)
raise ValueError(msg)
message = (message["role"], message["content"])
try:
message_type_str, template = message
except ValueError as e:
if len(message) != 2:
msg = f"Expected 2-tuple of (role, template), got {message}"
raise ValueError(msg) from e
raise ValueError(msg)
message_type_str, template = message
if isinstance(message_type_str, str):
message_ = _create_template_from_message_type(
message_type_str, template, template_format=template_format

View File

@@ -69,7 +69,7 @@ class DictPromptTemplate(RunnableSerializable[dict, dict]):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod

View File

@@ -6,10 +6,10 @@ from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any
from langchain_core.load import Serializable
from langchain_core.messages import BaseMessage
from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from langchain_core.messages import BaseMessage
from langchain_core.prompts.chat import ChatPromptTemplate
@@ -18,7 +18,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -32,13 +32,13 @@ class BaseMessagePromptTemplate(Serializable, ABC):
@abstractmethod
def format_messages(self, **kwargs: Any) -> list[BaseMessage]:
"""Format messages from kwargs. Should return a list of `BaseMessage` objects.
"""Format messages from kwargs. Should return a list of BaseMessages.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
List of `BaseMessage` objects.
List of BaseMessages.
"""
async def aformat_messages(self, **kwargs: Any) -> list[BaseMessage]:
@@ -48,7 +48,7 @@ class BaseMessagePromptTemplate(Serializable, ABC):
**kwargs: Keyword arguments to use for formatting.
Returns:
List of `BaseMessage` objects.
List of BaseMessages.
"""
return self.format_messages(**kwargs)
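A minimal sketch (not part of the diff) of a custom subclass implementing `format_messages`; the class name and behavior are hypothetical, and `BaseMessagePromptTemplate` is assumed importable from `langchain_core.prompts.chat` as in current releases.

```python
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.prompts.chat import BaseMessagePromptTemplate


class GreetingPromptTemplate(BaseMessagePromptTemplate):
    """Hypothetical template rendering a single greeting message."""

    @property
    def input_variables(self) -> list[str]:
        return ["name"]

    def format_messages(self, **kwargs) -> list[BaseMessage]:
        return [HumanMessage(content=f"Hello, {kwargs['name']}!")]


GreetingPromptTemplate().format_messages(name="Ada")  # [HumanMessage("Hello, Ada!")]
```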

View File

@@ -4,8 +4,9 @@ from __future__ import annotations
import warnings
from abc import ABC
from collections.abc import Callable, Sequence
from string import Formatter
from typing import TYPE_CHECKING, Any, Literal
from typing import Any, Literal
from pydantic import BaseModel, create_model
@@ -15,9 +16,6 @@ from langchain_core.utils import get_colored_text, mustache
from langchain_core.utils.formatting import formatter
from langchain_core.utils.interactive_env import is_interactive_env
if TYPE_CHECKING:
from collections.abc import Callable, Sequence
try:
from jinja2 import Environment, meta
from jinja2.sandbox import SandboxedEnvironment

View File

@@ -104,23 +104,19 @@ class StructuredPrompt(ChatPromptTemplate):
)
```
Args:
messages: Sequence of message representations.
messages: sequence of message representations.
A message can be represented using the following formats:
1. `BaseMessagePromptTemplate`
2. `BaseMessage`
3. 2-tuple of `(message type, template)`; e.g.,
`("human", "{user_input}")`
4. 2-tuple of `(message class, template)`
5. A string which is shorthand for `("human", template)`; e.g.,
`"{user_input}"`
schema: A dictionary representation of function call, or a Pydantic model.
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (5) a string which is
shorthand for ("human", template); e.g., "{user_input}"
schema: a dictionary representation of function call, or a Pydantic model.
**kwargs: Any additional kwargs to pass through to
`ChatModel.with_structured_output(schema, **kwargs)`.
Returns:
A structured prompt template
a structured prompt template
"""
return cls(messages, schema, **kwargs)
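A hedged sketch (not from the diff): building a `StructuredPrompt` from messages plus a Pydantic schema. The `from_messages_and_schema` classmethod name and import path are assumed from current releases, and `model` is a hypothetical chat model supporting `with_structured_output`.

```python
from pydantic import BaseModel

from langchain_core.prompts.structured import StructuredPrompt


class Joke(BaseModel):
    setup: str
    punchline: str


prompt = StructuredPrompt.from_messages_and_schema(
    [("human", "Tell me a joke about {topic}")],
    schema=Joke,
)
# chain = prompt | model  # `model` would need to support with_structured_output
```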

View File

@@ -105,9 +105,7 @@ class InMemoryRateLimiter(BaseRateLimiter):
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(
model_name="claude-sonnet-4-5-20250929", rate_limiter=rate_limiter
)
model = ChatAnthropic(model_name="claude-sonnet-4-5", rate_limiter=rate_limiter)
for _ in range(5):
tic = time.time()

View File

@@ -50,65 +50,65 @@ class LangSmithRetrieverParams(TypedDict, total=False):
class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
"""Abstract base class for a document retrieval system.
"""Abstract base class for a Document retrieval system.
A retrieval system is defined as something that can take string queries and return
the most 'relevant' documents from some source.
the most 'relevant' Documents from some source.
Usage:
A retriever follows the standard `Runnable` interface, and should be used via the
standard `Runnable` methods of `invoke`, `ainvoke`, `batch`, `abatch`.
A retriever follows the standard Runnable interface, and should be used
via the standard Runnable methods of `invoke`, `ainvoke`, `batch`, `abatch`.
Implementation:
When implementing a custom retriever, the class should implement the
`_get_relevant_documents` method to define the logic for retrieving documents.
When implementing a custom retriever, the class should implement
the `_get_relevant_documents` method to define the logic for retrieving documents.
Optionally, an async native implementation can be provided by overriding the
`_aget_relevant_documents` method.
!!! example "Retriever that returns the first 5 documents from a list of documents"
Example: A retriever that returns the first 5 documents from a list of documents
```python
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
```python
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
class SimpleRetriever(BaseRetriever):
docs: list[Document]
k: int = 5
class SimpleRetriever(BaseRetriever):
docs: list[Document]
k: int = 5
def _get_relevant_documents(self, query: str) -> list[Document]:
\"\"\"Return the first k documents from the list of documents\"\"\"
return self.docs[:self.k]
def _get_relevant_documents(self, query: str) -> list[Document]:
\"\"\"Return the first k documents from the list of documents\"\"\"
return self.docs[:self.k]
async def _aget_relevant_documents(self, query: str) -> list[Document]:
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
```
async def _aget_relevant_documents(self, query: str) -> list[Document]:
\"\"\"(Optional) async native implementation.\"\"\"
return self.docs[:self.k]
```
!!! example "Simple retriever based on a scikit-learn vectorizer"
Example: A simple retriever based on a scikit-learn vectorizer
```python
from sklearn.metrics.pairwise import cosine_similarity
```python
from sklearn.metrics.pairwise import cosine_similarity
class TFIDFRetriever(BaseRetriever, BaseModel):
vectorizer: Any
docs: list[Document]
tfidf_array: Any
k: int = 4
class TFIDFRetriever(BaseRetriever, BaseModel):
vectorizer: Any
docs: list[Document]
tfidf_array: Any
k: int = 4
class Config:
arbitrary_types_allowed = True
class Config:
arbitrary_types_allowed = True
def _get_relevant_documents(self, query: str) -> list[Document]:
# Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
query_vec = self.vectorizer.transform([query])
# Op -- (n_docs,1) -- Cosine Sim with each doc
results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]
```
def _get_relevant_documents(self, query: str) -> list[Document]:
# Ip -- (n_docs,x), Op -- (n_docs,n_Feats)
query_vec = self.vectorizer.transform([query])
# Op -- (n_docs,1) -- Cosine Sim with each doc
results = cosine_similarity(self.tfidf_array, query_vec).reshape((-1,))
return [self.docs[i] for i in results.argsort()[-self.k :][::-1]]
```
"""
model_config = ConfigDict(
@@ -119,19 +119,15 @@ class BaseRetriever(RunnableSerializable[RetrieverInput, RetrieverOutput], ABC):
_expects_other_args: bool = False
tags: list[str] | None = None
"""Optional list of tags associated with the retriever.
These tags will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""
metadata: dict[str, Any] | None = None
"""Optional metadata associated with the retriever.
This metadata will be associated with each call to this retriever,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a retriever with its
use case.
"""

View File

@@ -118,8 +118,6 @@ if TYPE_CHECKING:
Other = TypeVar("Other")
_RUNNABLE_GENERIC_NUM_ARGS = 2 # Input and Output
class Runnable(ABC, Generic[Input, Output]):
"""A unit of work that can be invoked, batched, streamed, transformed and composed.
@@ -311,10 +309,7 @@ class Runnable(ABC, Generic[Input, Output]):
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if (
"args" in metadata
and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
):
if "args" in metadata and len(metadata["args"]) == 2:
return metadata["args"][0]
# If we didn't find a Pydantic model in the parent classes,
@@ -322,7 +317,7 @@ class Runnable(ABC, Generic[Input, Output]):
# Runnables that are not pydantic models.
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
if type_args and len(type_args) == 2:
return type_args[0]
msg = (
@@ -345,15 +340,12 @@ class Runnable(ABC, Generic[Input, Output]):
for base in self.__class__.mro():
if hasattr(base, "__pydantic_generic_metadata__"):
metadata = base.__pydantic_generic_metadata__
if (
"args" in metadata
and len(metadata["args"]) == _RUNNABLE_GENERIC_NUM_ARGS
):
if "args" in metadata and len(metadata["args"]) == 2:
return metadata["args"][1]
for cls in self.__class__.__orig_bases__: # type: ignore[attr-defined]
type_args = get_args(cls)
if type_args and len(type_args) == _RUNNABLE_GENERIC_NUM_ARGS:
if type_args and len(type_args) == 2:
return type_args[1]
msg = (
@@ -432,7 +424,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_input_jsonschema())
```
!!! version-added "Added in `langchain-core` 0.3.0"
!!! version-added "Added in version 0.3.0"
"""
return self.get_input_schema(config).model_json_schema()
@@ -510,7 +502,7 @@ class Runnable(ABC, Generic[Input, Output]):
print(runnable.get_output_jsonschema())
```
!!! version-added "Added in `langchain-core` 0.3.0"
!!! version-added "Added in version 0.3.0"
"""
return self.get_output_schema(config).model_json_schema()
@@ -574,7 +566,7 @@ class Runnable(ABC, Generic[Input, Output]):
Returns:
A JSON schema that represents the config of the `Runnable`.
!!! version-added "Added in `langchain-core` 0.3.0"
!!! version-added "Added in version 0.3.0"
"""
return self.config_schema(include=include).model_json_schema()
@@ -774,7 +766,7 @@ class Runnable(ABC, Generic[Input, Output]):
"""Assigns new fields to the `dict` output of this `Runnable`.
```python
from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_community.llms.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable
@@ -826,12 +818,10 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
Returns:
The output of the `Runnable`.
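A brief sketch (not from the diff) of the config keys mentioned above:

```python
from langchain_core.runnables import RunnableLambda

adder = RunnableLambda(lambda x: x + 1)

# tags/metadata flow to callbacks and tracing; max_concurrency caps parallel work in batch.
adder.invoke(1, config={"tags": ["demo"], "metadata": {"run": "example"}})
adder.batch([1, 2, 3], config={"max_concurrency": 2})
```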
@@ -848,12 +838,10 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
input: The input to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
Returns:
The output of the `Runnable`.
@@ -880,9 +868,8 @@ class Runnable(ABC, Generic[Input, Output]):
config: A config to use when invoking the `Runnable`. The config supports
standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work
to do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
to do in parallel, and other keys. Please refer to the
`RunnableConfig` for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -945,12 +932,10 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -1013,12 +998,10 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -1078,12 +1061,10 @@ class Runnable(ABC, Generic[Input, Output]):
Args:
inputs: A list of inputs to the `Runnable`.
config: A config to use when invoking the `Runnable`.
The config supports standard keys like `'tags'`, `'metadata'` for
tracing purposes, `'max_concurrency'` for controlling how much work to
do in parallel, and other keys.
Please refer to `RunnableConfig` for more details.
do in parallel, and other keys. Please refer to the `RunnableConfig`
for more details.
return_exceptions: Whether to return exceptions instead of raising them.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
@@ -1761,52 +1742,46 @@ class Runnable(ABC, Generic[Input, Output]):
import time
import asyncio
def format_t(timestamp: float) -> str:
return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
async def test_runnable(time_to_sleep: int):
print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
await asyncio.sleep(time_to_sleep)
print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")
async def fn_start(run_obj: Runnable):
print(f"on start callback starts at {format_t(time.time())}")
await asyncio.sleep(3)
print(f"on start callback ends at {format_t(time.time())}")
async def fn_end(run_obj: Runnable):
print(f"on end callback starts at {format_t(time.time())}")
await asyncio.sleep(2)
print(f"on end callback ends at {format_t(time.time())}")
runnable = RunnableLambda(test_runnable).with_alisteners(
on_start=fn_start, on_end=fn_end
on_start=fn_start,
on_end=fn_end
)
async def concurrent_runs():
await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))
asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
Result:
on start callback starts at 2025-03-01T07:05:22.875378+00:00
on start callback starts at 2025-03-01T07:05:22.875495+00:00
on start callback ends at 2025-03-01T07:05:25.878862+00:00
on start callback ends at 2025-03-01T07:05:25.878947+00:00
Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
on end callback starts at 2025-03-01T07:05:27.882360+00:00
Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
on end callback starts at 2025-03-01T07:05:28.882428+00:00
on end callback ends at 2025-03-01T07:05:29.883893+00:00
on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
"""
return RunnableBinding(
@@ -1868,7 +1843,7 @@ class Runnable(ABC, Generic[Input, Output]):
`exp_base`, and `jitter` (all `float` values).
Returns:
A new `Runnable` that retries the original `Runnable` on exceptions.
A new Runnable that retries the original Runnable on exceptions.
Example:
```python
@@ -1952,9 +1927,7 @@ class Runnable(ABC, Generic[Input, Output]):
exceptions_to_handle: A tuple of exception types to handle.
exception_key: If `string` is specified then handled exceptions will be
passed to fallbacks as part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a
dictionary as input.
@@ -1990,9 +1963,7 @@ class Runnable(ABC, Generic[Input, Output]):
exceptions_to_handle: A tuple of exception types to handle.
exception_key: If `string` is specified then handled exceptions will be
passed to fallbacks as part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a
dictionary as input.
@@ -2458,14 +2429,10 @@ class Runnable(ABC, Generic[Input, Output]):
`as_tool` will instantiate a `BaseTool` with a name, description, and
`args_schema` from a `Runnable`. Where possible, schemas are inferred
from `runnable.get_input_schema`.
Alternatively (e.g., if the `Runnable` takes a dict as input and the specific
`dict` keys are not typed), the schema can be specified directly with
`args_schema`.
You can also pass `arg_types` to just specify the required arguments and their
types.
from `runnable.get_input_schema`. Alternatively (e.g., if the
`Runnable` takes a dict as input and the specific dict keys are not typed),
the schema can be specified directly with `args_schema`. You can also
pass `arg_types` to just specify the required arguments and their types.
Args:
args_schema: The schema for the tool.
@@ -2534,7 +2501,7 @@ class Runnable(ABC, Generic[Input, Output]):
as_tool.invoke({"a": 3, "b": [1, 2]})
```
`str` input:
String input:
```python
from langchain_core.runnables import RunnableLambda
@@ -2670,7 +2637,7 @@ class RunnableSerializable(Serializable, Runnable[Input, Output]):
from langchain_openai import ChatOpenAI
model = ChatAnthropic(
model_name="claude-sonnet-4-5-20250929"
model_name="claude-3-7-sonnet-20250219"
).configurable_alternatives(
ConfigurableField(id="llm"),
default_key="anthropic",
@@ -2783,9 +2750,6 @@ def _seq_output_schema(
return last.get_output_schema(config)
_RUNNABLE_SEQUENCE_MIN_STEPS = 2
class RunnableSequence(RunnableSerializable[Input, Output]):
"""Sequence of `Runnable` objects, where the output of one is the input of the next.
@@ -2895,7 +2859,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
name: The name of the `Runnable`.
first: The first `Runnable` in the sequence.
middle: The middle `Runnable` objects in the sequence.
last: The last `Runnable` in the sequence.
last: The last Runnable in the sequence.
Raises:
ValueError: If the sequence has less than 2 steps.
@@ -2908,11 +2872,8 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
steps_flat.extend(step.steps)
else:
steps_flat.append(coerce_to_runnable(step))
if len(steps_flat) < _RUNNABLE_SEQUENCE_MIN_STEPS:
msg = (
f"RunnableSequence must have at least {_RUNNABLE_SEQUENCE_MIN_STEPS} "
f"steps, got {len(steps_flat)}"
)
if len(steps_flat) < 2:
msg = f"RunnableSequence must have at least 2 steps, got {len(steps_flat)}"
raise ValueError(msg)
super().__init__(
first=steps_flat[0],
@@ -2943,7 +2904,7 @@ class RunnableSequence(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
model_config = ConfigDict(
@@ -3649,7 +3610,7 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -3707,12 +3668,6 @@ class RunnableParallel(RunnableSerializable[Input, dict[str, Any]]):
== "object"
for s in self.steps__.values()
):
for step in self.steps__.values():
fields = step.get_input_schema(config).model_fields
root_field = fields.get("root")
if root_field is not None and root_field.annotation != Any:
return super().get_input_schema(config)
# This is correct, but pydantic typings/mypy don't think so.
return create_model_v2(
self.get_name("Input"),
@@ -4522,7 +4477,7 @@ class RunnableLambda(Runnable[Input, Output]):
# on itemgetter objects, so we have to parse the repr
items = str(func).replace("operator.itemgetter(", "")[:-1].split(", ")
if all(
item[0] == "'" and item[-1] == "'" and item != "''" for item in items
item[0] == "'" and item[-1] == "'" and len(item) > 2 for item in items
):
fields = {item[1:-1]: (Any, ...) for item in items}
# It's a dict, lol
@@ -5184,7 +5139,7 @@ class RunnableEachBase(RunnableSerializable[list[Input], list[Output]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -5367,7 +5322,7 @@ class RunnableEach(RunnableEachBase[Input, Output]):
class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[no-redef]
"""`Runnable` that delegates calls to another `Runnable` with a set of `**kwargs`.
"""`Runnable` that delegates calls to another `Runnable` with a set of kwargs.
Use only if creating a new `RunnableBinding` subclass with different `__init__`
args.
@@ -5507,7 +5462,7 @@ class RunnableBindingBase(RunnableSerializable[Input, Output]): # type: ignore[
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -5797,7 +5752,7 @@ class RunnableBinding(RunnableBindingBase[Input, Output]): # type: ignore[no-re
```python
# Create a Runnable binding that invokes the chat model with the
# additional kwarg `stop=['-']` when running it.
from langchain_openai import ChatOpenAI
from langchain_community.chat_models import ChatOpenAI
model = ChatOpenAI()
model.invoke('Say "Parrot-MAGIC"', stop=["-"]) # Should return `Parrot`
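# Not part of the original example: a hedged sketch of the same effect via
# `bind`, which attaches the kwarg to every subsequent call.
bound = model.bind(stop=["-"])
bound.invoke('Say "Parrot-MAGIC"')  # also returns `Parrot`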

View File

@@ -36,13 +36,11 @@ from langchain_core.runnables.utils import (
get_unique_config_specs,
)
_MIN_BRANCHES = 2
class RunnableBranch(RunnableSerializable[Input, Output]):
"""`Runnable` that selects which branch to run based on a condition.
"""Runnable that selects which branch to run based on a condition.
The `Runnable` is initialized with a list of `(condition, Runnable)` pairs and
The Runnable is initialized with a list of `(condition, Runnable)` pairs and
a default branch.
When operating on an input, the first condition that evaluates to True is
@@ -88,12 +86,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
A default `Runnable` to run if no condition is met.
Raises:
ValueError: If the number of branches is less than `2`.
ValueError: If the number of branches is less than 2.
TypeError: If the default branch is not `Runnable`, `Callable` or `Mapping`.
TypeError: If a branch is not a `tuple` or `list`.
ValueError: If a branch is not of length `2`.
TypeError: If a branch is not a tuple or list.
ValueError: If a branch is not of length 2.
"""
if len(branches) < _MIN_BRANCHES:
if len(branches) < 2:
msg = "RunnableBranch requires at least two branches"
raise ValueError(msg)
@@ -120,7 +118,7 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
)
raise TypeError(msg)
if len(branch) != _MIN_BRANCHES:
if len(branch) != 2:
msg = (
f"RunnableBranch branches must be "
f"tuples or lists of length 2, not {len(branch)}"
@@ -142,7 +140,7 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -189,12 +187,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
def invoke(
self, input: Input, config: RunnableConfig | None = None, **kwargs: Any
) -> Output:
"""First evaluates the condition, then delegate to `True` or `False` branch.
"""First evaluates the condition, then delegate to true or false branch.
Args:
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
Returns:
The output of the branch that was run.
@@ -299,12 +297,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> Iterator[Output]:
"""First evaluates the condition, then delegate to `True` or `False` branch.
"""First evaluates the condition, then delegate to true or false branch.
Args:
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
Yields:
The output of the branch that was run.
@@ -383,12 +381,12 @@ class RunnableBranch(RunnableSerializable[Input, Output]):
config: RunnableConfig | None = None,
**kwargs: Any | None,
) -> AsyncIterator[Output]:
"""First evaluates the condition, then delegate to `True` or `False` branch.
"""First evaluates the condition, then delegate to true or false branch.
Args:
input: The input to the `Runnable`.
config: The configuration for the `Runnable`.
**kwargs: Additional keyword arguments to pass to the `Runnable`.
input: The input to the Runnable.
config: The configuration for the Runnable.
**kwargs: Additional keyword arguments to pass to the Runnable.
Yields:
The output of the branch that was run.
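A small sketch (not part of the diff) of the `(condition, Runnable)` pairs plus default branch described above:

```python
from langchain_core.runnables import RunnableBranch, RunnableLambda

branch = RunnableBranch(
    (lambda x: isinstance(x, str), RunnableLambda(lambda x: x.upper())),
    (lambda x: isinstance(x, int), RunnableLambda(lambda x: x + 1)),
    RunnableLambda(lambda x: x),  # default branch when no condition matches
)
branch.invoke("hello")  # 'HELLO'
branch.invoke(41)       # 42
```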

View File

@@ -1,4 +1,4 @@
"""`Runnable` objects that can be dynamically configured."""
"""Runnables that can be dynamically configured."""
from __future__ import annotations
@@ -47,14 +47,14 @@ if TYPE_CHECKING:
class DynamicRunnable(RunnableSerializable[Input, Output]):
"""Serializable `Runnable` that can be dynamically configured.
"""Serializable Runnable that can be dynamically configured.
A `DynamicRunnable` should be initiated using the `configurable_fields` or
`configurable_alternatives` method of a `Runnable`.
A DynamicRunnable should be initiated using the `configurable_fields` or
`configurable_alternatives` method of a Runnable.
"""
default: RunnableSerializable[Input, Output]
"""The default `Runnable` to use."""
"""The default Runnable to use."""
config: RunnableConfig | None = None
"""The configuration to use."""
@@ -66,7 +66,7 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -120,13 +120,13 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
def prepare(
self, config: RunnableConfig | None = None
) -> tuple[Runnable[Input, Output], RunnableConfig]:
"""Prepare the `Runnable` for invocation.
"""Prepare the Runnable for invocation.
Args:
config: The configuration to use.
Returns:
The prepared `Runnable` and configuration.
The prepared Runnable and configuration.
"""
runnable: Runnable[Input, Output] = self
while isinstance(runnable, DynamicRunnable):
@@ -316,12 +316,12 @@ class DynamicRunnable(RunnableSerializable[Input, Output]):
class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
"""`Runnable` that can be dynamically configured.
"""Runnable that can be dynamically configured.
A `RunnableConfigurableFields` should be initiated using the
`configurable_fields` method of a `Runnable`.
A RunnableConfigurableFields should be initiated using the
`configurable_fields` method of a Runnable.
Here is an example of using a `RunnableConfigurableFields` with LLMs:
Here is an example of using a RunnableConfigurableFields with LLMs:
```python
from langchain_core.prompts import PromptTemplate
@@ -348,7 +348,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
chain.invoke({"x": 0}, config={"configurable": {"temperature": 0.9}})
```
Here is an example of using a `RunnableConfigurableFields` with `HubRunnables`:
Here is an example of using a RunnableConfigurableFields with HubRunnables:
```python
from langchain_core.prompts import PromptTemplate
@@ -380,7 +380,7 @@ class RunnableConfigurableFields(DynamicRunnable[Input, Output]):
@property
def config_specs(self) -> list[ConfigurableFieldSpec]:
"""Get the configuration specs for the `RunnableConfigurableFields`.
"""Get the configuration specs for the RunnableConfigurableFields.
Returns:
The configuration specs.
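A hedged sketch (not from the diff) of the `configurable_fields` pattern the surrounding docstring describes; the field id and model are illustrative:

```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="temperature",
        name="LLM Temperature",
        description="Sampling temperature used by the model",
    )
)
model.invoke("pick a random number", config={"configurable": {"temperature": 0.9}})
```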
@@ -473,10 +473,10 @@ _enums_for_spec_lock = threading.Lock()
class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""`Runnable` that can be dynamically configured.
"""Runnable that can be dynamically configured.
A `RunnableConfigurableAlternatives` should be initiated using the
`configurable_alternatives` method of a `Runnable` or can be
`configurable_alternatives` method of a Runnable or can be
initiated directly as well.
Here is an example of using a `RunnableConfigurableAlternatives` that uses
@@ -531,7 +531,7 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
"""
which: ConfigurableField
"""The `ConfigurableField` to use to choose between alternatives."""
"""The ConfigurableField to use to choose between alternatives."""
alternatives: dict[
str,
@@ -544,9 +544,8 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
prefix_keys: bool
"""Whether to prefix configurable fields of each alternative with a namespace
of the form <which.id>==<alternative_key>, e.g. a key named "temperature" used by
the alternative named "gpt3" becomes "model==gpt3/temperature".
"""
of the form <which.id>==<alternative_key>, e.g. a key named "temperature" used by
the alternative named "gpt3" becomes "model==gpt3/temperature"."""
@property
@override
@@ -639,24 +638,24 @@ class RunnableConfigurableAlternatives(DynamicRunnable[Input, Output]):
def _strremoveprefix(s: str, prefix: str) -> str:
"""`str.removeprefix()` is only available in Python 3.9+."""
"""str.removeprefix() is only available in Python 3.9+."""
return s.replace(prefix, "", 1) if s.startswith(prefix) else s
def prefix_config_spec(
spec: ConfigurableFieldSpec, prefix: str
) -> ConfigurableFieldSpec:
"""Prefix the id of a `ConfigurableFieldSpec`.
"""Prefix the id of a ConfigurableFieldSpec.
This is useful when a `RunnableConfigurableAlternatives` is used as a
`ConfigurableField` of another `RunnableConfigurableAlternatives`.
This is useful when a RunnableConfigurableAlternatives is used as a
ConfigurableField of another RunnableConfigurableAlternatives.
Args:
spec: The `ConfigurableFieldSpec` to prefix.
spec: The ConfigurableFieldSpec to prefix.
prefix: The prefix to add.
Returns:
The prefixed `ConfigurableFieldSpec`.
The prefixed ConfigurableFieldSpec.
"""
return (
ConfigurableFieldSpec(
@@ -678,15 +677,15 @@ def make_options_spec(
) -> ConfigurableFieldSpec:
"""Make options spec.
Make a `ConfigurableFieldSpec` for a `ConfigurableFieldSingleOption` or
`ConfigurableFieldMultiOption`.
Make a ConfigurableFieldSpec for a ConfigurableFieldSingleOption or
ConfigurableFieldMultiOption.
Args:
spec: The `ConfigurableFieldSingleOption` or `ConfigurableFieldMultiOption`.
spec: The ConfigurableFieldSingleOption or ConfigurableFieldMultiOption.
description: The description to use if the spec does not have one.
Returns:
The `ConfigurableFieldSpec`.
The ConfigurableFieldSpec.
"""
with _enums_for_spec_lock:
if enum := _enums_for_spec.get(spec):

View File

@@ -35,20 +35,20 @@ if TYPE_CHECKING:
class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""`Runnable` that can fallback to other `Runnable`s if it fails.
"""Runnable that can fallback to other Runnables if it fails.
External APIs (e.g., APIs for a language model) may at times experience
degraded performance or even downtime.
In these cases, it can be useful to have a fallback `Runnable` that can be
used in place of the original `Runnable` (e.g., fallback to another LLM provider).
In these cases, it can be useful to have a fallback Runnable that can be
used in place of the original Runnable (e.g., fallback to another LLM provider).
Fallbacks can be defined at the level of a single `Runnable`, or at the level
of a chain of `Runnable`s. Fallbacks are tried in order until one succeeds or
Fallbacks can be defined at the level of a single Runnable, or at the level
of a chain of Runnables. Fallbacks are tried in order until one succeeds or
all fail.
While you can instantiate a `RunnableWithFallbacks` directly, it is usually
more convenient to use the `with_fallbacks` method on a `Runnable`.
more convenient to use the `with_fallbacks` method on a Runnable.
Example:
```python
@@ -87,7 +87,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""
runnable: Runnable[Input, Output]
"""The `Runnable` to run first."""
"""The Runnable to run first."""
fallbacks: Sequence[Runnable[Input, Output]]
"""A sequence of fallbacks to try."""
exceptions_to_handle: tuple[type[BaseException], ...] = (Exception,)
@@ -97,12 +97,9 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
"""
exception_key: str | None = None
"""If `string` is specified then handled exceptions will be passed to fallbacks as
part of the input under the specified key.
If `None`, exceptions will not be passed to fallbacks.
If used, the base `Runnable` and its fallbacks must accept a dictionary as input.
"""
part of the input under the specified key. If `None`, exceptions
will not be passed to fallbacks. If used, the base Runnable and its fallbacks
must accept a dictionary as input."""
model_config = ConfigDict(
arbitrary_types_allowed=True,
@@ -140,7 +137,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -155,10 +152,10 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
@property
def runnables(self) -> Iterator[Runnable[Input, Output]]:
"""Iterator over the `Runnable` and its fallbacks.
"""Iterator over the Runnable and its fallbacks.
Yields:
The `Runnable` then its fallbacks.
The Runnable then its fallbacks.
"""
yield self.runnable
yield from self.fallbacks
@@ -592,14 +589,14 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
await run_manager.on_chain_end(output)
def __getattr__(self, name: str) -> Any:
"""Get an attribute from the wrapped `Runnable` and its fallbacks.
"""Get an attribute from the wrapped Runnable and its fallbacks.
Returns:
If the attribute is anything other than a method that outputs a `Runnable`,
returns `getattr(self.runnable, name)`. If the attribute is a method that
does return a new `Runnable` (e.g. `model.bind_tools([...])` outputs a new
`RunnableBinding`) then `self.runnable` and each of the runnables in
`self.fallbacks` is replaced with `getattr(x, name)`.
If the attribute is anything other than a method that outputs a Runnable,
returns getattr(self.runnable, name). If the attribute is a method that
does return a new Runnable (e.g. model.bind_tools([...]) outputs a new
RunnableBinding) then self.runnable and each of the runnables in
self.fallbacks is replaced with getattr(x, name).
Example:
```python
@@ -607,7 +604,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
from langchain_anthropic import ChatAnthropic
gpt_4o = ChatOpenAI(model="gpt-4o")
claude_3_sonnet = ChatAnthropic(model="claude-sonnet-4-5-20250929")
claude_3_sonnet = ChatAnthropic(model="claude-3-7-sonnet-20250219")
model = gpt_4o.with_fallbacks([claude_3_sonnet])
model.model_name
@@ -621,6 +618,7 @@ class RunnableWithFallbacks(RunnableSerializable[Input, Output]):
runnable=RunnableBinding(bound=ChatOpenAI(...), kwargs={"tools": [...]}),
fallbacks=[RunnableBinding(bound=ChatAnthropic(...), kwargs={"tools": [...]})],
)
```
""" # noqa: E501
attr = getattr(self.runnable, name)
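A compact sketch (not part of the diff) of the `with_fallbacks` pattern, reusing the model names from the example above:

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

primary = ChatOpenAI(model="gpt-4o")
backup = ChatAnthropic(model="claude-sonnet-4-5-20250929")
model = primary.with_fallbacks([backup])
model.invoke("hello")  # falls back to `backup` only if the OpenAI call raises
```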

View File

@@ -4,6 +4,7 @@ from __future__ import annotations
import inspect
from collections import defaultdict
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum
from typing import (
@@ -21,7 +22,7 @@ from langchain_core.runnables.base import Runnable, RunnableSerializable
from langchain_core.utils.pydantic import _IgnoreUnserializable, is_basemodel_subclass
if TYPE_CHECKING:
from collections.abc import Callable, Sequence
from collections.abc import Sequence
from pydantic import BaseModel
@@ -641,7 +642,6 @@ class Graph:
retry_delay: float = 1.0,
frontmatter_config: dict[str, Any] | None = None,
base_url: str | None = None,
proxies: dict[str, str] | None = None,
) -> bytes:
"""Draw the graph as a PNG image using Mermaid.
@@ -674,10 +674,11 @@ class Graph:
}
```
base_url: The base URL of the Mermaid server for rendering via API.
proxies: HTTP/HTTPS proxies for requests (e.g. `{"http": "http://127.0.0.1:7890"}`).
Returns:
The PNG image as bytes.
"""
# Import locally to prevent circular import
from langchain_core.runnables.graph_mermaid import ( # noqa: PLC0415
@@ -698,7 +699,6 @@ class Graph:
padding=padding,
max_retries=max_retries,
retry_delay=retry_delay,
proxies=proxies,
base_url=base_url,
)
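A minimal sketch (not from the diff) of rendering a runnable's graph with the method whose signature is shown above; by default this calls the Mermaid.ink API:

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
png_bytes = chain.get_graph().draw_mermaid_png()
with open("graph.png", "wb") as f:
    f.write(png_bytes)
```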

View File

@@ -7,6 +7,7 @@ from __future__ import annotations
import math
import os
from collections.abc import Mapping, Sequence
from typing import TYPE_CHECKING, Any
try:
@@ -19,8 +20,6 @@ except ImportError:
_HAS_GRANDALF = False
if TYPE_CHECKING:
from collections.abc import Mapping, Sequence
from langchain_core.runnables.graph import Edge as LangEdge

View File

@@ -281,7 +281,6 @@ def draw_mermaid_png(
max_retries: int = 1,
retry_delay: float = 1.0,
base_url: str | None = None,
proxies: dict[str, str] | None = None,
) -> bytes:
"""Draws a Mermaid graph as PNG using provided syntax.
@@ -294,7 +293,6 @@ def draw_mermaid_png(
max_retries: Maximum number of retries (MermaidDrawMethod.API).
retry_delay: Delay between retries (MermaidDrawMethod.API).
base_url: Base URL for the Mermaid.ink API.
proxies: HTTP/HTTPS proxies for requests (e.g. `{"http": "http://127.0.0.1:7890"}`).
Returns:
PNG image bytes.
@@ -316,7 +314,6 @@ def draw_mermaid_png(
max_retries=max_retries,
retry_delay=retry_delay,
base_url=base_url,
proxies=proxies,
)
else:
supported_methods = ", ".join([m.value for m in MermaidDrawMethod])
@@ -408,7 +405,6 @@ def _render_mermaid_using_api(
file_type: Literal["jpeg", "png", "webp"] | None = "png",
max_retries: int = 1,
retry_delay: float = 1.0,
proxies: dict[str, str] | None = None,
base_url: str | None = None,
) -> bytes:
"""Renders Mermaid graph using the Mermaid.INK API."""
@@ -449,7 +445,7 @@ def _render_mermaid_using_api(
for attempt in range(max_retries + 1):
try:
response = requests.get(image_url, timeout=10, proxies=proxies)
response = requests.get(image_url, timeout=10)
if response.status_code == requests.codes.ok:
img_bytes = response.content
if output_file_path is not None:
@@ -458,10 +454,7 @@ def _render_mermaid_using_api(
return img_bytes
# If we get a server error (5xx), retry
if (
requests.codes.internal_server_error <= response.status_code
and attempt < max_retries
):
if 500 <= response.status_code < 600 and attempt < max_retries:
# Exponential backoff with jitter
sleep_time = retry_delay * (2**attempt) * (0.5 + 0.5 * random.random()) # noqa: S311 not used for crypto
time.sleep(sleep_time)

View File

@@ -1,6 +1,5 @@
"""Helper class to draw a state graph into a PNG file."""
from itertools import groupby
from typing import Any
from langchain_core.runnables.graph import Graph, LabelsDict
@@ -142,7 +141,6 @@ class PngDrawer:
# Add nodes, conditional edges, and edges to the graph
self.add_nodes(viz, graph)
self.add_edges(viz, graph)
self.add_subgraph(viz, [node.split(":") for node in graph.nodes])
# Update entrypoint and END styles
self.update_styles(viz, graph)
@@ -163,32 +161,6 @@ class PngDrawer:
for node in graph.nodes:
self.add_node(viz, node)
def add_subgraph(
self,
viz: Any,
nodes: list[list[str]],
parent_prefix: list[str] | None = None,
) -> None:
"""Add subgraphs to the graph.
Args:
viz: The graphviz object.
nodes: The nodes to add.
parent_prefix: The prefix of the parent subgraph.
"""
for prefix, grouped in groupby(
[node[:] for node in sorted(nodes)],
key=lambda x: x.pop(0),
):
current_prefix = (parent_prefix or []) + [prefix]
grouped_nodes = list(grouped)
if len(grouped_nodes) > 1:
subgraph = viz.add_subgraph(
[":".join(current_prefix + node) for node in grouped_nodes],
name="cluster_" + ":".join(current_prefix),
)
self.add_subgraph(subgraph, grouped_nodes, current_prefix)
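To illustrate the grouping step above (not part of the diff): namespaced node ids such as `parent:child` are split on `:` and grouped by their first segment, and groups with more than one member become `cluster_`-prefixed subgraphs.

```python
from itertools import groupby

nodes = sorted([["parent", "a"], ["parent", "b"], ["other", "c"]])
for prefix, grouped in groupby(nodes, key=lambda parts: parts[0]):
    members = [":".join(parts) for parts in grouped]
    print(prefix, members)
# other ['other:c']               -> single node, left as-is
# parent ['parent:a', 'parent:b'] -> would become subgraph "cluster_parent"
```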
def add_edges(self, viz: Any, graph: Graph) -> None:
"""Add edges to the graph.

View File

@@ -36,23 +36,23 @@ GetSessionHistoryCallable = Callable[..., BaseChatMessageHistory]
class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
"""`Runnable` that manages chat message history for another `Runnable`.
"""Runnable that manages chat message history for another Runnable.
A chat message history is a sequence of messages that represent a conversation.
`RunnableWithMessageHistory` wraps another `Runnable` and manages the chat message
RunnableWithMessageHistory wraps another Runnable and manages the chat message
history for it; it is responsible for reading and updating the chat message
history.
The formats supported for the inputs and outputs of the wrapped `Runnable`
The formats supported for the inputs and outputs of the wrapped Runnable
are described below.
`RunnableWithMessageHistory` must always be called with a config that contains
RunnableWithMessageHistory must always be called with a config that contains
the appropriate parameters for the chat message history factory.
By default, the `Runnable` is expected to take a single configuration parameter
By default, the Runnable is expected to take a single configuration parameter
called `session_id` which is a string. This parameter is used to create a new
or look up an existing chat message history that matches the given `session_id`.
or look up an existing chat message history that matches the given session_id.
In this case, the invocation would look like this:
@@ -117,12 +117,12 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
```
Example where the wrapped `Runnable` takes a dictionary input:
Example where the wrapped Runnable takes a dictionary input:
```python
from typing import Optional
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_models import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
@@ -166,7 +166,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
print(store) # noqa: T201
```
Example where the session factory takes two keys (`user_id` and `conversation_id`):
Example where the session factory takes two keys (user_id and conversation_id):
```python
store = {}
@@ -223,28 +223,21 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
"""
get_session_history: GetSessionHistoryCallable
"""Function that returns a new `BaseChatMessageHistory`.
"""Function that returns a new BaseChatMessageHistory.
This function should either take a single positional argument `session_id` of type
string and return a corresponding chat message history instance
"""
string and return a corresponding chat message history instance"""
input_messages_key: str | None = None
"""Must be specified if the base `Runnable` accepts a `dict` as input.
The key in the input `dict` that contains the messages.
"""
"""Must be specified if the base runnable accepts a dict as input.
The key in the input dict that contains the messages."""
output_messages_key: str | None = None
"""Must be specified if the base `Runnable` returns a `dict` as output.
The key in the output `dict` that contains the messages.
"""
"""Must be specified if the base Runnable returns a dict as output.
The key in the output dict that contains the messages."""
history_messages_key: str | None = None
"""Must be specified if the base `Runnable` accepts a `dict` as input and expects a
separate key for historical messages.
"""
"""Must be specified if the base runnable accepts a dict as input and expects a
separate key for historical messages."""
history_factory_config: Sequence[ConfigurableFieldSpec]
"""Configure fields that should be passed to the chat history factory.
See `ConfigurableFieldSpec` for more details.
"""
See `ConfigurableFieldSpec` for more details."""
def __init__(
self,
@@ -261,16 +254,15 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
history_factory_config: Sequence[ConfigurableFieldSpec] | None = None,
**kwargs: Any,
) -> None:
"""Initialize `RunnableWithMessageHistory`.
"""Initialize RunnableWithMessageHistory.
Args:
runnable: The base `Runnable` to be wrapped.
runnable: The base Runnable to be wrapped.
Must take as input one of:
1. A list of `BaseMessage`
2. A `dict` with one key for all messages
3. A `dict` with one key for the current input string/message(s) and
2. A dict with one key for all messages
3. A dict with one key for the current input string/message(s) and
a separate key for historical messages. If the input key points
to a string, it will be treated as a `HumanMessage` in history.
@@ -278,15 +270,13 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
1. A string which can be treated as an `AIMessage`
2. A `BaseMessage` or sequence of `BaseMessage`
3. A `dict` with a key for a `BaseMessage` or sequence of
3. A dict with a key for a `BaseMessage` or sequence of
`BaseMessage`
get_session_history: Function that returns a new `BaseChatMessageHistory`.
get_session_history: Function that returns a new BaseChatMessageHistory.
This function should either take a single positional argument
`session_id` of type string and return a corresponding
chat message history instance.
```python
def get_session_history(
session_id: str, *, user_id: str | None = None
@@ -305,17 +295,16 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
) -> BaseChatMessageHistory: ...
```
input_messages_key: Must be specified if the base runnable accepts a `dict`
input_messages_key: Must be specified if the base runnable accepts a dict
as input.
output_messages_key: Must be specified if the base runnable returns a `dict`
output_messages_key: Must be specified if the base runnable returns a dict
as output.
history_messages_key: Must be specified if the base runnable accepts a
`dict` as input and expects a separate key for historical messages.
history_messages_key: Must be specified if the base runnable accepts a dict
as input and expects a separate key for historical messages.
history_factory_config: Configure fields that should be passed to the
chat history factory. See `ConfigurableFieldSpec` for more details.
Specifying these allows you to pass multiple config keys into the
`get_session_history` factory.
Specifying these allows you to pass multiple config keys
into the get_session_history factory.
**kwargs: Arbitrary additional kwargs to pass to parent class
`RunnableBindingBase` init.
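A self-contained sketch (not part of the diff) of the wiring described above; the echo lambda is a toy stand-in for a real `prompt | model` chain:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

store = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


# Toy stand-in for `prompt | model`: echoes the input and counts prior turns.
echo = RunnableLambda(lambda x: f"echo: {x['input']} ({len(x['history'])} prior messages)")

with_history = RunnableWithMessageHistory(
    echo,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "1"}}
with_history.invoke({"input": "hi"}, config=config)
with_history.invoke({"input": "hi again"}, config=config)  # history now holds the first exchange
```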
@@ -375,7 +364,7 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
@property
@override
def config_specs(self) -> list[ConfigurableFieldSpec]:
"""Get the configuration specs for the `RunnableWithMessageHistory`."""
"""Get the configuration specs for the RunnableWithMessageHistory."""
return get_unique_config_specs(
super().config_specs + list(self.history_factory_config)
)
@@ -617,6 +606,6 @@ class RunnableWithMessageHistory(RunnableBindingBase): # type: ignore[no-redef]
def _get_parameter_names(callable_: GetSessionHistoryCallable) -> list[str]:
"""Get the parameter names of the `Callable`."""
"""Get the parameter names of the callable."""
sig = inspect.signature(callable_)
return list(sig.parameters.keys())

View File

@@ -51,10 +51,10 @@ def identity(x: Other) -> Other:
"""Identity function.
Args:
x: Input.
x: input.
Returns:
Output.
output.
"""
return x
@@ -63,10 +63,10 @@ async def aidentity(x: Other) -> Other:
"""Async identity function.
Args:
x: Input.
x: input.
Returns:
Output.
output.
"""
return x
@@ -74,11 +74,11 @@ async def aidentity(x: Other) -> Other:
class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Runnable to passthrough inputs unchanged or with additional keys.
This `Runnable` behaves almost like the identity function, except that it
This Runnable behaves almost like the identity function, except that it
can be configured to add additional keys to the output, if the input is a
dict.
The examples below demonstrate this `Runnable` works using a few simple
The examples below demonstrate this Runnable works using a few simple
chains. The chains rely on simple lambdas to make the examples easy to execute
and experiment with.
@@ -164,7 +164,7 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
input_type: type[Other] | None = None,
**kwargs: Any,
) -> None:
"""Create a `RunnablePassthrough`.
"""Create e RunnablePassthrough.
Args:
func: Function to be called with the input.
@@ -180,7 +180,7 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -213,11 +213,11 @@ class RunnablePassthrough(RunnableSerializable[Other, Other]):
"""Merge the Dict input with the output produced by the mapping argument.
Args:
**kwargs: `Runnable`, `Callable` or a `Mapping` from keys to `Runnable`
objects or `Callable`s.
**kwargs: Runnable, Callable or a Mapping from keys to Runnables
or Callables.
Returns:
A `Runnable` that merges the `dict` input with the output produced by the
A Runnable that merges the Dict input with the output produced by the
mapping argument.
"""
return RunnableAssign(RunnableParallel[dict[str, Any]](kwargs))
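A one-liner sketch (not from the diff) of `assign` merging new keys into the dict input:

```python
from langchain_core.runnables import RunnablePassthrough

chain = RunnablePassthrough.assign(total=lambda d: d["a"] + d["b"])
chain.invoke({"a": 1, "b": 2})  # {'a': 1, 'b': 2, 'total': 3}
```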
@@ -350,7 +350,7 @@ _graph_passthrough: RunnablePassthrough = RunnablePassthrough()
class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"""Runnable that assigns key-value pairs to `dict[str, Any]` inputs.
"""Runnable that assigns key-value pairs to dict[str, Any] inputs.
The `RunnableAssign` class takes input dictionaries and, through a
`RunnableParallel` instance, applies transformations, then combines
@@ -392,7 +392,7 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
mapper: RunnableParallel
def __init__(self, mapper: RunnableParallel[dict[str, Any]], **kwargs: Any) -> None:
"""Create a `RunnableAssign`.
"""Create a RunnableAssign.
Args:
mapper: A `RunnableParallel` instance that will be used to transform the
@@ -403,7 +403,7 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod
@@ -668,19 +668,13 @@ class RunnableAssign(RunnableSerializable[dict[str, Any], dict[str, Any]]):
yield chunk
class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
"""`Runnable` that picks keys from `dict[str, Any]` inputs.
class RunnablePick(RunnableSerializable[dict[str, Any], dict[str, Any]]):
"""Runnable that picks keys from dict[str, Any] inputs.
`RunnablePick` class represents a `Runnable` that selectively picks keys from a
RunnablePick class represents a Runnable that selectively picks keys from a
dictionary input. It allows you to specify one or more keys to extract
from the input dictionary.
!!! note "Return Type Behavior"
The return type depends on the `keys` parameter:
- When `keys` is a `str`: Returns the single value associated with that key
- When `keys` is a `list`: Returns a dictionary containing only the selected
keys
from the input dictionary. It returns a new dictionary containing only
the selected keys.
Example:
```python
@@ -693,22 +687,18 @@ class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
"country": "USA",
}
# Single key - returns the value directly
runnable_single = RunnablePick(keys="name")
result_single = runnable_single.invoke(input_data)
print(result_single) # Output: "John"
runnable = RunnablePick(keys=["name", "age"])
# Multiple keys - returns a dictionary
runnable_multiple = RunnablePick(keys=["name", "age"])
result_multiple = runnable_multiple.invoke(input_data)
print(result_multiple) # Output: {'name': 'John', 'age': 30}
output_data = runnable.invoke(input_data)
print(output_data) # Output: {'name': 'John', 'age': 30}
```
"""
keys: str | list[str]
def __init__(self, keys: str | list[str], **kwargs: Any) -> None:
"""Create a `RunnablePick`.
"""Create a RunnablePick.
Args:
keys: A single key or a list of keys to pick from the input dictionary.
@@ -718,7 +708,7 @@ class RunnablePick(RunnableSerializable[dict[str, Any], Any]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod

View File
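For reference, a small sketch of the runnables touched in this file. It assumes the public imports from `langchain_core.runnables`; the behaviors shown (`assign` merging computed keys into a dict input, `RunnablePick` returning a bare value for a string key and a filtered dict for a list of keys) follow the docstrings in the hunks above, so treat the exact outputs as illustrative.

```python
from langchain_core.runnables import RunnablePassthrough, RunnablePick

# RunnablePassthrough.assign merges computed keys into the dict input
chain = RunnablePassthrough.assign(total=lambda d: d["a"] + d["b"])
print(chain.invoke({"a": 1, "b": 2}))  # {'a': 1, 'b': 2, 'total': 3}

data = {"name": "John", "age": 30, "country": "USA"}

# A single string key picks out the bare value...
print(RunnablePick(keys="name").invoke(data))            # 'John'

# ...while a list of keys returns a dict with only those keys
print(RunnablePick(keys=["name", "age"]).invoke(data))   # {'name': 'John', 'age': 30}
```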

@@ -40,11 +40,11 @@ class RouterInput(TypedDict):
key: str
"""The key to route on."""
input: Any
"""The input to pass to the selected `Runnable`."""
"""The input to pass to the selected Runnable."""
class RouterRunnable(RunnableSerializable[RouterInput, Output]):
"""`Runnable` that routes to a set of `Runnable` based on `Input['key']`.
"""Runnable that routes to a set of Runnables based on Input['key'].
Returns the output of the selected Runnable.
@@ -74,10 +74,10 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
self,
runnables: Mapping[str, Runnable[Any, Output] | Callable[[Any], Output]],
) -> None:
"""Create a `RouterRunnable`.
"""Create a RouterRunnable.
Args:
runnables: A mapping of keys to `Runnable` objects.
runnables: A mapping of keys to Runnables.
"""
super().__init__(
runnables={key: coerce_to_runnable(r) for key, r in runnables.items()}
@@ -90,7 +90,7 @@ class RouterRunnable(RunnableSerializable[RouterInput, Output]):
@classmethod
@override
def is_lc_serializable(cls) -> bool:
"""Return `True` as this class is serializable."""
"""Return True as this class is serializable."""
return True
@classmethod

View File
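A short sketch of the routing behavior documented above: the input is a `RouterInput`-shaped dict whose `key` selects which runnable receives `input`. The imports assume the public `langchain_core.runnables` namespace; the route names are invented for illustration.

```python
from langchain_core.runnables import RouterRunnable, RunnableLambda

router = RouterRunnable(
    runnables={
        "upper": RunnableLambda(lambda s: s.upper()),
        "reverse": RunnableLambda(lambda s: s[::-1]),
    }
)

# The "key" field picks the runnable; "input" is forwarded to it
print(router.invoke({"key": "upper", "input": "hello"}))    # 'HELLO'
print(router.invoke({"key": "reverse", "input": "hello"}))  # 'olleh'
```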

@@ -28,7 +28,7 @@ class EventData(TypedDict, total=False):
This field is only available if the `Runnable` raised an exception.
!!! version-added "Added in `langchain-core` 1.0.0"
!!! version-added "Added in version 1.0.0"
"""
output: Any
"""The output of the `Runnable` that generated the event.

View File

@@ -7,7 +7,8 @@ import asyncio
import inspect
import sys
import textwrap
from collections.abc import Mapping, Sequence
from collections.abc import Callable, Mapping, Sequence
from contextvars import Context
from functools import lru_cache
from inspect import signature
from itertools import groupby
@@ -30,11 +31,9 @@ if TYPE_CHECKING:
AsyncIterable,
AsyncIterator,
Awaitable,
Callable,
Coroutine,
Iterable,
)
from contextvars import Context
from langchain_core.runnables.schema import StreamEvent

View File

@@ -125,11 +125,9 @@ def print_sys_info(*, additional_pkgs: Sequence[str] = ()) -> None:
for dep in sub_dependencies:
try:
dep_version = metadata.version(dep)
except Exception:
dep_version = None
if dep_version is not None:
print(f"> {dep}: {dep_version}")
except Exception:
print(f"> {dep}: Installed. No version info available.")
if __name__ == "__main__":

View File

@@ -386,14 +386,11 @@ class ToolException(Exception): # noqa: N818
ArgsSchema = TypeBaseModel | dict[str, Any]
_EMPTY_SET: frozenset[str] = frozenset()
class BaseTool(RunnableSerializable[str | dict | ToolCall, Any]):
"""Base class for all LangChain tools.
This abstract class defines the interface that all LangChain tools must implement.
Tools are components that can be called by agents to perform specific actions.
"""
@@ -404,7 +401,7 @@ class BaseTool(RunnableSerializable[str | dict | ToolCall, Any]):
**kwargs: Additional keyword arguments passed to the parent class.
Raises:
SchemaAnnotationError: If `args_schema` has incorrect type annotation.
SchemaAnnotationError: If args_schema has incorrect type annotation.
"""
super().__init_subclass__(**kwargs)
@@ -445,15 +442,15 @@ class ChildTool(BaseTool):
Args schema should be either:
- A subclass of `pydantic.BaseModel`.
- A subclass of `pydantic.v1.BaseModel` if accessing v1 namespace in pydantic 2
- A JSON schema dict
- A subclass of pydantic.BaseModel.
- A subclass of pydantic.v1.BaseModel if accessing v1 namespace in pydantic 2
- a JSON schema dict
"""
return_direct: bool = False
"""Whether to return the tool's output directly.
Setting this to `True` means that after the tool is called, the `AgentExecutor` will
stop looping.
Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
@@ -463,37 +460,31 @@ class ChildTool(BaseTool):
tags: list[str] | None = None
"""Optional list of tags associated with the tool.
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a tool with its use
case.
You can use these to eg identify a specific instance of a tool with its use case.
"""
metadata: dict[str, Any] | None = None
"""Optional metadata associated with the tool.
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to, e.g., identify a specific instance of a tool with its use
case.
You can use these to eg identify a specific instance of a tool with its use case.
"""
handle_tool_error: bool | str | Callable[[ToolException], str] | None = False
"""Handle the content of the `ToolException` thrown."""
"""Handle the content of the ToolException thrown."""
handle_validation_error: (
bool | str | Callable[[ValidationError | ValidationErrorV1], str] | None
) = False
"""Handle the content of the `ValidationError` thrown."""
"""Handle the content of the ValidationError thrown."""
response_format: Literal["content", "content_and_artifact"] = "content"
"""The tool response format.
If `'content'` then the output of the tool is interpreted as the contents of a
`ToolMessage`. If `'content_and_artifact'` then the output is expected to be a
two-tuple corresponding to the `(content, artifact)` of a `ToolMessage`.
If `"content"` then the output of the tool is interpreted as the contents of a
`ToolMessage`. If `"content_and_artifact"` then the output is expected to be a
two-tuple corresponding to the (content, artifact) of a `ToolMessage`.
"""
def __init__(self, **kwargs: Any) -> None:
@@ -501,7 +492,7 @@ class ChildTool(BaseTool):
Raises:
TypeError: If `args_schema` is not a subclass of pydantic `BaseModel` or
`dict`.
dict.
"""
if (
"args_schema" in kwargs
@@ -535,7 +526,7 @@ class ChildTool(BaseTool):
"""Get the tool's input arguments schema.
Returns:
`dict` containing the tool's argument properties.
Dictionary containing the tool's argument properties.
"""
if isinstance(self.args_schema, dict):
json_schema = self.args_schema
@@ -571,11 +562,6 @@ class ChildTool(BaseTool):
self.name, full_schema, fields, fn_description=self.description
)
@functools.cached_property
def _injected_args_keys(self) -> frozenset[str]:
# base implementation doesn't manage injected args
return _EMPTY_SET
# --- Runnable ---
@override
@@ -630,9 +616,9 @@ class ChildTool(BaseTool):
Raises:
ValueError: If `string` input is provided with JSON schema `args_schema`.
ValueError: If `InjectedToolCallId` is required but `tool_call_id` is not
ValueError: If InjectedToolCallId is required but `tool_call_id` is not
provided.
TypeError: If `args_schema` is not a Pydantic `BaseModel` or dict.
TypeError: If args_schema is not a Pydantic `BaseModel` or dict.
"""
input_args = self.args_schema
if isinstance(tool_input, str):
@@ -656,7 +642,6 @@ class ChildTool(BaseTool):
if isinstance(input_args, dict):
return tool_input
if issubclass(input_args, BaseModel):
# Check args_schema for InjectedToolCallId
for k, v in get_all_basemodel_annotations(input_args).items():
if _is_injected_arg_type(v, injected_type=InjectedToolCallId):
if tool_call_id is None:
@@ -672,7 +657,6 @@ class ChildTool(BaseTool):
result = input_args.model_validate(tool_input)
result_dict = result.model_dump()
elif issubclass(input_args, BaseModelV1):
# Check args_schema for InjectedToolCallId
for k, v in get_all_basemodel_annotations(input_args).items():
if _is_injected_arg_type(v, injected_type=InjectedToolCallId):
if tool_call_id is None:
@@ -692,25 +676,9 @@ class ChildTool(BaseTool):
f"args_schema must be a Pydantic BaseModel, got {self.args_schema}"
)
raise NotImplementedError(msg)
validated_input = {
k: getattr(result, k) for k in result_dict if k in tool_input
return {
k: getattr(result, k) for k, v in result_dict.items() if k in tool_input
}
for k in self._injected_args_keys:
if k == "tool_call_id":
if tool_call_id is None:
msg = (
"When tool includes an InjectedToolCallId "
"argument, tool must always be invoked with a full "
"model ToolCall of the form: {'args': {...}, "
"'name': '...', 'type': 'tool_call', "
"'tool_call_id': '...'}"
)
raise ValueError(msg)
validated_input[k] = tool_call_id
if k in tool_input:
injected_val = tool_input[k]
validated_input[k] = injected_val
return validated_input
return tool_input
@abstractmethod
@@ -739,35 +707,6 @@ class ChildTool(BaseTool):
kwargs["run_manager"] = kwargs["run_manager"].get_sync()
return await run_in_executor(None, self._run, *args, **kwargs)
def _filter_injected_args(self, tool_input: dict) -> dict:
"""Filter out injected tool arguments from the input dictionary.
Injected arguments are those annotated with `InjectedToolArg` or its
subclasses, or arguments in `FILTERED_ARGS` like `run_manager` and callbacks.
Args:
tool_input: The tool input dictionary to filter.
Returns:
A filtered dictionary with injected arguments removed.
"""
# Start with filtered args from the constant
filtered_keys = set[str](FILTERED_ARGS)
# If we have an args_schema, use it to identify injected args
if self.args_schema is not None:
try:
annotations = get_all_basemodel_annotations(self.args_schema)
for field_name, field_type in annotations.items():
if _is_injected_arg_type(field_type):
filtered_keys.add(field_name)
except Exception: # noqa: S110
# If we can't get annotations, just use FILTERED_ARGS
pass
# Filter out the injected keys from tool_input
return {k: v for k, v in tool_input.items() if k not in filtered_keys}
def _to_args_and_kwargs(
self, tool_input: str | dict, tool_call_id: str | None
) -> tuple[tuple, dict]:
@@ -778,7 +717,7 @@ class ChildTool(BaseTool):
tool_call_id: The ID of the tool call, if available.
Returns:
A tuple of `(positional_args, keyword_args)` for the tool.
A tuple of (positional_args, keyword_args) for the tool.
Raises:
TypeError: If the tool input type is invalid.
@@ -855,29 +794,17 @@ class ChildTool(BaseTool):
self.metadata,
)
# Filter out injected arguments from callback inputs
filtered_tool_input = (
self._filter_injected_args(tool_input)
if isinstance(tool_input, dict)
else None
)
# Use filtered inputs for the input_str parameter as well
tool_input_str = (
tool_input
if isinstance(tool_input, str)
else str(
filtered_tool_input if filtered_tool_input is not None else tool_input
)
)
run_manager = callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input_str,
tool_input if isinstance(tool_input, str) else str(tool_input),
color=start_color,
name=run_name,
run_id=run_id,
inputs=filtered_tool_input,
# Inputs by definition should always be dicts.
# For now, it's unclear whether this assumption is ever violated,
# but if it is we will send a `None` value to the callback instead
# TODO: will need to address issue via a patch.
inputs=tool_input if isinstance(tool_input, dict) else None,
**kwargs,
)
@@ -897,19 +824,16 @@ class ChildTool(BaseTool):
tool_kwargs |= {config_param: config}
response = context.run(self._run, *tool_args, **tool_kwargs)
if self.response_format == "content_and_artifact":
msg = (
"Since response_format='content_and_artifact' "
"a two-tuple of the message content and raw tool output is "
f"expected. Instead, generated response is of type: "
f"{type(response)}."
)
if not isinstance(response, tuple):
if not isinstance(response, tuple) or len(response) != 2:
msg = (
"Since response_format='content_and_artifact' "
"a two-tuple of the message content and raw tool output is "
f"expected. Instead generated response of type: "
f"{type(response)}."
)
error_to_raise = ValueError(msg)
else:
try:
content, artifact = response
except ValueError:
error_to_raise = ValueError(msg)
content, artifact = response
else:
content = response
except (ValidationError, ValidationErrorV1) as e:
@@ -981,30 +905,17 @@ class ChildTool(BaseTool):
metadata,
self.metadata,
)
# Filter out injected arguments from callback inputs
filtered_tool_input = (
self._filter_injected_args(tool_input)
if isinstance(tool_input, dict)
else None
)
# Use filtered inputs for the input_str parameter as well
tool_input_str = (
tool_input
if isinstance(tool_input, str)
else str(
filtered_tool_input if filtered_tool_input is not None else tool_input
)
)
run_manager = await callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input_str,
tool_input if isinstance(tool_input, str) else str(tool_input),
color=start_color,
name=run_name,
run_id=run_id,
inputs=filtered_tool_input,
# Inputs by definition should always be dicts.
# For now, it's unclear whether this assumption is ever violated,
# but if it is we will send a `None` value to the callback instead
# TODO: will need to address issue via a patch.
inputs=tool_input if isinstance(tool_input, dict) else None,
**kwargs,
)
content = None
@@ -1026,19 +937,16 @@ class ChildTool(BaseTool):
coro = self._arun(*tool_args, **tool_kwargs)
response = await coro_with_context(coro, context)
if self.response_format == "content_and_artifact":
msg = (
"Since response_format='content_and_artifact' "
"a two-tuple of the message content and raw tool output is "
f"expected. Instead, generated response is of type: "
f"{type(response)}."
)
if not isinstance(response, tuple):
if not isinstance(response, tuple) or len(response) != 2:
msg = (
"Since response_format='content_and_artifact' "
"a two-tuple of the message content and raw tool output is "
f"expected. Instead generated response of type: "
f"{type(response)}."
)
error_to_raise = ValueError(msg)
else:
try:
content, artifact = response
except ValueError:
error_to_raise = ValueError(msg)
content, artifact = response
else:
content = response
except ValidationError as e:
@@ -1086,7 +994,7 @@ def _handle_validation_error(
Args:
e: The validation error that occurred.
flag: How to handle the error (`bool`, `str`, or `Callable`).
flag: How to handle the error (bool, string, or callable).
Returns:
The error message to return.
@@ -1118,7 +1026,7 @@ def _handle_tool_error(
Args:
e: The tool exception that occurred.
flag: How to handle the error (`bool`, `str`, or `Callable`).
flag: How to handle the error (bool, string, or callable).
Returns:
The error message to return.
@@ -1149,12 +1057,12 @@ def _prep_run_args(
"""Prepare arguments for tool execution.
Args:
value: The input value (`str`, `dict`, or `ToolCall`).
value: The input value (string, dict, or ToolCall).
config: The runnable configuration.
**kwargs: Additional keyword arguments.
Returns:
A tuple of `(tool_input, run_kwargs)`.
A tuple of (tool_input, run_kwargs).
"""
config = ensure_config(config)
if _is_tool_call(value):
@@ -1185,7 +1093,7 @@ def _format_output(
name: str,
status: str,
) -> ToolOutputMixin | Any:
"""Format tool output as a `ToolMessage` if appropriate.
"""Format tool output as a ToolMessage if appropriate.
Args:
content: The main content of the tool output.
@@ -1195,7 +1103,7 @@ def _format_output(
status: The execution status.
Returns:
The formatted output, either as a `ToolMessage` or the original content.
The formatted output, either as a ToolMessage or the original content.
"""
if isinstance(content, ToolOutputMixin) or tool_call_id is None:
return content
@@ -1266,7 +1174,7 @@ def _get_type_hints(func: Callable) -> dict[str, type] | None:
func: The function to get type hints from.
Returns:
`dict` of type hints, or `None` if extraction fails.
Dictionary of type hints, or None if extraction fails.
"""
if isinstance(func, functools.partial):
func = func.func
@@ -1277,13 +1185,13 @@ def _get_type_hints(func: Callable) -> dict[str, type] | None:
def _get_runnable_config_param(func: Callable) -> str | None:
"""Find the parameter name for `RunnableConfig` in a function.
"""Find the parameter name for RunnableConfig in a function.
Args:
func: The function to check.
Returns:
The parameter name for `RunnableConfig`, or `None` if not found.
The parameter name for RunnableConfig, or None if not found.
"""
type_hints = _get_type_hints(func)
if not type_hints:
@@ -1307,11 +1215,9 @@ class _DirectlyInjectedToolArg:
Injected via direct type annotation, rather than annotated metadata.
For example, `ToolRuntime` is a directly injected argument.
For example, ToolRuntime is a directly injected argument.
Note the direct annotation rather than the verbose alternative:
`Annotated[ToolRuntime, InjectedRuntime]`
Annotated[ToolRuntime, InjectedRuntime]
```python
from langchain_core.tools import tool, ToolRuntime
@@ -1354,11 +1260,11 @@ class InjectedToolCallId(InjectedToolArg):
def _is_directly_injected_arg_type(type_: Any) -> bool:
"""Check if a type annotation indicates a directly injected argument.
This is currently only used for `ToolRuntime`.
Checks if either the annotation itself is a subclass of `_DirectlyInjectedToolArg`
or the origin of the annotation is a subclass of `_DirectlyInjectedToolArg`.
This is currently only used for ToolRuntime.
Checks if either the annotation itself is a subclass of _DirectlyInjectedToolArg
or the origin of the annotation is a subclass of _DirectlyInjectedToolArg.
Ex: `ToolRuntime` or `ToolRuntime[ContextT, StateT]` would both return `True`.
Ex: ToolRuntime or ToolRuntime[ContextT, StateT] would both return True.
"""
return (
isinstance(type_, type) and issubclass(type_, _DirectlyInjectedToolArg)
@@ -1400,14 +1306,14 @@ def _is_injected_arg_type(
def get_all_basemodel_annotations(
cls: TypeBaseModel | Any, *, default_to_bound: bool = True
) -> dict[str, type | TypeVar]:
"""Get all annotations from a Pydantic `BaseModel` and its parents.
"""Get all annotations from a Pydantic BaseModel and its parents.
Args:
cls: The Pydantic `BaseModel` class.
default_to_bound: Whether to default to the bound of a `TypeVar` if it exists.
cls: The Pydantic BaseModel class.
default_to_bound: Whether to default to the bound of a TypeVar if it exists.
Returns:
`dict` of field names to their type annotations.
A dictionary of field names to their type annotations.
"""
# cls has no subscript: cls = FooBar
if isinstance(cls, type):
@@ -1473,15 +1379,15 @@ def _replace_type_vars(
*,
default_to_bound: bool = True,
) -> type | TypeVar:
"""Replace `TypeVar`s in a type annotation with concrete types.
"""Replace TypeVars in a type annotation with concrete types.
Args:
type_: The type annotation to process.
generic_map: Mapping of `TypeVar`s to concrete types.
default_to_bound: Whether to use `TypeVar` bounds as defaults.
generic_map: Mapping of TypeVars to concrete types.
default_to_bound: Whether to use TypeVar bounds as defaults.
Returns:
The type with `TypeVar`s replaced.
The type with TypeVars replaced.
"""
generic_map = generic_map or {}
if isinstance(type_, TypeVar):

View File
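To ground the `BaseTool` attributes discussed in the hunks above (`args_schema`, the derived `args` property, invocation with a dict input), here is a minimal hand-rolled tool. The class and field names are invented for illustration; only the `BaseTool`/Pydantic pattern itself is taken from this module.

```python
from pydantic import BaseModel, Field

from langchain_core.tools import BaseTool


class AddInput(BaseModel):
    """Arguments for the toy 'add' tool."""

    a: int = Field(description="First addend")
    b: int = Field(description="Second addend")


class AddTool(BaseTool):
    name: str = "add"
    description: str = "Add two integers."
    args_schema: type[BaseModel] = AddInput

    def _run(self, a: int, b: int) -> int:
        return a + b


adder = AddTool()
print(adder.args)                      # argument properties derived from args_schema
print(adder.invoke({"a": 2, "b": 3}))  # 5
```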

@@ -81,72 +81,57 @@ def tool(
parse_docstring: bool = False,
error_on_invalid_docstring: bool = True,
) -> BaseTool | Callable[[Callable | Runnable], BaseTool]:
"""Convert Python functions and `Runnables` to LangChain tools.
Can be used as a decorator with or without arguments to create tools from functions.
Functions can have any signature - the tool will automatically infer input schemas
unless disabled.
!!! note "Requirements"
- Functions must have type hints for proper schema inference
- When `infer_schema=False`, functions must be `(str) -> str` and have
docstrings
- When using with `Runnable`, a string name must be provided
"""Make tools out of Python functions, can be used with or without arguments.
Args:
name_or_callable: Optional name of the tool or the `Callable` to be
converted to a tool. Overrides the function's name.
Must be provided as a positional argument.
runnable: Optional `Runnable` to convert to a tool.
Must be provided as a positional argument.
name_or_callable: Optional name of the tool or the callable to be
converted to a tool. Must be provided as a positional argument.
runnable: Optional runnable to convert to a tool. Must be provided as a
positional argument.
description: Optional description for the tool.
Precedence for the tool description value is as follows:
- This `description` argument
- `description` argument
(used even if docstring and/or `args_schema` are provided)
- Tool function docstring
(used even if `args_schema` is provided)
- `args_schema` description
(used only if `description` and docstring are not provided)
(used only if `description` / docstring are not provided)
*args: Extra positional arguments. Must be empty.
return_direct: Whether to return directly from the tool rather than continuing
the agent loop.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: Optional argument schema for user to specify.
infer_schema: Whether to infer the schema of the arguments from the function's
signature. This also makes the resultant tool accept a dictionary input to
its `run()` function.
response_format: The tool response format.
If `'content'`, then the output of the tool is interpreted as the contents
of a `ToolMessage`.
If `'content_and_artifact'`, then the output is expected to be a two-tuple
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
response_format: The tool response format. If `"content"` then the output of
the tool is interpreted as the contents of a `ToolMessage`. If
`"content_and_artifact"` then the output is expected to be a two-tuple
corresponding to the `(content, artifact)` of a `ToolMessage`.
parse_docstring: If `infer_schema` and `parse_docstring`, will attempt to
parse_docstring: if `infer_schema` and `parse_docstring`, will attempt to
parse parameter descriptions from Google Style function docstrings.
error_on_invalid_docstring: If `parse_docstring` is provided, configure
error_on_invalid_docstring: if `parse_docstring` is provided, configure
whether to raise `ValueError` on invalid Google Style docstrings.
Raises:
ValueError: If too many positional arguments are provided (e.g. violating the
`*args` constraint).
ValueError: If a `Runnable` is provided without a string name. When using `tool`
with a `Runnable`, a `str` name must be provided as the `name_or_callable`.
ValueError: If too many positional arguments are provided.
ValueError: If a runnable is provided without a string name.
ValueError: If the first argument is not a string or callable with
a `__name__` attribute.
ValueError: If the function does not have a docstring and description
is not provided and `infer_schema` is `False`.
ValueError: If `parse_docstring` is `True` and the function has an invalid
Google-style docstring and `error_on_invalid_docstring` is True.
ValueError: If a `Runnable` is provided that does not have an object schema.
ValueError: If a Runnable is provided that does not have an object schema.
Returns:
The tool.
Requires:
- Function must be of type `(str) -> str`
- Function must have a docstring
Examples:
```python
@tool

View File
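The `response_format` contract described above can be sketched as follows. The function name, arguments, and tool-call id are invented for illustration; the two-tuple return and the resulting `ToolMessage` artifact follow the docstring in this hunk.

```python
from langchain_core.tools import tool


@tool(response_format="content_and_artifact")
def top_rows(n: int) -> tuple[str, list[dict]]:
    """Return a human-readable summary plus the raw rows as an artifact.

    Args:
        n: Number of rows to return.
    """
    rows = [{"i": i} for i in range(n)]
    return f"Returned {len(rows)} rows", rows


# Invoked with a full tool call, the output is a ToolMessage carrying the artifact
msg = top_rows.invoke(
    {"name": "top_rows", "args": {"n": 2}, "id": "call_1", "type": "tool_call"}
)
print(msg.content)   # 'Returned 2 rows'
print(msg.artifact)  # [{'i': 0}, {'i': 1}]
```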

@@ -83,12 +83,11 @@ def create_retriever_tool(
model, so should be descriptive.
document_prompt: The prompt to use for the document.
document_separator: The separator to use between documents.
response_format: The tool response format.
If `"content"` then the output of the tool is interpreted as the contents of
a `ToolMessage`. If `"content_and_artifact"` then the output is expected to
be a two-tuple corresponding to the `(content, artifact)` of a `ToolMessage`
(artifact being a list of documents in this case).
response_format: The tool response format. If `"content"` then the output of
the tool is interpreted as the contents of a `ToolMessage`. If
`"content_and_artifact"` then the output is expected to be a two-tuple
corresponding to the `(content, artifact)` of a `ToolMessage` (artifact
being a list of documents in this case).
Returns:
Tool class to pass to an agent.

View File
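A hedged sketch of wiring `create_retriever_tool` to a retriever. The `ToyRetriever` class and the corpus are invented purely so the example is self-contained; with `response_format="content_and_artifact"` the artifact would be the list of retrieved documents, as the docstring above notes.

```python
from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever
from langchain_core.tools import create_retriever_tool


class ToyRetriever(BaseRetriever):
    """Illustrative retriever that returns documents containing the query."""

    docs: list[Document]

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> list[Document]:
        return [d for d in self.docs if query.lower() in d.page_content.lower()]


retriever = ToyRetriever(docs=[Document(page_content="LangChain ships langchain-core")])
search_tool = create_retriever_tool(
    retriever,
    name="search_docs",
    description="Search the toy corpus.",
    response_format="content_and_artifact",
)
print(search_tool.invoke({"query": "langchain"}))
```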

@@ -2,7 +2,6 @@
from __future__ import annotations
import functools
import textwrap
from collections.abc import Awaitable, Callable
from inspect import signature
@@ -22,12 +21,10 @@ from langchain_core.callbacks import (
)
from langchain_core.runnables import RunnableConfig, run_in_executor
from langchain_core.tools.base import (
_EMPTY_SET,
FILTERED_ARGS,
ArgsSchema,
BaseTool,
_get_runnable_config_param,
_is_injected_arg_type,
create_schema_from_function,
)
from langchain_core.utils.pydantic import is_basemodel_subclass
@@ -154,13 +151,11 @@ class StructuredTool(BaseTool):
return_direct: Whether to return the result directly or as a callback.
args_schema: The schema of the tool's input arguments.
infer_schema: Whether to infer the schema from the function's signature.
response_format: The tool response format.
If `"content"` then the output of the tool is interpreted as the
contents of a `ToolMessage`. If `"content_and_artifact"` then the output
is expected to be a two-tuple corresponding to the `(content, artifact)`
of a `ToolMessage`.
parse_docstring: If `infer_schema` and `parse_docstring`, will attempt
response_format: The tool response format. If `"content"` then the output of
the tool is interpreted as the contents of a `ToolMessage`. If
`"content_and_artifact"` then the output is expected to be a two-tuple
corresponding to the `(content, artifact)` of a `ToolMessage`.
parse_docstring: if `infer_schema` and `parse_docstring`, will attempt
to parse parameter descriptions from Google Style function docstrings.
error_on_invalid_docstring: if `parse_docstring` is provided, configure
whether to raise `ValueError` on invalid Google Style docstrings.
@@ -244,17 +239,6 @@ class StructuredTool(BaseTool):
**kwargs,
)
@functools.cached_property
def _injected_args_keys(self) -> frozenset[str]:
fn = self.func or self.coroutine
if fn is None:
return _EMPTY_SET
return frozenset(
k
for k, v in signature(fn).parameters.items()
if _is_injected_arg_type(v.annotation)
)
def _filter_schema_args(func: Callable) -> list[str]:
filter_args = list(FILTERED_ARGS)

View File
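The `StructuredTool` parameters documented above (`infer_schema`, `parse_docstring`, `response_format`, and so on) can be exercised with a small sketch; the function and its docstring are invented for illustration.

```python
from langchain_core.tools import StructuredTool


def search(query: str, k: int = 3) -> str:
    """Search a toy corpus.

    Args:
        query: Text to look for.
        k: Number of results to return.
    """
    return f"{k} results for {query!r}"


search_tool = StructuredTool.from_function(func=search, parse_docstring=True)
print(search_tool.name)  # 'search'
print(search_tool.args)  # schema inferred from the signature and docstring
print(search_tool.invoke({"query": "langchain", "k": 2}))
```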

@@ -15,6 +15,12 @@ from typing import (
from langchain_core.exceptions import TracerException
from langchain_core.load import dumpd
from langchain_core.outputs import (
ChatGeneration,
ChatGenerationChunk,
GenerationChunk,
LLMResult,
)
from langchain_core.tracers.schemas import Run
if TYPE_CHECKING:
@@ -25,12 +31,6 @@ if TYPE_CHECKING:
from langchain_core.documents import Document
from langchain_core.messages import BaseMessage
from langchain_core.outputs import (
ChatGeneration,
ChatGenerationChunk,
GenerationChunk,
LLMResult,
)
logger = logging.getLogger(__name__)

View File

@@ -8,6 +8,7 @@ import logging
import types
import typing
import uuid
from collections.abc import Callable
from typing import (
TYPE_CHECKING,
Annotated,
@@ -32,8 +33,6 @@ from langchain_core.utils.json_schema import dereference_refs
from langchain_core.utils.pydantic import is_basemodel_subclass
if TYPE_CHECKING:
from collections.abc import Callable
from langchain_core.tools import BaseTool
logger = logging.getLogger(__name__)
@@ -352,7 +351,7 @@ def convert_to_openai_function(
Raises:
ValueError: If function is not in a supported format.
!!! warning "Behavior changed in `langchain-core` 0.3.16"
!!! warning "Behavior changed in 0.3.16"
`description` and `parameters` keys are now optional. Only `name` is
required and guaranteed to be part of the output.
"""
@@ -413,7 +412,7 @@ def convert_to_openai_function(
if strict is not None:
if "strict" in oai_function and oai_function["strict"] != strict:
msg = (
f"Tool/function already has a 'strict' key with value "
f"Tool/function already has a 'strict' key wth value "
f"{oai_function['strict']} which is different from the explicit "
f"`strict` arg received {strict=}."
)
@@ -476,16 +475,16 @@ def convert_to_openai_tool(
A dict version of the passed in tool which is compatible with the
OpenAI tool-calling API.
!!! warning "Behavior changed in `langchain-core` 0.3.16"
!!! warning "Behavior changed in 0.3.16"
`description` and `parameters` keys are now optional. Only `name` is
required and guaranteed to be part of the output.
!!! warning "Behavior changed in `langchain-core` 0.3.44"
!!! warning "Behavior changed in 0.3.44"
Return OpenAI Responses API-style tools unchanged. This includes
any dict with `"type"` in `"file_search"`, `"function"`,
`"computer_use_preview"`, `"web_search_preview"`.
!!! warning "Behavior changed in `langchain-core` 0.3.63"
!!! warning "Behavior changed in 0.3.63"
Added support for OpenAI's image generation built-in tool.
"""
# Import locally to prevent circular import
@@ -654,9 +653,6 @@ def tool_example_to_messages(
return messages
_MIN_DOCSTRING_BLOCKS = 2
def _parse_google_docstring(
docstring: str | None,
args: list[str],
@@ -675,7 +671,7 @@ def _parse_google_docstring(
arg for arg in args if arg not in {"run_manager", "callbacks", "return"}
}
if filtered_annotations and (
len(docstring_blocks) < _MIN_DOCSTRING_BLOCKS
len(docstring_blocks) < 2
or not any(block.startswith("Args:") for block in docstring_blocks[1:])
):
msg = "Found invalid Google-Style docstring."

View File
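A minimal sketch of the conversion utility touched in this file; per the changelog notes above only `name` is guaranteed in the output, so the other printed keys are illustrative.

```python
from langchain_core.utils.function_calling import convert_to_openai_tool


def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: First factor.
        b: Second factor.
    """
    return a * b


oai_tool = convert_to_openai_tool(multiply)
print(oai_tool["type"])                                 # 'function'
print(oai_tool["function"]["name"])                     # 'multiply'
print(oai_tool["function"]["parameters"]["required"])   # ['a', 'b']
```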

@@ -26,9 +26,6 @@ def get_color_mapping(
colors = list(_TEXT_COLOR_MAPPING.keys())
if excluded_colors is not None:
colors = [c for c in colors if c not in excluded_colors]
if not colors:
msg = "No colors available after applying exclusions."
raise ValueError(msg)
return {item: colors[i % len(colors)] for i, item in enumerate(items)}

View File
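The mapping logic in this hunk cycles through the available colors with a modulo index, so if exclusions ever emptied the color list the expression `colors[i % len(colors)]` would raise `ZeroDivisionError`; that appears to be what the removed guard protected against. A plain-Python illustration of the cycling (the palette and item names here are invented):

```python
colors = ["blue", "yellow", "pink", "green", "red"]
items = ["tool_a", "tool_b", "tool_c", "tool_d", "tool_e", "tool_f"]

# Round-robin assignment, wrapping once the palette is exhausted
mapping = {item: colors[i % len(colors)] for i, item in enumerate(items)}
print(mapping["tool_f"])  # 'blue' again, because the palette wrapped around
```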

@@ -4,13 +4,11 @@ from __future__ import annotations
import json
import re
from typing import TYPE_CHECKING, Any
from collections.abc import Callable
from typing import Any
from langchain_core.exceptions import OutputParserException
if TYPE_CHECKING:
from collections.abc import Callable
def _replace_new_line(match: re.Match[str]) -> str:
value = match.group(2)

View File

@@ -5,6 +5,7 @@ from __future__ import annotations
import inspect
import textwrap
import warnings
from collections.abc import Callable
from contextlib import nullcontext
from functools import lru_cache, wraps
from types import GenericAlias
@@ -40,12 +41,10 @@ from pydantic.json_schema import (
)
from pydantic.v1 import BaseModel as BaseModelV1
from pydantic.v1 import create_model as create_model_v1
from pydantic.v1.fields import ModelField
from typing_extensions import deprecated, override
if TYPE_CHECKING:
from collections.abc import Callable
from pydantic.v1.fields import ModelField
from pydantic_core import core_schema
PYDANTIC_VERSION = version.parse(pydantic.__version__)
@@ -66,8 +65,8 @@ def get_pydantic_major_version() -> int:
PYDANTIC_MAJOR_VERSION = PYDANTIC_VERSION.major
PYDANTIC_MINOR_VERSION = PYDANTIC_VERSION.minor
IS_PYDANTIC_V1 = False
IS_PYDANTIC_V2 = True
IS_PYDANTIC_V1 = PYDANTIC_VERSION.major == 1
IS_PYDANTIC_V2 = PYDANTIC_VERSION.major == 2
PydanticBaseModel = BaseModel
TypeBaseModel = type[BaseModel]

View File
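The version flags in this hunk come from parsing `pydantic.__version__` with `packaging`; a tiny sketch of the same check:

```python
import pydantic
from packaging import version

PYDANTIC_VERSION = version.parse(pydantic.__version__)
IS_PYDANTIC_V1 = PYDANTIC_VERSION.major == 1
IS_PYDANTIC_V2 = PYDANTIC_VERSION.major == 2
print(PYDANTIC_VERSION.major, PYDANTIC_VERSION.minor, IS_PYDANTIC_V2)
```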

@@ -11,6 +11,7 @@ import logging
import math
import warnings
from abc import ABC, abstractmethod
from collections.abc import Callable
from itertools import cycle
from typing import (
TYPE_CHECKING,
@@ -28,7 +29,7 @@ from langchain_core.retrievers import BaseRetriever, LangSmithRetrieverParams
from langchain_core.runnables.config import run_in_executor
if TYPE_CHECKING:
from collections.abc import Callable, Collection, Iterable, Iterator, Sequence
from collections.abc import Collection, Iterable, Iterator, Sequence
from langchain_core.callbacks.manager import (
AsyncCallbackManagerForRetrieverRun,
@@ -236,9 +237,8 @@ class VectorStore(ABC):
Args:
documents: Documents to add to the `VectorStore`.
**kwargs: Additional keyword arguments.
If kwargs contains IDs and documents contain ids, the IDs in the kwargs
will receive precedence.
if kwargs contains IDs and documents contain ids,
the IDs in the kwargs will receive precedence.
Returns:
List of IDs of the added texts.
@@ -421,7 +421,7 @@ class VectorStore(ABC):
**kwargs: Arguments to pass to the search method.
Returns:
List of tuples of `(doc, similarity_score)`.
List of Tuples of `(doc, similarity_score)`.
"""
raise NotImplementedError
@@ -435,7 +435,7 @@ class VectorStore(ABC):
**kwargs: Arguments to pass to the search method.
Returns:
List of tuples of `(doc, similarity_score)`.
List of Tuples of `(doc, similarity_score)`.
"""
# This is a temporary workaround to make the similarity search
# asynchronous. The proper solution is to make the similarity search
@@ -465,7 +465,7 @@ class VectorStore(ABC):
to filter the resulting set of retrieved docs
Returns:
List of tuples of `(doc, similarity_score)`
List of Tuples of `(doc, similarity_score)`
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = self.similarity_search_with_score(query, k, **kwargs)
@@ -492,7 +492,7 @@ class VectorStore(ABC):
to filter the resulting set of retrieved docs
Returns:
List of tuples of `(doc, similarity_score)`
List of Tuples of `(doc, similarity_score)`
"""
relevance_score_fn = self._select_relevance_score_fn()
docs_and_scores = await self.asimilarity_search_with_score(query, k, **kwargs)
@@ -516,7 +516,7 @@ class VectorStore(ABC):
to filter the resulting set of retrieved docs
Returns:
List of tuples of `(doc, similarity_score)`.
List of Tuples of `(doc, similarity_score)`.
"""
score_threshold = kwargs.pop("score_threshold", None)
@@ -565,7 +565,7 @@ class VectorStore(ABC):
to filter the resulting set of retrieved docs
Returns:
List of tuples of `(doc, similarity_score)`
List of Tuples of `(doc, similarity_score)`
"""
score_threshold = kwargs.pop("score_threshold", None)
@@ -667,7 +667,7 @@ class VectorStore(ABC):
k: Number of `Document` objects to return.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
lambda_mult: Number between `0` and `1` that determines the degree
of diversity among the results with `0` corresponding
of diversity among the results with 0 corresponding
to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
@@ -694,7 +694,7 @@ class VectorStore(ABC):
k: Number of `Document` objects to return.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
lambda_mult: Number between `0` and `1` that determines the degree
of diversity among the results with `0` corresponding
of diversity among the results with 0 corresponding
to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
@@ -732,7 +732,7 @@ class VectorStore(ABC):
k: Number of `Document` objects to return.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
lambda_mult: Number between `0` and `1` that determines the degree
of diversity among the results with `0` corresponding
of diversity among the results with 0 corresponding
to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.
@@ -759,7 +759,7 @@ class VectorStore(ABC):
k: Number of `Document` objects to return.
fetch_k: Number of `Document` objects to fetch to pass to MMR algorithm.
lambda_mult: Number between `0` and `1` that determines the degree
of diversity among the results with `0` corresponding
of diversity among the results with 0 corresponding
to maximum diversity and `1` to minimum diversity.
**kwargs: Arguments to pass to the search method.

View File
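A self-contained sketch of the search APIs whose docstrings change in this hunk, assuming the in-memory vector store and deterministic fake embeddings shipped with `langchain-core`; scores and chosen documents depend on the embedding, so treat the printed values as illustrative.

```python
from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=32))
store.add_documents(
    [Document(page_content="cats purr"), Document(page_content="dogs bark")]
)

# List of (doc, similarity_score) tuples, as described above
for doc, score in store.similarity_search_with_score("cats", k=2):
    print(round(score, 3), doc.page_content)

# MMR: lambda_mult near 1 favors relevance, near 0 favors diversity
docs = store.max_marginal_relevance_search("cats", k=2, fetch_k=2, lambda_mult=0.5)
print([d.page_content for d in docs])
```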

@@ -4,6 +4,7 @@ from __future__ import annotations
import json
import uuid
from collections.abc import Callable
from pathlib import Path
from typing import (
TYPE_CHECKING,
@@ -19,7 +20,7 @@ from langchain_core.vectorstores.utils import _cosine_similarity as cosine_simil
from langchain_core.vectorstores.utils import maximal_marginal_relevance
if TYPE_CHECKING:
from collections.abc import Callable, Iterator, Sequence
from collections.abc import Iterator, Sequence
from langchain_core.embeddings import Embeddings

View File

@@ -1,3 +1,3 @@
"""langchain-core version information and utilities."""
VERSION = "1.0.6"
VERSION = "1.0.1"

View File

@@ -3,13 +3,8 @@ requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "langchain-core"
description = "Building applications with LLMs through composability"
license = {text = "MIT"}
readme = "README.md"
authors = []
version = "1.0.6"
license = {text = "MIT"}
requires-python = ">=3.10.0,<4.0.0"
dependencies = [
"langsmith>=0.3.45,<1.0.0",
@@ -20,6 +15,10 @@ dependencies = [
"packaging>=23.2.0,<26.0.0",
"pydantic>=2.7.4,<3.0.0",
]
name = "langchain-core"
version = "1.0.1"
description = "Building applications with LLMs through composability"
readme = "README.md"
[project.urls]
Homepage = "https://docs.langchain.com/"
@@ -36,7 +35,6 @@ typing = [
"mypy>=1.18.1,<1.19.0",
"types-pyyaml>=6.0.12.2,<7.0.0.0",
"types-requests>=2.28.11.5,<3.0.0.0",
"langchain-model-profiles",
"langchain-text-splitters",
]
dev = [
@@ -58,7 +56,6 @@ test = [
"blockbuster>=1.5.18,<1.6.0",
"numpy>=1.26.4; python_version<'3.13'",
"numpy>=2.1.0; python_version>='3.13'",
"langchain-model-profiles",
"langchain-tests",
"pytest-benchmark",
"pytest-codspeed",
@@ -66,7 +63,6 @@ test = [
test_integration = []
[tool.uv.sources]
langchain-model-profiles = { path = "../model-profiles" }
langchain-tests = { path = "../standard-tests" }
langchain-text-splitters = { path = "../text-splitters" }
@@ -105,6 +101,7 @@ ignore = [
"ANN401", # No Any types
"BLE", # Blind exceptions
"ERA", # No commented-out code
"PLR2004", # Comparison to magic number
]
unfixable = [
"B028", # People should intentionally tune the stacklevel
@@ -125,7 +122,7 @@ ignore-var-parameters = true # ignore missing documentation for *args and **kwa
"langchain_core/utils/mustache.py" = [ "PLW0603",]
"langchain_core/sys_info.py" = [ "T201",]
"tests/unit_tests/test_tools.py" = [ "ARG",]
"tests/**" = [ "D1", "PLR2004", "S", "SLF",]
"tests/**" = [ "D1", "S", "SLF",]
"scripts/**" = [ "INP", "S",]
[tool.coverage.run]
@@ -133,10 +130,7 @@ omit = [ "tests/*",]
[tool.pytest.ini_options]
addopts = "--snapshot-warn-unused --strict-markers --strict-config --durations=5"
markers = [
"requires: mark tests as requiring a specific library",
"compile: mark placeholder test used to compile integration tests without running them",
]
markers = [ "requires: mark tests as requiring a specific library", "compile: mark placeholder test used to compile integration tests without running them", ]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
filterwarnings = [ "ignore::langchain_core._api.beta_decorator.LangChainBetaWarning",]
asyncio_default_fixture_loop_scope = "function"

View File

@@ -148,65 +148,4 @@ async def test_inline_handlers_share_parent_context_multiple() -> None:
2,
3,
3,
]
async def test_shielded_callback_context_preservation() -> None:
"""Verify that shielded callbacks preserve context variables.
This test specifically addresses the issue where async callbacks decorated
with @shielded do not properly preserve context variables, breaking
instrumentation and other context-dependent functionality.
The issue manifests in callbacks that use the @shielded decorator:
* on_llm_end
* on_llm_error
* on_chain_end
* on_chain_error
* And other shielded callback methods
"""
context_var: contextvars.ContextVar[str] = contextvars.ContextVar("test_context")
class ContextTestHandler(AsyncCallbackHandler):
"""Handler that reads context variables in shielded callbacks."""
def __init__(self) -> None:
self.run_inline = False
self.context_values: list[str] = []
@override
async def on_llm_end(self, response: Any, **kwargs: Any) -> None:
"""This method is decorated with @shielded in the run manager."""
# This should preserve the context variable value
self.context_values.append(context_var.get("not_found"))
@override
async def on_chain_end(self, outputs: Any, **kwargs: Any) -> None:
"""This method is decorated with @shielded in the run manager."""
# This should preserve the context variable value
self.context_values.append(context_var.get("not_found"))
# Set up the test context
context_var.set("test_value")
handler = ContextTestHandler()
manager = AsyncCallbackManager(handlers=[handler])
# Create run managers that have the shielded methods
llm_managers = await manager.on_llm_start({}, ["test prompt"])
llm_run_manager = llm_managers[0]
chain_run_manager = await manager.on_chain_start({}, {"test": "input"})
# Test LLM end callback (which is shielded)
await llm_run_manager.on_llm_end({"response": "test"}) # type: ignore[arg-type]
# Test Chain end callback (which is shielded)
await chain_run_manager.on_chain_end({"output": "test"})
# The context should be preserved in shielded callbacks
# This was the main issue - shielded decorators were not preserving context
assert handler.context_values == ["test_value", "test_value"], (
f"Expected context values ['test_value', 'test_value'], "
f"but got {handler.context_values}. "
f"This indicates the shielded decorator is not preserving context variables."
)
], f"Expected order of states was broken due to context loss. Got {states}"

View File

@@ -33,7 +33,7 @@ def test_hashing() -> None:
# hash should be deterministic
assert hashed_document.id == "fd1dc827-051b-537d-a1fe-1fa043e8b276"
# Verify that hashing with sha1 is deterministic
# Verify that hashing with sha1 is determinstic
another_hashed_document = _get_document_with_hash(document, key_encoder="sha1")
assert another_hashed_document.id == hashed_document.id

View File

@@ -18,7 +18,6 @@ from langchain_core.language_models import (
ParrotFakeChatModel,
)
from langchain_core.language_models._utils import _normalize_messages
from langchain_core.language_models.chat_models import _generate_response_from_error
from langchain_core.language_models.fake_chat_models import (
FakeListChatModelError,
GenericFakeChatModel,
@@ -1220,108 +1219,55 @@ def test_get_ls_params() -> None:
assert ls_params["ls_stop"] == ["stop"]
def test_model_profiles() -> None:
model = GenericFakeChatModel(messages=iter([]))
profile = model.profile
assert profile == {}
@pytest.mark.parametrize("output_version", ["v0", "v1"])
def test_model_provider_on_metadata(output_version: str) -> None:
"""Test we assign model_provider to response metadata."""
messages = [AIMessage("hello")]
chunks = [AIMessageChunk(content="good"), AIMessageChunk(content="bye")]
class MyModel(GenericFakeChatModel):
model: str = "gpt-5"
@property
def _llm_type(self) -> str:
return "openai-chat"
model = MyModel(messages=iter([]))
profile = model.profile
assert profile
class MockResponse:
"""Mock response for testing _generate_response_from_error."""
def __init__(
self,
status_code: int = 400,
headers: dict[str, str] | None = None,
json_data: dict[str, Any] | None = None,
json_raises: type[Exception] | None = None,
text_raises: type[Exception] | None = None,
):
self.status_code = status_code
self.headers = headers or {}
self._json_data = json_data
self._json_raises = json_raises
self._text_raises = text_raises
def json(self) -> dict[str, Any]:
if self._json_raises:
msg = "JSON parsing failed"
raise self._json_raises(msg)
return self._json_data or {}
@property
def text(self) -> str:
if self._text_raises:
msg = "Text access failed"
raise self._text_raises(msg)
return ""
class MockAPIError(Exception):
"""Mock API error with response attribute."""
def __init__(self, message: str, response: MockResponse | None = None):
super().__init__(message)
self.message = message
if response is not None:
self.response = response
def test_generate_response_from_error_with_valid_json() -> None:
"""Test `_generate_response_from_error` with valid JSON response."""
response = MockResponse(
status_code=400,
headers={"content-type": "application/json"},
json_data={"error": {"message": "Bad request", "type": "invalid_request"}},
model = _AnotherFakeChatModel(
responses=iter(messages),
chunks=iter(chunks),
output_version=output_version,
model_provider="provider_foo",
)
error = MockAPIError("API Error", response=response)
generations = _generate_response_from_error(error)
response = model.invoke("hello")
assert response.response_metadata["model_provider"] == "provider_foo"
assert len(generations) == 1
generation = generations[0]
assert isinstance(generation, ChatGeneration)
assert isinstance(generation.message, AIMessage)
assert generation.message.content == ""
response = model.invoke("hello", stream=True)
assert response.response_metadata["model_provider"] == "provider_foo"
metadata = generation.message.response_metadata
assert metadata["body"] == {
"error": {"message": "Bad request", "type": "invalid_request"}
}
assert metadata["headers"] == {"content-type": "application/json"}
assert metadata["status_code"] == 400
model.chunks = iter([AIMessageChunk(content="good"), AIMessageChunk(content="bye")])
full: AIMessageChunk | None = None
for chunk in model.stream("hello"):
full = chunk if full is None else full + chunk
assert full is not None
assert full.response_metadata["model_provider"] == "provider_foo"
def test_generate_response_from_error_handles_streaming_response_failure() -> None:
# Simulates scenario where accessing response.json() or response.text
# raises ResponseNotRead on streaming responses
response = MockResponse(
status_code=400,
headers={"content-type": "application/json"},
json_raises=Exception, # Simulates ResponseNotRead or similar
text_raises=Exception,
@pytest.mark.parametrize("output_version", ["v0", "v1"])
async def test_model_provider_on_metadata_async(output_version: str) -> None:
"""Test we assign model_provider to response metadata."""
messages = [AIMessage("hello")]
chunks = [AIMessageChunk(content="good"), AIMessageChunk(content="bye")]
model = _AnotherFakeChatModel(
responses=iter(messages),
chunks=iter(chunks),
output_version=output_version,
model_provider="provider_foo",
)
error = MockAPIError("API Error", response=response)
# This should NOT raise an exception, but should handle it gracefully
generations = _generate_response_from_error(error)
response = await model.ainvoke("hello")
assert response.response_metadata["model_provider"] == "provider_foo"
assert len(generations) == 1
generation = generations[0]
metadata = generation.message.response_metadata
response = await model.ainvoke("hello", stream=True)
assert response.response_metadata["model_provider"] == "provider_foo"
# When both fail, body should be None instead of raising an exception
assert metadata["body"] is None
assert metadata["headers"] == {"content-type": "application/json"}
assert metadata["status_code"] == 400
model.chunks = iter([AIMessageChunk(content="good"), AIMessageChunk(content="bye")])
full: AIMessageChunk | None = None
async for chunk in model.astream("hello"):
full = chunk if full is None else full + chunk
assert full is not None
assert full.response_metadata["model_provider"] == "provider_foo"

View File

@@ -1,140 +0,0 @@
"""Test groq block translator."""
from typing import cast
import pytest
from langchain_core.messages import AIMessage
from langchain_core.messages import content as types
from langchain_core.messages.base import _extract_reasoning_from_additional_kwargs
from langchain_core.messages.block_translators import PROVIDER_TRANSLATORS
from langchain_core.messages.block_translators.groq import (
_parse_code_json,
translate_content,
)
def test_groq_translator_registered() -> None:
"""Test that groq translator is properly registered."""
assert "groq" in PROVIDER_TRANSLATORS
assert "translate_content" in PROVIDER_TRANSLATORS["groq"]
assert "translate_content_chunk" in PROVIDER_TRANSLATORS["groq"]
def test_extract_reasoning_from_additional_kwargs_exists() -> None:
"""Test that _extract_reasoning_from_additional_kwargs can be imported."""
# Verify it's callable
assert callable(_extract_reasoning_from_additional_kwargs)
def test_groq_translate_content_basic() -> None:
"""Test basic groq content translation."""
# Test with simple text message
message = AIMessage(content="Hello world")
blocks = translate_content(message)
assert isinstance(blocks, list)
assert len(blocks) == 1
assert blocks[0]["type"] == "text"
assert blocks[0]["text"] == "Hello world"
def test_groq_translate_content_with_reasoning() -> None:
"""Test groq content translation with reasoning content."""
# Test with reasoning content in additional_kwargs
message = AIMessage(
content="Final answer",
additional_kwargs={"reasoning_content": "Let me think about this..."},
)
blocks = translate_content(message)
assert isinstance(blocks, list)
assert len(blocks) == 2
# First block should be reasoning
assert blocks[0]["type"] == "reasoning"
assert blocks[0]["reasoning"] == "Let me think about this..."
# Second block should be text
assert blocks[1]["type"] == "text"
assert blocks[1]["text"] == "Final answer"
def test_groq_translate_content_with_tool_calls() -> None:
"""Test groq content translation with tool calls."""
# Test with tool calls
message = AIMessage(
content="",
tool_calls=[
{
"name": "search",
"args": {"query": "test"},
"id": "call_123",
}
],
)
blocks = translate_content(message)
assert isinstance(blocks, list)
assert len(blocks) == 1
assert blocks[0]["type"] == "tool_call"
assert blocks[0]["name"] == "search"
assert blocks[0]["args"] == {"query": "test"}
assert blocks[0]["id"] == "call_123"
def test_groq_translate_content_with_executed_tools() -> None:
"""Test groq content translation with executed tools (built-in tools)."""
# Test with executed_tools in additional_kwargs (Groq built-in tools)
message = AIMessage(
content="",
additional_kwargs={
"executed_tools": [
{
"type": "python",
"arguments": '{"code": "print(\\"hello\\")"}',
"output": "hello\\n",
}
]
},
)
blocks = translate_content(message)
assert isinstance(blocks, list)
# Should have server_tool_call and server_tool_result
assert len(blocks) >= 2
# Check for server_tool_call
tool_call_blocks = [
cast("types.ServerToolCall", b)
for b in blocks
if b.get("type") == "server_tool_call"
]
assert len(tool_call_blocks) == 1
assert tool_call_blocks[0]["name"] == "code_interpreter"
assert "code" in tool_call_blocks[0]["args"]
# Check for server_tool_result
tool_result_blocks = [
cast("types.ServerToolResult", b)
for b in blocks
if b.get("type") == "server_tool_result"
]
assert len(tool_result_blocks) == 1
assert tool_result_blocks[0]["output"] == "hello\\n"
assert tool_result_blocks[0]["status"] == "success"
def test_parse_code_json() -> None:
"""Test the _parse_code_json helper function."""
# Test valid code JSON
result = _parse_code_json('{"code": "print(\'hello\')"}')
assert result == {"code": "print('hello')"}
# Test code with unescaped quotes (Groq format)
result = _parse_code_json('{"code": "print("hello")"}')
assert result == {"code": 'print("hello")'}
# Test invalid format raises ValueError
with pytest.raises(ValueError, match="Could not extract Python code"):
_parse_code_json('{"invalid": "format"}')

View File

@@ -1,4 +1,3 @@
import sys
from collections.abc import AsyncIterator, Iterator
from typing import Any
@@ -887,461 +886,3 @@ def test_max_tokens_error(caplog: Any) -> None:
"`max_tokens` stop reason" in msg and record.levelname == "ERROR"
for record, msg in zip(caplog.records, caplog.messages, strict=False)
)
def test_pydantic_tools_parser_with_mixed_pydantic_versions() -> None:
"""Test PydanticToolsParser with both Pydantic v1 and v2 models."""
# For Python 3.14+ compatibility, use create_model for Pydantic v1
if sys.version_info >= (3, 14):
WeatherV1 = pydantic.v1.create_model( # noqa: N806
"WeatherV1",
__doc__="Weather information using Pydantic v1.",
temperature=(int, ...),
conditions=(str, ...),
)
else:
class WeatherV1(pydantic.v1.BaseModel):
"""Weather information using Pydantic v1."""
temperature: int
conditions: str
class LocationV2(BaseModel):
"""Location information using Pydantic v2."""
city: str
country: str
# Test with Pydantic v1 model
parser_v1 = PydanticToolsParser(tools=[WeatherV1])
message_v1 = AIMessage(
content="",
tool_calls=[
{
"id": "call_weather",
"name": "WeatherV1",
"args": {"temperature": 25, "conditions": "sunny"},
}
],
)
generation_v1 = ChatGeneration(message=message_v1)
result_v1 = parser_v1.parse_result([generation_v1])
assert len(result_v1) == 1
assert isinstance(result_v1[0], WeatherV1)
assert result_v1[0].temperature == 25 # type: ignore[attr-defined,unused-ignore]
assert result_v1[0].conditions == "sunny" # type: ignore[attr-defined,unused-ignore]
# Test with Pydantic v2 model
parser_v2 = PydanticToolsParser(tools=[LocationV2])
message_v2 = AIMessage(
content="",
tool_calls=[
{
"id": "call_location",
"name": "LocationV2",
"args": {"city": "Paris", "country": "France"},
}
],
)
generation_v2 = ChatGeneration(message=message_v2)
result_v2 = parser_v2.parse_result([generation_v2])
assert len(result_v2) == 1
assert isinstance(result_v2[0], LocationV2)
assert result_v2[0].city == "Paris"
assert result_v2[0].country == "France"
# Test with both v1 and v2 models
parser_mixed = PydanticToolsParser(tools=[WeatherV1, LocationV2])
message_mixed = AIMessage(
content="",
tool_calls=[
{
"id": "call_weather",
"name": "WeatherV1",
"args": {"temperature": 20, "conditions": "cloudy"},
},
{
"id": "call_location",
"name": "LocationV2",
"args": {"city": "London", "country": "UK"},
},
],
)
generation_mixed = ChatGeneration(message=message_mixed)
result_mixed = parser_mixed.parse_result([generation_mixed])
assert len(result_mixed) == 2
assert isinstance(result_mixed[0], WeatherV1)
assert result_mixed[0].temperature == 20 # type: ignore[attr-defined,unused-ignore]
assert isinstance(result_mixed[1], LocationV2)
assert result_mixed[1].city == "London"
def test_pydantic_tools_parser_with_custom_title() -> None:
"""Test PydanticToolsParser with Pydantic v2 model using custom title."""
class CustomTitleTool(BaseModel):
"""Tool with custom title in model config."""
model_config = {"title": "MyCustomToolName"}
value: int
description: str
# Test with custom title - tool should be callable by custom name
parser = PydanticToolsParser(tools=[CustomTitleTool])
message = AIMessage(
content="",
tool_calls=[
{
"id": "call_custom",
"name": "MyCustomToolName",
"args": {"value": 42, "description": "test"},
}
],
)
generation = ChatGeneration(message=message)
result = parser.parse_result([generation])
assert len(result) == 1
assert isinstance(result[0], CustomTitleTool)
assert result[0].value == 42
assert result[0].description == "test"
def test_pydantic_tools_parser_name_dict_fallback() -> None:
"""Test that name_dict properly falls back to __name__ when title is None."""
class ToolWithoutTitle(BaseModel):
"""Tool without explicit title."""
data: str
# Ensure model_config doesn't have a title or it's None
# (This is the default behavior)
parser = PydanticToolsParser(tools=[ToolWithoutTitle])
message = AIMessage(
content="",
tool_calls=[
{
"id": "call_no_title",
"name": "ToolWithoutTitle",
"args": {"data": "test_data"},
}
],
)
generation = ChatGeneration(message=message)
result = parser.parse_result([generation])
assert len(result) == 1
assert isinstance(result[0], ToolWithoutTitle)
assert result[0].data == "test_data"
def test_pydantic_tools_parser_with_nested_models() -> None:
"""Test PydanticToolsParser with nested Pydantic v1 and v2 models."""
# Nested v1 models
if sys.version_info >= (3, 14):
AddressV1 = pydantic.v1.create_model( # noqa: N806
"AddressV1",
__doc__="Address using Pydantic v1.",
street=(str, ...),
city=(str, ...),
zip_code=(str, ...),
)
PersonV1 = pydantic.v1.create_model( # noqa: N806
"PersonV1",
__doc__="Person with nested address using Pydantic v1.",
name=(str, ...),
age=(int, ...),
address=(AddressV1, ...),
)
else:
class AddressV1(pydantic.v1.BaseModel):
"""Address using Pydantic v1."""
street: str
city: str
zip_code: str
class PersonV1(pydantic.v1.BaseModel):
"""Person with nested address using Pydantic v1."""
name: str
age: int
address: AddressV1
# Nested v2 models
class CoordinatesV2(BaseModel):
"""Coordinates using Pydantic v2."""
latitude: float
longitude: float
class LocationV2(BaseModel):
"""Location with nested coordinates using Pydantic v2."""
name: str
coordinates: CoordinatesV2
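# Nested dicts in the tool call args should be validated recursively into
# the nested model types (AddressV1 / CoordinatesV2).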
# Test with nested Pydantic v1 model
parser_v1 = PydanticToolsParser(tools=[PersonV1])
message_v1 = AIMessage(
content="",
tool_calls=[
{
"id": "call_person",
"name": "PersonV1",
"args": {
"name": "Alice",
"age": 30,
"address": {
"street": "123 Main St",
"city": "Springfield",
"zip_code": "12345",
},
},
}
],
)
generation_v1 = ChatGeneration(message=message_v1)
result_v1 = parser_v1.parse_result([generation_v1])
assert len(result_v1) == 1
assert isinstance(result_v1[0], PersonV1)
assert result_v1[0].name == "Alice" # type: ignore[attr-defined,unused-ignore]
assert result_v1[0].age == 30 # type: ignore[attr-defined,unused-ignore]
assert isinstance(result_v1[0].address, AddressV1) # type: ignore[attr-defined,unused-ignore]
assert result_v1[0].address.street == "123 Main St" # type: ignore[attr-defined,unused-ignore]
assert result_v1[0].address.city == "Springfield" # type: ignore[attr-defined,unused-ignore]
# Test with nested Pydantic v2 model
parser_v2 = PydanticToolsParser(tools=[LocationV2])
message_v2 = AIMessage(
content="",
tool_calls=[
{
"id": "call_location",
"name": "LocationV2",
"args": {
"name": "Eiffel Tower",
"coordinates": {"latitude": 48.8584, "longitude": 2.2945},
},
}
],
)
generation_v2 = ChatGeneration(message=message_v2)
result_v2 = parser_v2.parse_result([generation_v2])
assert len(result_v2) == 1
assert isinstance(result_v2[0], LocationV2)
assert result_v2[0].name == "Eiffel Tower"
assert isinstance(result_v2[0].coordinates, CoordinatesV2)
assert result_v2[0].coordinates.latitude == 48.8584
assert result_v2[0].coordinates.longitude == 2.2945
# Test with both nested models in one message
parser_mixed = PydanticToolsParser(tools=[PersonV1, LocationV2])
message_mixed = AIMessage(
content="",
tool_calls=[
{
"id": "call_person",
"name": "PersonV1",
"args": {
"name": "Bob",
"age": 25,
"address": {
"street": "456 Oak Ave",
"city": "Portland",
"zip_code": "97201",
},
},
},
{
"id": "call_location",
"name": "LocationV2",
"args": {
"name": "Golden Gate Bridge",
"coordinates": {"latitude": 37.8199, "longitude": -122.4783},
},
},
],
)
generation_mixed = ChatGeneration(message=message_mixed)
result_mixed = parser_mixed.parse_result([generation_mixed])
assert len(result_mixed) == 2
assert isinstance(result_mixed[0], PersonV1)
assert result_mixed[0].name == "Bob" # type: ignore[attr-defined,unused-ignore]
assert result_mixed[0].address.city == "Portland" # type: ignore[attr-defined,unused-ignore]
assert isinstance(result_mixed[1], LocationV2)
assert result_mixed[1].name == "Golden Gate Bridge"
assert result_mixed[1].coordinates.latitude == 37.8199
def test_pydantic_tools_parser_with_optional_fields() -> None:
"""Test PydanticToolsParser with optional fields in v1 and v2 models."""
if sys.version_info >= (3, 14):
ProductV1 = pydantic.v1.create_model( # noqa: N806
"ProductV1",
__doc__="Product with optional fields using Pydantic v1.",
name=(str, ...),
price=(float, ...),
description=(str | None, None),
stock=(int, 0),
)
else:
class ProductV1(pydantic.v1.BaseModel):
"""Product with optional fields using Pydantic v1."""
name: str
price: float
description: str | None = None
stock: int = 0
# v2 model with optional fields
class UserV2(BaseModel):
"""User with optional fields using Pydantic v2."""
username: str
email: str
bio: str | None = None
age: int | None = None
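# Omitted optional fields should fall back to their declared defaults
# instead of raising validation errors.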
# Test v1 with all fields provided
parser_v1_full = PydanticToolsParser(tools=[ProductV1])
message_v1_full = AIMessage(
content="",
tool_calls=[
{
"id": "call_product_full",
"name": "ProductV1",
"args": {
"name": "Laptop",
"price": 999.99,
"description": "High-end laptop",
"stock": 50,
},
}
],
)
generation_v1_full = ChatGeneration(message=message_v1_full)
result_v1_full = parser_v1_full.parse_result([generation_v1_full])
assert len(result_v1_full) == 1
assert isinstance(result_v1_full[0], ProductV1)
assert result_v1_full[0].name == "Laptop" # type: ignore[attr-defined,unused-ignore]
assert result_v1_full[0].price == 999.99 # type: ignore[attr-defined,unused-ignore]
assert result_v1_full[0].description == "High-end laptop" # type: ignore[attr-defined,unused-ignore]
assert result_v1_full[0].stock == 50 # type: ignore[attr-defined,unused-ignore]
# Test v1 with only required fields
parser_v1_minimal = PydanticToolsParser(tools=[ProductV1])
message_v1_minimal = AIMessage(
content="",
tool_calls=[
{
"id": "call_product_minimal",
"name": "ProductV1",
"args": {"name": "Mouse", "price": 29.99},
}
],
)
generation_v1_minimal = ChatGeneration(message=message_v1_minimal)
result_v1_minimal = parser_v1_minimal.parse_result([generation_v1_minimal])
assert len(result_v1_minimal) == 1
assert isinstance(result_v1_minimal[0], ProductV1)
assert result_v1_minimal[0].name == "Mouse" # type: ignore[attr-defined,unused-ignore]
assert result_v1_minimal[0].price == 29.99 # type: ignore[attr-defined,unused-ignore]
assert result_v1_minimal[0].description is None # type: ignore[attr-defined,unused-ignore]
assert result_v1_minimal[0].stock == 0 # type: ignore[attr-defined,unused-ignore]
# Test v2 with all fields provided
parser_v2_full = PydanticToolsParser(tools=[UserV2])
message_v2_full = AIMessage(
content="",
tool_calls=[
{
"id": "call_user_full",
"name": "UserV2",
"args": {
"username": "john_doe",
"email": "john@example.com",
"bio": "Software developer",
"age": 28,
},
}
],
)
generation_v2_full = ChatGeneration(message=message_v2_full)
result_v2_full = parser_v2_full.parse_result([generation_v2_full])
assert len(result_v2_full) == 1
assert isinstance(result_v2_full[0], UserV2)
assert result_v2_full[0].username == "john_doe"
assert result_v2_full[0].email == "john@example.com"
assert result_v2_full[0].bio == "Software developer"
assert result_v2_full[0].age == 28
# Test v2 with only required fields
parser_v2_minimal = PydanticToolsParser(tools=[UserV2])
message_v2_minimal = AIMessage(
content="",
tool_calls=[
{
"id": "call_user_minimal",
"name": "UserV2",
"args": {"username": "jane_smith", "email": "jane@example.com"},
}
],
)
generation_v2_minimal = ChatGeneration(message=message_v2_minimal)
result_v2_minimal = parser_v2_minimal.parse_result([generation_v2_minimal])
assert len(result_v2_minimal) == 1
assert isinstance(result_v2_minimal[0], UserV2)
assert result_v2_minimal[0].username == "jane_smith"
assert result_v2_minimal[0].email == "jane@example.com"
assert result_v2_minimal[0].bio is None
assert result_v2_minimal[0].age is None
# Test mixed v1 and v2 with partial optional fields
parser_mixed = PydanticToolsParser(tools=[ProductV1, UserV2])
message_mixed = AIMessage(
content="",
tool_calls=[
{
"id": "call_product",
"name": "ProductV1",
"args": {"name": "Keyboard", "price": 79.99, "stock": 100},
},
{
"id": "call_user",
"name": "UserV2",
"args": {
"username": "alice",
"email": "alice@example.com",
"age": 35,
},
},
],
)
generation_mixed = ChatGeneration(message=message_mixed)
result_mixed = parser_mixed.parse_result([generation_mixed])
assert len(result_mixed) == 2
assert isinstance(result_mixed[0], ProductV1)
assert result_mixed[0].name == "Keyboard" # type: ignore[attr-defined,unused-ignore]
assert result_mixed[0].description is None # type: ignore[attr-defined,unused-ignore]
assert result_mixed[0].stock == 100 # type: ignore[attr-defined,unused-ignore]
assert isinstance(result_mixed[1], UserV2)
assert result_mixed[1].username == "alice"
assert result_mixed[1].bio is None
assert result_mixed[1].age == 35

View File

@@ -682,7 +682,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -800,7 +800,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -1319,7 +1319,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -2096,7 +2096,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2214,7 +2214,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2733,7 +2733,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"

View File

@@ -1193,350 +1193,3 @@ def test_dict_message_prompt_template_errors_on_jinja2() -> None:
_ = ChatPromptTemplate.from_messages(
[("human", [prompt])], template_format="jinja2"
)
def test_rendering_prompt_with_conditionals_no_empty_text_blocks() -> None:
manifest = {
"lc": 1,
"type": "constructor",
"id": ["langchain_core", "prompts", "chat", "ChatPromptTemplate"],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"chat",
"SystemMessagePromptTemplate",
],
"kwargs": {
"prompt": {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Always echo back whatever I send you.",
},
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"chat",
"HumanMessagePromptTemplate",
],
"kwargs": {
"prompt": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Here is the teacher's prompt:",
"additional_content_fields": {
"text": "Here is the teacher's prompt:",
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["promptDescription"],
"template_format": "mustache",
"template": '"{{promptDescription}}"\n',
"additional_content_fields": {
"text": '"{{promptDescription}}"\n',
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Here is the expected answer or success criteria given by the teacher:", # noqa: E501
"additional_content_fields": {
"text": "Here is the expected answer or success criteria given by the teacher:", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["expectedResponse"],
"template_format": "mustache",
"template": '"{{expectedResponse}}"\n',
"additional_content_fields": {
"text": '"{{expectedResponse}}"\n',
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Note: This may be just one example of many possible correct ways for the student to respond.\n", # noqa: E501
"additional_content_fields": {
"text": "Note: This may be just one example of many possible correct ways for the student to respond.\n", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "For your evaluation of the student's response:\n", # noqa: E501
"additional_content_fields": {
"text": "For your evaluation of the student's response:\n", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Here is a transcript of the student's explanation:", # noqa: E501
"additional_content_fields": {
"text": "Here is a transcript of the student's explanation:", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["responseTranscript"],
"template_format": "mustache",
"template": '"{{responseTranscript}}"\n',
"additional_content_fields": {
"text": '"{{responseTranscript}}"\n',
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["readingFluencyAnalysis"],
"template_format": "mustache",
"template": "{{#readingFluencyAnalysis}} For this task, the student's reading pronunciation and fluency were important. Here is analysis of the student's oral response: \"{{readingFluencyAnalysis}}\" {{/readingFluencyAnalysis}}", # noqa: E501
"additional_content_fields": {
"text": "{{#readingFluencyAnalysis}} For this task, the student's reading pronunciation and fluency were important. Here is analysis of the student's oral response: \"{{readingFluencyAnalysis}}\" {{/readingFluencyAnalysis}}", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["readingFluencyAnalysis"],
"template_format": "mustache",
"template": "{{#readingFluencyAnalysis}}Root analysis of the student's response (step 3) in this oral analysis rather than inconsistencies in the transcript.{{/readingFluencyAnalysis}}", # noqa: E501
"additional_content_fields": {
"text": "{{#readingFluencyAnalysis}}Root analysis of the student's response (step 3) in this oral analysis rather than inconsistencies in the transcript.{{/readingFluencyAnalysis}}", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["readingFluencyAnalysis"],
"template_format": "mustache",
"template": "{{#readingFluencyAnalysis}}Remember this is a student, so we care about general fluency - not voice acting. {{/readingFluencyAnalysis}}\n", # noqa: E501
"additional_content_fields": {
"text": "{{#readingFluencyAnalysis}}Remember this is a student, so we care about general fluency - not voice acting. {{/readingFluencyAnalysis}}\n", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": ["multipleChoiceAnalysis"],
"template_format": "mustache",
"template": "{{#multipleChoiceAnalysis}}Here is an analysis of the student's multiple choice response: {{multipleChoiceAnalysis}}{{/multipleChoiceAnalysis}}\n", # noqa: E501
"additional_content_fields": {
"text": "{{#multipleChoiceAnalysis}}Here is an analysis of the student's multiple choice response: {{multipleChoiceAnalysis}}{{/multipleChoiceAnalysis}}\n", # noqa: E501
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"prompt",
"PromptTemplate",
],
"kwargs": {
"input_variables": [],
"template_format": "mustache",
"template": "Here is the student's whiteboard:\n",
"additional_content_fields": {
"text": "Here is the student's whiteboard:\n",
},
},
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompts",
"image",
"ImagePromptTemplate",
],
"kwargs": {
"template": {
"url": "{{whiteboard}}",
},
"input_variables": ["whiteboard"],
"template_format": "mustache",
"additional_content_fields": {
"image_url": {
"url": "{{whiteboard}}",
},
},
},
},
],
"additional_options": {},
},
},
],
"input_variables": [
"promptDescription",
"expectedResponse",
"responseTranscript",
"readingFluencyAnalysis",
"readingFluencyAnalysis",
"readingFluencyAnalysis",
"multipleChoiceAnalysis",
"whiteboard",
],
"template_format": "mustache",
"metadata": {
"lc_hub_owner": "jacob",
"lc_hub_repo": "mustache-conditionals",
"lc_hub_commit_hash": "836ad82d512409ea6024fb760b76a27ba58fc68b1179656c0ba2789778686d46", # noqa: E501
},
},
}
# Load the ChatPromptTemplate from the manifest
template = load(manifest)
# Format with conditional data - readingFluencyAnalysis is None, so its
# mustache conditional sections should not render (and must not produce
# empty text blocks)
result = template.invoke(
{
"promptDescription": "What is the capital of the USA?",
"expectedResponse": "Washington, D.C.",
"responseTranscript": "Washington, D.C.",
"readingFluencyAnalysis": None,
"multipleChoiceAnalysis": "testing2",
"whiteboard": "https://foo.com/bar.png",
}
)
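# messages[0] is the system message; messages[1] is the human message
# assembled from the list of prompt blocks above.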
content = result.messages[1].content
assert isinstance(content, list)
assert not [
block for block in content if block["type"] == "text" and block["text"] == ""
]

View File

@@ -105,7 +105,7 @@ def _remove_additionalproperties(schema: dict) -> dict[str, Any]:
generating JSON schemas for dict properties with `Any` or `object` values.
Pydantic 2.12 and later versions include `"additionalProperties": True` when
generating JSON schemas for `TypedDict`.
generating JSON schemas for TypedDict.
"""
if isinstance(schema, dict):
if (

View File

@@ -1106,7 +1106,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -1224,7 +1224,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -1743,7 +1743,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"

View File

@@ -2629,7 +2629,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -2746,7 +2746,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -3259,7 +3259,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -4086,7 +4086,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -4203,7 +4203,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -4735,7 +4735,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -5574,7 +5574,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -5691,7 +5691,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -6223,7 +6223,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -6937,7 +6937,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -7054,7 +7054,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -7567,7 +7567,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -8436,7 +8436,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -8553,7 +8553,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -9085,7 +9085,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -9844,7 +9844,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -9961,7 +9961,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -10474,7 +10474,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -11251,7 +11251,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -11368,7 +11368,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -11911,7 +11911,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"
@@ -12700,7 +12700,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -12817,7 +12817,7 @@
May also hold extra provider-specific keys.
!!! version-added "Added in `langchain-core` 0.3.9"
!!! version-added "Added in version 0.3.9"
''',
'properties': dict({
'audio': dict({
@@ -13349,7 +13349,7 @@
}
```
!!! warning "Behavior changed in `langchain-core` 0.3.9"
!!! warning "Behavior changed in 0.3.9"
Added `input_token_details` and `output_token_details`.
!!! note "LangSmith SDK"

View File

@@ -3,14 +3,12 @@
import asyncio
import time
from threading import Lock
from typing import TYPE_CHECKING, Any
from typing import Any
import pytest
from langchain_core.runnables import RunnableConfig, RunnableLambda
if TYPE_CHECKING:
from langchain_core.runnables.base import Runnable
from langchain_core.runnables.base import Runnable
@pytest.mark.asyncio

Some files were not shown because too many files have changed in this diff